OSGeoLabBp/tutorials
english/data_processing/lessons/ml_clustering.ipynb
cc0-1.0
# modules import sklearn from numpy import where from sklearn.datasets import make_classification from matplotlib import pyplot """ Explanation: <a href="https://colab.research.google.com/github/OSGeoLabBp/tutorials/blob/master/english/data_processing/lessons/ml_clustering.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Clustering with Machine Learning What is Machine Learning? Nowadays Machine Learning algorithms are widely used. This technology is behind chatbots, language translation apps, the shows Netflix suggests to you and how your social media feeds are presented. It is also the basis of autonomous vehicles and machines. Machine Learning (ML) is a subfield of Artificial Intelligence (AI). The basic idea of ML is to teach computers to 'learn' information directly from data with computational methods. There are three subcategories of Machine Learning: supervised, unsupervised and reinforcement learning. In the following we are going to focus on an unsupervised learning method, specifically clustering methods. Clustering Clustering or cluster analysis is an unsupervised learning problem. There are many types of clustering algorithms. Most of these use similarity or distance measures between points. Some of the clustering algorithms require you to specify or guess the number of clusters to discover in the data, whereas others require the specification of some minimum distance between observations at which examples may be considered “close” or “connected.” Cluster analysis is an iterative process where subjective evaluation of the identified clusters is fed back into changes to the algorithm configuration until a desired or appropriate result is achieved. There are several clustering algorithms to choose from: - Affinity Propagation - Agglomerative Clustering - BIRCH - DBSCAN - K-Means - Mini-Batch K-Means - Mean Shift - OPTICS - Spectral Clustering - Mixture of Gaussians - etc... 
In the following we are going to check on some of these with the help of the scikit-learn library. Scikit-learn is an open source machine learning library that supports supervised and unsupervised learning. It also provides various tools for model fitting, data preprocessing, model selection, model evaluation, and many other utilities. Let's import the modules! End of explanation """ # define dataset X, y = make_classification(n_samples=1000, n_features=2, n_informative=2, n_redundant=0, n_clusters_per_class=1, random_state=4) # create scatter plot for samples from each class for class_value in range(2): # get row indexes for samples with this class row_ix = where(y == class_value) # create scatter of these samples pyplot.scatter(X[row_ix, 0], X[row_ix, 1]) # show the plot pyplot.title('The generated dataset') pyplot.xlabel('x') pyplot.ylabel('y') pyplot.show() """ Explanation: To test different clustering methods we need sample data. In the scikit-learn module there are built-in functions to create it. We will use make_classification() to create a dataset of 1000 points with 2 clusters. End of explanation """ from sklearn.cluster import AffinityPropagation from numpy import unique # define the model model = AffinityPropagation(damping=0.9) # fit the model model.fit(X) # assign a cluster to each example yhat = model.predict(X) # retrieve unique clusters clusters = unique(yhat) # create scatter plot for samples from each cluster for cluster in clusters: # get row indexes for samples with this cluster row_ix = where(yhat == cluster) # create scatter of these samples pyplot.scatter(X[row_ix, 0], X[row_ix, 1]) # show the plot pyplot.title('Affinity propagation clustering') pyplot.xlabel('x') pyplot.ylabel('y') pyplot.show() """ Explanation: Now let's apply the different clustering algorithms on the dataset! Affinity propagation The method takes as input measures of similarity between pairs of data points. 
Real-valued messages are exchanged between data points until a high-quality set of exemplars and corresponding clusters gradually emerges. End of explanation """ from sklearn.cluster import AgglomerativeClustering # define the model model = AgglomerativeClustering(n_clusters=2) # fit model and predict clusters yhat = model.fit_predict(X) # retrieve unique clusters clusters = unique(yhat) # create scatter plot for samples from each cluster for cluster in clusters: # get row indexes for samples with this cluster row_ix = where(yhat == cluster) # create scatter of these samples pyplot.scatter(X[row_ix, 0], X[row_ix, 1]) # show the plot pyplot.title('Agglomerative clustering') pyplot.xlabel('x') pyplot.ylabel('y') pyplot.show() """ Explanation: Agglomerative clustering It is a type of hierarchical clustering, which is a general family of clustering algorithms that build nested clusters by merging or splitting them successively. This hierarchy of clusters is represented as a tree. The root of the tree is the unique cluster that gathers all the samples, the leaves being the clusters with only one sample. Agglomerative clustering uses a bottom-up approach: each observation starts in its own cluster, and clusters are successively merged together. The merging continues until the desired number of clusters is achieved. The merge (linkage) strategy can be one of the following: - ward: minimizes the sum of squared differences within all clusters - complete: minimizes the maximum distance between observations of pairs of clusters - average: minimizes the average of the distances between all observations of pairs of clusters - single: minimizes the distance between the closest observations of pairs of clusters To use agglomerative clustering the number of clusters has to be defined. 
End of explanation """ from sklearn.cluster import Birch model = Birch(threshold=0.01, n_clusters=2) # fit the model model.fit(X) # assign a cluster to each example yhat = model.predict(X) # retrieve unique clusters clusters = unique(yhat) # create scatter plot for samples from each cluster for cluster in clusters: # get row indexes for samples with this cluster row_ix = where(yhat == cluster) # create scatter of these samples pyplot.scatter(X[row_ix, 0], X[row_ix, 1]) # show the plot pyplot.title('BIRCH clustering') pyplot.xlabel('x') pyplot.ylabel('y') pyplot.show() """ Explanation: BIRCH BIRCH clustering (Balanced Iterative Reducing and Clustering using Hierarchies) involves constructing a tree structure from which cluster centroids are extracted. BIRCH incrementally and dynamically clusters incoming multi-dimensional metric data points to try to produce the best quality clustering with the available resources. It was one of the first clustering algorithms to handle noise effectively. It is also effective on large datasets like point clouds. To use this method the threshold and number of clusters values have to be defined. End of explanation """ from sklearn.cluster import DBSCAN from matplotlib import pyplot # define the model model = DBSCAN(eps=0.30, min_samples=9) # fit model and predict clusters yhat = model.fit_predict(X) # retrieve unique clusters clusters = unique(yhat) # create scatter plot for samples from each cluster for cluster in clusters: # get row indexes for samples with this cluster row_ix = where(yhat == cluster) # create scatter of these samples pyplot.scatter(X[row_ix, 0], X[row_ix, 1]) # show the plot pyplot.title('DBSCAN clustering') pyplot.xlabel('x') pyplot.ylabel('y') pyplot.show() """ Explanation: DBSCAN DBSCAN clustering (Density-Based Spatial Clustering of Applications with Noise) involves finding high-density areas in the domain and expanding those areas of the feature space around them as clusters. 
It can be used on large databases with good efficiency. DBSCAN is not complicated to use: it requires only two parameters (eps and min_samples). The number of clusters is determined by the algorithm. End of explanation """ from sklearn.cluster import KMeans # define the model model = KMeans(n_clusters=2) # fit the model model.fit(X) # assign a cluster to each example yhat = model.predict(X) # retrieve unique clusters clusters = unique(yhat) # create scatter plot for samples from each cluster for cluster in clusters: # get row indexes for samples with this cluster row_ix = where(yhat == cluster) # create scatter of these samples pyplot.scatter(X[row_ix, 0], X[row_ix, 1]) # show the plot pyplot.title('k-Means clustering') pyplot.xlabel('x') pyplot.ylabel('y') pyplot.show() """ Explanation: k-Means clustering It may be the most widely known clustering method. During the creation of the clusters the algorithm tries to minimize the variance within each cluster. To use it we have to define the number of clusters. End of explanation """ from sklearn.cluster import MeanShift # define the model model = MeanShift() # fit model and predict clusters yhat = model.fit_predict(X) # retrieve unique clusters clusters = unique(yhat) # create scatter plot for samples from each cluster for cluster in clusters: # get row indexes for samples with this cluster row_ix = where(yhat == cluster) # create scatter of these samples pyplot.scatter(X[row_ix, 0], X[row_ix, 1]) # show the plot pyplot.title('Mean shift clustering') pyplot.xlabel('x') pyplot.ylabel('y') pyplot.show() """ Explanation: There is a modified version of k-Means, called Mini-Batch K-Means clustering. The difference between the two is that the updated version uses mini-batches of samples rather than the entire dataset. This makes it faster for large datasets, and more robust to statistical noise. Mean shift clustering The algorithm finds and adapts centroids based on the density of examples in the feature space. 
To apply it we don't have to define any parameters. End of explanation """ !wget -q https://github.com/OSGeoLabBp/tutorials/raw/master/english/data_processing/lessons/code/barnag_roofs.ply """ Explanation: The main characteristics of the clustering algorithms Task - Test the different clustering algorithms on different datasets! - Check and use scikit-learn's documentation to compare the algorithms! Applying ML based clustering algorithm on point cloud The presented clustering methods can be useful when we would like to separate groups of points in a point cloud. In most cases when we would like to apply clustering to a point cloud, the number of clusters is unknown, but as we have seen above there are several algorithms (like DBSCAN, OPTICS, mean shift) where the number of clusters doesn't have to be defined. Therefore, in the following section we are going to apply one of these, the DBSCAN clustering algorithm, to separate roof points of buildings. First, let's download the point cloud! End of explanation """ !pip install open3d -q """ Explanation: Let's install Open3D! 
End of explanation """ import open3d as o3d import numpy as np from numpy import unique from numpy import where from sklearn.datasets import make_classification from sklearn.cluster import DBSCAN from matplotlib import pyplot pc = o3d.io.read_point_cloud('barnag_roofs.ply', format='ply') xyz = np.asarray(pc.points) # display the point cloud pyplot.scatter(xyz[:, 0], xyz[:, 1]) pyplot.title('The point cloud of the roofs') pyplot.xlabel('y_EOV [m]') pyplot.ylabel('x_EOV [m]') pyplot.axis('equal') pyplot.show() ''' 3d display TODO fig = plt.figure() ax = fig.add_subplot(projection='3d') ax.scatter(xyz[:, 0], xyz[:, 1], xyz[:, 2]) ax.view_init(30, 70) ''' # define the model model = DBSCAN(eps=0.30, min_samples=100) # fit model and predict clusters yhat = model.fit_predict(xyz) #print(yhat) # retrieve unique cluster labels (-1 marks noise points) clusters = unique(yhat) print('Number of clusters (including the noise label): ' + str(len(clusters))) """ Explanation: After the installation, import the modules and display the point cloud! End of explanation """ # Save clusters as separate point clouds for cluster in clusters: # get row indexes for samples with this cluster row_ix = where(yhat == cluster) # create scatter of these samples pyplot.scatter(xyz[row_ix, 0], xyz[row_ix, 1], label=str(cluster)+' cluster') # export the clusters as a point cloud xyz_cluster = xyz[row_ix] pc_cluster = o3d.geometry.PointCloud() pc_cluster.points = o3d.utility.Vector3dVector(xyz_cluster) if cluster >= 0: o3d.io.write_point_cloud('cluster_' + str(cluster) + '.ply', pc_cluster) # export .ply format else: o3d.io.write_point_cloud('noise.ply', pc_cluster) # export noise # show the plot pyplot.title('Point cloud clusters') pyplot.xlabel('y_EOV [m]') pyplot.ylabel('x_EOV [m]') pyplot.axis('equal') pyplot.show() """ Explanation: Let's use DBSCAN on the imported point cloud. End of explanation """
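The Mini-Batch K-Means variant is mentioned above but never shown in code. A minimal sketch on the same kind of synthetic dataset (parameter values here are illustrative, not tuned):

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import make_classification

# the same kind of two-cluster dataset used earlier in the lesson
X, _ = make_classification(n_samples=1000, n_features=2, n_informative=2,
                           n_redundant=0, n_clusters_per_class=1, random_state=4)

# mini-batch k-means updates centroids from small random batches
# instead of the full dataset, which speeds things up on large data
model = MiniBatchKMeans(n_clusters=2, batch_size=100, n_init=3, random_state=0)
yhat = model.fit_predict(X)
clusters = np.unique(yhat)
```

The resulting labels can be plotted exactly like the other methods above, with one scatter call per unique cluster label.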
CompPhysics/MachineLearning
doc/Programs/ANN/perceptron.ipynb
cc0-1.0
# Do This: Load in the iris.csv file and plot the data based on the iris classifications import csv import matplotlib.pyplot as plt import numpy as np sepal_length = [] sepal_width = [] label = [] with open('iris.csv', 'r') as data: datareader = csv.reader(data, delimiter=',', quotechar='|') for i,row in enumerate(datareader): if i == 0: continue sepal_length.append(float(row[0])) sepal_width.append(float(row[1])) label.append(row[2]) colors = [] for i in label: if i == 'Iris-setosa': colors.append(-1) elif i == 'Iris-versicolor': colors.append(1) else: colors.append(2) dataset = np.vstack((np.asarray(sepal_length), np.asarray(sepal_width))) dataset = dataset.T plt.scatter(sepal_length, sepal_width, c=colors) plt.show() print(len(colors)) """ Explanation: Day 16 In-Class Assignment: Introduction to Machine Learning <img src="https://goo.gl/FYHM5q" width=600pc > <p style="text-align: right;">Image from: https://goo.gl/ypY9G2</p> Scientific motivation Classifying data (iris types) Modeling tools Machine Learning (Perceptron) Programming concepts Creating Classes and re-usable code Pulling in data from outside sources Using external libraries Agenda for today's class </p> Review of pre-class assignment Problem Statement Basics of the perceptron model Loading and inspecting the data Building the perceptron model Plotting the decision boundary 1. Review of pre-class assignment Were there any specific questions that came up in the pre-class assignment? 2. Problem Statement We want to build a model that can accurately classify two types of flowers based on measurements we have collected. Building this model will allow us to understand how basic machine learning models learn and what exactly is happening 'under the hood'. It will provide a slightly more intuitive view of machine learning and make it seem less like a black box and more like a tool that we understand. 3. The Basics of the Perceptron Model The perceptron is what is known as a basic binary classifier. 
It takes in a set of training data that is linearly separable and then computes a set of weights and a bias term to apply to input data so as to properly classify it. Perceptrons only work for data that contains two classes and is linearly separable. For data that does not have these properties, the classifier cannot properly learn the weights and bias term. Since the perceptron is based on linearly separable data, we can think of the model as trying to learn the slope of a line, $y = m~x + b$. However in machine learning, we usually define $X$ to be an input vector, which is a $1$ by $N$ matrix, where $N$ is the number of measurements or "features" for a given sample. Then, we can create a similar matrix of weights, $W$, also of length N, and re-write our equation to be: $$ Y = W \cdot X + B$$ where $Y$ represents the resulting classification, consisting of either -1 or 1, depending on the output of $W \cdot X$, the dot product of $W$ and $X$, and $B$ represents the bias term. More explicitly, we can look at this in a matrix format: $$ Y = \begin{bmatrix} w_1 & w_2 & \dots & w_n \end{bmatrix} \cdot \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} + B $$ But, how do we go about learning the weights of the model? We learn the model weights by attempting to predict the class of our input data using an initial guess for the weights, and then update our weight values based on our prediction. 
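As a quick illustration of the equation above (separate from the assignment's class, and using made-up weight and bias values), the prediction is just the sign of $W \cdot X + B$:

```python
import numpy as np

# the predicted class is the sign of the weighted sum plus the bias
def predict(weights, x, bias):
    return 1.0 if np.dot(weights, x) + bias >= 0 else -1.0

# hypothetical weights and bias for a sample with two features
w = np.array([0.5, -0.25])
x = np.array([2.0, 1.0])
print(predict(w, x, 0.1))  # 0.5*2.0 - 0.25*1.0 + 0.1 = 0.85 >= 0, so 1.0
```

Learning, described next, is just the process of nudging `weights` and `bias` until this sign comes out right for the training samples.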
We can define a "step size" for how much we change our weight between subsequent guesses in the following way: step_size = eta * (target class - predicted class) Then, using our "step size", we update all of our weights by multiplying the step size times the corresponding feature values: $$w_{1,new} = w_{1,old} + (\mathrm{step~size} \times x_{1,i})$$ $$w_{2,new} = w_{2,old} + (\mathrm{step~size} \times x_{2,i})$$ $$ \vdots $$ $$w_{N,new} = w_{N,old} + (\mathrm{step~size} \times x_{N,i})$$ We also have to update our bias term, but just use the step size for this update: $B_{new} = B_{old} + \mathrm{step~size}$. In this model, we use eta to represent our "learning_rate", which takes on a value between 0 and 1. The step size should always be a positive or negative decimal value depending on eta. For example, if we set the learning rate, eta, to be .1 and our target is -1 and we predict 1 then we get the following equation. step_size = .1 * (-1-1) = -.2 Alternatively, if our target is 1 and we predict it as -1 then we will get the following. step_size = .1 * (1 - -1) = .2 This process occurs iteratively. So, for a set number of iterations we calculate a step size and adjust the weights accordingly. But, how do we handle the "learning" process when we have multiple samples? In this case, we need to update the weights based on all of the sample features. So our original equation above becomes: $$w_{1,new} = w_{1,old} + \sum_{i=0}^{M} (\mathrm{step~size} \times x_{1,i})$$ $$w_{2,new} = w_{2,old} + \sum_{i=0}^{M} (\mathrm{step~size} \times x_{2,i})$$ $$ \vdots $$ $$w_{N,new} = w_{N,old} + \sum_{i=0}^{M} (\mathrm{step~size} \times x_{N,i})$$ where $M$ is our total number of samples and we compute new weights for every feature value. 4. Loading and inspecting the data Before we build a machine learning model, we need data to base it off of. The data set we are going to use has been provided for you in the directory for this assignment, iris.csv. 
This dataset contains measurements for the properties of two different iris varieties. Load the data into Python and visualize (with a plot) to get a sense for what it looks like. Use different colors to represent the two different iris classifications. End of explanation """ import numpy as np class perceptron(): def __init__(self, eta, n_iter, bias=0): self.eta = eta self.n_iter = n_iter self.bias = bias def fit(self, data): '''does the learning''' self.weights = np.zeros(data.shape[1]) for i in range(self.n_iter): for j in range(len(self.weights)): change = 0 for k in range(len(data)): target = float(colors[k]) predicted = self.predict(data[k]) print(predicted) ss = self.eta * (target - predicted) change += ss*data[k][j] self.bias += ss self.weights[j] += change def predict(self, values): '''outputs the predicted class''' prediction = np.dot(self.weights, values) + self.bias if prediction >= 0: return 1.0 else: return -1.0 A = perceptron(0.1,100) A.fit(dataset[0:80]) A.weights for i in range(20): a = A.predict(dataset[i+80]) b = colors[i+80] if int(a) == b: print("true") else: print("false") """ Explanation: Questions: Is the data linearly separable? How many data points do we have? How many of each class? Put your answers in the cell below Yes it is! There are exactly 100 data points. 5. Building the perceptron model Now that we have some data to work with, we want to start building a model. Part 3 outlines how to use the perceptron model to fit the data and properly update weights. Your job is to create a Python perceptron class that matches the following specifications: Define the perceptron class with an __init__ method The class should be initialized with the following attributes: a user-defined input value for eta, the learning rate for the perceptron. a number of iterations to be used by the model, n_iter. This should also be an input parameter. 
an initial value for the bias (you can choose whether or not the user can set this value or if you want a standard default). Create two methods for the perceptron class, a fit method that does the learning and a predict method that outputs the predicted class The fit method should: define an array of weights the same length as the input vector. You can choose how to initialize the weight values. go through a set number of iterations (based on n_iter) where it makes predictions and updates the weights vector accordingly for each subsequent round of predictions. The predict method should: take in a feature vector and return the predicted class based on the current weights hint: The prediction is just a dot product of the weights and the features plus the bias term. The resulting output should be either -1 or 1, depending on the value of the dot product. If the prediction is less than 0 it should return -1, otherwise it should return 1. End of explanation """ # First, make sure your data is in the right format to be fed into your perceptron class and split your data into a training set and a testing set # Then, train your model using your `fit` method. # Finally, test your trained model on the testing data. """ Explanation: Testing the new perceptron class Now that we have a classifier built, we need to test it on data, but first we need to make sure our data is in the right format so we can properly train a classifier on it. This means, if you haven't already, that you need to make sure that your classes are either -1 or 1. You'll also need to make sure the feature vectors can be fed into your perceptron class correctly. Also, remember, for a good model, we want the training data to have an even sample of both classes so it knows what to look for and doesn't end up biased towards a particular classification. As a rule of thumb, you should use ~75% of your sample data as your training data and reserve the remaining ~25% as testing data. 
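One way to sketch the ~75/25 split described above. The `dataset` and `colors` arrays here are hypothetical stand-ins for the ones built from iris.csv earlier; shuffling first keeps both classes represented in the training set:

```python
import numpy as np

# hypothetical stand-ins for the feature matrix and the -1/1 labels
rng = np.random.default_rng(0)
dataset = rng.normal(size=(100, 2))
colors = np.array([-1] * 50 + [1] * 50)

# shuffle the row order, then take the first ~75% for training
idx = rng.permutation(len(dataset))
split = int(0.75 * len(dataset))
train_X, train_y = dataset[idx[:split]], colors[idx[:split]]
test_X, test_y = dataset[idx[split:]], colors[idx[split:]]
```

After a split like this, the training arrays go to the `fit` method and accuracy is counted on the held-out test arrays.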
Using your new perceptron class, train the model using your training data and then test the results on your testing data. You may want to come up with a method for computing how many predictions are right versus wrong. End of explanation """ # Compute the decision boundary and make a plot of it, along with the data m = -A.weights[0]/A.weights[1] b = -A.bias/A.weights[1] def line(x,m,b): return m*x + b x = np.linspace(4,7,num=50) plt.plot(x,line(x,m,b)) plt.scatter(sepal_length, sepal_width, c=colors) plt.show() """ Explanation: 6. Plotting the decision boundary Finally, to better understand our classifier, it might help to plot what is known as the "decision boundary". The decision boundary is the line that separates the classes in the classifier. The line is defined by the weights and the bias term that we calculated for our model. The slope of the decision boundary is defined as: $$ m = -\frac{w_1}{w_2} $$ And the $y$-intercept, $b$, is defined as: $$ b = -\frac{B}{w_2} $$ You should be able to generate a set of evenly spaced $x$-axis values and then use the equation for a line ($y = mx + b$) to compute the decision boundary for making a plot of the line. You should get something that looks like this: <img src=https://i.imgur.com/UPX8XDy.png> End of explanation """ from IPython.display import HTML HTML( """ <iframe src="https://goo.gl/forms/sxPAah1RyU3bCk0z1" width="80%" height="500px" frameborder="0" marginheight="0" marginwidth="0"> Loading... </iframe> """ ) """ Explanation: Assignment Wrap-up Fill out the following Google Form before submitting your assignment to D2L! End of explanation """
pacoqueen/ginn
extra/install/ipython2/ipython-5.10.0/examples/IPython Kernel/Background Jobs.ipynb
gpl-2.0
from IPython.lib import backgroundjobs as bg import sys import time def sleepfunc(interval=2, *a, **kw): args = dict(interval=interval, args=a, kwargs=kw) time.sleep(interval) return args def diefunc(interval=2, *a, **kw): time.sleep(interval) raise Exception("Dead job with interval %s" % interval) def printfunc(interval=1, reps=5): for n in range(reps): time.sleep(interval) print('In the background... %i' % n) sys.stdout.flush() print('All done!') sys.stdout.flush() """ Explanation: Simple interactive background jobs with IPython We start by loading the backgroundjobs library and defining a few trivial functions to illustrate things with. End of explanation """ jobs = bg.BackgroundJobManager() # Start a few jobs, the first one will have ID # 0 jobs.new(sleepfunc, 4) jobs.new(sleepfunc, kw={'reps':2}) jobs.new('printfunc(1,3)') """ Explanation: Now, we can create a job manager (called simply jobs) and use it to submit new jobs. Run the cell below, it will show when the jobs start. Wait a few seconds until you see the 'all done' completion message: End of explanation """ jobs.status() """ Explanation: You can check the status of your jobs at any time: End of explanation """ jobs[0].result """ Explanation: For any completed job, you can get its result easily: End of explanation """ # This makes a couple of jobs which will die. Let's keep a reference to # them for easier traceback reporting later diejob1 = jobs.new(diefunc, 1) diejob2 = jobs.new(diefunc, 2) """ Explanation: Errors and tracebacks The jobs manager tries to help you with debugging: End of explanation """ print("Status of diejob1: %s" % diejob1.status) diejob1.traceback() # jobs.traceback(4) would also work here, with the job number """ Explanation: You can get the traceback of any dead job. 
Run the line below again interactively until it prints a traceback (check the status of the job): End of explanation """ jobs.traceback() """ Explanation: This will print all tracebacks for all dead jobs: End of explanation """ jobs.flush() """ Explanation: The job manager can be flushed of all completed jobs at any time: End of explanation """ jobs.status() """ Explanation: After that, the status is simply empty: End of explanation """ j = jobs.new(sleepfunc, 2) j.join? """ Explanation: Jobs have a .join method that lets you wait on their thread for completion: End of explanation """
GoogleCloudPlatform/vertex-ai-samples
notebooks/community/ml_ops/stage5/get_started_with_vertex_private_endpoints.ipynb
apache-2.0
import os # The Vertex AI Workbench Notebook product has specific requirements IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME") IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists( "/opt/deeplearning/metadata/env_version" ) # Vertex AI Notebook requires dependencies to be installed with '--user' USER_FLAG = "" if IS_WORKBENCH_NOTEBOOK: USER_FLAG = "--user" ! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG -q ! pip3 install --upgrade google-cloud-pipeline-components $USER_FLAG -q ! pip3 install tensorflow-hub $USER_FLAG -q ! pip3 install --upgrade tensorflow $USER_FLAG -q # Temporary, until feature pushed to PyPI ! pip3 uninstall google-cloud-aiplatform -y ! pip install --user git+https://github.com/googleapis/python-aiplatform.git@private-ep """ Explanation: E2E ML on GCP: MLOps stage 5: Get started with Vertex AI Private Endpoints <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage5/get_started_with_vertex_private_endpoints.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/main/notebooks/community/ml_ops/stage5/get_started_with_vertex_private_endpoints.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> <td> <a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/community/ml_ops/stage5/get_started_with_vertex_private_endpoints.ipynb"> <img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo"> Open in Vertex AI Workbench </a> </td> </table> <br/><br/><br/> Overview This tutorial demonstrates how 
to use the Vertex AI SDK to create and use Vertex AI Endpoint resources for serving models, in particular Vertex AI Private Endpoints. A Private Endpoint provides peer-to-peer network gRPC communication (i.e., intranet) between client and server, within the same network. This eliminates the overhead of the network switching and routing of a public endpoint, which uses the HTTP protocol (i.e., internet). Learn more about Private Endpoints. Objective In this tutorial, you learn how to use Vertex AI Private Endpoint resources. This tutorial uses the following Google Cloud ML services and resources: Vertex AI Endpoints Vertex AI Models Vertex AI Prediction The steps performed include: Creating a Private Endpoint resource. Configuring a VPC peering connection. Configuring the serving binary of a Model resource for deployment to a Private Endpoint resource. Deploying a Model resource to a Private Endpoint resource. Sending a prediction request to a Private Endpoint. How does Private Endpoint setup differ from Public Endpoint setup? Enable two additional APIs: Service Networking and Cloud DNS. Add the Compute Network Admin role to your (default) service account. Issue two gcloud commands to set up the VPC peering for your service account. There is currently no SDK support, so the private endpoint is created with the GAPIC client and has an extra argument for the peering network. To send a request, you can't use the SDK/GAPIC since they make an HTTP internet request. Instead, you use curl to send a peer-to-peer request. Future plans include SDK support for endpoints and prediction requests using the peer-to-peer protocol. Dataset This tutorial uses a pre-trained image classification model from TensorFlow Hub, which is trained on the ImageNet dataset. Learn more about the ResNet V2 pretrained model. Costs This tutorial uses billable components of Google Cloud: Vertex AI Cloud Storage Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. 
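The "two gcloud commands" for VPC peering mentioned above typically look like the following. This is a sketch of the standard private services access setup; the reserved range name (`vertex-peering-range`), prefix length, and the `default` network are illustrative and should be adapted to your project:

```shell
# reserve an internal IP range for the peering connection
gcloud compute addresses create vertex-peering-range \
    --global \
    --purpose=VPC_PEERING \
    --prefix-length=16 \
    --network=default

# peer your VPC network with Google's service networking
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=vertex-peering-range \
    --network=default \
    --project=$PROJECT_ID
```

Once the peering is in place, the Private Endpoint created later can reference this network.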
Installation Install the following packages to execute this notebook. End of explanation """ # Automatically restart kernel after installs import os if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) """ Explanation: Restart the kernel After you install the additional packages, you need to restart the notebook kernel so it can find the packages. End of explanation """ PROJECT_ID = "[your-project-id]" # @param {type:"string"} if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]": # Get your GCP project id from gcloud shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID:", PROJECT_ID) ! gcloud config set project $PROJECT_ID """ Explanation: Before you begin GPU runtime Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage. Enable the Service Networking API. Enable the Cloud DNS API. If you are running this notebook locally, you will need to install the Cloud SDK. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $. Set your project ID If you don't know your project ID, you may be able to get your project ID using gcloud. 
End of explanation """ REGION = "[your-region]" # @param {type: "string"} if REGION == "[your-region]": REGION = "us-central1" """ Explanation: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you. Americas: us-central1 Europe: europe-west4 Asia Pacific: asia-east1 You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services. Learn more about Vertex AI regions. End of explanation """ from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") """ Explanation: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial. End of explanation """ # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. import os import sys # If on Vertex AI Workbench, then don't execute this code IS_COLAB = False if not os.path.exists("/opt/deeplearning/metadata/env_version") and not os.getenv( "DL_ANACONDA_HOME" ): if "google.colab" in sys.modules: IS_COLAB = True from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS '' """ Explanation: Authenticate your Google Cloud account If you are using Vertex AI Workbench Notebooks, your environment is already authenticated. Skip this step. 
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via OAuth. Otherwise, follow these steps: In the Cloud Console, go to the Create service account key page. Click Create service account. In the Service account name field, enter a name, and click Create. In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin. Click Create. A JSON file that contains your key downloads to your local environment. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell. End of explanation """ BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"} BUCKET_URI = f"gs://{BUCKET_NAME}" if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]": BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP BUCKET_URI = "gs://" + BUCKET_NAME """ Explanation: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources is retained across sessions. Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. End of explanation """ ! gsutil mb -l $REGION $BUCKET_URI """ Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket. End of explanation """ !
gsutil ls -al $BUCKET_URI """ Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents: End of explanation """ import google.cloud.aiplatform as aip import tensorflow as tf import tensorflow_hub as hub """ Explanation: Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants End of explanation """ aip.init(project=PROJECT_ID, staging_bucket=BUCKET_URI) """ Explanation: Initialize Vertex AI SDK for Python Initialize the Vertex AI SDK for Python for your project and corresponding bucket. End of explanation """ if os.getenv("IS_TESTING_DEPLOY_GPU"): DEPLOY_GPU, DEPLOY_NGPU = ( aip.gapic.AcceleratorType.NVIDIA_TESLA_K80, int(os.getenv("IS_TESTING_DEPLOY_GPU")), ) else: DEPLOY_GPU, DEPLOY_NGPU = (None, None) """ Explanation: Set hardware accelerators You can set hardware accelerators for training and prediction. Set the variables DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4) Otherwise specify (None, None) to use a container image to run on a CPU. Learn more about hardware accelerator support for your region. Note: GPU-enabled TF releases before 2.3 will fail to load the custom model in this tutorial. This is a known issue, fixed in TF 2.3, caused by static graph ops that are generated in the serving function. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.
End of explanation """ if os.getenv("IS_TESTING_TF"): TF = os.getenv("IS_TESTING_TF") else: TF = "2.5".replace(".", "-") if TF[0] == "2": if DEPLOY_GPU: DEPLOY_VERSION = "tf2-gpu.{}".format(TF) else: DEPLOY_VERSION = "tf2-cpu.{}".format(TF) else: if DEPLOY_GPU: DEPLOY_VERSION = "tf-gpu.{}".format(TF) else: DEPLOY_VERSION = "tf-cpu.{}".format(TF) DEPLOY_IMAGE = "{}-docker.pkg.dev/vertex-ai/prediction/{}:latest".format( REGION.split("-")[0], DEPLOY_VERSION ) print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU) """ Explanation: Set pre-built containers Set the pre-built Docker container image for prediction. For the latest list, see Pre-built containers for prediction. End of explanation """ if os.getenv("IS_TESTING_DEPLOY_MACHINE"): MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE") else: MACHINE_TYPE = "n1-standard" VCPU = "4" DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU print("Train machine type", DEPLOY_COMPUTE) """ Explanation: Set machine type Next, set the machine type to use for prediction. Set the variable DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for prediction. machine type n1-standard: 3.75GB of memory per vCPU. n1-highmem: 6.5GB of memory per vCPU n1-highcpu: 0.9 GB of memory per vCPU vCPUs: number of vCPUs [2, 4, 8, 16, 32, 64, 96] Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs. End of explanation """ tfhub_model = tf.keras.Sequential( [hub.KerasLayer("https://tfhub.dev/google/imagenet/resnet_v2_101/classification/5")] ) tfhub_model.build([None, 224, 224, 3]) tfhub_model.summary() """ Explanation: Get pretrained model from TensorFlow Hub For demonstration purposes, this tutorial uses a pretrained model from TensorFlow Hub (TFHub), which is then uploaded to a Vertex AI Model resource. Once you have a Vertex AI Model resource, the model can be deployed to a Vertex AI Private Endpoint resource.
Download the pretrained model First, you download the pretrained model from TensorFlow Hub. The model gets downloaded as a TF.Keras layer. To finalize the model, in this example, you create a Sequential() model with the downloaded TFHub model as a layer, and specify the input shape to the model. End of explanation """ MODEL_DIR = BUCKET_URI + "/model" tfhub_model.save(MODEL_DIR) """ Explanation: Save the model artifacts At this point, the model is in memory. Next, you save the model artifacts to a Cloud Storage location. End of explanation """ CONCRETE_INPUT = "numpy_inputs" def _preprocess(bytes_input): decoded = tf.io.decode_jpeg(bytes_input, channels=3) decoded = tf.image.convert_image_dtype(decoded, tf.float32) resized = tf.image.resize(decoded, size=(224, 224)) return resized @tf.function(input_signature=[tf.TensorSpec([None], tf.string)]) def preprocess_fn(bytes_inputs): decoded_images = tf.map_fn( _preprocess, bytes_inputs, dtype=tf.float32, back_prop=False ) return { CONCRETE_INPUT: decoded_images } # User needs to make sure the key matches model's input @tf.function(input_signature=[tf.TensorSpec([None], tf.string)]) def serving_fn(bytes_inputs): images = preprocess_fn(bytes_inputs) prob = m_call(**images) return prob m_call = tf.function(tfhub_model.call).get_concrete_function( [tf.TensorSpec(shape=[None, 224, 224, 3], dtype=tf.float32, name=CONCRETE_INPUT)] ) tf.saved_model.save(tfhub_model, MODEL_DIR, signatures={"serving_default": serving_fn}) """ Explanation: Upload the model for serving Next, you upload your TF.Keras model to the Vertex AI Model service, which creates a Vertex AI Model resource for your model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to Vertex AI, your serving function ensures that the data is decoded on the model server before it is passed as input to your model.
How does the serving function work When you send a request to an online prediction server, the request is received by an HTTP server. The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function. For Google pre-built prediction containers, the request content is passed to the serving function as a tf.string. The serving function consists of two parts: preprocessing function: Converts the input (tf.string) to the input shape and data type of the underlying model (dynamic graph). Performs the same preprocessing of the data that was done during training of the underlying model -- e.g., normalizing, scaling, etc. post-processing function: Converts the model output to the format expected by the receiving application -- e.g., compresses the output. Packages the output for the receiving application -- e.g., adds headers, makes a JSON object, etc. Both the preprocessing and post-processing functions are converted to static graphs which are fused to the model. The output from the underlying model is passed to the post-processing function. The post-processing function passes the converted/packaged output back to the HTTP server. The HTTP server returns the output as the HTTP response content. One thing to keep in mind when building serving functions for TF.Keras models is that they run as static graphs. That means you cannot use TF graph operations that require a dynamic graph. If you do, you will get an error when the serving function is compiled, indicating that you are using an EagerTensor, which is not supported. Serving function for image data Preprocessing To pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base64 -- which makes the content safe from modification while transmitting binary data over the network.
Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base64-encoded data gets converted back to raw bytes, and then preprocessed to match the model input requirements, before it is passed as input to the deployed model. To resolve this, you define a serving function (serving_fn) and attach it to the model as a preprocessing step. Add a @tf.function decorator so the serving function is fused to the underlying model (instead of upstream on a CPU). When you send a prediction or explanation request, the content of the request is base64-decoded into a TensorFlow string (tf.string), which is passed to the serving function (serving_fn). The serving function preprocesses the tf.string into raw (uncompressed) numpy bytes (preprocess_fn) to match the input requirements of the model: io.decode_jpeg - Decompresses the JPEG image, which is returned as a TensorFlow tensor with three channels (RGB). image.convert_image_dtype - Changes integer pixel values to float32, and rescales the pixel data to between 0 and 1. image.resize - Resizes the image to match the input shape for the model. At this point, the data can be passed to the model (m_call), via a concrete function. The serving function is a static graph, while the model is a dynamic graph. The concrete function performs the tasks of marshalling the input data from the serving function to the model, and marshalling the prediction result from the model back to the serving function. End of explanation """ loaded = tf.saved_model.load(MODEL_DIR) serving_input = list( loaded.signatures["serving_default"].structured_input_signature[1].keys() )[0] print("Serving function input:", serving_input) """ Explanation: Get the serving function signature You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer. For your purpose, you need the signature of the serving function. Why?
Well, when we send our data for prediction as an HTTP request, the image data is base64 encoded, and our TF.Keras model takes numpy input. Your serving function will do the conversion from base64 to a numpy array. When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request. End of explanation """ model = aip.Model.upload( display_name="example_" + TIMESTAMP, artifact_uri=MODEL_DIR, serving_container_image_uri=DEPLOY_IMAGE, ) print(model) """ Explanation: Upload the TensorFlow Hub model to a Vertex AI Model resource Finally, you upload the model artifacts from the TFHub model into a Vertex AI Model resource. End of explanation """ # This is for display only; you can name the range anything. PEERING_RANGE_NAME = "vertex-ai-prediction-peering-range" NETWORK = "default" # NOTE: `prefix-length=16` means a CIDR block with mask /16 will be # reserved for use by Google services, such as Vertex AI. ! gcloud compute addresses create $PEERING_RANGE_NAME \ --global \ --prefix-length=16 \ --description="peering range for Google service" \ --network=$NETWORK \ --purpose=VPC_PEERING """ Explanation: Set up the VPC peering network To use a Private Endpoint, you set up a VPC peering network between your project and the Vertex AI Prediction service project that is hosting VMs running your model. This eliminates additional hops in network traffic and allows use of the efficient gRPC protocol. Learn more about VPC peering. IMPORTANT: you can only set up one VPC peering to servicenetworking.googleapis.com per project. Create VPC peering for default network For simplicity, we set up VPC peering to the default network. You can create a different network for your project. If you set up VPC peering with any other network, make sure that the network already exists and that your VM is running on that network.
End of explanation """ ! gcloud services vpc-peerings connect \ --service=servicenetworking.googleapis.com \ --network=$NETWORK \ --ranges=$PEERING_RANGE_NAME \ --project=$PROJECT_ID """ Explanation: Create the VPC connection Next, create the connection for VPC peering. Note: If you get a PERMISSION DENIED error, you may not have the necessary role 'Compute Network Admin' set for your default service account. In the Cloud Console, follow these steps: Go to IAM & Admin. Find your service account. Click the edit icon. Select Add Another Role. Enter 'Compute Network Admin'. Select Save. End of explanation """ ! gcloud compute networks peerings list --network $NETWORK """ Explanation: Check the status of your peering connections. End of explanation """ project_number = model.resource_name.split("/")[1] print(project_number) full_network_name = f"projects/{project_number}/global/networks/{NETWORK}" """ Explanation: Construct the full network name You need to have the full network resource name when you subsequently create a Private Endpoint resource for VPC peering. End of explanation """ endpoint = aip.PrivateEndpoint.create( display_name="private_" + TIMESTAMP, network=full_network_name ) """ Explanation: Creating an Endpoint resource You create an Endpoint resource using the PrivateEndpoint.create() method. In this example, the following parameters are specified: display_name: A human-readable name for the Private Endpoint resource. network: The full network resource name for the VPC peering. Learn more about Vertex AI Endpoints. End of explanation """ endpoint.gca_resource """ Explanation: Get details on an Endpoint resource You can get the underlying details of an Endpoint object with the property gca_resource.
End of explanation """ response = endpoint.deploy( model=model, deployed_model_display_name="example_" + TIMESTAMP, machine_type=DEPLOY_COMPUTE, traffic_split={}, # no traffic split ) print(endpoint) """ Explanation: Deploying Model resources to an Endpoint resource. You can deploy one or more Vertex AI Model resource instances to the same endpoint. Each Vertex AI Model resource that is deployed will have its own deployment container for the serving binary. Note: For this example, you specified the deployment container for the TFHub model in the previous step of uploading the model artifacts to a Vertex AI Model resource. Deploying a single Model resource In the next example, you deploy a single Vertex AI Model resource to a Vertex AI Endpoint resource. To deploy, you specify the following additional configuration settings: The machine type. The type and number of GPUs (if any). Static, manual or auto-scaling of VM instances. For a Private Endpoint, you can only deploy a single model. As such, there is no traffic split. In this example, you deploy the model with the minimal set of specified parameters, as follows: model: The Model resource. deployed_model_display_name: The human-readable name for the deployed model instance. machine_type: The machine type for each VM instance. traffic_split: Set to {} to indicate no traffic split. Due to the time required to provision the resource, this may take up to a few minutes. End of explanation """ print(endpoint.gca_resource.deployed_models[0]) """ Explanation: Get information on the deployed model You can get the deployment settings of the deployed model from the Endpoint resource configuration data gca_resource.deployed_models. In this example, only one model is deployed -- hence the reference to the subscript [0]. End of explanation """ !
gsutil cp gs://cloud-ml-data/img/flower_photos/daisy/100080576_f52e8ee070_n.jpg test.jpg import base64 with open("test.jpg", "rb") as f: data = f.read() b64str = base64.b64encode(data).decode("utf-8") """ Explanation: Prepare test data for prediction Next, you will load a compressed JPEG image into memory and then base64 encode it. For demonstration purposes, you use an image from the Flowers dataset. End of explanation """ import json with open("instances.json", "w") as f: f.write(json.dumps({"instances": [{serving_input: {"b64": b64str}}]})) """ Explanation: Make the prediction Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource. Request Since a Private Endpoint blocks requests from the public internet, you send the request using curl to a private URI. First, you construct the request as a JSON file. To pass the test data to the prediction service, you encode the bytes into base64 -- which makes the content safe from modification while transmitting binary data over the network. The format of each instance is: { serving_input: { 'b64': base64_encoded_bytes } } Since the serving binary can take multiple items (instances), send your single test item as a list of one test item. End of explanation """ endpoint_id = endpoint.resource_name ENDPOINT_URL = ! gcloud beta ai endpoints describe {endpoint_id} \ --region={REGION} \ --format="value(deployedModels.privateEndpoints.predictHttpUri)" private_url = ENDPOINT_URL[1] print(private_url) """ Explanation: Construct the Private Endpoint URI Next, you construct the URI for the Private Endpoint. End of explanation """ output = ! curl -X POST -d@instances.json $private_url predictions = output[5] print(predictions) ! rm test.jpg instances.json """ Explanation: Make the prediction request using curl Use curl to make the prediction request to the private URI.
End of explanation """ instances = [{serving_input: {"b64": b64str}}] prediction = endpoint.predict(instances=instances) print(prediction) """ Explanation: Make the prediction request using SDK Finally, use the Vertex AI SDK to make a prediction request. End of explanation """ deployed_model_id = endpoint.gca_resource.deployed_models[0].id print(deployed_model_id) endpoint.undeploy(deployed_model_id) """ Explanation: Undeploy Model resource from Endpoint resource When a Model resource is deployed to an Endpoint resource, the deployed Model resource instance is assigned an ID -- commonly referred to as the deployed model ID. You can undeploy a specific Model resource instance with the undeploy() method, with the following parameters: deployed_model_id: The ID assigned to the deployed model. End of explanation """ endpoint.undeploy_all() """ Explanation: Undeploy all Model resources from an Endpoint resource. Finally, you can undeploy all Model instances from an Endpoint resource using the undeploy_all() method. End of explanation """ endpoint.delete() """ Explanation: Delete an Endpoint resource You can delete an Endpoint resource with the delete() method if the Endpoint resource has no deployed models. End of explanation """ delete_bucket = False delete_model = True if delete_model: try: model.delete() except Exception as e: print(e) if delete_bucket or os.getenv("IS_TESTING"): ! gsutil rm -rf {BUCKET_URI} """ Explanation: Cleaning up To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial: End of explanation """
from jyquickhelper import add_notebook_menu add_notebook_menu() %matplotlib inline """ Explanation: Drawing a multicolored pyramid This notebook answers the exercise proposed in the blog post, which consists of displaying balls in three different colors so that no ball has a neighbor of the same color: drawing a multicolored pyramid. End of explanation """ from IPython.display import Image Image("http://lesenfantscodaient.fr/_images/biodiversite_tri2.png") """ Explanation: Problem The goal is to draw the following pyramid with matplotlib. End of explanation """ from pyquickhelper.helpgen import NbImage NbImage("data/hexa.png") """ Explanation: Idea of the solution We split the problem into two smaller ones: Find the position of the balls in a Cartesian coordinate system. Choose the right color. The lattice is hexagonal. The following image is taken from the Wikipedia page on close-packing (empilement compact). End of explanation """ Image("http://lesenfantscodaient.fr/_images/pyramide_num2.png") """ Explanation: But first, we need a way to identify each ball. We number them with two indices. End of explanation """ import matplotlib.pyplot as plt fig, ax = plt.subplots(1,1) n = 10 x = [] y = [] for i in range(1,n+1): for j in range(i, n+1): x.append(i) y.append(j) size = [300 for c in x] colors = ["r" for c in x] ax.scatter(x, y, s=size, c=colors, alpha=0.5) """ Explanation: The coordinates We start from the scatter_demo.py example on the matplotlib website. End of explanation """ fig, ax = plt.subplots(1,1) n = 10 x = [] y = [] for i in range(1,n+1): for j in range(i, n+1): x.append(i) y.append(-j) size = [300 for c in x] colors = ["r" for c in x] ax.scatter(x, y, s=size, c=colors, alpha=0.5) """ Explanation: We invert the y-axis.
End of explanation """ fig, ax = plt.subplots(1,1) n = 10 x = [] y = [] for i in range(1,n+1): for j in range(i, n+1): x.append(i - j*0.5) y.append(-j) size = [300 for c in x] colors = ["r" for c in x] ax.scatter(x, y, s=size, c=colors, alpha=0.5) """ Explanation: We shift each row. End of explanation """ fig, ax = plt.subplots(1,1, figsize=(4, 4*(3**0.5)/2)) n = 10 x = [] y = [] for i in range(1,n+1): for j in range(i, n+1): x.append(i - j*0.5) y.append(-j*(3**0.5)/2) size = [300 for c in x] colors = ["r" for c in x] ax.scatter(x, y, s=size, c=colors, alpha=0.5) """ Explanation: This looks hexagonal, but it is not quite there yet. The height of an equilateral triangle with side one is $\frac{\sqrt{3}}{2}$. That works out well, because in the previous example the side of each triangle is 1. We also change the dimensions of the figure drawn with matplotlib so as not to reduce our efforts to nothing. End of explanation """ fig, ax = plt.subplots(1,1, figsize=(4, 4*(3**0.5)/2)) n = 10 x = [] y = [] colors = [] trois = "rgb" for i in range(1,n+1): for j in range(i, n+1): x.append(i - j*0.5) y.append(-j*(3**0.5)/2) colors.append(trois[(i+j) % 3]) size = [300 for c in x] ax.scatter(x, y, s=size, c=colors, alpha=0.5) """ Explanation: The color I leave it to you to go back to the first two images and observe the color of all the balls satisfying (i+j)%3 == 1. End of explanation """
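A quick sanity check, added here beyond the original exercise: in the numbering above, moving to any neighboring ball changes i+j by ±1 or ±2, never by a multiple of 3, so the (i+j) % 3 rule can never give two neighbors the same color. A small sketch verifying this exhaustively for n = 10 (the helper names color and neighbors are ours, not from the notebook):

```python
# Sanity check (an addition, not part of the original exercise): no two
# neighboring balls in the triangular numbering share a color under (i+j) % 3.
def color(i, j):
    return (i + j) % 3

def neighbors(i, j):
    # the six nearest neighbors of ball (i, j) in this numbering
    return [(i - 1, j), (i + 1, j), (i, j - 1), (i - 1, j - 1),
            (i, j + 1), (i + 1, j + 1)]

n = 10
balls = {(i, j) for i in range(1, n + 1) for j in range(i, n + 1)}
ok = all(color(i, j) != color(a, b)
         for (i, j) in balls
         for (a, b) in neighbors(i, j) if (a, b) in balls)
print(ok)  # True
```

Since every neighbor changes i+j by ±1 or ±2, the check passes for any pyramid size, not just n = 10.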
import numpy as np N = 14.0067 # amu O = 15.999 # amu mu = N*O/(N+O) # amu r = 1.15077 # bond length (angstrom) h = 6.62607E-34 # J*s hbar = 1.05457E-34 # J*s NA = 6.02214E23 #molecules/mol c = 299792458 # m/s I = mu*r**2 # amu * angstrom^2 print('The moment of inertia is',round(I,2),'amu*angstrom^2.') B = hbar**2/2/I*6.022e26*(1e10)**2*NA/1000 #kJ/mol print('The rotational energy constant is', round(B,5),' kJ/mol.') Btilde = B/h/c*1000/NA/100 #1/cm print('The rotational spectral constant is', round(Btilde,3),' cm^-1.') """ Explanation: Chem 30324, Spring 2020, Homework 6 Due March 4, 2020 The diatomic nitric oxide (NO) is an unusual and important molecule. It has an odd number of electrons, which is a rarity for a stable molecule. It acts as a signaling molecule in the body, helping to regulate blood pressure, is a primary pollutant from combustion, and is a key constituent of smog. It exists in several isotopic forms, but the most common, ${}^{14}$N=${}^{16}$O, has a bond length of 1.15077 Å and a harmonic vibrational frequency of 1904 cm$^{-1}$. Spin the NO. 1. Calculate the moment of inertia of ${}^{14}$N=${}^{16}$O, in amu Å$^2$, the rotational energy constant, $B=\hbar^2/2I$, in kJ mol$^{-1}$, and the rotational spectral constant, $\tilde{B}=B/hc$, in cm$^{-1}$. End of explanation """ print('From problem 1, we know that B =',round(B,5), 'kJ/mol.') h = 6.62607E-34 # J*s c = 299792458 # m/s NA = 6.02214E23 #molecules/mol v = [] for l in [1,3,5,7]: v.append(B*l/h/c*1000/100/NA) print('0 to 1:',round(v[0],3),'cm^-1') print('1 to 2:',round(v[1],3),'cm^-1') print('2 to 3:',round(v[2],3),'cm^-1') print('3 to 4:',round(v[3],3),'cm^-1') """ Explanation: 2. Imagine that the NO molecule is adsorbed flat on a surface upon which it is free to rotate. Plot out the energies of the four lowest-energy rotational quantum states, in units of $\tilde{B}$, being sure to include appropriate quantum numbers and degeneracies.
Also indicate the total rotational angular momentum of each state, in units of $\hbar$. Since we are looking at a molecule adsorbed flat on a surface, we will use the 2-D rigid rotor model. $E_{m_l} = \frac{\hbar^2}{2I}m_l^2$ The four lowest-energy rotational quantum states are: $m_l = 0, \pm 1, \pm 2, \pm 3$ 3. Whether light can induce an NO to jump from some rotational level $m_l$ to some other one $m_l^\prime$ is determined by whether the transition dipole moment integral $\langle\psi_{m_l}\lvert x\rvert\psi_{m_l'}\rangle$ is zero or non-zero. Find the selection rule on $\Delta m_l$ that makes the integral non-zero. Recall that $x$ can be written $r \cos\phi$ in polar coordinates. Wave function for the 2-D rotor model: $\Psi_{m_l}(\phi) = \frac{1}{\sqrt{2\pi}}e^{-im_l\phi}, \quad m_l = 0,\pm1,\pm2,\ldots$ Transition dipole moment integral: $\langle\psi_{m_l}|x|\psi_{m_l'}\rangle = \int_{0}^{2\pi}\frac{1}{\sqrt{2\pi}}e^{im_l\phi}\,r\cos\phi\,\frac{1}{\sqrt{2\pi}}e^{-im_l'\phi}\,d\phi = \frac{r}{2\pi}\int_{0}^{2\pi}e^{i\Delta m_l\phi}\cos\phi\,d\phi$, with $\Delta m_l = m_l - m_l'$. Using $\cos\phi = \frac{1}{2}(e^{i\phi}+e^{-i\phi})$: $\langle\psi_{m_l}|x|\psi_{m_l'}\rangle = \frac{r}{4\pi}\int_{0}^{2\pi}\left[e^{i(\Delta m_l+1)\phi}+e^{i(\Delta m_l-1)\phi}\right]d\phi$ For an integer $n$, $\int_{0}^{2\pi}e^{in\phi}d\phi \neq 0$ only when $n = 0$, so $\langle\psi_{m_l}|x|\psi_{m_l'}\rangle \neq 0$ only when $\Delta m_l = \pm1$ 4. Use your selection rule to determine the frequencies, in wavenumbers, of the four lowest-energy rotational transitions of an ${}^{14}$N=${}^{16}$O adsorbed flat on a surface.
$\Delta E_{m_{l0}\rightarrow m_{l1}} = B$ $\Delta E_{m_{l1}\rightarrow m_{l2}} = 3B$ $\Delta E_{m_{l2}\rightarrow m_{l3}} = 5B$ $\Delta E_{m_{l3}\rightarrow m_{l4}} = 7B$ Wavenumbers: $\tilde{v}_{m_{l0}\rightarrow m_{l1}} = \frac{B}{hc}$ $\tilde{v}_{m_{l1}\rightarrow m_{l2}} = \frac{3B}{hc}$ $\tilde{v}_{m_{l2}\rightarrow m_{l3}} = \frac{5B}{hc}$ $\tilde{v}_{m_{l3}\rightarrow m_{l4}} = \frac{7B}{hc}$
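The selection rule derived above can also be checked symbolically. A small added sketch (using sympy, which appears elsewhere in these homework solutions; the dictionary name results is ours): the integral $\int_0^{2\pi} e^{i\Delta m_l\phi}\cos\phi\,d\phi$ should vanish unless $\Delta m_l = \pm 1$.

```python
from sympy import I, cos, exp, integrate, pi, simplify, symbols

phi = symbols('phi', real=True)
# integral of exp(I*dm*phi)*cos(phi) over a full period, for several dm
results = {dm: simplify(integrate(exp(I * dm * phi) * cos(phi), (phi, 0, 2 * pi)))
           for dm in range(-3, 4)}
print(results)  # nonzero (= pi) only for dm = -1 and dm = +1
```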
End of explanation """ import numpy as np kB = 1.3806e-23 #J/K T = 298 #K P = [] for i in [0,1,2,3]: p = (2*i+1)*np.exp(-B*i*(i+1)/kB/T*1000/6.022e23) P.append(p) total = np.sum(P) print('When l = 0, relative population =',round(P[0]/total,4)) print('When l = 1, relative population =',round(P[1]/total,4)) print('When l = 2, relative population =',round(P[2]/total,4)) print('When l = 3, relative population =',round(P[3]/total,4)) print('At 298 K, all of these states potentially contribute to the rotational spectrum of NO.') """ Explanation: 5. Use your selection rule to determine the change in angular momentum of the ${}^{14}$N=${}^{16}$O in each allowed transition. Compare your result to the angular momentum of a photon, $\hbar$. The absolute change in angular momentum is $|\Delta l_z| = \hbar$, identical to the angular momentum of a photon. NO flips 5. Now imagine the NO molecule is free to rotate in three-dimensional space. As in Question 2 above, plot out the energies of the four lowest-energy rotational quantum states, in units of $\tilde{B}$, being sure to include appropriate quantum numbers and degeneracies. 6. Predict the relative populations of the first four rotational quantum states at 298 K. Do you expect one or all of these states to potentially contribute to the rotational spectrum of NO?
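Question 5 above asks for a plot of the four lowest 3-D rigid-rotor levels. One possible sketch, added here as an illustration (the layout choices are ours), using $E_l = B\,l(l+1)$ with degeneracy $g = 2l+1$:

```python
import matplotlib.pyplot as plt

# energies of the four lowest 3-D rigid-rotor states, in units of B,
# with their degeneracies g = 2l + 1
levels = [(l, l * (l + 1), 2 * l + 1) for l in range(4)]

fig, ax = plt.subplots()
for l, E, g in levels:
    ax.hlines(E, 0, 1)
    ax.text(1.05, E, f'l = {l}, E = {E}B, g = {g}', va='center')
ax.set_xlim(0, 2.5)
ax.set_xticks([])
ax.set_ylabel('Energy (units of B)')
plt.show()
```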
End of explanation
"""
from sympy import *

rho = symbols('rho')
I = integrate(rho**3/2*(1-rho/2)**2*exp(-rho),(rho,0,oo))
print('The expectation value of the distance of the electron from the nucleus is',I,"* a0.")
"""
Explanation: 10. Determine and indicate on your plot $\langle r\rangle$, the expectation value of the distance of the electron from the nucleus.
$\langle r\rangle = \int_{0}^{\infty}rP_{20}dr = \int_{0}^{\infty}\frac{\rho^3}{2}(1-\rho/2)^2e^{-\rho}dr = a_0\int_{0}^{\infty}\frac{\rho^3}{2}(1-\rho/2)^2e^{-\rho}d\rho$
End of explanation
"""
print("Possible solutions are ", solve(diff(rho**2/2*(1-rho/2)**2*exp(-rho),rho),rho))
print('Comparing these solutions, the global maximum occurs at sqrt(5) + 3')
"""
Explanation: 11. Determine and indicate on your plot $r_{MP}$, the most probable distance of the electron from the nucleus.
End of explanation
"""
rho_ = symbols('rho_')
I = integrate(rho_**2/2*(1-rho_/2)**2*exp(-rho_),(rho_,8,oo))  # integrate from 8 to infinity
print("Prob = %f"%I)
"""
Explanation: 12. Determine and indicate on your plot the maximum classical distance of the electron from the nucleus in this orbital.
Classical theory requires the orbital energy to equal the Coulombic energy:
$$-\frac{\hbar^2}{2m_ea_0^2}\frac{1}{N^2} = -\frac{e^2}{4\pi\epsilon_0}\frac{1}{r}, \quad\text{where}\quad N=2 \quad\text{and}\quad a_0 = \frac{4\pi\epsilon_0\hbar^2}{m_ee^2}$$
$$r_{max,classic} = 8a_0$$
13. What is the probability of finding the electron beyond the classical distance? (Evaluate the necessary integral numerically.)
End of explanation
"""
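As a cross-check on the symbolic `sympy` integral above, the same tail probability can be evaluated numerically with a plain composite Simpson's rule. The helper names `p20` and `integrate` are introduced here purely for illustration and are not part of the original notebook.

```python
import math

def p20(rho):
    # Radial probability density of the 2s state, in units of 1/a0:
    # P20(rho) = (rho**2 / 2) * (1 - rho/2)**2 * exp(-rho)
    return 0.5 * rho**2 * (1.0 - 0.5 * rho) ** 2 * math.exp(-rho)

def integrate(f, a, b, n=20000):
    # Composite Simpson's rule (n must be even).
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2.0 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3.0

# Probability of finding the 2s electron beyond the classical turning
# point r = 8*a0; the integrand decays like exp(-rho), so truncating
# the upper limit at rho = 60 is more than sufficient.
prob_beyond_classical = integrate(p20, 8.0, 60.0)
print(round(prob_beyond_classical, 4))  # ≈ 0.1855, matching the sympy result
```

The same routine integrated from 0 also confirms that the radial probability density is normalized to 1.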
vadim-ivlev/STUDY
handson-data-science-python/DataScience-Python3/TTest.ipynb
mit
import numpy as np
from scipy import stats

A = np.random.normal(25.0, 5.0, 10000)
B = np.random.normal(26.0, 5.0, 10000)

stats.ttest_ind(A, B)
"""
Explanation: T-Tests and P-Values
Let's say we're running an A/B test. We'll fabricate some data that randomly assigns order amounts from customers in sets A and B, with B being a little bit higher:
End of explanation
"""
B = np.random.normal(25.0, 5.0, 10000)

stats.ttest_ind(A, B)

A = np.random.normal(25.0, 5.0, 100000)
B = np.random.normal(25.0, 5.0, 100000)

stats.ttest_ind(A, B)
"""
Explanation: The t-statistic is a measure of the difference between the two sets expressed in units of standard error. Put differently, it's the size of the difference relative to the variance in the data. A high t value means there's probably a real difference between the two sets; you have "significance". The P-value is a measure of the probability of an observation lying at extreme t-values; so a low p-value also implies "significance."
If you're looking for a "statistically significant" result, you want to see a very low p-value and a high t-statistic (well, a high absolute value of the t-statistic more precisely). In the real world, statisticians seem to put more weight on the p-value result.
Let's change things up so both A and B are just random, generated under the same parameters. So there's no "real" difference between the two:
End of explanation
"""
A = np.random.normal(25.0, 5.0, 1000000)
B = np.random.normal(25.0, 5.0, 1000000)

stats.ttest_ind(A, B)
"""
Explanation: Our p-value actually got a little lower, and the t-statistic a little larger, but still not enough to declare a real difference. So, you could have reached the right decision with just 10,000 samples instead of 100,000.
Even a million samples doesn't help, so if we were to keep running this A/B test for years, you'd never achieve the result you're hoping for:
End of explanation
"""
stats.ttest_ind(A, A)
"""
Explanation: If we compare the same set to itself, by definition we get a t-statistic of 0 and p-value of 1:
End of explanation
"""
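To demystify the numbers that `stats.ttest_ind` prints, the pooled-variance t-statistic can be computed by hand in a few lines. `t_stat` is a hypothetical helper written for this sketch, not part of SciPy; for equal-variance samples it reproduces what `ttest_ind` computes by default.

```python
import math

def t_stat(a, b):
    # Two-sample t-statistic with pooled variance (the default
    # behavior of scipy.stats.ttest_ind).
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variance of a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)  # sample variance of b
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1.0 / na + 1.0 / nb))

print(t_stat([1, 2, 3, 4, 5], [2, 3, 4, 5, 6]))  # → -1.0
```

Note that comparing a set to itself gives a t-statistic of exactly 0, as in the cell above.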
mommermi/Introduction-to-Python-for-Scientists
notebooks/python_basics_20160909.ipynb
mit
# this is a single line comment """ this is a multi line comment """ """ Explanation: Python Basics (2016-09-09) Content Comments Data Types Simple Arithmetics Strings Comments Comments provide important documentation for your code. End of explanation """ a = 5.1 print 'a', type(a) b = 3 print 'b', type(b) """ Explanation: Data Types (see https://docs.python.org/2/tutorial/introduction.html#numbers) Python supports the most common data types: integer, float, character, string, boolean. Based on the provided input, it selects the most simple data type. End of explanation """ print a+b, type(a+b) """ Explanation: What will be the data type of a+b? End of explanation """ print '3/2', 3/2 # integer divided by integer print '3./2', 3./2 # float divided by integer """ Explanation: It's a float. But why? If it was an integer, information would get lost. Python chooses the simplest data type that preserves all information. However, this can be tricky if you don't pay attention... Important: Always be aware what data type Python uses to prevent loss of information. 
End of explanation """ c = 'g' print 'c', type(c) d = 'stuff' print 'd', type(d) e = True print 'e', type(e) """ Explanation: Other common data types: End of explanation """ print a+b # addition print a-b # subtraction print a*b # multiplication print a/b # division print a//b # floor division print a%b # modulus print a**2 # power """ Explanation: Simple Arithmetics End of explanation """ #print a+d # results in a TypeError """ Explanation: Again, be aware of the data types: End of explanation """ print 'abc' + "def" # concatenate strings (' and " can both be used, but not mixed) print 'thereisabunnyhidinginhere'.find('bunny') # index where the bunny hides print 'thereisabunnyhidinginhere'.find('wolf') # return -1 if failed print ' lots of space '.strip() # strip blanks print '____lots of underscores____'.strip('_') # strip underscores print 'string_with_underscores'.replace('_', ' ') # replace underscores print 'list,of,things,like,in,a,csv,file'.split(',') # split in to list print 'one\nword\nper\nline' # use '\n' to represent a line break print '123abc'.isalpha() # check if string consists of letters only print '123'.isdigit() # check if string consists of numbers only """ Explanation: For more complex mathematical operations, refer to the math or numpy modules, for instance. Strings (see https://docs.python.org/2/tutorial/introduction.html#strings) End of explanation """
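One caveat if you run these cells under Python 3 rather than the Python 2 used above: the meaning of `/` changed, so the integer-division surprise discussed earlier disappears. A quick sketch of the Python 3 behavior:

```python
# In Python 3, / is always true division and // is floor division,
# regardless of whether the operands are ints or floats.
print(3 / 2)     # 1.5 (Python 2 would print 1)
print(3 // 2)    # 1
print(3.0 // 2)  # 1.0 (floored, but the result stays a float)
```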
jrbourbeau/cr-composition
notebooks/fraction-correct.ipynb
mit
%load_ext watermark %watermark -a 'Author: James Bourbeau' -u -d -v -p numpy,matplotlib,scipy,pandas,sklearn,mlxtend """ Explanation: <a id='top'> </a> End of explanation """ from __future__ import division, print_function from collections import defaultdict import itertools import numpy as np from scipy import interp import pandas as pd import matplotlib.pyplot as plt import seaborn.apionly as sns import pyprind from sklearn.metrics import accuracy_score, confusion_matrix, roc_curve, auc, classification_report from sklearn.model_selection import cross_val_score, StratifiedShuffleSplit, KFold, StratifiedKFold from mlxtend.feature_selection import SequentialFeatureSelector as SFS import comptools as comp import comptools.analysis.plotting as plotting # color_dict allows for a consistent color-coding for each composition color_dict = comp.analysis.get_color_dict() %matplotlib inline """ Explanation: Fraction correctly identified Table of contents Define analysis free parameters Data preprocessing Fitting random forest Fraction correctly identified Spectrum Unfolding End of explanation """ config = 'IC86.2012' # config = 'IC79' comp_class = True comp_key = 'MC_comp_class' if comp_class else 'MC_comp' comp_list = ['light', 'heavy'] if comp_class else ['P', 'He', 'O', 'Fe'] """ Explanation: Define analysis free parameters [ back to top ] Whether or not to train on 'light' and 'heavy' composition classes, or the individual compositions End of explanation """ pipeline_str = 'GBDT' pipeline = comp.get_pipeline(pipeline_str) """ Explanation: Get composition classifier pipeline End of explanation """ energybins = comp.analysis.get_energybins() """ Explanation: Define energy binning for this analysis End of explanation """ sim_train, sim_test = comp.load_dataframe(datatype='sim', config=config, comp_key=comp_key) len(sim_train) + len(sim_test) # feature_list = ['lap_cos_zenith', 'log_s125', 'log_dEdX', 'invqweighted_inice_radius_1_60'] # feature_list, feature_labels = 
comp.analysis.get_training_features(feature_list) feature_list, feature_labels = comp.analysis.get_training_features() """ Explanation: Data preprocessing [ back to top ] 1. Load simulation/data dataframe and apply specified quality cuts 2. Extract desired features from dataframe 3. Get separate testing and training datasets End of explanation """ frac_correct_folds = comp.analysis.get_CV_frac_correct(sim_train, feature_list, pipeline_str, comp_list) frac_correct_gen_err = {key: np.std(frac_correct_folds[key], axis=0) for key in frac_correct_folds} """ Explanation: Fraction correctly identified [ back to top ] Calculate classifier performance via 10-fold CV End of explanation """ fig, ax = plt.subplots() for composition in comp_list: # for composition in comp_list + ['total']: print(composition) performance_mean = np.mean(frac_correct_folds[composition], axis=0) performance_std = np.std(frac_correct_folds[composition], axis=0) # err = np.sqrt(frac_correct_gen_err[composition]**2 + reco_frac_stat_err[composition]**2) plotting.plot_steps(energybins.log_energy_bins, performance_mean, yerr=performance_std, ax=ax, color=color_dict[composition], label=composition) plt.xlabel('$\mathrm{\log_{10}(E_{reco}/GeV)}$') ax.set_ylabel('Classification accuracy') # ax.set_ylabel('Classification accuracy \n (statistical + 10-fold CV error)') ax.set_ylim([0.0, 1.0]) ax.set_xlim([energybins.log_energy_min, energybins.log_energy_max]) ax.grid() leg = plt.legend(loc='upper center', frameon=False, bbox_to_anchor=(0.5, # horizontal 1.15),# vertical ncol=len(comp_list)+1, fancybox=False) # set the linewidth of each legend object for legobj in leg.legendHandles: legobj.set_linewidth(3.0) cv_str = 'Avg. 
accuracy:\n{:0.2f}\% (+/- {:0.1f}\%)'.format(np.mean(frac_correct_folds['total'])*100, np.std(frac_correct_folds['total'])*100) ax.text(7.4, 0.2, cv_str, ha="center", va="center", size=10, bbox=dict(boxstyle='round', fc="white", ec="gray", lw=0.8)) # plt.savefig('/home/jbourbeau/public_html/figures/frac-correct-{}.png'.format(pipeline_str)) plt.show() """ Explanation: Plot fraction of events correctlty classified vs energy This is done via 10-fold cross-validation. This will give an idea as to how much variation there is in the classifier due to different trainig and testing samples. End of explanation """ avg_frac_correct_data = {'values': np.mean(frac_correct_folds['total'], axis=0), 'errors': np.std(frac_correct_folds['total'], axis=0)} avg_frac_correct, avg_frac_correct_err = comp.analysis.averaging_error(**avg_frac_correct_data) reco_frac, reco_frac_stat_err = comp.analysis.get_frac_correct(sim_train, sim_test, pipeline, comp_list) # Plot fraction of events correctlt classified vs energy fig, ax = plt.subplots() for composition in comp_list + ['total']: err = np.sqrt(frac_correct_gen_err[composition]**2 + reco_frac_stat_err[composition]**2) plotting.plot_steps(energybins.log_energy_bins, reco_frac[composition], err, ax, color_dict[composition], composition) plt.xlabel('$\log_{10}(E_{\mathrm{reco}}/\mathrm{GeV})$') ax.set_ylabel('Fraction correctly identified') ax.set_ylim([0.0, 1.0]) ax.set_xlim([energybins.log_energy_min, energybins.log_energy_max]) ax.grid() leg = plt.legend(loc='upper center', frameon=False, bbox_to_anchor=(0.5, # horizontal 1.1),# vertical ncol=len(comp_list)+1, fancybox=False) # set the linewidth of each legend object for legobj in leg.legendHandles: legobj.set_linewidth(3.0) cv_str = 'Accuracy: {:0.2f}\% (+/- {:0.1f}\%)'.format(avg_frac_correct*100, avg_frac_correct_err*100) ax.text(7.4, 0.2, cv_str, ha="center", va="center", size=10, bbox=dict(boxstyle='round', fc="white", ec="gray", lw=0.8)) 
plt.savefig('/home/jbourbeau/public_html/figures/frac-correct-{}.png'.format(pipeline_str)) plt.show() """ Explanation: Determine the mean and standard deviation of the fraction correctly classified for each energy bin End of explanation """
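The error bars above combine the 10-fold CV spread with the statistical uncertainty in quadrature (the `np.sqrt(a**2 + b**2)` expressions). As a standalone sketch of that combination rule for independent uncertainties:

```python
import math

def add_in_quadrature(*errors):
    # Combine independent 1-sigma uncertainties: sigma = sqrt(sum(sigma_i**2)).
    return math.sqrt(sum(e * e for e in errors))

print(add_in_quadrature(3.0, 4.0))  # → 5.0
```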
Ykharo/notebooks
Trabajando_con_R_Python/Trabajando_de_forma_conjunta_con_Python_y_con_R.ipynb
bsd-2-clause
# Import pandas and numpy to handle the data we will pass to R
import pandas as pd
import numpy as np
# Use rpy2 to interact with R
import rpy2.robjects as ro
# Enable rpy2's automatic type conversion
import rpy2.robjects.numpy2ri
rpy2.robjects.numpy2ri.activate()
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Working jointly with Python and R.
Today we will see how to combine the best of R, some of its libraries, with Python using rpy2. But first of all, what is rpy2? rpy2 is an interface that lets us exchange information between R and Python and access R functionality from Python. We can therefore use Python for the whole analysis and, whenever we need a specialized statistical library from R, reach it through rpy2.
To use rpy2 you need both Python (CPython version >= 2.7.x) and R (version >= 3) installed, plus the R libraries you want to access. Conda can install the Python and R interpreters as well as the libraries, but I have not worked with Conda and R, so I cannot add much on that front; I assume it works much like Conda with Python.
For this mini-tutorial I will use R's extRemes library, which performs extreme value analysis with several of the most widely accepted methodologies.
As always, we start by importing the functionality we need.
End of explanation
"""
codigo_r = """
saluda <- function(cadena) {
   return(paste("Hola, ", cadena))
}
"""
ro.r(codigo_r)
"""
Explanation: The code above introduces a few new things, which I will explain briefly:

import rpy2.robjects as ro, we will come back to this a little further below.
import rpy2.robjects.numpy2ri, imports the numpy2ri module. This module enables automatic conversion of numpy objects into rpy2 objects.
rpy2.robjects.numpy2ri.activate(), calls the activate function, which switches on the automatic object conversion mentioned in the previous line.

A very brief introduction to some of the most important parts of rpy2.
To evaluate R code directly we can use rpy2.robjects.r with the R code expressed as a string (I have imported rpy2.robjects as ro here, as you can see above):
End of explanation
"""
saluda_py = ro.globalenv['saluda']
"""
Explanation: In the previous cell we created an R function called saluda, which is now available in R's global namespace. We can access it from Python as follows:
End of explanation
"""
res = saluda_py('pepe')
print(res[0])
"""
Explanation: And we can use it like this:
End of explanation
"""
print(type(res))
print(res.shape)
"""
Explanation: In the previous cell you can see that to access the result I had to use res[0]. What rpy2 actually returns is:
End of explanation
"""
print(saluda_py.r_repr())
"""
Explanation: In this case, a numpy array carrying assorted information about the rpy2 object. Since the function only returns a string, the numpy array has a single element. We can access the R source code of the function like this:
End of explanation
"""
variable_r_creada_desde_python = ro.FloatVector(np.arange(1,5,0.1))
"""
Explanation: We have seen how to access, from Python, names available in R's global environment. How can we make something we create in Python accessible from R?
End of explanation
"""
variable_r_creada_desde_python
"""
Explanation: Let's see what this variable_r_creada_desde_python looks like from Python
End of explanation
"""
print(variable_r_creada_desde_python.r_repr())
"""
Explanation: And what it should look like from R?
End of explanation
"""
ro.r('variable_r_creada_desde_python')
"""
Explanation: But right now that variable is not available from R, and we could not use it inside R code that stays in the R space (confusing, right?)
End of explanation
"""
ro.globalenv["variable_ahora_en_r"] = variable_r_creada_desde_python
print(ro.r("variable_ahora_en_r"))
"""
Explanation: OK, we have to make it accessible from R as follows:
End of explanation
"""
print(ro.r('sum(variable_ahora_en_r)'))
print(np.sum(variable_r_creada_desde_python))
"""
Explanation: Now that it is accessible we can use it from R. For example, let's call R's sum function, which adds up the elements, directly from R:
End of explanation
"""
# Import the extRemes R library
from rpy2.robjects.packages import importr
extremes = importr('extRemes')
"""
Explanation: Great. We now know, in a very simple and basic way, how to use R from Python and how to pass information from R to Python and from Python to R. This is very powerful! We are joining the best of two worlds: the solidity of Python's scientific tools with the specialized functionality that some R libraries offer and that is not available elsewhere.
Working in a hybrid way between Python and R
We start by importing R's extRemes library:
End of explanation
"""
data = pd.read_csv('datasets/Synthetic_data.txt', sep = '\s*', skiprows = 1,
                   parse_dates = [[0, 1]], names = ['date','time','wspd'],
                   index_col = 0)
data.head(3)
"""
Explanation: In the previous cell we did the following:

from rpy2.robjects.packages import importr, the importr function lets us import R libraries
extremes = importr('extRemes'), this imports R's extRemes library; it is equivalent to running library(extRemes) in R.

We read the data with pandas. The same repo that holds this notebook also contains a text file with data I created beforehand. It is meant to be hourly wind speed data, so we will do an extreme value analysis of hourly wind speeds.
End of explanation
"""
max_y = data.wspd.groupby(pd.TimeGrouper(freq = 'A')).max()
"""
Explanation: We extract the annual maxima, which we will later use inside R to do the extreme value calculation with the generalized extreme value (GEV) distribution:
End of explanation
"""
max_y.plot(kind = 'bar', figsize = (12, 4))
"""
Explanation: We plot the annual maxima using Pandas:
End of explanation
"""
fevd = extremes.fevd
"""
Explanation: We reference the fevd (fit extreme value distribution) functionality inside R's extRemes package so we can use it directly, from Python, on the maxima we obtained with Pandas.
End of explanation
"""
print(fevd.__doc__)
"""
Explanation: As mentioned before, we are going to compute the GEV parameters using the GMLE (Generalized Maximum Likelihood Estimation) fitting method and store them directly in a Python variable. Let's look at the help first:
End of explanation
"""
res = fevd(max_y.values, type = "GEV", method = "GMLE")
"""
Explanation: What is the structure of the res variable we just created, which holds the fit results?
End of explanation
"""
print(type(res))
print(res.r_repr)
"""
Explanation: As the output above indicates, res is now a vector made up of several elements. Vectors can have a name for all or some of their elements. To access those names we can do:
End of explanation
"""
res.names
"""
Explanation: According to the previous output, there is a name results, and that is where the fit values, the estimators, are stored.
To access it we can proceed in several ways. With plain Python we would need to know the index and use normal indexing (__getitem__()). There is an alternative: the rx method lets us access the element directly by name:
End of explanation
"""
results = res.rx('results')
print(results.r_repr)
"""
Explanation: It looks like we have a single element:
End of explanation
"""
results = results[0]
results.r_repr
"""
Explanation: We now see that results has an element named par where the estimator values of the GEV fit obtained with GMLE are stored. Let's finally extract the estimator values:
End of explanation
"""
location, scale, shape = results.rx('par')[0][:]
print(location, scale, shape)
"""
Explanation: Magic function for R (formerly rmagic)
We use the old rmagic magic function, which is now activated in the notebook as follows:
End of explanation
"""
%load_ext rpy2.ipython
"""
Explanation: Let's see how the R magic function works:
End of explanation
"""
help(rpy2.ipython.rmagic.RMagics.R)
"""
Explanation: Sometimes it will be simpler to use the magic function to interact with R. Let's look at an example where we pass to R the value obtained earlier from the fevd function of R's extRemes package, and run some code directly in R without using ro.r.
End of explanation
"""
%R -i res plot.fevd(res)
"""
Explanation: In the previous code cell I passed the variable res, obtained earlier, as an input parameter (-i res) so it is available from R, and I executed pure R code (plot.fevd(res)). If I want to do the same with rpy2 I can do the following:
<p class="alert alert-info">CAREFUL, the next code cell may reinitialize the notebook and break the session. If you have made changes to the notebook, save them before running the cell, just in case...</p>
End of explanation
"""
ro.globalenv['res'] = res
ro.r("plot.fevd(res)")
"""
Explanation: The cell above blocks the notebook and 'breaks' my session (on Windows, at least) because the graphics window opens externally... So a good option for working interactively with Python and R together, without anything 'breaking', is to use both rpy2 and its extension for the Jupyter notebook (we will gradually stop calling it IPython).
Using Python and R, combining rpy2 and the magic function
Let's combine the two ways of working with rpy2 in the following example:
End of explanation
"""
metodos = ["MLE", "GMLE"]
tipos = ["GEV", "Gumbel"]
"""
Explanation: What we are going to do is compute the fit parameters using both the GEV distribution and the Gumbel distribution, which is a special case of the GEV. We compute the fit with both MLE and GMLE. Besides showing the resulting estimator values, we will show the plot of each fit together with some goodness-of-fit checks. We use Python for all the loop machinery, rpy2 to obtain the estimators, and the rpy2 magic function to display the result plots.
End of explanation
"""
for t in tipos:
    for m in metodos:
        print('fit type: ', t)
        print('fit method: ', m)
        res = fevd(max_y.values, method = m, type = t)
        if m == "Bayesian":
            print(res.rx('results')[0][-1][0:-2])
        elif m == "Lmoments":
            print(res.rx('results')[0])
        else:
            print(res.rx('results')[0].rx('par')[0][:])
        %R -i res plot.fevd(res)
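Once `location, scale, shape` have been extracted, a typical next step in an extreme value analysis is a return level. The sketch below uses the standard GEV return-level formula, with the shape parameter in the same xi convention as extRemes; `gev_return_level` is a hypothetical helper written for illustration, not an extRemes call.

```python
import math

def gev_return_level(mu, sigma, xi, T):
    # Return level z_T exceeded on average once every T years for a
    # GEV(mu, sigma, xi) fit; xi == 0 is the Gumbel limit.
    y = -math.log(1.0 - 1.0 / T)
    if abs(xi) < 1e-12:
        return mu - sigma * math.log(y)       # Gumbel case
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

print(gev_return_level(0.0, 1.0, 0.0, 100))  # ≈ 4.6002 for a standard Gumbel
```

Feeding in the `location`, `scale` and `shape` values fitted above would give the corresponding wind speed return levels.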
dwhswenson/openpathsampling
examples/alanine_dipeptide_tps/AD_tps_2b_run_fixed.ipynb
mit
from __future__ import print_function import openpathsampling as paths """ Explanation: Running a fixed-length TPS simulation This is file runs the main calculation for the fixed length TPS simulation. It requires the file ad_fixed_tps_traj.nc, which is written in the notebook AD_tps_1b_trajectory.ipynb. In this file, you will learn: how to set up and run a fixed length TPS simulation NB: This is a long calculation. In practice, it would be best to export the Python from this notebook, remove the live_visualizer, and run non-interactively on a computing node. End of explanation """ old_storage = paths.Storage("ad_fixed_tps_traj.nc", "r") engine = old_storage.engines['300K'] network = old_storage.networks['fixed_tps_network'] traj = old_storage.trajectories[0] """ Explanation: Load engine, trajectory, and states from file End of explanation """ scheme = paths.OneWayShootingMoveScheme(network, selector=paths.UniformSelector(), engine=engine) initial_conditions = scheme.initial_conditions_from_trajectories(traj) storage = paths.Storage("ad_fixed_tps.nc", "w") storage.save(initial_conditions); # save these to give storage a template sampler = paths.PathSampling(storage=storage, move_scheme=scheme, sample_set=initial_conditions) sampler.run(10000) old_storage.close() storage.close() """ Explanation: TPS The only difference between this and the flexible path length example in AD_tps_2a_run_flex.ipynb is that we used a FixedLengthTPSNetwork. We selected the length=400 (8 ps) as a maximum length based on the results from a flexible path length run. End of explanation """
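The `UniformSelector` handed to `OneWayShootingMoveScheme` picks the shooting frame uniformly from the 400-frame fixed-length trajectory. Conceptually it amounts to no more than the sketch below (an illustration, not the OpenPathSampling implementation):

```python
import random

def pick_shooting_frame(n_frames, rng):
    # Uniform shooting-point selection: every frame of the current
    # trajectory is equally likely to seed the new trial path.
    return rng.randrange(n_frames)

rng = random.Random(42)
frames = [pick_shooting_frame(400, rng) for _ in range(1000)]
print(min(frames) >= 0 and max(frames) < 400)  # all picks are valid frame indices
```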
hunterherrin/phys202-2015-work
assignments/assignment04/MatplotlibExercises.ipynb
mit
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Visualization 1: Matplotlib Basics Exercises
End of explanation
"""
z=np.random.randn(2,10)
x=z[0,:]
x
y=z[1,:]
y
plt.scatter(x, y, s=60)
plt.xlabel('x')
plt.ylabel('y')
plt.title('y vs x')
plt.tight_layout()
plt.grid(True)
plt.yticks([-4,-2,0,2,4], [-4,-2,0,2,4])
plt.ylim(-5, 5)
"""
Explanation: Scatter plots
Learn how to use Matplotlib's plt.scatter function to make a 2d scatter plot.
Generate random data using np.random.randn.
Style the markers (color, size, shape, alpha) appropriately.
Include an x and y label and title.
End of explanation
"""
f=np.random.randn(10)
plt.hist(f, bins=3, color='r')
plt.xlabel('x')
plt.ylabel('y')
plt.tick_params(length=0)
plt.title('y vs x')
"""
Explanation: Histogram
Learn how to use Matplotlib's plt.hist function to make a 1d histogram.
Generate random data using np.random.randn.
Figure out how to set the number of histogram bins and other style options.
Include an x and y label and title.
End of explanation
"""
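A quick rule of thumb for choosing the `bins` argument explored above is the square-root rule; `sqrt_bins` is just an illustrative helper, not a Matplotlib function.

```python
import math

def sqrt_bins(n_samples):
    # Square-root rule of thumb: use roughly sqrt(N) histogram bins.
    return max(1, round(math.sqrt(n_samples)))

print(sqrt_bins(10))    # → 3, matching the bins=3 used above for 10 samples
print(sqrt_bins(1000))  # → 32
```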
MoonRaker/pvlib-python
docs/tutorials/notebooks/forecast.ipynb
bsd-3-clause
%matplotlib inline import matplotlib.pyplot as plt try: import seaborn as sns sns.set(rc={"figure.figsize": (12, 6)}) except ImportError: print('We suggest you install seaborn using conda or pip and rerun this cell') # built in python modules from datetime import datetime, timedelta import os # python add-ons import numpy as np import pandas as pd try: import netCDF4 from netCDF4 import num2date except ImportError: print('We suggest you install netCDF4 using conda rerun this cell') # for accessing UNIDATA THREDD servers from siphon.catalog import TDSCatalog from siphon.ncss import NCSS from pvlib.forecast import GFS,HRRR_ESRL,NAM,NDFD,HRRR,RAP # Choose a location and time. # Tucson, AZ latitude = 32.2 longitude = -110.9 tz = 'US/Arizona' start = datetime.now() # today's date end = start + timedelta(days=7) # 7 days from today timerange = pd.date_range(start, end, tz=tz) """ Explanation: Forecast Tutorial This tutorial will walk through forecast data from Unidata forecast model data using the forecast.py module within pvlib. Table of contents: 1. Setup 2. Intialize and Test Each Forecast Model This tutorial has been tested against the following package versions: * Python 3.4.3 * IPython 4.0.1 * pandas 0.17.1 * matplotlib 1.5.0 * netcdf4 1.2.1 * siphon 0.3.2 It should work with other Python and Pandas versions. It requires pvlib >= 0.2.0 and IPython >= 3.0. 
Authors:
* Derek Groenendyk (@moonraker), University of Arizona, November 2015
* Will Holmgren (@wholmgren), University of Arizona, November 2015
Setup
End of explanation
"""
# GFS model, defaults to 0.5 degree resolution
fm = GFS()

# retrieve data
data = fm.get_query_data(latitude, longitude , timerange)

data['temperature'].plot()
plt.ylabel(fm.var_stdnames['temperature'] + ' (%s)' % fm.var_units['temperature'])

cloud_vars = ['total_clouds','low_clouds','mid_clouds','high_clouds']

for varname in cloud_vars:
    data[varname].plot()
plt.ylabel('Cloud cover' + ' %')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('GFS 0.5 deg')
plt.legend(bbox_to_anchor=(1.18,1.0))

total_cloud_cover = data['total_clouds']
total_cloud_cover.plot(color='r', linewidth=2)
plt.ylabel('Total cloud cover' + ' (%s)' % fm.var_units['total_clouds'])
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('GFS 0.5 deg')
"""
Explanation: Instantiate GFS forecast model
End of explanation
"""
# GFS model at 0.25 degree resolution
fm = GFS(res='quarter')

# retrieve data
data = fm.get_query_data(latitude, longitude , timerange)

for varname in cloud_vars:
    data[varname].plot(ls='-', linewidth=2)
plt.ylabel('Cloud cover' + ' %')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('GFS 0.25 deg')
plt.legend(bbox_to_anchor=(1.18,1.0))
"""
Explanation: Instantiate GFS forecast model
End of explanation
"""
fm = NAM()

# retrieve data
data = fm.get_query_data(latitude, longitude, timerange)

for varname in cloud_vars:
    data[varname].plot(ls='-', linewidth=2)
plt.ylabel('Cloud cover' + ' %')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
plt.title('NAM')
plt.legend(bbox_to_anchor=(1.18,1.0))

data['ghi'].plot(linewidth=2, ls='-')
plt.ylabel('GHI W/m**2')
plt.xlabel('Forecast Time ('+str(data.index.tz)+')')
"""
Explanation: Instantiate NAM forecast model
End of explanation
"""
fm = NDFD()

# retrieve data
data = fm.get_query_data(latitude, longitude, timerange)

total_cloud_cover
= data['total_clouds'] temp = data['temperature'] wind = data['wind_speed'] total_cloud_cover.plot(color='r', linewidth=2) plt.ylabel('Total cloud cover' + ' (%s)' % fm.var_units['total_clouds']) plt.xlabel('Forecast Time ('+str(data.index.tz)+')') plt.title('NDFD') plt.ylim(0,100) temp.plot(color='r', linewidth=2) plt.ylabel('Temperature' + ' (%s)' % fm.var_units['temperature']) plt.xlabel('Forecast Time ('+str(data.index.tz)+')') wind.plot(color='r', linewidth=2) plt.ylabel('Wind Speed' + ' (%s)' % fm.var_units['wind_speed']) plt.xlabel('Forecast Time ('+str(data.index.tz)+')') """ Explanation: Instantiate NDFD forecast model End of explanation """ fm = RAP() # retrieve data data = fm.get_query_data(latitude, longitude, timerange) cloud_vars = ['total_clouds','high_clouds','mid_clouds','low_clouds'] for varname in cloud_vars: data[varname].plot(ls='-', linewidth=2) plt.ylabel('Cloud cover' + ' %') plt.xlabel('Forecast Time ('+str(data.index.tz)+')') plt.title('RAP') plt.legend(bbox_to_anchor=(1.18,1.0)) """ Explanation: Instantiate RAP forecast model End of explanation """ fm = HRRR() # retrieve data data = fm.get_query_data(latitude, longitude, timerange) cloud_vars = ['total_clouds','high_clouds','mid_clouds','low_clouds'] for varname in cloud_vars: data[varname].plot(ls='-', linewidth=2) plt.ylabel('Cloud cover' + ' %') plt.xlabel('Forecast Time ('+str(data.index.tz)+')') plt.title('HRRR') plt.legend(bbox_to_anchor=(1.18,1.0)) """ Explanation: Instantiate HRRR forecast model End of explanation """ fm = HRRR_ESRL() # retrieve data data = fm.get_query_data(latitude, longitude, timerange) cloud_vars = ['total_clouds','high_clouds','mid_clouds','low_clouds'] for varname in cloud_vars: data[varname].plot(ls='-', linewidth=2) plt.ylabel('Cloud cover' + ' %') plt.xlabel('Forecast Time ('+str(data.index.tz)+')') plt.title('HRRR_ESRL') plt.legend(bbox_to_anchor=(1.18,1.0)) data['ghi'].plot(linewidth=2, ls='-') plt.ylabel('GHI W/m**2') plt.xlabel('Forecast Time 
('+str(data.index.tz)+')')
"""
Explanation: Instantiate HRRR ESRL forecast model
End of explanation
"""
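The cloud-cover series plotted above are often the input to a simple irradiance estimate. One classic empirical mapping is the Kasten-Czeplak relation, GHI = GHI_clear * (1 - 0.75 * N**3.4) with N the cloud fraction; the helper below is only a sketch of that formula, not one of pvlib's own (more sophisticated) cloud-cover-to-irradiance models.

```python
def cloud_cover_to_ghi_factor(cloud_fraction):
    # Kasten-Czeplak style attenuation: fraction of clear-sky GHI that
    # survives a given total cloud fraction in [0, 1].
    return 1.0 - 0.75 * cloud_fraction ** 3.4

print(cloud_cover_to_ghi_factor(0.0))  # → 1.0  (clear sky)
print(cloud_cover_to_ghi_factor(1.0))  # → 0.25 (fully overcast)
```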
NervanaSystems/coach
tutorials/3. Implementing a Hierarchical RL Graph.ipynb
apache-2.0
import os import sys module_path = os.path.abspath(os.path.join('..')) if module_path not in sys.path: sys.path.append(module_path) sys.path.append(module_path + '/rl_coach') from typing import Union import numpy as np from rl_coach.agents.ddpg_agent import DDPGAgent, DDPGAgentParameters, DDPGAlgorithmParameters from rl_coach.spaces import SpacesDefinition from rl_coach.core_types import RunPhase """ Explanation: In this tutorial we'll demonstrate Coach's hierarchical RL support, by building a new agent that implements the Hierarchical Actor Critic (HAC) algorithm (https://arxiv.org/pdf/1712.00948.pdf), and a preset that runs the agent on Mujoco's pendulum challenge. The Agent First, some imports. Note that HAC is based on DDPG, hence we will be importing the relevant classes. End of explanation """ class HACDDPGAlgorithmParameters(DDPGAlgorithmParameters): def __init__(self): super().__init__() self.sub_goal_testing_rate = 0.5 self.time_limit = 40 class HACDDPGAgentParameters(DDPGAgentParameters): def __init__(self): super().__init__() self.algorithm = HACDDPGAlgorithmParameters() """ Explanation: Now let's define the HAC algorithm and agent parameters. See tutorial 1 for more details on the content of each of these classes.
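As a dependency-free sketch of the pattern these parameter classes follow, the agent-parameters class installs the algorithm-parameters subclass so the agent can read the new HAC fields (class names below are illustrative stand-ins, not rl_coach imports):

```python
# Illustrative stand-ins for the Coach parameter classes; nothing here
# imports rl_coach, and the default values are assumptions for the sketch.
class AlgorithmParameters:
    def __init__(self):
        self.discount = 0.99  # an assumed base default

class HACAlgorithmParameters(AlgorithmParameters):
    def __init__(self):
        super().__init__()
        self.sub_goal_testing_rate = 0.5  # fields the HAC agent will read
        self.time_limit = 40

class HACAgentParameters:
    def __init__(self):
        # install the subclass, so algorithm.sub_goal_testing_rate exists
        self.algorithm = HACAlgorithmParameters()
```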
End of explanation """ class HACDDPGAgent(DDPGAgent): def __init__(self, agent_parameters, parent: Union['LevelManager', 'CompositeAgent']=None): super().__init__(agent_parameters, parent) self.sub_goal_testing_rate = self.ap.algorithm.sub_goal_testing_rate self.graph_manager = None def choose_action(self, curr_state): # top level decides, for each of his generated sub-goals, for all the layers beneath him if this is a sub-goal # testing phase graph_manager = self.parent_level_manager.parent_graph_manager if self.ap.is_a_highest_level_agent: graph_manager.should_test_current_sub_goal = np.random.rand() < self.sub_goal_testing_rate if self.phase == RunPhase.TRAIN: if graph_manager.should_test_current_sub_goal: self.exploration_policy.change_phase(RunPhase.TEST) else: self.exploration_policy.change_phase(self.phase) action_info = super().choose_action(curr_state) return action_info def update_transition_before_adding_to_replay_buffer(self, transition): graph_manager = self.parent_level_manager.parent_graph_manager # deal with goals given from a higher level agent if not self.ap.is_a_highest_level_agent: transition.state['desired_goal'] = self.current_hrl_goal transition.next_state['desired_goal'] = self.current_hrl_goal self.distance_from_goal.add_sample(self.spaces.goal.distance_from_goal( self.current_hrl_goal, transition.next_state)) goal_reward, sub_goal_reached = self.spaces.goal.get_reward_for_goal_and_state( self.current_hrl_goal, transition.next_state) transition.reward = goal_reward transition.game_over = transition.game_over or sub_goal_reached # each level tests its own generated sub goals if not self.ap.is_a_lowest_level_agent and graph_manager.should_test_current_sub_goal: _, sub_goal_reached = self.spaces.goal.get_reward_for_goal_and_state( transition.action, transition.next_state) sub_goal_is_missed = not sub_goal_reached if sub_goal_is_missed: transition.reward = -self.ap.algorithm.time_limit return transition def set_environment_parameters(self, 
spaces: SpacesDefinition): super().set_environment_parameters(spaces) if self.ap.is_a_highest_level_agent: # the rest of the levels already have an in_action_space set to be of type GoalsSpace, thus they will have # their GoalsSpace set to the in_action_space in agent.set_environment_parameters() self.spaces.goal = self.spaces.action self.spaces.goal.set_target_space(self.spaces.state[self.spaces.goal.goal_name]) if not self.ap.is_a_highest_level_agent: self.spaces.reward.reward_success_threshold = self.spaces.goal.reward_type.goal_reaching_reward """ Explanation: Now we'll define the agent itself - HACDDPGAgent - which subclasses the DDPG agent class. The main difference between the DDPG agent and the HACDDPGAgent is the subgoal a higher level agent defines to a lower level agent, hence the overrides of the DDPG Agent functions. End of explanation """ from rl_coach.architectures.tensorflow_components.layers import Dense from rl_coach.base_parameters import VisualizationParameters, EmbeddingMergerType, EmbedderScheme from rl_coach.architectures.embedder_parameters import InputEmbedderParameters from rl_coach.memories.episodic.episodic_hindsight_experience_replay import HindsightGoalSelectionMethod, \ EpisodicHindsightExperienceReplayParameters from rl_coach.memories.episodic.episodic_hrl_hindsight_experience_replay import \ EpisodicHRLHindsightExperienceReplayParameters from rl_coach.memories.memory import MemoryGranularity from rl_coach.spaces import GoalsSpace, ReachingGoal from rl_coach.exploration_policies.ou_process import OUProcessParameters from rl_coach.core_types import EnvironmentEpisodes, EnvironmentSteps, RunPhase, TrainingSteps time_limit = 1000 polar_coordinates = False distance_from_goal_threshold = np.array([0.075, 0.075, 0.75]) goals_space = GoalsSpace('achieved_goal', ReachingGoal(default_reward=-1, goal_reaching_reward=0, distance_from_goal_threshold=distance_from_goal_threshold), lambda goal, state: np.abs(goal - state)) # raw L1 distance 
top_agent_params = HACDDPGAgentParameters() # memory - Hindsight Experience Replay top_agent_params.memory = EpisodicHRLHindsightExperienceReplayParameters() top_agent_params.memory.max_size = (MemoryGranularity.Transitions, 10000000) top_agent_params.memory.hindsight_transitions_per_regular_transition = 3 top_agent_params.memory.hindsight_goal_selection_method = HindsightGoalSelectionMethod.Future top_agent_params.memory.goals_space = goals_space top_agent_params.algorithm.num_consecutive_playing_steps = EnvironmentEpisodes(32) top_agent_params.algorithm.num_consecutive_training_steps = 40 top_agent_params.algorithm.num_steps_between_copying_online_weights_to_target = TrainingSteps(40) # exploration - OU process top_agent_params.exploration = OUProcessParameters() top_agent_params.exploration.theta = 0.1 # actor - note that the default middleware is overriden with 3 dense layers top_actor = top_agent_params.network_wrappers['actor'] top_actor.input_embedders_parameters = {'observation': InputEmbedderParameters(scheme=EmbedderScheme.Empty), 'desired_goal': InputEmbedderParameters(scheme=EmbedderScheme.Empty)} top_actor.middleware_parameters.scheme = [Dense([64])] * 3 top_actor.learning_rate = 0.001 top_actor.batch_size = 4096 # critic - note that the default middleware is overriden with 3 dense layers top_critic = top_agent_params.network_wrappers['critic'] top_critic.input_embedders_parameters = {'observation': InputEmbedderParameters(scheme=EmbedderScheme.Empty), 'action': InputEmbedderParameters(scheme=EmbedderScheme.Empty), 'desired_goal': InputEmbedderParameters(scheme=EmbedderScheme.Empty)} top_critic.embedding_merger_type = EmbeddingMergerType.Concat top_critic.middleware_parameters.scheme = [Dense([64])] * 3 top_critic.learning_rate = 0.001 top_critic.batch_size = 4096 """ Explanation: The Preset Defining the top agent in the hierarchy. Note that the agent's base parameters are the same as the DDPG agent's parameters. 
We also define here the memory, exploration policy and network topology. End of explanation """ from rl_coach.schedules import ConstantSchedule from rl_coach.exploration_policies.e_greedy import EGreedyParameters bottom_agent_params = HACDDPGAgentParameters() bottom_agent_params.algorithm.in_action_space = goals_space bottom_agent_params.memory = EpisodicHindsightExperienceReplayParameters() bottom_agent_params.memory.max_size = (MemoryGranularity.Transitions, 12000000) bottom_agent_params.memory.hindsight_transitions_per_regular_transition = 4 bottom_agent_params.memory.hindsight_goal_selection_method = HindsightGoalSelectionMethod.Future bottom_agent_params.memory.goals_space = goals_space bottom_agent_params.algorithm.num_consecutive_playing_steps = EnvironmentEpisodes(16 * 25) # 25 episodes is one true env episode bottom_agent_params.algorithm.num_consecutive_training_steps = 40 bottom_agent_params.algorithm.num_steps_between_copying_online_weights_to_target = TrainingSteps(40) bottom_agent_params.exploration = EGreedyParameters() bottom_agent_params.exploration.epsilon_schedule = ConstantSchedule(0.2) bottom_agent_params.exploration.evaluation_epsilon = 0 bottom_agent_params.exploration.continuous_exploration_policy_parameters = OUProcessParameters() bottom_agent_params.exploration.continuous_exploration_policy_parameters.theta = 0.1 # actor bottom_actor = bottom_agent_params.network_wrappers['actor'] bottom_actor.input_embedders_parameters = {'observation': InputEmbedderParameters(scheme=EmbedderScheme.Empty), 'desired_goal': InputEmbedderParameters(scheme=EmbedderScheme.Empty)} bottom_actor.middleware_parameters.scheme = [Dense([64])] * 3 bottom_actor.learning_rate = 0.001 bottom_actor.batch_size = 4096 # critic bottom_critic = bottom_agent_params.network_wrappers['critic'] bottom_critic.input_embedders_parameters = {'observation': InputEmbedderParameters(scheme=EmbedderScheme.Empty), 'action': InputEmbedderParameters(scheme=EmbedderScheme.Empty), 
'desired_goal': InputEmbedderParameters(scheme=EmbedderScheme.Empty)} bottom_critic.embedding_merger_type = EmbeddingMergerType.Concat bottom_critic.middleware_parameters.scheme = [Dense([64])] * 3 bottom_critic.learning_rate = 0.001 bottom_critic.batch_size = 4096 """ Explanation: The bottom agent End of explanation """ agents_params = [top_agent_params, bottom_agent_params] """ Explanation: Now we define the parameters of all the agents in the hierarchy from top to bottom End of explanation """ from rl_coach.environments.gym_environment import Mujoco from rl_coach.environments.environment import SelectedPhaseOnlyDumpMethod from rl_coach.graph_managers.hrl_graph_manager import HRLGraphManager from rl_coach.graph_managers.graph_manager import ScheduleParameters env_params = Mujoco() env_params.level = "rl_coach.environments.mujoco.pendulum_with_goals:PendulumWithGoals" env_params.additional_simulator_parameters = {"time_limit": time_limit, "random_goals_instead_of_standing_goal": False, "polar_coordinates": polar_coordinates, "goal_reaching_thresholds": distance_from_goal_threshold} env_params.frame_skip = 10 env_params.custom_reward_threshold = -time_limit + 1 vis_params = VisualizationParameters() vis_params.video_dump_methods = [SelectedPhaseOnlyDumpMethod(RunPhase.TEST)] vis_params.dump_mp4 = False vis_params.native_rendering = False schedule_params = ScheduleParameters() schedule_params.improve_steps = EnvironmentEpisodes(40 * 4 * 64) # 40 epochs schedule_params.steps_between_evaluation_periods = EnvironmentEpisodes(4 * 64) # 4 small batches of 64 episodes schedule_params.evaluation_steps = EnvironmentEpisodes(64) schedule_params.heatup_steps = EnvironmentSteps(0) """ Explanation: Define the environment, visualization and schedule parameters. The schedule parameters refer to the top level agent. 
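In rough terms, a schedule of this shape drives a loop that alternates training episodes with periodic evaluation. A simplified, dependency-free sketch of how such a schedule is consumed (this is not Coach's actual control loop):

```python
# Simplified interpretation of improve_steps / steps_between_evaluation_periods /
# evaluation_steps, all counted in episodes.
def run_schedule(improve_episodes, eval_period, eval_episodes, on_train, on_eval):
    trained = 0
    while trained < improve_episodes:
        for _ in range(min(eval_period, improve_episodes - trained)):
            on_train()
            trained += 1
        for _ in range(eval_episodes):
            on_eval()
    return trained
```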
End of explanation """ graph_manager = HRLGraphManager(agents_params=agents_params, env_params=env_params, schedule_params=schedule_params, vis_params=vis_params, consecutive_steps_to_run_each_level=EnvironmentSteps(40)) graph_manager.visualization_parameters.render = True """ Explanation: Lastly, we create a HRLGraphManager that will execute the hierarchical agent we defined according to the parameters. Note that the bottom level agent will run 40 steps on each single step of the top level agent. End of explanation """ from rl_coach.base_parameters import TaskParameters, Frameworks log_path = '../experiments/pendulum_hac' if not os.path.exists(log_path): os.makedirs(log_path) task_parameters = TaskParameters(framework_type=Frameworks.tensorflow, evaluate_only=False, experiment_path=log_path) task_parameters.__dict__['checkpoint_save_secs'] = None task_parameters.__dict__['verbosity'] = 'low' graph_manager.create_graph(task_parameters) graph_manager.improve() """ Explanation: Running the Preset End of explanation """
paivaismael/datasets
GHCND.ipynb
mit
data2 = data[(data.TMIN>-9999)] data3 = data2[(data2.DATE>=20150601) & (data2.DATE<=20150630) & (data2.PRCP>0)] """ Explanation: In order to select the stations, we can filter the following subset from the initial dataset: End of explanation """ stations = data2[(data2.STATION=='GHCND:USC00047326') | (data2.STATION=='GHCND:USC00047902') | (data2.STATION=='GHCND:USC00044881')] st = stations.groupby(['STATION']) temp = st.agg({'TMIN' : [np.min], 'TMAX' : [np.max]}) temp.plot(kind='bar') """ Explanation: We can then print data3 and select the stations from the resulting table. End of explanation """ june = stations[(stations.DATE>=20150601) & (stations.DATE<=20150630)] rain = june.groupby(['STATION']) rain.plot('DATE','PRCP') """ Explanation: Analysing the plot above, we can see that the 3 cities experienced a large variation in temperature over the observation period. The variation was most pronounced in Lee Vining. End of explanation """
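The groupby/agg step above, rewritten as a self-contained example on made-up readings (using modern pandas named aggregation instead of the dict-of-lists form):

```python
import pandas as pd

# Synthetic stand-in for the GHCND extract: per-station min/max temperatures.
demo = pd.DataFrame({
    'STATION': ['A', 'A', 'B', 'B'],
    'TMIN': [40, 38, 55, 52],
    'TMAX': [70, 72, 90, 88],
})
extremes = demo.groupby('STATION').agg(TMIN=('TMIN', 'min'), TMAX=('TMAX', 'max'))
```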
BiG-CZ/notebook_data_demo
notebooks/2017-07-04-WOFpy_ulmo.ipynb
bsd-3-clause
%matplotlib inline import pytz import matplotlib.pyplot as plt import pandas as pd import ulmo from ulmo.util import convert_datetime """ Explanation: Testing WOFpy LBR sample DB Emilio Mayorga. Run on my conda environment uwapl_em_mc_1aui. 3/5,4/2017. Test Don's Amazon cloud deployment End of explanation """ print(ulmo.cuahsi.wof.__doc__) print([obj for obj in dir(ulmo.cuahsi.wof) if not obj.startswith('__')]) # WaterML/WOF WSDL endpoints wsdlurl = 'http://54.186.36.247:8080/mysqlodm2timeseries/soap/cuahsi_1_0/.wsdl' # WOF 1.0 # 'network code' networkcd = 'mysqlodm2timeseries' """ Explanation: CUAHSI WaterOneFlow: ulmo, SOAP endpoint, and other general info End of explanation """ sitecd = 'USU-LBR-Mendon' siteinfo = ulmo.cuahsi.wof.get_site_info(wsdlurl, networkcd+':'+sitecd) type(siteinfo), siteinfo.keys() siteinfo['network'], siteinfo['code'], siteinfo['name'] print(siteinfo['location']) type(siteinfo['series']), len(siteinfo['series']), siteinfo['series'].keys() siteinfo['series']['mysqlodm2timeseries:USU33'].keys() siteinfo['series']['mysqlodm2timeseries:USU33'] """ Explanation: Get site information one of two sites in the LBR sample DB End of explanation """ def site_series_values_to_df(series_values, variable_name): # Create a clean timeseries list of (dt, val) tuples tsdt_tuplst = [ (convert_datetime(valdict['datetime']).replace(tzinfo=pytz.utc), float(valdict['value'])) for valdict in series_values['values'] ] dt, val = zip(*tsdt_tuplst) ts_df = pd.DataFrame({'time': dt, variable_name: val}) ts_df.set_index('time', inplace=True) ts_df.sort_index(ascending=True, inplace=True) return ts_df print( ulmo.cuahsi.wof.get_values.__doc__.replace('<', '').replace('>', '') ) """ Explanation: Get Values End of explanation """ variablecd = 'USU33' site_values = ulmo.cuahsi.wof.get_values(wsdlurl, networkcd+':'+sitecd, networkcd+':'+variablecd) site_values.keys() sitevariable = site_values['variable'] sitevariable """ Explanation: 'odm2timeseries:USU33' is 'Oxygen, 
dissolved percent of saturation' End of explanation """ type(site_values['values']), site_values['values'][0].keys() """ Explanation: site_values['values'] is a list of individual time series values (timestamp and data value) End of explanation """ site_values['values'][0]['datetime'], site_values['values'][-1]['datetime'] """ Explanation: Start and end timestamps (local time with UTC offset; ISO 8601 format) End of explanation """ variable_name = '%s (%s)' % (sitevariable['name'], sitevariable['value_type']) variable_name dtstr_last = site_values['values'][-1]['datetime'] convert_datetime(dtstr_last).replace(tzinfo=pytz.utc) """ Explanation: Set a nice, user-friendly variable name string. End of explanation """ ts_df = site_series_values_to_df(site_values, variable_name) ts_df.tail() type(ts_df), ts_df.columns, ts_df.index.dtype, ts_df.index.min(), ts_df.index.max() fig, ax = plt.subplots(figsize=(10, 4)) varlabel = ts_df.columns[0] ts_df[varlabel].plot(style='-', ax=ax) ax.set_ylabel(varlabel + ', ' + sitevariable['units']['abbreviation']); """ Explanation: Hmm, this failed: `convert_datetime(dtstr_last).astimezone(pytz.utc)` raised `ValueError: astimezone() cannot be applied to a naive datetime` End of explanation """
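The naive-datetime pitfall noted above can be reproduced without a live WaterOneFlow service. A self-contained sketch of the same conversion, using the stdlib timezone instead of pytz (the sample values below are made up):

```python
import pandas as pd
from datetime import datetime, timezone

values = [
    {'datetime': '2016-07-01T12:00:00', 'value': '8.3'},
    {'datetime': '2016-07-01T11:30:00', 'value': '8.1'},
]

def values_to_df(values, name):
    # Naive timestamps get a timezone attached with replace(), the same
    # move site_series_values_to_df makes rather than calling astimezone().
    rows = [(datetime.fromisoformat(v['datetime']).replace(tzinfo=timezone.utc),
             float(v['value'])) for v in values]
    return pd.DataFrame(rows, columns=['time', name]).set_index('time').sort_index()

ts = values_to_df(values, 'DO saturation (%)')
```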
rbiswas4/simlib
example/Demo_TilingClass.ipynb
mit
import opsimsummary as oss from opsimsummary import Tiling, HealpixTiles # import snsims import healpy as hp %matplotlib inline import matplotlib.pyplot as plt from mpl_toolkits.basemap import Basemap import matplotlib.pyplot as plt """ Explanation: Note For this to work, you will need the lsst.sims stack to be installed. - opsimsummary uses healpy which is installed with the sims stack, but also available from conda End of explanation """ class NoTile(Tiling): pass noTile = NoTile() """ Explanation: This section pertains to how to write a new Tiling class ``` noTile = snsims.Tiling() TypeError Traceback (most recent call last) <ipython-input-9-5f6f8a94508e> in <module>() ----> 1 noTile = snsims.Tiling() TypeError: Can't instantiate abstract class Tiling with abstract methods init, area, pointingSequenceForTile, tileIDSequence, tileIDsForSN ``` The class snsims.Tiling is an abstract Base class. Therefore, this cannot be instantiated. It must be subclassed, and the set of methods outlined have to be implemented for this to work. End of explanation """ class MyTile(Tiling): def __init__(self): pass @property def tileIDSequence(self): return np.arange(100) def tileIDsForSN(self, ra, dec): x = ra + dec y = np.remainder(x, 100.) return np.floor(y) def area(self, tileID): return 1. def pointingSequenceForTile(self, tileID, pointings): return None def positions(self): pass myTile = MyTile() """ Explanation: ``` """noTile = NoTile() TypeError Traceback (most recent call last) <ipython-input-4-8ddedac7fb97> in <module>() ----> 1 noTile = NoTile() TypeError: Can't instantiate abstract class NoTile with abstract methods init, area, pointingSequenceForTile, positions, tileIDSequence, tileIDsForSN """ ``` The above fails because the methods are not implemented. Below is a stupid (ie. 
not useful) but minimalist class that would work: End of explanation """ issubclass(HealpixTiles, Tiling) help(HealpixTiles) datadir = os.path.join(oss.__path__[0], 'example_data') opsimdb = os.path.join(datadir, 'enigma_1189_micro.db') healpixelizedDB = os.path.join(datadir, 'healpixels_micro.db') #opsimdb = '/Users/rbiswas/data/LSST/OpSimData/minion_1016_sqlite.db' NSIDE = 256 hpOpSim = oss.HealPixelizedOpSim.fromOpSimDB(opsimdb, NSIDE=NSIDE) opsout = oss.OpSimOutput.fromOpSimDB(opsimdb) #x = np.ones(hp.nside2npix(256)) * hp.UNSEEN #x[1] = -1 x = np.arange(hp.nside2npix(256)) hp.mollview(x, nest=True) hptiles = HealpixTiles(healpixelizedOpSim=hpOpSim, nside=NSIDE) from opsimsummary import HPTileVis hptvis = HPTileVis(hptiles, opsout) from opsimsummary import pixelsForAng, plot_south_steradian_view pixelsForAng(54., -27.5, 4) ra, dec = hptvis.pointingCenters(0) figoverlaps = hptvis.plotTilePointings(135, projection='cyl', paddingFactors=1, query='night < 365', **dict(fill=False, color='g', alpha=0.1)) """ Explanation: Using the class HealpixTiles Currently there is only concrete tiling class that has been implemented. This is the snsims.HealpixTiles class. This shows how to use the HealpixTiles Class End of explanation """ hpchips = HealpixTiles(nside=128, preComputedMap=healpixelizedDB) hpchips.pointingSequenceForTile(0) np.unique(hpchips.pointingSequenceForTile(0)).size from opsimsummary import convertToSphericalCoordinates angs = opsout.summary[['fieldRA', 'fieldDec']] ra, dec = angs.fieldRA.values, angs.fieldDec.values convertToSphericalCoordinates?? 
theta, phi = convertToSphericalCoordinates(ra, dec, unit='radians') vecs = hp.ang2vec(theta, phi) vec = hp.pix2vec(128, 0, nest=True) opsout.summary.iloc[np.abs(np.dot(vecs, vec)).argmin()].fieldID obsHistIDsStatic = opsout.summary.query('fieldID==1859').index.values def staticPoints(tileID): return obsHistIDsStatic staticPoints(0) hpchipsStatic = HealpixTiles(nside=128, preComputedMap=staticPoints) np.unique(hpchipsStatic.pointingSequenceForTile(0)) hpvis = HPTileVis(hpchips, opsout) fig2, tc, _ = hpvis.plotTilePointings(140785, projection='cea', paddingFactors=4, drawPointings=True, **dict(fill=False, color='g', alpha=1.0)) """ Explanation: Faster evaluations with preComputedMap End of explanation """ figtile.savefig('HealpixTile_Pointings.pdf') from opsimsummary import convertToCelestialCoordinates, healpix_boundaries, pixelsForAng from matplotlib.patches import Polygon class HPTileVis(HealpixTiles): def __init__(self, hpTile, opsout): """ """ self.hpTile = hpTile self.opsout = opsout self.nside = self.hpTile.nside def tileIDfromCelestialCoordinates(self, ra, dec, opsout, units='degrees'): """ Parameters ----------- ra : dec : units: {'degrees', 'radians'} """ return pixelsForAng(lon=ra,lat=dec, unit=units ) def tileCenter(self, tileID): theta, phi = hp.pix2ang(self.nside, tileID, nest=True) ra, dec = convertToCelestialCoordinates(theta, phi, input_unit='radians', output_unit='degrees') return ra, dec def pointingSummary(self, tileID, columns=('ditheredRA', 'ditheredDec'), allPointings=None): #if allPointings is None: # allPointings = opsout.summary # if query is not None: # allPointings = allPointings.query(query) # allPointings = allPointings.index.values obsHistIDs = self.hpTile.pointingSequenceForTile(tileID=tileID, allPointings=allPointings) return self.opsout.summary.ix[obsHistIDs]#[list(columns)] def pointingCenters(self, tileID, raCol='ditheredRA', decCol='ditheredDec', query=None): summary = self.pointingSummary(tileID)#, columns=[raCol, decCol]) if 
query is not None: summary = summary.query(query) ra = summary[raCol].apply(np.degrees).values dec = summary[decCol].apply(np.degrees).values return ra, dec def plotTilePointings(self, tileID, raCol='ditheredRA', decCol='ditheredDec', radius=1.75, paddingFactors=1, query=None, ax=None, projection='cyl',**kwargs): """ Parameters ---------- """ if ax is None: fig, ax = plt.subplots() padding = np.degrees(hp.max_pixrad(self.nside)) + radius ra_tile, dec_tile = self.tileCenter(tileID) llcrnrlat = dec_tile - padding * paddingFactors urcrnrlat = dec_tile + padding * paddingFactors llcrnrlon = ra_tile - padding * paddingFactors urcrnrlon = ra_tile + padding * paddingFactors m = Basemap(llcrnrlat=llcrnrlat, llcrnrlon=llcrnrlon, urcrnrlat=urcrnrlat, urcrnrlon=urcrnrlon, projection=projection, lon_0=ra_tile, lat_0=dec_tile, ax=ax) parallels = np.linspace(llcrnrlat, urcrnrlat, 3) meridians = np.linspace(llcrnrlon, urcrnrlon, 3) m.drawparallels(parallels, labels=(1, 0, 0, 0)) #np.ones(len(parallels), dtype=bool)) m.drawmeridians(meridians, labels=(0, 1, 1, 1)) #np.ones(len(meridians), dtype=bool)) ra, dec = self.pointingCenters(tileID, raCol=raCol, decCol=decCol, query=query) lon, lat = healpix_boundaries(tileID, nside=self.nside, units='degrees', convention='celestial', step=10, nest=True) x, y = m(lon, lat) xy = list(zip(x, y)) healpixels = Polygon(xy, facecolor='w',fill=False, alpha=1., edgecolor='k', lw=2) for ra, dec in zip(ra, dec): m.tissot(ra, dec, radius, 100, **kwargs) ax.add_patch(healpixels) return fig tileID = pixelsForAng(54., -27.5, 4, unit='degrees') tileID theta, phi = hp.pix2ang(hptiles.nside, 0, nest=True) ra, dec = oss.convertToCelestialCoordinates(theta, phi) hptvis = HPTileVis(hptiles, opsout) from palettable.colorbrewer import sequential import palettable palettable hptvis.plotTilePointings(tileID, raCol='ditheredRA', decCol='ditheredDec', projection='gnom', query=None,#'night <3650', **dict(fill=False, color='g', alpha=1., lw=1.,
ls='solid')) hptvis.centers(0) opsout.summary.ix[hpTileshpOpSim.pointingSequenceForTile(0, allPointings=None)][['ditheredRA', 'ditheredDec']] phi, theta = hpTileshpOpSim.positions(1, 10000) mapvals = np.ones(hp.nside2npix(NSIDE)) * hp.UNSEEN mapvals[1] = 100 hp.ang2pix(NSIDE, np.radians(theta), np.radians(phi), nest=True) theta_c, phi_c = hp.pix2ang(4, 1, nest=True) hp.mollview(mapvals, nest=True) hp.projscatter(np.radians(theta), np.radians(phi), **dict(s=0.0002)) hp.projscatter(theta_c, phi_c, **dict(s=8., c='r')) %timeit hpTileshpOpSim.pointingSequenceForTile(33, allPointings=None) preCompMap = os.path.join(oss.__path__[0], 'example_data', 'healpixels_micro.db') import os os.path.exists(preCompMap) hpTilesMap = HealpixTiles(nside=1, preComputedMap=preCompMap) obsHistIDs = hpTilesMap.pointingSequenceForTile(10, allPointings=None) from sqlalchemy import create_engine engine = create_engine('sqlite:////' + preCompMap) import pandas as pd df = pd.read_sql_query('SELECT * FROM simlib Limit 5', engine) preCompMap %timeit hpOpSim.obsHistIdsForTile(34) hpTiles = HealpixTiles(healpixelizedOpSim=hpOpSim) hpTiles.pointingSequenceForTile(34, allPointings=None) df = opsout.summary.copy() df.fieldID.unique().size df.query('fieldID == 744')[['fieldRA', 'fieldDec', 'ditheredRA', 'ditheredDec', 'expMJD', 'filter']].head() df.query('fieldID == 744')['ditheredDec'] = df.query('fieldID == 744')['fieldDec'] def fixdec(mydf, fieldIDs): return mydf.fieldID.apply(lambda x: x in fieldIDs) fixdec(df, (744, 1427, 290)).astype(np.int) * df.fieldRA + (1.0 - fixdec(df, (744, 1427, 290)).astype(np.int)) df.query('fieldID == 744')[['fieldRA', 'fieldDec', 'ditheredRA', 'ditheredDec', 'expMJD', 'filter']].head() df = opsout.summary.copy() grouped = df.groupby('propID') grouped.groups.keys() grouped.get_group(366).fieldRA """ Explanation: Scratch End of explanation """
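Stepping back from the scratch cells, the abstract-base-class behaviour demonstrated at the start of this notebook can be reproduced without the lsst.sims stack. A dependency-free sketch (class and method names are illustrative):

```python
import abc
import numpy as np

class TilingSketch(abc.ABC):
    """Illustrative stand-in for snsims.Tiling."""
    @abc.abstractmethod
    def area(self, tileID):
        ...

    @abc.abstractmethod
    def tileIDsForSN(self, ra, dec):
        ...

class Incomplete(TilingSketch):
    pass  # no abstract methods implemented, so instantiation must fail

class Complete(TilingSketch):
    def area(self, tileID):
        return 1.0

    def tileIDsForSN(self, ra, dec):
        return np.floor(np.remainder(ra + dec, 100.0))
```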
qutip/qutip-notebooks
examples/spin-chain.ipynb
lgpl-3.0
%matplotlib inline import matplotlib.pyplot as plt import numpy as np from qutip import * """ Explanation: QuTiP example: Dynamics of a Spin Chain J.R. Johansson and P.D. Nation For more information about QuTiP see http://qutip.org End of explanation """ def integrate(N, h, Jx, Jy, Jz, psi0, tlist, gamma, solver): si = qeye(2) sx = sigmax() sy = sigmay() sz = sigmaz() sx_list = [] sy_list = [] sz_list = [] for n in range(N): op_list = [] for m in range(N): op_list.append(si) op_list[n] = sx sx_list.append(tensor(op_list)) op_list[n] = sy sy_list.append(tensor(op_list)) op_list[n] = sz sz_list.append(tensor(op_list)) # construct the hamiltonian H = 0 # energy splitting terms for n in range(N): H += - 0.5 * h[n] * sz_list[n] # interaction terms for n in range(N-1): H += - 0.5 * Jx[n] * sx_list[n] * sx_list[n+1] H += - 0.5 * Jy[n] * sy_list[n] * sy_list[n+1] H += - 0.5 * Jz[n] * sz_list[n] * sz_list[n+1] # collapse operators c_op_list = [] # spin dephasing for n in range(N): if gamma[n] > 0.0: c_op_list.append(np.sqrt(gamma[n]) * sz_list[n]) # evolve and calculate expectation values if solver == "me": result = mesolve(H, psi0, tlist, c_op_list, sz_list) elif solver == "mc": ntraj = 250 result = mcsolve(H, psi0, tlist, c_op_list, sz_list, ntraj) return result.expect # # set up the calculation # solver = "me" # use the ode solver #solver = "mc" # use the monte-carlo solver N = 10 # number of spins # array of spin energy splittings and coupling strengths. 
here we use # uniform parameters, but in general we don't have to h = 1.0 * 2 * np.pi * np.ones(N) Jz = 0.1 * 2 * np.pi * np.ones(N) Jx = 0.1 * 2 * np.pi * np.ones(N) Jy = 0.1 * 2 * np.pi * np.ones(N) # dephasing rate gamma = 0.01 * np.ones(N) # initial state, first spin in state |1>, the rest in state |0> psi_list = [] psi_list.append(basis(2,1)) for n in range(N-1): psi_list.append(basis(2,0)) psi0 = tensor(psi_list) tlist = np.linspace(0, 50, 200) sz_expt = integrate(N, h, Jx, Jy, Jz, psi0, tlist, gamma, solver) fig, ax = plt.subplots(figsize=(10,6)) for n in range(N): ax.plot(tlist, np.real(sz_expt[n]), label=r'$\langle\sigma_z^{(%d)}\rangle$'%n) ax.legend(loc=0) ax.set_xlabel(r'Time [ns]') ax.set_ylabel(r'$\langle\sigma_z\rangle$') ax.set_title(r'Dynamics of a Heisenberg spin chain'); """ Explanation: Hamiltonian: $\displaystyle H = - \frac{1}{2}\sum_n^N h_n \sigma_z(n) - \frac{1}{2} \sum_n^{N-1} [ J_x^{(n)} \sigma_x(n) \sigma_x(n+1) + J_y^{(n)} \sigma_y(n) \sigma_y(n+1) + J_z^{(n)} \sigma_z(n) \sigma_z(n+1)]$ End of explanation """ from qutip.ipynbtools import version_table version_table() """ Explanation: Software version: End of explanation """
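The operator-list construction in integrate() can be mimicked with plain numpy Kronecker products, a simplified stand-in for qutip's tensor() that shows how a single-site operator is embedded in the N-spin space:

```python
import numpy as np

sz = np.diag([1.0, -1.0])  # Pauli z in the qubit basis
id2 = np.eye(2)

def sz_on_site(n, N):
    """sigma_z acting on site n of an N-spin chain, identity elsewhere."""
    op = np.array([[1.0]])
    for m in range(N):
        op = np.kron(op, sz if m == n else id2)
    return op
```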
doc-E-brown/FacialLandmarkingReview
experiments/Sec3_FeatureExtraction/Viola-Jones.ipynb
gpl-3.0
import numpy as np from scipy.misc import imread from matplotlib import rcParams from skimage.transform import integral_image import matplotlib.pyplot as plt %matplotlib inline rcParams['figure.figsize'] = (10, 10) def integral_image(image): """Integral image / summed area table. The integral image contains the sum of all elements above and to the left of it, i.e.: .. math:: S[m, n] = \sum_{i \leq m} \sum_{j \leq n} X[i, j] Parameters ---------- image : ndarray Input image. Returns ------- S : ndarray Integral image/summed area table of same shape as input image. References ---------- .. [1] F.C. Crow, "Summed-area tables for texture mapping," ACM SIGGRAPH Computer Graphics, vol. 18, 1984, pp. 207-212. """ S = image for i in range(image.ndim): S = S.cumsum(axis=i) return S """ Explanation: Viola Jones Algorithm This jupyter notebook has been written to partner with the article A review of image based facial landmark identification techniques. This notebook is licensed under the BSD 3-Clause license. Integral Image Algorithm The scikit-image package provides an efficient implementation of the integral image computation. The following function definition is contained within scikit-image and is covered under the BSD 3-Clause license. As described within the review article; the entire integral image can be calculated in one pass using the following: $$ \begin{align} ii(x, y) &= \sum\limits_{x'\le x,y' \le y}i(x',y') \end{align} $$ where $ii(x,y)$ is the integral image value at pixel location $(x,y)$ and $i(x',y')$ is the original image value. $$ \begin{align} s(x,y) &= s(x, y - 1) + i(x,y) \ ii(x,y) &= ii(x - 1, y) + s(x,y) \ \end{align} $$ where $s(x,y)$ is the cumulative sum of pixel values of row $x$, noting that $s(x, -1)=0$ and $ii(-1, y)=0$. 
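The two recurrences can be written out explicitly and checked against the cumulative-sum implementation above; a small verification sketch:

```python
import numpy as np

def integral_image_recurrence(image):
    """Integral image via the s/ii recurrences, with s(x, -1) = ii(-1, y) = 0."""
    h, w = image.shape
    s = np.zeros((h, w))   # row-wise cumulative sums
    ii = np.zeros((h, w))  # integral image
    for x in range(h):
        for y in range(w):
            s[x, y] = (s[x, y - 1] if y > 0 else 0) + image[x, y]
            ii[x, y] = (ii[x - 1, y] if x > 0 else 0) + s[x, y]
    return ii
```

Both routes give the same table, so the vectorized cumsum version is just these recurrences applied one axis at a time.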
End of explanation """ I = np.array([[5, 2, 5], [3, 6, 3], [5, 2, 5]]) ii = integral_image(I) print("The simple integral image (ii)") print(ii) """ Explanation: The following simple example, using a 9 pixel image demonstrates the use and calculation of an integral image $ii$. $I = \begin{bmatrix} 5 & 2 & 5\ 3 & 6 & 3\ 5 & 2 & 5\ \end{bmatrix} $ End of explanation """ I = imread('../Sec2_Dataset_selection/display_image.jpg') plt.imshow(I) ii = integral_image(I) print("Displaying an extract of the image") print(I[:3, :3, :]) print("\n Displaying an extract of the integral image") print(ii[:3, :3, :]) """ Explanation: The integral Image for an actual image Compute the integral image for the display_image.jpg End of explanation """
jegibbs/phys202-2015-work
days/day08/Display.ipynb
mit
class Ball(object): pass b = Ball() b.__repr__() print(b) """ Explanation: Display of Rich Output In Python, objects can declare their textual representation using the __repr__ method. End of explanation """ class Ball(object): def __repr__(self): return 'TEST' b = Ball() print(b) """ Explanation: Overriding the __repr__ method: End of explanation """ from IPython.display import display """ Explanation: IPython expands on this idea and allows objects to declare other, rich representations including: HTML JSON PNG JPEG SVG LaTeX A single object can declare some or all of these representations; all of them are handled by IPython's display system. Basic display imports The display function is a general purpose tool for displaying different representations of objects. Think of it as print for these rich representations. End of explanation """ from IPython.display import ( display_pretty, display_html, display_jpeg, display_png, display_json, display_latex, display_svg ) """ Explanation: A few points: Calling display on an object will send all possible representations to the Notebook. These representations are stored in the Notebook document. In general the Notebook will use the richest available representation. If you want to display a particular representation, there are specific functions for that: End of explanation """ from IPython.display import Image i = Image(filename='./ipython-image.png') display(i) """ Explanation: Images To work with images (JPEG, PNG) use the Image class. End of explanation """ i print(i) """ Explanation: Returning an Image object from an expression will automatically display it: End of explanation """ Image(url='http://python.org/images/python-logo.gif') """ Explanation: An image can also be displayed from raw data or a URL. 
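For the raw-data case, the display machinery has to know what format the bytes are in. A tiny helper of our own (not part of IPython) that sniffs the format from magic bytes illustrates the idea:

```python
def sniff_image_format(data):
    """Guess an image format from its leading magic bytes; None if unknown."""
    if data.startswith(b'\x89PNG\r\n\x1a\n'):
        return 'png'
    if data.startswith(b'\xff\xd8\xff'):
        return 'jpeg'
    if data.startswith((b'GIF87a', b'GIF89a')):
        return 'gif'
    return None
```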
End of explanation """ from IPython.display import HTML s = """<table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table>""" h = HTML(s) display(h) """ Explanation: HTML Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the HTML class. End of explanation """ %%html <table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table> %%html <style> #notebook { background-color: skyblue; font-family: times new roman; } </style> """ Explanation: You can also use the %%html cell magic to accomplish the same thing. End of explanation """ from IPython.display import Javascript """ Explanation: You can remove the abvove styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected. JavaScript The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as d3.js for output. End of explanation """ js = Javascript('alert("hi")'); display(js) """ Explanation: Pass a string of JavaScript source code to the JavaScript object and then display it. 
End of explanation """ %%javascript alert("hi"); """ Explanation: The same thing can be accomplished using the %%javascript cell magic: End of explanation """ Javascript( """$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""" ) %%html <style type="text/css"> circle { fill: rgb(31, 119, 180); fill-opacity: .25; stroke: rgb(31, 119, 180); stroke-width: 1px; } .leaf circle { fill: #ff7f0e; fill-opacity: 1; } text { font: 10px sans-serif; } </style> %%javascript // element is the jQuery element we will append to var e = element.get(0); var diameter = 600, format = d3.format(",d"); var pack = d3.layout.pack() .size([diameter - 4, diameter - 4]) .value(function(d) { return d.size; }); var svg = d3.select(e).append("svg") .attr("width", diameter) .attr("height", diameter) .append("g") .attr("transform", "translate(2,2)"); d3.json("./flare.json", function(error, root) { var node = svg.datum(root).selectAll(".node") .data(pack.nodes) .enter().append("g") .attr("class", function(d) { return d.children ? "node" : "leaf node"; }) .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; }); node.append("title") .text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); }); node.append("circle") .attr("r", function(d) { return d.r; }); node.filter(function(d) { return !d.children; }).append("text") .attr("dy", ".3em") .style("text-anchor", "middle") .text(function(d) { return d.name.substring(0, d.r / 3); }); }); d3.select(self.frameElement).style("height", diameter + "px"); """ Explanation: Here is a more complicated example that loads d3.js from a CDN, uses the %%html magic to load CSS styles onto the page and then runs ones of the d3.js examples. End of explanation """ from IPython.display import Audio Audio("./scrubjay.mp3") """ Explanation: Audio IPython makes it easy to work with sounds interactively. The Audio display class allows you to create an audio control that is embedded in the Notebook. 
The interface is analogous to the interface of the Image display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers. End of explanation """ import numpy as np max_time = 3 f1 = 120.0 f2 = 124.0 rate = 8000.0 L = 3 times = np.linspace(0,L,rate*L) signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times) Audio(data=signal, rate=rate) """ Explanation: A NumPy array can be converted to audio. The Audio class normalizes and encodes the data and embeds the resulting audio in the Notebook. For instance, when two sine waves with almost the same frequency are superimposed a phenomena known as beats occur: End of explanation """ from IPython.display import YouTubeVideo YouTubeVideo('sjfsUzECqK0') """ Explanation: Video More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load: End of explanation """ from IPython.display import IFrame IFrame('https://ipython.org', width='100%', height=350) """ Explanation: External sites You can even embed an entire page from another site in an iframe; for example this is IPython's home page: End of explanation """ from IPython.display import FileLink, FileLinks FileLink('../Visualization/Matplotlib.ipynb') """ Explanation: Links to local files IPython provides builtin display classes for generating links to local files. Create a link to a single file using the FileLink object: End of explanation """ FileLinks('./') """ Explanation: Alternatively, to generate links to all of the files in a directory, use the FileLinks object, passing '.' to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, FileLinks would work in a recursive manner creating links to files in all sub-directories as well. End of explanation """
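To tie the `__repr__` discussion above to the rich display system: a class can supply a rich MIME representation itself by defining a method such as `_repr_html_`, which IPython's display machinery looks for and prefers over the plain `__repr__` in the Notebook. A minimal sketch (the `Fraction` class here is a made-up example, not part of IPython):

```python
class Fraction(object):
    def __init__(self, num, den):
        self.num, self.den = num, den

    def __repr__(self):
        # Plain-text fallback, used by print and the terminal.
        return 'Fraction(%d, %d)' % (self.num, self.den)

    def _repr_html_(self):
        # Rich HTML form, picked up automatically by the Notebook.
        return '<sup>%d</sup>&frasl;<sub>%d</sub>' % (self.num, self.den)

f = Fraction(3, 4)
print(repr(f))
print(f._repr_html_())
```

In a Notebook cell, evaluating `f` on its own would render the HTML form; outside IPython, only the plain `__repr__` is used.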
dataworkshop/xgboost
step4.ipynb
mit
import pandas as pd
import xgboost as xgb
import numpy as np
import seaborn as sns
from hyperopt import hp, fmin, tpe, STATUS_OK, Trials
%matplotlib inline

train = pd.read_csv('bike.csv')
train['datetime'] = pd.to_datetime(train['datetime'])
train['day'] = train['datetime'].map(lambda x: x.day)
"""
Explanation: Hyperparameter Optimization - xgboost
What options are there for tuning?
* GridSearch
* RandomizedSearch

All right! Xgboost has about 20 parameters:
1. base_score
2. colsample_bylevel
3. colsample_bytree
4. gamma
5. learning_rate
6. max_delta_step
7. max_depth
8. min_child_weight
9. missing
10. n_estimators
11. nthread
12. objective
13. reg_alpha
14. reg_lambda
15. scale_pos_weight
16. seed
17. silent
18. subsample

Let's tune 12 of them, each with 5-10 candidate values. That gives 5^12 to 10^12 possible combinations. Checking one case every 10 seconds, 5^12 cases would take roughly 77 years, and 10^12 cases over 300,000 years :). That is far too long... but there is a third option: Bayesian optimization.
End of explanation """ def assing_test_samples(data, last_training_day=0.3, seed=1): days = data.day.unique() np.random.seed(seed) np.random.shuffle(days) test_days = days[: int(len(days) * 0.3)] data['is_test'] = data.day.isin(test_days) def select_features(data): columns = data.columns[ (data.dtypes == np.int64) | (data.dtypes == np.float64) | (data.dtypes == np.bool) ].values return [feat for feat in columns if feat not in ['count', 'casual', 'registered'] and 'log' not in feat ] def get_X_y(data, target_variable): features = select_features(data) X = data[features].values y = data[target_variable].values return X,y def train_test_split(train, target_variable): df_train = train[train.is_test == False] df_test = train[train.is_test == True] X_train, y_train = get_X_y(df_train, target_variable) X_test, y_test = get_X_y(df_test, target_variable) return X_train, X_test, y_train, y_test def fit_and_predict(train, model, target_variable): X_train, X_test, y_train, y_test = train_test_split(train, target_variable) model.fit(X_train, y_train) y_pred = model.predict(X_test) return (y_test, y_pred) def post_pred(y_pred): y_pred[y_pred < 0] = 0 return y_pred def rmsle(y_true, y_pred, y_pred_only_positive=True): if y_pred_only_positive: y_pred = post_pred(y_pred) diff = np.log(y_pred+1) - np.log(y_true+1) mean_error = np.square(diff).mean() return np.sqrt(mean_error) assing_test_samples(train) def etl_datetime(df): df['year'] = df['datetime'].map(lambda x: x.year) df['month'] = df['datetime'].map(lambda x: x.month) df['hour'] = df['datetime'].map(lambda x: x.hour) df['minute'] = df['datetime'].map(lambda x: x.minute) df['dayofweek'] = df['datetime'].map(lambda x: x.dayofweek) df['weekend'] = df['datetime'].map(lambda x: x.dayofweek in [5,6]) etl_datetime(train) train['{0}_log'.format('count')] = train['count'].map(lambda x: np.log2(x) ) for name in ['registered', 'casual']: train['{0}_log'.format(name)] = train[name].map(lambda x: np.log2(x+1) ) """ Explanation: Modeling 
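The `assing_test_samples` helper above holds out whole days rather than individual rows, so no day leaks into both splits. The core idea in isolation, on a toy frame (illustrative only; the real notebook does this on the bike data):

```python
import numpy as np
import pandas as pd

data = pd.DataFrame({'day': [1, 1, 2, 2, 3, 3, 4, 4],
                     'count': range(8)})

days = data.day.unique()
np.random.seed(1)
np.random.shuffle(days)
test_days = days[: int(len(days) * 0.3)]  # 30% of the distinct days
data['is_test'] = data.day.isin(test_days)

# Every row of a given day lands in the same split.
print(data.groupby('day').is_test.nunique().max())  # 1
```

Splitting on days instead of rows avoids the optimistic bias you would get from having morning rows of a day in training and afternoon rows of the same day in test.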
End of explanation """ def objective(space): model = xgb.XGBRegressor( max_depth = space['max_depth'], n_estimators = int(space['n_estimators']), subsample = space['subsample'], colsample_bytree = space['colsample_bytree'], learning_rate = space['learning_rate'], reg_alpha = space['reg_alpha'] ) X_train, X_test, y_train, y_test = train_test_split(train, 'count') eval_set = [( X_train, y_train), ( X_test, y_test)] (_, registered_pred) = fit_and_predict(train, model, 'registered_log') (_, casual_pred) = fit_and_predict(train, model, 'casual_log') y_test = train[train.is_test == True]['count'] y_pred = (np.exp2(registered_pred) - 1) + (np.exp2(casual_pred) -1) score = rmsle(y_test, y_pred) print "SCORE:", score return{'loss':score, 'status': STATUS_OK } space ={ 'max_depth': hp.quniform("x_max_depth", 2, 20, 1), 'n_estimators': hp.quniform("n_estimators", 100, 1000, 1), 'subsample': hp.uniform ('x_subsample', 0.8, 1), 'colsample_bytree': hp.uniform ('x_colsample_bytree', 0.1, 1), 'learning_rate': hp.uniform ('x_learning_rate', 0.01, 0.1), 'reg_alpha': hp.uniform ('x_reg_alpha', 0.1, 1) } trials = Trials() best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=15, trials=trials) print(best) """ Explanation: Tuning hyperparmeters using Bayesian optimization algorithms End of explanation """
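The score printed inside `objective` comes from the `rmsle` helper defined earlier. For reference, here is the same metric as a standalone NumPy function, behaving like the notebook's version with its clip-negatives step:

```python
import numpy as np

def rmsle(y_true, y_pred):
    y_pred = np.maximum(y_pred, 0)  # clip negative predictions, as post_pred does
    diff = np.log(y_pred + 1) - np.log(y_true + 1)
    return np.sqrt(np.square(diff).mean())

y_true = np.array([10.0, 20.0, 30.0])
print(rmsle(y_true, y_true))        # perfect predictions -> 0.0
print(rmsle(y_true, y_true * 1.1))  # ~10% relative error -> small score
```

Because it works on log-counts, RMSLE penalizes relative error rather than absolute error, which suits count targets like bike rentals.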
diging/networks
.ipynb_checkpoints/3. Flow control. if, elif, else, and friends-checkpoint.ipynb
gpl-3.0
import random # Ignore me for now! """ Explanation: Programming for Network Research Erick Peirson, PhD | erick.peirson@asu.edu | @captbarberousse Last updated 20 January, 2016 0. Introduction. 1. First steps with Python. 2. Objects and types. 3. Flow control: if, elif, else, and friends. 4: Functions and functional programming. 5: I/O: working with data! Numpy and Pandas. 6: Our first tabular graph. Layout and visualization in Cytoscape. 7: Intro to NetworkX. GraphML. Advanced visualization in Cytoscape. 8: Blockmodels in NetworkX. 9: Properties of graphs: whole-graph statistics in NetworkX and Cytoscape. 10: Properties of nodes: centrality statistics. 11: ... End of explanation """ x = random.random() # This generates a random float between 0. and 1. if x < 1./292000000: # That there's some pretty low odss. print 'I won the powerball!' else: print ':-(' """ Explanation: 3. Flow control: if, elif, else, and friends. In most real-life programming situations, you'll need to do more than a linear set of procedures. Your programs will need to be able to make decisions based on a range of possible conditions. In this notebook we'll talk about how to perform logical operations. 3.1. if and else The simplest form of decision-making in Python is dichotomous: if some condition is true, then do something; otherwise, do something else. If the weather is nice, then I'll wear sandals; otherwise, I'll wear shoes. If Moe and Curly win the election, I'll move to Norway; otherwise, I'll stay in New York. Notice the pattern: if...then...otherwise. In Python, we can write those kind of instructions almost as I've written the sentences above, using the statements if and else. End of explanation """ invitees = ['Jack', 'Jane', 'Jim', 'Jill', 'Joseph', 'Jay', 'Josephus'] myfriend = 'Greg' if myfriend in invitees: print 'w00t!' else: print 'l4me' """ Explanation: if says, "if the following statement is True*, then do the following...". Don't foget the colon (:) afterwards, too. 
The part after the if statement, print 'I won...., is indented, so that Python knows which part should be conditionally executed. else says, "otherwise, do this...". Don't forget the colon! else is at the same level of indentation as the if statement, so that Python knows they go together. The part afterward, print ':-(', is indented, so that Python knows it's the part of the code that will get run in the "otherwise" case. * In Python, the statement need only by "truthy" to be considered True. That is, it could be True, not 0 (or 0.0), or not an empty string. There are all kinds of comparative operators. For a complete list, see this page. Here's an example of how to check whether or not a particular item is contained within a list. Say, for example, that I have a list of invitees for a super-secret-underground LAN party, and I want to know if my friend is on the list. End of explanation """ x = random.random() # Come on, lucky numbers! if myfriend in invitees and x < 1./292000000: print 'Best day ever!' else: print 'meh.' """ Explanation: If I were to win the Powerball AND my friend could come to the LAN party, it would be a pretty great day. I can check for both of those things in one statement, using the and operator. End of explanation """ x = random.random() # I gamble for the thrill. is_there_a_puppy = True if (myfriend in invitees and x < 1./292000000) or is_there_a_puppy: print "This was much more likely, and I've adjusted my expectations." else: print ':-|' """ Explanation: Right, pretty unlucky. But I could get over my disappointment if I saw a puppy. So I'll throw in an or operator. End of explanation """ weather = 'blizzard' footwear = None # I know that I'm wearing something, but I'm not sure what yet. if weather == 'sunny': # ``==`` means "equal to". footwear = 'sandals' # ``=`` means "assign the expression on the right to the variable on the left." 
elif weather == 'blizzard': footwear = 'snow boots' else: footwear = 'shoes' print footwear """ Explanation: Notice that I threw in some parentheses to group my evaluations. In the example above, the statement python if (myfriend in invitees and x &lt; 1./292000000) or is_there_a_puppy: evaluates as python if (False and False) or True: which gets further simplified to python if False or True: which is just python if True: 3.2. elif Quite often I want to make a decision that has more than two potential outcomes. Sure, if it's sunny outside I'm down for sandals, otherwise I'd rather wear shoes. But if it's a blizzard, I'm going for snow boots. Consider the elif operator (short for "else if"): End of explanation """ weather = 'cows' footwear = None if weather == 'sunny': footwear = 'sandals' elif weather == 'blizzard': footwear = 'snow boots' elif weather == 'rain': footwear = 'galoshes' elif weather == 'cows': footwear = 'run away!' else: footwear = 'shoes' print footwear """ Explanation: You can use as many elifs as you like! End of explanation """ my_life_savings = 100000 while my_life_savings > 0: # While I still have money in the bank, my_life_savings -= 1 # ``-=`` means "subtract one and store the result in the same variable." # This is the same as writing: my_life_savings = my_life_savings - 1 x = random.random() # Wooo! if x < 1./292000000: print 'JACKPOT!!' break """ Explanation: 3.3. Doing things over and over and over. Sure, my chances of winning the powerball are pretty low for any given draw. The real trick is to play all the time! SOMEONE has to win, right? This kind of repetitive task calls for a loop. A loop is a control structure in which we do some series of operations repeatedly until some condition is met. For example, suppose that I have saved up for years and years and now have around $100,000 that I want to gamble away--- er, invest, in the Powerball. 
End of explanation """ interactions = [40, 44, 5, 26, 4, 47, 41, 41, 29, 5, 49, 32, # You can write a list over multiple lines, 31, 6, 13, 13, 38, 42, 35, 34, 49, 43, 23, 21] # just be sure to end the line with a comma. """ Explanation: The statement... python while ...: says: "as long as the following statement (...) is True, do the following thing over and over and over. Unsurprisingly, this is called a "while" loop. In the example above, we decrement (subtract 1) from our life savings every iteration, and stop when there is no money left (my_life_savings &lt;= 0). Now, in the extremely unlikely case that I win the Powerball, there's really no incentive to keep playing. In that case, I want to stop the loop prematurely. To do that, I use the command break. break will stop the iteration at that very moment. WARNING WARNING WARNING: avoid endless loops! If you use a while loop, make sure that the condition for the loop (the part that gets evaluated each iteration) will become False in some reasonable duration of time. Consider what would happen if I removed the line that decremented my life savings (my_life_savings -= 1): we'd go on playing the powerball forever! If you do start an endless loop by accident, use the square "stop" button (or select Kernel > Interrupt) in the menu at the top of the notebook. Here's another case: suppose I have a bunch of measurements about how often individual ants in a colony of Pogonomyrmex barbatus interact with each other. I sat there with my little notebook for twenty minutes, and counted the number of times each pair of ants touched antennae. Now I have a list of interaction counts, one entry per pair of ants. It's a small colony, only 4 ants, so there are only $4! = 24$ entries. End of explanation """ interaction_frequencies = [] # Make a new, empty list. for count in interactions: # Each iteration, I get the next item, and I refer to it as ``count``. frequency = float(count)/20. # Interactions per minute. 
interaction_frequencies.append(frequency) # ``append()`` adds the value to the end of the list.
print interaction_frequencies
"""
Explanation: This is called a for loop (bet you didn't see that coming). Unlike a while loop, which relies on a logical evaluation of some kind, for loops require something to iterate over. We call the thing that we're iterating over the iterator. A list is an iterator: it contains a bunch of elements, and I can get them one at a time.
End of explanation
"""
for number in [0, 1, 2, 3, 4, 5]:
    print number
"""
Explanation: Strings are also iterators! I can get one character at a time:
End of explanation
"""
for character in 'this string':
    print character
"""
Explanation: Frequently you will need to perform a task some specific number of times, but not necessarily because you have an iterator of a particular size. For example, suppose I want to calculate $6!$. I know that I will need to perform four multiplication steps ($6 \times 5 \times 4 \times 3 \times 2$). I can use the function xrange to create an iterator over a specified range of integers. For example:
End of explanation
"""
n = 6
factorial = n
for i in xrange(1, n - 1): # Yields the ints 1, 2, 3, and 4.
    # ``*=`` means "Multiply this value by the value on the right, and store the result in the same variable."
    factorial *= n - i # The same as: factorial = factorial * (n - i)
print factorial
"""
Explanation:
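For reference, the factorial product above can be written so that the loop bound never changes mid-loop, and checked against the standard library (a sketch in Python 3 syntax, unlike the Python 2 prints used elsewhere in this notebook):

```python
import math

n = 6
result = n
for i in range(1, n - 1):  # yields 1, 2, 3, 4
    result *= n - i        # multiplies by 5, 4, 3, 2 in turn
print(result)              # 720
print(math.factorial(n))   # 720
```

Keeping the loop bound in a separate variable (`n`) avoids the subtle bug of mutating the same name you are also using to control the iteration.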
asimshankar/tensorflow
tensorflow/contrib/eager/python/examples/generative_examples/text_generation.ipynb
apache-2.0
!pip install unidecode """ Explanation: Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"). Text Generation using a RNN <table class="tfo-notebook-buttons" align="left"><td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/generative_examples/text_generation.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td><td> <a target="_blank" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/generative_examples/text_generation.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on Github</a></td></table> This notebook demonstrates how to generate text using an RNN using tf.keras and eager execution. If you like, you can write a similar model using less code. Here, we show a lower-level impementation that's useful to understand as prework before diving in to deeper examples in a similar, like Neural Machine Translation with Attention. This notebook is an end-to-end example. When you run it, it will download a dataset of Shakespeare's writing. We'll use a collection of plays, borrowed from Andrej Karpathy's excellent The Unreasonable Effectiveness of Recurrent Neural Networks. The notebook will train a model, and use it to generate sample output. Here is the output(with start string='w') after training a single layer GRU for 30 epochs with the default settings below: ``` were to the death of him And nothing of the field in the view of hell, When I said, banish him, I will not burn thee that would live. HENRY BOLINGBROKE: My gracious uncle-- DUKE OF YORK: As much disgraced to the court, the gods them speak, And now in peace himself excuse thee in the world. 
HORTENSIO: Madam, 'tis not the cause of the counterfeit of the earth, And leave me to the sun that set them on the earth And leave the world and are revenged for thee. GLOUCESTER: I would they were talking with the very name of means To make a puppet of a guest, and therefore, good Grumio, Nor arm'd to prison, o' the clouds, of the whole field, With the admire With the feeding of thy chair, and we have heard it so, I thank you, sir, he is a visor friendship with your silly your bed. SAMPSON: I do desire to live, I pray: some stand of the minds, make thee remedies With the enemies of my soul. MENENIUS: I'll keep the cause of my mistress. POLIXENES: My brother Marcius! Second Servant: Will't ple ``` Of course, while some of the sentences are grammatical, most do not make sense. But, consider: Our model is character based (when we began training, it did not yet know how to spell a valid English word, or that words were even a unit of text). The structure of the output resembles a play (blocks begin with a speaker name, in all caps similar to the original text). Sentences generally end with a period. If you look at the text from a distance (or don't read the invididual words too closely, it appears as if it's an excerpt from a play). As a next step, you can experiment training the model on a different dataset - any large text file(ASCII) will do, and you can modify a single line of code below to make that change. Have fun! Install unidecode library A helpful library to convert unicode to ASCII. End of explanation """ # Import TensorFlow >= 1.10 and enable eager execution import tensorflow as tf # Note: Once you enable eager execution, it cannot be disabled. tf.enable_eager_execution() import numpy as np import os import re import random import unidecode import time """ Explanation: Import tensorflow and enable eager execution. 
End of explanation """ path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt') """ Explanation: Download the dataset In this example, we will use the shakespeare dataset. You can use any other dataset that you like. End of explanation """ text = unidecode.unidecode(open(path_to_file).read()) # length of text is the number of characters in it print (len(text)) """ Explanation: Read the dataset End of explanation """ # unique contains all the unique characters in the file unique = sorted(set(text)) # creating a mapping from unique characters to indices char2idx = {u:i for i, u in enumerate(unique)} idx2char = {i:u for i, u in enumerate(unique)} # setting the maximum length sentence we want for a single input in characters max_length = 100 # length of the vocabulary in chars vocab_size = len(unique) # the embedding dimension embedding_dim = 256 # number of RNN (here GRU) units units = 1024 # batch size BATCH_SIZE = 64 # buffer size to shuffle our dataset BUFFER_SIZE = 10000 """ Explanation: Creating dictionaries to map from characters to their indices and vice-versa, which will be used to vectorize the inputs End of explanation """ input_text = [] target_text = [] for f in range(0, len(text)-max_length, max_length): inps = text[f:f+max_length] targ = text[f+1:f+1+max_length] input_text.append([char2idx[i] for i in inps]) target_text.append([char2idx[t] for t in targ]) print (np.array(input_text).shape) print (np.array(target_text).shape) """ Explanation: Creating the input and output tensors Vectorizing the input and the target text because our model cannot understand strings only numbers. But first, we need to create the input and output vectors. Remember the max_length we set above, we will use it here. 
We are creating max_length chunks of input, where each input vector is all the characters in that chunk except the last and the target vector is all the characters in that chunk except the first. For example, consider that the string = 'tensorflow' and the max_length is 9 So, the input = 'tensorflo' and output = 'ensorflow' After creating the vectors, we convert each character into numbers using the char2idx dictionary we created above. End of explanation """ dataset = tf.data.Dataset.from_tensor_slices((input_text, target_text)).shuffle(BUFFER_SIZE) dataset = dataset.batch(BATCH_SIZE, drop_remainder=True) """ Explanation: Creating batches and shuffling them using tf.data End of explanation """ class Model(tf.keras.Model): def __init__(self, vocab_size, embedding_dim, units, batch_size): super(Model, self).__init__() self.units = units self.batch_sz = batch_size self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim) if tf.test.is_gpu_available(): self.gru = tf.keras.layers.CuDNNGRU(self.units, return_sequences=True, return_state=True, recurrent_initializer='glorot_uniform') else: self.gru = tf.keras.layers.GRU(self.units, return_sequences=True, return_state=True, recurrent_activation='sigmoid', recurrent_initializer='glorot_uniform') self.fc = tf.keras.layers.Dense(vocab_size) def call(self, x, hidden): x = self.embedding(x) # output shape == (batch_size, max_length, hidden_size) # states shape == (batch_size, hidden_size) # states variable to preserve the state of the model # this will be used to pass at every step to the model while training output, states = self.gru(x, initial_state=hidden) # reshaping the output so that we can pass it to the Dense layer # after reshaping the shape is (batch_size * max_length, hidden_size) output = tf.reshape(output, (-1, output.shape[2])) # The dense layer will output predictions for every time_steps(max_length) # output shape after the dense layer == (max_length * batch_size, vocab_size) x = self.fc(output) 
return x, states """ Explanation: Creating the model We use the Model Subclassing API which gives us full flexibility to create the model and change it however we like. We use 3 layers to define our model. Embedding layer GRU layer (you can use an LSTM layer here) Fully connected layer End of explanation """ model = Model(vocab_size, embedding_dim, units, BATCH_SIZE) optimizer = tf.train.AdamOptimizer() # using sparse_softmax_cross_entropy so that we don't have to create one-hot vectors def loss_function(real, preds): return tf.losses.sparse_softmax_cross_entropy(labels=real, logits=preds) """ Explanation: Call the model and set the optimizer and the loss function End of explanation """ checkpoint_dir = './training_checkpoints' checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt") checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model) """ Explanation: Checkpoints (Object-based saving) End of explanation """ # Training step EPOCHS = 20 for epoch in range(EPOCHS): start = time.time() # initializing the hidden state at the start of every epoch hidden = model.reset_states() for (batch, (inp, target)) in enumerate(dataset): with tf.GradientTape() as tape: # feeding the hidden state back into the model # This is the interesting step predictions, hidden = model(inp, hidden) # reshaping the target because that's how the # loss function expects it target = tf.reshape(target, (-1,)) loss = loss_function(target, predictions) grads = tape.gradient(loss, model.variables) optimizer.apply_gradients(zip(grads, model.variables)) if batch % 100 == 0: print ('Epoch {} Batch {} Loss {:.4f}'.format(epoch+1, batch, loss)) # saving (checkpoint) the model every 5 epochs if (epoch + 1) % 5 == 0: checkpoint.save(file_prefix = checkpoint_prefix) print ('Epoch {} Loss {:.4f}'.format(epoch+1, loss)) print('Time taken for 1 epoch {} sec\n'.format(time.time() - start)) """ Explanation: Train the model Here we will use a custom training loop with the help of GradientTape() We 
initialize the hidden state of the model with zeros and shape == (batch_size, number of rnn units). We do this by calling the function defined while creating the model. Next, we iterate over the dataset(batch by batch) and calculate the predictions and the hidden states associated with that input. There are a lot of interesting things happening here. The model gets hidden state(initialized with 0), lets call that H0 and the first batch of input, lets call that I0. The model then returns the predictions P1 and H1. For the next batch of input, the model receives I1 and H1. The interesting thing here is that we pass H1 to the model with I1 which is how the model learns. The context learned from batch to batch is contained in the hidden state. We continue doing this until the dataset is exhausted and then we start a new epoch and repeat this. After calculating the predictions, we calculate the loss using the loss function defined above. Then we calculate the gradients of the loss with respect to the model variables(input) Finally, we take a step in that direction with the help of the optimizer using the apply_gradients function. Note:- If you are running this notebook in Colab which has a Tesla K80 GPU it takes about 23 seconds per epoch. End of explanation """ # restoring the latest checkpoint in checkpoint_dir checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir)) """ Explanation: Restore the latest checkpoint End of explanation """ # Evaluation step(generating text using the model learned) # number of characters to generate num_generate = 1000 # You can change the start string to experiment start_string = 'Q' # converting our start string to numbers(vectorizing!) 
input_eval = [char2idx[s] for s in start_string]
input_eval = tf.expand_dims(input_eval, 0)

# empty string to store our results
text_generated = ''

# hidden state shape == (batch_size, number of rnn units); here batch size == 1
hidden = [tf.zeros((1, units))]
for i in range(num_generate):
    predictions, hidden = model(input_eval, hidden)

    # using argmax to predict the character returned by the model
    predicted_id = tf.argmax(predictions[-1]).numpy()

    # We pass the predicted character as the next input to the model
    # along with the previous hidden state
    input_eval = tf.expand_dims([predicted_id], 0)

    text_generated += idx2char[predicted_id]

print (start_string + text_generated)
"""
Explanation: Predicting using our trained model
The below code block is used to generate the text
We start by choosing a start string and initializing the hidden state and setting the number of characters we want to generate.
We get predictions using the start_string and the hidden state
Then we use argmax to calculate the index of the predicted character. We use this predicted character as our next input to the model
The hidden state returned by the model is fed back into the model so that it now has more context rather than just one character. After we predict the next character, the modified hidden states are again fed back into the model, which is how it learns as it gets more context from the previously predicted characters.
If you see the predictions, the model knows when to capitalize, make paragraphs and the text follows a Shakespeare style of writing which is pretty awesome!
End of explanation
"""
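The greedy argmax decoding described above always picks the single most likely character. A common variant (not used in this notebook, shown here as an assumption-labeled sketch) samples from the softmax distribution with a temperature. A minimal NumPy illustration:

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, seed=0):
    # Scale logits, softmax, then draw one index; low temperature ~ argmax.
    rng = np.random.default_rng(seed)
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

logits = [2.0, 0.5, -1.0]
print(sample_with_temperature(logits, temperature=0.01))  # effectively argmax: 0
```

Temperatures near 1.0 reproduce the model's distribution, while values above 1.0 flatten it and produce more varied (and noisier) text.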
BinRoot/TensorFlow-Book
ch10_rnn/Concept02_rnn.ipynb
mit
import numpy as np import tensorflow as tf from tensorflow.contrib import rnn """ Explanation: Ch 10: Concept 02 Recurrent Neural Network Import the relevant libraries: End of explanation """ class SeriesPredictor: def __init__(self, input_dim, seq_size, hidden_dim=10): # Hyperparameters self.input_dim = input_dim self.seq_size = seq_size self.hidden_dim = hidden_dim # Weight variables and input placeholders self.W_out = tf.Variable(tf.random_normal([hidden_dim, 1]), name='W_out') self.b_out = tf.Variable(tf.random_normal([1]), name='b_out') self.x = tf.placeholder(tf.float32, [None, seq_size, input_dim]) self.y = tf.placeholder(tf.float32, [None, seq_size]) # Cost optimizer self.cost = tf.reduce_mean(tf.square(self.model() - self.y)) self.train_op = tf.train.AdamOptimizer().minimize(self.cost) # Auxiliary ops self.saver = tf.train.Saver() def model(self): """ :param x: inputs of size [T, batch_size, input_size] :param W: matrix of fully-connected output layer weights :param b: vector of fully-connected output layer biases """ cell = rnn.BasicLSTMCell(self.hidden_dim, reuse=tf.get_variable_scope().reuse) outputs, states = tf.nn.dynamic_rnn(cell, self.x, dtype=tf.float32) num_examples = tf.shape(self.x)[0] W_repeated = tf.tile(tf.expand_dims(self.W_out, 0), [num_examples, 1, 1]) out = tf.matmul(outputs, W_repeated) + self.b_out out = tf.squeeze(out) return out def train(self, train_x, train_y): with tf.Session() as sess: tf.get_variable_scope().reuse_variables() sess.run(tf.global_variables_initializer()) for i in range(1000): _, mse = sess.run([self.train_op, self.cost], feed_dict={self.x: train_x, self.y: train_y}) if i % 100 == 0: print(i, mse) save_path = self.saver.save(sess, 'model.ckpt') print('Model saved to {}'.format(save_path)) def test(self, test_x): with tf.Session() as sess: tf.get_variable_scope().reuse_variables() self.saver.restore(sess, './model.ckpt') output = sess.run(self.model(), feed_dict={self.x: test_x}) return output """ Explanation: Define 
the RNN model: End of explanation """ if __name__ == '__main__': predictor = SeriesPredictor(input_dim=1, seq_size=4, hidden_dim=10) train_x = [[[1], [2], [5], [6]], [[5], [7], [7], [8]], [[3], [4], [5], [7]]] train_y = [[1, 3, 7, 11], [5, 12, 14, 15], [3, 7, 9, 12]] predictor.train(train_x, train_y) test_x = [[[1], [2], [3], [4]], # 1, 3, 5, 7 [[4], [5], [6], [7]]] # 4, 9, 11, 13 actual_y = [[[1], [3], [5], [7]], [[4], [9], [11], [13]]] pred_y = predictor.test(test_x) print("\nLets run some tests!\n") for i, x in enumerate(test_x): print("When the input is {}".format(x)) print("The ground truth output should be {}".format(actual_y[i])) print("And the model thinks it is {}\n".format(pred_y[i])) """ Explanation: Now, we'll train a series predictor. Let's say we have a sequence of numbers [a, b, c, d] that we want to transform into [a, a+b, b+c, c+d]. We'll give the RNN a couple examples in the training data. Let's see how well it learns this intended transformation: End of explanation """
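The target transformation can be checked directly in plain Python — a small helper (hypothetical, not part of the book's code) that reproduces the `train_y` and `actual_y` values used above:

```python
def running_pair_sums(seq):
    """Transform [a, b, c, d] into [a, a+b, b+c, c+d]."""
    return [seq[0]] + [seq[i - 1] + seq[i] for i in range(1, len(seq))]

print(running_pair_sums([1, 2, 5, 6]))  # -> [1, 3, 7, 11], matching train_y
print(running_pair_sums([1, 2, 3, 4]))  # -> [1, 3, 5, 7], matching actual_y
```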
marc-moreaux/Deep-Learning-classes
notebooks/Logistic_regression_colum.ipynb
mit
import matplotlib.pyplot as plt
import numpy as np

x_1 = np.array([0,1,0,1,0])
x_2 = np.array([0,1,0,1,1])

def to_img(vec):
    matrix = np.ones((5, 3))
    matrix[:, 1] = 1-vec
    return matrix

fig, axs = plt.subplots(1,2)
axs[0].imshow(to_img(x_1), cmap='gray')
axs[1].imshow(to_img(x_2), cmap='gray')
plt.show()
"""
Explanation: Colun or semicolun ?
In this notebook, you are going to implement a logistic regression algorithm.
- 1st, you'll build a dataset
- 2nd, you'll define a model
- 3rd, a backpropagation method
- 4th, a gradient descent method
Dataset
We build a dataset to illustrate our purpose. The dataset we build is supposed to help us convert a paper scan into an ASCII string. Let's imagine that, when a paper is scanned, we can detect with high confidence that we are over a colun or a semicolun. Our objective here is to detect whether it's one or the other.
Therefore, our algorithm is fed with a vector $x_i \in [0,1]^5$ which represents the intensity of the pen stroke written on the paper. Here below, you have an example of 'perfect' strokes: $x_1$ is an example of colun, and $x_2$ an example of semicolun.
End of explanation
"""
y_1 = 0
y_2 = 1
"""
Explanation: Whenever a sample $x_i$ belongs to the class colun, we'll label it with $y_i=0$. Likewise, whenever a sample $x_i$ belongs to the class semicolun, we'll label it with $y_i=1$.
End of explanation
"""
X = np.array([
    [.0, 1., 0., 1., 0.],
    [.0, .9, 0., .9, 0.],
    [.2, .8, 0., .8, .2],
    [.0, 1., 0., 1., 1.],
    [.0, 1., 0., .5, .5],
    [.2, .8, 0., .7, .7]])
y = np.array([0,0,0,1,1,1])

fig, axs = plt.subplots(1,6)
for i in range(len(X)):
    axs[i].imshow(to_img(X[i]), cmap='gray')
plt.show()
"""
Explanation: Dataset generation
End of explanation
"""
# Write the code ^^
"""
Explanation: Define a logistic regression model
(You may want to read this : http://cs229.stanford.edu/notes/cs229-notes1.pdf).
You're going to build a model which outputs a prediction value $p_i$ given an input $x_i$.
This prediction $p_i$ will reflect the probability that your input $x_i$ belongs to class 1.
$$
\begin{align}
p_i &= P(Y=1 | W, x_i) \\
p(x_i,W) &= P(Y=1 | W, x_i)
\end{align}
$$
As $p_i$ is a probability, it must be in [0,1].
The model we'll consider performs a weighted sum of its input:
- Weighted sum : $ s = (W^t \cdot X + b) $
And then squashes the values between 0 and 1 (which is our prediction value):
- prediction : $ p(s) = \frac{1}{1 + e^{-s}} $
End of explanation
"""
# Pen, paper, and your code ^^
"""
Explanation: Compare these predicted values ($p_i$) with the true output ($y_i$)
Overall, we would like to maximize the likelihood that we are right at predicting a label.
$$
\begin{align}
\max \text{likelihood} &= \text{argmax}_w \Pi_i P(Y | W, x_i) \\
&= \text{argmax}_w \Pi_i \big( P(Y=y_i | W, x_i) \big) \\
&= \text{argmax}_w \Pi_i \big( P(Y=1 | W, x_i)^{y_i} \cdot P(Y=0 | W, x_i)^{1-y_i} \big) \\
&= \text{argmax}_w \Pi_i \big( P(Y=1 | W, x_i)^{y_i} \cdot \big(1-P(Y=1 | W, x_i)\big)^{1-y_i}\big) \\
&= \text{argmax}_w \Pi_i \big( p_i^{y_i} \cdot (1-p_i)^{1-y_i}\big) \\
&= \text{argmax}_w \sum_i \big( y_i \ln(p_i) + (1-y_i) \ln(1-p_i) \big) \\
&= \text{argmin}_w - \sum_i \big( y_i \ln(p_i) + (1-y_i) \ln(1-p_i) \big)
\end{align}
$$
(Taking the logarithm turns the product into a sum and preserves the argmax, since $\ln$ is monotonically increasing.)
And this term is going to be our loss that we want to reduce:
$$ L(X, W, y) = - \sum_i \big( y_i \ln(p_i) + (1-y_i) \ln(1-p_i) \big) $$
This is how you compare the prediction you made ($p_i$) to the true output you expected ($y_i$).
In our example: recall that $x_0$ is a colun, so its label is $y_0=0$. If your classifier is good, you'd expect it to predict a low probability that $x_0$ is a semicolun, hence $p_0 =$ something small like 0.1.
The error for this one sample is going to be:
$$
\begin{align}
L(X, W, y) &= - \sum_i \big( y_i \ln(p_i) + (1-y_i) \ln(1-p_i) \big) \\
&= - \big( y_0 \ln(p_0) + (1-y_0) \ln(1-p_0) \big) \\
&= - \big( 0 \cdot \ln(.1) + (1-0) \ln(1-.1) \big) \\
&= - \ln(.9)
\end{align}
$$
which is small (about 0.105), as it should be for a good prediction.
Find the minimum of the Loss function
To reduce the error, we have to find the minimum of $L(X, W, y)$. Hence, we derive it with respect to $W$ and find the 'zeros'.
End of explanation
"""
# Code
"""
Explanation: Apply Stochastic gradient descent to solve this
We are going to solve this with Stochastic Gradient Descent (SGD), meaning that we start with some values for $W$ and update these values such that our loss value diminishes:
$$ W = W - \alpha \frac{\partial L(X, W, y)}{\partial W} $$
End of explanation
"""
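One possible solution sketch for the exercises above (not the notebook's official answer): differentiating the loss gives $\partial L/\partial W = \sum_i (p_i - y_i)\,x_i$, which yields a compact NumPy implementation of the model and its gradient descent on the colun/semicolun dataset. The learning rate and iteration count are chosen by hand for this tiny dataset:

```python
import numpy as np

# The six training samples and labels from the "Dataset generation" cell.
X = np.array([[.0, 1., 0., 1., 0.],
              [.0, .9, 0., .9, 0.],
              [.2, .8, 0., .8, .2],
              [.0, 1., 0., 1., 1.],
              [.0, 1., 0., .5, .5],
              [.2, .8, 0., .7, .7]])
y = np.array([0, 0, 0, 1, 1, 1])

def predict(X, W, b):
    s = X @ W + b                    # weighted sum
    return 1.0 / (1.0 + np.exp(-s))  # sigmoid squashes into (0, 1)

rng = np.random.default_rng(0)
W = 0.01 * rng.normal(size=5)
b = 0.0
alpha = 0.3  # learning rate
for _ in range(5000):
    p = predict(X, W, b)
    W -= alpha * X.T @ (p - y)       # step against the gradient
    b -= alpha * np.sum(p - y)

print(np.round(predict(X, W, b)))    # all six training samples classified correctly
```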
Neurosim-lab/netpyne
netpyne/tutorials/netpyne-course-2021/import_cells.ipynb
mit
!pwd """ Explanation: Importing Cells in NetPyNE (1) Clone repository and compile mod files Determine your location in the directory structure End of explanation """ %cd /content/ """ Explanation: Move to (or stay in) the '/content' directory End of explanation """ !pwd """ Explanation: Ensure you are in the correct directory --> Expected output: "/content" End of explanation """ !pip install neuron !pip install netpyne import matplotlib import os import json %matplotlib inline """ Explanation: Install NEURON and NetPyNE, and import matplotlib End of explanation """ if os.path.isdir('/content/cells_netpyne2021'): !rm -r /content/cells_netpyne2021 """ Explanation: This next line will detect if the directory already exists (i.e. you are re-running this code), and will delete it to prevent future errors. End of explanation """ !git clone https://github.com/ericaygriffith/cells_netpyne2021.git """ Explanation: Clone repository with the necessary cell and mod files End of explanation """ cd cells_netpyne2021/ """ Explanation: Move into the repository with all the necessary files End of explanation """ !pwd """ Explanation: Ensure you are in the repository with the 'pwd' command --> Expected output: '/content/cells_netpyne2021' End of explanation """ !nrnivmodl """ Explanation: Compile the mod files --> Expected output: creation of an 'x86_64' directory End of explanation """ from netpyne import specs, sim # Network parameters netParams = specs.NetParams() # object of class NetParams to store the network parameters """ Explanation: (2) Importing cells from different file formats Set up netParams object End of explanation """ netParams.loadCellParamsRule(label='TC_reduced', fileName = 'TC_reduced_cellParams.json') netParams.cellParams['TC_reduced'] """ Explanation: 2a. 
Import cell from .json format End of explanation """ netParams.importCellParams( label='PYR_HH3D_swc', conds={'cellType': 'PYR', 'cellModel': 'HH3D_swc'}, fileName='BS0284.swc', cellName='swc_cell') netParams.cellParams.keys() """ Explanation: 2b. Import a detailed morphology from a .swc file End of explanation """ netParams.importCellParams( label='PYR_HH3D_hoc', conds={'cellType': 'PYR', 'cellModel': 'HH3D_hoc'}, fileName='geom.hoc', cellName='E21', importSynMechs=False) netParams.cellParams.keys() """ Explanation: 2c. Import a cell from a .hoc (NEURON) file End of explanation """ netParams.importCellParams( label='sRE_py', conds={'cellType': 'sRE', 'cellModel': 'HH'}, fileName='sRE.py', cellName='sRE', importSynMechs=False) netParams.cellParams.keys() """ Explanation: 2d. Import a cell from a .py (python) file End of explanation """ netParams.importCellParams( label='mouse_hipp_swc', conds={'cellType': 'hipp','cellModel': 'HH3D'}, fileName='mouseGABA_hipp.swc', cellName='swc_hippCell' ) netParams.cellParams.keys() """ Explanation: EXERCISE: import the other swc file contained in the cells_netpyne2021 directory End of explanation """ netParams.cellParams.keys() """ Explanation: (3) Explore and manipulate cell parameters Explore the cell types located in the netParams.cellParams dictionary End of explanation """ netParams.cellParams['TC_reduced']['secs']['soma']['geom']['L'] geom_TC = netParams.cellParams['TC_reduced']['secs']['soma']['geom'] geom_TC['L'] """ Explanation: EXERCISE: Find the geometry (length & diameter) of the soma compartment for each of the above cells End of explanation """ netParams.cellParams['TC_reduced']['secs']['soma']['mechs'].keys() """ Explanation: EXERCISE: List all of the channel mechanisms in the soma compartment of the thalamocortical cell model (TC_reduced) End of explanation """ netParams.cellParams['TC_reduced']['secs']['soma']['mechs'].keys() netParams.cellParams['TC_reduced']['secs']['soma']['mechs']['pas'].keys() 
netParams.cellParams['TC_reduced']['secs']['soma']['mechs']['pas']['g'] = 5.0e05
netParams.cellParams['TC_reduced']['secs']['soma']['mechs']['pas']['g']
"""
Explanation: Now we want to explore (and change) the values of a channel parameter in a given cell model
End of explanation
"""
netParams.cellParams['mouse_hipp_swc']['secs']['soma_0']['mechs']['pas'] = {'g': 0.0000357, 'e': -70}
"""
Explanation: EXERCISE: Change the conductance of the leak channel in the soma compartment of the reticular cell model (sRE.py)
EXERCISE: Insert a passive leak channel ('pas') into the soma compartment of the mouseGABA_hipp.swc cell model
End of explanation
"""
for sec in netParams.cellParams['PYR_HH3D_swc']['secs'].keys():
    netParams.cellParams['PYR_HH3D_swc']['secs'][sec]['geom']['cm'] = 1
"""
Explanation: EXERCISE: Change the capacitance of all compartments in the model defined by BS0284.swc (PYR_HH3D_swc)
End of explanation
"""
netParams.popParams['TC_pop'] = {'cellType': 'TC', 'numCells': 1, 'cellModel': 'HH_reduced'}
"""
Explanation: Now let's see how these changes affect the cell behavior by plotting the cell's response to current input before and after param changes!
EXERCISE: First create a population of thalamocortical cells
End of explanation
"""
# define the current clamp source first; delay and duration are example values
netParams.stimSourceParams['Input'] = {'type': 'IClamp', 'del': 500, 'dur': 500, 'amp': -0.1}
netParams.stimTargetParams['Input->TC_pop'] = {'source': 'Input', 'sec':'soma', 'loc': 0.5, 'conds': {'pop':'TC_pop'}}
"""
Explanation: EXERCISE: Add hyperpolarizing current clamp stimulation of -0.1 nA to thalamocortical cell pop
End of explanation
"""
## cfg
cfg = specs.SimConfig()     # object of class SimConfig to store simulation configuration
cfg.duration = 2*1e3        # Duration of the simulation, in ms
cfg.dt = 0.01               # Internal integration timestep to use
cfg.verbose = 1             # Show detailed messages
cfg.recordTraces = {'V_soma':{'sec':'soma','loc':0.5,'var':'v'}}  # Dict with traces to record
cfg.recordStep = 0.01
cfg.filename = 'model_output'  # Set file output name
cfg.saveJson = False
cfg.analysis['plotTraces'] = {'include': [0], 'saveFig': True}  # Plot recorded traces for this list of cells
cfg.hParams['celsius'] = 36
"""
Explanation: Add cfg params
End of explanation
"""
sim.createSimulateAnalyze(netParams = netParams, simConfig = cfg)
"""
Explanation: Create network and run simulation
End of explanation
"""
## cfg
cfg = specs.SimConfig()     # object of class SimConfig to store simulation configuration
cfg.duration = 2*1e3        # Duration of the simulation, in ms
cfg.dt = 0.01               # Internal integration timestep to use
cfg.verbose = 1             # Show detailed messages
cfg.recordTraces = {'V_soma':{'sec':'soma','loc':0.5,'var':'v'}}  # Dict with traces to record
cfg.recordStep = 0.01
cfg.filename = 'model_output'  # Set file output name
cfg.saveJson = False
cfg.analysis['plotTraces'] = {'include': [0], 'saveFig': True}  # Plot recorded traces for this list of cells
cfg.hParams['celsius'] = 36
"""
Explanation: EXERCISE: We see a rebound burst! T-type calcium channels are normally considered responsible for this behavior. What happens if we set the conductance of this channel to 0?
cfg params End of explanation """ sim.createSimulateAnalyze(netParams = netParams, simConfig = cfg) """ Explanation: Run the sim End of explanation """ netParams.popParams['HH3D_pop_hoc'] = {'cellType': 'PYR', 'numCells': 1, 'cellModel': 'HH3D_hoc'} sim.createSimulateAnalyze(netParams = netParams, simConfig = cfg) %matplotlib inline sim.analysis.plotShape(includePre = [], includePost=['HH3D_pop_hoc'], showSyns=False, figSize=(4,9), dist=0.8, saveFig=True) """ Explanation: (4) Plotting Morphology End of explanation """ netParams.sizeX = 200 """ Explanation: EXERCISE: Try plotting the morphology of other cell models (5) Making a Network EXERCISE: To begin creating a network, specify the geometry of the area you would like to model. End of explanation """ netParams.propVelocity = 100.0 # propagation velocity (um/ms) netParams.probLengthConst = 150.0 # length constant for conn probability (um) """ Explanation: Now let's set the propagation velocity and length constant: End of explanation """ netParams.synMechParams['exc'] = {'mod': 'Exp2Syn', 'tau1': 0.8, 'tau2': 5.3, 'e': 0} # NMDA synaptic mechanism netParams.synMechParams['inh'] = {'mod': 'Exp2Syn', 'tau1': 0.6, 'tau2': 8.5, 'e': -75} # GABA synaptic mechanism """ Explanation: EXERCISE: Now establish a few populations of cells Now we need some synaptic mechanism parameters End of explanation """ netParams.stimSourceParams['bkg'] = {'type': 'NetStim', 'rate': 40, 'noise': 0.3} """ Explanation: Add some network stimulation parameters End of explanation """ netParams.stimTargetParams['bkg->all'] = {'source': 'bkg', 'conds': {'cellType': ['E','I']}, 'weight': 10.0, 'sec': 'soma', 'delay': 'max(1, normal(5,2))', 'synMech': 'exc'} """ Explanation: EXERCISE: modify the line below such that your stim object can target the populations in your network End of explanation """ netParams.connParams['E->all'] = { 'preConds': {'cellType': 'E'}, 'postConds': {'y': [100,1000]}, # E -> all (100-1000 um) 'probability': 0.1 , # 
probability of connection 'weight': '5.0*post_ynorm', # synaptic weight 'delay': 'dist_3D/propVelocity', # transmission delay (ms) 'synMech': 'exc'} # synaptic mechanism """ Explanation: Add cell connectivity rules EXERCISE: modify the lines below to fit your network End of explanation """ cfg.analysis['plot2Dnet'] = {'saveFig': True} # plot 2D cell positions and connections cfg.analysis['plotConn'] = {'saveFig': True} # plot connectivity matrix """ Explanation: EXERCISE: Add the appropriate line(s) to run the network and plot a 2D representation of your network w/ connectivity between cells End of explanation """
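The string-valued rules above (`'delay': 'dist_3D/propVelocity'`, and `probLengthConst` for distance-dependent connectivity) are evaluated by NetPyNE per connection. What they compute can be sketched in plain NumPy — the two cell positions below are hypothetical, and the exponential fall-off is one typical way `probLengthConst` is used (the connectivity rule above actually uses a flat probability of 0.1):

```python
import numpy as np

pre = np.array([50.0, 120.0, 80.0])    # hypothetical soma position (um)
post = np.array([110.0, 300.0, 20.0])  # hypothetical soma position (um)

prop_velocity = 100.0      # um/ms, as set in netParams above
prob_length_const = 150.0  # um, as set in netParams above

dist_3d = np.linalg.norm(post - pre)         # what 'dist_3D' evaluates to
delay = dist_3d / prop_velocity              # 'dist_3D/propVelocity', in ms
prob = np.exp(-dist_3d / prob_length_const)  # distance-dependent probability

print(round(float(dist_3d), 1), round(float(delay), 2), round(float(prob), 3))
# -> 199.0 1.99 0.265
```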
enbanuel/phys202-2015-work
days/day08/Display.ipynb
mit
class Ball(object):
    pass

b = Ball()
b.__repr__()

print(b)
"""
Explanation: Display of Rich Output
In Python, objects can declare their textual representation using the __repr__ method.
End of explanation
"""
class Ball(object):

    def __repr__(self):
        return 'TEST'

b = Ball()
print(b)
"""
Explanation: Overriding the __repr__ method:
End of explanation
"""
from IPython.display import display
"""
Explanation: IPython expands on this idea and allows objects to declare other, rich representations including:

HTML
JSON
PNG
JPEG
SVG
LaTeX

A single object can declare some or all of these representations; all of them are handled by IPython's display system.
Basic display imports
The display function is a general purpose tool for displaying different representations of objects. Think of it as print for these rich representations.
End of explanation
"""
from IPython.display import (
    display_pretty, display_html, display_jpeg,
    display_png, display_json, display_latex, display_svg
)
"""
Explanation: A few points:

Calling display on an object will send all possible representations to the Notebook.
These representations are stored in the Notebook document.
In general the Notebook will use the richest available representation.

If you want to display a particular representation, there are specific functions for that:
End of explanation
"""
from IPython.display import Image
i = Image(filename='./ipython-image.png')
display(i)
"""
Explanation: Images
To work with images (JPEG, PNG) use the Image class.
End of explanation
"""
i
"""
Explanation: Returning an Image object from an expression will automatically display it:
End of explanation
"""
Image(url='http://python.org/images/python-logo.gif')
"""
Explanation: An image can also be displayed from raw data or a URL.
End of explanation
"""
from IPython.display import HTML
s = """<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>"""
h = HTML(s)
display(h)
"""
Explanation: HTML
Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the HTML class.
End of explanation
"""
%%html
<table>
<tr>
<th>Header 1</th>
<th>Header 2</th>
</tr>
<tr>
<td>row 1, cell 1</td>
<td>row 1, cell 2</td>
</tr>
<tr>
<td>row 2, cell 1</td>
<td>row 2, cell 2</td>
</tr>
</table>

%%html
<style>
#notebook {
    background-color: salmon;
    font-family: comic sans ms;
}
</style>
"""
Explanation: You can also use the %%html cell magic to accomplish the same thing.
End of explanation
"""
from IPython.display import Javascript
"""
Explanation: You can remove the above styling by using "Cell"$\rightarrow$"Current Output"$\rightarrow$"Clear" with that cell selected.
JavaScript
The Notebook also enables objects to declare a JavaScript representation. At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as d3.js for output.
End of explanation
"""
js = Javascript('alert("hi")');
display(js)
"""
Explanation: Pass a string of JavaScript source code to the JavaScript object and then display it.
End of explanation """ %%javascript alert("hi"); """ Explanation: The same thing can be accomplished using the %%javascript cell magic: End of explanation """ Javascript( """$.getScript('https://cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""" ) %%html <style type="text/css"> circle { fill: rgb(31, 119, 180); fill-opacity: .25; stroke: rgb(31, 119, 180); stroke-width: 1px; } .leaf circle { fill: #ff7f0e; fill-opacity: 1; } text { font: 10px sans-serif; } </style> %%javascript // element is the jQuery element we will append to var e = element.get(0); var diameter = 600, format = d3.format(",d"); var pack = d3.layout.pack() .size([diameter - 4, diameter - 4]) .value(function(d) { return d.size; }); var svg = d3.select(e).append("svg") .attr("width", diameter) .attr("height", diameter) .append("g") .attr("transform", "translate(2,2)"); d3.json("./flare.json", function(error, root) { var node = svg.datum(root).selectAll(".node") .data(pack.nodes) .enter().append("g") .attr("class", function(d) { return d.children ? "node" : "leaf node"; }) .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; }); node.append("title") .text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); }); node.append("circle") .attr("r", function(d) { return d.r; }); node.filter(function(d) { return !d.children; }).append("text") .attr("dy", ".3em") .style("text-anchor", "middle") .text(function(d) { return d.name.substring(0, d.r / 3); }); }); d3.select(self.frameElement).style("height", diameter + "px"); """ Explanation: Here is a more complicated example that loads d3.js from a CDN, uses the %%html magic to load CSS styles onto the page and then runs ones of the d3.js examples. End of explanation """ from IPython.display import Audio Audio("./scrubjay.mp3") """ Explanation: Audio IPython makes it easy to work with sounds interactively. The Audio display class allows you to create an audio control that is embedded in the Notebook. 
The interface is analogous to the interface of the Image display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers.
End of explanation
"""
import numpy as np
max_time = 3
f1 = 120.0
f2 = 124.0
rate = 8000.0
L = 3
times = np.linspace(0, L, int(rate*L))
signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times)

Audio(data=signal, rate=rate)
"""
Explanation: A NumPy array can be converted to audio. The Audio class normalizes and encodes the data and embeds the resulting audio in the Notebook.
For instance, when two sine waves with almost the same frequency are superimposed, a phenomenon known as beats occurs:
End of explanation
"""
from IPython.display import YouTubeVideo
YouTubeVideo('sjfsUzECqK0')
"""
Explanation: Video
More exotic objects can also be displayed, as long as their representation supports the IPython display protocol.
For example, videos hosted externally on YouTube are easy to load:
End of explanation
"""
from IPython.display import IFrame
IFrame('https://ipython.org', width='100%', height=350)
"""
Explanation: External sites
You can even embed an entire page from another site in an iframe; for example this is IPython's home page:
End of explanation
"""
from IPython.display import FileLink, FileLinks
FileLink('../Visualization/Matplotlib.ipynb')
"""
Explanation: Links to local files
IPython provides builtin display classes for generating links to local files. Create a link to a single file using the FileLink object:
End of explanation
"""
FileLinks('./')
"""
Explanation: Alternatively, to generate links to all of the files in a directory, use the FileLinks object, passing '.' to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, FileLinks would work in a recursive manner creating links to files in all sub-directories as well.
End of explanation
"""
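Beyond the built-in display classes, your own objects can participate in the rich display system by defining hook methods such as `_repr_html_`, which is part of IPython's display protocol. The class and data below are made up for illustration:

```python
class TwoColumnTable(object):
    """A tiny object that supplies its own HTML representation."""
    def __init__(self, rows):
        self.rows = rows

    def _repr_html_(self):
        # The Notebook calls this hook to obtain rich HTML output.
        cells = ''.join('<tr><td>{}</td><td>{}</td></tr>'.format(a, b)
                        for a, b in self.rows)
        return '<table>{}</table>'.format(cells)

t = TwoColumnTable([('row 1, cell 1', 'row 1, cell 2'),
                    ('row 2, cell 1', 'row 2, cell 2')])
print(t._repr_html_())
```

In a Notebook, returning `t` from a cell would render the HTML table directly, with no call to display needed.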
deepchem/deepchem
examples/tutorials/Training_a_Generative_Adversarial_Network_on_MNIST.ipynb
mit
!pip install --pre deepchem import deepchem deepchem.__version__ """ Explanation: Training a Generative Adversarial Network on MNIST In this tutorial, we will train a Generative Adversarial Network (GAN) on the MNIST dataset. This is a large collection of 28x28 pixel images of handwritten digits. We will try to train a network to produce new images of handwritten digits. Colab This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link. End of explanation """ import deepchem as dc import tensorflow as tf from deepchem.models.optimizers import ExponentialDecay from tensorflow.keras.layers import Conv2D, Conv2DTranspose, Dense, Reshape import matplotlib.pyplot as plot import matplotlib.gridspec as gridspec %matplotlib inline mnist = tf.keras.datasets.mnist.load_data(path='mnist.npz') images = mnist[0][0].reshape((-1, 28, 28, 1))/255 dataset = dc.data.NumpyDataset(images) """ Explanation: To begin, let's import all the libraries we'll need and load the dataset (which comes bundled with Tensorflow). End of explanation """ def plot_digits(im): plot.figure(figsize=(3, 3)) grid = gridspec.GridSpec(4, 4, wspace=0.05, hspace=0.05) for i, g in enumerate(grid): ax = plot.subplot(g) ax.set_xticks([]) ax.set_yticks([]) ax.imshow(im[i,:,:,0], cmap='gray') plot_digits(images) """ Explanation: Let's view some of the images to get an idea of what they look like. 
End of explanation """ class DigitGAN(dc.models.WGAN): def get_noise_input_shape(self): return (10,) def get_data_input_shapes(self): return [(28, 28, 1)] def create_generator(self): return tf.keras.Sequential([ Dense(7*7*8, activation=tf.nn.relu), Reshape((7, 7, 8)), Conv2DTranspose(filters=16, kernel_size=5, strides=2, activation=tf.nn.relu, padding='same'), Conv2DTranspose(filters=1, kernel_size=5, strides=2, activation=tf.sigmoid, padding='same') ]) def create_discriminator(self): return tf.keras.Sequential([ Conv2D(filters=32, kernel_size=5, strides=2, activation=tf.nn.leaky_relu, padding='same'), Conv2D(filters=64, kernel_size=5, strides=2, activation=tf.nn.leaky_relu, padding='same'), Dense(1, activation=tf.math.softplus) ]) gan = DigitGAN(learning_rate=ExponentialDecay(0.001, 0.9, 5000)) """ Explanation: Now we can create our GAN. Like in the last tutorial, it consists of two parts: The generator takes random noise as its input and produces output that will hopefully resemble the training data. The discriminator takes a set of samples as input (possibly training data, possibly created by the generator), and tries to determine which are which. This time we will use a different style of GAN called a Wasserstein GAN (or WGAN for short). In many cases, they are found to produce better results than conventional GANs. The main difference between the two is in the discriminator (often called a "critic" in this context). Instead of outputting the probability of a sample being real training data, it tries to learn how to measure the distance between the training distribution and generated distribution. That measure can then be directly used as a loss function for training the generator. We use a very simple model. The generator uses a dense layer to transform the input noise into a 7x7 image with eight channels. That is followed by two convolutional layers that upsample it first to 14x14, and finally to 28x28. 
The discriminator does roughly the same thing in reverse. Two convolutional layers downsample the image first to 14x14, then to 7x7. A final dense layer produces a single number as output. In the last tutorial we used a sigmoid activation to produce a number between 0 and 1 that could be interpreted as a probability. Since this is a WGAN, we instead use a softplus activation. It produces an unbounded positive number that can be interpreted as a distance.
End of explanation
"""
def iterbatches(epochs):
    for i in range(epochs):
        for batch in dataset.iterbatches(batch_size=gan.batch_size):
            yield {gan.data_inputs[0]: batch[0]}

gan.fit_gan(iterbatches(100), generator_steps=0.2, checkpoint_interval=5000)
"""
Explanation: Now to train it. As in the last tutorial, we write a generator to produce data. This time the data is coming from a dataset, which we loop over 100 times.
One other difference is worth noting. When training a conventional GAN, it is important to keep the generator and discriminator in balance throughout training. If either one gets too far ahead, it becomes very difficult for the other one to learn.
WGANs do not have this problem. In fact, the better the discriminator gets, the cleaner a signal it provides and the easier it becomes for the generator to learn. We therefore specify generator_steps=0.2 so that it will only take one step of training the generator for every five steps of training the discriminator. This tends to produce faster training and better results.
End of explanation
"""
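The "distance" interpretation above can be made concrete with a toy sketch. The critic's loss in a WGAN is the mean score on generated samples minus the mean score on real ones; everything here (the fixed linear critic, the two toy distributions) is invented for illustration, and DeepChem's WGAN class computes the real losses internally:

```python
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=4)  # fixed linear "critic" standing in for the trained network

def critic(batch):
    return np.log1p(np.exp(batch @ w))  # softplus score, as in the model above

real = rng.normal(loc=1.0, size=(64, 4))
fake = rng.normal(loc=-1.0, size=(64, 4))

# The critic is trained to *maximize* the gap between real and generated scores
# (its estimate of the distance between the two distributions), i.e. to minimize:
critic_loss = critic(fake).mean() - critic(real).mean()
# The generator is trained to push its samples toward higher critic scores:
gen_loss = -critic(fake).mean()
print(critic_loss, gen_loss)
```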
Islast/BrainNetworksInPython
tutorials/tutorial.ipynb
mit
import numpy as np import networkx as nx import scona as scn import scona.datasets as datasets """ Explanation: scona scona is a tool to perform network analysis over correlation networks of brain regions. This tutorial will go through the basic functionality of scona, taking us from our inputs (a matrix of structural regional measures over subjects) to a report of local network measures for each brain region, and network level comparisons to a cohort of random graphs of the same degree. End of explanation """ # Read in sample data from the NSPN WhitakerVertes PNAS 2016 paper. df, names, covars, centroids = datasets.NSPN_WhitakerVertes_PNAS2016.import_data() df.head() """ Explanation: Importing data A scona analysis starts with four inputs. * regional_measures A pandas DataFrame with subjects as rows. The columns should include structural measures for each brain region, as well as any subject-wise covariates. * names A list of names of the brain regions. This will be used to specify which columns of the regional_measures matrix to want to correlate over. * covars (optional) A list of your covariates. This will be used to specify which columns of regional_measure you wish to correct for. * centroids A list of tuples representing the cartesian coordinates of brain regions. This list should be in the same order as the list of brain regions to accurately assign coordinates to regions. The coordinates are expected to obey the convention the the x=0 plane is the same plane that separates the left and right hemispheres of the brain. End of explanation """ df_res = scn.create_residuals_df(df, names, covars) df_res """ Explanation: Create a correlation matrix We calculate residuals of the matrix df for the columns of names, correcting for the columns in covars. 
End of explanation """ M = scn.create_corrmat(df_res, method='pearson') """ Explanation: Now we create a correlation matrix over the columns of df_res End of explanation """ G = scn.BrainNetwork(network=M, parcellation=names, centroids=centroids) """ Explanation: Create a weighted graph A short sidenote on the BrainNetwork class: This is a very lightweight subclass of the Networkx.Graph class. This means that any methods you can use on a Networkx.Graph object can also be used on a BrainNetwork object, although the reverse is not true. We have added various methods which allow us to keep track of measures that have already been calculated, which, especially later on when one is dealing with 10^3 random graphs, saves a lot of time. All scona measures are implemented in such a way that they can be used on a regular Networkx.Graph object. For example, instead of G.threshold(10) you can use scn.threshold_graph(G, 10). Also you can create a BrainNetwork from a Networkx.Graph G, using scn.BrainNetwork(network=G) Initialise a weighted graph G from the correlation matrix M. The parcellation and centroids arguments are used to label nodes with names and coordinates respectively. End of explanation """ H = G.threshold(10) """ Explanation: Threshold to create a binary graph We threshold G at cost 10 to create a binary graph with 10% as many edges as the complete graph G. Ordinarily when thresholding one takes the 10% of edges with the highest weight. In our case, because we want the resulting graph to be connected, we calculate a minimum spanning tree first. If you want to omit this step, you can pass the argument mst=False to threshold. The threshold method does not edit objects inplace End of explanation """ H.report_nodal_measures().head() """ Explanation: Calculate nodal summary. 
calculate_nodal_measures will compute and record the following nodal measures:
* average_dist (if centroids available)
* total_dist (if centroids available)
* betweenness
* closeness
* clustering coefficient
* degree
* interhem (if centroids are available)
* interhem_proportion (if centroids are available)
* nodal partition
* participation coefficient under partition calculated above
* shortest_path_length
report_nodal_measures returns nodal attributes in a DataFrame. Let's try it now.
End of explanation
"""
H.calculate_nodal_measures()
H.report_nodal_measures().head()
"""
Explanation: Use calculate_nodal_measures to fill in a bunch of nodal measures
End of explanation
"""
nx.set_node_attributes(H, name="hat", values={x: x**2 for x in H.nodes})
"""
Explanation: We can also add measures as one might normally add nodal attributes to a networkx graph
End of explanation
"""
H.report_nodal_measures(columns=['name', 'degree', 'hat']).head()
"""
Explanation: These show up in our DataFrame too
End of explanation
"""
H.calculate_global_measures()
H.rich_club();
"""
Explanation: Calculate Global measures
End of explanation
"""
brain_bundle = scn.GraphBundle([H], ['NSPN_cost=10'])
"""
Explanation: Create a GraphBundle
The GraphBundle object is the scona way to handle across-network comparisons. What is it? Essentially it's a python dictionary with BrainNetwork objects as values.
End of explanation
"""
brain_bundle
"""
Explanation: This creates a dictionary-like object with BrainNetwork H keyed by 'NSPN_cost=10'
End of explanation
"""
# Note that 10 is not usually a sufficient number of random graphs to do meaningful analysis;
# it is used here for time considerations
brain_bundle.create_random_graphs('NSPN_cost=10', 10)
brain_bundle
"""
Explanation: Now add a series of random_graphs created by edge swap randomisation of H (keyed by 'NSPN_cost=10')
End of explanation
"""
brain_bundle.report_global_measures()
brain_bundle.report_rich_club()
"""
Explanation: Report on a GraphBundle
The following method will calculate global measures (if they have not already been calculated) for all of the graphs in graph_bundle and report the results in a DataFrame. We can do the same for rich club coefficients below.
End of explanation
"""
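A standalone illustration (an addition, not part of the scona tutorial) of the edge-swap randomisation idea behind the random-graph cohort above: `nx.double_edge_swap` rewires edges while preserving every node's degree, which is exactly the property the null models are meant to hold fixed. The graph here is an arbitrary toy example.

```python
import networkx as nx

# Edge-swap randomisation sketch: rewire a graph while keeping
# each node's degree unchanged.
G = nx.barbell_graph(5, 2)
degrees_before = sorted(d for _, d in G.degree())

R = G.copy()
nx.double_edge_swap(R, nswap=10, max_tries=1000, seed=42)
degrees_after = sorted(d for _, d in R.degree())

# The degree sequence is identical even though the wiring changed.
print(degrees_before == degrees_after)
```

Repeating this many times yields a cohort of degree-matched random graphs to compare network measures against.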
CyberCRI/dataanalysis-herocoli-redmetrics
tests/merge.ipynb
cc0-1.0
import numpy as np
import pandas as pd

keyEN = ['red', 'yellow', 'green', 'blue', 'black']
keyFR1 = ['rouge', 'jaune', 'vert', 'bleu', 'noir']
keyFR2 = ['jaune', 'vert', 'bleu', 'noir', 'rouge']
keyDE = ['gelb', 'gruen', 'blau', 'schwartz', 'rot']
dataENFR = pd.DataFrame({'keyEN' : keyEN, 'keyFR' : keyFR1})
dataENFR
dataFRDE = pd.DataFrame({'keyFR' : keyFR2, 'keyDE' : keyDE})
dataFRDE
simpleMerge = pd.merge(dataENFR, dataFRDE, on='keyFR', how='outer')
simpleMerge
"""
Explanation: http://pandas.pydata.org/pandas-docs/version/0.13.1/merging.html
Simple 1-1 correspondence
End of explanation
"""
users = ['Tom', 'Tom', 'Tom', 'Bill', 'Bill', 'Bill', 'Bill', 'Jack', 'Bob', 'Jim']
sessionsUsers = ['sessionTom1', 'sessionTom2', 'sessionTom3', 'sessionBill1', 'sessionBill2', 'sessionBill3', 'sessionBill4', 'sessionJack', 'sessionBob', 'sessionJim']
sessionsChapters = ['sessionTom1', 'sessionTom1', 'sessionTom1', 'sessionTom2', 'sessionTom2', 'sessionTom3', 'sessionBill1', 'sessionBill2', 'sessionBill2', 'sessionBill3', 'sessionBill3', 'sessionBill3', 'sessionBill4', 'sessionBill4', 'sessionBill4', 'sessionBill4', 'sessionJack', 'sessionJack', 'sessionJack', 'sessionBob', 'sessionJim']
chaptersSessions = ['1', '2', '3', '1', '2', '1', '1', '2', '3', '4', '5', '6', '5', '6', '5', '6', '9', '10', '11', '10', '1']
times = 100 * np.random.rand(len(chaptersSessions))
times.sort()
times
"""
Explanation: Complex 1-n correspondence
End of explanation
"""
dataUsers = pd.DataFrame({'users' : users, 'sessions' : sessionsUsers})
#dataUsers
dataChapters = pd.DataFrame({'sessions' : sessionsChapters, 'chapters' : chaptersSessions, 'times' : times})
#dataChapters
complexMerge = pd.merge(dataUsers, dataChapters, on='sessions', how='outer')
complexMerge
usersChapters = complexMerge.drop('sessions', axis=1)
usersChapters.groupby('users').max()
"""
Explanation: https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.rand.html
End of explanation
"""
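A side note not covered in the notebook above: newer pandas versions accept an `indicator` argument to `pd.merge`, which adds a `_merge` column telling you whether each row matched in the left frame, the right frame, or both. This is handy for auditing the NaN rows that an outer merge creates. The frames below are small invented examples.

```python
import pandas as pd

# Audit an outer merge: `_merge` records the provenance of each row.
left = pd.DataFrame({'keyFR': ['rouge', 'jaune', 'vert'],
                     'keyEN': ['red', 'yellow', 'green']})
right = pd.DataFrame({'keyFR': ['jaune', 'vert', 'bleu'],
                      'keyDE': ['gelb', 'gruen', 'blau']})

audited = pd.merge(left, right, on='keyFR', how='outer', indicator=True)
print(audited[['keyFR', '_merge']])
```

Filtering on `audited['_merge'] == 'left_only'` then isolates the keys that found no partner on the right.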
bjshaw/phys202-2015-work
assignments/assignment03/ProjectEuler8.ipynb
mit
import numpy as np d1000 = """ 73167176531330624919225119674426574742355349194934 96983520312774506326239578318016984801869478851843 85861560789112949495459501737958331952853208805511 12540698747158523863050715693290963295227443043557 66896648950445244523161731856403098711121722383113 62229893423380308135336276614282806444486645238749 30358907296290491560440772390713810515859307960866 70172427121883998797908792274921901699720888093776 65727333001053367881220235421809751254540594752243 52584907711670556013604839586446706324415722155397 53697817977846174064955149290862569321978468622482 83972241375657056057490261407972968652414535100474 82166370484403199890008895243450658541227588666881 16427171479924442928230863465674813919123162824586 17866458359124566529476545682848912883142607690042 24219022671055626321111109370544217506941658960408 07198403850962455444362981230987879927244284909188 84580156166097919133875499200524063689912560717606 05886116467109405077541002256983155200055935729725 71636269561882670428252483600823257530420752963450 """ """ Explanation: Project Euler: Problem 8 https://projecteuler.net/problem=8 The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832. (see the number below) Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product? 
Use NumPy for this computation
End of explanation
"""
d1000_new = d1000.replace('\n', '')
"""
Explanation: First, I removed all the \n's from d1000 to get a single string of digits
End of explanation
"""
lst = []
for i in d1000_new:
    lst.append(int(i))
"""
Explanation: Then, I appended each integer value in d1000_new to a new list
End of explanation
"""
a = np.array(lst)
"""
Explanation: Then I made this list into an array
End of explanation
"""
maximum_prod = 0
for n in range(0, len(a)+1):
    if n <= (len(a) - 13):
        prod = np.prod(a[n:n+13])
        if prod > maximum_prod:
            maximum_prod = prod
print(maximum_prod)
assert True # leave this for grading
"""
Explanation: I started the main code by defining the maximum product as zero, and then a for loop that looped through all the digits in the list of digits. Using an if statement, I set a bound so no groups of thirteen would be attempted where fewer than thirteen digits remain (near the end of the number). I used np.prod to calculate the product of each thirteen-digit group using array slicing. A final if statement goes through the products as they're created, and if one is larger than the previous maximum product, it becomes the new maximum product. Finally, I print the max product.
End of explanation
"""
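An alternative sketch (not part of the original solution): the same windowed maximum product written as a single expression over `np.prod`. A short demo string is used here with a window of 4; the window length is a parameter, so the identical code handles the Euler problem with window = 13 on the full 1000-digit number.

```python
import numpy as np

# Maximum product over every window of `window` adjacent digits.
digits = np.array([int(c) for c in "9989876543210123456789"])
window = 4

best = max(int(np.prod(digits[i:i + window]))
           for i in range(len(digits) - window + 1))
print(best)  # 9 * 9 * 8 * 9 = 5832
```

The generator stops at `len(digits) - window + 1`, so every slice has exactly `window` elements, which avoids the short-window edge case entirely.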
marburg-open-courseware/gmoc
docs/mpg-if_error_continue/examples/e-03-1_quicksort.ipynb
mit
n = 5
e = 1
for i in range(1, n+1):
    e = e * i
#e
def fac(n):
    if n <= 1:
        return 1
    return(n * fac(n-1))
d = fac(5)
print(d)
"""
Explanation: Sorting
<hr>
The following examples show two implementations of a quicksort algorithm, one using the Lomuto, one using the Hoare partitioning approach, and one example for merge sort.
End of explanation
"""
def partitionLomuto(my_list, low, high):
    pivot = my_list[high]
    print("Actual pivot", pivot)
    print("Actual list before partitioning", my_list[low:high+1])
    i = low
    for j in range(low, high):
        if my_list[j] <= pivot:
            my_list[i], my_list[j] = my_list[j], my_list[i]
            i = i+1
    my_list[i], my_list[high] = my_list[high], my_list[i]
    pivot = i
    print("Actual list after partitioning", my_list[low:high+1])
    print("New pivot position: ", pivot)
    print("------------------")
    return pivot

def quickSortLomuto(my_list, low, high):
    if low < high:
        pivot = partitionLomuto(my_list, low, high)
        quickSortLomuto(my_list, low, pivot-1)
        quickSortLomuto(my_list, pivot+1, high)
    return my_list

my_unsorted_list = [21,11,31,9,8,19,1]
my_list = my_unsorted_list
quickSortLomuto(my_unsorted_list, 0, len(my_unsorted_list)-1)
"""
Explanation: Quicksort with partition algorithm of Lomuto
Example following pseudocode taken from Wikipedia
End of explanation
"""
def partitionHoare(my_list, low, high):
    pivot = my_list[low]
    print("Actual pivot", pivot)
    print("Actual list before partitioning", my_list[low:high+1])
    i = low - 1
    j = high + 1
    while True:
        i = i + 1
        while my_list[i] < pivot:
            i = i + 1
        j = j - 1
        while my_list[j] > pivot:
            j = j - 1
        if i >= j:
            print("Actual list after partitioning", my_list[low:high+1])
            print("New partition boundary: ", j)
            print("------------------")
            return j
        my_list[i], my_list[j] = my_list[j], my_list[i]

def quickSortHoare(my_list, low, high):
    if low < high:
        boundary = partitionHoare(my_list, low, high)
        quickSortHoare(my_list, low, boundary)
        quickSortHoare(my_list, boundary+1, high)
    return my_list

my_unsorted_list = [21,11,31,9, 25, 8,19,1]
my_list = my_unsorted_list
quickSortHoare(my_unsorted_list, 0, len(my_unsorted_list)-1)
"""
Explanation: Quicksort with partition algorithm of Hoare
Example following pseudocode taken from Wikipedia
End of explanation
"""
def mergeSort(my_list):
    if len(my_list) <= 1:
        return my_list
    half = len(my_list)//2
    left_list = my_list[:half]
    right_list = my_list[half:]
    print("Left list :", left_list, "Right list :", right_list)
    left_list = mergeSort(left_list)
    right_list = mergeSort(right_list)
    return merge(left_list, right_list)

def merge(left_list, right_list):
    print("Merging...")
    result = []
    while len(left_list) > 0 and len(right_list) > 0:
        if left_list[0] < right_list[0]:
            print("Left :", left_list[0], "Right :", right_list[0])
            result.append(left_list.pop(0))
            print("Result :", result)
        else:
            print("Left :", left_list[0], "Right :", right_list[0])
            result.append(right_list.pop(0))
            print("Result :", result)
    while len(left_list) > 0:
        print("Left :", left_list[0])
        result.append(left_list.pop(0))
        print("Result :", result)
    while len(right_list) > 0:
        print("Right :", right_list[0])
        result.append(right_list.pop(0))
        print("Result :", result)
    print("------------------")
    return result

alist = [54,26,93,17,77,31,44,55,20]
sort = mergeSort(alist)
print(sort)
"""
Explanation: Merge sort
Example following pseudocode taken from Wikipedia
End of explanation
"""
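The routines above mutate the list in place and print each step; as an addition to the course material, here is a compact, non-mutating quicksort in list-comprehension style together with a property check against Python's built-in sorted(). The same check works for any of the implementations above.

```python
import random

def quicksort(xs):
    # Non-mutating quicksort: partition around the first element.
    if len(xs) <= 1:
        return xs
    pivot = xs[0]
    smaller = [x for x in xs[1:] if x <= pivot]
    larger = [x for x in xs[1:] if x > pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

# Property check: agree with sorted() on many random inputs,
# including empty lists and lists with duplicates.
random.seed(1)
for _ in range(200):
    data = [random.randint(0, 50) for _ in range(random.randint(0, 15))]
    assert quicksort(list(data)) == sorted(data)
print("all checks passed")
```

Randomised checks like this catch exactly the edge cases (duplicates, empty input) where hand-written partition schemes tend to break.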
Hvass-Labs/TensorFlow-Tutorials
05_Ensemble_Learning.ipynb
mit
from IPython.display import Image Image('images/02_network_flowchart.png') """ Explanation: TensorFlow Tutorial #05 Ensemble Learning by Magnus Erik Hvass Pedersen / GitHub / Videos on YouTube WARNING! This tutorial does not work with TensorFlow v. 1.9 due to the PrettyTensor builder API apparently no longer being updated and supported by the Google Developers. It is recommended that you use the Keras API instead, which also makes it much easier to train or load multiple models to create an ensemble, see e.g. Tutorial #10 for inspiration on how to load and use pre-trained models using Keras. Introduction This tutorial shows how to use a so-called ensemble of convolutional neural networks. Instead of using a single neural network, we use several neural networks and average their outputs. This is used on the MNIST data-set for recognizing hand-written digits. The ensemble improves the classification accuracy slightly on the test-set, but the difference is so small that it is possibly random. Furthermore, the ensemble mis-classifies some images that are correctly classified by some of the individual networks. This tutorial builds on the previous tutorials, so you should have a basic understanding of TensorFlow and the add-on package Pretty Tensor. A lot of the source-code and text here is similar to the previous tutorials and may be read quickly if you have recently read the previous tutorials. Flowchart The following chart shows roughly how the data flows in a single Convolutional Neural Network that is implemented below. The network has two convolutional layers and two fully-connected layers, with the last layer being used for the final classification of the input images. See Tutorial #02 for a more detailed description of this network and convolution in general. This tutorial implements an ensemble of 5 such neural networks, where the network structure is the same but the weights and other variables are different for each network. 
End of explanation """ %matplotlib inline import matplotlib.pyplot as plt import tensorflow as tf import numpy as np from sklearn.metrics import confusion_matrix import time from datetime import timedelta import math import os # Use PrettyTensor to simplify Neural Network construction. import prettytensor as pt """ Explanation: Imports End of explanation """ tf.__version__ """ Explanation: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version: End of explanation """ pt.__version__ """ Explanation: PrettyTensor version: End of explanation """ from tensorflow.examples.tutorials.mnist import input_data data = input_data.read_data_sets('data/MNIST/', one_hot=True) """ Explanation: Load Data The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path. End of explanation """ print("Size of:") print("- Training-set:\t\t{}".format(len(data.train.labels))) print("- Test-set:\t\t{}".format(len(data.test.labels))) print("- Validation-set:\t{}".format(len(data.validation.labels))) """ Explanation: The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets, but we will make random training-sets further below. End of explanation """ data.test.cls = np.argmax(data.test.labels, axis=1) data.validation.cls = np.argmax(data.validation.labels, axis=1) """ Explanation: Class numbers The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test- and validation-sets, so we calculate them now. 
End of explanation """ combined_images = np.concatenate([data.train.images, data.validation.images], axis=0) combined_labels = np.concatenate([data.train.labels, data.validation.labels], axis=0) """ Explanation: Helper-function for creating random training-sets We will train 5 neural networks on different training-sets that are selected at random. First we combine the original training- and validation-sets into one big set. This is done for both the images and the labels. End of explanation """ print(combined_images.shape) print(combined_labels.shape) """ Explanation: Check that the shape of the combined arrays is correct. End of explanation """ combined_size = len(combined_images) combined_size """ Explanation: Size of the combined data-set. End of explanation """ train_size = int(0.8 * combined_size) train_size """ Explanation: Define the size of the training-set used for each neural network. You can try and change this. End of explanation """ validation_size = combined_size - train_size validation_size """ Explanation: We do not use a validation-set during training, but this would be the size. End of explanation """ def random_training_set(): # Create a randomized index into the full / combined training-set. idx = np.random.permutation(combined_size) # Split the random index into training- and validation-sets. idx_train = idx[0:train_size] idx_validation = idx[train_size:] # Select the images and labels for the new training-set. x_train = combined_images[idx_train, :] y_train = combined_labels[idx_train, :] # Select the images and labels for the new validation-set. x_validation = combined_images[idx_validation, :] y_validation = combined_labels[idx_validation, :] # Return the new training- and validation-sets. return x_train, y_train, x_validation, y_validation """ Explanation: Helper-function for splitting the combined data-set into a random training- and validation-set. End of explanation """ # We know that MNIST images are 28 pixels in each dimension. 
img_size = 28 # Images are stored in one-dimensional arrays of this length. img_size_flat = img_size * img_size # Tuple with height and width of images used to reshape arrays. img_shape = (img_size, img_size) # Number of colour channels for the images: 1 channel for gray-scale. num_channels = 1 # Number of classes, one class for each of 10 digits. num_classes = 10 """ Explanation: Data Dimensions The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below. End of explanation """ def plot_images(images, # Images to plot, 2-d array. cls_true, # True class-no for images. ensemble_cls_pred=None, # Ensemble predicted class-no. best_cls_pred=None): # Best-net predicted class-no. assert len(images) == len(cls_true) # Create figure with 3x3 sub-plots. fig, axes = plt.subplots(3, 3) # Adjust vertical spacing if we need to print ensemble and best-net. if ensemble_cls_pred is None: hspace = 0.3 else: hspace = 1.0 fig.subplots_adjust(hspace=hspace, wspace=0.3) # For each of the sub-plots. for i, ax in enumerate(axes.flat): # There may not be enough images for all sub-plots. if i < len(images): # Plot image. ax.imshow(images[i].reshape(img_shape), cmap='binary') # Show true and predicted classes. if ensemble_cls_pred is None: xlabel = "True: {0}".format(cls_true[i]) else: msg = "True: {0}\nEnsemble: {1}\nBest Net: {2}" xlabel = msg.format(cls_true[i], ensemble_cls_pred[i], best_cls_pred[i]) # Show the classes as the label on the x-axis. ax.set_xlabel(xlabel) # Remove ticks from the plot. ax.set_xticks([]) ax.set_yticks([]) # Ensure the plot is shown correctly with multiple plots # in a single Notebook cell. plt.show() """ Explanation: Helper-function for plotting images Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image. End of explanation """ # Get the first images from the test-set. 
images = data.test.images[0:9] # Get the true classes for those images. cls_true = data.test.cls[0:9] # Plot the images and labels using our helper-function above. plot_images(images=images, cls_true=cls_true) """ Explanation: Plot a few images to see if data is correct End of explanation """ x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x') """ Explanation: TensorFlow Graph The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time. TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives. TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs. A TensorFlow graph consists of the following parts which will be detailed below: Placeholder variables used for inputting data to the graph. Variables that are going to be optimized so as to make the convolutional network perform better. The mathematical formulas for the neural network. A loss measure that can be used to guide the optimization of the variables. An optimization method which updates the variables. In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial. 
Placeholder variables Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below. First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat. End of explanation """ x_image = tf.reshape(x, [-1, img_size, img_size, num_channels]) """ Explanation: The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is: End of explanation """ y_true = tf.placeholder(tf.float32, shape=[None, 10], name='y_true') """ Explanation: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case. End of explanation """ y_true_cls = tf.argmax(y_true, dimension=1) """ Explanation: We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point. 
End of explanation """ x_pretty = pt.wrap(x_image) """ Explanation: Neural Network This section implements the Convolutional Neural Network using Pretty Tensor, which is much simpler than a direct implementation in TensorFlow, see Tutorial #03. The basic idea is to wrap the input tensor x_image in a Pretty Tensor object which has helper-functions for adding new computational layers so as to create an entire neural network. Pretty Tensor takes care of the variable allocation, etc. End of explanation """ with pt.defaults_scope(activation_fn=tf.nn.relu): y_pred, loss = x_pretty.\ conv2d(kernel=5, depth=16, name='layer_conv1').\ max_pool(kernel=2, stride=2).\ conv2d(kernel=5, depth=36, name='layer_conv2').\ max_pool(kernel=2, stride=2).\ flatten().\ fully_connected(size=128, name='layer_fc1').\ softmax_classifier(num_classes=num_classes, labels=y_true) """ Explanation: Now that we have wrapped the input image in a Pretty Tensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code. Note that pt.defaults_scope(activation_fn=tf.nn.relu) makes activation_fn=tf.nn.relu an argument for each of the layers constructed inside the with-block, so that Rectified Linear Units (ReLU) are used for each of these layers. The defaults_scope makes it easy to change arguments for all of the layers. End of explanation """ optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss) """ Explanation: Optimization Method Pretty Tensor gave us the predicted class-label (y_pred) as well as a loss-measure that must be minimized, so as to improve the ability of the neural network to classify the input images. It is unclear from the documentation for Pretty Tensor whether the loss-measure is cross-entropy or something else. But we now use the AdamOptimizer to minimize the loss. Note that optimization is not performed at this point. 
In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution. End of explanation """ y_pred_cls = tf.argmax(y_pred, dimension=1) """ Explanation: Performance Measures We need a few more performance measures to display the progress to the user. First we calculate the predicted class number from the output of the neural network y_pred, which is a vector with 10 elements. The class number is the index of the largest element. End of explanation """ correct_prediction = tf.equal(y_pred_cls, y_true_cls) """ Explanation: Then we create a vector of booleans telling us whether the predicted class equals the true class of each image. End of explanation """ accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) """ Explanation: The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers. End of explanation """ saver = tf.train.Saver(max_to_keep=100) """ Explanation: Saver In order to save the variables of the neural network, we now create a Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph. Nothing is actually saved at this point, which will be done further below. Note that if you have more than 100 neural networks in the ensemble then you must increase max_to_keep accordingly. End of explanation """ save_dir = 'checkpoints/' """ Explanation: This is the directory used for saving and retrieving the data. End of explanation """ if not os.path.exists(save_dir): os.makedirs(save_dir) """ Explanation: Create the directory if it does not exist. End of explanation """ def get_save_path(net_number): return save_dir + 'network' + str(net_number) """ Explanation: This function returns the save-path for the data-file with the given network number. 
End of explanation
"""
session = tf.Session()
"""
Explanation: TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
End of explanation
"""
def init_variables():
    session.run(tf.global_variables_initializer())
"""
Explanation: Initialize variables
The variables for weights and biases must be initialized before we start optimizing them. We make a simple wrapper-function for this, because we will call it several times below.
End of explanation
"""
train_batch_size = 64
"""
Explanation: Helper-function to create a random training batch.
There are thousands of images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.
If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
End of explanation
"""
def random_batch(x_train, y_train):
    # Total number of images in the training-set.
    num_images = len(x_train)

    # Create a random index into the training-set.
    idx = np.random.choice(num_images,
                           size=train_batch_size,
                           replace=False)

    # Use the random index to select random images and labels.
    x_batch = x_train[idx, :]  # Images.
    y_batch = y_train[idx, :]  # Labels.

    # Return the batch.
    return x_batch, y_batch
"""
Explanation: Function for selecting a random training-batch of the given size.
End of explanation
"""
def optimize(num_iterations, x_train, y_train):
    # Start-time used for printing time-usage below.
    start_time = time.time()

    for i in range(num_iterations):
        # Get a batch of training examples.
        # x_batch now holds a batch of images and
        # y_true_batch are the true labels for those images.
x_batch, y_true_batch = random_batch(x_train, y_train) # Put the batch into a dict with the proper names # for placeholder variables in the TensorFlow graph. feed_dict_train = {x: x_batch, y_true: y_true_batch} # Run the optimizer using this batch of training data. # TensorFlow assigns the variables in feed_dict_train # to the placeholder variables and then runs the optimizer. session.run(optimizer, feed_dict=feed_dict_train) # Print status every 100 iterations and after last iteration. if i % 100 == 0: # Calculate the accuracy on the training-batch. acc = session.run(accuracy, feed_dict=feed_dict_train) # Status-message for printing. msg = "Optimization Iteration: {0:>6}, Training Batch Accuracy: {1:>6.1%}" # Print it. print(msg.format(i + 1, acc)) # Ending time. end_time = time.time() # Difference between start and end-times. time_dif = end_time - start_time # Print the time-usage. print("Time usage: " + str(timedelta(seconds=int(round(time_dif))))) """ Explanation: Helper-function to perform optimization iterations Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations. End of explanation """ num_networks = 5 """ Explanation: Create ensemble of neural networks Number of neural networks in the ensemble. End of explanation """ num_iterations = 10000 """ Explanation: Number of optimization iterations for each neural network. End of explanation """ if True: # For each of the neural networks. for i in range(num_networks): print("Neural network: {0}".format(i)) # Create a random training-set. Ignore the validation-set. x_train, y_train, _, _ = random_training_set() # Initialize the variables of the TensorFlow graph. session.run(tf.global_variables_initializer()) # Optimize the variables using this training-set. 
optimize(num_iterations=num_iterations, x_train=x_train, y_train=y_train) # Save the optimized variables to disk. saver.save(sess=session, save_path=get_save_path(i)) # Print newline. print() """ Explanation: Create the ensemble of neural networks. All networks use the same TensorFlow graph that was defined above. For each neural network the TensorFlow weights and variables are initialized to random values and then optimized. The variables are then saved to disk so they can be reloaded later. You may want to skip this computation if you just want to re-run the Notebook with different analysis of the results. End of explanation """ # Split the data-set in batches of this size to limit RAM usage. batch_size = 256 def predict_labels(images): # Number of images. num_images = len(images) # Allocate an array for the predicted labels which # will be calculated in batches and filled into this array. pred_labels = np.zeros(shape=(num_images, num_classes), dtype=np.float) # Now calculate the predicted labels for the batches. # We will just iterate through all the batches. # There might be a more clever and Pythonic way of doing this. # The starting index for the next batch is denoted i. i = 0 while i < num_images: # The ending index for the next batch is denoted j. j = min(i + batch_size, num_images) # Create a feed-dict with the images between index i and j. feed_dict = {x: images[i:j, :]} # Calculate the predicted labels using TensorFlow. pred_labels[i:j] = session.run(y_pred, feed_dict=feed_dict) # Set the start-index for the next batch to the # end-index of the current batch. i = j return pred_labels """ Explanation: Helper-functions for calculating and predicting classifications This function calculates the predicted labels of images, that is, for each image it calculates a vector of length 10 indicating which of the 10 classes the image is. The calculation is done in batches because it might use too much RAM otherwise. 
If your computer crashes then you can try and lower the batch-size. End of explanation """ def correct_prediction(images, labels, cls_true): # Calculate the predicted labels. pred_labels = predict_labels(images=images) # Calculate the predicted class-number for each image. cls_pred = np.argmax(pred_labels, axis=1) # Create a boolean array whether each image is correctly classified. correct = (cls_true == cls_pred) return correct """ Explanation: Calculate a boolean array whether the predicted classes for the images are correct. End of explanation """ def test_correct(): return correct_prediction(images = data.test.images, labels = data.test.labels, cls_true = data.test.cls) """ Explanation: Calculate a boolean array whether the images in the test-set are classified correctly. End of explanation """ def validation_correct(): return correct_prediction(images = data.validation.images, labels = data.validation.labels, cls_true = data.validation.cls) """ Explanation: Calculate a boolean array whether the images in the validation-set are classified correctly. End of explanation """ def classification_accuracy(correct): # When averaging a boolean array, False means 0 and True means 1. # So we are calculating: number of True / len(correct) which is # the same as the classification accuracy. return correct.mean() """ Explanation: Helper-functions for calculating the classification accuracy This function calculates the classification accuracy given a boolean array whether each image was correctly classified. E.g. classification_accuracy([True, True, False, False, False]) = 2/5 = 0.4 End of explanation """ def test_accuracy(): # Get the array of booleans whether the classifications are correct # for the test-set. correct = test_correct() # Calculate the classification accuracy and return it. return classification_accuracy(correct) """ Explanation: Calculate the classification accuracy on the test-set. 
End of explanation """ def validation_accuracy(): # Get the array of booleans whether the classifications are correct # for the validation-set. correct = validation_correct() # Calculate the classification accuracy and return it. return classification_accuracy(correct) """ Explanation: Calculate the classification accuracy on the original validation-set. End of explanation """ def ensemble_predictions(): # Empty list of predicted labels for each of the neural networks. pred_labels = [] # Classification accuracy on the test-set for each network. test_accuracies = [] # Classification accuracy on the validation-set for each network. val_accuracies = [] # For each neural network in the ensemble. for i in range(num_networks): # Reload the variables into the TensorFlow graph. saver.restore(sess=session, save_path=get_save_path(i)) # Calculate the classification accuracy on the test-set. test_acc = test_accuracy() # Append the classification accuracy to the list. test_accuracies.append(test_acc) # Calculate the classification accuracy on the validation-set. val_acc = validation_accuracy() # Append the classification accuracy to the list. val_accuracies.append(val_acc) # Print status message. msg = "Network: {0}, Accuracy on Validation-Set: {1:.4f}, Test-Set: {2:.4f}" print(msg.format(i, val_acc, test_acc)) # Calculate the predicted labels for the images in the test-set. # This is already calculated in test_accuracy() above but # it is re-calculated here to keep the code a bit simpler. pred = predict_labels(images=data.test.images) # Append the predicted labels to the list. pred_labels.append(pred) return np.array(pred_labels), \ np.array(test_accuracies), \ np.array(val_accuracies) pred_labels, test_accuracies, val_accuracies = ensemble_predictions() """ Explanation: Results and analysis Function for calculating the predicted labels for all the neural networks in the ensemble. The labels are combined further below. 
End of explanation """ print("Mean test-set accuracy: {0:.4f}".format(np.mean(test_accuracies))) print("Min test-set accuracy: {0:.4f}".format(np.min(test_accuracies))) print("Max test-set accuracy: {0:.4f}".format(np.max(test_accuracies))) """ Explanation: Summarize the classification accuracies on the test-set for the neural networks in the ensemble. End of explanation """ pred_labels.shape """ Explanation: The predicted labels of the ensemble is a 3-dim array, the first dim is the network-number, the second dim is the image-number, the third dim is the classification vector. End of explanation """ ensemble_pred_labels = np.mean(pred_labels, axis=0) ensemble_pred_labels.shape """ Explanation: Ensemble predictions There are different ways to calculate the predicted labels for the ensemble. One way is to calculate the predicted class-number for each neural network, and then select the class-number with most votes. But this requires a large number of neural networks relative to the number of classes. The method used here is instead to take the average of the predicted labels for all the networks in the ensemble. This is simple to calculate and does not require a large number of networks in the ensemble. End of explanation """ ensemble_cls_pred = np.argmax(ensemble_pred_labels, axis=1) ensemble_cls_pred.shape """ Explanation: The ensemble's predicted class number is then the index of the highest number in the label, which is calculated using argmax as usual. End of explanation """ ensemble_correct = (ensemble_cls_pred == data.test.cls) """ Explanation: Boolean array whether each of the images in the test-set was correctly classified by the ensemble of neural networks. End of explanation """ ensemble_incorrect = np.logical_not(ensemble_correct) """ Explanation: Negate the boolean array so we can use it to lookup incorrectly classified images. 
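Stepping back to how the ensemble's labels were combined: the soft-voting scheme used here (averaging the predicted label vectors across networks before taking argmax) can be contrasted with plain majority voting on toy data; the arrays below are invented purely for illustration:

```python
import numpy as np

# pred_labels shape: (networks, images, classes) -- two networks, one image, three classes.
pred_labels = np.array([
    [[0.6, 0.3, 0.1]],   # network 0 mildly favours class 0
    [[0.1, 0.8, 0.1]],   # network 1 strongly favours class 1
])

soft = np.mean(pred_labels, axis=0).argmax(axis=1)  # average first, then argmax
hard = np.argmax(pred_labels, axis=2)               # per-network argmax votes

print(soft)  # [1]
```

With only two networks a plain vote is tied (`hard` is `[[0], [1]]`), while the average lets the more confident network dominate, which is one reason soft voting works with small ensembles.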
End of explanation """ test_accuracies """ Explanation: Best neural network Now we find the single neural network that performed best on the test-set. First list the classification accuracies on the test-set for all the neural networks in the ensemble. End of explanation """ best_net = np.argmax(test_accuracies) best_net """ Explanation: The index of the neural network with the highest classification accuracy. End of explanation """ test_accuracies[best_net] """ Explanation: The best neural network's classification accuracy on the test-set. End of explanation """ best_net_pred_labels = pred_labels[best_net, :, :] """ Explanation: Predicted labels of the best neural network. End of explanation """ best_net_cls_pred = np.argmax(best_net_pred_labels, axis=1) """ Explanation: The predicted class-number. End of explanation """ best_net_correct = (best_net_cls_pred == data.test.cls) """ Explanation: Boolean array whether the best neural network classified each image in the test-set correctly. End of explanation """ best_net_incorrect = np.logical_not(best_net_correct) """ Explanation: Boolean array whether each image is incorrectly classified. End of explanation """ np.sum(ensemble_correct) """ Explanation: Comparison of ensemble vs. the best single network The number of images in the test-set that were correctly classified by the ensemble. End of explanation """ np.sum(best_net_correct) """ Explanation: The number of images in the test-set that were correctly classified by the best neural network. End of explanation """ ensemble_better = np.logical_and(best_net_incorrect, ensemble_correct) """ Explanation: Boolean array whether each image in the test-set was correctly classified by the ensemble and incorrectly classified by the best neural network. 
End of explanation """ ensemble_better.sum() """ Explanation: Number of images in the test-set where the ensemble was better than the best single network: End of explanation """ best_net_better = np.logical_and(best_net_correct, ensemble_incorrect) """ Explanation: Boolean array whether each image in the test-set was correctly classified by the best single network and incorrectly classified by the ensemble. End of explanation """ best_net_better.sum() """ Explanation: Number of images in the test-set where the best single network was better than the ensemble. End of explanation """ def plot_images_comparison(idx): plot_images(images=data.test.images[idx, :], cls_true=data.test.cls[idx], ensemble_cls_pred=ensemble_cls_pred[idx], best_cls_pred=best_net_cls_pred[idx]) """ Explanation: Helper-functions for plotting and printing comparisons Function for plotting images from the test-set and their true and predicted class-numbers. End of explanation """ def print_labels(labels, idx, num=1): # Select the relevant labels based on idx. labels = labels[idx, :] # Select the first num labels. labels = labels[0:num, :] # Round numbers to 2 decimal points so they are easier to read. labels_rounded = np.round(labels, 2) # Print the rounded labels. print(labels_rounded) """ Explanation: Function for printing the predicted labels. End of explanation """ def print_labels_ensemble(idx, **kwargs): print_labels(labels=ensemble_pred_labels, idx=idx, **kwargs) """ Explanation: Function for printing the predicted labels for the ensemble of neural networks. End of explanation """ def print_labels_best_net(idx, **kwargs): print_labels(labels=best_net_pred_labels, idx=idx, **kwargs) """ Explanation: Function for printing the predicted labels for the best single network. 
End of explanation """ def print_labels_all_nets(idx): for i in range(num_networks): print_labels(labels=pred_labels[i, :, :], idx=idx, num=1) """ Explanation: Function for printing the predicted labels of all the neural networks in the ensemble. This only prints the labels for the first image. End of explanation """ plot_images_comparison(idx=ensemble_better) """ Explanation: Examples: Ensemble is better than the best network Plot examples of images that were correctly classified by the ensemble and incorrectly classified by the best single network. End of explanation """ print_labels_ensemble(idx=ensemble_better, num=1) """ Explanation: The ensemble's predicted labels for the first of these images (top left image): End of explanation """ print_labels_best_net(idx=ensemble_better, num=1) """ Explanation: The best network's predicted labels for the first of these images: End of explanation """ print_labels_all_nets(idx=ensemble_better) """ Explanation: The predicted labels of all the networks in the ensemble, for the first of these images: End of explanation """ plot_images_comparison(idx=best_net_better) """ Explanation: Examples: Best network is better than ensemble Now plot examples of images that were incorrectly classified by the ensemble but correctly classified by the best single network. End of explanation """ print_labels_ensemble(idx=best_net_better, num=1) """ Explanation: The ensemble's predicted labels for the first of these images (top left image): End of explanation """ print_labels_best_net(idx=best_net_better, num=1) """ Explanation: The best single network's predicted labels for the first of these images: End of explanation """ print_labels_all_nets(idx=best_net_better) """ Explanation: The predicted labels of all the networks in the ensemble, for the first of these images: End of explanation """ # This has been commented out in case you want to modify and experiment # with the Notebook without having to restart it. 
# session.close() """ Explanation: Close TensorFlow Session We are now done using TensorFlow, so we close the session to release its resources. End of explanation """
abulbasar/machine-learning
Scikit - 06 Text Processing.ipynb
apache-2.0
import pandas as pd # Used for dataframe functions import json # parse json string import nltk # Natural language toolkit for TFIDF etc. from bs4 import BeautifulSoup # Parse html string to extract text import re # Regex parser import numpy as np # Linear algebra from sklearn import * # machine learning import matplotlib.pyplot as plt # Visualization # Wordcloud does not work on Windows. # Comment the below if you want to skip from wordcloud import WordCloud # Word cloud visualization import scipy #Sparse matrix np.set_printoptions(precision=4) pd.options.display.max_columns = 1000 pd.options.display.max_rows = 10 pd.options.display.float_format = lambda f: "%.4f" % f %matplotlib inline """ Explanation: Dataset Download the dataset and save it to a directory as per your convenience. IMDB comments End of explanation """ import nltk nltk.download("punkt") nltk.download("stopwords") nltk.download("wordnet") nltk.download('averaged_perceptron_tagger') nltk.download("vader_lexicon") print(nltk.__version__) """ Explanation: Run the following lines the first time you run this notebook on your system. End of explanation """ # The following line does not work on Windows systems !head -n 1 /data/imdb-comments.json data = [] with open("/data/imdb-comments.json", "r", encoding="utf8") as f: for l in f.readlines(): data.append(json.loads(l)) comments = pd.DataFrame.from_dict(data) comments.sample(10) comments.info() comments.label.value_counts() comments.groupby(["label", "sentiment"]).content.count().unstack() np.random.seed(1) v = list(comments["content"].sample(1))[0] v comments.head() comments["content"].values[0] """ Explanation: Now let's see how to create a text classifier using nltk and scikit-learn.
End of explanation """ from nltk.sentiment.vader import SentimentIntensityAnalyzer sia = SentimentIntensityAnalyzer() sia.polarity_scores(comments["content"].values[0]) def sentiment_score(text): return sia.polarity_scores(text)["compound"] sentiment_score(comments["content"].values[0]) %%time comments["vader_score"] = comments["content"].apply(lambda text: sentiment_score(text)) comments["vader_sentiment"] = np.where(comments["vader_score"]>0, "pos", "neg") comments.head() comments.vader_sentiment.value_counts() print(metrics.classification_report(comments["sentiment"], comments["vader_sentiment"])) """ Explanation: Vader Sentiment Analysis End of explanation """ def preprocess(text): # Remove html tags text = BeautifulSoup(text.lower(), "html5lib").text # Replace the occurrences of multiple consecutive non-word characters # with a single space (" ") text = re.sub(r"[\W]+", " ", text) return text preprocess(v) %%time # Apply the preprocessing logic to all comments comments["content"] = comments["content"].apply(preprocess) comments_train = comments[comments["label"] == "train"] comments_train.sample(10) comments_test = comments[comments["label"] == "test"] comments_test.sample(10) X_train = comments_train["content"].values y_train = np.where(comments_train.sentiment == "pos", 1, 0) X_test = comments_test["content"].values y_test = np.where(comments_test.sentiment == "pos", 1, 0) # http://snowball.tartarus.org/algorithms/porter/stemmer.html # http://www.nltk.org/howto/stem.html from nltk.stem.snowball import SnowballStemmer from nltk.stem.porter import PorterStemmer print(SnowballStemmer.languages) porter = PorterStemmer() snowball = SnowballStemmer("english") lemmatizer = nltk.wordnet.WordNetLemmatizer() values = [] for s in nltk.word_tokenize(""" revival allowance inference relational runner runs ran has having generously wasn't leaves swimming relative relating """): values.append((s, porter.stem(s) , snowball.stem(s), lemmatizer.lemmatize(s, "v")))
pd.DataFrame(values, columns = ["original", "porter", "snowball", "lemmatizer"]) stopwords = nltk.corpus.stopwords.words("english") print(len(stopwords), stopwords) """ Explanation: As we see above, the accuracy is in the range of 0.70. The Vader model performed better for the positive sentiment compared to the negative sentiment. Let's now build a statistical model using TFIDF, which generally performs better. Sentiment Analysis using a statistical model with TFIDF End of explanation """ stopwords.remove("no") stopwords.remove("nor") stopwords.remove("not") sentence = """Financial Services revenues increased $0.5 billion, or 5%, primarily due to lower impairments and volume growth, partially offset by lower gains.""" stemmer = SnowballStemmer("english") #stemmer = PorterStemmer() def my_tokenizer(s): terms = nltk.word_tokenize(s.lower()) #terms = re.split("\s", s.lower()) #terms = [re.sub(r"[\.!]", "", v) for v in terms if len(v)>2] #terms = [v for v in terms if len(v)>2] terms = [v for v in terms if v not in stopwords] terms = [stemmer.stem(w) for w in terms] #terms = [term for term in terms if len(term) > 2] return terms print(my_tokenizer(sentence)) tfidf = feature_extraction.text.TfidfVectorizer(tokenizer=my_tokenizer, max_df = 0.95, min_df=0.0001 , ngram_range=(1, 2)) corpus = ["Today is Wednesday" , "Delhi weather is hot today." , "Delhi roads are not busy in the morning"] doc_term_matrix = tfidf.fit_transform(corpus) # returns term and index in the feature matrix print("Vocabulary: ", tfidf.vocabulary_) columns = [None] * len(tfidf.vocabulary_) for term in tfidf.vocabulary_: columns[tfidf.vocabulary_[term]] = term columns scores = pd.DataFrame(doc_term_matrix.toarray() , columns= columns) scores X_train_tfidf = tfidf.fit_transform(X_train) X_test_tfidf = tfidf.transform(X_test) X_test_tfidf.shape, y_test.shape, X_train_tfidf.shape, y_train.shape """ Explanation: Let's drop the following words from the stopwords since they are likely good indicators of sentiment.
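Keeping the negation words matters because dropping them can flip the apparent sentiment of a phrase. A tiny illustration with a stand-in stopword set (not nltk's full English list):

```python
# A tiny stand-in for nltk's English stopword list.
stopwords = {"the", "is", "a", "not", "no", "nor"}
kept_negations = stopwords - {"not", "no", "nor"}  # negations removed from the stop list

tokens = "the movie is not good".split()

print([t for t in tokens if t not in stopwords])       # ['movie', 'good']
print([t for t in tokens if t not in kept_negations])  # ['movie', 'not', 'good']
```

With the full stopword list the review collapses to "movie good"; keeping the negations preserves "not good", which downstream bigram features (`ngram_range=(1, 2)`) can pick up.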
End of explanation """ cell_count = np.prod(X_train_tfidf.shape) bytes = cell_count * 4 GBs = bytes / (1024 ** 3) GBs sparsity = 1 - X_train_tfidf.count_nonzero() / cell_count sparsity 1 - X_train_tfidf.nnz / cell_count print("Type of doc_term_matrix", type(X_train_tfidf)) """ Explanation: Let's estimate the memory requirement if the data is presented in dense matrix format End of explanation """ print(X_train_tfidf.data.nbytes / (1024.0 ** 3), "GB") """ Explanation: Byte size of the sparse training doc-term matrix End of explanation """ %%time lr = linear_model.LogisticRegression(C = 0.6, random_state = 1 , n_jobs = 8, solver="saga") lr.fit(X_train_tfidf, y_train) y_train_pred = lr.predict(X_train_tfidf) y_test_pred = lr.predict(X_test_tfidf) print("Training accuracy: ", metrics.accuracy_score(y_train, y_train_pred)) print("Test accuracy: ", metrics.accuracy_score(y_test, y_test_pred)) fpr, tpr, thresholds = metrics.roc_curve(y_test, lr.predict_proba(X_test_tfidf)[:, 1]) auc = metrics.auc(fpr, tpr) plt.plot(fpr, tpr) plt.ylim(0, 1) plt.xlim(0, 1) plt.plot([0,1], [0,1], ls = "--", color = "k") plt.xlabel("False Positive Rate") plt.ylabel("True Positive Rate") plt.title("ROC Curve, auc: %.4f" % auc); %%time from sklearn import naive_bayes, ensemble bayes = naive_bayes.MultinomialNB(alpha=1) bayes.fit(X_train_tfidf, y_train) print("accuracy: ", bayes.score(X_test_tfidf, y_test)) %%time est = tree.DecisionTreeClassifier() est.fit(X_train_tfidf, y_train) print("accuracy: ", est.score(X_test_tfidf, y_test)) columns = [None] * len(tfidf.vocabulary_) for term in tfidf.vocabulary_: columns[tfidf.vocabulary_[term]] = term result = pd.DataFrame({"feature": columns , "importance": est.feature_importances_}) result = result.sort_values("importance", ascending = False) result = result[result.importance > 0.0] print("Top 50 terms: ", list(result.feature[:50])) """ Explanation: Classification Model End of explanation """ vocab_by_term = tfidf.vocabulary_ vocab_by_idx =
dict({(vocab_by_term[term], term) for term in vocab_by_term}) str(vocab_by_term)[:100] str(vocab_by_idx)[:100] idx = 5 print("Content:\n", X_train[idx]) row = X_train_tfidf[idx] terms = [(vocab_by_idx[row.indices[i]], row.data[i]) for i, term in enumerate(row.indices)] pd.Series(dict(terms)).sort_values(ascending = False) idx = 50 row = X_train_tfidf[idx] terms = [(vocab_by_idx[row.indices[i]], row.data[i]) for i, term in enumerate(row.indices)] top_terms= list(pd.Series(dict(terms))\ .sort_values(ascending = False)[:50].index) wc = WordCloud(background_color="white", width=500, height=500, max_words=50).generate("+".join(top_terms)) plt.figure(figsize=(10, 10)) plt.imshow(wc) plt.axis("off"); """ Explanation: Important terms for a document End of explanation """ %%time tfidf =feature_extraction.text.TfidfVectorizer( tokenizer=my_tokenizer , stop_words = stopwords , ngram_range=(1, 2) ) pipe = pipeline.Pipeline([ ("tfidf", tfidf), ("est", linear_model.LogisticRegression(C = 1.0, random_state = 1 , n_jobs = 8, solver="saga")) ]) pipe.fit(X_train, y_train) import pickle with open("/tmp/model.pkl", "wb") as f: pickle.dump(pipe, f) !ls -lh /tmp/model.pkl with open("/tmp/model.pkl", "rb") as f: model = pickle.load(f) doc1 = """when we started watching this series on cable i had no idea how addictive it would be even when you hate a character you hold back because they are so beautifully developed you can almost understand why they react to frustration fear greed or temptation the way they do it s almost as if the viewer is experiencing one of christopher s learning curves i can t understand why adriana would put up with christopher s abuse of her verbally physically and emotionally but i just have to read the newspaper to see how many women can and do tolerate such behavior carmella has a dream house endless supply of expensive things but i m sure she would give it up for a loving and faithful husband or maybe not that s why i watch it doesn t matter how many times you 
watch an episode you can find something you missed the first five times we even watch episodes out of sequence watch season 1 on late night with commercials but all the language a e with language censored reruns on the movie network whenever they re on we re there we ve been totally spoiled now i also love the malaprop s an albacore around my neck is my favorite of johnny boy when these jewels have entered our family vocabulary it is a sign that i should get a life i will when the series ends and i have collected all the dvd s and put the collection in my will""" doc1 = preprocess(doc1) model.predict_proba(np.array([doc1]))[:, 1] """ Explanation: Build Pipeline for classification Model End of explanation """ hashing_vectorizer = feature_extraction.text.HashingVectorizer(n_features=2 ** 3 , tokenizer=my_tokenizer, ngram_range=(1, 2)) corpus = ["Today is Wednesday" , "Delhi weather is hot today." , "Delhi roads are not busy in the morning"] doc_term_matrix = hashing_vectorizer.fit_transform(corpus) pd.DataFrame(doc_term_matrix.toarray()) # Each cell is normalized (l2) row-wise %%time n_features = int(X_train_tfidf.shape[1] * 0.8) hashing_vectorizer = feature_extraction.text.HashingVectorizer(n_features=n_features , tokenizer=my_tokenizer, ngram_range=(1, 2)) X_train_hash = hashing_vectorizer.fit_transform(X_train) X_test_hash = hashing_vectorizer.transform(X_test) X_train_hash X_train_hash.shape, X_test_hash.shape print(X_train_hash.data.nbytes / (1024.0 ** 3), "GB") %%time lr = linear_model.LogisticRegression(C = 1.0, random_state = 1, solver = "liblinear") lr.fit(X_train_hash, y_train) y_train_pred = lr.predict(X_train_hash) y_test_pred = lr.predict(X_test_hash) print("Training accuracy: ", metrics.accuracy_score(y_train, y_train_pred)) print("Test accuracy: ", metrics.accuracy_score(y_test, y_test_pred)) print(metrics.classification_report(y_test, y_test_pred)) """ Explanation: Hashing Vectorizer Convert a collection of text documents to a matrix of deterministic
hash token (murmur3) occurrences. It turns a collection of text documents into a scipy.sparse matrix holding token occurrence counts (or binary occurrence information), possibly normalized as token frequencies if norm='l1' or projected on the Euclidean unit sphere if norm='l2'. Advantages - it is very low memory scalable to large datasets as there is no need to store a vocabulary dictionary in memory - it is fast to pickle and un-pickle as it holds no state besides the constructor parameters - it can be used in a streaming (partial fit) or parallel pipeline as there is no state computed during fit. Disadvantages - there is no way to compute the inverse transform (from feature indices to string feature names) which can be a problem when trying to introspect which features are most important to a model. - there can be collisions: distinct tokens can be mapped to the same feature index. However in practice this is rarely an issue if n_features is large enough (e.g. 2 ** 18 for text classification problems). - no IDF weighting as this would render the transformer stateful. End of explanation """
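The core of the hashing trick can be sketched without scikit-learn: each token is mapped straight to a column index by a hash function, so no vocabulary is stored — which is also why the mapping cannot be inverted. Here md5 stands in for the murmur3 hash that HashingVectorizer actually uses, and `bucket`/`hash_vectorize` are made-up helper names:

```python
import hashlib

def bucket(token, n_features):
    # Deterministic hash -> column index (md5 as a stand-in for murmur3).
    digest = hashlib.md5(token.encode("utf8")).hexdigest()
    return int(digest, 16) % n_features

def hash_vectorize(tokens, n_features=8):
    # Count tokens per bucket; collisions silently share a column.
    counts = [0] * n_features
    for token in tokens:
        counts[bucket(token, n_features)] += 1
    return counts

vec = hash_vectorize("delhi weather is hot today".split())
print(len(vec), sum(vec))  # 8 5
```

Every token lands in one of the 8 buckets, so the transform is stateless and streaming-friendly, but with `n_features` this small, collisions between distinct tokens are likely — hence the `2 ** 18` recommendation above.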
bourneli/deep-learning-notes
DAT236x Deep Learning Explained/Lab1_MNIST_DataLoader.ipynb
mit
# Import the relevant modules to be used later from __future__ import print_function import gzip import matplotlib.image as mpimg import matplotlib.pyplot as plt import numpy as np import os import shutil import struct import sys try: from urllib.request import urlretrieve except ImportError: from urllib import urlretrieve # Config matplotlib for inline plotting %matplotlib inline """ Explanation: Lab 1: MNIST Data Loader This notebook is the first lab of the "Deep Learning Explained" course. It is derived from the tutorial numbered CNTK_103A in the CNTK repository. This notebook is used to download and pre-process the MNIST digit images to be used for building different models to recognize handwritten digits. Note: This notebook must be run to completion before the other course notebooks can be run. End of explanation """ # Functions to load MNIST images and unpack into train and test set. # - loadData reads image data and formats into a 28x28 long array # - loadLabels reads the corresponding labels data, 1 for each image # - load packs the downloaded image and labels data into a combined format to be read later by # CNTK text reader def loadData(src, cimg): print ('Downloading ' + src) gzfname, h = urlretrieve(src, './delete.me') print ('Done.') try: with gzip.open(gzfname) as gz: n = struct.unpack('I', gz.read(4)) # Read magic number. if n[0] != 0x3080000: raise Exception('Invalid file: unexpected magic number.') # Read number of entries. n = struct.unpack('>I', gz.read(4))[0] if n != cimg: raise Exception('Invalid file: expected {0} entries.'.format(cimg)) crow = struct.unpack('>I', gz.read(4))[0] ccol = struct.unpack('>I', gz.read(4))[0] if crow != 28 or ccol != 28: raise Exception('Invalid file: expected 28 rows/cols per image.') # Read data. 
res = np.frombuffer(gz.read(cimg * crow * ccol), dtype = np.uint8) finally: os.remove(gzfname) return res.reshape((cimg, crow * ccol)) def loadLabels(src, cimg): print ('Downloading ' + src) gzfname, h = urlretrieve(src, './delete.me') print ('Done.') try: with gzip.open(gzfname) as gz: n = struct.unpack('I', gz.read(4)) # Read magic number. if n[0] != 0x1080000: raise Exception('Invalid file: unexpected magic number.') # Read number of entries. n = struct.unpack('>I', gz.read(4)) if n[0] != cimg: raise Exception('Invalid file: expected {0} rows.'.format(cimg)) # Read labels. res = np.frombuffer(gz.read(cimg), dtype = np.uint8) finally: os.remove(gzfname) return res.reshape((cimg, 1)) def try_download(dataSrc, labelsSrc, cimg): data = loadData(dataSrc, cimg) labels = loadLabels(labelsSrc, cimg) return np.hstack((data, labels)) """ Explanation: Data download We will download the data onto the local machine. The MNIST database is a standard set of handwritten digits that has been widely used for training and testing of machine learning algorithms. It has a training set of 60,000 images and a test set of 10,000 images with each image being 28 x 28 grayscale pixels. This set is easy to use, visualize and train on any computer.
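The header parsing above follows the IDX file format: a 4-byte magic number followed by big-endian 32-bit dimension counts. A self-contained sketch of packing and unpacking such a header with `struct` (the counts below are made up):

```python
import struct

# Fake IDX image header: magic 0x00000803 (2051) for unsigned-byte 3-D data,
# followed by the number of images, rows and columns -- all big-endian uint32.
header = struct.pack(">IIII", 0x803, 2, 28, 28)

magic, n_images, rows, cols = struct.unpack(">IIII", header)
print(magic, n_images, rows, cols)  # 2051 2 28 28
```

Note that loadData reads the magic number with native byte order (`'I'`), which is why it compares against the byte-swapped constant 0x3080000 on a little-endian machine, while the subsequent counts are read big-endian (`'>I'`).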
End of explanation """ # URLs for the train image and labels data url_train_image = 'http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz' url_train_labels = 'http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz' num_train_samples = 60000 print("Downloading train data") train = try_download(url_train_image, url_train_labels, num_train_samples) url_test_image = 'http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz' url_test_labels = 'http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz' num_test_samples = 10000 print("Downloading test data") test = try_download(url_test_image, url_test_labels, num_test_samples) """ Explanation: Download the data In the following code, we use the functions defined above to download and unzip the MNIST data into memory. The training set has 60000 images while the test set has 10000 images. End of explanation """ # Plot a random image sample_number = 5001 plt.imshow(train[sample_number,:-1].reshape(28,28), cmap="gray_r") plt.axis('off') print("Image Label: ", train[sample_number,-1]) """ Explanation: Visualize the data Here, we use matplotlib to display one of the training images and its associated label.
End of explanation """ # Save the data files into a format compatible with CNTK text reader def savetxt(filename, ndarray): dir = os.path.dirname(filename) if not os.path.exists(dir): os.makedirs(dir) if not os.path.isfile(filename): print("Saving", filename ) with open(filename, 'w') as f: labels = list(map(' '.join, np.eye(10, dtype=np.uint).astype(str))) for row in ndarray: row_str = row.astype(str) label_str = labels[row[-1]] feature_str = ' '.join(row_str[:-1]) f.write('|labels {} |features {}\n'.format(label_str, feature_str)) else: print("File already exists", filename) # Save the train and test files (prefer our default path for the data) data_dir = os.path.join("..", "Examples", "Image", "DataSets", "MNIST") if not os.path.exists(data_dir): data_dir = os.path.join("data", "MNIST") print ('Writing train text file...') savetxt(os.path.join(data_dir, "Train-28x28_cntk_text.txt"), train) print ('Writing test text file...') savetxt(os.path.join(data_dir, "Test-28x28_cntk_text.txt"), test) print('Done') """ Explanation: Save the images Save the images in a local directory. While saving the data we flatten the images to a vector (28x28 image pixels become an array of length 784 data points). The labels are encoded as 1-hot encoding (a label of 3 with 10 digits becomes 0001000000, where the first index corresponds to digit 0 and the last one corresponds to digit 9). End of explanation """
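The 1-hot encoding written out by `savetxt` (via `np.eye`) can be checked in isolation:

```python
import numpy as np

labels = np.array([3, 0, 9])
# Row i of eye(10) is the one-hot vector for digit i.
one_hot = np.eye(10, dtype=np.uint8)[labels]

print(one_hot[0])  # [0 0 0 1 0 0 0 0 0 0]
```

Each row has exactly one hot bit, at the index of the digit, matching the `0001000000` example for label 3.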
rcurrie/tumornormal
shapely.ipynb
apache-2.0
import os import json import numpy as np import pandas as pd import keras import matplotlib.pyplot as plt # fix random seed for reproducibility np.random.seed(42) """ Explanation: Train a binary tumor/normal classifier and explain via Shapley values Train a neural network on TCGA+TARGET+GTEX gene expression to classify tumor vs. normal. Evaluate the model and explain using SHapley Additive exPlanations End of explanation """ %%time X = pd.read_hdf("data/tcga_target_gtex.h5", "expression") Y = pd.read_hdf("data/tcga_target_gtex.h5", "labels") # Prune X to only KEGG pathway genes # with open("data/c2.cp.kegg.v6.1.symbols.gmt") as f: # genes_subset = list(set().union(*[line.strip().split("\t")[2:] for line in f.readlines()])) # Prune X to only Cosmic Cancer Genes genes_subset = pd.read_csv("data/cosmic_germline.tsv", sep="\t")["Gene Symbol"].values X_pruned = X.drop(labels=(set(X.columns) - set(genes_subset)), axis=1, errors="ignore") # order must match dataframe genes = list(X_pruned.columns.values) print("Pruned expression to only include", len(genes), "genes") # Create a one-hot for tumor/normal training and numeric disease label for stratification from sklearn.preprocessing import LabelEncoder tumor_normal_encoder = LabelEncoder() Y["tumor_normal_value"] = pd.Series(tumor_normal_encoder.fit_transform(Y["tumor_normal"]), index=Y.index) disease_encoder = LabelEncoder() Y["disease_value"] = pd.Series(disease_encoder.fit_transform(Y["disease"]), index=Y.index) # Divide into training and test sets stratified by disease # Split into stratified training and test sets based on primary site from sklearn.model_selection import StratifiedShuffleSplit split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42) for train_index, test_index in split.split(X.values, Y.disease): X_train = X_pruned.values[train_index] X_test = X_pruned.values[test_index] Y_train = Y.iloc[train_index] Y_test = Y.iloc[test_index] print(X_train.shape, X_test.shape) # Let's see how big each
class is based on primary site plt.hist(Y_train.disease_value.values, alpha=0.5, label='Train') plt.hist(Y_test.disease_value.values, alpha=0.5, label='Test') plt.legend(loc='upper right') plt.title("Disease distribution between train and test sets") plt.show() # Lets see how big each class is based on primary site plt.hist(Y_train.tumor_normal_value.values, alpha=0.5, label='Train') plt.hist(Y_test.tumor_normal_value.values, alpha=0.5, label='Test') plt.legend(loc='upper right') plt.title("Tumor/normal distribution between train and test sets") plt.show() """ Explanation: Load and Wrangle Data End of explanation """ %%time from keras.models import Model from keras.layers import Input, BatchNormalization, Dense, Dropout from keras.callbacks import EarlyStopping from keras import regularizers def create_model(input_shape, output_shape, params): inputs = Input(shape=(input_shape,)) x = BatchNormalization()(inputs) x = Dense(16, activation="relu")(x) x = Dropout(0.5)(x) x = Dense(16, activation="relu")(x) x = Dropout(0.5)(x) outputs = Dense(output_shape, kernel_initializer="normal", activation="sigmoid")(x) model = Model(inputs=inputs, outputs=outputs) model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"]) return model model = create_model(X_train.shape[1], 1, {}) model.summary() callbacks = [EarlyStopping(monitor="acc", min_delta=0.05, patience=2, verbose=2, mode="max")] model.fit(X_train, Y_train.tumor_normal_value.values, epochs=10, batch_size=128, shuffle="batch", callbacks=callbacks) print(model.metrics_names, model.evaluate(X_test, Y_test.tumor_normal_value.values)) # Save the model to disk so we can read and predict without training # See https://github.com/h5py/h5py/issues/712 os.environ["HDF5_USE_FILE_LOCKING"] = "FALSE" with open("models/disease.params.json", "w") as f: f.write(json.dumps({ "tumor_normal": tumor_normal_encoder.classes_.tolist(), "diseases": disease_encoder.classes_.tolist(), "genes": genes})) with 
open("models/disease.model.json", "w") as f: f.write(model.to_json()) model.save_weights("models/disease.weights.h5") # Load the model and predict the test set so we're using exactly what we'll load later from disk model = keras.models.model_from_json(open("models/disease.model.json").read()) model.load_weights("models/disease.weights.h5") params = json.loads(open("models/disease.params.json").read()) """ Explanation: Build and Train Model End of explanation """ import shap # import warnings; warnings.simplefilter('ignore') def shap_predict(X): return model.predict(X).flatten() shap.initjs() """ Explanation: Explain Evaluate several tissues to see if Shapely values re-capitulate any known biomarkers End of explanation """ # Select tumor and normal samples from a single tissue as background X_tumor = X_pruned.loc[Y[Y.disease == "Breast Invasive Carcinoma"].index] X_normal = X_pruned.loc[Y[Y.disease == "Breast - Mammary Tissue"].index] print("Found {} tumor and {} normal samples".format(X_tumor.shape[0], X_normal.shape[0])) background_samples = pd.concat([X_tumor.iloc[:25], X_normal.iloc[:25]]) print("Explantion based on {} samples".format(background_samples.shape[0])) explainer = shap.KernelExplainer(shap_predict, background_samples) # Show details for a tumor sample np.random.seed(42) sample_shap_values = explainer.shap_values(X_tumor.iloc[10], nsamples=150) shap.force_plot(sample_shap_values, X_tumor.iloc[10]) # Show details for a normal sample np.random.seed(42) sample_shap_values = explainer.shap_values(X_normal.iloc[0], nsamples=150) shap.force_plot(sample_shap_values, X_normal.iloc[0]) # Explain a subset of tumor and normal samples background_shap_values = explainer.shap_values(background_samples.iloc[::5], nsamples=150) shap.force_plot(background_shap_values, background_samples.iloc[::5]) shap.summary_plot(background_shap_values, background_samples.iloc[::5]) """ Explanation: Explain Breast Predictions End of explanation """
statsmodels/statsmodels.github.io
v0.12.2/examples/notebooks/generated/markov_regression.ipynb
bsd-3-clause
%matplotlib inline import numpy as np import pandas as pd import statsmodels.api as sm import matplotlib.pyplot as plt # NBER recessions from pandas_datareader.data import DataReader from datetime import datetime usrec = DataReader('USREC', 'fred', start=datetime(1947, 1, 1), end=datetime(2013, 4, 1)) """ Explanation: Markov switching dynamic regression models This notebook provides an example of the use of Markov switching models in statsmodels to estimate dynamic regression models with changes in regime. It follows the examples in the Stata Markov switching documentation, which can be found at http://www.stata.com/manuals14/tsmswitch.pdf. End of explanation """ # Get the federal funds rate data from statsmodels.tsa.regime_switching.tests.test_markov_regression import fedfunds dta_fedfunds = pd.Series(fedfunds, index=pd.date_range('1954-07-01', '2010-10-01', freq='QS')) # Plot the data dta_fedfunds.plot(title='Federal funds rate', figsize=(12,3)) # Fit the model # (a switching mean is the default of the MarkovRegression model) mod_fedfunds = sm.tsa.MarkovRegression(dta_fedfunds, k_regimes=2) res_fedfunds = mod_fedfunds.fit() res_fedfunds.summary() """ Explanation: Federal funds rate with switching intercept The first example models the federal funds rate as noise around a constant intercept, but where the intercept changes during different regimes. The model is simply: $$r_t = \mu_{S_t} + \varepsilon_t \qquad \varepsilon_t \sim N(0, \sigma^2)$$ where $S_t \in \{0, 1\}$, and the regime transitions according to $$ P(S_t = s_t | S_{t-1} = s_{t-1}) = \begin{bmatrix} p_{00} & p_{10} \\ 1 - p_{00} & 1 - p_{10} \end{bmatrix} $$ We will estimate the parameters of this model by maximum likelihood: $p_{00}, p_{10}, \mu_0, \mu_1, \sigma^2$. The data used in this example can be found at https://www.stata-press.com/data/r14/usmacro. 
End of explanation """ res_fedfunds.smoothed_marginal_probabilities[1].plot( title='Probability of being in the high regime', figsize=(12,3)); """ Explanation: From the summary output, the mean federal funds rate in the first regime (the "low regime") is estimated to be $3.7$ whereas in the "high regime" it is $9.6$. Below we plot the smoothed probabilities of being in the high regime. The model suggests that the 1980's was a time-period in which a high federal funds rate existed. End of explanation """ print(res_fedfunds.expected_durations) """ Explanation: From the estimated transition matrix we can calculate the expected duration of a low regime versus a high regime. End of explanation """ # Fit the model mod_fedfunds2 = sm.tsa.MarkovRegression( dta_fedfunds.iloc[1:], k_regimes=2, exog=dta_fedfunds.iloc[:-1]) res_fedfunds2 = mod_fedfunds2.fit() res_fedfunds2.summary() """ Explanation: A low regime is expected to persist for about fourteen years, whereas the high regime is expected to persist for only about five years. Federal funds rate with switching intercept and lagged dependent variable The second example augments the previous model to include the lagged value of the federal funds rate. $$r_t = \mu_{S_t} + r_{t-1} \beta_{S_t} + \varepsilon_t \qquad \varepsilon_t \sim N(0, \sigma^2)$$ where $S_t \in {0, 1}$, and the regime transitions according to $$ P(S_t = s_t | S_{t-1} = s_{t-1}) = \begin{bmatrix} p_{00} & p_{10} \ 1 - p_{00} & 1 - p_{10} \end{bmatrix} $$ We will estimate the parameters of this model by maximum likelihood: $p_{00}, p_{10}, \mu_0, \mu_1, \beta_0, \beta_1, \sigma^2$. End of explanation """ res_fedfunds2.smoothed_marginal_probabilities[0].plot( title='Probability of being in the high regime', figsize=(12,3)); """ Explanation: There are several things to notice from the summary output: The information criteria have decreased substantially, indicating that this model has a better fit than the previous model. 
The interpretation of the regimes, in terms of the intercept, has switched. Now the first regime has the higher intercept and the second regime has a lower intercept. Examining the smoothed probabilities of the high regime state, we now see quite a bit more variability. End of explanation """ print(res_fedfunds2.expected_durations) """ Explanation: Finally, the expected durations of each regime have decreased quite a bit. End of explanation """ # Get the additional data from statsmodels.tsa.regime_switching.tests.test_markov_regression import ogap, inf dta_ogap = pd.Series(ogap, index=pd.date_range('1954-07-01', '2010-10-01', freq='QS')) dta_inf = pd.Series(inf, index=pd.date_range('1954-07-01', '2010-10-01', freq='QS')) exog = pd.concat((dta_fedfunds.shift(), dta_ogap, dta_inf), axis=1).iloc[4:] # Fit the 2-regime model mod_fedfunds3 = sm.tsa.MarkovRegression( dta_fedfunds.iloc[4:], k_regimes=2, exog=exog) res_fedfunds3 = mod_fedfunds3.fit() # Fit the 3-regime model np.random.seed(12345) mod_fedfunds4 = sm.tsa.MarkovRegression( dta_fedfunds.iloc[4:], k_regimes=3, exog=exog) res_fedfunds4 = mod_fedfunds4.fit(search_reps=20) res_fedfunds3.summary() res_fedfunds4.summary() """ Explanation: Taylor rule with 2 or 3 regimes We now include two additional exogenous variables - a measure of the output gap and a measure of inflation - to estimate a switching Taylor-type rule with both 2 and 3 regimes to see which fits the data better. Because the models can often be difficult to estimate, for the 3-regime model we employ a search over starting parameters to improve results, specifying 20 random search repetitions. 
End of explanation """ fig, axes = plt.subplots(3, figsize=(10,7)) ax = axes[0] ax.plot(res_fedfunds4.smoothed_marginal_probabilities[0]) ax.set(title='Smoothed probability of a low-interest rate regime') ax = axes[1] ax.plot(res_fedfunds4.smoothed_marginal_probabilities[1]) ax.set(title='Smoothed probability of a medium-interest rate regime') ax = axes[2] ax.plot(res_fedfunds4.smoothed_marginal_probabilities[2]) ax.set(title='Smoothed probability of a high-interest rate regime') fig.tight_layout() """ Explanation: Due to lower information criteria, we might prefer the 3-state model, with an interpretation of low-, medium-, and high-interest rate regimes. The smoothed probabilities of each regime are plotted below. End of explanation """ # Get the federal funds rate data from statsmodels.tsa.regime_switching.tests.test_markov_regression import areturns dta_areturns = pd.Series(areturns, index=pd.date_range('2004-05-04', '2014-5-03', freq='W')) # Plot the data dta_areturns.plot(title='Absolute returns, S&P500', figsize=(12,3)) # Fit the model mod_areturns = sm.tsa.MarkovRegression( dta_areturns.iloc[1:], k_regimes=2, exog=dta_areturns.iloc[:-1], switching_variance=True) res_areturns = mod_areturns.fit() res_areturns.summary() """ Explanation: Switching variances We can also accommodate switching variances. In particular, we consider the model $$ y_t = \mu_{S_t} + y_{t-1} \beta_{S_t} + \varepsilon_t \quad \varepsilon_t \sim N(0, \sigma_{S_t}^2) $$ We use maximum likelihood to estimate the parameters of this model: $p_{00}, p_{10}, \mu_0, \mu_1, \beta_0, \beta_1, \sigma_0^2, \sigma_1^2$. The application is to absolute returns on stocks, where the data can be found at https://www.stata-press.com/data/r14/snp500. End of explanation """ res_areturns.smoothed_marginal_probabilities[0].plot( title='Probability of being in a low-variance regime', figsize=(12,3)); """ Explanation: The first regime is a low-variance regime and the second regime is a high-variance regime. 
Below we plot the probabilities of being in the low-variance regime. Between 2008 and 2012 there does not appear to be a clear indication of one regime guiding the economy. End of explanation """
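The expected_durations printed in the notebook come straight from the fitted transition probabilities: time spent in regime $i$ is geometric with mean $1/(1 - p_{ii})$. A quick numpy sketch with made-up probabilities (not the fitted ones):

```python
import numpy as np

# Hypothetical 2-state transition matrix in the notebook's column convention:
# P[i, j] = P(S_t = i | S_{t-1} = j)
P = np.array([[0.98, 0.05],
              [0.02, 0.95]])

# Expected duration of regime i is 1 / (1 - p_ii): a geometric waiting time
expected_durations = 1.0 / (1.0 - np.diag(P))
print(expected_durations)  # → [50. 20.]

# Stationary distribution: the eigenvector of P for eigenvalue 1, normalized to sum to 1
vals, vecs = np.linalg.eig(P)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()
print(pi)
```

With $p_{00} = 0.98$ a "low" spell lasts 50 periods on average, which is how the fourteen-years-versus-five-years comparison in the quarterly example is obtained.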
mne-tools/mne-tools.github.io
0.21/_downloads/f781cba191074d5f4243e5933c1e870d/plot_find_ref_artifacts.ipynb
bsd-3-clause
# Authors: Jeff Hanna <jeff.hanna@gmail.com> # # License: BSD (3-clause) import mne from mne import io from mne.datasets import refmeg_noise from mne.preprocessing import ICA import numpy as np print(__doc__) data_path = refmeg_noise.data_path() """ Explanation: Find MEG reference channel artifacts Use ICA decompositions of MEG reference channels to remove intermittent noise. Many MEG systems have an array of reference channels which are used to detect external magnetic noise. However, standard techniques that use reference channels to remove noise from standard channels often fail when noise is intermittent. The technique described here (using ICA on the reference channels) often succeeds where the standard techniques do not. There are two algorithms to choose from: separate and together (default). In the "separate" algorithm, two ICA decompositions are made: one on the reference channels, and one on reference + standard channels. The reference + standard channel components which correlate with the reference channel components are removed. In the "together" algorithm, a single ICA decomposition is made on reference + standard channels, and those components whose weights are particularly heavy on the reference channels are removed. 
This technique is fully described and validated in :footcite:HannaEtAl2020 End of explanation """ raw_fname = data_path + '/sample_reference_MEG_noise-raw.fif' raw = io.read_raw_fif(raw_fname, preload=True) """ Explanation: Read raw data End of explanation """ select_picks = np.concatenate( (mne.pick_types(raw.info, meg=True)[-32:], mne.pick_types(raw.info, meg=False, ref_meg=True))) plot_kwargs = dict( duration=100, order=select_picks, n_channels=len(select_picks), scalings={"mag": 8e-13, "ref_meg": 2e-11}) raw.plot(**plot_kwargs) """ Explanation: Note that even though standard noise removal has already been applied to these data, much of the noise in the reference channels (bottom of the plot) can still be seen in the standard channels. End of explanation """ raw.plot_psd(fmax=30) """ Explanation: The PSD of these data show the noise as clear peaks. End of explanation """ raw_tog = raw.copy() ica_kwargs = dict( method='picard', fit_params=dict(tol=1e-3), # use a high tol here for speed ) all_picks = mne.pick_types(raw_tog.info, meg=True, ref_meg=True) ica_tog = ICA(n_components=60, allow_ref_meg=True, **ica_kwargs) ica_tog.fit(raw_tog, picks=all_picks) bad_comps, scores = ica_tog.find_bads_ref(raw_tog, threshold=2.5) # Plot scores with bad components marked. ica_tog.plot_scores(scores, bad_comps) # Examine the properties of removed components. It's clear from the time # courses and topographies that these components represent external, # intermittent noise. ica_tog.plot_properties(raw_tog, picks=bad_comps) # Remove the components. raw_tog = ica_tog.apply(raw_tog, exclude=bad_comps) """ Explanation: Run the "together" algorithm. End of explanation """ raw_tog.plot_psd(fmax=30) """ Explanation: Cleaned data: End of explanation """ raw_sep = raw.copy() # Do ICA only on the reference channels. 
ref_picks = mne.pick_types(raw_sep.info, meg=False, ref_meg=True) ica_ref = ICA(n_components=2, allow_ref_meg=True, **ica_kwargs) ica_ref.fit(raw_sep, picks=ref_picks) # Do ICA on both reference and standard channels. Here, we can just reuse # ica_tog from the section above. ica_sep = ica_tog.copy() # Extract the time courses of these components and add them as channels # to the raw data. Think of them the same way as EOG/EKG channels, but instead # of giving info about eye movements/cardiac activity, they give info about # external magnetic noise. ref_comps = ica_ref.get_sources(raw_sep) for c in ref_comps.ch_names: # they need to have REF_ prefix to be recognised ref_comps.rename_channels({c: "REF_" + c}) raw_sep.add_channels([ref_comps]) # Now that we have our noise channels, we run the separate algorithm. bad_comps, scores = ica_sep.find_bads_ref(raw_sep, method="separate") # Plot scores with bad components marked. ica_sep.plot_scores(scores, bad_comps) # Examine the properties of removed components. ica_sep.plot_properties(raw_sep, picks=bad_comps) # Remove the components. raw_sep = ica_sep.apply(raw_sep, exclude=bad_comps) """ Explanation: Now try the "separate" algorithm. End of explanation """ raw_sep.plot(**plot_kwargs) """ Explanation: Cleaned raw data traces: End of explanation """ raw_sep.plot_psd(fmax=30) """ Explanation: Cleaned raw data PSD: End of explanation """
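Both algorithms above ultimately ask how much of a component's mixing weight sits on the reference channels. The selection idea behind the "together" variant can be sketched without MNE: a toy mixing matrix and a hypothetical 0.5 cutoff (find_bads_ref's actual scoring differs in detail):

```python
import numpy as np

n_meg, n_ref, n_comp = 20, 3, 5

# Hypothetical ICA mixing weights (rows = channels, columns = components);
# every weight small and equal, except two components that load on the references
mixing = np.full((n_meg + n_ref, n_comp), 0.1)
mixing[n_meg:, 0] = 5.0   # component 0 sits almost entirely on the reference channels
mixing[n_meg:, 3] = 4.0   # so does component 3

# "Together"-style criterion: fraction of squared mixing weight on reference channels
ref_fraction = (mixing[n_meg:] ** 2).sum(axis=0) / (mixing ** 2).sum(axis=0)
bad_comps = np.where(ref_fraction > 0.5)[0]
print(bad_comps)  # → [0 3]
```

Components flagged this way would then be passed to ICA.apply(..., exclude=bad_comps), as in both pipelines above.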
HrantDavtyan/Data_Scraping
Week 3/W3_RegEx_1.ipynb
apache-2.0
import re with open("financier.txt","r") as f: financier = f.readlines() print financier[2:4] type(financier) """ Explanation: Regular Expressions A regular expression (RegEx) is a sequence of chatacters that expresses a pattern to be searched withing a longer piece of text. re is a Python library for regular expressions, which has several nice methods for working with strings. The list of Frequenctly used characters is available on Moodle and the course page on GitHub. Sometimes people use RegEx for scraping the web, yet one is not encouraged to do so, as better/safer alternatives exist. Yet, once text data is scraped, RegEx is an important tool for cleaningsand tidying up the dataset. Let's now go to Project Gutenberg page and find some book to download. Below, I used as an example the Financier by Theodore Dreiser. The latter can be downloaded and read from local stroage or directly from the URL. We will go for the "download and read" option. End of explanation """ output = re.findall("\$",str(financier)) print output """ Explanation: Let's see how many times Mr. Dreiser uses the $ sign in his book. For that purpose, the findall() function from the re library will be used. The function receives the expression to search for asa first argument (always inside quotes) and the string to conduct the search on as a second argument. Please, note: as financier is a list, we convert it to string to be able to pass as an argument to our fucntion, as dollar sign is a special character for RegEx, we use the forward slach before to indicate that in this case we do not use "$" as a special character, instead it is just a text. End of explanation """ output = re.findall("(\$\S*)\s",str(financier)) print output """ Explanation: Let's see at what occasions he used it. More precicely, let's read the amount of money cited in the book. 
Amount usually comes after the sign, so we will look for all non-whitespace characters after the dollar sign that are followed by a whitespace (that's where the amount ends). The brackets indicate the component we want to receive as an output. End of explanation """ output = re.findall("(@|\$)",str(financier)) print output """ Explanation: Let's use the | operator (i.e. or) to understand how many $ or @ signs were used by Mr. Dreiser. End of explanation """ output = re.findall("(?:E|e)uro",str(financier)) print output """ Explanation: Let's see how many times the word euro is used. Yet, we do not know whether the author typed Euro with a capital letter or not. So we will have to search for both. If we simply put () Python will think that's the text we need to receive. So we must explicitly mention (using ?:) that the text inside the brackets is only for the OR function, still not meaning that it is the only part of the text we want to receive. End of explanation """ output = re.findall("euro",str(financier),re.IGNORECASE) print output """ Explanation: Of course, there is an easier approach using flags: End of explanation """ sample_text = "My email is hdavtyan@aua.am" # Let's match e-mail first output = re.findall('\S+@.+',sample_text) print output """ Explanation: Now about substitution. If you want to find some text in the file and substitute it with something else, then the re.sub command may come in handy. Let's promote me to Harvard: End of explanation """ # Let's now promote me to Harvard print re.sub(r'(\S+@)(.+)', r'\1harvard.edu', sample_text) """ Explanation: When brackets are used in RegEx, they form an enumerated group that can be further called based on its order (e.g. the first part of the string inside brackets will be enumerated as group 1). End of explanation """
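The \1 backreference used in the promotion example also works with named groups, which stay readable as patterns grow. A small follow-on sketch on hypothetical e-mail text:

```python
import re

text = "Contact: jane.doe@example.com, John: j.smith@dept.example.org"

# Named groups make backreferences in re.sub easier to read than \1, \2
pattern = re.compile(r"(?P<user>[\w.]+)@(?P<domain>[\w.]+)")

# Same promotion trick as above, keeping the user part via the named group
promoted = pattern.sub(r"\g<user>@harvard.edu", text)
print(promoted)

# findall with two capturing groups returns (user, domain) tuples
print(pattern.findall(text))
```

With more than one capturing group, findall switches from a list of strings to a list of tuples, one tuple per match.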
Zhenxingzhang/AnalyticsVidhya
Articles/Getting_Started_with_BigMart_Sales(AV_Datahacks)/model_building.ipynb
apache-2.0
import pandas as pd import numpy as np %matplotlib inline from matplotlib.pylab import rcParams rcParams['figure.figsize'] = 12, 8 train = pd.read_csv("train_modified.csv") test = pd.read_csv("test_modified.csv") print train.shape train.dtypes """ Explanation: Load libraries and data: The data will be the one exported from the 'exploration_and_feature_engineering' file End of explanation """ #Mean based: mean_sales = train['Item_Outlet_Sales'].mean() #Define a dataframe with IDs for submission: base1 = test[['Item_Identifier','Outlet_Identifier']] base1['Item_Outlet_Sales'] = mean_sales #Export submission file base1.to_csv("alg0.csv",index=False) """ Explanation: Baseline models: End of explanation """ #Define target and ID columns: target = 'Item_Outlet_Sales' IDcol = ['Item_Identifier','Outlet_Identifier'] from sklearn import cross_validation, metrics def modelfit(alg, dtrain, dtest, predictors, target, IDcol, filename): #Fit the algorithm on the data alg.fit(dtrain[predictors], dtrain[target]) #Predict training set: dtrain_predictions = alg.predict(dtrain[predictors]) #Perform cross-validation: cv_score = cross_validation.cross_val_score(alg, dtrain[predictors], dtrain[target], cv=20, scoring='mean_squared_error') cv_score = np.sqrt(np.abs(cv_score)) #Print model report: print "\nModel Report" print "RMSE : %.4g" % np.sqrt(metrics.mean_squared_error(dtrain[target].values, dtrain_predictions)) print "CV Score : Mean - %.4g | Std - %.4g | Min - %.4g | Max - %.4g" % (np.mean(cv_score),np.std(cv_score),np.min(cv_score),np.max(cv_score)) #Predict on testing data: dtest[target] = alg.predict(dtest[predictors]) #Export submission file: IDcol.append(target) submission = pd.DataFrame({ x: dtest[x] for x in IDcol}) submission.to_csv(filename, index=False) """ Explanation: Function to fit and generate submission file: End of explanation """ from sklearn.linear_model import LinearRegression, Ridge predictors = [x for x in train.columns if x not in [target]+IDcol] # print 
predictors alg1 = LinearRegression(normalize=True) modelfit(alg1, train, test, predictors, target, IDcol, 'alg1.csv') coef1 = pd.Series(alg1.coef_, predictors).sort_values() coef1.plot(kind='bar', title='Model Coefficients') """ Explanation: Linear Regression Model: End of explanation """ predictors = [x for x in train.columns if x not in [target]+IDcol] alg2 = Ridge(alpha=0.05,normalize=True) modelfit(alg2, train, test, predictors, target, IDcol, 'alg2.csv') coef2 = pd.Series(alg2.coef_, predictors).sort_values() coef2.plot(kind='bar', title='Model Coefficients') """ Explanation: Ridge Regression Model: End of explanation """ from sklearn.tree import DecisionTreeRegressor predictors = [x for x in train.columns if x not in [target]+IDcol] alg3 = DecisionTreeRegressor(max_depth=15, min_samples_leaf=100) modelfit(alg3, train, test, predictors, target, IDcol, 'alg3.csv') coef3 = pd.Series(alg3.feature_importances_, predictors).sort_values(ascending=False) coef3.plot(kind='bar', title='Feature Importances') predictors = ['Item_MRP','Outlet_Type_0','Outlet_5','Outlet_Years'] alg4 = DecisionTreeRegressor(max_depth=8, min_samples_leaf=150) modelfit(alg4, train, test, predictors, target, IDcol, 'alg4.csv') coef4 = pd.Series(alg4.feature_importances_, predictors).sort_values(ascending=False) coef4.plot(kind='bar', title='Feature Importances') """ Explanation: Decision Tree Model: End of explanation """ from sklearn.ensemble import RandomForestRegressor predictors = [x for x in train.columns if x not in [target]+IDcol] alg5 = RandomForestRegressor(n_estimators=200,max_depth=5, min_samples_leaf=100,n_jobs=4) modelfit(alg5, train, test, predictors, target, IDcol, 'alg5.csv') coef5 = pd.Series(alg5.feature_importances_, predictors).sort_values(ascending=False) coef5.plot(kind='bar', title='Feature Importances') predictors = [x for x in train.columns if x not in [target]+IDcol] alg6 = RandomForestRegressor(n_estimators=400,max_depth=6, min_samples_leaf=100,n_jobs=4) 
modelfit(alg6, train, test, predictors, target, IDcol, 'alg6.csv') coef6 = pd.Series(alg6.feature_importances_, predictors).sort_values(ascending=False) coef6.plot(kind='bar', title='Feature Importances') """ Explanation: Random Forest Model: Note: random forest models are not 100% replicable. So the outputs might differ very slightly but should be around the ballpark. End of explanation """
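The np.sqrt(np.abs(cv_score)) step in modelfit exists because scikit-learn scorers follow a higher-is-better convention, so cross-validated MSE comes back negated (newer releases spell the scorer 'neg_mean_squared_error'). The underlying computation is just a per-fold RMSE, sketched here by hand on toy data:

```python
import numpy as np

rng = np.random.RandomState(0)
x = np.linspace(0, 10, 100)
y = 3.0 * x + 2.0 + rng.randn(100)  # linear signal plus unit-variance noise

# Hand-rolled 5-fold CV RMSE for a degree-1 polynomial fit,
# mirroring what modelfit() derives from cross_val_score
folds = np.array_split(rng.permutation(100), 5)
rmses = []
for k in range(5):
    test = folds[k]
    train = np.concatenate([folds[j] for j in range(5) if j != k])
    coefs = np.polyfit(x[train], y[train], deg=1)
    pred = np.polyval(coefs, x[test])
    rmses.append(np.sqrt(np.mean((y[test] - pred) ** 2)))
print("CV RMSE mean: %.3f" % np.mean(rmses))
```

Since the injected noise has a standard deviation of 1, the cross-validated RMSE should hover near 1, which is the irreducible error for this toy problem.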
mozilla-services/data-pipeline
reports/update-orphaning/Update orphaning analysis using longitudinal dataset.ipynb
mpl-2.0
import datetime as dt import urllib2 import ujson as json from os import environ %pylab inline """ Explanation: Update orphaning End of explanation """ starttime = dt.datetime.now() starttime """ Explanation: Get the time when this job was started (for debugging purposes). End of explanation """ channel_to_process = "release" sc.defaultParallelism # Uncomment the next line and adjust |today_env_str| as necessary to run manually #today_env_str = "20161105" today_env_str = environ.get("date", None) assert (today_env_str is not None), "The date environment parameter is missing." today = dt.datetime.strptime(today_env_str, "%Y%m%d").date() # Find the date of last Wednesday to get the proper 7 day range last_wednesday = today current_weekday = today.weekday() if (current_weekday < 2): last_wednesday -= (dt.timedelta(days=5) + dt.timedelta(days=current_weekday)) if (current_weekday > 2): last_wednesday -= (dt.timedelta(days=current_weekday) - dt.timedelta(days=2)) min_range = last_wednesday - dt.timedelta(days=17) report_date_str = last_wednesday.strftime("%Y%m%d") min_range_str = min_range.strftime("%Y%m%d") min_range_dash_str = min_range.strftime("%Y-%m-%d") list([last_wednesday, min_range_str, report_date_str]) """ Explanation: Declare the channel to look at. End of explanation """ sql_str = "SELECT * FROM longitudinal_v" + today_env_str frame = sqlContext.sql(sql_str) sql_str """ Explanation: The longitudinal dataset can be accessed as a Spark DataFrame, which is a distributed collection of data organized into named columns. It is conceptually equivalent to a table in a relational database or a data frame in R/Python. End of explanation """ channel_subset = frame.filter(frame.normalized_channel == channel_to_process) """ Explanation: Restrict the dataframe to the desired channel. 
End of explanation """ data_subset = channel_subset.select("subsession_start_date", "subsession_length", "update_check_code_notify", "update_check_no_update_notify", "build.version", "settings.update.enabled") """ Explanation: Restrict the dataframe to the desired data. End of explanation """ def start_date_filter(d): try: date = dt.datetime.strptime(d.subsession_start_date[0][:10], "%Y-%m-%d").date() return min_range <= date except ValueError: return False except TypeError: return False date_filtered = data_subset.rdd.filter(start_date_filter) """ Explanation: Restrict the data to the proper 7 day range, starting at least 17 days before the creation date of the longitudinal dataset. End of explanation """ def latest_version_on_date(date, major_releases): latest_date, latest_version = u"1900-01-01", 0 for version, release_date in major_releases.iteritems(): version_int = int(version.split(".")[0]) if release_date <= date and release_date >= latest_date and version_int >= latest_version: latest_date = release_date latest_version = version_int return latest_version major_releases_json = urllib2.urlopen("https://product-details.mozilla.org/1.0/firefox_history_major_releases.json").read() major_releases = json.loads(major_releases_json) latest_version = latest_version_on_date(min_range_dash_str, major_releases) def status_mapper(d): try: if d.version is None or d.version[0] is None: return ("none-version", d) curr_version = int(d.version[0].split(".")[0]) if curr_version < 42: return ("ignore-version-too-low", d) if curr_version < latest_version - 2: # Check if the user ran a particular orphaned version of Firefox for at least 2 hours in # the last 12 weeks. An orphaned user is running a version of Firefox that's at least 3 # versions behind the current version. This means that an update has been available for # at least 12 weeks. 
2 hours so most systems have had a chance to perform an update # check, download the update, and restart Firefox after the update has been downloaded. seconds = 0 curr_version = d.version[0] index = 0 twelve_weeks_ago = last_wednesday - dt.timedelta(weeks=12) while seconds < 7200 and index < len(d.version) and d.version[index] == curr_version: try: date = dt.datetime.strptime(d.subsession_start_date[index][:10], "%Y-%m-%d").date() if date < twelve_weeks_ago: return ("out-of-date-not-run-long-enough", d) seconds += d.subsession_length[index] index += 1 except ValueError: index += 1 except TypeError: index += 1 if seconds >= 7200: return ("out-of-date", d) return ("out-of-date-not-run-long-enough", d) return ("up-to-date", d) except ValueError: return ("value-error", d) statuses = date_filtered.map(status_mapper).cache() up_to_date_results = statuses.countByKey() up_to_date_json_results = json.dumps(up_to_date_results, ensure_ascii=False) up_to_date_json_results """ Explanation: Analyze the data to determine the number of users on a current version of Firefox vs. a version that's out of date. A "user on a current version" is defined as being either on the latest version as of the beginning of the 7 day range, according to the firefox_history_major_releases.json file on product-details.mozilla.org, or the two versions just prior to it. Versions prior to FF 42 are ignored since unified telemetry was not turned on by default on earlier versions. 
End of explanation """ out_of_date_statuses = statuses.filter(lambda p: "out-of-date" in p) def update_disabled_mapper(d): status, ping = d if ping is None or ping.enabled is None or ping.enabled[0] is None: return ("none-update-enabled", ping) if ping.enabled[0] == True: return ("update-enabled", ping) return ("update-disabled", ping) update_enabled_disabled_statuses = out_of_date_statuses.map(update_disabled_mapper) update_enabled_disabled_results = update_enabled_disabled_statuses.countByKey() update_enabled_disabled_json_results = json.dumps(update_enabled_disabled_results, ensure_ascii=False) update_enabled_disabled_json_results """ Explanation: For people who are out-of-date, determine how many of them have updates disabled: End of explanation """ update_enabled_statuses = update_enabled_disabled_statuses.filter(lambda p: "update-enabled" in p).cache() """ Explanation: Focus on orphaned users who have updates enabled. End of explanation """ def version_mapper(d): status, ping = d return (ping.version[0], ping) orphaned_by_versions = update_enabled_statuses.map(version_mapper) orphaned_by_versions_results = orphaned_by_versions.countByKey() orphaned_by_versions_json_results = json.dumps(orphaned_by_versions_results, ensure_ascii=False) orphaned_by_versions_json_results """ Explanation: For people who are out-of-date and have updates enabled, determine the distribution across Firefox versions. 
End of explanation """ def update_check_code_notify_mapper(d): status, ping = d if ping is None or ping.update_check_code_notify is None: return -1 for check_code in ping.update_check_code_notify: counter = -1 for i in check_code: counter += 1 if i != 0: return counter if ping.update_check_no_update_notify is not None and ping.update_check_no_update_notify[0] > 0: return 0; return -1 update_check_code_notify_statuses = update_enabled_statuses.map(update_check_code_notify_mapper) update_check_code_notify_results = update_check_code_notify_statuses.countByValue() update_check_code_notify_json_results = json.dumps(update_check_code_notify_results, ensure_ascii=False) update_check_code_notify_json_results """ Explanation: For people who are out-of-date and have updates enabled, determine what the update check returns. End of explanation """ latest_version_object = {"latest-version": latest_version} up_to_date_object = {"up-to-date": up_to_date_results} update_enabled_disabled_object = {"update-enabled-disabled": update_enabled_disabled_results} update_check_code_notify_object = {"update-check-code-notify": update_check_code_notify_results} orphaned_by_versions_object = {"orphaned-by-versions": orphaned_by_versions_results} final_results = [up_to_date_object, update_enabled_disabled_object, update_check_code_notify_object, latest_version_object, orphaned_by_versions_object] final_results_json = json.dumps(final_results, ensure_ascii=False) final_results_json """ Explanation: Write results to JSON. End of explanation """ filename = "./output/" + report_date_str + ".json" with open(filename, 'w') as f: f.write(final_results_json) filename """ Explanation: Finally, store the output in the local directory to be uploaded automatically once the job completes. 
The file will be stored at: https://analysis-output.telemetry.mozilla.org/SPARKJOBNAME/data/FILENAME End of explanation """ endtime = dt.datetime.now() endtime difference = endtime - starttime difference """ Explanation: Get the time when this job ended (for debugging purposes): End of explanation """
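The last-Wednesday arithmetic near the top of the notebook is the kind of branching that is easy to get wrong; pulled out into a function it can be checked over a whole month with the stdlib alone:

```python
import datetime as dt

def last_wednesday_on_or_before(day):
    """Same branching as the notebook: map any date to the nearest prior (or same) Wednesday."""
    wd = day.weekday()  # Monday=0 ... Sunday=6; Wednesday=2
    if wd < 2:
        return day - (dt.timedelta(days=5) + dt.timedelta(days=wd))
    if wd > 2:
        return day - (dt.timedelta(days=wd) - dt.timedelta(days=2))
    return day

for offset in range(7):
    d = dt.date(2016, 11, 7) + dt.timedelta(days=offset)  # 2016-11-07 is a Monday
    w = last_wednesday_on_or_before(d)
    print(d, "->", w, w.weekday() == 2)
```

Every result lands on a Wednesday at most six days before the input date, matching the notebook's intent of anchoring the 7-day reporting window.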
aitatanit/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
Chapter7_BayesianMachineLearning/DontOverfit.ipynb
mit
import gzip import requests import zipfile url = "https://dl.dropbox.com/s/lnly9gw8pb1xhir/overfitting.zip" results = requests.get(url) import StringIO z = zipfile.ZipFile(StringIO.StringIO(results.content)) # z.extractall() z.extractall() z.namelist() d = z.open('overfitting.csv') d.readline() import numpy as np M = np.fromstring(d.read(), sep=",") len(d.read()) np.fromstring? data = np.loadtxt("overfitting.csv", delimiter=",", skiprows=1) print """ There are also 5 other fields, case_id - 1 to 20,000, a unique identifier for each row train - 1/0, this is a flag for the first 250 rows which are the training dataset Target_Practice - we have provided all 20,000 Targets for this model, so you can develop your method completely off line. Target_Leaderboard - only 250 Targets are provided. You submit your predictions for the remaining 19,750 to the Kaggle leaderboard. Target_Evaluate - again only 250 Targets are provided. Those competitors who beat the 'benchmark' on the Leaderboard will be asked to make one further submission for the Evaluation model. """ data.shape ix_training = data[:, 1] == 1 ix_testing = data[:, 1] == 0 training_data = data[ix_training, 5:] testing_data = data[ix_testing, 5:] training_labels = data[ix_training, 2] testing_labels = data[ix_testing, 2] print "training:", training_data.shape, training_labels.shape print "testing: ", testing_data.shape, testing_labels.shape """ Explanation: Implementation of Salimans' Don't Overfit submission From Kaggle In order to achieve this we have created a simulated data set with 200 variables and 20,000 cases. An ‘equation’ based on this data was created in order to generate a Target to be predicted. Given all 20,000 cases, the problem is very easy to solve – but you only get given the Target value of 250 cases – the task is to build a model that gives the best predictions on the remaining 19,750 cases. 
End of explanation """ figsize(12, 4) hist(training_data.flatten()) print training_data.shape[0] * training_data.shape[1] """ Explanation: Develop Tim's model He mentions that the X variables are from a Uniform distribution. Let's investigate this: End of explanation """ import pymc as pm to_include = pm.Bernoulli("to_include", 0.5, size=200) coef = pm.Uniform("coefs", 0, 1, size=200) @pm.deterministic def Z(coef=coef, to_include=to_include, data=training_data): ym = np.dot(to_include * training_data, coef) return ym - ym.mean() @pm.deterministic def T(z=Z): return 0.45 * (np.sign(z) + 1.1) obs = pm.Bernoulli("obs", T, value=training_labels, observed=True) model = pm.Model([to_include, coef, Z, T, obs]) map_ = pm.MAP(model) map_.fit() mcmc = pm.MCMC(model) mcmc.sample(100000, 90000, 1) (np.round(T.value) == training_labels).mean() t_trace = mcmc.trace("T")[:] (np.round(t_trace[-500:-400, :]).mean(axis=0) == training_labels).mean() t_mean = np.round(t_trace).mean(axis=1) imshow(t_trace[-10000:, :], aspect="auto") colorbar() figsize(23, 8) coef_trace = mcmc.trace("coefs")[:] imshow(coef_trace[-10000:, :], aspect="auto", cmap=pyplot.cm.RdBu, interpolation="none") include_trace = mcmc.trace("to_include")[:] figsize(23, 8) imshow(include_trace[-10000:, :], aspect="auto", interpolation="none") """ Explanation: looks pretty right End of explanation """
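The deterministic heart of the model above, masking excluded predictors with the Bernoulli inclusion flags, projecting onto the coefficients, and squashing the sign of the centred score into a Bernoulli probability, can be sketched without PyMC. This is a plain-numpy illustration of the notebook's 0.45 * (sign(z) + 1.1) rule; the helper name predict_probs is ours, not part of the notebook:

```python
import numpy as np

def predict_probs(X, coefs, include):
    """Mirror of the notebook's deterministic Z and T steps: mask out
    excluded columns, project onto the coefficients, centre the score,
    and map its sign to a Bernoulli probability (0.945 for positive z,
    0.045 for negative z)."""
    z = np.dot(X * include, coefs)
    z = z - z.mean()
    return 0.45 * (np.sign(z) + 1.1)

# Toy check: two rows, two features, only the first feature included.
X = np.array([[1.0, 5.0], [-1.0, 5.0]])
probs = predict_probs(X, coefs=np.array([1.0, 1.0]), include=np.array([1, 0]))
```

A positive centred score yields probability 0.945 and a negative one 0.045, which is why the fitted model is so sensitive to which inclusion flags are switched on.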
steinam/teacher
jup_notebooks/data-science-ipython-notebooks-master/matplotlib/04.10-Customizing-Ticks.ipynb
mit
import matplotlib.pyplot as plt plt.style.use('classic') %matplotlib inline import numpy as np ax = plt.axes(xscale='log', yscale='log') ax.grid(); """ Explanation: <!--BOOK_INFORMATION--> <img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png"> This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub. The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book! No changes were made to the contents of this notebook from the original. <!--NAVIGATION--> < Text and Annotation | Contents | Customizing Matplotlib: Configurations and Stylesheets > Customizing Ticks Matplotlib's default tick locators and formatters are designed to be generally sufficient in many common situations, but are in no way optimal for every plot. This section will give several examples of adjusting the tick locations and formatting for the particular plot type you're interested in. Before we go into examples, it will be best for us to understand further the object hierarchy of Matplotlib plots. Matplotlib aims to have a Python object representing everything that appears on the plot: for example, recall that the figure is the bounding box within which plot elements appear. Each Matplotlib object can also act as a container of sub-objects: for example, each figure can contain one or more axes objects, each of which in turn contain other objects representing plot contents. The tick marks are no exception. Each axes has attributes xaxis and yaxis, which in turn have attributes that contain all the properties of the lines, ticks, and labels that make up the axes. Major and Minor Ticks Within each axis, there is the concept of a major tick mark, and a minor tick mark. As the names would imply, major ticks are usually bigger or more pronounced, while minor ticks are usually smaller. 
By default, Matplotlib rarely makes use of minor ticks, but one place you can see them is within logarithmic plots: End of explanation """ print(ax.xaxis.get_major_locator()) print(ax.xaxis.get_minor_locator()) print(ax.xaxis.get_major_formatter()) print(ax.xaxis.get_minor_formatter()) """ Explanation: We see here that each major tick shows a large tickmark and a label, while each minor tick shows a smaller tickmark with no label. These tick properties (that is, their locations and labels) can be customized by setting the formatter and locator objects of each axis. Let's examine these for the x axis of the plot just shown: End of explanation """ ax = plt.axes() ax.plot(np.random.rand(50)) ax.yaxis.set_major_locator(plt.NullLocator()) ax.xaxis.set_major_formatter(plt.NullFormatter()) """ Explanation: We see that both major and minor tick labels have their locations specified by a LogLocator (which makes sense for a logarithmic plot). Minor ticks, though, have their labels formatted by a NullFormatter: this says that no labels will be shown. We'll now show a few examples of setting these locators and formatters for various plots. Hiding Ticks or Labels Perhaps the most common tick/label formatting operation is the act of hiding ticks or labels. This can be done using plt.NullLocator() and plt.NullFormatter(), as shown here: End of explanation """
Having no ticks at all can be useful in many situations—for example, when you want to show a grid of images. For instance, consider the following figure, which includes images of different faces, an example often used in supervised machine learning problems (see, for example, In-Depth: Support Vector Machines): End of explanation """ fig, ax = plt.subplots(4, 4, sharex=True, sharey=True) """ Explanation: Notice that each image has its own axes, and we've set the locators to null because the tick values (pixel number in this case) do not convey relevant information for this particular visualization. Reducing or Increasing the Number of Ticks One common problem with the default settings is that smaller subplots can end up with crowded labels. We can see this in the plot grid shown here: End of explanation """ # For every axis, set the x and y major locator for axi in ax.flat: axi.xaxis.set_major_locator(plt.MaxNLocator(3)) axi.yaxis.set_major_locator(plt.MaxNLocator(3)) fig """ Explanation: Particularly for the x ticks, the numbers nearly overlap and make them quite difficult to decipher. We can fix this with the plt.MaxNLocator(), which allows us to specify the maximum number of ticks that will be displayed. Given this maximum number, Matplotlib will use internal logic to choose the particular tick locations: End of explanation """ # Plot a sine and cosine curve fig, ax = plt.subplots() x = np.linspace(0, 3 * np.pi, 1000) ax.plot(x, np.sin(x), lw=3, label='Sine') ax.plot(x, np.cos(x), lw=3, label='Cosine') # Set up grid, legend, and limits ax.grid(True) ax.legend(frameon=False) ax.axis('equal') ax.set_xlim(0, 3 * np.pi); """ Explanation: This makes things much cleaner. If you want even more control over the locations of regularly-spaced ticks, you might also use plt.MultipleLocator, which we'll discuss in the following section. 
Fancy Tick Formats Matplotlib's default tick formatting can leave a lot to be desired: it works well as a broad default, but sometimes you'd like to do something more. Consider this plot of a sine and a cosine: End of explanation """ ax.xaxis.set_major_locator(plt.MultipleLocator(np.pi / 2)) ax.xaxis.set_minor_locator(plt.MultipleLocator(np.pi / 4)) fig """ Explanation: There are a couple of changes we might like to make. First, it's more natural for this data to space the ticks and grid lines in multiples of $\pi$. We can do this by setting a MultipleLocator, which locates ticks at a multiple of the number you provide. For good measure, we'll add both major and minor ticks in multiples of $\pi/4$: End of explanation """ def format_func(value, tick_number): # find number of multiples of pi/2 N = int(np.round(2 * value / np.pi)) if N == 0: return "0" elif N == 1: return r"$\pi/2$" elif N == 2: return r"$\pi$" elif N % 2 > 0: return r"${0}\pi/2$".format(N) else: return r"${0}\pi$".format(N // 2) ax.xaxis.set_major_formatter(plt.FuncFormatter(format_func)) fig """ Explanation: But now these tick labels look a little bit silly: we can see that they are multiples of $\pi$, but the decimal representation does not immediately convey this. To fix this, we can change the tick formatter. There's no built-in formatter for what we want to do, so we'll instead use plt.FuncFormatter, which accepts a user-defined function giving fine-grained control over the tick outputs: End of explanation """
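Any callable with the (value, tick position) signature that FuncFormatter expects can be swapped in for format_func above. As an illustrative sketch (format_degrees is our own helper, not a matplotlib API), here is a formatter that labels radian tick positions in degrees instead of pi multiples:

```python
import numpy as np

def format_degrees(value, tick_number=None):
    """FuncFormatter-style callback: label a radian tick position in degrees."""
    return "{:.0f}°".format(np.degrees(value))

# With the sine/cosine plot above this would be wired up as:
#   ax.xaxis.set_major_formatter(plt.FuncFormatter(format_degrees))
label = format_degrees(np.pi)
```

Because the callback is just a function, it can be unit-tested without drawing anything.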
adriantorrie/adriantorrie.github.io
downloads/notebooks/eoddata/eoddata_web_service_calls_exchange_list.ipynb
mit
%run ../../code/version_check.py """ Explanation: Table of Contents <p><div class="lev1 toc-item"><a href="#Summary" data-toc-modified-id="Summary-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Summary</a></div><div class="lev1 toc-item"><a href="#Version-Control" data-toc-modified-id="Version-Control-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Version Control</a></div><div class="lev1 toc-item"><a href="#Change-Log" data-toc-modified-id="Change-Log-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>Change Log</a></div><div class="lev1 toc-item"><a href="#Setup" data-toc-modified-id="Setup-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>Setup</a></div><div class="lev1 toc-item"><a href="#ExchangeList()" data-toc-modified-id="ExchangeList()-5"><span class="toc-item-num">5&nbsp;&nbsp;</span>ExchangeList()</a></div><div class="lev2 toc-item"><a href="#Web-service-call" data-toc-modified-id="Web-service-call-51"><span class="toc-item-num">5.1&nbsp;&nbsp;</span>Web service call</a></div><div class="lev3 toc-item"><a href="#Gather-elements" data-toc-modified-id="Gather-elements-511"><span class="toc-item-num">5.1.1&nbsp;&nbsp;</span>Gather elements</a></div><div class="lev3 toc-item"><a href="#Get-data" data-toc-modified-id="Get-data-512"><span class="toc-item-num">5.1.2&nbsp;&nbsp;</span>Get data</a></div><div class="lev3 toc-item"><a href="#Save-to-file" data-toc-modified-id="Save-to-file-513"><span class="toc-item-num">5.1.3&nbsp;&nbsp;</span>Save to file</a></div><div class="lev3 toc-item"><a href="#Data-inspection" data-toc-modified-id="Data-inspection-514"><span class="toc-item-num">5.1.4&nbsp;&nbsp;</span>Data inspection</a></div><div class="lev2 toc-item"><a href="#Helper-function" data-toc-modified-id="Helper-function-52"><span class="toc-item-num">5.2&nbsp;&nbsp;</span>Helper function</a></div><div class="lev3 toc-item"><a href="#Usage" data-toc-modified-id="Usage-521"><span class="toc-item-num">5.2.1&nbsp;&nbsp;</span>Usage</a></div><div class="lev2 
toc-item"><a href="#Client-function" data-toc-modified-id="Client-function-53"><span class="toc-item-num">5.3&nbsp;&nbsp;</span>Client function</a></div> # Summary Part of the blog series related to making web service calls to Eoddata.com. Overview of the web service can be found [here](http://ws.eoddata.com/data.asmx). * ** View the master post of this series to build a secure credentials file.** It is used in all posts related to this series. * Download this blog post as a [jupyter notebook](https://adriantorrie.github.io/downloads/notebooks/eoddata/eoddata_web_service_calls_exchange_list.ipynb) * Download the [class definition file](https://adriantorrie.github.io/downloads/code/eoddata.py) for an easy to use client, which is demonstrated below * This post covers the `ExchangeList` call: http://ws.eoddata.com/data.asmx?op=ExchangeList # Version Control End of explanation """ %run ../../code/eoddata.py import pandas as pd import requests as r ws = 'http://ws.eoddata.com/data.asmx' ns='http://ws.eoddata.com/Data' with (Client()) as eoddata: token = eoddata.get_token() """ Explanation: Change Log Date Created: 2017-03-25 Date of Change Change Notes -------------- ---------------------------------------------------------------- 2017-03-25 Initial draft 2017-04-02 - Changed any references for `get_exchange_list()` to `exchange_list()` - Client class function returns data in fixed order now Setup End of explanation """ session = r.Session() call = 'ExchangeList' kwargs = {'Token': token,} pattern = ".//{%s}EXCHANGE" url = '/'.join((ws, call)) response = session.get(url, params=kwargs, stream=True) if response.status_code == 200: root = etree.parse(response.raw).getroot() session.close() """ Explanation: ExchangeList() Web service call End of explanation """ elements = root.findall(pattern %(ns)) """ Explanation: Gather elements End of explanation """ exchanges = sorted(element.get('Code') for element in elements) exchanges """ Explanation: Get data End of explanation 
""" with open('../../data/exchanges.csv', 'w') as f: for element in elements: f.write('"%s"\n' % '","'.join(element.attrib.values())) """ Explanation: Save to file End of explanation """ for item in root.items(): print (item) for element in root.iter(): print(element.attrib) """ Explanation: Data inspection End of explanation """ def ExchangeList(session, token): call = 'ExchangeList' kwargs = {'Token': token,} pattern = ".//{%s}EXCHANGE" url = '/'.join((ws, call)) response = session.get(url, params=kwargs, stream=True) if response.status_code == 200: root = etree.parse(response.raw).getroot() return sorted(element.get('Code') for element in elements) """ Explanation: Helper function End of explanation """ session = r.session() exchanges = ExchangeList(session, token) exchanges session.close() """ Explanation: Usage End of explanation """ # pandas dataframe is returned df = eoddata.exchange_list() df.head() """ Explanation: Client function End of explanation """
tensorflow/docs-l10n
site/ja/hub/tutorials/bangla_article_classifier.ipynb
apache-2.0
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== """ Explanation: Copyright 2019 The TensorFlow Hub Authors. Licensed under the Apache License, Version 2.0 (the "License"); End of explanation """ %%bash # https://github.com/pypa/setuptools/issues/1694#issuecomment-466010982 pip install gdown --no-use-pep517 %%bash sudo apt-get install -y unzip import os import tensorflow as tf import tensorflow_hub as hub import gdown import numpy as np from sklearn.metrics import classification_report import matplotlib.pyplot as plt import seaborn as sns """ Explanation: Bangla article classification with TF-Hub <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/hub/tutorials/bangla_article_classifier"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/bangla_article_classifier.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/bangla_article_classifier.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View on GitHub</a></td> <td> <a
href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/hub/tutorials/bangla_article_classifier.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a> </td> </table> 注意: このノートブックでは、pip を用いた Python パッケージのインストールに加え、sudo apt installを使用してシステムパッケージをインストールします。これにはunzipを使います。 この Colab は、非英語/現地語のテキスト分類に Tensorflow Hub を使用したデモンストレーションです。ここではローカル言語として ベンガル語 を選択し、事前トレーニングされた単語埋め込みを使用してベンガル語のニュース記事を 5 つのカテゴリに分類する、マルチクラス分類タスクを解決します。ベンガル語の事前トレーニング済みの単語埋め込みは fastText を使用します。これは Facebook のライブラリで、157 言語の事前トレーニング済みの単語ベクトルが公開されています。 ここでは TF-Hub (TensorFlow Hub) の事前トレーニング済みの埋め込みエクスポート機能を使用して、まず単語埋め込みをテキスト埋め込みモジュールに変換した後、そのモジュールを使用して Tensorflow の使いやすい高レベル API である tf.keras で分類器のトレーニングを行い、ディープラーニングモデルを構築します。ここでは fastText Embedding を使用していますが、他のタスクで事前トレーニングした別の埋め込みをエクスポートし、TensorFlow Hub で素早く結果を得ることも可能です。 セットアップ End of explanation """ gdown.download( url='https://drive.google.com/uc?id=1Ag0jd21oRwJhVFIBohmX_ogeojVtapLy', output='bard.zip', quiet=True ) %%bash unzip -qo bard.zip """ Explanation: データセット ここで使用するのは BARD(ベンガル語記事データセット)です。これは、様々なベンガル語のニュースポータルから収集した約 3,76,226 件の記事が、経済、州、国際、スポーツ、エンターテイメントの 5 つのカテゴリに分類されています。ファイルは Google Drive からダウンロードしますが、bit.ly/BARD_DATASET のリンクはこの GitHub リポジトリから参照しています。 End of explanation """ %%bash curl -O https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.bn.300.vec.gz curl -O https://raw.githubusercontent.com/tensorflow/hub/master/examples/text_embeddings_v2/export_v2.py gunzip -qf cc.bn.300.vec.gz --k """ Explanation: 事前トレーニング済み単語ベクトルを TF-Hub モジュールにエクスポートする TF-Hub には、単語埋め込みを TF-Hubの テキスト埋め込みモジュールに変換する、この便利なスクリプトがあります。export_v2.py と同じディレクトリに単語埋め込み用の .txt または .vec ファイルをダウンロードしてスクリプトを実行するだけで、ベンガル語やその他の言語用のモジュールを作成することができます。 エクスポーターは埋め込みベクトルを読み込んで、Tensorflow の SavedModel にエクスポートします。SavedModel には重みとグラフを含む完全な TensorFlow プログラムが含まれています。TF-Hub は SavedModel をモジュールとして読み込むことができます。モデルを構築には tf.keras を使用するので、ハブモジュールにラッパーを提供する hub.KerasLayer を用いて Keras のレイヤーとして使用します。 まず、fastText から単語埋め込みを、TF-Hub 
のレポジトリから埋め込みエクスポーターを取得します。 End of explanation """ %%bash python export_v2.py --embedding_file=cc.bn.300.vec --export_path=text_module --num_lines_to_ignore=1 --num_lines_to_use=100000 module_path = "text_module" embedding_layer = hub.KerasLayer(module_path, trainable=False) """ Explanation: 次に、エクスポートスクリプトを埋め込みファイル上で実行します。fastText Embedding にはヘッダ行があり、かなり大きい(ベンガル語でモジュール変換後 3.3GB 程度)ため、ヘッダ行を無視して最初の 100,000 トークンのみをテキスト埋め込みモジュールにエクスポートします。 End of explanation """ embedding_layer(['বাস', 'বসবাস', 'ট্রেন', 'যাত্রী', 'ট্রাক']) """ Explanation: テキスト埋め込みモジュールは、文字列の 1 次元テンソル内の文のバッチを入力として受け取り、文に対応する形状の埋め込みベクトル (batch_size, embedding_dim) を出力します。これは入力をスペースで分割して、前処理を行います。単語埋め込みは sqrtn コンバイナ(こちらを参照)を使用して文の埋め込みに結合されます。これの実演として、ベンガル語の単語リストを入力として渡し、対応する埋め込みベクトルを取得します。 End of explanation """ dir_names = ['economy', 'sports', 'entertainment', 'state', 'international'] file_paths = [] labels = [] for i, dir in enumerate(dir_names): file_names = ["/".join([dir, name]) for name in os.listdir(dir)] file_paths += file_names labels += [i] * len(os.listdir(dir)) np.random.seed(42) permutation = np.random.permutation(len(file_paths)) file_paths = np.array(file_paths)[permutation] labels = np.array(labels)[permutation] """ Explanation: Tensorflow Dataset を変換する データセットが非常に大きいため、データセット全体をメモリに読み込むのではなく、Tensorflow Dataset の関数を利用してジェネレータを使用し、実行時にサンプルをバッチで生成します。また、データセットは非常にバランスが悪いので、ジェネレータを使用する前にデータセットをシャッフルします。 End of explanation """ train_frac = 0.8 train_size = int(len(file_paths) * train_frac) # plot training vs validation distribution plt.subplot(1, 2, 1) plt.hist(labels[0:train_size]) plt.title("Train labels") plt.subplot(1, 2, 2) plt.hist(labels[train_size:]) plt.title("Validation labels") plt.tight_layout() """ Explanation: シャッフル後には、トレーニング例と検証例のラベルの分布を確認することができます。 End of explanation """ def load_file(path, label): return tf.io.read_file(path), label def make_datasets(train_size): batch_size = 256 train_files = file_paths[:train_size] train_labels = labels[:train_size] train_ds = 
tf.data.Dataset.from_tensor_slices((train_files, train_labels)) train_ds = train_ds.map(load_file).shuffle(5000) train_ds = train_ds.batch(batch_size).prefetch(tf.data.AUTOTUNE) test_files = file_paths[train_size:] test_labels = labels[train_size:] test_ds = tf.data.Dataset.from_tensor_slices((test_files, test_labels)) test_ds = test_ds.map(load_file) test_ds = test_ds.batch(batch_size).prefetch(tf.data.AUTOTUNE) return train_ds, test_ds train_data, validation_data = make_datasets(train_size) """ Explanation: To create a Dataset using a generator, we first write a generator function that reads each article from file_paths and the labels from the label array, yielding one training example at each step. We pass this generator function to the tf.data.Dataset.from_generator method and specify the output types. Each training example is a tuple containing an article of tf.string dtype and a one-hot encoded label. We split the dataset into training and validation sets at an 80:20 ratio using the tf.data.Dataset.skip and tf.data.Dataset.take methods. End of explanation """ def create_model(): model = tf.keras.Sequential([ tf.keras.layers.Input(shape=[], dtype=tf.string), embedding_layer, tf.keras.layers.Dense(64, activation="relu"), tf.keras.layers.Dense(16, activation="relu"), tf.keras.layers.Dense(5), ]) model.compile(loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True), optimizer="adam", metrics=['accuracy']) return model model = create_model() # Create earlystopping callback early_stopping_callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=3) """ Explanation: Train and evaluate the model Since we have already added a wrapper around our module so it can be used like any other Keras layer, we create a small Sequential model, a linear stack of layers. The text embedding module can be added with model.add just like any other layer. We compile the model by specifying the loss and optimizer, and train it for 10 epochs. The tf.keras API can handle TensorFlow Datasets as input, so we can pass a Dataset instance to the fit method to train the model. Since we are using a generator function, tf.data handles generating the samples, batching them, and feeding them to the model. Model End of explanation """ history = model.fit(train_data, validation_data=validation_data, epochs=5, callbacks=[early_stopping_callback]) """ Explanation: Training End of explanation """ # Plot training & validation accuracy values
plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.title('Model accuracy') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.legend(['Train', 'Test'], loc='upper left') plt.show() # Plot training & validation loss values plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('Model loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Train', 'Test'], loc='upper left') plt.show() """ Explanation: Evaluation We can visualize the accuracy and loss curves for the training and validation data using the tf.keras.callbacks.History object returned by the tf.keras.Model.fit method, which contains the loss and accuracy values for each epoch. End of explanation """ y_pred = model.predict(validation_data) y_pred = np.argmax(y_pred, axis=1) samples = file_paths[0:3] for i, sample in enumerate(samples): f = open(sample) text = f.read() print(text[0:100]) print("True Class: ", sample.split("/")[0]) print("Predicted Class: ", dir_names[y_pred[i]]) f.close() """ Explanation: Prediction We can get the predictions for the validation data and check the confusion matrix to see the model's performance for each of the 5 classes. Since the tf.keras.Model.predict method returns an N-D array of probabilities for each class, we convert them to class labels using np.argmax. End of explanation """ y_true = np.array(labels[train_size:]) print(classification_report(y_true, y_pred, target_names=dir_names)) """ Explanation: Compare performance Now that we can take the correct labels for the validation data from labels, we compare them with the predictions to get a classification_report. End of explanation """
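If the sqrtn combiner used by the exported embedding module reduces a sentence's unit-weight word vectors as their sum divided by the square root of the word count, it can be sketched in a few lines; sqrtn_combine is our name for this illustration, not a TF-Hub API:

```python
import numpy as np

def sqrtn_combine(word_vectors):
    """Combine word vectors into one sentence vector: sum / sqrt(word count)."""
    word_vectors = np.asarray(word_vectors, dtype=float)
    return word_vectors.sum(axis=0) / np.sqrt(len(word_vectors))

# Two orthogonal unit "word vectors" combine into a unit-norm sentence vector.
sent = sqrtn_combine([[1.0, 0.0], [0.0, 1.0]])
```

This normalization keeps the magnitude of sentence embeddings roughly independent of sentence length, unlike a plain sum.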
statsmodels/statsmodels.github.io
v0.13.2/examples/notebooks/generated/mediation_survival.ipynb
bsd-3-clause
import pandas as pd import numpy as np import statsmodels.api as sm from statsmodels.stats.mediation import Mediation """ Explanation: Mediation analysis with duration data This notebook demonstrates mediation analysis when the mediator and outcome are duration variables, modeled using proportional hazards regression. These examples are based on simulated data. End of explanation """ np.random.seed(3424) """ Explanation: Make the notebook reproducible. End of explanation """ n = 1000 """ Explanation: Specify a sample size. End of explanation """ exp = np.random.normal(size=n) """ Explanation: Generate an exposure variable. End of explanation """ def gen_mediator(): mn = np.exp(exp) mtime0 = -mn * np.log(np.random.uniform(size=n)) ctime = -2 * mn * np.log(np.random.uniform(size=n)) mstatus = (ctime >= mtime0).astype(int) mtime = np.where(mtime0 <= ctime, mtime0, ctime) return mtime0, mtime, mstatus """ Explanation: Generate a mediator variable. End of explanation """ def gen_outcome(otype, mtime0): if otype == "full": lp = 0.5 * mtime0 elif otype == "no": lp = exp else: lp = exp + mtime0 mn = np.exp(-lp) ytime0 = -mn * np.log(np.random.uniform(size=n)) ctime = -2 * mn * np.log(np.random.uniform(size=n)) ystatus = (ctime >= ytime0).astype(int) ytime = np.where(ytime0 <= ctime, ytime0, ctime) return ytime, ystatus """ Explanation: Generate an outcome variable. End of explanation """ def build_df(ytime, ystatus, mtime0, mtime, mstatus): df = pd.DataFrame( { "ytime": ytime, "ystatus": ystatus, "mtime": mtime, "mstatus": mstatus, "exp": exp, } ) return df """ Explanation: Build a dataframe containing all the relevant variables. 
End of explanation """ def run(otype): mtime0, mtime, mstatus = gen_mediator() ytime, ystatus = gen_outcome(otype, mtime0) df = build_df(ytime, ystatus, mtime0, mtime, mstatus) outcome_model = sm.PHReg.from_formula( "ytime ~ exp + mtime", status="ystatus", data=df ) mediator_model = sm.PHReg.from_formula("mtime ~ exp", status="mstatus", data=df) med = Mediation( outcome_model, mediator_model, "exp", "mtime", outcome_predict_kwargs={"pred_only": True}, ) med_result = med.fit(n_rep=20) print(med_result.summary()) """ Explanation: Run the full simulation and analysis, under a particular population structure of mediation. End of explanation """ run("full") """ Explanation: Run the example with full mediation End of explanation """ run("partial") """ Explanation: Run the example with partial mediation End of explanation """ run("no") """ Explanation: Run the example with no mediation End of explanation """
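The right-censoring pattern that gen_mediator and gen_outcome share can be isolated in one small helper (censor is our name for it): the observed time is the minimum of the event and censoring times, and the status flag is 1 only when the true event time is observed.

```python
import numpy as np

def censor(event_times, censor_times):
    """Right-censor event times: return (observed time, event status)."""
    status = (censor_times >= event_times).astype(int)
    observed = np.where(event_times <= censor_times, event_times, censor_times)
    return observed, status

# First subject's event (1.0) precedes censoring (2.0); the second is censored.
observed, status = censor(np.array([1.0, 3.0]), np.array([2.0, 2.5]))
```

PHReg consumes exactly this pair: the observed durations plus a status vector distinguishing events from censored observations.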
Hironsan/awesome-embedding-models
notebooks/skip-gram_with_ng.ipynb
mit
# Hyper Parameter Settings embedding_size = 200 epochs_to_train = 10 num_neg_samples = 5 sampling_factor = 1e-5 window_size = 5 save_path = './word_vectors.txt' """ Explanation: Setting Hyperparameters You set hyperparameters for Skip-gram with negative sampling. By default, it is set as follows. End of explanation """ def maybe_download(url): """ Download a file if not present. """ filename = url.split('/')[-1] path = get_file(filename, url) return path def unzip(zip_filename): """ Extract a file from the zipfile """ with zipfile.ZipFile(zip_filename) as f: for filename in f.namelist(): dirname = os.path.dirname(filename) f.extract(filename, dirname) return os.path.abspath(filename) # Download Data url = 'http://mattmahoney.net/dc/text8.zip' filename = maybe_download(url) text_file = unzip(filename) url = 'http://download.tensorflow.org/data/questions-words.txt' eval_data = maybe_download(url) """ Explanation: Download training and evaluation data You can download training data and evaluation data. End of explanation """ # Load Data sentences = word2vec.Text8Corpus(text_file) sentences = [' '.join(sent) for sent in sentences] tokenizer = Tokenizer(filters=base_filter() + "'") tokenizer.fit_on_texts(sentences) sentences = tokenizer.texts_to_sequences(sentences) V = len(tokenizer.word_index) + 1 print('Vocabulary:', V) """ Explanation: Reading training data You can read training data from a text file using the word2vec.Text8Corpus class. By default, it assumes that the text file is given. Tokenizer tokenizes the sentences and assign ID to the vocabulary. 
End of explanation """ def build_model(): target_word = Sequential() target_word.add(Embedding(V, embedding_size, input_length=1)) context = Sequential() context.add(Embedding(V, embedding_size, input_length=1)) model = Sequential() model.add(Merge([target_word, context], mode='dot', dot_axes=2)) model.add(Reshape((1,), input_shape=(1, 1))) model.add(Activation('sigmoid')) model.compile(loss='binary_crossentropy', optimizer='rmsprop') return model model = build_model() """ Explanation: Layers and Activation Functions The two main components to build neural networks architecture in Keras is Layer and Activation. There are many useful layers and activation functions in Keras. For now, let's create a binary classifier with Embedding layer: End of explanation """ def train_model(model): sampling_table = make_sampling_table(V, sampling_factor=sampling_factor) for epoch in range(epochs_to_train): loss = 0. for i, sent in enumerate(sentences): if i % 500 == 0: print('{}/{}'.format(i, len(sentences))) couples, labels = skipgrams(sequence=sent, vocabulary_size=V, window_size=window_size, negative_samples=num_neg_samples, sampling_table=sampling_table) if couples: words, contexts = zip(*couples) words = np.array(words, dtype=np.int32) contexts = np.array(contexts, dtype=np.int32) y = np.array(labels, dtype=np.int32) loss += model.train_on_batch([words, contexts], y) print('num epoch: {} loss: {}'.format(epoch, loss)) return model model = train_model(model) """ Explanation: Training Model Now, we obtained skip-gram model. Let's train it by calling train_on_batch and passing training examples: End of explanation """ def save_model(model): with open(save_path, 'w') as f: f.write(' '.join([str(V - 1), str(embedding_size)])) f.write('\n') vectors = model.get_weights()[0] for word, i in tokenizer.word_index.items(): f.write(word) f.write(' ') f.write(' '.join(map(str, list(vectors[i, :])))) f.write('\n') save_model(model) """ Explanation: Save word embeddings Congraturations! 
We finished training. Let's save the word embeddings to a text file: End of explanation """ def read_analogies(filename, word2id): """ Reads through the analogy question file. Returns: questions: a [n, 4] numpy array containing the analogy question's word ids. questions_skipped: questions skipped due to unknown words. """ questions = [] questions_skipped = 0 with open(filename, 'r') as analogy_f: for line in analogy_f: if line.startswith(':'): # Skip comments. continue words = line.strip().lower().split() ids = [w in word2id for w in words] if False in ids or len(ids) != 4: questions_skipped += 1 else: questions.append(words) print('Eval analogy file: {}'.format(filename)) print('Questions: {}'.format(len(questions))) print('Skipped: {}'.format(questions_skipped)) return questions """ Explanation: Definition for reading evaluation data You can read the evaluation data from a text file: End of explanation """ def eval_model(): w2v = Word2Vec.load_word2vec_format(save_path, binary=False) w2v.most_similar(positive=['country']) word2id = dict([(w, i) for i, w in enumerate(w2v.index2word)]) analogy_questions = read_analogies(eval_data, word2id) correct = 0 total = len(analogy_questions) for question in analogy_questions: a, b, c, d = question # E.g. [Athens, Greece, Baghdad, Iraq] analogies = w2v.most_similar(positive=[b, c], negative=[a], topn=4) for analogy in analogies: word, _ = analogy if d == word: # Predicted Correctly! correct += 1 break print('Eval %4d/%d accuracy = %4.1f%%' % (correct, total, correct * 100.0 / total)) eval_model() """ Explanation: Evaluation We evaluate the obtained word embeddings on an analogy task. In the analogy task, given A, B, and C, you need to find D such that A is to B what C is to D (e.g., man is to king what woman is to D). End of explanation """
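The analogy lookup that eval_model delegates to gensim can be sketched with plain numpy: rank words by cosine similarity against sum(positive) - sum(negative), excluding the query words themselves. The most_similar below is our minimal stand-in for illustration, not gensim's implementation:

```python
import numpy as np

def most_similar(vectors, word2id, positive, negative, topn=4):
    """Rank words by cosine similarity of unit-normalised vectors against
    the query vector sum(positive) - sum(negative)."""
    unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    query = sum(unit[word2id[w]] for w in positive) - sum(unit[word2id[w]] for w in negative)
    sims = unit @ (query / np.linalg.norm(query))
    exclude = {word2id[w] for w in positive + negative}
    id2word = {i: w for w, i in word2id.items()}
    ranked = [id2word[i] for i in np.argsort(-sims) if i not in exclude]
    return ranked[:topn]

# Toy embedding in which king - man + woman lands closest to queen.
word2id = {'man': 0, 'woman': 1, 'king': 2, 'queen': 3, 'apple': 4}
vectors = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [1.0, 0.0, 1.0],
                    [0.0, 1.0, 1.0],
                    [0.0, 0.0, 1.0]])
neighbours = most_similar(vectors, word2id, positive=['king', 'woman'], negative=['man'])
```

This mirrors how the evaluation loop above calls most_similar(positive=[b, c], negative=[a]) for an A:B::C:D question.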
alirsamar/MLND
titanic_survival_exploration/Titanic_Survival_Exploration.ipynb
mit
import numpy as np import pandas as pd # RMS Titanic data visualization code from titanic_visualizations import survival_stats from IPython.display import display %matplotlib inline # Load the dataset in_file = 'titanic_data.csv' full_data = pd.read_csv(in_file) # Print the first few entries of the RMS Titanic data display(full_data.head()) """ Explanation: Machine Learning Engineer Nanodegree Introduction and Foundations Project 0: Titanic Survival Exploration In 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions. Tip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook. Getting Started To begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame. Run the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function. Tip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. Markdown allows you to write easy-to-read plain text that can be converted to HTML. 
End of explanation """ # Store the 'Survived' feature in a new variable and remove it from the dataset outcomes = full_data['Survived'] data = full_data.drop('Survived', axis = 1) # Show the new dataset with 'Survived' removed display(data.head()) """ Explanation: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship: - Survived: Outcome of survival (0 = No; 1 = Yes) - Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class) - Name: Name of passenger - Sex: Sex of the passenger - Age: Age of the passenger (Some entries contain NaN) - SibSp: Number of siblings and spouses of the passenger aboard - Parch: Number of parents and children of the passenger aboard - Ticket: Ticket number of the passenger - Fare: Fare paid by the passenger - Cabin Cabin number of the passenger (Some entries contain NaN) - Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton) Since we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets. Run the code cell below to remove Survived as a feature of the dataset and store it in outcomes. End of explanation """ def accuracy_score(truth, pred): """ Returns accuracy score for input truth and predictions. """ # Ensure that the number of predictions matches number of outcomes if len(truth) == len(pred): # Calculate and return the accuracy as a percent return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100) else: return "Number of predictions does not match number of outcomes!" 
# Test the 'accuracy_score' function predictions = pd.Series(np.ones(5, dtype = int)) print(accuracy_score(outcomes[:5], predictions)) """ Explanation: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i]. To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers. Think: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be? End of explanation """ def predictions_0(data): """ Model with no features. Always predicts a passenger did not survive. """ predictions = [] for _, passenger in data.iterrows(): # Predict the survival of 'passenger' predictions.append(0) # Return our predictions return pd.Series(predictions) # Make the predictions predictions = predictions_0(data) """ Explanation: Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off. Making Predictions If we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking.
The predictions_0 function below will always predict that a passenger did not survive. End of explanation """ print(accuracy_score(outcomes, predictions)) """ Explanation: Question 1 Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived? Hint: Run the code cell below to see the accuracy of this prediction. End of explanation """ survival_stats(data, outcomes, 'Sex') """ Explanation: Answer: 61.62% Let's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the titanic_visualizations.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across. Run the code cell below to plot the survival outcomes of passengers based on their sex. End of explanation """ def predictions_1(data): """ Model with one feature: - Predict a passenger survived if they are female. """ predictions = [] for _, passenger in data.iterrows(): pred = 1 if passenger['Sex'] == 'female' else 0 predictions.append(pred) # Return our predictions return pd.Series(predictions) # Make the predictions predictions = predictions_1(data) """ Explanation: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive. Fill in the missing code below so that the function will make this prediction. Hint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger.
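The row-iteration pattern the hint describes can be sketched on a made-up three-passenger frame: iterrows() yields (index, row) pairs, and each row supports dictionary-style access such as passenger['Sex'].

```python
import pandas as pd

# Invented three-passenger sample, for illustration only; the real
# notebook iterates over the full Titanic dataset.
toy = pd.DataFrame({'Sex': ['female', 'male', 'female'],
                    'Age': [22, 35, 8]})

predictions = []
for _, passenger in toy.iterrows():
    # Predict survival for females, as predictions_1 does
    predictions.append(1 if passenger['Sex'] == 'female' else 0)

print(predictions)  # [1, 0, 1]
```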
End of explanation """ print(accuracy_score(outcomes, predictions)) """ Explanation: Question 2 How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive? Hint: Run the code cell below to see the accuracy of this prediction. End of explanation """ survival_stats(data, outcomes, 'Age', ["Sex == 'male'"]) """ Explanation: Answer: 78.68% Using just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included. Run the code cell below to plot the survival outcomes of male passengers based on their age. End of explanation """ def predictions_2(data): """ Model with two features: - Predict a passenger survived if they are female. - Predict a passenger survived if they are male and younger than 10. """ predictions = [] for _, passenger in data.iterrows(): if passenger['Sex'] == 'female': pred = 1 else: pred = 1 if passenger['Age'] < 10 else 0 predictions.append(pred) # Return our predictions return pd.Series(predictions) # Make the predictions predictions = predictions_2(data) """ Explanation: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive.
Fill in the missing code below so that the function will make this prediction. Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1. End of explanation """ print(accuracy_score(outcomes, predictions)) """ Explanation: Question 3 How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived? Hint: Run the code cell below to see the accuracy of this prediction. End of explanation """ survival_stats(data, outcomes, 'Age', ["Sex == 'male'", "Pclass == 1", "Age < 35"]) """ Explanation: Answer: 79.35% Adding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin more than with simply using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions. Pclass, Sex, Age, SibSp, and Parch are some suggested features to try. Use the survival_stats function below to examine various survival statistics. Hint: To use multiple filter conditions, put each condition in the list passed as the last argument. Example: ["Sex == 'male'", "Age < 18"] End of explanation """ def predictions_3(data): """ Model with multiple features. Makes a prediction with an accuracy of at least 80%.
""" predictions = [] for _, passenger in data.iterrows(): if passenger['Sex'] == 'female': if passenger['Pclass'] == 1: pred = 1 elif passenger['Pclass'] == 2: pred = 1 else: if passenger['SibSp'] == 0: pred = 1 else: pred = 0 else: if passenger['Pclass'] == 1: if passenger['Age'] < 18: pred = 1 else: pred = 0 elif passenger['Pclass'] == 2: if passenger['Age'] < 10: pred = 1 else: pred = 0 else: if passenger['Age'] < 1: pred = 1 else: pred = 0 predictions.append(pred) # Return our predictions return pd.Series(predictions) # Make the predictions predictions = predictions_3(data) """ Explanation: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction. Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model. Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2. End of explanation """ print accuracy_score(outcomes, predictions) """ Explanation: Question 4 Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions? Hint: Run the code cell below to see the accuracy of your predictions. End of explanation """
repo_name: ES-DOC/esdoc-jupyterhub
path: notebooks/mohc/cmip6/models/hadgem3-gc31-ll/ocean.ipynb
license: gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-ll', 'ocean') """ Explanation: ES-DOC CMIP6 Model Properties - Ocean MIP Era: CMIP6 Institute: MOHC Source ID: HADGEM3-GC31-LL Topic: Ocean Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. Properties: 133 (101 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:14 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. 
Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean model code (NEMO 3.6, MOM 5.0,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OGCM" # "slab ocean" # "mixed layer ocean" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Primitive equations" # "Non-hydrostatic" # "Boussinesq" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the ocean. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # "Salinity" # "U-velocity" # "V-velocity" # "W-velocity" # "SSH" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the ocean component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Wright, 1997" # "Mc Dougall et al." # "Jackett et al. 2006" # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. 
Eos Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EOS for sea water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # TODO - please enter value(s) """ Explanation: 2.2. Eos Functional Temp Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Temperature used in EOS for sea water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Practical salinity Sp" # "Absolute salinity Sa" # TODO - please enter value(s) """ Explanation: 2.3. Eos Functional Salt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Salinity used in EOS for sea water End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pressure (dbars)" # "Depth (meters)" # TODO - please enter value(s) """ Explanation: 2.4. Eos Functional Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Depth or pressure used in EOS for sea water ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 2.5. 
Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.6. Ocean Specific Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specific heat in ocean (cpocean) in J/(kg K) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.7. Ocean Reference Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boussinesq reference density (rhozero) in kg / m3 End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Present day" # "21000 years BP" # "6000 years BP" # "LGM" # "Pliocene" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date of bathymetry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 3.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the bathymetry fixed in time in the ocean ? 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. Ocean Smoothing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any smoothing or hand editing of bathymetry in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.source') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.4. Source Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe source of bathymetry in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatment in ocean 4.1. Isolated Seas Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how isolated seas is performed End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. River Mouth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how river mouth mixing or estuaries specific treatment is performed End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1.
Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 6.4. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 6.5. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.6. Is Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Default is False. Set true if grid resolution changes during execution. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 6.7. 
Thickness Level 1 Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Thickness of first surface ocean level (in meters) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Brief description of conservation methodology End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Enstrophy" # "Salt" # "Volume of ocean" # "Momentum" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in the ocean by the numerical schemes End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.3. Consistency Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.4. Corrected Conserved Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Set of variables which are conserved by more than the numerical scheme alone. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 8.5. Was Flux Correction Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does conservation involve flux correction ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Grid Ocean grid 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of grid in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Z-coordinate" # "Z*-coordinate" # "S-coordinate" # "Isopycnic - sigma 0" # "Isopycnic - sigma 2" # "Isopycnic - sigma 4" # "Isopycnic - other" # "Hybrid / Z+S" # "Hybrid / Z+isopycnic" # "Hybrid / other" # "Pressure referenced (P)" # "P*" # "Z**" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10. Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. 
Coordinates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical coordinates in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 10.2. Partial Steps Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Using partial steps with Z or Z vertical coordinate in ocean ?* End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Lat-lon" # "Rotated north pole" # "Two north poles (ORCA-style)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa E-grid" # "N/a" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.2. Staggering Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal grid staggering type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite difference" # "Finite volumes" # "Finite elements" # "Unstructured grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.3. 
Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of time stepping in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Via coupling" # "Specific treatment" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.2. Diurnal Cycle Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Diurnal cycle type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time stepping scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 13.2. 
Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time step (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Preconditioned conjugate gradient" # "Sub cyling" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.3. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Baroclinic time step (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "split explicit" # "implicit" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15. 
Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time splitting method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.2. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Barotropic time step (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of vertical time stepping in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17. Advection Ocean advection 17.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of advection in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flux form" # "Vector form" # TODO - please enter value(s) """ Explanation: 18. Advection --&gt; Momentum Properties of lateral momentum advection scheme in ocean 18.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of lateral momentum advection scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.2. Scheme Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean momentum advection scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.ALE') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 18.3. ALE Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Using ALE for vertical advection ? (if vertical coordinates are sigma) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral tracer advection scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 19.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for lateral tracer advection scheme in ocean ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 19.3.
Effective Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Effective order of limited lateral tracer advection scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.4. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ideal age" # "CFC 11" # "CFC 12" # "SF6" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19.5. Passive Tracers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Passive tracers advected End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 19.6. Passive Tracers Advection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is advection of passive tracers different than active ? if so, describe. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for vertical tracer advection scheme in ocean (e.g. 
MUSCL, PPM-H5, PRATHER,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 20.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for vertical tracer advection scheme in ocean ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lateral physics in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Eddy active" # "Eddy admitting" # TODO - please enter value(s) """ Explanation: 21.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transient eddy representation in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics momentum scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics momentum scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics momentum scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean 23.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics momentum eddy viscosity coeff type in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 23.2.
Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23.3. Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 23.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 23.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 24. Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1.
Mesoscale Closure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a mesoscale closure in the lateral physics tracers scheme ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 24.2. Submesoscale Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics tracers scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics tracers scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics tracers scheme in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics tracers eddy diffusity coeff type in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 26.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 26.3. 
Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 26.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 26.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "GM" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV in lateral physics tracers in the ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 27.2. 
Constant Val Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If EIV scheme for tracers is constant, specify coefficient value (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.3. Flux Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV flux (advective or skew) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 27.4. Added Diffusivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV added diffusivity (constant, flow dependent or none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vertical physics in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there Langmuir cells mixing in upper ocean ?
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for tracers in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of tracers, specific coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 30.4. 
Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of tracers coefficient, (scheme and value in m2/s - may be none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for momentum in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 31.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 31.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of momentum, specific coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 31.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of momentum coefficient, (scheme and value in m2/s - may be none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Non-penetrative convective adjustment" # "Enhanced vertical diffusion" # "Included in turbulence closure" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical convection in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32.2. Tide Induced Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how tide induced mixing is modelled (barotropic, baroclinic, none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 32.3. Double Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there double diffusion ? End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 32.4. Shear Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there interior shear mixing ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for tracers in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 33.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of tracers, specific coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 33.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 33.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of tracers coefficient, (scheme and value in m2/s - may be none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for momentum in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 34.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of momentum, specific coefficient (m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34.3.
Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of free surface in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear implicit" # "Linear filtered" # "Linear semi-explicit" # "Non-linear implicit" # "Non-linear filtered" # "Non-linear semi-explicit" # "Fully explicit" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 35.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Free surface scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 35.3. 
Embeded Seaice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the sea-ice embeded in the ocean model (instead of levitating) ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of bottom boundary layer in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diffusive" # "Acvective" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 36.2. Type Of Bbl Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of bottom boundary layer in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 36.3. Lateral Mixing Coef Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36.4. 
Sill Overflow Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any specific treatment of sill overflows End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of boundary forcing in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.2. Surface Pressure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.3. Momentum Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.4. Tracers Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.5. Wave Effects Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how wave effects are modelled at ocean surface. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.6. River Runoff Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river runoff from land surface is routed to ocean and any global adjustment done. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.7. Geothermal Heating Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how geothermal heating is present at ocean bottom. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Non-linear" # "Non-linear (drag function of speed of tides)" # "Constant drag coefficient" # "None" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum bottom friction in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Free-slip" # "No-slip" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum lateral friction in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "1 extinction depth" # "2 extinction depth" # "3 extinction depth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of sunlight penetration scheme in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 40.2. Ocean Colour Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the ocean sunlight penetration scheme ocean colour dependent ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 40.3. 
Extinction Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe and list extinctions depths for sunlight penetration scheme (if applicable). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from atmos in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Real salt flux" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 41.2. From Sea Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from sea-ice in ocean End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 41.3. Forced Mode Restoring Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface salinity restoring in forced mode (OMIP) End of explanation """
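Every property above follows the same three-step pattern — set the id, set a value, respect the listed valid choices. If many properties need to be filled in, that pattern can be wrapped once (a sketch only; `set_property` is a hypothetical convenience helper, not part of the ES-DOC `DOC` API):

```python
def set_property(doc, prop_id, value, valid=None):
    """Set one documentation property, optionally checking the value
    against the '# Valid Choices' listed for that property first."""
    if valid is not None and value not in valid:
        raise ValueError('{!r} is not one of the valid choices {!r}'.format(value, valid))
    doc.set_id(prop_id)
    doc.set_value(value)

# e.g. for property 35.2 (free surface scheme), with a subset of its choices:
# set_property(DOC, 'cmip6.ocean.uplow_boundaries.free_surface.scheme',
#              'Linear implicit', valid=['Linear implicit', 'Fully explicit'])
```

Checking the value before calling `set_value` catches typos at write time instead of at document validation time.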
akokai/chemviewing
notebooks/use-snur-data.ipynb
unlicense
uri = URIBASE + 'uses' r = requests.get(uri, headers = {'Accept': 'application/json, */*'}) j = json.loads(r.text) print(len(j)) DataFrame(j) """ Explanation: Can we get chemical use classification data? i.e., lists of chemicals classified by use. First, get the controlled vocabulary of uses. End of explanation """ uri = URIBASE + 'uses/124470' # "Flame retardant" r = requests.get(uri, headers = {'Accept': 'application/json, */*'}) j = json.loads(r.text) j """ Explanation: Getting the "details" on a use... not so useful End of explanation """ uri = URIBASE + 'chemicals/datatable?isTemplateFilter=false&chemicalIds=&snurUseIds=&useIds=124470&groupIds=&categoryIds=&endpointKeys=&synonymIds=&sourceIds=' # &sEcho=4&iColumns=6&sColumns=&iDisplayStart=0&iDisplayLength=10&mDataProp_0=0&mDataProp_1=1\ # &mDataProp_2=2&mDataProp_3=3&mDataProp_4=4&mDataProp_5=5&sSearch=&bRegex=false&sSearch_0=\ # &bRegex_0=false&bSearchable_0=true&sSearch_1=&bRegex_1=false&bSearchable_1=true&sSearch_2=\ # &bRegex_2=false&bSearchable_2=true&sSearch_3=&bRegex_3=false&bSearchable_3=true&sSearch_4=\ # &bRegex_4=false&bSearchable_4=true&sSearch_5=&bRegex_5=false&bSearchable_5=true&iSortCol_0=0\ # &sSortDir_0=asc&iSortingCols=1&bSortable_0=false&bSortable_1=true&bSortable_2=false\ # &bSortable_3=false&bSortable_4=false&bSortable_5=false' print(uri) r = requests.get(uri, headers = {'Accept': 'application/json, */*'}) j = json.loads(r.text) j """ Explanation: Can it return a list of chemicals classified with a specific use ID? Unfortunately, the monster URI that the documentation provides (item 3, p 3) for doesn't really do much, or I am not using it correctly. End of explanation """ uri = URIBASE + 'datatable?mediaType=json&useIds=124470&sourceIds=2-5-6-7-3-10-9-8-1-16-4-11-1981377' r = requests.get(uri, headers = {'Accept': 'application/json, */*'}) r.text """ Explanation: Trying something different: learn from the URIs that ChemView generates when you do a search and export the results. 
* Searched for chemicals matching the use "Flame retardant" (from the drop-down menu) in all sources. * Replaced mediaType=xls to retrieve json instead in the resulting URI. ...This doesn't work either. End of explanation """ uri = URIBASE + 'sources' r = requests.get(uri, headers = {'Accept': 'application/json, */*'}) j = json.loads(r.text) sources_df = DataFrame(j) sources_df # Calculate the number of items in the 'chemicals' field for each source. sources_df['num_chems'] = sources_df['chemicals'].apply(len) sources_df[['sourceId', 'sourceDesc', 'num_chems']] sources_df.ix[9,:] DataFrame(sources_df.ix[9,0]) """ Explanation: Looking at 'sources': can we get SNUR info? End of explanation """ uri = URIBASE + 'chemicals/f&sourceIds=1' #&chemicalIds=&snurUseIds=&useIds=&groupIds=&categoryIds=&endpointKeys=&synonymIds=' # &sEcho=4&iColumns=6&sColumns=&iDisplayStart=0&iDisplayLength=10&mDataProp_0=0&mDataProp_1=1\ # &mDataProp_2=2&mDataProp_3=3&mDataProp_4=4&mDataProp_5=5&sSearch=&bRegex=false&sSearch_0=\ # &bRegex_0=false&bSearchable_0=true&sSearch_1=&bRegex_1=false&bSearchable_1=true&sSearch_2=\ # &bRegex_2=false&bSearchable_2=true&sSearch_3=&bRegex_3=false&bSearchable_3=true&sSearch_4=\ # &bRegex_4=false&bSearchable_4=true&sSearch_5=&bRegex_5=false&bSearchable_5=true&iSortCol_0=0\ # &sSortDir_0=asc&iSortingCols=1&bSortable_0=false&bSortable_1=true&bSortable_2=false\ # &bSortable_3=false&bSortable_4=false&bSortable_5=false' print(uri) r = requests.get(uri, headers = {'Accept': 'application/json, */*'}) j = json.loads(r.text) j """ Explanation: This tells us that if you ask ChemView for information form SNUR sources, you will get information about... just two chemicals? 
End of explanation """ uri = URIBASE + 'chemicals/3554283?sourceIds=1' print(uri) r = requests.get(uri, headers = {'Accept': 'application/json, */*'}) j = json.loads(r.text) j """ Explanation: Try to get SNUR information for a known ID What if we look up info about one of these chemicals, specifying SNURs as the source. End of explanation """ uri = URIBASE + 'chemicals/3565112?sourceIds=1&synonymIds=' print(uri) r = requests.get(uri, headers = {'Accept': 'application/json, */*'}) j = json.loads(r.text) j print(j['sources'][0]['chemicals'][0]['externalLink']) """ Explanation: That returned nothing. OK, we also know that chemical ID 3565112 corresponds to PMN Number P-11-0607 and that ChemView has a record of the SNURs linked to this substance... End of explanation """ uri = 'http://java.epa.gov/chemview?tf=0&ch=P-09-0248&su=2-5-6-7&as=3-10-9-8&ac=1-16&ma=4-11-1981377&tds=0&tdl=10&tas1=1&tas2=asc&tas3=undefined&tss=&modal=template&modalId=3517608&modalSrc=1&modalDetailId=3517610&mediaType=json' r = requests.get(uri, headers = {'Accept': 'application/json, */*'}) j = json.loads(r.text) j """ Explanation: That did return some actual information. The external links about the specific chemicals both point to a PDF of the SNURs published in the Federal Register. We already know that this is not the extent of EPA's public data on these SNURs, so where is it in ChemView? I navigated to the ChemView record for PMN number P-09-0248 and clicked on it to get a summary of the SNUR: Below, I copied the link that it gives you when you click "E-mail Url", but added &amp;mediaType=json. 
End of explanation """ uri = 'http://java.epa.gov/chemview?tf=1&su=2-5-6-7&as=3-10-9-8&ac=1-16&ma=4-11-1981377&tds=0&tdl=10&tas1=1&tas2=asc&tas3=undefined&tss=&modal=template&modalId=103298&modalSrc=3&modalDetailId=5636434&modalVae=0-0-1-0-0&mediaType=json' r = requests.get(uri, headers = {'Accept': 'application/json, */*'}) j = json.loads(r.text) j """ Explanation: Apparently these data are not API-present yet. Trying something else by tweaking the URL from a different search... End of explanation """
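Every query in this notebook repeats the same requests.get / json.loads boilerplate, and a non-200 response would get decoded (or crash) silently. A small helper tightens that up — a sketch only, where `URIBASE` below is just a placeholder for the base URL defined earlier in the notebook:

```python
import requests

URIBASE = 'https://java.epa.gov/chemview/'  # placeholder: use the URIBASE defined earlier

def get_json(endpoint, **params):
    """GET one endpoint under URIBASE and decode its JSON payload,
    raising on HTTP errors instead of decoding an error page."""
    r = requests.get(URIBASE + endpoint, params=params,
                     headers={'Accept': 'application/json, */*'})
    r.raise_for_status()
    return r.json()

# With it, the queries above shrink to e.g.:
# j = get_json('uses')
# j = get_json('chemicals/3554283', sourceIds=1)
```

Passing query parameters via `params` also spares hand-building the long `&key=value` strings used above.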
laurentperrinet/Khoei_2017_PLoSCB
notebooks/control_jobs.ipynb
mit
!ipython3 experiment_fle.py !ipython3 experiment_speed.py !ipython3 experiment_contrast.py !ipython3 experiment_MotionReversal.py !ipython3 experiment_SI_controls.py """ Explanation: controlling jobs locally This is a set of convenient commands used to control simulations locally. 🏄 running scripts 🏄 End of explanation """ !find . -name '*lock*' -exec ls -l {} \; |wc -l !find . -name '*lock*' -exec ls -l {} \; """ Explanation: managing lock files counting: End of explanation """ !git commit -m' finished new run ' ../notebooks/control_jobs.ipynb """ Explanation: removing older files ⚠️ THIS WILL DELETE (cached) FILES version control End of explanation """
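The deletion step warned about above can also be done from Python, which makes it easy to preview the candidates before anything is removed. A sketch (hypothetical helper) that only selects lock files older than a given age:

```python
import time
from pathlib import Path

def old_lock_files(root='.', max_age_hours=24):
    """Return lock files under `root` whose mtime is older than `max_age_hours`."""
    cutoff = time.time() - max_age_hours * 3600
    return sorted(p for p in Path(root).rglob('*lock*')
                  if p.is_file() and p.stat().st_mtime < cutoff)

# Inspect first, and uncomment the unlink only once the list looks right:
# for p in old_lock_files():
#     print(p)
#     # p.unlink()
```

Keeping selection and deletion as two explicit steps is the point: the dangerous part stays commented out until the candidate list has been checked.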
ledeprogram/algorithms
class7/homework/argueso_olaya_7_1.ipynb
gpl-3.0
import pandas as pd %matplotlib inline from sklearn import datasets from pandas.tools.plotting import scatter_matrix import matplotlib.pyplot as plt iris = datasets.load_iris() iris x = iris.data[:,:2] y = iris.target from sklearn import tree from sklearn.cross_validation import train_test_split dt = tree.DecisionTreeClassifier() x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.5,train_size=0.5) dt = dt.fit(x_train,y_train) from sklearn import metrics import numpy as np def measure_performance(X,y,clf, show_accuracy=True, show_classification_report=True, show_confussion_matrix=True): y_pred=clf.predict(X) if show_accuracy: print("Accuracy:{0:.3f}".format(metrics.accuracy_score(y, y_pred)),"\n") if show_classification_report: print("Classification report") print(metrics.classification_report(y,y_pred),"\n") if show_confussion_matrix: print("Confusion matrix") print(metrics.confusion_matrix(y,y_pred),"\n") measure_performance(x_train,y_train,dt) measure_performance(x_test,y_test,dt) """ Explanation: We covered a lot of information today and I'd like you to practice developing classification trees on your own. For each exercise, work through the problem, determine the result, and provide the requested interpretation in comments along with the code. The point is to build classifiers, not necessarily good classifiers (that will hopefully come later) 1. Load the iris dataset and create a holdout set that is 50% of the data (50% in training and 50% in test). Output the results (don't worry about creating the tree visual unless you'd like to) and discuss them briefly (are they good or not?) 
End of explanation """ x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.75,train_size=0.25) def measure_performance(X,y,clf, show_accuracy=True, show_classification_report=True, show_confussion_matrix=True): y_pred=clf.predict(X) if show_accuracy: print("Accuracy:{0:.3f}".format(metrics.accuracy_score(y, y_pred)),"\n") if show_classification_report: print("Classification report") print(metrics.classification_report(y,y_pred),"\n") if show_confussion_matrix: print("Confusion matrix") print(metrics.confusion_matrix(y,y_pred),"\n") measure_performance(x_train,y_train,dt) measure_performance(x_test,y_test,dt) """ Explanation: Every time I run the code, the result is different, i.e. the accuracy level varies. Sometimes the training dataset does better, sometimes is the other way around. Since I do not really understand what's going on under the hood, I am not able to go further into my analysis. However, I would say that, in general, I could not trust the results of such a model, because, as a journalist, I am accountable for everything I produce. In this case, I could not rely on a "source" that is changing its version every single time I interview him/her. 2. Redo the model with a 75% - 25% training/test split and compare the results. Are they better or worse than before? Discuss why this may be. End of explanation """ breast_cancer = datasets.load_breast_cancer() breast_cancer breast_cancer.keys() breast_cancer['feature_names'] type(breast_cancer) df = pd.DataFrame(breast_cancer.data, columns=breast_cancer['feature_names']) df df.corr() """ Explanation: In this case, the test dataset seems to do a little bit better than the training dataset. I guess that is because the larger the training dataset, the more accuracy it can achieve. On the other hand, the smaller the test data, the easier it is for the model to be accurate. 3. Load the breast cancer dataset (datasets.load_breast_cancer()) and perform basic exploratory analysis. 
What attributes to we have? What are we trying to predict? For context of the data, see the documentation here: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29 End of explanation """ x = breast_cancer.data[:,:2] y = breast_cancer.target dt = tree.DecisionTreeClassifier() x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.5,train_size=0.5) dt = dt.fit(x_train,y_train) def measure_performance(X,y,clf, show_accuracy=True, show_classification_report=True, show_confussion_matrix=True): y_pred=clf.predict(X) if show_accuracy: print("Accuracy:{0:.3f}".format(metrics.accuracy_score(y, y_pred)),"\n") if show_classification_report: print("Classification report") print(metrics.classification_report(y,y_pred),"\n") if show_confussion_matrix: print("Confusion matrix") print(metrics.confusion_matrix(y,y_pred),"\n") measure_performance(x_train,y_train,dt) measure_performance(x_test,y_test,dt) x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.25,train_size=0.75) dt = dt.fit(x_train,y_train) def measure_performance(X,y,clf, show_accuracy=True, show_classification_report=True, show_confussion_matrix=True): y_pred=clf.predict(X) if show_accuracy: print("Accuracy:{0:.3f}".format(metrics.accuracy_score(y, y_pred)),"\n") if show_classification_report: print("Classification report") print(metrics.classification_report(y,y_pred),"\n") if show_confussion_matrix: print("Confusion matrix") print(metrics.confusion_matrix(y,y_pred),"\n") measure_performance(x_train,y_train,dt) measure_performance(x_test,y_test,dt) """ Explanation: Since I am not an expert in breast cancer, I do not feel able to select the most relevant features to perform a meaningful analysis. I have therefore chosen two of them that share a high correlation: mean perimeter and mean radius. 4. Using the breast cancer data, create a classifier to predict the type of seed. Perform the above hold out evaluation (50-50 and 75-25) and discuss the results. 
End of explanation """
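The run-to-run variation noted in the answers above comes from drawing a single random train/test split each time. Averaging over several splits with cross-validation gives a steadier estimate — a sketch (using `sklearn.model_selection`; older scikit-learn releases, like the one imported above, keep `cross_val_score` in `sklearn.cross_validation`):

```python
from sklearn import datasets, tree
from sklearn.model_selection import cross_val_score  # sklearn.cross_validation on old versions

iris = datasets.load_iris()
x = iris.data[:, :2]
y = iris.target

dt = tree.DecisionTreeClassifier(random_state=0)
scores = cross_val_score(dt, x, y, cv=5)   # five different train/test splits
print('mean accuracy: %.3f +/- %.3f' % (scores.mean(), scores.std()))
```

The spread, `scores.std()`, quantifies exactly the instability the discussion above describes.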
briennakh/BIOF509
Wk13/Wk13-Advanced-ML-tasks.ipynb
mit
from sklearn.cross_validation import cross_val_score from sklearn.datasets import load_iris from sklearn.ensemble import AdaBoostClassifier iris = load_iris() clf = AdaBoostClassifier(n_estimators=100) scores = cross_val_score(clf, iris.data, iris.target) scores.mean() from sklearn.cross_validation import cross_val_score from sklearn.datasets import load_iris from sklearn.ensemble import GradientBoostingClassifier iris = load_iris() clf = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0, max_depth=1, random_state=0) scores = cross_val_score(clf, iris.data, iris.target) scores.mean() """ Explanation: Week 13 - Advanced Machine Learning During the course we have covered a variety of different tasks and algorithms. These were chosen for their broad applicability and ease of use with many important techniques and areas of study skipped. The goal of this class is to provide a brief overview of some of the latest advances and areas that could not be covered due to our limited time. Boosting Boosting is an area I briefly mentioned in week 11 when I discussed ensembles. I mention it here again to make the point that the gap between the best known algorithms and the most easily utilized algorithms is often very narrow, or even nonexistent. Adaboost utilizes adaptive boosting. It was developed in 2003 and was regarded as the best out-of-the-box algorithm for many years. XGBoost is an open source package with bindings for a variety of programming languages including python. The approach taken is slightly different to Adaboost and utilizes gradient boosting instead of adaptive boosting. Work began on the package in 2014 and it has rapidly grown in popularity, in part due to several high profile wins in machine learning competitions. 
Scikit learn has implementations for both Adaboost and gradient tree boosting End of explanation """ # http://scikit-learn.org/stable/auto_examples/semi_supervised/plot_label_propagation_structure.html # Authors: Clay Woolam <clay@woolam.org> # Andreas Mueller <amueller@ais.uni-bonn.de> # Licence: BSD import numpy as np import matplotlib.pyplot as plt from sklearn.semi_supervised import label_propagation from sklearn.datasets import make_circles # generate ring with inner box n_samples = 200 X, y = make_circles(n_samples=n_samples, shuffle=False) outer, inner = 0, 1 labels = -np.ones(n_samples) labels[0] = outer labels[-1] = inner ############################################################################### # Learn with LabelSpreading label_spread = label_propagation.LabelSpreading(kernel='knn', alpha=1.0) label_spread.fit(X, labels) ############################################################################### # Plot output labels output_labels = label_spread.transduction_ plt.figure(figsize=(8.5, 4)) plt.subplot(1, 2, 1) plot_outer_labeled, = plt.plot(X[labels == outer, 0], X[labels == outer, 1], 'rs') plot_unlabeled, = plt.plot(X[labels == -1, 0], X[labels == -1, 1], 'g.') plot_inner_labeled, = plt.plot(X[labels == inner, 0], X[labels == inner, 1], 'bs') plt.legend((plot_outer_labeled, plot_inner_labeled, plot_unlabeled), ('Outer Labeled', 'Inner Labeled', 'Unlabeled'), loc='upper left', numpoints=1, shadow=False) plt.title("Raw data (2 classes=red and blue)") plt.subplot(1, 2, 2) output_label_array = np.asarray(output_labels) outer_numbers = np.where(output_label_array == outer)[0] inner_numbers = np.where(output_label_array == inner)[0] plot_outer, = plt.plot(X[outer_numbers, 0], X[outer_numbers, 1], 'rs') plot_inner, = plt.plot(X[inner_numbers, 0], X[inner_numbers, 1], 'bs') plt.legend((plot_outer, plot_inner), ('Outer Learned', 'Inner Learned'), loc='upper left', numpoints=1, shadow=False) plt.title("Labels learned with Label Spreading (KNN)") 
plt.subplots_adjust(left=0.07, bottom=0.07, right=0.93, top=0.92) plt.show() """ Explanation: Semi-supervised Semi-supervised learning attempts to use unlabeled data to better understand the structure of the data and apply this information to constructing a model using the labeled samples. End of explanation """ import matplotlib.pyplot as plt %matplotlib inline plt.gray() from keras.datasets import mnist (X_train, y_train), (X_test, y_test) = mnist.load_data() fig, axes = plt.subplots(3,5, figsize=(12,8)) for i, ax in enumerate(axes.flatten()): ax.imshow(X_train[i], interpolation='nearest') plt.show() from keras.models import Sequential from keras.layers.core import Dense, Dropout, Activation, Flatten from keras.layers.convolutional import Convolution2D, MaxPooling2D from keras.utils import np_utils batch_size = 512 nb_classes = 10 nb_epoch = 3 X_train = X_train.reshape(X_train.shape[0], 1, 28, 28) X_test = X_test.reshape(X_test.shape[0], 1, 28, 28) X_train = X_train.astype("float32") X_test = X_test.astype("float32") X_train /= 255 X_test /= 255 # convert class vectors to binary class matrices Y_train = np_utils.to_categorical(y_train, nb_classes) Y_test = np_utils.to_categorical(y_test, nb_classes) model = Sequential() #model.add(Convolution2D(8, 1, 3, 3, input_shape=(1,28,28), activation='relu')) model.add(Convolution2D(4, 3, 3, input_shape=(1,28,28), activation='relu')) model.add(Convolution2D(4, 3, 3, activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(4, input_dim=4*28*28*0.25, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(nb_classes, input_dim=4, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adadelta') model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch, show_accuracy=True, verbose=1, validation_data=(X_test, Y_test)) score = model.evaluate(X_test, Y_test, show_accuracy=True, verbose=0) print('Test score:', score[0]) 
print('Test accuracy:', score[1]) predictions = model.predict_classes(X_test) fig, axes = plt.subplots(3,5, figsize=(12,8)) for i, ax in enumerate(axes.flatten()): ax.imshow(X_test[predictions == 7][i].reshape((28,28)), interpolation='nearest') plt.show() from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, predictions) plt.bone() plt.matshow(cm) plt.colorbar() plt.ylabel('True label') plt.xlabel('Predicted label') import numpy as np np.fill_diagonal(cm, 0) plt.matshow(cm) plt.colorbar() plt.ylabel('True label') plt.xlabel('Predicted label') """ Explanation: Active learning Active learning is related to semi-supervised learning, with active learning being a more iterative process. The algorithm chooses which samples to request labels for. This can be a very useful approach when many samples are available but labelling them is expensive. The goal with active learning is to choose a set of samples to label that would most improve the performance of the model. There are a variety of different ways the samples can be chosen: Distributed across the feature space Uncertainty Disagreement between multiple models Potential for model change . . . and many others Examples can be found in compound screening, data mapping, and study categorization. Recommender systems Recommender systems attempt to predict the rating a user would give for an item. The most famous example of such a system was the Netflix prize. These systems continue to be widely used in ecommerce. There are a variety of different ways to implement these systems: Content-based filtering Collaborative filtering Hybrid approach Deep learning Although there is a neural network available on the development version of scikit learn it only runs on the CPU making the large neural networks now popular prohibitively slow. Fortunately, there are a number of different packages available for python that can run on a GPU. Theano is the GPGPU equivalent of numpy. 
It implements all the core functionality needed to build a deep neural network, and run it on the GPGPU, but does not come with an existing implementation. A variety of packages have been built on top of Theano that enable neural networks to be implemented in a relatively straightforward manner. Parrallels can be draw with the relationship between numpy and scikit learn. Pylearn2 has been around for a number of years but has been somewhat superseded by a number of new packages, including blocks, keras, and lasagne. You may have also heard of TensorFlow that was released by Google relatively recently. TensorFlow lies somewhere between the low-level Theano and the high-level packages such as blocks, keras, and lasagne. Currently only keras supports TensorFlow as an alternative backend. Installing these packages with support for executing code on the GPU is more challenging than simply conda install ... or pip install .... In addition to installing these packages it is also necessary to to install the CUDA packages. Information in this step is usually provided for each of the packages. Beyond the advances due to the greater computational capacity available on the GPU there have been a number of other important approaches utilized: Convolutional neural nets Recurrent neural nets Dropout Early stopping Data augmentation Aphex34 via wikimedia. End of explanation """
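None of the recommender approaches listed above ships as a scikit-learn estimator, but the collaborative-filtering idea fits in a few lines of numpy. A toy sketch with made-up ratings (0 meaning unrated; purely illustrative):

```python
import numpy as np

# Toy user-item rating matrix; rows are users, columns are items, 0 = unrated.
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def predict_rating(R, user, item):
    """User-based collaborative filtering: weight other users' ratings of
    `item` by their cosine similarity to `user` over co-rated items."""
    def sim(a, b):
        mask = (R[a] > 0) & (R[b] > 0)
        if not mask.any():
            return 0.0
        va, vb = R[a][mask], R[b][mask]
        return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-12))
    raters = [u for u in range(R.shape[0]) if u != user and R[u, item] > 0]
    weights = np.array([sim(user, u) for u in raters])
    if weights.sum() == 0:
        return float(R[R[:, item] > 0, item].mean())   # fall back to the item average
    return float(weights @ R[raters, item] / weights.sum())
```

Real systems replace this weighted average with matrix factorisation, but the ingredients — a sparse user-item matrix and a similarity measure — are the same.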
Python4AstronomersAndParticlePhysicists/PythonWorkshop-ICE
notebooks/06_01_pandas.ipynb
mit
# Import libraries import pandas as pd import numpy as np """ Explanation: This is the notebook for the python pandas dataframe course The idea of this notebook is to show the power of working with pandas dataframes Motivation We usually work with tabular data We should not handle them with bash commands like: for, split, grep, awk, etc... And pandas is a very nice tool to handle this kind of data. Welcome to Pandas! Definition of pandas: Python package providing fast, flexible, and expressive data structures designed to make working with “relational” or “labeled” data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful and flexible open source data analysis / manipulation tool available in any language. More information about pandas: http://pandas.pydata.org/pandas-docs/stable/ Contents of the course: Know your data: Dimensionality: Series or DataFrame Index Some examples Exercise 1: Selecting pandas structure I/O: Reading: CSV, FITS, SQL Writing: CSV Advanced example: Reading and writing CSV files by chunks Selecting and slicing: loc. & iloc. Advanced example: Estimate a galaxy property for a subset of galaxies using boolean conditions Exercise 2: Estimate another galaxy property Merge, join, and concatenate: Exercise 3: Generate a random catalog using the concat method Example: Merging dataframes using the merge method More functions: Loop a dataframe (itertuples and iterows) Sort Sample Reshape: pivot, stack, unstack, etc. 
Caveats and technicalities: Floating point limitations .values FITS chunks View or copy Wrong input example Some useful information Ten minutes to pandas: https://pandas.pydata.org/pandas-docs/stable/10min.html Pandas cookbook: https://pandas.pydata.org/pandas-docs/stable/cookbook.html Nice pandas course: https://www.datacamp.com/community/tutorials/pandas-tutorial-dataframe-python#gs.=B6Dr74 Multidimensional dataframes, xarray: http://xarray.pydata.org/en/stable/ Tips & Tricks https://www.dataquest.io/blog/jupyter-notebook-tips-tricks-shortcuts/ <a id=know></a> Know your data Very important to (perfectly) know your data: structure, data type, index, relation, etc. (see Pau's talk for a much better explanation ;) Dimensionality: - 1-D: Series; e.g. - Solar planets: [Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune] - Set of astronomical objects and when they were observed: [[NGC1952, 2012-05-01], [NGC224, 2013-01-23], [NGC5194, 2014-02-13]] - 2-D: DataFrame; e.g (more business oriented): - 3 months of sales information for 3 fictitious companies: sales = [{'account': 'Jones LLC', 'Jan': 150, 'Feb': 200, 'Mar': 140}, {'account': 'Alpha Co', 'Jan': 200, 'Feb': 210, 'Mar': 215}, {'account': 'Blue Inc', 'Jan': 50, 'Feb': 90, 'Mar': 95 }] Index It is the value (~key) we use as a reference for each element. (Note: It does not have to be unique) Most of the data contain at least one index End of explanation """ solar_planets = ['Mercury','Venus','Earth','Mars','Jupiter','Saturn','Uranus','Neptune'] splanets = pd.Series(solar_planets) # Tips and tricks # To access the Docstring for quick reference on syntax use ? 
before: #?pd.Series() splanets splanets.index """ Explanation: Series definition Series is a one-dimensional labeled array capable of holding any data type The axis labels are collectively referred to as the index This is the basic idea of how to create a Series dataframe: s = pd.Series(data, index=index) where data can be: - list - ndarray - python dictionary - scalar and index is a list of axis labels Create a Series array from a list If no index is passed, one will be created having values [0, ..., len(data) - 1] End of explanation """ s1 = pd.Series(np.random.randn(5)) s1 s1.index """ Explanation: Create a Series array from a numpy array If data is an ndarray, index must be the same length as data. If no index is passed, one will be created having values [0, ..., len(data) - 1] Not including index: End of explanation """ s2 = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e']) s2 s2.index """ Explanation: Including index End of explanation """ s3 = pd.Series(5., index=['a', 'b', 'c', 'd', 'e']) s3 s3.index """ Explanation: From scalar value If data is a scalar value, an index must be provided The value will be repeated to match the length of index End of explanation """ d = {'a' : 0., 'b' : 1., 'c' : 2.} sd = pd.Series(d) sd """ Explanation: Create a Series array from a python dictionary End of explanation """ sales = [{'account': 'Jones LLC', 'Jan': 150, 'Feb': 200, 'Mar': 140}, {'account': 'Alpha Co', 'Jan': 200, 'Feb': 210, 'Mar': 215}, {'account': 'Blue Inc', 'Jan': 50, 'Feb': 90, 'Mar': 95 }] df = pd.DataFrame(sales) df df.info() df.index df = df.set_index('account') df """ Explanation: DataFrame definition DataFrame is a 2-dimensional labeled data structure with columns of potentially different types (see also Panel - 3-dimensional array). You can think of it like a spreadsheet or SQL table, or a dict of Series objects. It is generally the most commonly used pandas object. 
Like Series, DataFrame accepts many different kinds of input: Dict of 1D ndarrays, lists, dicts, or Series 2-D numpy.ndarray Structured or record ndarray A Series Another DataFrame From a list of dictionaries End of explanation """ d = {'one' : pd.Series([1., 2., 3.], index=['a', 'b', 'c']), 'two' : pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])} df = pd.DataFrame(d) df df.info() pd.DataFrame(d, index=['d', 'b', 'a']) df.index df.columns """ Explanation: From dict of Series or dicts End of explanation """ d = {'one' : [1., 2., 3., 4.], 'two' : [4., 3., 2., 1.]} pd.DataFrame(d) pd.DataFrame(d, index=['a', 'b', 'c', 'd']) """ Explanation: From dict of ndarrays / lists The ndarrays must all be the same length. If an index is passed, it must clearly also be the same length as the arrays. If no index is passed, the result will be range(n), where n is the array length. End of explanation """ data = np.random.random_sample((5, 5)) data df = pd.DataFrame(data) df # Add index df = pd.DataFrame(data,index = ['a','b','c','d','e']) df # Add column names df = pd.DataFrame(data, index = ['a','b','c','d','e'], columns = ['ra', 'dec','z_phot','z_true','imag']) df """ Explanation: From structured or record array The ndarrays must all be the same length. If an index is passed, it must clearly also be the same length as the arrays. If no index is passed, the result will be range(n), where n is the array length. 
End of explanation """ data2 = [{'a': 1, 'b': 2}, {'a': 5, 'b': 10, 'c': 20}] pd.DataFrame(data2) pd.DataFrame(data2, index=['first', 'second']) pd.DataFrame(data2, columns=['a', 'b']) """ Explanation: From a list of dicts End of explanation """ #Few galaxies with some properties: id, ra, dec, magi galaxies = [ {'id' : 1, 'ra' : 4.5, 'dec' : -55.6, 'magi' : 21.3}, {'id' : 3, 'ra' : 23.5, 'dec' : 23.6, 'magi' : 23.3}, {'id' : 25, 'ra' : 22.5, 'dec' : -0.3, 'magi' : 20.8}, {'id' : 17, 'ra' : 33.5, 'dec' : 15.6, 'magi' : 24.3} ] # %load -r 1-19 solutions/06_01_pandas.py """ Explanation: <a id=exercise1></a> Exercise 1: Selecting pandas structure Given a few galaxies with some properties ['id', 'ra', 'dec', 'magi'], choose which pandas structure to use and its index: End of explanation """ filename = '../resources/galaxy_sample.csv' !head -30 ../resources/galaxy_sample.csv """ Explanation: <a id=io></a> I/O Reading from different sources into a DataFrame Most of the times any study starts with an input file containing some data rather than having a python list or dictionary. Here we present three different data sources and how to read them: two file formats (CSV and FITS) and a database connection. Advanced: More and more frequently the amount of data to handle is larger and larger (Big Data era) and therefore files are huge. This is why we strongly recommend to always program by chunks (sometimes it is mandatory and also it is not straight forward to implement). 
- From a CSV (Comma Separated Value) file: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html Reading the full catalog at once (if the file is not very large) CSV file created using the following query (1341.csv.bz2): SELECT unique_gal_id, ra_gal, dec_gal, z_cgal, z_cgal_v, lmhalo, (mr_gal - 0.8 * (atan(1.5 * z_cgal)- 0.1489)) AS abs_mag, gr_gal AS color, (des_asahi_full_i_true - 0.8 * (atan(1.5 * z_cgal)- 0.1489)) AS app_mag FROM micecatv2_0_view TABLESAMPLE (BUCKET 1 OUT OF 512) End of explanation """ filename_bz2 = '../resources/galaxy_sample.csv.bz2' !head ../resources/galaxy_sample.csv.bz2 """ Explanation: CSV.BZ2 (less storage, slower when reading because of decompression) End of explanation """ # Field index name (known a priori from the header or the file description) unique_gal_id_field = 'unique_gal_id' galaxy_sample = pd.read_csv(filename, sep=',', index_col = unique_gal_id_field, comment='#', na_values = '\\N') galaxy_sample.head() galaxy_sample.tail() """ Explanation: Reading the full catalog at once (if the file is not very large) End of explanation """ galaxy_sample.describe() galaxy_sample.info() galaxy_sample_bz2 = pd.read_csv(filename_bz2, sep=',', index_col = unique_gal_id_field, comment='#', na_values = r'\N') galaxy_sample_bz2.head() """ Explanation: DataFrame.describe: Generates descriptive statistics that summarize the central tendency, dispersion and shape of a dataset’s distribution, excluding NaN values. End of explanation """ from astropy.table import Table """ Explanation: FITS file: Pandas does not read FITS files directly, so it is necessary to make some conversion. We have found 2 different approaches: Table method from astropy pyfits fitsio (see "Caveats and technicalities" section below) Not easy to read it by chunks (see also "Caveats and technicalities" section below) Note: we strongly recommend using CSV.BZ2!
Using astropy (or pyfits) This method does not support "by chunks" and therefore you have to read it all at once End of explanation """ filename = '../resources/galaxy_sample.fits' #?Table.read() data = Table.read(filename) type(data) df = data.to_pandas() df.head() df = df.set_index('unique_gal_id') df.head() df.shape df.values.dtype df.info() """ Explanation: FITS file created using the same query as the CSV file: End of explanation """ # For PostgreSQL access from sqlalchemy.engine import create_engine # Text wrapping import textwrap # Database configuration parameters #db_url = '{scheme}://{user}:{password}@{host}/{database}' db_url = 'sqlite:///../resources/pandas.sqlite' sql_sample = textwrap.dedent("""\ SELECT * FROM micecatv1 WHERE ABS(ra_mag-ra) > 0.05 """) index_col = 'id' # Create database connection engine = create_engine(db_url) df = pd.read_sql(sql_sample, engine,index_col = 'id') df.head() """ Explanation: - From Database: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql.html End of explanation """ outfile = '../resources/micecatv1_sample1.csv' with open(outfile, 'w') as f_out: df.to_csv(f_out, columns = ['ra', 'dec','ra_mag','dec_mag'], index=True, header=True ) """ Explanation: Write to csv file: End of explanation """ filename = '../resources/galaxy_sample.csv' outfile = '../resources/galaxy_sample_some_columns.csv' # chunk size gal_chunk = 100000 # Field index name (known a priori from the header or the file description) unique_gal_id_field = 'unique_gal_id' """ Explanation: Advanced example: Reading and writing by chunks End of explanation """ with open(filename, 'r') as galaxy_fd, open (outfile, 'w') as f_out: galaxy_sample_reader = pd.read_csv( galaxy_fd, sep=',', index_col = unique_gal_id_field, comment='#', na_values = '\\N', chunksize=gal_chunk ) for chunk, block in enumerate(galaxy_sample_reader): print(chunk) # In order not to write n chunk times the header (HELP PAU!) 
block.to_csv(f_out, columns = ['ra_gal','dec_gal','z_cgal_v'], index=True, header= chunk==0, mode='a' ) block.head() block.tail(3) """ Explanation: Opening file with the with method Creating a file object using read_csv method Looping by chunks using enumerate in order to also have the chunk number End of explanation """ # DataFrame plot method %matplotlib inline import matplotlib.pyplot as plt block['lmhalo'].plot.hist(bins=100, logy = True) plt.show() """ Explanation: DataFrame plot method (just for curiosity!) End of explanation """ # Same dataframe as before filename='../resources/galaxy_sample.csv.bz2' galaxy_sample = pd.read_csv(filename, sep=',', index_col = unique_gal_id_field, comment='#', na_values = r'\N') galaxy_sample.head() """ Explanation: <a id=selecting></a> SELECTING AND SLICING The idea of this section is to show how to slice and get and set subsets of pandas objects The basics of indexing are as follows: | Operation | Syntax | Result | |--------------------------------|------------------|---------------| | Select column | df[column label] | Series | | Select row by index | df.loc[index] | Series | | Select row by integer location | df.iloc[pos] | Series | | Slice rows | df[5:10] | DataFrame | | Select rows by boolean vector | df[bool_vec] | DataFrame | End of explanation """ galaxy_sample['ra_gal'].head() type(galaxy_sample['dec_gal']) galaxy_sample[['ra_gal','dec_gal','lmhalo']].head() """ Explanation: Select a column End of explanation """ galaxy_sample.loc[28581888] type(galaxy_sample.loc[28581888]) """ Explanation: Select a row by index End of explanation """ galaxy_sample.iloc[0] type(galaxy_sample.iloc[0]) """ Explanation: Select a row by integer location End of explanation """ galaxy_sample.iloc[3:7] galaxy_sample[3:7] type(galaxy_sample.iloc[3:7]) """ Explanation: Slice rows End of explanation """ # Boolean vector (galaxy_sample['ra_gal'] < 45).tail() type(galaxy_sample['ra_gal'] < 45) galaxy_sample[galaxy_sample['ra_gal'] < 45].head() # 
redshift shell galaxy_sample[(galaxy_sample.z_cgal <= 0.2) | (galaxy_sample.z_cgal >= 1.0)].head() galaxy_sample[(galaxy_sample.z_cgal <= 1.0) & (galaxy_sample.index.isin([5670656,13615360,3231232]))] galaxy_sample[(galaxy_sample['ra_gal'] < 1.) & (galaxy_sample['dec_gal'] < 1.)][['ra_gal','dec_gal']].head() """ Explanation: Select rows by boolean vector: The operators are: | for or, & for and, and ~ for not. These must be grouped by using parentheses. End of explanation """ galaxy_sample.tail(10) # Splitting the galaxies # Boolean mask has_disk_mask = (galaxy_sample['color']-0.29+0.03*galaxy_sample['abs_mag'] < 0) has_disk_mask.tail(10) print (len(has_disk_mask)) print (type(has_disk_mask)) # Counting how many spirals n_spiral = has_disk_mask.sum() # Counting how many ellipticals (note the parentheses: negate first, then sum) n_elliptical = (~has_disk_mask).sum() galaxy_sample[has_disk_mask].count() galaxy_sample[has_disk_mask]['hubble_type'] = 'Spiral' # It did not add any column! It was working in a view! galaxy_sample.tail(10) # This is the proper way of doing it if one wants to add another column galaxy_sample.loc[has_disk_mask, 'hubble_type'] = 'Spiral' galaxy_sample.loc[~has_disk_mask, 'hubble_type'] = 'Elliptical' galaxy_sample.tail(10) # We can use the numpy where method to do the same: galaxy_sample['color_type'] = np.where(has_disk_mask, 'Blue', 'Red') galaxy_sample.tail(10) # The proper way would be to use a boolean field galaxy_sample['has_disk'] = has_disk_mask galaxy_sample.tail(10) galaxy_sample.loc[~has_disk_mask, 'disk_length'] = 0. galaxy_sample.loc[has_disk_mask, 'disk_length'] = np.fabs( np.random.normal( 0., scale=0.15, size=n_spiral ) ) """ Explanation: Recap: loc works on labels in the index. iloc works on the positions in the index (so it only takes integers). Advanced example: estimate the size of the disk (disk_length) for a set of galaxies In this exercise we are going to use some of the previous examples.
Also we are going to introduce how to add a column and some other concepts We split the galaxies into two different populations, Ellipticals and Spirals, depending on their color and absolute magnitude: if color - 0.29 + 0.03 * abs_mag < 0 then Spiral else Elliptical How many galaxies are elliptical or spirals? Elliptical galaxies do not have any disk (and therefore disk_length = 0). The disk_length for spiral galaxies follows a normal distribution with mean = 0 and sigma = 0.15 (in arcsec). In addition, the minimum disk_length for a spiral galaxy is 1.e-3. End of explanation """ galaxy_sample.tail(10) # Minimum value for disk_length for spirals dl_min = 1.e-4 disk_too_small_mask = has_disk_mask & (galaxy_sample['disk_length'] < dl_min) disk_too_small_mask.sum() galaxy_sample.loc[disk_too_small_mask, 'disk_length'].head() galaxy_sample.loc[disk_too_small_mask, 'disk_length'] = dl_min galaxy_sample.loc[disk_too_small_mask, 'disk_length'].head() galaxy_sample.tail(10) """ Explanation: DO NOT LOOP THE PANDAS DATAFRAME IN GENERAL! End of explanation """ # %load -r 20-102 solutions/06_01_pandas.py """ Explanation: <a id=exercise2></a> Exercise 2: Estimate another galaxy property What is the mean value and the standard deviation of the disk_length for spiral galaxies? (Tip: use the .mean() and .std() methods) Estimate the bulge_length for elliptical galaxies. The bulge_length depends on the absolute magnitude in the following way: bulge_length = exp(-1.145 - 0.269 * (abs_mag - 23.)) How many galaxies have bulge_length > 1.0? In our model the maximum bulge_length for an elliptical galaxy is 0.5 arcsec. What is the mean value and the standard deviation of the bulge_length for elliptical galaxies? And for ellipticals with absolute magnitude brighter than -20?
End of explanation """ df1 = pd.DataFrame( {'A': ['A0', 'A1', 'A2', 'A3'], 'B': ['B0', 'B1', 'B2', 'B3'], 'C': ['C0', 'C1', 'C2', 'C3'], 'D': ['D0', 'D1', 'D2', 'D3']}, index=[0, 1, 2, 3] ) df2 = pd.DataFrame( {'A': ['A4', 'A5', 'A6', 'A7'], 'B': ['B4', 'B5', 'B6', 'B7'], 'C': ['C4', 'C5', 'C6', 'C7'], 'D': ['D4', 'D5', 'D6', 'D7']}, index=[4, 5, 6, 7] ) df3 = pd.DataFrame( {'A': ['A8', 'A9', 'A10', 'A11'], 'B': ['B8', 'B9', 'B10', 'B11'], 'C': ['C8', 'C9', 'C10', 'C11'], 'D': ['D8', 'D9', 'D10', 'D11']}, index=[8, 9, 10, 11] ) frames = [df1, df2, df3] result = pd.concat(frames) result # Multiindex result = pd.concat(frames, keys=['x', 'y','z']) result result.index result.loc['y'] df4 = pd.DataFrame( {'B': ['B2', 'B3', 'B6', 'B7'], 'D': ['D2', 'D3', 'D6', 'D7'], 'F': ['F2', 'F3', 'F6', 'F7']}, index=[2, 3, 6, 7] ) df4 df1 result = pd.concat([df1, df4]) result result = pd.concat([df1, df4], axis=1) result result = pd.concat([df1, df4], axis=1, join='inner') result """ Explanation: <a id=merging></a> Merge, join, and concatenate https://pandas.pydata.org/pandas-docs/stable/merging.html pandas provides various facilities for easily combining together Series, DataFrame, and Panel objects with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations. concat method: pd.concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False, keys=None, levels=None, names=None, verify_integrity=False, copy=True) End of explanation """ df1 df2 result = df1.append(df2) result df1 df4 result = df1.append(df4) result result = pd.concat([df1,df4]) result """ Explanation: Using append method: End of explanation """ result = pd.concat([df1,df4], ignore_index=True) result """ Explanation: Note: Unlike list.append method, which appends to the original list and returns nothing, append here does not modify df1 and returns its copy with df2 appended. 
End of explanation """ result = df1.append(df4, ignore_index = True) result """ Explanation: This is also a valid argument to DataFrame.append: End of explanation """ df1 s1 = pd.Series(['X0', 'X1', 'X2', 'X3'], name='X') s1 result = pd.concat([s1,df1]) result result = pd.concat([df1,s1], axis = 1) result s2 = pd.Series(['_0', '_1', '_2', '_3']) result = pd.concat([df1,s2,s2,s2], axis = 1) result """ Explanation: Mixing dimensions End of explanation """ data = [ # halo_id, gal_id, ra, dec, z, abs_mag' [1, 1, 21.5, 30.1, 0.21, -21.2], [1, 2, 21.6, 29.0, 0.21, -18.3], [1, 3, 21.4, 30.0, 0.21, -18.5], [2, 1, 45.0, 45.0, 0.42, -20.4], [3, 1, 25.0, 33.1, 0.61, -21.2], [3, 2, 25.1, 33.2, 0.61, -20.3] ] # %load -r 103-145 solutions/06_01_pandas.py """ Explanation: <a id=exercise3></a> Exercise 3: Generate a random catalog using concat method In this exercise we will use the concat method and show a basic example of multiIndex. Given a subset of a few galaxies with the following properties ['halo_id', 'gal_id' ,'ra', 'dec', 'z', 'abs_mag'], create a random catalog with 50 times more galaxies than the subset keeping the properties of the galaxies but placing them randomly in the first octant of the sky. 
The index of each galaxy is given by the tuple [halo_id, gal_id] End of explanation """ star_filename = '../resources/df_star.ssv' spectra_filename = '../resources/df_spectra.ssv' starid_specid_filename = '../resources/df_starid_specid.ssv' df_spectra = pd.read_csv(spectra_filename, index_col=['spec_id', 'band'], sep = ' ') df_spectra.head(41) df_starid_specid = pd.read_csv(starid_specid_filename, sep=' ') df_starid_specid.head(5) # Given that the file is somehow corrupted we open it without defining any index df_star = pd.read_csv(star_filename, sep=' ') df_star.head(10) df_star[(df_star['sdss_star_id'] == 1237653665258930303) & (df_star['filter'] == 'NB455')] # Drop duplicates: df_star.drop_duplicates(subset = ['sdss_star_id', 'filter'], inplace= True) df_star[(df_star['sdss_star_id'] == 1237653665258930303) & (df_star['filter'] == 'NB455')] df_starid_specid.head(5) """ Explanation: Merge method: Database-style DataFrame joining/merging: pandas has full-featured, high performance in-memory join operations idiomatically very similar to relational databases like SQL. These methods perform significantly better (in some cases well over an order of magnitude better) than other open source implementations (like base::merge.data.frame in R). The reason for this is careful algorithmic design and internal layout of the data in DataFrame See the cookbook for some advanced strategies Users who are familiar with SQL but new to pandas might be interested in a comparison with SQL pd.merge(left, right, how='inner', on=None, left_on=None, right_on=None, left_index=False, right_index=False, sort=True, suffixes=('_x', '_y'), copy=True, indicator=False) Example: Merging dataframes using the merge method (thanks Nadia!) Goal: build a dataframe merging 2 different dataframes with complementary information, through the relation given by a third dataframe. 
df_stars contains information of stars magnitudes per sdss_star_id and per filter: ['sdss_star_id', 'filter', 'expected_mag', 'expected_mag_err'] Note, the file is "somehow" corrupted and entries are duplicate several times Unique entries are characterized by sdss_star_id and filter df_spectra contains information of star flux per band (== filter) and per spec_id (!= sdss_star_id): ['spec_id', 'band', 'flux', 'flux_err'] Unique entries are characterized by spec_id and band df_spec_IDs allows to make the correspondence between sdss_star_id (== objID) and spec_id (== specObjID): ['objID', 'specObjID'] Unique entries are characterized by objID End of explanation """ df_spectra.reset_index(inplace = True) df_spectra.head() df_spectra.rename(columns={'band': 'filter'}, inplace = True) df_spectra.head() df_starid_specid.rename(columns={'objID':'sdss_star_id', 'specObjID':'spec_id'}, inplace = True) df_starid_specid.head() """ Explanation: We are going to unset the index and rename the columns in order to use the "on" argument: End of explanation """ df_star_merged = pd.merge(df_star, df_starid_specid, on='sdss_star_id') df_star_merged.head() df_star_merged = pd.merge(df_star_merged, df_spectra, on=['spec_id','filter']) df_star_merged.head(40) df_star_merged.set_index(['sdss_star_id', 'filter'], inplace = True) df_star_merged.head() # Each element has been observed in how many bands? 
count_bands = df_star_merged.groupby(level=0)['flux'].count() count_bands.head(20) df_star_merged.groupby(level=1)['flux_err'].mean().head(10) """ Explanation: Now we have everything ready to make the JOINs End of explanation """ # e.g.: the decimal value 0.1 cannot be represented exactly as a base 2 fraction (0.1 + 0.2) == 0.3 (0.1 + 0.2) - 0.3 """ Explanation: <a id=functions></a> More functions Looping a dataframe (iterrows): https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iterrows.html sort method: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html sample method: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sample.html Reshape dataframes (pivot, stack, unstack): http://nikgrozev.com/2015/07/01/reshaping-in-pandas-pivot-pivot-table-stack-and-unstack-explained-with-pictures/ Data cleaning: check for missing values (isnull): https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.isnull.html drop missing values (dropna): https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html fill the missing values with other values (fillna): https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html replace values with different values (replace): https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html Some general ideas to get home: Do not loop a dataframe! Try to work by chunks; create functions that work with chunks Work with standard formats and "already implemented" functions <a id=caveats></a> Caveats and technicalities Floating point limitations: Be careful with exact comparisons! End of explanation """ import fitsio filename = '../resources/galaxy_sample.fits' fits=fitsio.FITS(filename) data = fits[1] # Number of rows data.get_nrows() # chunk size gal_chunk = 300000 # e.g.to create the ranges! 
import math niter = int(math.ceil(data.get_nrows() / float(gal_chunk))) for i in range(niter): s = i*gal_chunk f = min((i+1)*gal_chunk, data.get_nrows()) chunk = data[s:f] print (i) print (type(chunk)) print (chunk.dtype) df_chunk = pd.DataFrame(chunk) print (type(df_chunk)) print (df_chunk.dtypes) df_chunk = df_chunk.set_index('unique_gal_id') print (df_chunk.head()) """ Explanation: FITS files fitsio And working by chunks End of explanation """ bad_filename = '../resources/steps.flagship.dat' df_bad = pd.read_csv(bad_filename) df_bad.head() df_bad = pd.read_csv(bad_filename, sep = ' ') """ Explanation: .values DataFrame attribute Some scipy functions do not allow to use pandas dataframe as arguments and therefore it is useful to use the values atribute, which is the numpy representation of NDFrame The dtype will be a lower-common-denominator dtype (implicit upcasting); that is to say if the dtypes (even of numeric types) are mixed, the one that accommodates all will be chosen. Use this with care if you are not dealing with the blocks. View vs. Copy https://pandas.pydata.org/pandas-docs/stable/indexing.html#returning-a-view-versus-a-copy Wrong input example: .dat Look at the file using e.g. head bash command Note that there are more than one space, and if you do tail filename, different number of "spaces" End of explanation """ filename = '../resources/steps.flagship.ssv' columns = ['step_num', 'r_min', 'r_max', 'r_med', 'a_med', 'z_med'] df = pd.read_csv(filename, sep = ' ', header = None, names = columns, index_col = 'step_num') df.head() """ Explanation: Necessary to "modify" the file in order to convert it into a standard csv file, e.g.: cat steps.flagship.dat | tr -s " " | sed 's/^ *//g' &gt; steps.flagship.ssv End of explanation """
HazyResearch/snorkel
tutorials/advanced/Parallel_Processing.ipynb
apache-2.0
%load_ext autoreload %autoreload 2 %matplotlib inline import os os.environ['SNORKELDB'] = 'postgres:///snorkel' from snorkel import SnorkelSession session = SnorkelSession() """ Explanation: Parallel Processing in Snorkel In this notebook, we'll do the same preprocessing as in the introduction tutorial, but using multiple processes to do it in parallel. Initializing a SnorkelSession First, we initialize a SnorkelSession, which manages a connection to a database automatically for us, and will enable us to save intermediate results. We need to specify a connection to a database that supports multiple connections. End of explanation """ from snorkel.parser import TSVDocPreprocessor n_docs = 2591 # number of documents to load; matches the count used below doc_preprocessor = TSVDocPreprocessor('../intro/data/articles.tsv', max_docs=n_docs) """ Explanation: Loading the Corpus End of explanation """ from snorkel.parser.spacy_parser import Spacy from snorkel.parser import CorpusParser corpus_parser = CorpusParser(parser=Spacy()) %time corpus_parser.apply(doc_preprocessor, count=2591, parallelism=4) """ Explanation: Running a CorpusParser The only thing we do differently from the introduction tutorial is specify how many processes to use: End of explanation """ from snorkel.models import candidate_subclass Spouse = candidate_subclass('Spouse', ['person1', 'person2']) from snorkel.candidates import Ngrams, CandidateExtractor from snorkel.matchers import PersonMatcher ngrams = Ngrams(n_max=7) person_matcher = PersonMatcher(longest_match_only=True) cand_extractor = CandidateExtractor(Spouse, [ngrams, ngrams], [person_matcher, person_matcher]) from snorkel.models import Document from util import number_of_people docs = session.query(Document).order_by(Document.name).all() train_sents = set() dev_sents = set() test_sents = set() for i, doc in enumerate(docs): for s in doc.sentences: if number_of_people(s) <= 5: if i % 10 == 8: dev_sents.add(s) elif i % 10 == 9: test_sents.add(s) else: train_sents.add(s) """ Explanation: Generating Candidates We can
also do candidate generation in parallel. We'll repeat some code from the introduction. End of explanation """ %%time for i, sents in enumerate([train_sents, dev_sents, test_sents]): cand_extractor.apply(sents, split=i, parallelism=4) print("Number of candidates:", session.query(Spouse).filter(Spouse.split == i).count()) """ Explanation: Finally, we'll again apply the candidate extractor with a specified number of parallel processes. End of explanation """
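To make the effect of parallelism=4 concrete: the runner hands each worker process a share of the input before applying the parser or extractor. The splitting step can be sketched with a plain-Python helper — the function partition below is my own illustration of the idea, not Snorkel's actual API:

```python
def partition(items, n_workers):
    """Split items into n_workers contiguous chunks of near-equal size."""
    chunk, extra = divmod(len(items), n_workers)
    out, start = [], 0
    for i in range(n_workers):
        # the first `extra` workers each take one leftover item
        end = start + chunk + (1 if i < extra else 0)
        out.append(items[start:end])
        start = end
    return out

sents = list(range(10))       # stand-in for a list of sentences
chunks = partition(sents, 4)
print([len(c) for c in chunks])  # [3, 3, 2, 2]
```

Each chunk would then be processed by one worker writing to the shared database, which is why the database backend must support multiple concurrent connections (hence PostgreSQL rather than SQLite here).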
godfreyduke/deep-learning
dcgan-svhn/DCGAN_Exercises.ipynb
mit
%matplotlib inline import pickle as pkl import matplotlib.pyplot as plt import numpy as np from scipy.io import loadmat import tensorflow as tf !mkdir data """ Explanation: Deep Convolutional GANs In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored in 2015 and has seen impressive results in generating new images; you can read the original paper here. You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST, so we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same.
End of explanation """ from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm data_dir = 'data/' if not isdir(data_dir): raise Exception("Data directory doesn't exist!") class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(data_dir + "train_32x32.mat"): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar: urlretrieve( 'http://ufldl.stanford.edu/housenumbers/train_32x32.mat', data_dir + 'train_32x32.mat', pbar.hook) if not isfile(data_dir + "test_32x32.mat"): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar: urlretrieve( 'http://ufldl.stanford.edu/housenumbers/test_32x32.mat', data_dir + 'test_32x32.mat', pbar.hook) """ Explanation: Getting the data Here you can download the SVHN dataset. Run the cell above and it'll download to your machine. End of explanation """ trainset = loadmat(data_dir + 'train_32x32.mat') testset = loadmat(data_dir + 'test_32x32.mat') """ Explanation: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above. End of explanation """ idx = np.random.randint(0, trainset['X'].shape[3], size=36) fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),) for ii, ax in zip(idx, axes.flatten()): ax.imshow(trainset['X'][:,:,:,ii], aspect='equal') ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) plt.subplots_adjust(wspace=0, hspace=0) """ Explanation: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake. 
End of explanation """ def scale(x, feature_range=(-1, 1)): # scale to (0, 1) x = ((x - x.min())/(255 - x.min())) # scale to feature_range min, max = feature_range x = x * (max - min) + min return x class Dataset: def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None): split_idx = int(len(test['y'])*(1 - val_frac)) self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:] self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:] self.train_x, self.train_y = train['X'], train['y'] self.train_x = np.rollaxis(self.train_x, 3) self.valid_x = np.rollaxis(self.valid_x, 3) self.test_x = np.rollaxis(self.test_x, 3) if scale_func is None: self.scaler = scale else: self.scaler = scale_func self.shuffle = shuffle def batches(self, batch_size): if self.shuffle: idx = np.arange(len(dataset.train_x)) np.random.shuffle(idx) self.train_x = self.train_x[idx] self.train_y = self.train_y[idx] n_batches = len(self.train_y)//batch_size for ii in range(0, len(self.train_y), batch_size): x = self.train_x[ii:ii+batch_size] y = self.train_y[ii:ii+batch_size] yield self.scaler(x), self.scaler(y) """ Explanation: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images. End of explanation """ def model_inputs(real_dim, z_dim): inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real') inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z') return inputs_real, inputs_z """ Explanation: Network Inputs Here, just creating some placeholders like normal. 
End of explanation """ def generator(z, output_dim, reuse=False, alpha=0.2, training=True): with tf.variable_scope('generator', reuse=reuse): # First fully connected layer x # Output layer, 32x32x3 logits = out = tf.tanh(logits) return out """ Explanation: Generator Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images. What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU. You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the archicture used in the original DCGAN paper: Note that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3. Exercise: Build the transposed convolutional network for the generator in the function below. Be sure to use leaky ReLUs on all the layers except for the last tanh layer, as well as batch normalization on all the transposed convolutional layers except the last one. End of explanation """ def discriminator(x, reuse=False, alpha=0.2): with tf.variable_scope('discriminator', reuse=reuse): # Input layer is 32x32x3 x = logits = out = return out, logits """ Explanation: Discriminator Here you'll build the discriminator. This is basically just a convolutional classifier like you've build before. The input to the discriminator are 32x32x3 tensors/images. 
You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, or 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers. You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU. Note: in this project, your batch normalization layers will always use batch statistics. (That is, always set training to True.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the training parameter appropriately. Exercise: Build the convolutional network for the discriminator. The input is a 32x32x3 image, the output is a sigmoid plus the logits. Again, use Leaky ReLU activations and batch normalization on all the layers except the first.
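The spatial arithmetic described above can be sanity-checked without TensorFlow: with 'same' padding, a stride-2 transposed convolution doubles the spatial size (so the generator's 4x4 reshaped layer reaches 32x32 in three steps), while a stride-2 convolution in the discriminator halves it. The depths chosen below are illustrative, not prescribed by the notebook:

```python
def transposed_conv_size(size, stride=2):
    # 'same' padding: output spatial size = input size * stride
    return size * stride

shape = (4, 1024)           # (spatial size, depth) after the first dense layer
depths = [512, 256, 3]      # halve the depth each step, end at 3 color channels
for d in depths:
    shape = (transposed_conv_size(shape[0]), d)
print(shape)  # (32, 3) -> matches the 32x32x3 SVHN output
```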
End of explanation """ def model_loss(input_real, input_z, output_dim, alpha=0.2): """ Get the loss for the discriminator and generator :param input_real: Images from the real dataset :param input_z: Z input :param out_channel_dim: The number of channels in the output image :return: A tuple of (discriminator loss, generator loss) """ g_model = generator(input_z, output_dim, alpha=alpha) d_model_real, d_logits_real = discriminator(input_real, alpha=alpha) d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha) d_loss_real = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real))) d_loss_fake = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake))) g_loss = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake))) d_loss = d_loss_real + d_loss_fake return d_loss, g_loss """ Explanation: Model Loss Calculating the loss like before, nothing new here. 
End of explanation """ def model_opt(d_loss, g_loss, learning_rate, beta1): """ Get optimization operations :param d_loss: Discriminator loss Tensor :param g_loss: Generator loss Tensor :param learning_rate: Learning Rate Placeholder :param beta1: The exponential decay rate for the 1st moment in the optimizer :return: A tuple of (discriminator training operation, generator training operation) """ # Get weights and bias to update t_vars = tf.trainable_variables() d_vars = [var for var in t_vars if var.name.startswith('discriminator')] g_vars = [var for var in t_vars if var.name.startswith('generator')] # Optimize with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)): d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars) g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars) return d_train_opt, g_train_opt """ Explanation: Optimizers Not much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics. End of explanation """ class GAN: def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5): tf.reset_default_graph() self.input_real, self.input_z = model_inputs(real_size, z_size) self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z, real_size[2], alpha=0.2) self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, 0.5) """ Explanation: Building the model Here we can use the functions we defined about to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object. 
End of explanation """ def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)): fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols, sharey=True, sharex=True) for ax, img in zip(axes.flatten(), samples[epoch]): ax.axis('off') img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8) ax.set_adjustable('box-forced') im = ax.imshow(img, aspect='equal') plt.subplots_adjust(wspace=0, hspace=0) return fig, axes """ Explanation: Here is a function for displaying generated images. End of explanation """ def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)): saver = tf.train.Saver() sample_z = np.random.uniform(-1, 1, size=(72, z_size)) samples, losses = [], [] steps = 0 with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for x, y in dataset.batches(batch_size): steps += 1 # Sample random noise for G batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size)) # Run optimizers _ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z}) _ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x}) if steps % print_every == 0: # At the end of each epoch, get the losses and print them out train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x}) train_loss_g = net.g_loss.eval({net.input_z: batch_z}) print("Epoch {}/{}...".format(e+1, epochs), "Discriminator Loss: {:.4f}...".format(train_loss_d), "Generator Loss: {:.4f}".format(train_loss_g)) # Save losses to view after training losses.append((train_loss_d, train_loss_g)) if steps % show_every == 0: gen_samples = sess.run( generator(net.input_z, 3, reuse=True, training=False), feed_dict={net.input_z: sample_z}) samples.append(gen_samples) _ = view_samples(-1, samples, 6, 12, figsize=figsize) plt.show() saver.save(sess, './checkpoints/generator.ckpt') with open('samples.pkl', 'wb') as f: pkl.dump(samples, f) return losses, samples """ Explanation: And another 
function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt. End of explanation """
real_size = (32,32,3)
z_size = 100
learning_rate = 0.001
batch_size = 64
epochs = 1
alpha = 0.01
beta1 = 0.9

# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)

# Load the data and train the network here
dataset = Dataset(trainset, testset)

losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))

fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()

_ = view_samples(-1, samples, 6, 12, figsize=(10,5))
""" Explanation: Hyperparameters GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them. Exercise: Find hyperparameters to train this GAN. The values found in the DCGAN paper work well, or you can experiment on your own. In general, you want the discriminator loss to be around 0.3; this means it is correctly classifying images as fake or real about 50% of the time. End of explanation """
mne-tools/mne-tools.github.io
0.14/_downloads/plot_forward.ipynb
bsd-3-clause
import mne from mne.datasets import sample data_path = sample.data_path() # the raw file containing the channel location + types raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif' # The paths to freesurfer reconstructions subjects_dir = data_path + '/subjects' subject = 'sample' """ Explanation: Head model and forward computation The aim of this tutorial is to be a getting started for forward computation. For more extensive details and presentation of the general concepts for forward modeling. See ch_forward. End of explanation """ mne.viz.plot_bem(subject=subject, subjects_dir=subjects_dir, brain_surfaces='white', orientation='coronal') """ Explanation: Computing the forward operator To compute a forward operator we need: a -trans.fif file that contains the coregistration info. a source space the BEM surfaces Compute and visualize BEM surfaces The BEM surfaces are the triangulations of the interfaces between different tissues needed for forward computation. These surfaces are for example the inner skull surface, the outer skull surface and the outer skill surface. Computing the BEM surfaces requires FreeSurfer and makes use of either of the two following command line tools: gen_mne_watershed_bem gen_mne_flash_bem Here we'll assume it's already computed. It takes a few minutes per subject. For EEG we use 3 layers (inner skull, outer skull, and skin) while for MEG 1 layer (inner skull) is enough. Let's look at these surfaces. The function :func:mne.viz.plot_bem assumes that you have the the bem folder of your subject FreeSurfer reconstruction the necessary files. 
End of explanation """ # The transformation file obtained by coregistration trans = data_path + '/MEG/sample/sample_audvis_raw-trans.fif' info = mne.io.read_info(raw_fname) mne.viz.plot_trans(info, trans, subject=subject, dig=True, meg_sensors=True, subjects_dir=subjects_dir) """ Explanation: Visualization the coregistration The coregistration is operation that allows to position the head and the sensors in a common coordinate system. In the MNE software the transformation to align the head and the sensors in stored in a so-called trans file. It is a FIF file that ends with -trans.fif. It can be obtained with mne_analyze (Unix tools), mne.gui.coregistration (in Python) or mrilab if you're using a Neuromag system. For the Python version see func:mne.gui.coregistration Here we assume the coregistration is done, so we just visually check the alignment with the following code. End of explanation """ src = mne.setup_source_space(subject, spacing='oct6', subjects_dir=subjects_dir, add_dist=False, overwrite=True) print(src) """ Explanation: Compute Source Space The source space defines the position of the candidate source locations. The following code compute such a cortical source space with an OCT-6 resolution. See setting_up_source_space for details on source space definition and spacing parameter. End of explanation """ mne.viz.plot_bem(subject=subject, subjects_dir=subjects_dir, brain_surfaces='white', src=src, orientation='coronal') """ Explanation: src contains two parts, one for the left hemisphere (4098 locations) and one for the right hemisphere (4098 locations). Sources can be visualized on top of the BEM surfaces. 
End of explanation """ import numpy as np # noqa from mayavi import mlab # noqa from surfer import Brain # noqa brain = Brain('sample', 'lh', 'inflated', subjects_dir=subjects_dir) surf = brain._geo vertidx = np.where(src[0]['inuse'])[0] mlab.points3d(surf.x[vertidx], surf.y[vertidx], surf.z[vertidx], color=(1, 1, 0), scale_factor=1.5) """ Explanation: However, only sources that lie in the plotted MRI slices are shown. Let's write a few lines of mayavi to see all sources. End of explanation """ conductivity = (0.3,) # for single layer # conductivity = (0.3, 0.006, 0.3) # for three layers model = mne.make_bem_model(subject='sample', ico=4, conductivity=conductivity, subjects_dir=subjects_dir) bem = mne.make_bem_solution(model) """ Explanation: Compute forward solution We can now compute the forward solution. To reduce computation we'll just compute a single layer BEM (just inner skull) that can then be used for MEG (not EEG). We specify if we want a one-layer or a three-layer BEM using the conductivity parameter. The BEM solution requires a BEM model which describes the geometry of the head the conductivities of the different tissues. End of explanation """ fwd = mne.make_forward_solution(raw_fname, trans=trans, src=src, bem=bem, fname=None, meg=True, eeg=False, mindist=5.0, n_jobs=2) print(fwd) """ Explanation: Note that the BEM does not involve any use of the trans file. The BEM only depends on the head geometry and conductivities. It is therefore independent from the MEG data and the head position. Let's now compute the forward operator, commonly referred to as the gain or leadfield matrix. See :func:mne.make_forward_solution for details on parameters meaning. End of explanation """ leadfield = fwd['sol']['data'] print("Leadfield size : %d sensors x %d dipoles" % leadfield.shape) """ Explanation: We can explore the content of fwd to access the numpy array that contains the gain matrix. End of explanation """
metpy/MetPy
v1.0/_downloads/0eff36d3fdf633f2a71ae3e92fdeb5b8/Simple_Sounding.ipynb
bsd-3-clause
import matplotlib.pyplot as plt import numpy as np import pandas as pd import metpy.calc as mpcalc from metpy.cbook import get_test_data from metpy.plots import add_metpy_logo, SkewT from metpy.units import units # Change default to be better for skew-T plt.rcParams['figure.figsize'] = (9, 9) # Upper air data can be obtained using the siphon package, but for this example we will use # some of MetPy's sample data. col_names = ['pressure', 'height', 'temperature', 'dewpoint', 'direction', 'speed'] df = pd.read_fwf(get_test_data('jan20_sounding.txt', as_file_obj=False), skiprows=5, usecols=[0, 1, 2, 3, 6, 7], names=col_names) # Drop any rows with all NaN values for T, Td, winds df = df.dropna(subset=('temperature', 'dewpoint', 'direction', 'speed' ), how='all').reset_index(drop=True) """ Explanation: Simple Sounding Use MetPy as straightforward as possible to make a Skew-T LogP plot. End of explanation """ p = df['pressure'].values * units.hPa T = df['temperature'].values * units.degC Td = df['dewpoint'].values * units.degC wind_speed = df['speed'].values * units.knots wind_dir = df['direction'].values * units.degrees u, v = mpcalc.wind_components(wind_speed, wind_dir) skew = SkewT() # Plot the data using normal plotting functions, in this case using # log scaling in Y, as dictated by the typical meteorological plot skew.plot(p, T, 'r') skew.plot(p, Td, 'g') skew.plot_barbs(p, u, v) # Add the relevant special lines skew.plot_dry_adiabats() skew.plot_moist_adiabats() skew.plot_mixing_lines() skew.ax.set_ylim(1000, 100) # Add the MetPy logo! 
fig = plt.gcf() add_metpy_logo(fig, 115, 100) # Example of defining your own vertical barb spacing skew = SkewT() # Plot the data using normal plotting functions, in this case using # log scaling in Y, as dictated by the typical meteorological plot skew.plot(p, T, 'r') skew.plot(p, Td, 'g') # Set spacing interval--Every 50 mb from 1000 to 100 mb my_interval = np.arange(100, 1000, 50) * units('mbar') # Get indexes of values closest to defined interval ix = mpcalc.resample_nn_1d(p, my_interval) # Plot only values nearest to defined interval values skew.plot_barbs(p[ix], u[ix], v[ix]) # Add the relevant special lines skew.plot_dry_adiabats() skew.plot_moist_adiabats() skew.plot_mixing_lines() skew.ax.set_ylim(1000, 100) # Add the MetPy logo! fig = plt.gcf() add_metpy_logo(fig, 115, 100) # Show the plot plt.show() """ Explanation: We will pull the data out of the example dataset into individual variables and assign units. End of explanation """
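mpcalc.resample_nn_1d above picks, for each requested pressure level, the index of the closest observed level, which is what thins the wind barbs to the 50 mb grid. A stdlib-only sketch of that nearest-neighbor lookup (an illustration of the idea, not MetPy's implementation):

```python
def resample_nn_1d_sketch(values, targets):
    """For each target, return the index of the closest entry in values."""
    return [min(range(len(values)), key=lambda i: abs(values[i] - t))
            for t in targets]

pressures = [1000, 850, 700, 500]                    # hPa, part of a sounding
print(resample_nn_1d_sketch(pressures, [900, 550]))  # [1, 3]
```

Plotting `p[ix], u[ix], v[ix]` with these indices then keeps one barb per target level instead of one per observation.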
jrieke/machine-intelligence-2
sheet11/sheet11_1.ipynb
mit
from __future__ import division, print_function import matplotlib.pyplot as plt %matplotlib inline import scipy.stats import numpy as np import math from scipy.ndimage import imread import sys """ Explanation: Machine Intelligence II - Team MensaNord Sheet 11 Nikolai Zaki Alexander Moore Johannes Rieke Georg Hoelger Oliver Atanaszov End of explanation """ # import image img_orig = imread('testimg.jpg').flatten() print("$img_orig") print("shape: \t\t", img_orig.shape) # = vector print("values: \t from ", img_orig.min(), " to ", img_orig.max(), "\n") # "img" holds 3 vectors img = np.zeros((3,img_orig.shape[0])) print("$img") print("shape: \t\t",img.shape) std = [0, 0.05, 0.1] for i in range(img.shape[1]): # normalize => img[0] img[0][i] = img_orig[i] / 255 # gaussian noise => img[1] img[2] img[1][i] = img[0][i] + np.random.normal(0, std[1]) img[2][i] = img[0][i] + np.random.normal(0, std[2]) print(img[:, 0:4]) """ Explanation: Exercise 1.0 Load the data into a vector and normalize it such that the values are between 0 and 1. Create two new datasets by adding Gaussian noise with zero mean and standard deviation σ N ∈ {0.05, 0.1}. 
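The two steps asked for in Exercise 1.0 — normalize to [0, 1], then corrupt with zero-mean Gaussian noise — can be sketched with the stdlib alone (a seeded `random.Random` stands in for `np.random.normal` so the sketch is reproducible; the helper name is illustrative):

```python
import random

rng = random.Random(42)  # fixed seed so the sketch is reproducible

def noisy_normalized(pixels, sigma):
    """Normalize 0..255 values to [0, 1] and add zero-mean Gaussian noise."""
    return [p / 255 + rng.gauss(0, sigma) for p in pixels]

values = noisy_normalized(list(range(256)), sigma=0.05)
# Because the noise is zero-mean, the average shift away from the clean
# normalized values is tiny compared to sigma:
mean_shift = sum(v - p / 255 for v, p in zip(values, range(256))) / 256
print(mean_shift)
```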
End of explanation """ # histograms fig, axes = plt.subplots(1, 3, figsize=(15, 5)) for i, ax in enumerate(axes.flatten()): plt.sca(ax) plt.hist(img[i], 100, normed=1, alpha=0.75) plt.xlim(-0.1, 1.1) plt.ylim(0, 18) plt.xlabel("value") plt.ylabel("probability") plt.title('img[{}]'.format(i)) # divide probablity space in 100 bins nbins = 100 bins = np.linspace(0, 1, nbins+1) # holds data equivalent to shown histograms (but cutted from 0 to 1) elementsPerBin = np.zeros((3,nbins)) for i in range(3): ind = np.digitize(img[i], bins) elementsPerBin[i] = [len(img[i][ind == j]) for j in range(nbins)] # counts number of elements from bin '0' to bin 'j' sumUptoBinJ = np.asarray([[0 for i in range(nbins)] for i in range(3)]) for i in range(3): for j in range(nbins): sumUptoBinJ[i][j] = np.sum(elementsPerBin[i][0:j+1]) # plot plt.figure(figsize=(15, 5)) for i in range(3): plt.plot(sumUptoBinJ[i], '.-') plt.legend(['img[0]', 'img[1]', 'img[2]']) plt.xlabel('bin') plt.ylabel('empirical distribution functions'); """ Explanation: Create a figure showing the 3 histograms (original & 2 sets of noise corrupted data – use enough bins!). In an additional figure, show the three corresponding empirical distribution functions in one plot. End of explanation """ def H(vec, h): """ (rectangular) histogram kernel function """ vec = np.asarray(vec) return np.asarray([1 if abs(x)<.5 else 0 for x in vec]) """ Explanation: Exercise 1.1 Take a subset of P = 100 observations and estimate the probability density p̂ of intensities with a rectangular kernel (“gliding window”) parametrized by window width h. Plot the estimates p̂ resulting for (e.g. 
10) different samples of size P End of explanation """ def P_est(x, h, data, kernel = H): """ returns the probability that data contains values @ (x +- h/2) """ n = 1 #= data.shape[1] #number of dimensions (for multidmensional data) p = len(data) return 1/(h**n)/p*np.sum(kernel((data - x)/h, h)) # take 10 data sets with 100 observations (indexes 100k to 101k) # nomenclature: data_3(3, 10, 100) holds 3 times data(10, 100) P = 100 offset = int(100000) data_3 = np.zeros((3, 10,P)) for j in range(3): for i in range(10): data_3[j][i] = img[j][offset+i*P:offset+(i+1)*P] print(data_3.shape) # calculate probability estimation for (center +- h/2) on the 10 data sets h = .15 nCenters = 101 Centers = np.linspace(0,1,nCenters) fig, ax = plt.subplots(2,5,figsize=(15,6)) ax = ax.ravel() for i in range(10): ax[i].plot([P_est(center,h,data_3[0][i]) for center in Centers]) """ Explanation: $P(\underline{x}) = \frac{1}{h^n} \frac{1}{p} \Sigma_{\alpha=1}^{p} H(\frac{\underline{x} - \underline{x}^{(\alpha)}}{h})$ End of explanation """ testdata = img[0][50000:55000] # calculate average negative log likelihood for def avg_NegLL(data, h, kernel=H): sys.stdout.write(".") average = 0 for i in range(10): L_prob = [np.log(P_est(x,h,data[i],kernel)) for x in testdata] negLL = -1*np.sum(L_prob) # print(negLL) average += negLL average /= 10 return average # avg_NegLL(data,h,testdata) """ Explanation: Calculate the negative log-likelihood per datapoint of your estimator using 5000 samples from the data not used for the density estimation (i.e. the “test-set”). Get the average of the negative log-likelihood over the 10 samples. 
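The gliding-window estimator $\hat p$ above (rectangular kernel of width h) can be checked on data that is uniform on [0, 1], where the true density is 1 away from the edges. A stdlib-only sketch of P_est for 1-D data:

```python
def parzen_rect(x, h, data):
    """Rectangular-kernel density estimate at x (1-D gliding window)."""
    inside = sum(1 for v in data if abs(v - x) / h < 0.5)
    return inside / (h * len(data))

data = [i / 100 for i in range(100)]   # ~uniform on [0, 1)
print(parzen_rect(0.5, 0.25, data))    # close to the true density 1
```

As in the plots above, smaller h makes the estimate spikier (high variance) and larger h smears it out (high bias), which is exactly the trade-off the likelihood curves over h are probing.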
$\mathrm{negLL}(\{\underline{x}^{(\alpha)}\};\underline{w}) = - \Sigma_{\alpha=1}^{p} \ln P(\underline{x}^{(\alpha)};\underline{w})$ End of explanation """
hs = np.linspace(0.001, 0.999, 20)

def plot_negLL(data_3=data_3, kernel=H):
    fig = plt.figure(figsize=(12,8))
    for j in range(3):
        print("calc data[{}]".format(j))
        LLs = [avg_NegLL(data_3[j],h,kernel=kernel) for h in hs]
        plt.plot(hs,LLs)
        print()
    plt.legend(['img[0]', 'img[1]', 'img[2]'])
    plt.show()

plot_negLL()
""" Explanation: 2) Repeat this procedure (without plotting) for a sequence of kernel widths h to get the mean log likelihood (averaged over the different samples) resulting for each value of h. (a) Apply this procedure to all 3 datasets (original and the two noise-corrupted ones) to make a plot showing the obtained likelihoods (y-axis) vs. kernel width h (x-axis) as one line for each dataset. End of explanation """
P = 500
data_3b = np.zeros((3, 10,P))
for j in range(3):
    for i in range(10):
        data_3b[j][i] = img[j][offset+i*P:offset+(i+1)*P]

plot_negLL(data_3=data_3b)
""" Explanation: Points that are not plotted have the value inf because: $negLL = - log( \Pi_\alpha P(x^\alpha,w) )$ so if one single $P(x^\alpha,w) = 0$ occurs (x has 5000 elements) the result is -log(0)=inf (not defined) this only occurs with the histogram kernel. (b) Repeat the previous step (LL & plot) for samples of size P = 500. End of explanation """
def Gaussian(x,h):
    """ gaussian kernel function """
    return np.exp(-x**2/h/2)/np.sqrt(2*np.pi*h)

fig, ax = plt.subplots(2,5,figsize=(15,6))
h = .15
ax = ax.ravel()
for i in range(10):
    ax[i].plot([P_est(center,h,data_3[0][i],kernel=Gaussian) for center in Centers])

hs = np.linspace(0.001, 0.4, 20)
plot_negLL(kernel=Gaussian)

plot_negLL(data_3=data_3b, kernel=Gaussian)
""" Explanation: (c) Repeat the previous steps (a & b) for the Gaussian kernel with σ^2 = h. End of explanation """
hetaodie/hetaodie.github.io
assets/media/uda-ml/supervisedlearning/jc/为慈善机构寻找捐助者/charity_finish/charity/finding_donors/finding_donors.ipynb
mit
# TODO:总的记录数 n_records = len(data) # # TODO:被调查者 的收入大于$50,000的人数 n_greater_50k = len(data[data.income.str.contains('>50K')]) # # TODO:被调查者的收入最多为$50,000的人数 n_at_most_50k = len(data[data.income.str.contains('<=50K')]) # # TODO:被调查者收入大于$50,000所占的比例 greater_percent = (n_greater_50k / n_records) * 100 # 打印结果 print ("Total number of records: {}".format(n_records)) print ("Individuals making more than $50,000: {}".format(n_greater_50k)) print ("Individuals making at most $50,000: {}".format(n_at_most_50k)) print ("Percentage of individuals making more than $50,000: {:.2f}%".format(greater_percent)) """ Explanation: 机器学习纳米学位 监督学习 项目2: 为CharityML寻找捐献者 欢迎来到机器学习工程师纳米学位的第二个项目!在此文件中,有些示例代码已经提供给你,但你还需要实现更多的功能让项目成功运行。除非有明确要求,你无须修改任何已给出的代码。以'练习'开始的标题表示接下来的代码部分中有你必须要实现的功能。每一部分都会有详细的指导,需要实现的部分也会在注释中以'TODO'标出。请仔细阅读所有的提示! 除了实现代码外,你还必须回答一些与项目和你的实现有关的问题。每一个需要你回答的问题都会以'问题 X'为标题。请仔细阅读每个问题,并且在问题后的'回答'文字框中写出完整的答案。我们将根据你对问题的回答和撰写代码所实现的功能来对你提交的项目进行评分。 提示:Code 和 Markdown 区域可通过Shift + Enter快捷键运行。此外,Markdown可以通过双击进入编辑模式。 开始 在这个项目中,你将使用1994年美国人口普查收集的数据,选用几个监督学习算法以准确地建模被调查者的收入。然后,你将根据初步结果从中选择出最佳的候选算法,并进一步优化该算法以最好地建模这些数据。你的目标是建立一个能够准确地预测被调查者年收入是否超过50000美元的模型。这种类型的任务会出现在那些依赖于捐款而存在的非营利性组织。了解人群的收入情况可以帮助一个非营利性的机构更好地了解他们要多大的捐赠,或是否他们应该接触这些人。虽然我们很难直接从公开的资源中推断出一个人的一般收入阶层,但是我们可以(也正是我们将要做的)从其他的一些公开的可获得的资源中获得一些特征从而推断出该值。 这个项目的数据集来自UCI机器学习知识库。这个数据集是由Ron Kohavi和Barry Becker在发表文章_"Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid"_之后捐赠的,你可以在Ron Kohavi提供的在线版本中找到这个文章。我们在这里探索的数据集相比于原有的数据集有一些小小的改变,比如说移除了特征'fnlwgt' 以及一些遗失的或者是格式不正确的记录。 探索数据 运行下面的代码单元以载入需要的Python库并导入人口普查数据。注意数据集的最后一列'income'将是我们需要预测的列(表示被调查者的年收入会大于或者是最多50,000美元),人口普查数据中的每一列都将是关于被调查者的特征。 练习:数据探索 首先我们对数据集进行一个粗略的探索,我们将看看每一个类别里会有多少被调查者?并且告诉我们这些里面多大比例是年收入大于50,000美元的。在下面的代码单元中,你将需要计算以下量: 总的记录数量,'n_records' 年收入大于50,000美元的人数,'n_greater_50k'. 年收入最多为50,000美元的人数 'n_at_most_50k'. 年收入大于50,000美元的人所占的比例, 'greater_percent'. 
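The quantities asked for above are simple counts and a ratio over the income column. A stdlib-only sketch of the same computation on a toy list of labels (the toy data is illustrative, not the census proportions):

```python
# Toy version of the record counts above, on a plain list of income labels.
labels = ['>50K', '<=50K', '<=50K', '>50K', '<=50K', '<=50K', '<=50K', '<=50K']
n_records = len(labels)
n_greater_50k = sum(1 for l in labels if l == '>50K')
n_at_most_50k = n_records - n_greater_50k
greater_percent = n_greater_50k / n_records * 100
print(n_records, n_greater_50k, n_at_most_50k, greater_percent)  # 8 2 6 25.0
```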
提示: 您可能需要查看上面的生成的表,以了解'income'条目的格式是什么样的。 End of explanation """ # 为这个项目导入需要的库 import numpy as np import pandas as pd from time import time from IPython.display import display # 允许为DataFrame使用display() # 导入附加的可视化代码visuals.py import visuals as vs # 为notebook提供更加漂亮的可视化 %matplotlib inline # 导入人口普查数据 data = pd.read_csv("census.csv") # 成功 - 显示第一条记录 display(data.head(n=1)) # 将数据切分成特征和对应的标签 income_raw = data['income'] features_raw = data.drop('income', axis = 1) """ Explanation: 准备数据 在数据能够被作为输入提供给机器学习算法之前,它经常需要被清洗,格式化,和重新组织 - 这通常被叫做预处理。幸运的是,对于这个数据集,没有我们必须处理的无效或丢失的条目,然而,由于某一些特征存在的特性我们必须进行一定的调整。这个预处理都可以极大地帮助我们提升几乎所有的学习算法的结果和预测能力。 获得特征和标签 income 列是我们需要的标签,记录一个人的年收入是否高于50K。 因此我们应该把他从数据中剥离出来,单独存放。 End of explanation """ # 可视化 'capital-gain'和'capital-loss' 两个特征 vs.distribution(features_raw) """ Explanation: 转换倾斜的连续特征 一个数据集有时可能包含至少一个靠近某个数字的特征,但有时也会有一些相对来说存在极大值或者极小值的不平凡分布的的特征。算法对这种分布的数据会十分敏感,并且如果这种数据没有能够很好地规一化处理会使得算法表现不佳。在人口普查数据集的两个特征符合这个描述:'capital-gain'和'capital-loss'。 运行下面的代码单元以创建一个关于这两个特征的条形图。请注意当前的值的范围和它们是如何分布的。 End of explanation """ # 对于倾斜的数据使用Log转换 skewed = ['capital-gain', 'capital-loss'] features_raw[skewed] = data[skewed].apply(lambda x: np.log(x + 1)) # 可视化对数转换后 'capital-gain'和'capital-loss' 两个特征 vs.distribution(features_raw, transformed = True) """ Explanation: 对于高度倾斜分布的特征如'capital-gain'和'capital-loss',常见的做法是对数据施加一个<a href="https://en.wikipedia.org/wiki/Data_transformation_(statistics)">对数转换</a>,将数据转换成对数,这样非常大和非常小的值不会对学习算法产生负面的影响。并且使用对数变换显著降低了由于异常值所造成的数据范围异常。但是在应用这个变换时必须小心:因为0的对数是没有定义的,所以我们必须先将数据处理成一个比0稍微大一点的数以成功完成对数转换。 运行下面的代码单元来执行数据的转换和可视化结果。再次,注意值的范围和它们是如何分布的。 End of explanation """ from sklearn.preprocessing import MinMaxScaler # 初始化一个 scaler,并将它施加到特征上 scaler = MinMaxScaler() numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week'] features_raw[numerical] = scaler.fit_transform(data[numerical]) # 显示一个经过缩放的样例记录 display(features_raw.head(n = 1)) """ Explanation: 规一化数字特征 
In addition to transforming highly skewed features, it is usually good practice to apply some form of scaling to the numerical features. Applying a scaling to the data does not change the shape of each feature's distribution (such as 'capital-gain' or 'capital-loss' above); however, normalization ensures that each feature is treated equally when applying supervised learners. Note that once scaling has been applied, observing the data in its raw form no longer has the same meaning, as the example below illustrates.
Run the code cell below to normalize each numerical feature. We will use sklearn.preprocessing.MinMaxScaler for this task.
End of explanation
"""

# TODO: one-hot encode the 'features_raw' data using pandas.get_dummies()
features = pd.get_dummies(features_raw)

# TODO: encode 'income_raw' into numerical values
income = income_raw.replace(['>50K', '<=50K'], [1, 0])

# Print the number of features after one-hot encoding
encoded = list(features.columns)
print("{} total features after one-hot encoding.".format(len(encoded)))

# Uncomment the following line to see the encoded feature names
#print(encoded)

"""
Explanation: Exercise: Data Preprocessing
From the table in the data exploration above, we can see that several attributes are non-numeric for every record. Learning algorithms typically expect numeric input, which requires that non-numeric features (called categorical variables) be converted. One popular way to convert categorical variables is the one-hot encoding scheme. One-hot encoding creates a "dummy" variable for each possible category of each non-numeric feature. For example, assume someFeature has three possible values: A, B, or C. We encode this feature into someFeature_A, someFeature_B and someFeature_C.

| Feature X | | Feature X_A | Feature X_B | Feature X_C |
| :-: | | :-: | :-: | :-: |
| B | | 0 | 1 | 0 |
| C | ----> one-hot encode ----> | 0 | 0 | 1 |
| A | | 1 | 0 | 0 |

Additionally, as with the non-numeric features, we need to convert the non-numeric target label 'income' to numerical values so that the learning algorithm works correctly. Since this label has only two possible categories ("<=50K" and ">50K"), we do not need one-hot encoding and can simply encode them as the two classes 0 and 1. In the code cell below, you will implement the following:
- Use pandas.get_dummies() to apply one-hot encoding to the 'features_raw' data.
- Convert the target label 'income_raw' to numerical entries.
  - Convert "<=50K" to 0 and ">50K" to 1.
End of explanation
"""

# Import train_test_split
from sklearn.model_selection import train_test_split

# Split the 'features' and 'income' data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(features, income, test_size = 0.2, random_state = 0,
                                                    stratify = income)

# Further split 'X_train' and 'y_train' into training and validation sets
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=0,
                                                    stratify = y_train)

# Show the results of the split
print("Training set has {} samples.".format(X_train.shape[0]))
print("Validation set has {} samples.".format(X_val.shape[0]))
print("Testing set has {} samples.".format(X_test.shape[0]))

"""
Explanation: Shuffle and Split Data
All categorical variables have now been converted into numerical features, and all numerical features have been normalized. As usual, we now split the data (both features and their labels) into training and test sets: 80% of the data will be used for training and 20% for testing. The training data is then further split into a training set and a validation set, used for model selection and optimization.
Run the code cell below to perform the split.
End of explanation
"""

# You may not use scikit-learn here; implement the calculations yourself from the formulas.
beta = 0.5
TP = float(len(y_val[y_val == 1]))
FP = float(len(y_val[y_val == 0]))
FN = 0

# TODO: compute accuracy
accuracy = float(len(y_val[y_val == 1])) / len(y_val)

# TODO: compute precision
precision = float(TP) / (TP + FP)

# TODO: compute recall
recall = TP / (TP + FN)

# TODO: compute the F-score using the formula above with beta=0.5
fscore = (1 + 0.5**2) * ((precision * recall) / (0.5**2 * precision + recall))

# Print the results
print("Naive Predictor on validation data: \n \
    Accuracy score: {:.4f} \n \
    Precision: {:.4f} \n \
    Recall: {:.4f} \n \
    F-score: {:.4f}".format(accuracy, precision, recall, fscore))

"""
Explanation: Evaluating Model Performance
In this section we will try four different algorithms and determine which one models the data best. The four algorithms include a naive predictor and three supervised learners of your choice.
Metrics and the Naive Predictor
CharityML knows from its research that respondents with an annual income above \$50,000 are the most likely to donate. For this reason CharityML is particularly interested in accurately predicting who earns more than \$50,000, so accuracy seems an appropriate metric for evaluating a model. In addition, identifying someone who does not earn more than \$50,000 as someone who does would be harmful to CharityML, since it wants to find users willing to donate. Therefore, the model's ability to precisely predict those earning more than \$50,000 matters more than its ability to recall all such respondents. We can use the F-beta score, which takes both precision and recall into account:
$$ F_{\beta} = (1 + \beta^2) \cdot \frac{precision \cdot recall}{\left( \beta^2 \cdot precision \right) + recall} $$
In particular, when $\beta = 0.5$ more emphasis is placed on precision. This is called the F$_{0.5}$ score (or simply the F-score).
End of explanation
"""

# TODO: import two metrics from sklearn - fbeta_score and accuracy_score
from sklearn.metrics import fbeta_score, accuracy_score

def train_predict(learner, sample_size, X_train, y_train, X_val, y_val):
    '''
    inputs:
       - learner: the learning algorithm to be trained and predicted on
       - sample_size: the size of samples (number) to be drawn from training set
       - X_train: features training set
       - y_train: income training set
       - X_val: features validation set
       - y_val: income validation set
    '''
    results = {}

    # TODO: fit the learner to the training data using slicing with 'sample_size'
    start = time() # record the start time
    learner = learner.fit(X_train[:sample_size], y_train[:sample_size])
    end = time() # record the end time

    # TODO: compute the training time
    results['train_time'] = end - start
    print(results['train_time'])

    # TODO: get predictions on the validation set,
    #       then get predictions on the first 300 training points
    start = time() # record the start time
    predictions_val = learner.predict(X_val)
    predictions_train = learner.predict(X_train[:300])
    end = time() # record the end time

    # TODO: compute the prediction time
    results['pred_time'] = end - start

    # TODO: compute the accuracy on the first 300 training points
    results['acc_train'] = accuracy_score(y_train[:300], predictions_train)

    # TODO: compute the accuracy on the validation set
    results['acc_val'] = accuracy_score(y_val, predictions_val)

    # TODO: compute the F-score on the first 300 training points
    results['f_train'] = fbeta_score(y_train[:300], predictions_train, beta=0.5)

    # TODO: compute the F-score on the validation set
    results['f_val'] = fbeta_score(y_val, predictions_val, beta=0.5)

    # Success
    print("{} trained on {} samples.".format(learner.__class__.__name__, sample_size))

    # Return the results
    return results

"""
Explanation: Question 1 - Naive Predictor Performance
By looking at the number of respondents earning more vs. no more than \$50,000, we find that most respondents do not earn more than \$50,000. If we simply predicted "this person's income does not exceed \$50,000", we would get a prediction with an accuracy above 50% without ever looking at the data. Such a prediction is called naive. Applying a naive predictor to the data is important because it helps establish a benchmark for whether a model performs well.
Use the code cell below to compute the naive predictor's performance. Assign your results to 'accuracy', 'precision', 'recall' and 'fscore'; these values will be used later. Note that you may not use scikit-learn here - implement the calculations yourself from the formulas.
If we chose a model that always predicts a respondent's annual income to be above \$50,000 regardless of the input, what would its accuracy, precision, recall and F-score be on the validation set?
Supervised Learning Models
Question 2 - Model Application
You can choose from the following supervised learning models in scikit-learn:
- Gaussian Naive Bayes (GaussianNB)
- Decision Trees (DecisionTree)
- Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting)
- K-Nearest Neighbors
- Stochastic Gradient Descent Classifier (SGDC)
- Support Vector Machines (SVM)
- Logistic Regression (LogisticRegression)
Choose three models suitable for this problem from the list above and answer the corresponding questions.
Model 1
Model name
Answer: Decision tree
Describe a real-world application of this model. (You will need to do some research and cite your source.)
Answer: Deciding student admission eligibility (from a machine learning course unit on decision trees).
What are this model's strengths? When does it perform best?
Answer: Strengths: 1. decision trees are easy to implement and understand; 2. their computational complexity is relatively low and the output is easy to interpret. They perform best when the target function has discrete output values.
What are this model's weaknesses? When does it perform poorly?
Answer: It is prone to overfitting. It performs poorly when it relies too heavily on the data or when its parameters are badly chosen.
Given the characteristics of our dataset, why is this model suitable for the problem?
Answer: 1. The problem is nonlinear, and decision trees handle nonlinear problems well; 2. our data contains many boolean features, and some features may be only weakly related to the target.
Model 2
Model name
Answer: Gaussian Naive Bayes
Describe a real-world application of this model. (You will need to do some research and cite your source.)
Answer: Spam filtering, where the words in a document are used as classification features (from a machine learning course unit on naive Bayes).
What are this model's strengths? When does it perform best?
Answer: It remains effective with little data and is insensitive to missing data. It is well suited to small datasets.
What are this model's weaknesses? When does it perform poorly?
Answer: The naive Bayes model assumes that the attributes are mutually independent. In practice, attributes are often correlated, which degrades its classification performance.
Given the characteristics of our dataset, why is this model suitable for the problem?
Answer: The dataset's attributes are relatively weakly correlated, and the dataset is small.
Model 3
Model name
Answer: AdaBoost
Describe a real-world application of this model. (You will need to do some research and cite your source.)
Answer: Predicting whether a horse with colic will survive.
What are this model's strengths? When does it perform best?
Answer: It has low generalization error, is easy to code, can be applied on top of most classifiers, and requires no parameter tuning. It performs best at boosting classifier performance based on errors.
What are this model's weaknesses? When does it perform poorly?
Answer: It is sensitive to outliers. It performs poorly when the input data contains many extreme values.
Given the characteristics of our dataset, why is this model suitable for the problem?
Answer: Our dataset has many features and is fairly complex; in successive iterations the weights of misclassified samples grow, and correcting exactly this kind of error is AdaBoost's strength.
Exercise - Creating a Training and Predicting Pipeline
To properly evaluate the performance of each model you have chosen, it is important to create a training and validation pipeline that lets you quickly and efficiently train models on different training-set sizes and make predictions on the validation set. Your implementation here will be used in the following sections. In the code cell below, you will implement the following:
- Import fbeta_score and accuracy_score from sklearn.metrics.
- Fit the learner to the training set and record the training time.
- Make predictions on the first 300 training points and on the validation set, recording the prediction time.
- Compute the accuracy and F-score on the first 300 training points.
- Compute the accuracy and F-score on the validation set.
End of explanation
"""

# TODO: import three supervised learning models from sklearn
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import AdaBoostClassifier

# TODO: initialize the three models
clf_A = DecisionTreeClassifier(random_state=6)
clf_B = GaussianNB()
clf_C = AdaBoostClassifier(random_state=6)

# TODO: compute the number of data points corresponding to 1%, 10% and 100% of the training data
samples_1 = int(len(X_train)*0.01)
samples_10 = int(len(X_train)*0.1)
samples_100 = int(len(X_train))

# Collect results from the learners
results = {}
for clf in [clf_A, clf_B, clf_C]:
    clf_name = clf.__class__.__name__
    results[clf_name] = {}
    for i, samples in enumerate([samples_1, samples_10, samples_100]):
        results[clf_name][i] = train_predict(clf, samples, X_train, y_train, X_val, y_val)

# Visualize the evaluation results of the three chosen models
vs.evaluate(results, accuracy, fscore)

"""
Explanation: Exercise: Initial Model Evaluation
In the code cell below, you will need to implement the following:
- Import the three supervised learning models you discussed above.
- Initialize the three models and store them in 'clf_A', 'clf_B' and 'clf_C'.
  - Use the models' default parameter values - you will tune one model's parameters in a later section.
  - Set random_state (if the model has this parameter).
- Compute how many data points correspond to 1%, 10% and 100% of the training data, and store these values in 'samples_1', 'samples_10' and 'samples_100'.
Note: depending on which algorithms you chose, the code below may take some time to run!
End of explanation
"""

# TODO: import 'GridSearchCV', 'make_scorer' and any other libraries needed
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import fbeta_score, make_scorer

# TODO: initialize the classifier
clf = AdaBoostClassifier(random_state=0)

# TODO: create the list of parameters you wish to tune
parameters = {'n_estimators': [50, 100, 200]}

# TODO: create an fbeta_score scoring object
scorer = make_scorer(fbeta_score, beta=0.5)

# TODO: perform a grid search on the classifier using 'scorer' as the scoring function
grid_obj = GridSearchCV(clf, parameters, scoring=scorer)

# TODO: fit the grid search object to the training data and find the best parameters
grid_obj = grid_obj.fit(X_train, y_train)

# Get the best estimator
best_clf = grid_obj.best_estimator_

# Make predictions with the untuned model
predictions = (clf.fit(X_train, y_train)).predict(X_val)
best_predictions = best_clf.predict(X_val)

# Report the tuned model
print("best_clf\n------")
print(best_clf)

# Report scores before and after tuning
print("\nUnoptimized model\n------")
print("Accuracy score on validation data: {:.4f}".format(accuracy_score(y_val, predictions)))
print("F-score on validation data: {:.4f}".format(fbeta_score(y_val, predictions, beta = 0.5)))
print("\nOptimized Model\n------")
print("Final accuracy score on the validation data: {:.4f}".format(accuracy_score(y_val, best_predictions)))
print("Final F-score on the validation data: {:.4f}".format(fbeta_score(y_val, best_predictions, beta = 0.5)))

"""
Explanation: Improving Results
In this final section, you will choose the best of the three supervised learning models and use grid search on the entire training set (X_train and y_train) to optimize at least one parameter, obtaining an F-score better than the untuned model's.
Question 3 - Choosing the Best Model
Based on the evaluation you performed above, explain to CharityML in one or two paragraphs which of the three models is most suitable for identifying respondents with an annual income above \$50,000.
Hint: your answer should address the evaluation metrics, the prediction/training time, and whether the algorithm suits this data.
Answer: DecisionTree achieved the best accuracy score and F-score on the training set among the three models. Its performance on the test set was not quite as good - with untuned parameters it showed mild overfitting - but tuning the parameters should remove that problem. Although its training time on the full data is long, it is still much shorter than AdaBoost's, and its prediction (query) time is short. Once the model is trained, the main remaining work is querying, which consumes few resources and little cost, so I decided to use DecisionTree.
Question 4 - Describing the Model in Layman's Terms
In one or two paragraphs, explain to CharityML, in terms a layperson can understand, how the final model works. Describe the model's key characteristics - for example, how it is trained and how it makes predictions - while avoiding advanced mathematical or technical jargon, formulas, and algorithm-specific terms.
Answer:
Training
Based on one feature of the data, split all the data as widely as possible, then repeat this step on each partition until every feature has been considered; the outcome of this chain of decisions is the data's class, and the chain of decisions itself forms a decision tree.
Repeat the step above several times, each time drawing a random sample of the data with replacement, to build several different decision trees; taken together these trees form a random forest, which avoids overfitting.
Prediction
For a new sample, each decision tree casts an equally weighted vote on its class, and the majority vote is taken as the classification result.
Exercise: Model Tuning
Tune the parameters of your chosen model. Use grid search (GridSearchCV) to tune at least one important model parameter, trying at least 3 different values, and use the entire training set to do so. In the code cell below, you will implement the following:
- Import sklearn.model_selection.GridSearchCV and sklearn.metrics.make_scorer.
- Initialize your chosen classifier and store it in clf.
  - Set random_state (if the model has this parameter).
- Create a dictionary of the parameters you wish to tune, e.g. parameters = {'parameter' : [list of values]}.
  - Note: if your learner has a max_features parameter, do not tune it!
- Use make_scorer to create an fbeta_score scoring object (with $\beta = 0.5$).
- Run a grid search on clf with 'scorer' as the scoring function and store the result in grid_obj.
- Fit the grid search object to the training data (X_train, y_train) and store the result in grid_fit.
Note: depending on your parameter list, the code below may take some time to run!
End of explanation
"""

# TODO: import a supervised learning model that has 'feature_importances_'

# TODO: train the supervised model on the training set
model = DecisionTreeClassifier()
model.fit(X_train, y_train)

# TODO: extract the feature importances
importances = model.feature_importances_

# Plot
vs.feature_plot(importances, X_train, y_train)

"""
Explanation: Question 5 - Final Model Evaluation
What are your optimal model's accuracy and F-score on the test data? Are these scores better or worse than the unoptimized model's?
Note: fill in your results in the table below, then discuss them in the answer box.
Results:
| Metric | Unoptimized Model | Optimized Model |
| :------------: | :---------------: | :-------------: |
| Accuracy | 0.8648 | 0.8715 |
| F-score | 0.7443 | 0.7561 |
Answer: 0.8715 and 0.7561 - better than the unoptimized model.
Feature Importance
An important task when performing supervised learning on data such as this census data is determining which features provide the most predictive power. Focusing on the relationship between a small number of effective features and the label simplifies our understanding of the phenomenon, which is often useful. In the context of this project, this means we wish to identify a small number of features that most strongly predict whether a respondent earns more than \$50,000.
Choose a scikit-learn classifier that has a 'feature_importances_' attribute (e.g. AdaBoost, random forest). The 'feature_importances_' attribute ranks the features by importance. In the next code cell, fit this classifier to the training set and use the attribute to determine the five most important features of the census data.
Question 6 - Observing Feature Relevance
When exploring the data, we saw that each record in this census dataset has thirteen available features.
Of these thirteen, which five features do you believe are the most important for prediction, and why? How would you rank them?
Answer:
- Feature 1: education_level: people with higher degrees are more likely to earn high incomes;
- Feature 2: capital-gain: those with additional capital gains generally have higher incomes;
- Feature 3: occupation: income is closely tied to one's occupation;
- Feature 4: hours_per_week: hours worked per week are positively correlated with income;
- Feature 5: age: in today's society, income generally grows with age as experience accumulates.
capital-gain > age > hours_per_week > education_level > occupation
Exercise - Extracting Feature Importance
Choose a scikit-learn supervised learning classifier with a feature_importances_ attribute, which ranks the features by their importance for making predictions according to the chosen algorithm.
In the code cell below, you will implement the following:
- If this model differs from the three you used earlier, import the supervised learning model from sklearn.
- Train the supervised model on the entire training set.
- Extract the feature importances using the model's 'feature_importances_' attribute.
End of explanation
"""

# Import the model-cloning utility
from sklearn.base import clone

# Reduce the feature space
X_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]]
X_val_reduced = X_val[X_val.columns.values[(np.argsort(importances)[::-1])[:5]]]

# Train the "best" model found by the earlier grid search
clf_on_reduced = (clone(best_clf)).fit(X_train_reduced, y_train)

# Make new predictions
reduced_predictions = clf_on_reduced.predict(X_val_reduced)

# Report the final model's scores for each version of the data
print("Final Model trained on full data\n------")
print("Accuracy on validation data: {:.4f}".format(accuracy_score(y_val, best_predictions)))
print("F-score on validation data: {:.4f}".format(fbeta_score(y_val, best_predictions, beta = 0.5)))
print("\nFinal Model trained on reduced data\n------")
print("Accuracy on validation data: {:.4f}".format(accuracy_score(y_val, reduced_predictions)))
print("F-score on validation data: {:.4f}".format(fbeta_score(y_val, reduced_predictions, beta = 0.5)))

"""
Explanation: Question 7 - Extracting Feature Importance
Observe the visualization created above showing the five features most relevant for predicting whether a respondent's annual income exceeds \$50,000.
Do the weights of these five features sum to more than 0.5?<br>
How do these five features compare with the ones you discussed in Question 6?<br>
If your answers are similar, how does the visualization support your reasoning?<br>
If they differ, why do you think these features are more relevant?
Answer: 1. The weights of the five features sum to more than 0.5 (0.22+0.17+0.12+0.12+0.10).
2. They partially differ.
3. Taller bars mean larger weights and therefore greater importance, which confirms my feature ranking.
Feature Selection
How would the model perform if we used only a subset of the available features? With fewer features to train on, we would expect training and prediction to take less time. From the visualization above, we see that the five most important features contribute more than half of the total importance of all features in the data. This suggests we can try to reduce the feature space and simplify the information the model needs to learn. The code cell below uses the optimized model you found earlier and trains it on the same training set, using only the five most important features.
End of explanation
"""

#TODO test your model on testing data and report accuracy and F score
res = best_clf.predict(X_test)

from sklearn.ensemble import AdaBoostClassifier
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf = clf.fit(X_train, y_train)
res = clf.predict(X_test)

from sklearn.metrics import fbeta_score, accuracy_score
print(accuracy_score(y_true=y_test, y_pred=res))
print(fbeta_score(y_true=y_test, y_pred=res, beta=0.5))

"""
Explanation: Question 8 - Effects of Feature Selection
How do the final model's F-score and accuracy on the data with only five features compare with those on the full-feature data?
If training time were a factor to consider, would you use the reduced-feature data as your training set?
Answer: The F-score and accuracy are higher when using all of the feature data.
I would consider it, since it greatly shortens the training time.
Question 9 - Testing Your Model on the Test Set
It is finally time to test. Remember, the test set may only be used once.
Using the model you are most confident in, test it on the test set and compute the accuracy and F-score.
Briefly explain why you chose this model and analyze the test results.
Its strengths are low generalization error, ease of coding, applicability on top of most classifiers, and no parameter tuning; it performs best at boosting classifier performance based on errors. The prediction results are good.
End of explanation
"""
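Outside the exercise's no-scikit-learn constraint, it is worth cross-checking the hand-derived precision/recall/F-beta formulas against the library implementations. A minimal sketch with made-up labels (the y_val values below are illustrative, not drawn from the census data):

```python
import numpy as np
from sklearn.metrics import fbeta_score, precision_score, recall_score

# Hypothetical validation labels: 1 = income >50K, 0 = income <=50K
y_val = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])
# The naive predictor always answers ">50K"
y_pred = np.ones_like(y_val)

# Hand-computed quantities, as the exercise requires
TP = float(np.sum((y_val == 1) & (y_pred == 1)))
FP = float(np.sum((y_val == 0) & (y_pred == 1)))
FN = float(np.sum((y_val == 1) & (y_pred == 0)))
precision = TP / (TP + FP)
recall = TP / (TP + FN)
beta = 0.5
fscore = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

# Cross-check against scikit-learn
assert abs(precision - precision_score(y_val, y_pred)) < 1e-12
assert abs(recall - recall_score(y_val, y_pred)) < 1e-12
assert abs(fscore - fbeta_score(y_val, y_pred, beta=beta)) < 1e-12
print(precision, recall, fscore)  # -> 0.3 1.0 0.34883720930232553
```

With β=0.5 the score leans toward precision, which is why the always-positive naive predictor scores poorly despite its perfect recall.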
Oli4/lsi-material
Foundations of Information Management/Sheet 4 - SQL queries.ipynb
mit
cur.execute('''SELECT film.title
FROM film, person, participation
WHERE film.genre LIKE '%Thriller%'
AND film.id = participation.film
AND person.id = participation.person
AND participation.function = "director"
AND person.name = "Spielberg"
AND person.firstname = "Steven" ''')

for row in cur.fetchall():
    print(row[0])
"""
Explanation: a) Which thrillers were directed by Steven Spielberg?
End of explanation
"""
cur.execute('''SELECT DISTINCT person.firstname, person.name
FROM person, participation
WHERE person.id = participation.person
AND participation.person IN (SELECT participation.person
    FROM participation
    GROUP BY participation.person
    HAVING count(*)>20);''')

cur.execute('''SELECT person.firstname, person.name
FROM (person JOIN participation ON participation.person=person.ID)
GROUP BY person.ID
HAVING COUNT(participation.film) > 20;''')
# Both statements work.

for row in cur.fetchall():
    print(row[0], row[1])
"""
Explanation: b) Who acted in at least 20 different films?
End of explanation
"""
cur.execute('''SELECT show.date, cinema.name, cinema.city
FROM show, film, cinema
WHERE show.film = film.id
AND show.cinema = cinema.id
AND film.title="Alice in Wonderland";''')

for row in cur.fetchall():
    print(row[0], row[1], row[2])
"""
Explanation: c) List all shows of “Alice in Wonderland”.
End of explanation
"""
cur.execute('''SELECT p.firstname, p.name, f.title
FROM (((person p INNER JOIN participation par ON p.ID=par.person AND par.function="director")
    INNER JOIN film f ON par.film=f.ID)
    INNER JOIN participation par2 ON p.ID=par2.person AND par2.film=par.film AND par2.function="actor")
ORDER BY p.name;''')
'''
Some of the results seem counterintuitive, such as actors in animation movies or several people
both acting and directing in the same movie. But this is due to co-directors and the voice cast
of animation movies.
'''
for row in cur.fetchall()[:20]:
    print(row[0], row[1], row[2])
"""
Explanation: d) Who acted in his/her own movie? 
End of explanation """ cur.execute('''SELECT DISTINCT c.name, c.city FROM (cinema c INNER JOIN show s ON c.ID=s.cinema) WHERE s.film IN (SELECT f.ID FROM (film f INNER JOIN participation par ON f.ID= par.film) WHERE par.person= (SELECT p.ID FROM person p WHERE p.name="Winslet" AND p.firstname="Kate")) ;''') cur.execute('''SELECT DISTINCT c.name, c.city FROM (((cinema c JOIN show s ON c.ID=s.cinema) JOIN participation par ON s.film = par.film) JOIN person p ON p.ID=person) WHERE p.name="Winslet" AND p.firstname="Kate" ;''') for row in cur.fetchall()[:20]: print(row) """ Explanation: e) Which cinemas show films with Kate Winslet? End of explanation """ cur.execute('''SELECT DISTINCT f.title FROM ((film f INNER JOIN participation par ON f.ID = par.film AND par.function='director') INNER JOIN participation par1 ON f.ID = par1.film AND par1.function='director' AND par.person IS NOT par1.person) ;''') cur.execute('''SELECT f.title FROM (film f JOIN participation par ON f.ID = par.film) WHERE par.function='director' GROUP BY ID HAVING COUNT(*) > 1 ORDER BY f.title asc ;''') for row in cur.fetchall()[:20]: print(*row) """ Explanation: f) Which films have more than one director? End of explanation """ cur.execute('''SELECT f.title FROM (film f JOIN show s ON f.ID=s.film) WHERE s.date > '2015-05-30' ;''') '''The dates have been assigned randomly between 1980-01-01 and 2016-01-01 during database creation. ''' for row in cur.fetchall()[:20]: print(*row) """ Explanation: g) Which films have not been presented in a cinema yet? 
End of explanation
"""
cur.execute('''SELECT p.firstname, p.name
FROM person p
EXCEPT
SELECT p.firstname, p.name
FROM (person p JOIN participation par ON p.ID=par.person)
ORDER BY p.name, p.firstname
;''')
# It seems that for several actors no participation record was written
for row in cur.fetchall()[:20]:
    print(*row)

cur.execute('''SELECT p.firstname, p.name
FROM person p
WHERE p.ID NOT IN (SELECT par.person FROM participation par)
ORDER BY p.name, p.firstname
;''')
for row in cur.fetchall()[:20]:
    print(*row)
"""
Explanation: h) Who hasn’t participated in a film yet?
End of explanation
"""
cur.execute('''SELECT p.firstname, p.name
FROM person p
WHERE p.ID IN (
    SELECT x.person
    FROM (
        (SELECT * FROM (film f JOIN participation par ON f.ID=par.film) WHERE par.function="director") as x
        JOIN
        (SELECT * FROM (film f1 JOIN participation par1 ON f1.ID=par1.film) WHERE par1.function="director") as y
        ON x.year=y.year AND x.person = y.person AND x.film <> y.film
    ))
;''')
for row in cur.fetchall()[:100]:
    print(*row)

cur.execute('''SELECT f.year, f.title
FROM ((film f JOIN participation par ON f.ID=par.film) JOIN person p ON p.ID=par.person)
WHERE p.name = "Donner" AND p.firstname = "Richard" AND par.function='director'
ORDER BY f.year
;''')
# Just to see which movies were made in the same year
for row in cur.fetchall()[:100]:
    print(*row)
"""
Explanation: i) Who directed at least two different films in the same year?
End of explanation
"""
cur.execute('''SELECT DISTINCT p.firstname, p.name
FROM person p JOIN person p1 ON p.name = p1.name AND p.firstname = p1.firstname AND p.ID <> p1.ID
ORDER BY p.name, p.firstname
;''')
# Find persons who share both name and first name
for row in cur.fetchall()[:20]:
    print(*row)
"""
Explanation: k) Are there persons having the same name (name and first name)? 
End of explanation
"""
cur.execute('''SELECT DISTINCT p.firstname, p.name
FROM (((person p JOIN participation par ON p.ID=par.person) JOIN film f ON par.film=f.ID) JOIN show s ON f.ID=s.film)
WHERE s.date<"2017-01-01"
;''')
# Persons who participated in a film that has been shown before 2017-01-01
for row in cur.fetchall()[:20]:
    print(*row)

cur.execute('''SELECT DISTINCT p.firstname, p.name
FROM person p
WHERE EXISTS (SELECT par.person
    FROM (participation par JOIN show s ON s.film=par.film)
    WHERE s.date<"2017-01-01")
;''')
# The same question answered with EXISTS
for row in cur.fetchall()[:20]:
    print(*row)
"""
Explanation: Question 2
What is the meaning of the following SQL queries over the film schema? Provide the corresponding relational algebra expressions.
a) SELECT DISTINCT title
FROM (film JOIN show ON ID = film) JOIN cinema ON cinema.ID = cinema
WHERE name = ’Metropol’
b) SELECT DISTINCT person.name, person.firstname
FROM film, person, cinema, participation, show
WHERE film.ID = participation.film AND film.ID = show.film AND person.ID = person AND cinema.ID = cinema AND date = ’2016-11-16’
Sheet 5
Question 3
End of explanation
"""
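The worksheet's queries assume a film schema that is never shown; a minimal self-contained mock of it (table and column names inferred from the queries, data invented) makes it possible to try query a) in an in-memory SQLite database:

```python
import sqlite3

# Minimal mock of the film schema assumed by the worksheet's queries
con = sqlite3.connect(':memory:')
cur = con.cursor()
cur.executescript('''
CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, firstname TEXT);
CREATE TABLE film (id INTEGER PRIMARY KEY, title TEXT, genre TEXT, year INTEGER);
CREATE TABLE participation (person INTEGER, film INTEGER, function TEXT);
INSERT INTO person VALUES (1, 'Spielberg', 'Steven'), (2, 'Neill', 'Sam');
INSERT INTO film VALUES (10, 'Jaws', 'Thriller', 1975), (11, 'Jurassic Park', 'Adventure', 1993);
INSERT INTO participation VALUES (1, 10, 'director'), (1, 11, 'director'), (2, 11, 'actor');
''')

# Query a): thrillers directed by Steven Spielberg
cur.execute('''SELECT film.title FROM film, person, participation
               WHERE film.genre LIKE '%Thriller%'
                 AND film.id = participation.film
                 AND person.id = participation.person
                 AND participation.function = 'director'
                 AND person.name = 'Spielberg'
                 AND person.firstname = 'Steven' ''')
titles = [row[0] for row in cur.fetchall()]
print(titles)  # -> ['Jaws']
```

The real database behind the sheet certainly has more tables and columns (cinema, show, etc.); this sketch only covers what query a) touches.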
fangohr/polygon-finite-difference-mesh-tools
notebooks/example.ipynb
bsd-2-clause
cc = pmt.CartesianCoords(5,5) print("2D\n") print("x-coordinate: {}".format(cc.x)) print("y-coordinate: {}".format(cc.y)) print("radial: {}".format(cc.r)) print("azimuth: {}".format(cc.a)) cc3D = pmt.CartesianCoords(1,2,3) print("\n3D\n") print("x-coordinate: {}".format(cc3D.x)) print("y-coordinate: {}".format(cc3D.y)) print("z-coordinate: {}".format(cc3D.z)) print("radial: {}".format(cc3D.r)) print("azimuth: {}".format(cc3D.a)) print("height: {}".format(cc3D.h)) """ Explanation: CartesianCoords and PolarCoords are classes that were designed to be used in-house for the conversion between Cartesian and Polar coordinates. You just need to initialise the object with some coordinates, and then it is easy to extract the relevant information. 3D coordinates are possible, but the z-coordinate has a default value of 0. End of explanation """ print(pmt.in_poly.__doc__) """ Explanation: pmt.PolarCoords works in exactly the same way, but instead you initialise it with polar coordinates (radius, azimuth and height (optional), respectively) and the cartesian ones can be extracted as above. Function 1: in_poly End of explanation """ pmt.in_poly(x=5, y=30, n=3, r=40, plot=True) pmt.in_poly(x=5, y=30, n=3, r=40) # No graph will be generated, more useful for use within other functions pmt.in_poly(x=0, y=10, n=6, r=20, plot=True) # Dot changes colour to green when inside the polygon import numpy as np pmt.in_poly(x=-10, y=-25, n=6, r=20, rotation=np.pi/6, translate=(5,-20), plot=True) # Rotation and translation """ Explanation: Takes three arguments by default: x, specifying the x-coordinate of the point you would like to test y, specifying the y-coordinate of the point you would like to test n, the number of sides of the polygon Optional arguments are: r, the radius of the circumscribed circle (equal to the distance from the circumcentre to one of the vertices). Default r=1 rotation, the anti-clockwise rotation of the shape in radians. 
Default rotation=0 translate, specifies the coordinates of the circumcentre, given as a tuple (x,y). Default translate=(0,0) plot, a boolean value to determine whether or not the plot is shown. Default plot=False Examples below: End of explanation """ pmt.in_poly(x=3, y=5, n=100, r=10, plot=True) """ Explanation: And of course, as n becomes large, the polygon tends to a circle: End of explanation """ print(pmt.plot_circular_fidi_mesh.__doc__) """ Explanation: Function 2: plot_circular_fidi_mesh End of explanation """ pmt.plot_circular_fidi_mesh(diameter=60) pmt.plot_circular_fidi_mesh(diameter=60, x_spacing=2, y_spacing=2, centre_mesh=True) # Note the effect of centre_mesh=True. In the previous plot, the element boundaries are aligned with 0 on the x- and y-axes. # In this case, centring the mesh has the effect of producing a mesh that is slightly wider than desired, shown below. pmt.plot_circular_fidi_mesh(diameter=30, x_spacing=1, y_spacing=2, show_axes=False, show_title=False) # Flexible element sizes. Toggling axes and title can make for prettier (albeit less informative) pictures. """ Explanation: Only has one default argument: diameter, the diameter of the circle you would like to plot Optional arguments: x_spacing, the width of the mesh elements. Default x_spacing=2 y_spacing, the height of the mesh elements. Default y_spacing=2 (only integers are currently supported for x- and y-spacing.) centre_mesh, outlined in the documentation above. Default centre_mesh='auto' show_axes, boolean, self-explanatory. Default show_axes=True show_title, boolean, self-explanatory. 
Default show_title=True End of explanation """ print(pmt.plot_poly_fidi_mesh.__doc__) """ Explanation: Function 3: plot_poly_fidi_mesh End of explanation """ pmt.plot_poly_fidi_mesh(diameter=50, n=5, x_spacing=1, y_spacing=1, rotation=np.pi/10) """ Explanation: Requires two arguments: diameter, the diameter of the circumscribed circle n, the number of sides the polygon should have Optional arguments: x_spacing y_spacing centre_mesh show_axes show_title (All of the above have the same function as in plot_circular_fidi_mesh, and below, like in_poly) rotation translate End of explanation """ print(pmt.find_circumradius.__doc__) """ Explanation: Function 4: find_circumradius End of explanation """ pmt.find_circumradius(n=3, side=10) """ Explanation: If you need to specify the side length, or the distance from the circumcentre to the middle of one of the faces, this function will convert that value to the circumradius (not diameter!) that would give the correct side length or apothem. End of explanation """ d1 = 2*pmt.find_circumradius(n=3, side=40) pmt.plot_poly_fidi_mesh(diameter=d1, n=3, x_spacing=1, y_spacing=1) # It can be seen on the y-axis that the side has a length of 40, as desired. d2 = 2*pmt.find_circumradius(n=5, apothem=20) pmt.plot_poly_fidi_mesh(diameter=d2, n=5, x_spacing=1, y_spacing=1) # The circumcentre lies at (0,0), and the leftmost side is in line with x=-20 """ Explanation: Using this in combination with plot_poly_fidi_mesh: End of explanation """
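The geometry behind find_circumradius is presumably the standard regular-polygon relations between side length, apothem and circumradius; a standalone sketch of those relations (these helper names are my own, not part of pmt):

```python
import numpy as np

def circumradius_from_side(n, side):
    # For a regular n-gon the side length is s = 2 r sin(pi/n)
    return side / (2.0 * np.sin(np.pi / n))

def circumradius_from_apothem(n, apothem):
    # The apothem (centre-to-edge distance) is a = r cos(pi/n)
    return apothem / np.cos(np.pi / n)

# An equilateral triangle with side 10 has circumradius 10/sqrt(3)
print(circumradius_from_side(3, 10))     # -> 5.773502691896258
# A pentagon with apothem 20
print(circumradius_from_apothem(5, 20))  # -> 24.721359549995796
```

Doubling either result reproduces the diameter arguments used above, e.g. a diameter of 2*circumradius_from_side(3, 40) for the triangle with side 40.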
phoebe-project/phoebe2-docs
development/tutorials/passbands.ipynb
gpl-3.0
#!pip install -I "phoebe>=2.4,<2.5" """ Explanation: Adding new passbands to PHOEBE In this tutorial we will show you how to add your own passband to PHOEBE. Adding a custom passband involves: downloading and setting up model atmosphere tables; providing a passband transmission function; defining and registering passband parameters; computing blackbody response for the passband; [optional] computing Castelli & Kurucz (2004) passband tables; [optional] computing Husser et al. (2013) PHOENIX passband tables; [optional] if the passband is one of the passbands included in the Wilson-Devinney code, importing the WD response; and saving the generated passband file. <!-- * \[optional\] computing Werner et al. (2012) TMAP passband tables; --> Let's first make sure we have the correct version of PHOEBE installed. Uncomment the following line if running in an online notebook session such as colab. End of explanation """ import phoebe from phoebe import u # Register a passband: pb = phoebe.atmospheres.passbands.Passband( ptf='my_passband.ptf', pbset='Custom', pbname='mypb', effwl=330, wlunits=u.nm, calibrated=True, reference='A completely made-up passband published in Nowhere (2017)', version=1.0, comments='This is my first custom passband' ) # Blackbody response: pb.compute_blackbody_response() # CK2004 response: pb.compute_ck2004_response(path='tables/ck2004') pb.compute_ck2004_intensities(path='tables/ck2004') pb.compute_ck2004_ldcoeffs() pb.compute_ck2004_ldints() # PHOENIX response: pb.compute_phoenix_response(path='tables/phoenix') pb.compute_phoenix_intensities(path='tables/phoenix') pb.compute_phoenix_ldcoeffs() pb.compute_phoenix_ldints() # Impute missing values from the PHOENIX model atmospheres: pb.impute_atmosphere_grid(pb._phoenix_energy_grid) pb.impute_atmosphere_grid(pb._phoenix_photon_grid) pb.impute_atmosphere_grid(pb._phoenix_ld_energy_grid) pb.impute_atmosphere_grid(pb._phoenix_ld_photon_grid) pb.impute_atmosphere_grid(pb._phoenix_ldint_energy_grid) 
pb.impute_atmosphere_grid(pb._phoenix_ldint_photon_grid) for i in range(len(pb._phoenix_intensity_axes[3])): pb.impute_atmosphere_grid(pb._phoenix_Imu_energy_grid[:,:,:,i,:]) pb.impute_atmosphere_grid(pb._phoenix_Imu_photon_grid[:,:,:,i,:]) # Wilson-Devinney response: pb.import_wd_atmcof('atmcofplanck.dat', 'atmcof.dat', 22) # Save the passband: pb.save('my_passband.fits') """ Explanation: If you plan on computing model atmosphere intensities (as opposed to only blackbody intensities), you will need to download atmosphere tables and unpack them into a local directory of your choice. Keep in mind that this will take a long time. Plan to go for lunch or leave it overnight. The good news is that this needs to be done only once. For the purpose of this document, we will use a local tables/ directory and assume that we are computing intensities for all available model atmospheres: mkdir tables cd tables wget http://phoebe-project.org/static/atms/ck2004.tgz wget http://phoebe-project.org/static/atms/phoenix.tgz <!-- wget http://phoebe-project.org/static/atms/tmap.tgz --> Once the data are downloaded, unpack the archives: tar xvzf ck2004.tgz tar xvzf phoenix.tgz <!-- tar xvzf tmap.tgz --> That should leave you with the following directory structure: tables |____ck2004 | |____TxxxxxGxxPxx.fits (3800 files) |____phoenix | |____ltexxxxx-x.xx-x.x.PHOENIX-ACES-AGSS-COND-SPECINT-2011.fits (7260 files) I don't care about the details, just show/remind me how it's done Makes sense, and we don't judge: you want to get to science. 
Provided that you have the passband transmission file available and the atmosphere tables already downloaded, the sequence that will generate/register a new passband is: End of explanation """ %matplotlib inline import phoebe from phoebe import u # units import numpy as np import matplotlib.pyplot as plt logger = phoebe.logger(clevel='WARNING') """ Explanation: Getting started Let us start by importing phoebe, numpy and matplotlib: End of explanation """ wl = np.linspace(300, 360, 61) ptf = np.zeros(len(wl)) ptf[(wl>=320) & (wl<=340)] = 1.0 """ Explanation: Passband transmission function The passband transmission function is typically a user-provided two-column file. The first column is wavelength, and the second column is passband transmission. For the purposes of this tutorial, we will simulate the passband as a uniform box. End of explanation """ plt.xlabel('Wavelength [nm]') plt.ylabel('Passband transmission') plt.plot(wl, ptf, 'b-') plt.show() """ Explanation: Let us plot this mock passband transmission function to see what it looks like: End of explanation """ np.savetxt('my_passband.ptf', np.vstack((wl, ptf)).T) """ Explanation: Let us now save these data in a file that we will use to register a new passband. End of explanation """ pb = phoebe.atmospheres.passbands.Passband( ptf='my_passband.ptf', pbset='Custom', pbname='mypb', effwl=330., wlunits=u.nm, calibrated=True, reference='A completely made-up passband published in Nowhere (2017)', version=1.0, comments='This is my first custom passband') """ Explanation: Registering a passband The first step in introducing a new passband into PHOEBE is registering it with the system. We use the Passband class for that. End of explanation """ pb.content """ Explanation: The first argument, ptf, is the passband transmission file we just created. Of course, you would provide an actual passband transmission function that comes from a respectable source rather than this silly tutorial. 
The next two arguments, pbset and pbname, should be taken in unison. The way PHOEBE refers to passbands is a pbset:pbname string, for example Johnson:V, Cousins:Rc, etc. Thus, our fake passband will be Custom:mypb.
The following two arguments, effwl and wlunits, also come as a pair. PHOEBE uses effective wavelength to apply zero-level passband corrections when better options (such as model atmospheres) are unavailable. Effective wavelength is a transmission-weighted average wavelength in the units given by wlunits.
The calibrated parameter instructs PHOEBE whether to take the transmission function as calibrated, i.e. the flux through the passband is absolutely calibrated. If set to True, PHOEBE will assume that absolute intensities computed using the passband transmission function do not need further calibration. If False, the intensities are considered as scaled rather than absolute, i.e. correct to a scaling constant. Most modern passbands provided in the recent literature are calibrated.
The reference parameter holds a reference string to the literature from which the transmission function was taken. It is common that updated transmission functions become available, which is the point of the version parameter. If there are multiple versions of the transmission function, PHOEBE will by default take the largest value, or the value that is explicitly requested in the filter string, i.e. Johnson:V:1.0 or Johnson:V:2.0.
Finally, the comments parameter is a convenience parameter to store any additional pertinent information.
Computing blackbody response
To significantly speed up calculations, passband intensities are stored in lookup tables instead of computing them over and over again on the fly. Computed passband tables are tagged in the content property of the class:
End of explanation
"""
pb.compute_blackbody_response()
"""
Explanation: Since we have not computed any tables yet, the list is empty for now. 
Blackbody functions for computing the lookup tables are built into PHOEBE and you do not need any auxiliary files to generate them. The lookup tables are defined for effective temperatures between 300K and 500,000K. To compute the blackbody response, issue: End of explanation """ pb.content """ Explanation: Checking the content property again shows that the table has been successfully computed: End of explanation """ pb.Inorm(Teff=5772, atm='blackbody', ld_func='linear', ld_coeffs=[0.0]) """ Explanation: We can now test-drive the blackbody lookup table we just created. For this we will use a low-level class method that computes normal emergent passband intensity, Inorm(). For the sake of simplicity, we will turn off limb darkening by setting ld_func to 'linear' and ld_coeffs to '[0.0]': End of explanation """ jV = phoebe.get_passband('Johnson:V') teffs = np.linspace(5000, 8000, 100) plt.xlabel('Temperature [K]') plt.ylabel('Inorm [W/m^3]') plt.plot(teffs, pb.Inorm(teffs, atm='blackbody', ld_func='linear', ld_coeffs=[0.0]), label='mypb') plt.plot(teffs, jV.Inorm(teffs, atm='blackbody', ld_func='linear', ld_coeffs=[0.0]), label='jV') plt.legend(loc='lower right') plt.show() """ Explanation: Let us now plot a range of temperatures, to make sure that normal emergent passband intensities do what they are supposed to do. While at it, let us compare what we get for the Johnson:V passband. End of explanation """ pb.compute_ck2004_response(path='tables/ck2004', verbose=False) """ Explanation: This makes perfect sense: Johnson V transmission function is wider than our boxed transmission function, so intensity in the V band is larger the lower temperatures. However, for the hotter temperatures the contribution to the UV flux increases and our box passband with a perfect transmission of 1 takes over. Computing Castelli & Kurucz (2004) response For any real science you will want to generate model atmosphere tables. 
The default choice in PHOEBE are the models computed by Fiorella Castelli and Bob Kurucz (website, paper) that feature new opacity distribution functions. In principle, you can generate PHOEBE-compatible tables for any model atmospheres, but that would require a bit of book-keeping legwork in the PHOEBE backend. Contact us to discuss an extension to other model atmospheres. To compute Castelli & Kurucz (2004) passband tables, we will use the previously downloaded model atmospheres. We start with the ck2004 normal intensities: End of explanation """ pb.content """ Explanation: Note, of course, that you will need to change the path to point to the directory where you unpacked the ck2004 tables. The verbosity parameter verbose will report on the progress as computation is being done. Depending on your computer speed, this step will take up to a minute to complete. We can now check the passband's content attribute again: End of explanation """ loggs = np.ones(len(teffs))*4.43 abuns = np.zeros(len(teffs)) plt.xlabel('Temperature [K]') plt.ylabel('Inorm [W/m^3]') plt.plot(teffs, pb.Inorm(teffs, atm='blackbody', ld_func='linear', ld_coeffs=[0.0]), label='blackbody') plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='ck2004', ld_func='linear', ld_coeffs=[0.0]), label='ck2004') plt.legend(loc='lower right') plt.show() """ Explanation: Let us now use the same low-level function as before to compare normal emergent passband intensity for our custom passband for blackbody and ck2004 model atmospheres. One other complication is that, unlike blackbody model that depends only on the temperature, the ck2004 model depends on surface gravity (log g) and heavy metal abundances as well, so we need to pass those arrays. End of explanation """ pb.compute_ck2004_intensities(path='tables/ck2004', verbose=False) """ Explanation: Quite a difference. That is why using model atmospheres is superior when accuracy is of importance. 
Next, we need to compute direction-dependent intensities for all our limb darkening and boosting needs. This is a step that takes a long time; depending on your computer speed, it can take a few minutes to complete. End of explanation """ pb.compute_ck2004_ldcoeffs() pb.compute_ck2004_ldints() """ Explanation: This step will allow PHOEBE to compute all direction-dependent intensities on the fly, including the interpolation of the limb darkening coefficients that is model-independent. When limb darkening models are preferred (for example, when you don't quite trust direction-dependent intensities from the model atmosphere), we need to calculate two more tables: one for limb darkening coefficients and the other for the integrated limb darkening. That is done by two methods that can take a couple of minutes to complete: End of explanation """ pb.compute_phoenix_response(path='tables/phoenix', verbose=False) pb.compute_phoenix_intensities(path='tables/phoenix', verbose=False) pb.compute_phoenix_ldcoeffs() pb.compute_phoenix_ldints() print(pb.content) """ Explanation: This completes the computation of Castelli & Kurucz auxiliary tables. Computing PHOENIX response PHOENIX is a 3-D model atmosphere code. Because of that, it is more complex and better behaved for cooler stars (down to ~2300K). 
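Before moving on to the PHOENIX steps, an aside on the limb-darkening tables just computed for ck2004: for the linear law I(mu) = I(1) * [1 - c*(1 - mu)], the integrated limb darkening 2 * ∫ I(mu)/I(1) mu dmu over mu from 0 to 1 has the closed form 1 - c/3, which makes a handy sanity check (illustrative only — not necessarily PHOEBE's exact ldint convention):

```python
def linear_ld(mu, c):
    """Linear limb-darkening law, normalized so that I(mu=1) = 1."""
    return 1.0 - c * (1.0 - mu)

def ldint_numeric(c, n=20000):
    """Midpoint-rule evaluation of 2 * integral_0^1 I(mu) * mu dmu."""
    step = 1.0 / n
    total = sum(linear_ld((i + 0.5) * step, c) * (i + 0.5) * step for i in range(n))
    return 2.0 * step * total

print(ldint_numeric(0.6))  # analytically 1 - 0.6/3 = 0.8
```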
The steps to compute PHOENIX intensity tables are analogous to the ones we used for ck2004; so we can do all of them in a single step: End of explanation """ pb.impute_atmosphere_grid(pb._phoenix_energy_grid) pb.impute_atmosphere_grid(pb._phoenix_photon_grid) pb.impute_atmosphere_grid(pb._phoenix_ld_energy_grid) pb.impute_atmosphere_grid(pb._phoenix_ld_photon_grid) pb.impute_atmosphere_grid(pb._phoenix_ldint_energy_grid) pb.impute_atmosphere_grid(pb._phoenix_ldint_photon_grid) for i in range(len(pb._phoenix_intensity_axes[3])): pb.impute_atmosphere_grid(pb._phoenix_Imu_energy_grid[:,:,:,i,:]) pb.impute_atmosphere_grid(pb._phoenix_Imu_photon_grid[:,:,:,i,:]) """ Explanation: There is one extra step that we need to do for phoenix atmospheres: because there are gaps in the coverage of atmospheric parameters, we need to impute those values in order to allow for seamless interpolation. This is achieved by the call to impute_atmosphere_grid(). It is a computationally intensive step that can take 10+ minutes. End of explanation """ plt.xlabel('Temperature [K]') plt.ylabel('Inorm [W/m^3]') plt.plot(teffs, pb.Inorm(teffs, atm='blackbody', ldatm='ck2004', ld_func='linear', ld_coeffs=[0.0]), label='blackbody') plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='ck2004', ldatm='ck2004', ld_func='linear', ld_coeffs=[0.0]), label='ck2004') plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='phoenix', ldatm='phoenix', ld_func='linear', ld_coeffs=[0.0]), label='phoenix') plt.legend(loc='lower right') plt.show() """ Explanation: Now we can compare all three model atmospheres: End of explanation """ pb.import_wd_atmcof('atmcofplanck.dat', 'atmcof.dat', 22) """ Explanation: We see that, as temperature increases, model atmosphere intensities can differ quite a bit. That explains why the choice of a model atmosphere is quite important and should be given proper consideration. 
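Returning briefly to the imputation step used for PHOENIX: the job of impute_atmosphere_grid() — fill holes so that interpolation never lands on an undefined value — can be sketched in a much-simplified 1-D form (nearest-defined-neighbor infill; the real routine works on the full multi-dimensional grid):

```python
def impute_1d(values):
    """Replace None holes with the value of the nearest defined neighbor (ties go left)."""
    defined = [i for i, v in enumerate(values) if v is not None]
    if not defined:
        raise ValueError("nothing to impute from")
    out = []
    for i, v in enumerate(values):
        if v is not None:
            out.append(v)
        else:
            j = min(defined, key=lambda k: (abs(k - i), k))
            out.append(values[j])
    return out

print(impute_1d([1.0, None, None, 4.0, None]))  # → [1.0, 1.0, 4.0, 4.0, 4.0]
```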
Importing Wilson-Devinney response PHOEBE no longer shares any codebase with the WD code, but for comparison purposes it is sometimes useful to use the same atmosphere tables. If the passband you are registering with PHOEBE has been defined in WD's atmcof.dat and atmcofplanck.dat files, PHOEBE can import those coefficients and use them to compute intensities. To import a set of WD atmospheric coefficients, you need to know the corresponding index of the passband (you can look it up in the WD user manual available at ftp://ftp.astro.ufl.edu/pub/wilson/lcdc2003/ebdoc2003.2feb2004.pdf.gz) and you need to grab the files ftp://ftp.astro.ufl.edu/pub/wilson/lcdc2003/atmcofplanck.dat.gz and ftp://ftp.astro.ufl.edu/pub/wilson/lcdc2003/atmcof.dat.gz from Bob Wilson's webpage. For this particular passband the index is 22. To import, issue: End of explanation """ pb.content plt.xlabel('Temperature [K]') plt.ylabel('Inorm [W/m^3]') plt.plot(teffs, pb.Inorm(teffs, atm='blackbody', ldatm='ck2004', ld_func='linear', ld_coeffs=[0.0]), label='blackbody') plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='ck2004', ldatm='ck2004', ld_func='linear', ld_coeffs=[0.0]), label='ck2004') plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='phoenix', ldatm='phoenix', ld_func='linear', ld_coeffs=[0.0]), label='phoenix') plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='extern_atmx', ldatm='phoenix', ld_func='linear', ld_coeffs=[0.0]), label='wd_atmx') plt.legend(loc='lower right') plt.show() """ Explanation: We can consult the content attribute to see the entire set of supported tables, and plot different atmosphere models for comparison purposes: End of explanation """ pb.save('~/.phoebe/atmospheres/tables/passbands/my_passband.fits') """ Explanation: Still an appreciable difference. Saving the passband table The final step of all this (computer's) hard work is to save the passband file so that these steps do not need to be ever repeated. 
From now on you will be able to load the passband file explicitly and PHOEBE will have full access to all of its tables. Your new passband will be identified as 'Custom:mypb'. To make PHOEBE automatically load the passband, it needs to be added to one of the passband directories that PHOEBE recognizes. If there are no proprietary aspects that hinder the dissemination of the tables, please consider contributing them to PHOEBE so that other users can use them. End of explanation """
omoju/udacityUd120Lessons
Decision Trees.ipynb
gpl-3.0
%pylab inline
import sys
from time import time
sys.path.append("../tools/")
sys.path.append("../naive bayes/")
from prep_terrain_data import makeTerrainData
from class_vis import prettyPicture, output_image
import matplotlib.pyplot as plt
import numpy as np
import pylab as pl

### features_train and features_test are the features for the training
### and testing datasets, respectively
### labels_train and labels_test are the corresponding item labels
features_train, labels_train, features_test, labels_test = makeTerrainData()

from sklearn import tree

#features_train = features_train[:len(features_train)/100]
#labels_train = labels_train[:len(labels_train)/100]

features_train, labels_train, features_test, labels_test = makeTerrainData()

#parameters = {'kernel':('linear', 'rbf'), 'C':[10, 100, 1000, 10000]}
clf = tree.DecisionTreeClassifier()

t0 = time()
clf.fit(features_train, labels_train)
print("done in %0.3fs" % (time() - t0))

##############################################################
def submitAcc():
    return clf.score(features_test, labels_test)

pred = clf.predict(features_test)
print("Classifier with accuracy %.2f%%" % (submitAcc() * 100))

prettyPicture(clf, features_test, labels_test)
"""
Explanation: Lesson 3 - Decision Trees
End of explanation
"""
prettyPicture(clf, features_train, labels_train)
"""
Explanation: This decision tree seems to be overfitting the data. Let's plot the training points instead of the test points to explore what is going on.
End of explanation
"""
clf = tree.DecisionTreeClassifier(min_samples_split=50)
t0 = time()
clf.fit(features_train, labels_train)
print("done in %0.3fs" % (time() - t0))

pred = clf.predict(features_test)
print("Classifier with accuracy %.2f%%" % (submitAcc() * 100))

prettyPicture(clf, features_test, labels_test)
"""
Explanation: Tweak parameters
We will tweak the min_samples_split parameter so we can adjust our classifier to not overfit. 
This gives us the minimum number of samples that must be in a population in order to split it.
End of explanation
"""
from email_preprocess import preprocess

### features_train and features_test are the features for the training
### and testing datasets, respectively
### labels_train and labels_test are the corresponding item labels
features_train, features_test, labels_train, labels_test = preprocess()
"""
Explanation: Decision Tree Mini Project
End of explanation
"""
from sklearn import tree

def submitAcc():
    return clf.score(features_test, labels_test)

clf = tree.DecisionTreeClassifier(min_samples_split=40)
t0 = time()
clf.fit(features_train, labels_train)
print("done in %0.3fs" % (time() - t0))

pred = clf.predict(features_test)
print("Classifier with accuracy %.2f%%" % (submitAcc() * 100))
"""
Explanation: Part 1: Get the Decision Tree Running
Get the decision tree up and running as a classifier, setting min_samples_split=40. It will probably take a while to train. What’s the accuracy?
Answer
Accuracy of the classifier is 97.90%
End of explanation
"""
print("Number of data points in the data %d" % len(features_train))
print("Number of features in the data %d " % len(features_train[3]))
"""
Explanation: Part 2: Speed It Up
You found in the SVM mini-project that parameter tuning can significantly speed up the training time of a machine learning algorithm. A general rule is that the parameters can tune the complexity of the algorithm, with more complex algorithms generally running more slowly. Another way to control the complexity of an algorithm is via the number of features that you use in training/testing. The more features the algorithm has available, the more potential there is for a complex fit. We will explore this in detail in the “Feature Selection” lesson, but you’ll get a sneak preview now. find the number of features in your data. 
The data is organized into a numpy array where the number of rows is the number of data points and the number of columns is the number of features; so to extract this number, use a line of code like len(features_train[0])
End of explanation
"""
import sys
from time import time
sys.path.append("../tools/")
from email_preprocess import preprocess

features_train, features_test, labels_train, labels_test = preprocess()

print("Number of data points in the data %d" % len(features_train))
print("Number of features in the data %d " % len(features_train[3]))
"""
Explanation: Go into tools/email_preprocess.py, and find the line of code that looks like this:
selector = SelectPercentile(f_classif, percentile=1)
Change percentile from 10 to 1. What’s the number of features now?
Answer
379
End of explanation
"""
from sklearn import tree

def submitAcc():
    return clf.score(features_test, labels_test)

clf = tree.DecisionTreeClassifier(min_samples_split=40)
t0 = time()
clf.fit(features_train, labels_train)
print("done in %0.3fs" % (time() - t0))

pred = clf.predict(features_test)
print("Classifier with accuracy %.2f%%" % (submitAcc() * 100))
"""
Explanation: What do you think SelectPercentile is doing? Would a large value for percentile lead to a more complex or less complex decision tree, all other things being equal?
Answer
I think a large value for percentile would lead to a more complex decision tree.
Note the difference in training time depending on the number of features.
Answer
Training time went down significantly with fewer features.
What’s the accuracy when percentile = 1?
Answer
Accuracy of the prediction with percentile = 1 is 96.70%
End of explanation
"""
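The idea behind SelectPercentile can be sketched without scikit-learn: score every feature, then keep only the top p percent by score. The feature names and scores below are made up for illustration (sklearn's default score_func is the ANOVA F-test, f_classif):

```python
def select_percentile(columns, scores, percentile):
    """Keep the top `percentile` percent of features by score (at least one)."""
    n_keep = max(1, int(round(len(columns) * percentile / 100.0)))
    ranked = sorted(range(len(columns)), key=lambda i: scores[i], reverse=True)
    keep = sorted(ranked[:n_keep])
    return [columns[i] for i in keep]

cols = ["f0", "f1", "f2", "f3", "f4", "f5", "f6", "f7", "f8", "f9"]
scores = [0.1, 9.0, 0.2, 3.0, 0.1, 7.0, 0.3, 0.2, 5.0, 0.1]
print(select_percentile(cols, scores, 10))  # → ['f1']
print(select_percentile(cols, scores, 30))  # → ['f1', 'f5', 'f8']
```

Shrinking percentile shrinks the input dimensionality, which is exactly why the tree trains faster and fits a simpler boundary.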
kingtaurus/cs224d
old_assignments/assignment2/part1-NER.ipynb
mit
import sys, os from numpy import * from matplotlib.pyplot import * %matplotlib inline matplotlib.rcParams['savefig.dpi'] = 100 %load_ext autoreload %autoreload 2 """ Explanation: CS 224D Assignment #2 Part [1]: Deep Networks: NER Window Model For this first part of the assignment, you'll build your first "deep" networks. On problem set 1, you computed the backpropagation gradient $\frac{\partial J}{\partial w}$ for a two-layer network; in this problem set you'll implement a slightly more complex network to perform named entity recognition (NER). Before beginning the programming section, you should complete parts (a) and (b) of the corresponding section of the handout. End of explanation """ from misc import random_weight_matrix random.seed(10) print(random_weight_matrix(3,5)) """ Explanation: (c): Random Initialization Test Use the cell below to test your code. End of explanation """ import data_utils.utils as du import data_utils.ner as ner # Load the starter word vectors wv, word_to_num, num_to_word = ner.load_wv('data/ner/vocab.txt', 'data/ner/wordVectors.txt') tagnames = ["O", "LOC", "MISC", "ORG", "PER"] num_to_tag = dict(enumerate(tagnames)) tag_to_num = du.invert_dict(num_to_tag) # Set window size windowsize = 3 # Load the training set docs = du.load_dataset('data/ner/train') X_train, y_train = du.docs_to_windows(docs, word_to_num, tag_to_num, wsize=windowsize) # Load the dev set (for tuning hyperparameters) docs = du.load_dataset('data/ner/dev') X_dev, y_dev = du.docs_to_windows(docs, word_to_num, tag_to_num, wsize=windowsize) # Load the test set (dummy labels only) docs = du.load_dataset('data/ner/test.masked') X_test, y_test = du.docs_to_windows(docs, word_to_num, tag_to_num, wsize=windowsize) """ Explanation: (d): Implementation We've provided starter code to load in the dataset and convert it to a list of "windows", consisting of indices into the matrix of word vectors. 
We pad each sentence with begin and end tokens &lt;s&gt; and &lt;/s&gt;, which have their own word vector representations; additionally, we convert all words to lowercase, canonicalize digits (e.g. 1.12 becomes DG.DGDG), and replace unknown words with a special token UUUNKKK. You don't need to worry about the details of this, but you can inspect the docs variables or look at the raw data (in plaintext) in the ./data/ directory. End of explanation """ from softmax_example import SoftmaxRegression sr = SoftmaxRegression(wv=zeros((10,100)), dims=(100,5)) ## # Automatic gradient checker! # this checks anything you add to self.grads or self.sgrads # using the method of Assignment 1 sr.grad_check(x=5, y=4) """ Explanation: To avoid re-inventing the wheel, we provide a base class that handles a lot of the drudgery of managing parameters and running gradient descent. It's based on the classifier API used by scikit-learn, so if you're familiar with that library it should be easy to use. We'll be using this class for the rest of this assignment, so it helps to get acquainted with a simple example that should be familiar from Assignment 1. To keep this notebook uncluttered, we've put the code in the softmax_example.py; take a look at it there, then run the cell below. 
End of explanation """ from nerwindow import WindowMLP clf = WindowMLP(wv, windowsize=windowsize, dims=[None, 100, 5], reg=0.001, alpha=0.01) clf.grad_check(X_train[0], y_train[0]) # gradient check on single point """ Explanation: In order to implement a model, you need to subclass NNBase, then implement the following methods: __init__() (initialize parameters and hyperparameters) _acc_grads() (compute and accumulate gradients) compute_loss() (compute loss for a training example) predict(), predict_proba(), or other prediction method (for evaluation) NNBase provides you with a few others that will be helpful: grad_check() (run a gradient check - calls _acc_grads and compute_loss) train_sgd() (run SGD training; more on this later) Your task is to implement the window model in nerwindow.py; a scaffold has been provided for you with instructions on what to fill in. When ready, you can test below: End of explanation """ nepoch = 5 N = nepoch * len(y_train) k = 5 # minibatch size random.seed(10) # do not change this! #### YOUR CODE HERE #### #### END YOUR CODE ### """ Explanation: Now we'll train your model on some data! You can implement your own SGD method, but we recommend that you just call clf.train_sgd. This takes the following arguments: X, y : training data idxiter: iterable (list or generator) that gives index (row of X) of training examples in the order they should be visited by SGD printevery: int, prints progress after this many examples costevery: int, computes mean loss after this many examples. This is a costly operation, so don't make this too frequent! The implementation we give you supports minibatch learning; if idxiter is a list-of-lists (or yields lists), then gradients will be computed for all indices in a minibatch before modifying the parameters (this is why we have you write _acc_grad instead of applying them directly!). Before training, you should generate a training schedule to pass as idxiter. 
If you know how to use Python generators, we recommend those; otherwise, just make a static list. Make the following in the cell below: An "epoch" schedule that just iterates through the training set, in order, nepoch times. A random schedule of N examples sampled with replacement from the training set. A random schedule of N/k minibatches of size k, sampled with replacement from the training set. End of explanation """ #### YOUR CODE HERE #### # Sandbox: build a good model by tuning hyperparameters #### END YOUR CODE #### #### YOUR CODE HERE #### # Sandbox: build a good model by tuning hyperparameters #### END YOUR CODE #### #### YOUR CODE HERE #### # Sandbox: build a good model by tuning hyperparameters #### END YOUR CODE #### """ Explanation: Now call train_sgd to train on X_train, y_train. To verify that things work, train on 100,000 examples or so to start (with any of the above schedules). This shouldn't take more than a couple minutes, and you should get a mean cross-entropy loss around 0.4. Now, if this works well, it's time for production! You have three tasks here: Train a good model Plot a learning curve (cost vs. # of iterations) Use your best model to predict the test set You should train on the train data and evaluate performance on the dev set. The test data we provided has only dummy labels (everything is O); we'll compare your predictions to the true labels at grading time. Scroll down to section (f) for the evaluation code. We don't expect you to spend too much time doing an exhaustive search here; the default parameters should work well, although you can certainly do better. Try to achieve an F1 score of at least 76% on the dev set, as reported by eval_performance. Feel free to create new cells and write new code here, including new functions (helpers and otherwise) in nerwindow.py. When you have a good model, follow the instructions below to make predictions on the test set. 
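For reference, the three schedules described above can be written compactly as generators — a sketch only, with ntrain standing in for len(y_train):

```python
import random

def epoch_schedule(ntrain, nepoch):
    """Visit every training index in order, nepoch times."""
    for _ in range(nepoch):
        for i in range(ntrain):
            yield i

def random_schedule(ntrain, n, seed=10):
    """n single indices sampled with replacement."""
    rng = random.Random(seed)
    for _ in range(n):
        yield rng.randrange(ntrain)

def minibatch_schedule(ntrain, nbatches, k, seed=10):
    """nbatches minibatches: lists of k indices sampled with replacement."""
    rng = random.Random(seed)
    for _ in range(nbatches):
        yield [rng.randrange(ntrain) for _ in range(k)]
```

Passing the minibatch generator as idxiter then yields one parameter update per batch of k accumulated gradients.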
A strong model may require 10-20 passes (or equivalent number of random samples) through the training set and could take 20 minutes or more to train - but it's also possible to be much, much faster! Things you may want to tune: - alpha (including using an "annealing" schedule to decrease the learning rate over time) - training schedule and minibatch size - regularization strength - hidden layer dimension - width of context window End of explanation """ ## # Plot your best learning curve here counts, costs = zip(*traincurvebest) figure(figsize=(6,4)) plot(5*array(counts), costs, color='b', marker='o', linestyle='-') title(r"Learning Curve ($\alpha$=%g, $\lambda$=%g)" % (clf.alpha, clf.lreg)) xlabel("SGD Iterations"); ylabel(r"Average $J(\theta)$"); ylim(ymin=0, ymax=max(1.1*max(costs),3*min(costs))); ylim(0,0.5) # Don't change this filename! savefig("ner.learningcurve.best.png") ## # Plot comparison of learning rates here # feel free to change the code below figure(figsize=(6,4)) counts, costs = zip(*trainingcurve1) plot(5*array(counts), costs, color='b', marker='o', linestyle='-', label=r"$\alpha=0.01$") counts, costs = zip(*trainingcurve2) plot(5*array(counts), costs, color='g', marker='o', linestyle='-', label=r"$\alpha=0.1$") title(r"Learning Curve ($\lambda=0.01$, minibatch k=5)") xlabel("SGD Iterations"); ylabel(r"Average $J(\theta)$"); ylim(ymin=0, ymax=max(1.1*max(costs),3*min(costs))); legend() # Don't change this filename savefig("ner.learningcurve.comparison.png") """ Explanation: (e): Plot Learning Curves The train_sgd function returns a list of points (counter, cost) giving the mean loss after that number of SGD iterations. If the model is taking too long you can cut it off by going to Kernel->Interrupt in the IPython menu; train_sgd will return the training curve so-far, and you can restart without losing your training progress. 
Make two plots: Learning curve using reg = 0.001, and comparing the effect of changing the learning rate: run with alpha = 0.01 and alpha = 0.1. Use minibatches of size 5, and train for 10,000 minibatches with costevery=200. Be sure to scale up your counts (x-axis) to reflect the batch size. What happens if the model tries to learn too fast? Explain why this occurs, based on the relation of SGD to the true objective. Learning curve for your best model (print the hyperparameters in the title), as trained using your best schedule. Set costevery so that you get at least 100 points to plot. End of explanation """ # Predict labels on the dev set yp = clf.predict(X_dev) # Save predictions to a file, one per line ner.save_predictions(yp, "dev.predicted") from nerwindow import full_report, eval_performance full_report(y_dev, yp, tagnames) # full report, helpful diagnostics eval_performance(y_dev, yp, tagnames) # performance: optimize this F1 # Save your predictions on the test set for us to evaluate # IMPORTANT: make sure X_test is exactly as loaded # from du.docs_to_windows, so that your predictions # line up with ours. yptest = clf.predict(X_test) ner.save_predictions(yptest, "test.predicted") """ Explanation: (f): Evaluating your model Evaluate the model on the dev set using your predict function, and compute performance metrics below! End of explanation """ # Recommended function to print scores # scores = list of float # words = list of str def print_scores(scores, words): for i in range(len(scores)): print "[%d]: (%.03f) %s" % (i, scores[i], words[i]) #### YOUR CODE HERE #### neurons = [1,3,4,6,8] # change this to your chosen neurons for i in neurons: print "Neuron %d" % i print_scores(topscores[i], topwords[i]) #### END YOUR CODE #### """ Explanation: Part [1.1]: Probing neuron responses You might have seen some results from computer vision where the individual neurons learn to detect edges, shapes, or even cat faces. We're going to do the same for language. 
Recall that each "neuron" is essentially a logistic regression unit, with weights corresponding to rows of the corresponding matrix. So, if we have a hidden layer of dimension 100, then we can think of our matrix $W \in \mathbb{R}^{100 x 150}$ as representing 100 hidden neurons each with weights W[i,:] and bias b1[i]. (a): Hidden Layer, Center Word For now, let's just look at the center word, and ignore the rest of the window. This corresponds to columns W[:,50:100], although this could change if you altered the window size for your model. For each neuron, find the top 10 words that it responds to, as measured by the dot product between W[i,50:100] and L[j]. Use the provided code to print these words and their scores for 5 neurons of your choice. In your writeup, briefly describe what you notice here. The num_to_word dictionary, loaded earlier, may be helpful. End of explanation """ #### YOUR CODE HERE #### for i in range(1,5): print "Output neuron %d: %s" % (i, num_to_tag[i]) print_scores(topscores[i], topwords[i]) print "" #### END YOUR CODE #### """ Explanation: (b): Model Output, Center Word Now, let's do the same for the output layer. Here we only have 5 neurons, one for each class. O isn't very interesting, but let's look at the other four. Here things get a little more complicated: since we take a softmax, we can't just look at the neurons separately. An input could cause several of these neurons to all have a strong response, so we really need to compute the softmax output and find the strongest inputs for each class. As before, let's consider only the center word (W[:,50:100]). For each class ORG, PER, LOC, and MISC, find the input words that give the highest probability $P(\text{class}\ |\ \text{word})$. You'll need to do the full feed-forward computation here - for efficiency, try to express this as a matrix operation on $L$. This is the same feed-forward computation as used to predict probabilities, just with $W$ replaced by W[:,50:100]. 
As with the hidden-layer neurons, print the top 10 words and their corresponding class probabilities for each class. End of explanation """ #### YOUR CODE HERE #### for i in range(1,5): print "Output neuron %d: %s" % (i, num_to_tag[i]) print_scores(topscores[i], topwords[i]) print "" #### END YOUR CODE #### """ Explanation: (c): Model Output, Preceding Word Now for one final task: let's look at the preceding word. Repeat the above analysis for the output layer, but use the first part of $W$, i.e. W[:,:50]. Describe what you see, and include these results in your writeup. End of explanation """
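All three probes in this part reduce to the same recipe: a dot product against rows of L, an optional softmax, and an argsort for the top k. A numpy sketch with random stand-ins for the weight slice and word-vector matrix (shapes only — it omits the bias and the real model's hidden layer):

```python
import numpy as np

rng = np.random.RandomState(10)
W_center = rng.randn(5, 50)   # stand-in for W[:, 50:100] (here: 5 output classes)
L = rng.randn(100, 50)        # stand-in for the word-vector matrix

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# P(class | word) for every word at once, then the 10 strongest words for class 1.
probs = softmax(L.dot(W_center.T))           # shape (n_words, n_classes)
top10 = np.argsort(probs[:, 1])[::-1][:10]   # indices of the top-scoring words
```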
SuLab/WikidataIntegrator
notebooks/setDescription.ipynb
mit
from wikidataintegrator import wdi_core, wdi_login
import os
import pandas as pd
import pprint
"""
Explanation: This notebook contains code examples for maintaining, extending and updating the Wikipathways bot
load the libraries
End of explanation
"""
def sparql(query, endpoint):
    return wdi_core.WDItemEngine.execute_sparql_query(query, endpoint=endpoint)

def sparqlpandas(query, endpoint):
    return wdi_core.WDItemEngine.execute_sparql_query(query, endpoint=endpoint, as_dataframe=True)
"""
Explanation: Define the functions
End of explanation
"""
print("Logging in...")
if "WDUSER" in os.environ and "WDPASS" in os.environ:
    WDUSER = os.environ['WDUSER']
    WDPASS = os.environ['WDPASS']
else:
    raise ValueError("WDUSER and WDPASS must be specified in local.py or as environment variables")
login = wdi_login.WDLogin(WDUSER, WDPASS)
"""
Explanation: Login to the bot
The code snippet below is to log in to Wikidata. First it checks whether local variables containing the Wikidata user/bot account and password have been set. In this code example in the Jupyter notebook we use the %env magic. In Jenkins the variables are set when configuring the Jenkins recipe. Running a script like this on Wikidata requires a bot account. 
End of explanation """ ## Find the Wikidata identifier for WP716 query = """ SELECT * WHERE {?pathway wdt:P2888 <http://identifiers.org/wikipathways/WP716>} """ sparqlpandas(query, endpoint="https://query.wikidata.org/sparql") wdwpid = sparql(query, endpoint="https://query.wikidata.org/sparql")["results"]["bindings"][0]["pathway"]["value"] print(wdwpid) wdPage = wdi_core.WDItemEngine(wd_item_id=wdwpid.replace("http://www.wikidata.org/entity/", "")) #pprint.pprint(wdPage.get_wd_json_representation()) """ Explanation: Update the description for Pathway http://identifiers.org/wikipathways/WP716 End of explanation """ query = """ prefix dcterms: <http://purl.org/dc/terms/> SELECT ?item ?description WHERE { <""" query += wdwpid + "> " query += """wdt:P2410 ?wpid ; wdt:P2888 ?wpiri . SERVICE <http://sparql.wikipathways.org/> { ?version dc:identifier ?wpiri ; dcterms:description ?description . } } """ sparqlpandas(query, endpoint="https://query.wikidata.org/sparql") description = sparql(query, endpoint="https://query.wikidata.org/sparql")["results"]["bindings"][0]["description"]["value"] print(description) wdPage.set_description(description, lang="en") wdPage.write(login) """ Explanation: Get the description from Wikipathways End of explanation """
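Since the federated query above is built by plain string concatenation, the same composition can be wrapped in a small helper for an arbitrary pathway item. A sketch (the helper name is made up; it relies on the query service's predefined wd:/wdt: prefixes and the same property IDs used above):

```python
def build_description_query(wd_qid):
    """Compose the federated Wikidata/WikiPathways description query for one item."""
    template = (
        "prefix dcterms: <http://purl.org/dc/terms/>\n"
        "SELECT ?item ?description WHERE {{\n"
        "  wd:{qid} wdt:P2410 ?wpid ;\n"
        "           wdt:P2888 ?wpiri .\n"
        "  SERVICE <http://sparql.wikipathways.org/> {{\n"
        "    ?version dc:identifier ?wpiri ;\n"
        "             dcterms:description ?description .\n"
        "  }}\n"
        "}}\n"
    )
    return template.format(qid=wd_qid)

# "Q42" is a placeholder QID; in the notebook the real one comes from the first query.
print(build_description_query("Q42"))
```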
mne-tools/mne-tools.github.io
0.18/_downloads/2e5e89949bd57aecc1ef4e79435a8149/plot_temporal_whitening.ipynb
bsd-3-clause
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr> # # License: BSD (3-clause) import numpy as np from scipy import signal import matplotlib.pyplot as plt import mne from mne.time_frequency import fit_iir_model_raw from mne.datasets import sample print(__doc__) data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif' proj_fname = data_path + '/MEG/sample/sample_audvis_ecg-proj.fif' raw = mne.io.read_raw_fif(raw_fname) proj = mne.read_proj(proj_fname) raw.info['projs'] += proj raw.info['bads'] = ['MEG 2443', 'EEG 053'] # mark bad channels # Set up pick list: Gradiometers - bad channels picks = mne.pick_types(raw.info, meg='grad', exclude='bads') order = 5 # define model order picks = picks[:1] # Estimate AR models on raw data b, a = fit_iir_model_raw(raw, order=order, picks=picks, tmin=60, tmax=180) d, times = raw[0, 10000:20000] # look at one channel from now on d = d.ravel() # make flat vector innovation = signal.convolve(d, a, 'valid') d_ = signal.lfilter(b, a, innovation) # regenerate the signal d_ = np.r_[d_[0] * np.ones(order), d_] # dummy samples to keep signal length """ Explanation: Temporal whitening with AR model Here we fit an AR model to the data and use it to temporally whiten the signals. End of explanation """ plt.close('all') plt.figure() plt.plot(d[:100], label='signal') plt.plot(d_[:100], label='regenerated signal') plt.legend() plt.figure() plt.psd(d, Fs=raw.info['sfreq'], NFFT=2048) plt.psd(innovation, Fs=raw.info['sfreq'], NFFT=2048) plt.psd(d_, Fs=raw.info['sfreq'], NFFT=2048, linestyle='--') plt.legend(('Signal', 'Innovation', 'Regenerated signal')) plt.show() """ Explanation: Plot the different time series and PSDs End of explanation """
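The whitening step above — convolving the signal with the fitted AR coefficients so that what remains is the innovation — can be verified on a synthetic AR(1) process. A pure-Python sketch in which the true coefficient is assumed known rather than estimated by fit_iir_model_raw:

```python
import random

random.seed(0)
a1 = 0.9  # known AR(1) coefficient: x[t] = a1 * x[t-1] + e[t]

x, prev = [], 0.0
for _ in range(5000):
    prev = a1 * prev + random.gauss(0.0, 1.0)
    x.append(prev)

# Whitening = convolving with the AR polynomial [1, -a1]:
innovation = [x[t] - a1 * x[t - 1] for t in range(1, len(x))]

def var(seq):
    m = sum(seq) / len(seq)
    return sum((v - m) ** 2 for v in seq) / len(seq)

# The colored AR(1) signal has variance 1/(1 - a1**2) ≈ 5.3,
# while the whitened innovation is back near unit variance.
```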
quantopian/research_public
notebooks/data/quandl.currfx_usdeur/notebook.ipynb
apache-2.0
# import the dataset
from quantopian.interactive.data.quandl import currfx_usdeur
# Since this data is public domain and provided by Quandl for free, there is no _free version of this
# data set, as found in the premium sets. This import gets you the entirety of this data set.

# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
import matplotlib.pyplot as plt

currfx_usdeur.sort('asof_date')
"""
Explanation: Quandl: US vs. EUR Exchange Rate
In this notebook, we'll take a look at the currfx_usdeur data set, available on Quantopian. This dataset spans from 1999 through the current day. It contains the daily exchange rates for the US Dollar (USD) vs. the euro (EUR).
We access this data via the API provided by Quandl. More details on this dataset can be found on Quandl's website.
Blaze
Before we dig into the data, we want to tell you about how you generally access Quantopian partner data sets. These datasets are available using the Blaze library. Blaze provides the Quantopian user with a convenient interface to access very large datasets. Some of these sets (though not this one) are many millions of records. Bringing that data directly into Quantopian Research just is not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side. To learn more about using Blaze and generally accessing Quantopian partner data, clone this tutorial notebook.
With preamble in place, let's get started:
End of explanation
"""
currfx_usdeur.count()
"""
Explanation: The data goes all the way back to 1999 and is updated daily. Blaze provides us with the first 10 rows of the data for display. Just to confirm, let's just count the number of rows in the Blaze expression:
End of explanation
"""
usdeur_df = odo(currfx_usdeur, pd.DataFrame)

usdeur_df.plot(x='asof_date', y='rate')
plt.xlabel("As Of Date (asof_date)")
plt.ylabel("Exchange Rate")
plt.title("USD vs. 
EUR Exchange Rate") plt.legend().set_visible(False) """ Explanation: Let's go plot it for fun. This data set is definitely small enough to just put right into a Pandas DataFrame End of explanation """
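Once the rates sit in a DataFrame, derived series are one-liners — e.g. usdeur_df['rate'].pct_change(). A pure-Python sketch of what that day-over-day percent change computes:

```python
def pct_change(series):
    """Fractional change from the previous observation; no previous value -> None."""
    out = [None]
    for prev, curr in zip(series, series[1:]):
        out.append(curr / prev - 1.0)
    return out

rates = [0.90, 0.92, 0.89, 0.89]
print(pct_change(rates))  # [None, ~0.0222, ~-0.0326, 0.0]
```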
WNoxchi/Kaukasos
FAI02_old/Lesson9/Lesson9_SR_CodeAlong.ipynb
mit
%matplotlib inline
import os; import sys; sys.path.insert(1, os.path.join('../utils'))
from utils2 import *

from scipy.optimize import fmin_l_bfgs_b
from scipy.misc import imsave
from keras import metrics

from vgg16_avg import VGG16_Avg

# Tell TensorFlow to use no more GPU RAM than necessary
limit_mem()

path = '../data/'
"""
Explanation: 07 SEP 2017 - WH Nixalo
This is a code-along of the super-resolution portion of the FADL2 Lesson 9 JNB.
End of explanation
"""
rn_mean = np.array([123.68, 116.779, 103.939], dtype=np.float32)
preproc = lambda x: (x - rn_mean)[:, :, :, ::-1]
deproc = lambda x,s: np.clip(x.reshape(s)[:, :, :, ::-1] + rn_mean, 0, 255)

arr_lr = bcolz.open(path + 'trn_resized_72.bc')[:]
arr_hr = bcolz.open(path + 'trn_resized_288.bc')[:]

pars = {'verbose': 0, 'callbacks': [TQDMNotebookCallback(leave_inner=True)]}

shp = arr_hr.shape[1:]

arr_hr.shape[1:]
"""
Explanation: Use Content Loss to Create a Super-Resolution Network
So far we've demonstrated how to achieve successful results in style transfer. However, there's an obvious drawback to our implementation, namely that we're training an image, not a network, and therefore every new image requires us to retrain. It's not a feasible method for any sort of real-time application. Fortunately, we can address this issue by using a fully convolutional network (FCN), and in particular we'll look at this implementation for Super Resolution. 
We're following the approach in this paper End of explanation """ def conv_block(x, filters, size, stride=(2,2), mode='same', act=True): x = Convolution2D(filters, size, size, subsample=stride, border_mode=mode)(x) x = BatchNormalization(mode=2)(x) return Activation('relu')(x) if act else x def res_block(ip, nf=64): x = conv_block(ip, nf, 3, (1,1)) x = conv_block(x, nf, 3, (1,1), act=False) return merge([x, ip], mode='sum') # def deconv_block(x, filters, size, shape, stride=(2,2)): # x = Deconvolution2D(filters, size, size, subsample=stride, # border_mode='same', output_shape=(None,)+shape)(x) # x = BatchNormalization(mode=2)(x) # return Activation('relu')(x) def up_block(x, filters, size): x = keras.layers.UpSampling2D()(x) x = Convolution2D(filters, size, size, border_mode='same')(x) x = BatchNormalization(mode=2)(x) return Activation('relu')(x) """ Explanation: To start we'll define some of the building blocks of our network. In particular recall the residual block (as used in ResNet), which is just a sequence of 2 Convolutional layers that is added to the initial block input. We also have a de-Convolutional layer (aka. a "Transposed Convolution" or "Fractionally-Strided Convolution"), whose purpose is to learn to 'undo' the Convolutional function. It does this by padding the smaller image in such a way as to apply filters on it to produce a larger image. End of explanation """ inp = Input(arr_lr.shape[1:]) x = conv_block(inp, 64, 9, (1,1)) for i in range(4): x = res_block(x) x = up_block(x, 64, 3) x = up_block(x, 64, 3) x = Convolution2D(3, 9, 9, activation='tanh', border_mode='same')(x) outp = Lambda(lambda x: (x+1) * 127.5)(x) """ Explanation: This model here is using the previously defined blocks to encode a low res image and then upsample it to match the same image in hires. 
End of explanation """ vgg_inp = Input(shp) vgg = VGG16(include_top=False, input_tensor=Lambda(preproc)(vgg_inp)) """ Explanation: The method of training this network is almost exactly the same as training the pixels from our previous implementations. The idea here is we're going to feed two images to Vgg16 and compare their convolutional outputs at some layer. These two images are the target image (which in our case is the same as the original but at a higher resolution), and the output of the previous network we just defined, which we hope will learn to output a high resolution image. The key then is to train this other network to produce an image that minimizes the loss between the outputs of some convolutional layer in Vgg16 (which the paper refers to as "perceptual loss"). In doing so, we're able to train a network that can upsample an image and recreate the higher resolution details. End of explanation """ for λ in vgg.layers: λ.trainable=False """ Explanation: Since we only want to learn the "upsampling network", and are just using VGG to calculate the loss function, we set the Vgg layers to not be trainable: End of explanation """ def get_outp(m, ln): return m.get_layer(f'block{ln}_conv1').output vgg_content = Model(vgg_inp, [get_outp(vgg, o) for o in [1,2,3]]) vgg1 = vgg_content(vgg_inp) vgg2 = vgg_content(outp) # ln = 1 # print(f'block{ln}_conv1'.format(ln)) def mean_sqr_b(diff): dims = list(range(1, K.ndim(diff))) return K.expand_dims(K.sqrt(K.mean(diff**2, dims)), 0) w = [0.1, 0.8, 0.1] def content_fn(x): res = 0; n=len(w) for i in range(n): res += mean_sqr_b(x[i]-x[i+n]) * w[i] return res m_sr = Model([inp, vgg_inp], Lambda(content_fn)(vgg1+vgg2)) targ = np.zeros((arr_hr.shape[0], 1)) """ Explanation: An important difference in training for super resolution is the loss function. We use what's known as a perceptual loss function (which is simply the content loss for some layer). 
End of explanation """ m_sr.compile('adam','mse') m_sr.fit([arr_lr, arr_hr], targ, 8, 2, **pars) m_sr.save_weights(path + 'lesson9/results/' + 'sr_final.h5') """ Explanation: Finally we compile this chain of models and we can pass it the original lores image as well as the hires to train on. We also define a zero vector as the target, which is a required parameter when calling fit on a keras model. End of explanation """ K.set_value(m_sr.optimizer.lr, 1e-4) m_sr.fit([arr_lr, arr_hr], targ, 8, 1, **pars) """ Explanation: We use learning rate annealing to get a better fit. End of explanation """ top_model = Model(inp, outp) p = top_model.predict(arr_lr[10:11]) """ Explanation: We're only interested in the trained part of the model, which does the actual upsampling. End of explanation """ plt.imshow(arr_lr[10].astype('uint8')); plt.imshow(p[0].astype('uint8')); top_model.save_weights(path + 'lesson9/results/' + 'sr_final.h5') top_model.load_weights(path + 'lesson9/results/' + 'sr_final.h5') """ Explanation: After training for some time, we get some very impressive results! Looking at these two images, we can see that the predicted higher resolution image has filled in a lot of detail, including the shadows under the greens and the texture of the food. End of explanation """ # well, since you mention it: mofolo_jup = Image.open(path + 'sr-imgs/Jupiter-Juno-LR.jpeg') mofolo_jup = np.expand_dims(np.array(mofolo_jup), 0) p = top_model.predict(mofolo_jup) # lores Jupiter plt.imshow(mofolo_jup[0].astype('uint8')); # superes Jupiter plt.imshow(p[0].astype('uint8')) # original hires jupiter: mofohi_jup = Image.open(path + 'sr-imgs/Jupiter-Juno-HR.jpg') plt.imshow(mofohi_jup) """ Explanation: The important thing to take away here is that as opposed to our earlier approaches, this type of approach results in a model that can create the desired image and is a scalable implementation. 
Note that we haven't used a test set here, so we don't know if the above result is due to over-fitting. End of explanation """ class ReflectionPadding2D(Layer): def __init__(self, padding=(1,1), **kwargs): self.padding = tuple(padding) self.input_spec = [InputSpec(ndim=4)] super(ReflectionPadding2D, self).__init__(**kwargs) def get_output_shape_for(self, s): return (s[0], s[1] + 2 * self.padding[0], s[2] + 2 * self.padding[1], s[3]) def call(self, x, mask=None): w_pad, h_pad = self.padding return tf.pad(x, [[0,0], [h_pad, h_pad], [w_pad, w_pad], [0,0] ], 'REFLECT') """ Explanation: Fast Style Transfer The original paper that introduced the above super-resolution approach also used it to create a much faster style transfer system (for a specific style). Take a look at the paper and the very helpful supplementary material. Reflection Padding The supplementary material mentions that they found reflection padding helpful - we have implemented this as a Keras layer. All the other layers and blocks are already defined above. 
End of explanation """ inp = Input((288, 288, 3)) ref_model = Model(inp, ReflectionPadding2D((40, 10))(inp)) ref_model.compile('adam', 'mse') p = ref_model.predict(arr_hr[10:11]) plt.imshow(p[0].astype('uint8')) """ Explanation: Testing the reflection padding layer: End of explanation """ shp = arr_hr.shape[1:] style = Image.open(path + 'nst/starry-night.png') # style = style.resize(np.divide(style.size, 3.5).astype('int32')) style = np.array(style)[:shp[0], :shp[1], :shp[2]] plt.imshow(style); def res_crop_block(ip, nf=64): x = conv_block(ip, nf, 3, (1,1,), 'valid') x = conv_block(x, nf, 3, (1,1), 'valid', False) ip = Lambda(lambda x: x[:, 2:-2, 2:-2])(ip) return merge([x, ip], mode='sum') inp=Input(shp) x=ReflectionPadding2D((40,40))(inp) x=conv_block(x, 64, 9, (1,1)) x=conv_block(x, 64, 3) x=conv_block(x, 64, 3) for i in range(5): x=res_crop_block(x) x=up_block(x, 64, 3) x=up_block(x, 64, 3) x=Convolution2D(3, 9, 9, activation='tanh', border_mode='same')(x) outp=Lambda(lambda x: (x+1)*128.5)(x) vgg_inp=Input(shp) vgg=VGG16(include_top=False, input_tensor=Lambda(preproc)(vgg_inp)) for λ in vgg.layers: λ.trainable=False def get_outp(m, ln): return m.get_layer(f'block{ln}_conv2').output vgg_content = Model(vgg_inp, [get_outp(vgg, o) for o in [2,3,4,5]]) """ Explanation: Main Algorithm This approach is exactly the same as super resolution, except now the loss includes the style loss. 
End of explanation """ style_targs = [K.variable(o) for o in vgg_content.predict(np.expand_dims(style, 0))] [K.eval(K.shape(o)) for o in style_targs] vgg1 = vgg_content(vgg_inp) vgg2 = vgg_content(outp) """ Explanation: Here we alter the super resolution approach by adding style outputs: End of explanation """ def gram_matrix_b(x): x = K.permute_dimensions(x, (0, 3, 1, 2)) s = K.shape(x) feat = K.reshape(x, (s[0], s[1], s[2]*s[3])) return K.batch_dot(feat, K.permute_dimensions(feat, (0, 2, 1)) ) / K.prod(K.cast(s[1:], K.floatx())) w = [0.1, 0.2, 0.6, 0.1] def tot_loss(x): loss = 0; n = len(style_targs) for i in range(n): loss += mean_sqr_b(gram_matrix_b(x[i+n]) - gram_matrix_b(style_targs[i])) / 2. loss += mean_sqr_b(x[i]-x[i+n]) * w[i] return loss loss = Lambda(tot_loss)(vgg1 + vgg2) m_style = Model([inp, vgg_inp], loss) targ = np.zeros((arr_hr.shape[0], 1)) m_style.compile('adam', 'mse') m_style.fit([arr_hr, arr_hr], targ, 4, 2, **pars) m_style.save_weights(path + 'lesson9/results/' + 'style_final.h5') K.set_value(m_style.optimizer.lr, 1e-4) m_style.fit([arr_hr, arr_hr], targ, 4, 1, **pars) top_model = Model(inp, outp) """ Explanation: Our loss now includes the MSE for the content loss and the Gram Matrix for the style. End of explanation """ p = top_model.predict(arr_hr[:10]) plt.imshow(np.round(p[0]).astype('uint8')) """ Explanation: Now we can pass any image through this CNN and it'll produce the desired style. End of explanation """ top_model.save_weights(path + 'lesson9/results/style_final.h5') # top_model.load_weights(path + 'lesson9/results/style_final.h5') """ Explanation: this is kind of hilarious End of explanation """
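The style loss above hinges on the Gram matrix of VGG activations. As a sanity check on the idea, here is roughly what gram_matrix_b computes, rewritten in plain NumPy for a single unbatched (H, W, C) activation tensor (an illustrative sketch, not the notebook's Keras code):

```python
import numpy as np

def gram_matrix(feat_map):
    """Gram matrix of an (H, W, C) activation: channel-by-channel dot products,
    normalized by the number of elements, mirroring gram_matrix_b above."""
    h, w, c = feat_map.shape
    feat = feat_map.transpose(2, 0, 1).reshape(c, h * w)  # channels as rows
    return feat @ feat.T / (c * h * w)

acts = np.random.rand(8, 8, 3)
g = gram_matrix(acts)
# a Gram matrix is symmetric, with one row/column per channel
print(g.shape)  # (3, 3)
```

Because the Gram matrix throws away all spatial information, matching it captures which channels co-activate (texture/style) rather than where, which is why it works as a style target.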
MaxPowerWasTaken/MaxPowerWasTaken.github.io
jupyter_notebooks/Multiprocessing with Pandas.ipynb
gpl-3.0
from multiprocessing import Pool, cpu_count def process_Pandas_data(func, df, num_processes=None): ''' Apply a function separately to each column in a dataframe, in parallel.''' # If num_processes is not specified, default to minimum(#columns, #machine-cores) if num_processes is None: num_processes = min(df.shape[1], cpu_count()) # 'with' context manager takes care of pool.close() and pool.join() for us with Pool(num_processes) as pool: # we need a sequence to pass pool.map; this list comprehension collects the columns seq = [df[col_name] for col_name in df.columns] # pool.map returns results as a list results_list = pool.map(func, seq) # return list of processed columns, concatenated together as a new dataframe return pd.concat(results_list, axis=1) """ Explanation: Processing Multiple Pandas Series in Parallel Introduction Python's Pandas library for data processing is great for all sorts of data-processing tasks. However, one thing it doesn't support out of the box is parallel processing across multiple cores. I've been wanting a simple way to process Pandas DataFrames in parallel, and recently I found this truly awesome blog post. It shows how to apply an arbitrary Python function to each object in a sequence, in parallel, using Pool.map from the multiprocessing library. The author's example involves running urllib2.urlopen() across a list of urls, to scrape html from several web sites in parallel. But the principle applies equally to mapping a function across several columns in a Pandas DataFrame. Here's an example of how useful that can be. A simple multiprocessing wrapper Here's some code which will accept a Pandas DataFrame and a function, apply the function to each column in the DataFrame, and return the results (as a new dataframe). It also allows the caller to specify the number of processes to run in parallel, but uses a sensible default when not provided. End of explanation """
# UNCOMMENT IN MARKDOWN BEFORE PUSHING LIVE # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! # (commented out so can run notebook in one click.) #with Pool(num_processes) as pool: # ... # results_list = pool.map(func, seq) """ Explanation: Hopefully the code above looks pretty straightforward, but if it looks a bit confusing at first glance, ultimately the key is these two lines: End of explanation """ import pandas as pd df = pd.read_csv('datasets/quora_kaggle.csv') df.head(3) import re from nltk.corpus import stopwords def tokenize_column(text_series): ''' Accept a series of strings, returns list of words (lowercased) without punctuation or stopwords''' # lowercase everything text_series = text_series.astype(str).str.lower() # remove punctuation (r'\W' is regex, matches any non-alphanumeric character) text_series = text_series.str.replace(r'\W', ' ') # return list of words, without stopwords sw = stopwords.words('english') return text_series.apply(lambda row: [word for word in row.split() if word not in sw]) """ Explanation: the rest was just setting the default number of processes to run in parallel, getting a 'sequence of columns' from our input dataframe, and concatenating the list of results we get back from pool.map A function to measure parallel performance gains with To measure the speed boost from wrapping a bit of Pandas processing in this multiprocessing wrapper, I'm going to load the Quora Duplicate Questions dataset, and the vectorized text-tokenizing function from my last blog post on using vectorized Pandas functions. 
End of explanation """ print(df.question1.head(3), '\n\n', tokenize_column(df.question1.head(3))) """ Explanation: To see what this "tokenizing" function does, here are a few unprocessed quora questions, followed by their outputs from the tokenizer End of explanation """ from datetime import datetime def clock_tokenize_in_series(df): '''Calc time to process in series''' # Initialize dataframe to hold processed questions, and start clock qs_processed = pd.DataFrame() start = datetime.now() # process question columns in series for col in df.columns: qs_processed[col] = tokenize_column(df[col]) # return time elapsed return datetime.now() - start def clock_tokenize_in_parallel(df): '''Calc time to process in parallel''' # Initialize dataframe to hold processed questions, and start clock qs_processed = pd.DataFrame() start = datetime.now() # process question columns in parallel qs_processed2 = process_Pandas_data(tokenize_column, df) # return time elapsed return datetime.now() - start """ Explanation: Clocking Performance Gains of Using Multiprocessing, 2 Cores The two functions below clock the time elapsed for tokenizing our two question columns in series or in parallel. Defining these tests as their own functions means we're not creating any new global-scope variables when we measure performance. All the intermediate results (like the new dataframes of processed questions) are garbage-collected after the function returns its results (an elapsed time). This is important to maintain an apples-to-apples performance comparison; otherwise, performance tests run later in the notebook would have less RAM available than the first test we run. 
End of explanation """ # Print Time Results no_parallel = clock_tokenize_in_series(df[['question1', 'question2']]) parallel = clock_tokenize_in_parallel(df[['question1', 'question2']]) print('Time elapsed for processing 2 questions in series :', no_parallel) print('Time elapsed for processing 2 questions in parallel :', parallel) """ Explanation: And now to measure our results: End of explanation """ # Column-bind two questions with copies of themselves for 4 text columns four_qs = pd.concat([df[['question1','question2']], df[['question1','question2']]], axis=1) four_qs.columns = ['q1', 'q2', 'q1copy', 'q2copy'] four_qs.head(2) # Print Results for running tokenizer on 4 questions in series, then in parallel no_parallel = clock_tokenize_in_series(four_qs) parallel = clock_tokenize_in_parallel(four_qs) print('Time elapsed for processing 4 questions in series :', no_parallel) print('Time elapsed for processing 4 questions in parallel :', parallel) """ Explanation: So processing the two columns in parallel cut our processing time from 23.7 seconds down to 14.7 seconds, a decrease of 38%. The theoretical maximum reduction we might have expected with no multiprocessing overhead would of course been a 50% reduction, so this is not bad. Comparing Performance with 4 Cores I have four cores on this laptop, and I'd like to see how the performance gains scale here from two to four cores. Below, I'll make copies of our q1 and q2 so we have four total text columns, then re-run the comparison by passing this new 4-column dataframe to the testing function defined above. End of explanation """
karthikrangarajan/intro-to-sklearn
archive/Intro_ML_sklearn.ipynb
bsd-3-clause
# Plot settings for notebook # so that plots show up in notebook %matplotlib inline # seaborn here is used for aesthetics. # here, setting seaborn plot defaults (this can be safely commented out) import seaborn; seaborn.set() # Import an example plot from the figures directory from fig_code import plot_sgd_separator plot_sgd_separator() """ Explanation: <a href="https://colab.research.google.com/github/OSGeoLabBp/tutorials/blob/master/english/data_processing/lessons/ml_clustering.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Machine Learning with scikit-learn <img src='imgs/robotguy.png' alt="Smiley face" width="42" height="42" align="left">Learning Objectives Gain some high level knowledge around machine learning with a gentle/brief introduction Learn the importance of pre-processing data and how scikit-learn expects data See data transformations for machine learning in action Get an idea of your options for learning on training sets and applying a model for prediction See what sort of metrics are commonly used in scikit-learn Learn options for model evaluation Become familiar with ways to make this process robust and simplified (pipelining and tuning parameters) For workshop (reorg) Sections: 1. ML 101 w/ code examples (incl. taste of sklearn w/ logistic regression + accuracy scores and user entered sepal and petal measurements); intro to sklearn's approach 2. Our data: iris 3. Supervised 4. Unsupervised * Evaluating a model * What next (GridSearch & Pipeline) Flow: * a learner and visual (logistic regression, pairplot, accuracy, give some sepal and petal measurements) * what category of ML is logistic regression? (regression or classification?) * QUESTION: what should we have done first? Any ideas? (EDA, visualize, pre-process if need be) * flower pics * peek at data * preprocessing from sklearn * Supervised - decision tree in detail, random forest * Unsupervised - novelty detection aka anomaly detection (note that PCA & dimensionality reduction is unsupervised) * Evaluate - metrics * what do you do with this model? what next? 
* note: parameter tuning can be automated with GridSearch * note: can test many algorithms at once with Pipeline some questions inline Does this parameter when increased cause overfitting or underfitting? What are the implications of those cases?<br><br> Is it better to have too many false positives or too many false negatives?<br><br> What is the difference between outlier detection and anomaly detection? Machine Learning 101 It's said in different ways, but I like the way Jake VanderPlas defines ML: Machine Learning is about building <b>programs with tunable parameters</b> (typically an array of floating point values) that are adjusted automatically so as to improve their behavior by <b>adapting to previously seen data</b>. He goes on to say: Machine Learning can be considered a <b>subfield of Artificial Intelligence</b> since those algorithms can be seen as building blocks to make computers learn to behave more intelligently by somehow <b>generalizing</b> rather than just storing and retrieving data items like a database system would do. (more here) ML is much more than writing a program. ML experts write clever and robust algorithms which can generalize to answer different, but specific questions. There are still types of questions that a certain algorithm can not or should not be used to answer. I say answer instead of solve, because even with an answer one should evaluate whether it is a good answer or bad answer. Also, just as in statistics, one needs to be careful about assumptions and limitations of an algorithm and the subsequent model that is built from it. Here's my hand-drawn diagram of the machine learning process.<br> <img src='imgs/ml_process.png' alt="Smiley face" width="550"> <br><br> Examples Below, we are going to show a simple case of <i>classification</i>. In the figure we show a collection of 2D data, colored by their class labels (imagine one class is labeled "red" and the other "blue"). 
The fig_code module is credited to Jake VanderPlas and was cloned from his github repo here - also on our repo is his license file since he asked us to include that if we use his source code. :) End of explanation """
The fig_code module is credited to Jake VanderPlas and was cloned from his github repo here - also on our repo is his license file since he asked us to include that if we use his source code. :) End of explanation """ from sklearn.datasets import load_iris iris = load_iris() # Leave one value out from training set - that will be test later on X_train, y_train = iris.data[:-1,:], iris.target[:-1] from sklearn.linear_model import LogisticRegression # our model - a multiclass regression logistic = LogisticRegression() # train on iris training set logistic.fit(X_train, y_train) X_test = iris.data[-1,:].reshape(1, -1) y_predict = logistic.predict(X_test) print('Predicted class %s, real class %s' % ( y_predict, iris.target[-1])) print('Probabilities of membership in each class: %s' % logistic.predict_proba(X_test)) """ Explanation: Above is the vector which best separates the two classes, "red" and "blue" using a classification algorithm called Stochastic Gradient Decent (don't worry about the detail yet). The confidence intervals are shown as dashed lines. - FACT CHECK CI LINE COMMENT PLEASE This demonstrates a very important aspect of ML and that is the algorithm is <i>generalizable</i>, i.e., if we add some new data, a new point, the algorithm can <i>predict</i> whether is should be in the "red" or "blue" category. <b>ML TIP: ML can only answer 5 questions:</b> * How much/how many? * Which category? * Which group? * Is it weird? * Which action? As far as algorithms for learning a model (i.e. running some training data through an algorithm), it's nice to think of them in two different ways (with the help of the machine learning wikipedia article). The first way of thinking about ML, is by the type of information or <i>input</i> given to a system. So, given that criteria there are three classical categories: 1. Supervised learning - we get the data and the labels 2. Unsupervised learning - only get the data (no labels) 3. 
Reinforcement learning - reward/penalty based information (feedback) Another way of categorizing ML approaches is to think of the desired <i>output</i>: 1. Classification 2. Regression 3. Clustering 4. Density estimation 5. Dimensionality reduction --> This second approach (by desired <i>output</i>) is how sklearn categorizes its ML algorithms. The problem solved in supervised learning (e.g. classification, regression) Supervised learning consists in learning the link between two datasets: the observed data X and an external variable y that we are trying to predict, usually called “target” or “labels”. Most often, y is a 1D array of length n_samples. All supervised estimators in sklearn implement a fit(X, y) method to fit the model and a predict(X) method that, given unlabeled observations X, returns the predicted labels y. Common algorithms you will use to train a model and then use to predict the labels of unknown observations are: <b>classification</b> and <b>regression</b>. There are many types of classification and regression (for examples check out the sklearn algorithm cheatsheet below). The problem solved in <i>un</i>supervised learning In machine learning, the problem of unsupervised learning is that of trying to find <b>hidden structure</b> in unlabeled data. Unsupervised models have a fit(), transform() and/or fit_transform() in sklearn. There are some instances where ML is just not needed or appropriate for solving a problem. Some examples are pattern matching (e.g. regex), group-by and data mining in general (discovery vs. prediction). EXERCISE: Should I use ML or can I get away with something else? 
Looking back at previous years, by what percent did housing prices increase over each decade?<br> Looking back at previous years, and given the relationship between housing prices and mean income in my area, given my income how much will a house be in two years in my area?<br> A vacuum like a Roomba has to make a decision to vacuum the living room again or return to its base.<br> Is this image a cat or dog?<br> Are orange tabby cats more common than other breeds in Austin, Texas?<br> Using my SQL database on housing prices, group my housing prices by whether or not the house is under 10 miles from a school.<br> What is the weather going to be like tomorrow?<br> What is the purpose of life? A very brief introduction to scikit-learn (aka sklearn) This module is not meant to be a comprehensive introduction to ML, but rather an introduction to the current de facto tool for ML in python. As a gentle intro, it is helpful to think of the sklearn approach as having layers of abstraction. This famous quote certainly applies: Easy reading is damn hard writing, and vice versa. <br> --Nathaniel Hawthorne In sklearn, you'll find you have a common programming choice: to do things very explicitly, e.g. pre-process data one step at a time, perhaps do a transformation like PCA, split data into training and test sets, define a classifier or learner with desired parameters, train the classifier, use the classifier to predict on a test set and then analyze how good it did. A different approach and something sklearn offers is to combine some or all of the steps above into a pipeline so to speak. For instance, one could define a pipeline which does all of these steps at one time and perhaps even pits multiple learners against one another or does some parameter tuning with a grid search (examples will be shown towards the end). This is what is meant here by layers of abstraction. 
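The pipelined, grid-searched style described above might look roughly like this (a sketch written against current sklearn import paths, which differ from the older API this notebook targets):

```python
from sklearn.datasets import load_iris
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Chain preprocessing and a learner into one estimator...
pipe = Pipeline([('scale', StandardScaler()),
                 ('clf', LogisticRegression(max_iter=200))])

# ...then let GridSearchCV tune a parameter of the learner by cross-validation
search = GridSearchCV(pipe, param_grid={'clf__C': [0.1, 1.0, 10.0]}, cv=3)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

The `'clf__C'` naming convention (step name, double underscore, parameter) is how grid-search parameters are routed to a step inside a Pipeline.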
So, in this particular module, for the most part, we will try to be explicit regarding our process and give some useful tips on options for a more automated or pipelined approach. Just note, once you've mastered the explicit approaches you might want to explore sklearn's GridSearchCV and Pipeline classes. Here is sklearn's algorithm diagram - (note, this is not an exhaustive list of model options offered in sklearn, but serves as a good algorithm guide). Your first model - a multiclass logistic regression on the iris dataset sklearn comes with this dataset ready-to-go for sklearn's algorithms End of explanation """ print(type(iris.data)) print(type(iris.target)) """ Explanation: QUESTION: * What would have been good to do before plunging right in to a logistic regression model? Some terms you will encounter as a Machine Learnest

Term | Definition
------------- | -------------
Training set | set of data used to learn a model
Test set | set of data used to test a model
Feature | a variable (continuous, discrete, categorical, etc.) aka column
Target | Label (associated with dependent variable, what we predict)
Learner | Model or algorithm
Fit, Train | learn a model with an ML algorithm using a training set
Predict | w/ supervised learning, give a label to an unknown datum (data), w/ unsupervised decide if new data is weird, in which group, or what to do next with the new data
Accuracy | percentage of correct predictions ((TP + TN) / total)
Precision | percentage of correct positive predictions (TP / (FP + TP))
Recall | percentage of positive cases caught (TP / (FN + TP))

PRO TIP: Are you a statistician? Want to talk like a machine learning expert? Here you go (from the friendly people at SAS (here)):

A Statistician Would Say | A Machine Learnest Would Say
------------- | -------------
dependent variable | target
variable | feature
transformation | feature creation

BREAK <b>ML TIP: Ask sharp questions.</b><br>e.g. 
What type of flower is this (pictured below) closest to of the three given classes? (This links out to source) <a href="http://www.madlantern.com/photography/wild-iris/"><img border="0" alt="iris species" src="imgs/iris-setosa.jpg" width="400" height="400"></a> Labels (species names/classes): (This links out to source) <a href="http://articles.concreteinteractive.com/machine-learning-a-new-tool-for-humanity/"><img border="0" alt="iris species" src="imgs/irises.png" width="500" height="500"></a> NOTE: sklearn needs data/features (aka columns) in numpy ndarrays and the optional labels also as numpy ndarrays. TIP: Commonly, machine learning algorithms will require your data to be standardized and preprocessed. In sklearn the data must also take on a certain structure as well.</b> End of explanation """ import seaborn as sb import pandas as pd import numpy as np #sb.set_context("notebook", font_scale=2.5) %matplotlib inline """ Explanation: Let's Dive In! End of explanation """ import pandas as pd from sklearn import datasets iris = datasets.load_iris() # How many data points (rows) x how many features (columns) print(iris.data.shape) print(iris.target.shape) # What python object represents print(type(iris.data)) print(type(iris.target)) """ Explanation: Features in the Iris dataset: 0 sepal length in cm<br> 1 sepal width in cm<br> 2 petal length in cm<br> 3 petal width in cm<br> Target classes to predict: 0 Iris Setosa<br> 1 Iris Versicolour<br> 2 Iris Virginica<br> Get to know the data - visualize and explore Features (columns/measurements) come from this diagram (links out to source on kaggle): <a href="http://blog.kaggle.com/2015/04/22/scikit-learn-video-3-machine-learning-first-steps-with-the-iris-dataset/"><img border="0" alt="iris data features" src="imgs/iris_petal_sepal.png" width="200" height="200"></a> Shape Peek at data Summaries <b>Shape and representation<b> End of explanation """ # convert to pandas df (adding real column names) iris.df = 
pd.DataFrame(iris.data, columns = ['Sepal length', 'Sepal width', 'Petal length', 'Petal width']) # first few rows iris.df.head() """ Explanation: <b>Sneak a peek at data (a reminder of your pandas dataframe methods)<b> End of explanation """ # summary stats iris.df.describe() """ Explanation: <b>Describe the dataset with some summary statitsics<b> End of explanation """ # Standardization aka scaling from sklearn import preprocessing, datasets # make sure we have iris loaded iris = datasets.load_iris() X, y = iris.data, iris.target # scale it to a gaussian distribution X_scaled = preprocessing.scale(X) # how does it look now pd.DataFrame(X_scaled).head() # let's just confirm our standardization worked (mean is 0 w/ unit variance) pd.DataFrame(X_scaled).describe() # also could: #print(X_scaled.mean(axis = 0)) #print(X_scaled.std(axis = 0)) """ Explanation: We don't have to do much with the iris dataset. It has no missing values. It's already in numpy arrays and has the correct shape for sklearn. However we could try <b>standardization</b> and/or <b>normalization</b>. (later, in the transforms section, we will show one hot encoding, a preprocessing step) Preprocessing (Bonus Material) <p>What you might have to do before using a learner in `sklearn`:</p> Non-numerics transformed to numeric (tip: use applymap() method from pandas) Fill in missing values Standardization Normalization Encoding categorical features (e.g. one-hot encoding or dummy variables) <b>Features should end up in a numpy.ndarray (hence numeric) and labels in a list.</b> Data options: * Use pre-processed datasets from scikit-learn * Create your own * Read from a file If you use your own data or "real-world" data you will likely have to do some data wrangling and need to leverage pandas for some data manipulation. 
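Two items from the preprocessing checklist above that aren't demonstrated later in the notebook, filling in missing values and converting non-numerics, can be sketched on a tiny made-up frame (hypothetical data, not the iris set):

```python
import pandas as pd
import numpy as np

# A tiny made-up frame with two problems from the checklist above:
# a missing value and a non-numeric column
df = pd.DataFrame({'length': [5.1, np.nan, 6.3],
                   'species': ['setosa', 'virginica', 'setosa']})

# Fill in missing values (here, with the column mean)
df['length'] = df['length'].fillna(df['length'].mean())

# Non-numerics transformed to numeric category codes
df['species'] = df['species'].astype('category').cat.codes

print(df.round(2))
```

After these two steps every column is numeric with no gaps, so `df.values` would satisfy sklearn's "features in a numpy.ndarray" requirement described above.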
Standardization - make our data look like a standard Gaussian distribution (commonly needed for sklearn learners) FYI: you'll commonly see the data or feature set (ML word for data without its labels) represented as a capital <b>X</b> and the targets or labels (if we have them) represented as a lowercase <b>y</b>. This is because the data is a 2D array or list of lists and the targets are a 1D array or simple list. End of explanation """ # Normalization aka scaling to unit norm from sklearn import preprocessing, datasets # make sure we have iris loaded iris = datasets.load_iris() X, y = iris.data, iris.target # normalize each sample (row) to unit L1 norm X_norm = preprocessing.normalize(X, norm='l1') # how does it look now pd.DataFrame(X_norm).tail() # summary stats after normalization pd.DataFrame(X_norm).describe() # cumulative sum of normalized and original data: #print(pd.DataFrame(X_norm.cumsum().reshape(X.shape)).tail()) #print(pd.DataFrame(X).cumsum().tail()) # unit norm (convert to unit vectors) - all row sums should be 1 now X_norm.sum(axis = 1) """ Explanation: PRO TIP: To save our standardization and reapply later (say to the test set or some new data), create a transformer object like so: ```python scaler = preprocessing.StandardScaler().fit(X_train) # apply to a new dataset (e.g. 
test set): scaler.transform(X_test)
```
Normalization - scaling samples <i>individually</i> to have unit norm
This type of scaling is really important if doing some downstream transformations and learning (see sklearn docs here for more) where similarity of pairs of samples is examined
A basic intro to normalization and the unit vector can be found here
End of explanation
"""
# PCA for dimensionality reduction
from sklearn import decomposition
from sklearn import datasets
iris = datasets.load_iris()
X, y = iris.data, iris.target
# perform principal component analysis
pca = decomposition.PCA(n_components = 3)
pca.fit(X)
X_t = pca.transform(X)
# peek at the first principal component
print(X_t[:, 0])
# import numpy and matplotlib for plotting (and set some stuff)
import numpy as np
np.set_printoptions(suppress=True)
import matplotlib.pyplot as plt
%matplotlib inline
# let's separate out data based on first two principal components
x1, x2 = X_t[:, 0], X_t[:, 1]
# please don't worry about details of the plotting below
#  (will introduce in different module)
#  (note: you can get the iris names below from iris.target_names, also in docs)
s1 = ['r' if v == 0 else 'b' if v == 1 else 'g' for v in y]
s2 = ['Setosa' if v == 0 else 'Versicolor' if v == 1 else 'Virginica' for v in y]
classes = s2
colors = s1
for (i, cla) in enumerate(set(classes)):
    xc = [p for (j, p) in enumerate(x1) if classes[j] == cla]
    yc = [p for (j, p) in enumerate(x2) if classes[j] == cla]
    cols = [c for (j, c) in enumerate(colors) if classes[j] == cla]
    plt.scatter(xc, yc, c = cols, label = cla)
plt.legend(loc = 4)
"""
Explanation: PRO TIP: To save our normalization (like standardization above) and reapply later (say to the test set or some new data), create a transformer object like so:
```python
normalizer = preprocessing.Normalizer().fit(X_train)
apply to a new dataset (e.g.
test set): normalizer.transform(X_test)
```
BREAK
Make the learning easier or better beforehand - feature creation/selection
PCA
SelectKBest
One-Hot Encoder
Principal component analysis (aka PCA) reduces the dimensions of a dataset down to get the most out of the information without a really big feature space
Useful for very large feature space (e.g. say the botanist in charge of the iris dataset measured 100 more parts of the flower and thus there were 104 columns instead of 4)
More about PCA on wikipedia here
End of explanation
"""
# SelectKBest for selecting top-scoring features
from sklearn import datasets
from sklearn.feature_selection import SelectKBest, chi2
iris = datasets.load_iris()
X, y = iris.data, iris.target
print(X.shape)
# Do feature selection
#  input is scoring function (here chi2) to get univariate p-values
#  and number of top-scoring features (k) - here we get the top 2
X_t = SelectKBest(chi2, k = 2).fit_transform(X, y)
print(X_t.shape)
"""
Explanation: Selecting k top scoring features (also dimensionality reduction)
End of explanation
"""
# OneHotEncoder for dummying variables
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
import pandas as pd
data = pd.DataFrame({'index': range(1, 7),
                     'state': ['WA', 'NY', 'CO', 'NY', 'CA', 'WA']})
print(data)
# We encode both our categorical variable and its labels
enc = OneHotEncoder()
label_enc = LabelEncoder() # remember the labels here
# Encode labels (can use for discrete numerical values as well)
data_label_encoded = label_enc.fit_transform(data['state'])
data['state'] = data_label_encoded
# Encode and "dummy" variables
data_feature_one_hot_encoded = enc.fit_transform(data[['state']])
# Put into dataframe to look nicer and decode state dummy variables to original state values
# TRY: compare the original input data (look at row numbers) to one hot encoding results
#   --> do they match??
pd.DataFrame(data_feature_one_hot_encoded.toarray(), columns = label_enc.inverse_transform(range(4)))
# Encoded labels as dummy variables
print(data_label_encoded)
# Decoded
print(label_enc.inverse_transform(data_label_encoded))
"""
Explanation: <b>Note on scoring function selection in SelectKBest transformations:</b>
* For regression - f_regression
* For classification - chi2, f_classif
One Hot Encoding
It's an operation on feature labels - a method of dummying variables
Expands the feature space by nature of transform - later this can be processed further with a dimensionality reduction (the dummied variables are now their own features)
FYI: One hot encoding variables is needed for python ML module tensorflow
The code cell below should help make this clear
End of explanation
"""
from sklearn import datasets
iris = datasets.load_iris()
X, y = iris.data, iris.target
a = pd.DataFrame(X, columns = ['Sepal length', 'Sepal width', 'Petal length', 'Petal width'])
col5 = pd.DataFrame(np.random.randint(1, 4, size = len(y)))
X_plus = pd.concat([a, col5], axis = 1)
X_plus.head(20)
# ...now one-hot-encode...
"""
Explanation: EXERCISE: Use one hot encoding to "recode" the iris data's extra surprise column (we are going to add a categorical variable here to play with...)
End of explanation
"""
from sklearn.datasets import load_iris
from sklearn.cross_validation import train_test_split
from sklearn import tree
# Let's load the iris dataset
iris = load_iris()
X, y = iris.data, iris.target
# split data into training and test sets using the handy train_test_split func
# in this split, we are "holding out" only one value and label (placed into X_test and y_test)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 1)
# Let's try a decision tree classification method
clf = tree.DecisionTreeClassifier()
# fit on the training split only (fitting on all of X would leak the held-out sample)
clf = clf.fit(X_train, y_train)
# Let's predict on our "held out" sample
y_pred = clf.predict(X_test)
# What was the label associated with this test sample?
("held out" sample's original label) # fill in the blank below # how did our prediction do? print("Prediction: %d, Original label: %d" % (y_pred[0], ___)) # <-- fill in blank """ Explanation: BREAK Learning Algorithms - Supervised Learning Reminder: All supervised estimators in scikit-learn implement a fit(X, y) method to fit the model and a predict(X) method that, given unlabeled observations X, returns the predicted labels y. (direct quote from sklearn docs) Given that Iris is a fairly small, labeled dataset with relatively few features...what algorithm would you start with and why? "Often the hardest part of solving a machine learning problem can be finding the right estimator for the job." "Different estimators are better suited for different types of data and different problems." <a href = "http://scikit-learn.org/stable/tutorial/machine_learning_map/index.html" style = "float: right">-Choosing the Right Estimator from sklearn docs</a> <b>An estimator for recognizing a new iris from its measurements</b> Or, in machine learning parlance, we <i>fit</i> an estimator on known samples of the iris measurements to <i>predict</i> the class to which an unseen iris belongs. Let's give it a try! 
(We are actually going to hold out a small percentage of the iris dataset and check our predictions against the labels) End of explanation """ from IPython.display import Image from sklearn.externals.six import StringIO import pydot dot_data = StringIO() tree.export_graphviz(clf, out_file=dot_data) graph = pydot.graph_from_dot_data(dot_data.getvalue()) dot_data = StringIO() tree.export_graphviz(clf, out_file=dot_data, feature_names=iris.feature_names, class_names=iris.target_names, filled=True, rounded=True, special_characters=True) graph = pydot.graph_from_dot_data(dot_data.getvalue()) Image(graph.create_png()) from sklearn.tree import export_graphviz import graphviz export_graphviz(clf, out_file="mytree.dot", feature_names=iris.feature_names, class_names=iris.target_names, filled=True, rounded=True, special_characters=True) with open("mytree.dot") as f: dot_graph = f.read() graphviz.Source(dot_graph) """ Explanation: EXERCISE: enter in your own iris data point and see what the prediction is (what limitation do you think you might encounter here?) - if out of range What does the graph look like for this decision tree? 
End of explanation
"""
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
import pandas as pd
import numpy as np

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2)

df = pd.DataFrame(iris.data, columns=iris.feature_names)
df['is_train'] = np.random.uniform(0, 1, len(df)) <= .75
# pd.Factor is long gone; pd.Categorical.from_codes is the current equivalent
df['species'] = pd.Categorical.from_codes(iris.target, iris.target_names)
train, test = df[df['is_train']==True], df[df['is_train']==False]

clf = RandomForestClassifier(n_jobs=2)
clf.fit(X_train, y_train)

# predict takes features only (no labels)
preds = iris.target_names[clf.predict(X_test)]

df.head()

#pd.crosstab(test['species'], preds, rownames=['actual'], colnames=['preds'])
"""
Explanation: From Decision Tree to Random Forest
End of explanation
"""
from sklearn import cluster, datasets
# data
iris = datasets.load_iris()
X, y = iris.data, iris.target
k_means = cluster.KMeans(n_clusters=3)
k_means.fit(X)
# how do our original labels fit into the clusters we found?
print(k_means.labels_[::10]) print(y[::10]) """ Explanation: <p>We can be explicit and use the `train_test_split` method in scikit-learn ( [train_test_split](http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.train_test_split.html) ) as in (and as shown above for `iris` data):<p> ```python # Create some data by hand and place 70% into a training set and the rest into a test set # Here we are using labeled features (X - feature data, y - labels) in our made-up data import numpy as np from sklearn import linear_model from sklearn.cross_validation import train_test_split X, y = np.arange(10).reshape((5, 2)), range(5) X_train, X_test, y_train, y_test = train_test_split(X, y, train_size = 0.70) clf = linear_model.LinearRegression() clf.fit(X_train, y_train) ``` OR Be more concise and ```python import numpy as np from sklearn import cross_validation, linear_model X, y = np.arange(10).reshape((5, 2)), range(5) clf = linear_model.LinearRegression() score = cross_validation.cross_val_score(clf, X, y) ``` <p>There is also a `cross_val_predict` method to create estimates rather than scores ( [cross_val_predict](http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.cross_val_predict.html) ) BREAK ### Learning Algorithms - Unsupervised Learning > Reminder: In machine learning, the problem of unsupervised learning is that of trying to find hidden structure in unlabeled data. Since the training set given to the learner is unlabeled, there is no error or reward signal to evaluate a potential solution. Basically, we are just finding a way to represent the data and get as much information from it that we can. HEY! Remember PCA from above? PCA is actually considered unsupervised learning. We just put it up there because it's a good way to visualize data at the beginning of the ML process. 
We are going to continue to use the `iris` dataset (however we won't be needing the targets or labels)
End of explanation
"""
import numpy as np
# import model algorithm and data
from sklearn import svm, datasets
# import splitter
from sklearn.cross_validation import train_test_split
# import metrics
from sklearn.metrics import confusion_matrix
# feature data (X) and labels (y)
iris = datasets.load_iris()
X, y = iris.data, iris.target
# split data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size = 0.70, random_state = 42)
# perform the classification step and run a prediction on test set from above
clf = svm.SVC(kernel = 'linear', C = 0.01)
y_pred = clf.fit(X_train, y_train).predict(X_test)
# Define a plotting function for confusion matrices
#  (from http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html)
import matplotlib.pyplot as plt
def plot_confusion_matrix(cm, target_names, title = 'The Confusion Matrix', cmap = plt.cm.YlOrRd):
    plt.imshow(cm, interpolation = 'nearest', cmap = cmap)
    plt.tight_layout()
    # Add feature labels to x and y axes
    tick_marks = np.arange(len(target_names))
    plt.xticks(tick_marks, target_names, rotation=45)
    plt.yticks(tick_marks, target_names)
    plt.ylabel('True Label')
    plt.xlabel('Predicted Label')
    plt.colorbar()
"""
Explanation: Numbers in confusion matrix:
* on-diagonal - counts of points for which the predicted label is equal to the true label
* off-diagonal - counts of mislabeled points
End of explanation
"""
%matplotlib inline
cm = confusion_matrix(y_test, y_pred)
# see the actual counts
print(cm)
# visually inspect how the classifier did matching predictions to true labels
plot_confusion_matrix(cm,
iris.target_names)
"""
Explanation: <b>Classification reports</b> - a text report with important classification metrics (e.g. precision, recall)
End of explanation
"""
from sklearn.metrics import classification_report
# Using the test and prediction sets from above
print(classification_report(y_test, y_pred, target_names = iris.target_names))
# Another example with some toy data
y_test = ['cat', 'dog', 'mouse', 'mouse', 'cat', 'cat']
y_pred = ['mouse', 'dog', 'cat', 'mouse', 'cat', 'mouse']
# How did our predictor do?
#  (target_names should be the sorted unique class names, not the raw label list)
print(classification_report(y_test, y_pred, target_names = ['cat', 'dog', 'mouse']))
"""
Explanation: Evaluating Models and Under/Over-Fitting
Over-fitting or under-fitting can be visualized as below and tuned as we will see later with GridSearchCV parameter tuning
A <b>validation curve</b> gives one an idea of the relationship of model complexity to model performance.
For this examination it would help to understand the idea of the <b>bias-variance tradeoff</b>.
A <b>learning curve</b> helps answer the question of if there is an added benefit to adding more training data to a model. It is also a tool for investigating whether an estimator is more affected by variance error or bias error.
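As a rough sketch of reading a validation curve, one can sweep a complexity parameter and compare train versus cross-validated scores (note: in recent scikit-learn releases this helper lives in sklearn.model_selection; older releases used sklearn.learning_curve — adjust the import to your version):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import validation_curve
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
X, y = iris.data, iris.target

# sweep tree depth (the complexity knob) and score with 5-fold CV
depths = [1, 2, 3, 5, 8]
train_scores, valid_scores = validation_curve(
    DecisionTreeClassifier(random_state=0), X, y,
    param_name='max_depth', param_range=depths, cv=5)

for d, tr, va in zip(depths, train_scores.mean(axis=1), valid_scores.mean(axis=1)):
    print('max_depth=%d  train=%.2f  valid=%.2f' % (d, tr, va))
```

When the train score keeps climbing while the validation score stalls or drops, the extra complexity is over-fitting.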
End of explanation """ from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LinearRegression poly = PolynomialFeatures(degree = 1, include_bias = False) lm = LinearRegression() from sklearn.pipeline import Pipeline pipeline = Pipeline([("polynomial_features", poly), ("linear_regression", lm)]) pipeline.fit(X[:, np.newaxis], y) X_test = np.linspace(0, 1, 100) y_pred = pipeline.predict(X_test[:, np.newaxis]) plot_fit(X, y, X_test, y_pred) """ Explanation: BREAK Easy reading...create and use a pipeline <b>Pipelining</b> (as an aside to this section) * Pipeline(steps=[...]) - where steps can be a list of processes through which to put data or a dictionary which includes the parameters for each step as values * For example, here we do a transformation (SelectKBest) and a classification (SVC) all at once in a pipeline we set up ```python a feature selection instance selection = SelectKBest(chi2, k = 2) classification instance clf = svm.SVC(kernel = 'linear') make a pipeline pipeline = Pipeline([("feature selection", selection), ("classification", clf)]) train the model pipeline.fit(X, y) ``` See a full example here Note: If you wish to perform <b>multiple transformations</b> in your pipeline try FeatureUnion End of explanation """ from sklearn.grid_search import GridSearchCV from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LinearRegression poly = PolynomialFeatures(include_bias = False) lm = LinearRegression() pipeline = Pipeline([("polynomial_features", poly), ("linear_regression", lm)]) param_grid = dict(polynomial_features__degree = list(range(1, 30, 2)), linear_regression__normalize = [False, True]) grid_search = GridSearchCV(pipeline, param_grid=param_grid) grid_search.fit(X[:, np.newaxis], y) print(grid_search.best_params_) """ Explanation: Last, but not least, Searching Parameter Space with GridSearchCV End of explanation """
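One follow-up worth knowing about grid search (a hedged sketch on made-up regression data, not the cosine data above): after fit, GridSearchCV re-trains the best configuration on the full data and exposes it as best_estimator_, ready to predict with directly. Newer scikit-learn releases host GridSearchCV in sklearn.model_selection rather than sklearn.grid_search.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

# made-up, nearly linear data
rng = np.random.RandomState(0)
X = rng.rand(50, 3)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.randn(50)

grid = GridSearchCV(Ridge(), param_grid={'alpha': [0.01, 0.1, 1.0]}, cv=5)
grid.fit(X, y)

print(grid.best_params_)
best_model = grid.best_estimator_   # already refit on all of X, y
print(best_model.predict(X[:2]))
```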
AtmaMani/pyChakras
udemy_ml_bootcamp/Python-for-Data-Visualization/Pandas Built-in Data Viz/Pandas Built-in Data Visualization.ipynb
mit
import numpy as np
import pandas as pd
%matplotlib inline
"""
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Pandas Built-in Data Visualization
In this lecture we will learn about pandas built-in capabilities for data visualization! It's built off of matplotlib, but it's baked into pandas for easier usage!
Let's take a look!
Imports
End of explanation
"""
df1 = pd.read_csv('df1',index_col=0)
df2 = pd.read_csv('df2')
"""
Explanation: The Data
There are some fake data csv files you can read in as dataframes:
End of explanation
"""
df1['A'].hist()
"""
Explanation: Style Sheets
Matplotlib has style sheets you can use to make your plots look a little nicer. These style sheets include plot_bmh, plot_fivethirtyeight, plot_ggplot and more. They basically create a set of style rules that your plots follow. I recommend using them, they make all your plots have the same look and feel more professional. You can even create your own if you want your company's plots to all have the same look (it is a bit tedious to create one though).
Here is how to use them.
Before plt.style.use() your plots look like this:
End of explanation
"""
import matplotlib.pyplot as plt
plt.style.use('ggplot')
"""
Explanation: Call the style:
End of explanation
"""
df1['A'].hist()

plt.style.use('bmh')
df1['A'].hist()

plt.style.use('dark_background')
df1['A'].hist()

plt.style.use('fivethirtyeight')
df1['A'].hist()

plt.style.use('ggplot')
"""
Explanation: Now your plots look like this:
End of explanation
"""
df2.plot.area(alpha=0.4)
"""
Explanation: Let's stick with the ggplot style and actually show you how to utilize pandas built-in plotting capabilities!
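The exact set of style sheets depends on your matplotlib version; you can list everything your install knows about like so:

```python
import matplotlib.pyplot as plt

# every style sheet your matplotlib install ships with
print(plt.style.available)
```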
Plot Types There are several plot types built-in to pandas, most of them statistical plots by nature: df.plot.area df.plot.barh df.plot.density df.plot.hist df.plot.line df.plot.scatter df.plot.bar df.plot.box df.plot.hexbin df.plot.kde df.plot.pie You can also just call df.plot(kind='hist') or replace that kind argument with any of the key terms shown in the list above (e.g. 'box','barh', etc..) Let's start going through them! Area End of explanation """ df2.head() df2.plot.bar() df2.plot.bar(stacked=True) """ Explanation: Barplots End of explanation """ df1['A'].plot.hist(bins=50) """ Explanation: Histograms End of explanation """ df1.plot.line(x=df1.index,y='B',figsize=(12,3),lw=1) """ Explanation: Line Plots End of explanation """ df1.plot.scatter(x='A',y='B') """ Explanation: Scatter Plots End of explanation """ df1.plot.scatter(x='A',y='B',c='C',cmap='coolwarm') """ Explanation: You can use c to color based off another column value Use cmap to indicate colormap to use. For all the colormaps, check out: http://matplotlib.org/users/colormaps.html End of explanation """ df1.plot.scatter(x='A',y='B',s=df1['C']*200) """ Explanation: Or use s to indicate size based off another column. s parameter needs to be an array, not just the name of a column: End of explanation """ df2.plot.box() # Can also pass a by= argument for groupby """ Explanation: BoxPlots End of explanation """ df = pd.DataFrame(np.random.randn(1000, 2), columns=['a', 'b']) df.plot.hexbin(x='a',y='b',gridsize=25,cmap='Oranges') """ Explanation: Hexagonal Bin Plot Useful for Bivariate Data, alternative to scatterplot: End of explanation """ df2['a'].plot.kde() df2.plot.density() """ Explanation: Kernel Density Estimation plot (KDE) End of explanation """
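One knob worth knowing for the KDE plots above (a small sketch on random data): the bw_method argument controls the bandwidth of the underlying kernel density estimate — smaller values hug the data more tightly, larger values smooth more.

```python
import numpy as np
import pandas as pd

np.random.seed(101)
df = pd.DataFrame(np.random.randn(500, 2), columns=['a', 'b'])

# two bandwidths drawn on the same axes for comparison
ax = df['a'].plot.kde(bw_method=0.3, label='bw=0.3')
df['a'].plot.kde(bw_method=1.0, ax=ax, label='bw=1.0')
ax.legend()
```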
andreww/end_of_day_two
DefensiveProgramming_3.ipynb
mit
def test_range_overlap():
    assert range_overlap([(-3.0, 5.0), (0.0, 4.5), (-1.5, 2.0)]) == (0.0, 2.0)
    assert range_overlap([ (2.0, 3.0), (2.0, 4.0) ]) == (2.0, 3.0)
    assert range_overlap([ (0.0, 1.0), (0.0, 2.0), (-1.0, 1.0) ]) == (0.0, 1.0)
"""
Explanation: # Defensive programming (2)
We have seen the basic idea that we can insert assert statements into code, to check that the results are what we expect, but how can we test software more fully? Can doing this help us avoid bugs in the first place?
One possible approach is test driven development. Many people think this reduces the number of bugs in software as it is written, but evidence for this in the sciences is somewhat limited as it is not always easy to say what the right answer should be before writing the software. Having said that, the tests involved in test driven development are certainly useful even if some of them are written after the software.
We will look at a new (and quite difficult) problem, finding the overlap between ranges of numbers. For example, these could be the dates that different sensors were running, and you need to find the date ranges where all sensors recorded data before running further analysis.
<img src="python-overlapping-ranges.svg">
Start off by imagining you have a working function range_overlap that takes a list of tuples. Write some assert statements that would check if the answer from this function is correct. Put these in a function. Think of different cases and about edge cases (which may show a subtle bug).
End of explanation
"""
def test_range_overlap_no_overlap():
    assert range_overlap([ (0.0, 1.0), (5.0, 6.0) ]) == None
    assert range_overlap([ (0.0, 1.0), (1.0, 2.0) ]) == None
"""
Explanation: But what if there is no overlap? What if they just touch?
End of explanation
"""
def test_range_overlap_one_range():
    assert range_overlap([ (0.0, 1.0) ]) == (0.0, 1.0)
"""
Explanation: What about the case of a single range?
End of explanation
"""
def range_overlap(ranges):
    # Return common overlap among a set of [low, high] ranges.
    lowest = -1000.0
    highest = 1000.0
    for (low, high) in ranges:
        lowest = max(lowest, low)
        highest = min(highest, high)
    return (lowest, highest)
"""
Explanation: Then write a solution - one possible one is below.
End of explanation
"""
test_range_overlap()
test_range_overlap_no_overlap()
test_range_overlap_one_range()
"""
Explanation: And test it...
End of explanation
"""
def pairs_overlap(rangeA, rangeB):
    # Check if A starts after B ends and
    # A ends before B starts. If both are
    # false, there is an overlap.
    # We are assuming (0.0 1.0) and
    # (1.0 2.0) do not overlap. If these should
    # overlap swap >= for > and <= for <.
    overlap = not ((rangeA[0] >= rangeB[1]) or
                   (rangeA[1] <= rangeB[0]))
    return overlap

def find_overlap(rangeA, rangeB):
    # Return the overlap between range
    # A and B
    if pairs_overlap(rangeA, rangeB):
        low = max(rangeA[0], rangeB[0])
        high = min(rangeA[1], rangeB[1])
        return (low, high)
    else:
        return None

def range_overlap(ranges):
    # Return common overlap among a set of
    # [low, high] ranges.
    if len(ranges) == 1:
        # Special case of one range -
        # overlaps with itself
        return(ranges[0])
    elif len(ranges) == 2:
        # Just return from find_overlap
        return find_overlap(ranges[0], ranges[1])
    else:
        # Range of A, B, C is the
        # range of range(B,C) with
        # A, etc. Do this by recursion...
        overlap = find_overlap(ranges[-1], ranges[-2])
        if overlap is not None:
            # Chop off the end of ranges and
            # replace with the overlap
            ranges = ranges[:-2]
            ranges.append(overlap)
            # Now run again, with the smaller list.
            return range_overlap(ranges)
        else:
            return None

test_range_overlap()
test_range_overlap_one_range()
test_range_overlap_no_overlap()
"""
Explanation: Should we add to the tests? Can you write a version with fewer bugs? My attempt is below.
End of explanation
"""
mne-tools/mne-tools.github.io
0.14/_downloads/plot_decoding_csp_eeg.ipynb
bsd-3-clause
# Authors: Martin Billinger <martin.billinger@tugraz.at> # # License: BSD (3-clause) import numpy as np import matplotlib.pyplot as plt from mne import Epochs, pick_types, find_events from mne.channels import read_layout from mne.io import concatenate_raws, read_raw_edf from mne.datasets import eegbci from mne.decoding import CSP print(__doc__) # ############################################################################# # # Set parameters and read data # avoid classification of evoked responses by using epochs that start 1s after # cue onset. tmin, tmax = -1., 4. event_id = dict(hands=2, feet=3) subject = 1 runs = [6, 10, 14] # motor imagery: hands vs feet raw_fnames = eegbci.load_data(subject, runs) raw_files = [read_raw_edf(f, preload=True) for f in raw_fnames] raw = concatenate_raws(raw_files) # strip channel names of "." characters raw.rename_channels(lambda x: x.strip('.')) # Apply band-pass filter raw.filter(7., 30.) events = find_events(raw, shortest_event=0, stim_channel='STI 014') picks = pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False, exclude='bads') # Read epochs (train will be done only between 1 and 2s) # Testing will be done with a running classifier epochs = Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks, baseline=None, preload=True) epochs_train = epochs.copy().crop(tmin=1., tmax=2.) labels = epochs.events[:, -1] - 2 """ Explanation: =========================================================================== Motor imagery decoding from EEG data using the Common Spatial Pattern (CSP) =========================================================================== Decoding of motor imagery applied to EEG data decomposed using CSP. Here the classifier is applied to features extracted on CSP filtered signals. See http://en.wikipedia.org/wiki/Common_spatial_pattern and [1]. The EEGBCI dataset is documented in [2]. The data set is available at PhysioNet [3]_. References .. [1] Zoltan J. Koles. 
The quantitative extraction and topographic mapping of the abnormal components in the clinical EEG. Electroencephalography and Clinical Neurophysiology, 79(6):440--447, December 1991. .. [2] Schalk, G., McFarland, D.J., Hinterberger, T., Birbaumer, N., Wolpaw, J.R. (2004) BCI2000: A General-Purpose Brain-Computer Interface (BCI) System. IEEE TBME 51(6):1034-1043. .. [3] Goldberger AL, Amaral LAN, Glass L, Hausdorff JM, Ivanov PCh, Mark RG, Mietus JE, Moody GB, Peng C-K, Stanley HE. (2000) PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals. Circulation 101(23):e215-e220. End of explanation """ from sklearn.lda import LDA # noqa from sklearn.cross_validation import ShuffleSplit # noqa # Assemble a classifier lda = LDA() csp = CSP(n_components=4, reg=None, log=True) # Define a monte-carlo cross-validation generator (reduce variance): cv = ShuffleSplit(len(labels), 10, test_size=0.2, random_state=42) scores = [] epochs_data = epochs.get_data() epochs_data_train = epochs_train.get_data() # Use scikit-learn Pipeline with cross_val_score function from sklearn.pipeline import Pipeline # noqa from sklearn.cross_validation import cross_val_score # noqa clf = Pipeline([('CSP', csp), ('LDA', lda)]) scores = cross_val_score(clf, epochs_data_train, labels, cv=cv, n_jobs=1) # Printing the results class_balance = np.mean(labels == labels[0]) class_balance = max(class_balance, 1. 
- class_balance)
print("Classification accuracy: %f / Chance level: %f" % (np.mean(scores),
                                                          class_balance))

# plot CSP patterns estimated on full data for visualization
csp.fit_transform(epochs_data, labels)

evoked = epochs.average()
evoked.data = csp.patterns_.T
evoked.times = np.arange(evoked.data.shape[0])

layout = read_layout('EEG1005')
evoked.plot_topomap(times=[0, 1, 2, 3, 4, 5], ch_type='eeg', layout=layout,
                    scale_time=1, time_format='%i', scale=1,
                    unit='Patterns (AU)', size=1.5)
"""
Explanation: Classification with linear discriminant analysis
End of explanation
"""
sfreq = raw.info['sfreq']
w_length = int(sfreq * 0.5)   # running classifier: window length
w_step = int(sfreq * 0.1)  # running classifier: window step size
w_start = np.arange(0, epochs_data.shape[2] - w_length, w_step)

scores_windows = []

for train_idx, test_idx in cv:
    y_train, y_test = labels[train_idx], labels[test_idx]

    X_train = csp.fit_transform(epochs_data_train[train_idx], y_train)
    X_test = csp.transform(epochs_data_train[test_idx])

    # fit classifier
    lda.fit(X_train, y_train)

    # running classifier: test classifier on sliding window
    score_this_window = []
    for n in w_start:
        X_test = csp.transform(epochs_data[test_idx][:, :, n:(n + w_length)])
        score_this_window.append(lda.score(X_test, y_test))
    scores_windows.append(score_this_window)

# Plot scores over time
w_times = (w_start + w_length / 2.) / sfreq + epochs.tmin

plt.figure()
plt.plot(w_times, np.mean(scores_windows, 0), label='Score')
plt.axvline(0, linestyle='--', color='k', label='Onset')
plt.axhline(0.5, linestyle='-', color='k', label='Chance')
plt.xlabel('time (s)')
plt.ylabel('classification accuracy')
plt.title('Classification score over time')
plt.legend(loc='lower right')
plt.show()
"""
Explanation: Look at performance over time
End of explanation
"""
akseshina/dl_course
seminar_6/hw_sklearn.ipynb
gpl-3.0
import numpy as np import pandas as pd from collections import Counter import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns from sklearn.neighbors import NearestCentroid import random import pickle family_classification_metadata = pd.read_table('../seminar_5/data/family_classification_metadata.tab') family_classification_sequences = pd.read_table('../seminar_5/data/family_classification_sequences.tab') table = pd.read_csv('data/protVec_100d_3grams_without_quotes.csv', sep='\t', header=None) table = table.T header = table.iloc[0] # grab the first row for the header prot2vec = table[1:] # take the data less the header row prot2vec.columns = header # set the header row as the df header most_common_families = Counter(family_classification_metadata['FamilyID']).most_common(1000) most_common_families = [family for (family, count) in most_common_families] family2num = {f: i for (i, f) in enumerate(most_common_families)} MAX_PROTEIN_LEN = 501 EMBED_LEN = 100 all_proteins = family_classification_sequences['Sequences'] all_families = family_classification_metadata['FamilyID'] selected_ids = [i for i in range(len(all_proteins)) if all_families[i] in family2num and len(all_proteins[i]) <= MAX_PROTEIN_LEN] random.shuffle(selected_ids) train_ratio = 0.9 num_train = int(len(selected_ids) * train_ratio) train_ids = selected_ids[:num_train] test_ids = selected_ids[num_train:] def embedding(protein): res = np.zeros(100) for i in range(0, (len(protein) - 3) // 3): try: res = np.add(res, prot2vec[protein[i*3: i*3 + 3]]) except KeyError: res = np.add(res, prot2vec['<unk>']) return np.divide(res, ((len(protein) - 3) // 3)) #embedding(all_proteins[11]) X_train = [] for i in range(len(train_ids)): #if i % 2000 == 0: # print(i) cur_id = train_ids[i] X_train.append(embedding(all_proteins[cur_id])) X_test = [] for i in range(len(test_ids)): #if i % 2000 == 0: # print(i) cur_id = test_ids[i] X_test.append(embedding(all_proteins[cur_id])) with open('data/X_train.pickle', 
'wb') as f:
    pickle.dump(X_train, f)
with open('data/X_test.pickle', 'wb') as f:
    pickle.dump(X_test, f)

y_train = all_families[train_ids]
y_test = all_families[test_ids]

with open('data/y_train.pickle', 'wb') as f:
    pickle.dump(y_train, f)
with open('data/y_test.pickle', 'wb') as f:
    pickle.dump(y_test, f)
"""
Explanation: Protein Family Classification
End of explanation
"""
for shrinkage in [None, .2, 5, 10]:
    clf = NearestCentroid(shrink_threshold=shrinkage)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    print('Accuracy for shrinkage {}: {:3.1f}%'.format(shrinkage, np.mean(y_test == y_pred) * 100))
"""
Explanation: Nearest centroid classifier
I used it because it's fast.
End of explanation
"""
%matplotlib inline import numpy as np from IPython.core.pylabtools import figsize import matplotlib.pyplot as plt figsize(12.5, 5) import pymc as pm sample_size = 100000 expected_value = lambda_ = 4.5 poi = pm.rpoisson N_samples = range(1, sample_size, 100) for k in range(3): samples = poi(lambda_, size=sample_size) partial_average = [samples[:i].mean() for i in N_samples] plt.plot(N_samples, partial_average, lw=1.5, label="average \ of $n$ samples; seq. %d" % k) plt.plot(N_samples, expected_value * np.ones_like(partial_average), ls="--", label="true expected value", c="k") plt.ylim(4.35, 4.65) plt.title("Convergence of the average of \n random variables to its \ expected value") plt.ylabel("average of $n$ samples") plt.xlabel("# of samples, $n$") plt.legend(); """ Explanation: Chapter 4 The greatest theorem never told This chapter focuses on an idea that is always bouncing around our minds, but is rarely made explicit outside books devoted to statistics. In fact, we've been using this simple idea in every example thus far. The Law of Large Numbers Let $Z_i$ be $N$ independent samples from some probability distribution. According to the Law of Large numbers, so long as the expected value $E[Z]$ is finite, the following holds, $$\frac{1}{N} \sum_{i=1}^N Z_i \rightarrow E[ Z ], \;\;\; N \rightarrow \infty.$$ In words: The average of a sequence of random variables from the same distribution converges to the expected value of that distribution. This may seem like a boring result, but it will be the most useful tool you use. Intuition If the above Law is somewhat surprising, it can be made clearer by examining a simple example. Consider a random variable $Z$ that can take only two values, $c_1$ and $c_2$. Suppose we have a large number of samples of $Z$, denoting a specific sample $Z_i$. The Law says that we can approximate the expected value of $Z$ by averaging over all samples. 
Consider the average: $$ \frac{1}{N} \sum_{i=1}^N \;Z_i $$ By construction, $Z_i$ can only take on $c_1$ or $c_2$, hence we can partition the sum over these two values: \begin{align} \frac{1}{N} \sum_{i=1}^N \;Z_i & =\frac{1}{N} \big( \sum_{ Z_i = c_1}c_1 + \sum_{Z_i=c_2}c_2 \big) \\[5pt] & = c_1 \sum_{ Z_i = c_1}\frac{1}{N} + c_2 \sum_{ Z_i = c_2}\frac{1}{N} \\[5pt] & = c_1 \times \text{ (approximate frequency of $c_1$) } \\ & \;\;\;\;\;\;\;\;\; + c_2 \times \text{ (approximate frequency of $c_2$) } \\[5pt] & \approx c_1 \times P(Z = c_1) + c_2 \times P(Z = c_2 ) \\[5pt] & = E[Z] \end{align} Equality holds in the limit, but we can get closer and closer by using more and more samples in the average. This Law holds for almost any distribution, minus some important cases we will encounter later. Example Below is a diagram of the Law of Large numbers in action for three different sequences of Poisson random variables. We sample sample_size = 100000 Poisson random variables with parameter $\lambda = 4.5$. (Recall the expected value of a Poisson random variable is equal to its parameter.) We calculate the average for the first $n$ samples, for $n=1$ to sample_size. End of explanation """ figsize(12.5, 4) N_Y = 250 # use this many to approximate D(N) N_array = np.arange(1000, 50000, 2500) # use this many samples in the approx. to the variance. D_N_results = np.zeros(len(N_array)) lambda_ = 4.5 expected_value = lambda_ # for X ~ Poi(lambda) , E[ X ] = lambda def D_N(n): """ This function approx. D_n, the average variance of using n samples. 
""" Z = poi(lambda_, size=(n, N_Y)) average_Z = Z.mean(axis=0) return np.sqrt(((average_Z - expected_value) ** 2).mean()) for i, n in enumerate(N_array): D_N_results[i] = D_N(n) plt.xlabel("$N$") plt.ylabel("expected squared-distance from true value") plt.plot(N_array, D_N_results, lw=3, label="expected distance between\n\ expected value and \naverage of $N$ random variables.") plt.plot(N_array, np.sqrt(expected_value) / np.sqrt(N_array), lw=2, ls="--", label=r"$\frac{\sqrt{\lambda}}{\sqrt{N}}$") plt.legend() plt.title("How 'fast' is the sample average converging? "); """ Explanation: Looking at the above plot, it is clear that when the sample size is small, there is greater variation in the average (compare how jagged and jumpy the average is initially, then smooths out). All three paths approach the value 4.5, but just flirt with it as $N$ gets large. Mathematicians and statistician have another name for flirting: convergence. Another very relevant question we can ask is how quickly am I converging to the expected value? Let's plot something new. For a specific $N$, let's do the above trials thousands of times and compute how far away we are from the true expected value, on average. But wait &mdash; compute on average? This is simply the law of large numbers again! For example, we are interested in, for a specific $N$, the quantity: $$D(N) = \sqrt{ \;E\left[\;\; \left( \frac{1}{N}\sum_{i=1}^NZ_i - 4.5 \;\right)^2 \;\;\right] \;\;}$$ The above formulae is interpretable as a distance away from the true value (on average), for some $N$. (We take the square root so the dimensions of the above quantity and our random variables are the same). 
As the above is an expected value, it can be approximated using the law of large numbers: instead of averaging $Z_i$, we calculate the following multiple times and average them:
$$ Y_k = \left( \;\frac{1}{N}\sum_{i=1}^NZ_i - 4.5 \; \right)^2 $$
By computing the above many, $N_Y$, times (remember, it is random), and averaging them:
$$ \frac{1}{N_Y} \sum_{k=1}^{N_Y} Y_k \rightarrow E[ Y_k ] = E\;\left[\;\; \left( \frac{1}{N}\sum_{i=1}^NZ_i - 4.5 \;\right)^2 \right]$$
Finally, taking the square root:
$$ \sqrt{\frac{1}{N_Y} \sum_{k=1}^{N_Y} Y_k} \approx D(N) $$
End of explanation
"""
import pymc as pm

N = 10000
print(np.mean([pm.rexponential(0.5) > 10 for i in range(N)]))
"""
Explanation: As expected, the expected distance between our sample average and the actual expected value shrinks as $N$ grows large. But also notice that the rate of convergence decreases: we need only 10 000 additional samples to move from 0.020 to 0.015, a difference of 0.005, but 20 000 more samples to again decrease from 0.015 to 0.010, again only a 0.005 decrease.
It turns out we can measure this rate of convergence. Above I have plotted a second line, the function $\sqrt{\lambda}/\sqrt{N}$. This was not chosen arbitrarily. In most cases, given a sequence of random variables distributed like $Z$, the rate of convergence to $E[Z]$ of the Law of Large Numbers is
$$ \frac{ \sqrt{ \; Var(Z) \; } }{\sqrt{N} }$$
This is useful to know: for a given large $N$, we know (on average) how far away we are from the estimate. On the other hand, in a Bayesian setting, this can seem like a useless result: Bayesian analysis is OK with uncertainty, so what's the statistical point of adding extra precise digits? Then again, drawing samples can be so computationally cheap that having a larger $N$ is fine too.
How do we compute $Var(Z)$ though? The variance is simply another expected value that can be approximated!
Consider the following: once we have the expected value (by using the Law of Large Numbers to estimate it, denote it $\mu$), we can estimate the variance:
$$ \frac{1}{N}\sum_{i=1}^N \;(Z_i - \mu)^2 \rightarrow E[ \;( Z - \mu)^2 \;] = Var( Z )$$
Expected values and probabilities
There is an even less explicit relationship between expected value and estimating probabilities. Define the indicator function
$$\mathbb{1}_A(x) =
\begin{cases} 1 & x \in A \\
0 & else
\end{cases}
$$
Then, by the law of large numbers, if we have many samples $X_i$, we can estimate the probability of an event $A$, denoted $P(A)$, by:
$$ \frac{1}{N} \sum_{i=1}^N \mathbb{1}_A(X_i) \rightarrow E[\mathbb{1}_A(X)] = P(A) $$
Again, this is fairly obvious after a moment's thought: the indicator function is only 1 if the event occurs, so we are summing only the times the event occurs and dividing by the total number of trials (consider how we usually approximate probabilities using frequencies). For example, suppose we wish to estimate the probability that a $Z \sim Exp(.5)$ is greater than 10, and we have many samples from an $Exp(.5)$ distribution.
$$ P( Z > 10 ) = \frac{1}{N} \sum_{i=1}^N \mathbb{1}_{Z > 10 }(Z_i) $$
End of explanation
"""
figsize(12.5, 4)
std_height = 15
mean_height = 150
n_counties = 5000
pop_generator = pm.rdiscrete_uniform
norm = pm.rnormal

# generate some artificial population numbers
population = pop_generator(100, 1500, size=n_counties)

average_across_county = np.zeros(n_counties)
for i in range(n_counties):
    # generate some individuals and take the mean
    average_across_county[i] = norm(mean_height, 1. / std_height ** 2,
                                    size=population[i]).mean()

# locate the counties with the apparently most extreme average heights.
i_min = np.argmin(average_across_county)
i_max = np.argmax(average_across_county)

# plot population size vs.
# recorded average
plt.scatter(population, average_across_county, alpha=0.5, c="#7A68A6")
plt.scatter([population[i_min], population[i_max]],
            [average_across_county[i_min], average_across_county[i_max]],
            s=60, marker="o", facecolors="none",
            edgecolors="#A60628", linewidths=1.5,
            label="extreme heights")
plt.xlim(100, 1500)
plt.title("Average height vs. County Population")
plt.xlabel("County Population")
plt.ylabel("Average height in county")
plt.plot([100, 1500], [150, 150], color="k", label="true expected \
height", ls="--")
plt.legend(scatterpoints=1);
"""
Explanation: What does this all have to do with Bayesian statistics?
Point estimates in Bayesian inference, to be introduced in the next chapter, are computed using expected values. In more analytical Bayesian inference, we would have been required to evaluate complicated expected values represented as multi-dimensional integrals. No longer. If we can sample from the posterior distribution directly, we simply need to evaluate averages. Much easier. If accuracy is a priority, plots like the ones above show how fast you are converging. And if further accuracy is desired, just take more samples from the posterior.
When is enough enough? When can you stop drawing samples from the posterior? That is the practitioner's decision, and it also depends on the variance of the samples (recall from above that a high variance means the average will converge more slowly).
We also should understand when the Law of Large Numbers fails. As the name implies, and as comparing the graphs above for small $N$ shows, the Law is only true for large sample sizes. Without this, the asymptotic result is not reliable. Knowing in what situations the Law fails can give us confidence in how unconfident we should be. The next section deals with this issue.
Our next example illustrates this.
Example: Aggregated geographic data
Often data comes in aggregated form. For instance, data may be grouped by state, county, or city level. Of course, the population numbers vary per geographic area. If the data is an average of some characteristic of each of the geographic areas, we must be conscious of the Law of Large Numbers and how it can fail for areas with small populations.
We will observe this on a toy dataset. Suppose there are five thousand counties in our dataset. Furthermore, population numbers in each county are uniformly distributed between 100 and 1500. The way the population numbers are generated is irrelevant to the discussion, so we do not justify this. We are interested in measuring the average height of individuals per county. Unbeknownst to us, height does not vary across county, and each individual, regardless of the county he or she is currently living in, has the same distribution of what their height may be:
$$ \text{height} \sim \text{Normal}(150, 15 ) $$
We aggregate the individuals at the county level, so we only have data for the average in the county. What might our dataset look like?
End of explanation
"""
print("Population sizes of 10 'shortest' counties: ")
print(population[np.argsort(average_across_county)[:10]])
print("\nPopulation sizes of 10 'tallest' counties: ")
print(population[np.argsort(-average_across_county)[:10]])
"""
Explanation: What do we observe? Without accounting for population sizes we run the risk of making an enormous inference error: if we ignored population size, we would say that the counties with the shortest and tallest individuals have been correctly circled. But this inference is wrong for the following reason. These two counties do not necessarily have the most extreme heights. The error results from the calculated average of smaller populations not being a good reflection of the true expected value of the population (which in truth should be $\mu =150$).
The sample size/population size/$N$, whatever you wish to call it, is simply too small to invoke the Law of Large Numbers effectively.
We provide more damning evidence against this inference. Recall the population numbers were uniformly distributed over 100 to 1500. Our intuition should tell us that the counties with the most extreme average heights should also be uniformly spread over 100 to 1500, and certainly independent of the county's population. Not so. Below are the population sizes of the counties with the most extreme heights.
End of explanation
"""
figsize(12.5, 6.5)
data = np.genfromtxt("./data/census_data.csv", skip_header=1, delimiter=",")
plt.scatter(data[:, 1], data[:, 0], alpha=0.5, c="#7A68A6")
plt.title("Census mail-back rate vs Population")
plt.ylabel("Mail-back rate")
plt.xlabel("population of block-group")
plt.xlim(-100, 15e3)
plt.ylim(-5, 105)
i_min = np.argmin(data[:, 0])
i_max = np.argmax(data[:, 0])
plt.scatter([data[i_min, 1], data[i_max, 1]],
            [data[i_min, 0], data[i_max, 0]],
            s=60, marker="o", facecolors="none",
            edgecolors="#A60628", linewidths=1.5,
            label="most extreme points")
plt.legend(scatterpoints=1);
"""
Explanation: Not at all uniform over 100 to 1500. This is an absolute failure of the Law of Large Numbers.
Example: Kaggle's U.S. Census Return Rate Challenge
Below is data from the 2010 US census, which partitions populations beyond counties to the level of block groups (which are aggregates of city blocks or equivalents). The dataset is from a Kaggle machine learning competition some colleagues and I participated in. The objective was to predict the census letter mail-back rate of a block group, measured between 0 and 100, using census variables (median income, number of females in the block-group, number of trailer parks, average number of children etc.). Below we plot the census mail-back rate versus block group population:
End of explanation
"""
# adding a number to the end of the %run call will get the ith top post.
%run top_showerthoughts_submissions.py 2

print("Post contents: \n")
print(top_post)

"""
contents: an array of the text from the last 100 top submissions to a subreddit
votes: a 2d numpy array of upvotes, downvotes for each submission.
"""
n_submissions = len(votes)
submissions = np.random.randint(n_submissions, size=4)
print("Some Submissions (out of %d total) \n-----------" % n_submissions)
for i in submissions:
    print('"' + contents[i] + '"')
    print("upvotes/downvotes: ", votes[i, :], "\n")
"""
Explanation: The above is a classic phenomenon in statistics. I say classic, referring to the "shape" of the scatter plot above. It follows a classic triangular form that tightens as we increase the sample size (as the Law of Large Numbers becomes more exact).
I am perhaps overstressing the point and maybe I should have titled the book "You don't have big data problems!", but here again is an example of the trouble with small datasets, not big ones. Simply, small datasets cannot be processed using the Law of Large Numbers. Compare with applying the Law without hassle to big datasets (e.g. big data). I mentioned earlier that paradoxically big data prediction problems are solved by relatively simple algorithms. The paradox is partially resolved by understanding that the Law of Large Numbers creates solutions that are stable, i.e. adding or subtracting a few data points will not affect the solution much. On the other hand, adding or removing data points from a small dataset can create very different results.
For further reading on the hidden dangers of the Law of Large Numbers, I would highly recommend the excellent manuscript The Most Dangerous Equation.
Example: How to order Reddit submissions
You may have disagreed with the original statement that the Law of Large numbers is known to everyone, but only implicitly in our subconscious decision making. Consider ratings on online products: how often do you trust an average 5-star rating if there is only 1 reviewer? 2 reviewers?
3 reviewers? We implicitly understand that with so few reviewers the average rating is not a good reflection of the true value of the product.
This has created flaws in how we sort items, and more generally, how we compare items. Many people have realized that sorting online search results by their rating, whether the objects be books, videos, or online comments, returns poor results. Often the seemingly top videos or comments have perfect ratings only from a few enthusiastic fans, and truly higher-quality videos or comments are hidden in later pages with falsely-substandard ratings of around 4.8. How can we correct this?
Consider the popular site Reddit (I purposefully did not link to the website as you would never come back). The site hosts links to stories or images, and a very popular part of the site are the comments associated with each link. Redditors can vote up or down on each submission (called upvotes and downvotes). Reddit, by default, will sort submissions to a given subreddit by Hot, that is, the submissions that have the most upvotes recently.
<img src="http://i.imgur.com/3v6bz9f.png" />
How would you determine which submissions are the best? There are a number of ways to achieve this:
Popularity: A submission is considered good if it has many upvotes. A problem with this model is a submission with hundreds of upvotes, but thousands of downvotes: while very popular, the submission is likely more controversial than best.
Difference: Using the difference of upvotes and downvotes. This solves the above problem, but fails when we consider the temporal nature of submissions. Depending on when a submission is posted, the website may be experiencing high or low traffic. The difference method will bias the Top submissions to be those made during high-traffic periods, which have accumulated more upvotes than submissions that were not so graced, but are not necessarily the best.
Time adjusted: Consider using Difference divided by the age of the submission. This creates a rate, something like difference per second, or per minute. An immediate counter-example: if we use per second, a 1-second-old submission with 1 upvote would be better than a 100-second-old submission with 99 upvotes. One can avoid this by only considering submissions at least t seconds old. But what is a good t value? Does this mean no submission younger than t is good? We end up comparing unstable quantities with stable quantities (young vs. old submissions).
Ratio: Rank submissions by the ratio of upvotes to total number of votes (upvotes plus downvotes). This solves the temporal issue, such that new submissions that score well can be considered Top just as likely as older submissions, provided they have many upvotes to total votes. The problem here is that a submission with a single upvote (ratio = 1.0) will beat a submission with 999 upvotes and 1 downvote (ratio = 0.999), but clearly the latter submission is more likely to be better.
I used the phrase more likely for good reason. It is possible that the former submission, with a single upvote, is in fact a better submission than the latter with 999 upvotes. The hesitation to agree with this is because we have not seen the other 999 potential votes the former submission might get. Perhaps it will achieve an additional 999 upvotes and 0 downvotes and be considered better than the latter, though not likely.
What we really want is an estimate of the true upvote ratio. Note that the true upvote ratio is not the same as the observed upvote ratio: the true upvote ratio is hidden, and we only observe upvotes vs. downvotes (one can think of the true upvote ratio as "what is the underlying probability someone gives this submission an upvote, versus a downvote").
So the 999 upvote/1 downvote submission probably has a true upvote ratio close to 1, which we can assert with confidence thanks to the Law of Large Numbers, but on the other hand we are much less certain about the true upvote ratio of the submission with only a single upvote. Sounds like a Bayesian problem to me.
One way to determine a prior on the upvote ratio is to look at the historical distribution of upvote ratios. This can be accomplished by scraping Reddit's submissions and determining a distribution. There are a few problems with this technique though:
Skewed data: The vast majority of submissions have very few votes, hence there will be many submissions with ratios near the extremes (see the "triangular plot" in the above Kaggle dataset), effectively skewing our distribution to the extremes. One could try to only use submissions with votes greater than some threshold. Again, problems are encountered. There is a tradeoff: a higher threshold leaves fewer submissions to use, but gives more precise ratios.
Biased data: Reddit is composed of different subpages, called subreddits. Two examples are r/aww, which posts pics of cute animals, and r/politics. It is very likely that the user behaviour towards submissions of these two subreddits is very different: visitors are likely to be more friendly and affectionate in the former, and would therefore upvote submissions more, compared to the latter, where submissions are likely to be controversial and disagreed upon. Therefore not all submissions are the same.
In light of these, I think it is better to use a Uniform prior.
With our prior in place, we can find the posterior of the true upvote ratio. The Python script top_showerthoughts_submissions.py will scrape the best posts from the showerthoughts community on Reddit. This is a text-only community so the title of each post is the post.
Below is the top post as well as some other sample posts: End of explanation """ import pymc as pm def posterior_upvote_ratio(upvotes, downvotes, samples=20000): """ This function accepts the number of upvotes and downvotes a particular submission received, and the number of posterior samples to return to the user. Assumes a uniform prior. """ N = upvotes + downvotes upvote_ratio = pm.Uniform("upvote_ratio", 0, 1) observations = pm.Binomial("obs", N, upvote_ratio, value=upvotes, observed=True) # do the fitting; first do a MAP as it is cheap and useful. map_ = pm.MAP([upvote_ratio, observations]).fit() mcmc = pm.MCMC([upvote_ratio, observations]) mcmc.sample(samples, samples / 4) return mcmc.trace("upvote_ratio")[:] """ Explanation: For a given true upvote ratio $p$ and $N$ votes, the number of upvotes will look like a Binomial random variable with parameters $p$ and $N$. (This is because of the equivalence between upvote ratio and probability of upvoting versus downvoting, out of $N$ possible votes/trials). We create a function that performs Bayesian inference on $p$, for a particular comment's upvote/downvote pair. End of explanation """ figsize(11., 8) posteriors = [] colours = ["#348ABD", "#A60628", "#7A68A6", "#467821", "#CF4457"] for i in range(len(submissions)): j = submissions[i] posteriors.append(posterior_upvote_ratio(votes[j, 0], votes[j, 1])) plt.hist(posteriors[i], bins=18, normed=True, alpha=.9, histtype="step", color=colours[i % 5], lw=3, label='(%d up:%d down)\n%s...' % (votes[j, 0], votes[j, 1], contents[j][:50])) plt.hist(posteriors[i], bins=18, normed=True, alpha=.2, histtype="stepfilled", color=colours[i], lw=3, ) plt.legend(loc="upper left") plt.xlim(0, 1) plt.title("Posterior distributions of upvote ratios on different submissions"); """ Explanation: Below are the resulting posterior distributions. 
End of explanation """ N = posteriors[0].shape[0] lower_limits = [] for i in range(len(submissions)): j = submissions[i] plt.hist(posteriors[i], bins=20, normed=True, alpha=.9, histtype="step", color=colours[i], lw=3, label='(%d up:%d down)\n%s...' % (votes[j, 0], votes[j, 1], contents[j][:50])) plt.hist(posteriors[i], bins=20, normed=True, alpha=.2, histtype="stepfilled", color=colours[i], lw=3, ) v = np.sort(posteriors[i])[int(0.05 * N)] # plt.vlines( v, 0, 15 , color = "k", alpha = 1, linewidths=3 ) plt.vlines(v, 0, 10, color=colours[i], linestyles="--", linewidths=3) lower_limits.append(v) plt.legend(loc="upper left") plt.legend(loc="upper left") plt.title("Posterior distributions of upvote ratios on different submissions"); order = np.argsort(-np.array(lower_limits)) print(order, lower_limits) """ Explanation: Some distributions are very tight, others have very long tails (relatively speaking), expressing our uncertainty with what the true upvote ratio might be. Sorting! We have been ignoring the goal of this exercise: how do we sort the submissions from best to worst? Of course, we cannot sort distributions, we must sort scalar numbers. There are many ways to distill a distribution down to a scalar: expressing the distribution through its expected value, or mean, is one way. Choosing the mean is a bad choice though. This is because the mean does not take into account the uncertainty of distributions. I suggest using the 95% least plausible value, defined as the value such that there is only a 5% chance the true parameter is lower (think of the lower bound on the 95% credible region). Below are the posterior distributions with the 95% least-plausible value plotted: End of explanation """ def intervals(u, d): a = 1. + u b = 1. 
+ d mu = a / (a + b) std_err = 1.65 * np.sqrt((a * b) / ((a + b) ** 2 * (a + b + 1.))) return (mu, std_err) print("Approximate lower bounds:") posterior_mean, std_err = intervals(votes[:, 0], votes[:, 1]) lb = posterior_mean - std_err print(lb) print("\n") print("Top 40 Sorted according to approximate lower bounds:") print("\n") order = np.argsort(-lb) ordered_contents = [] for i in order[:40]: ordered_contents.append(contents[i]) print(votes[i, 0], votes[i, 1], contents[i]) print("-------------") """ Explanation: The best submissions, according to our procedure, are the submissions that are most-likely to score a high percentage of upvotes. Visually those are the submissions with the 95% least plausible value close to 1. Why is sorting based on this quantity a good idea? By ordering by the 95% least plausible value, we are being the most conservative with what we think is best. That is, even in the worst case scenario, when we have severely overestimated the upvote ratio, we can be sure the best comments are still on top. Under this ordering, we impose the following very natural properties: given two submissions with the same observed upvote ratio, we will assign the submission with more votes as better (since we are more confident it has a higher ratio). given two submissions with the same number of votes, we still assign the submission with more upvotes as better. But this is too slow for real-time! I agree, computing the posterior of every submission takes a long time, and by the time you have computed it, likely the data has changed. I delay the mathematics to the appendix, but I suggest using the following formula to compute the lower bound very fast. $$ \frac{a}{a + b} - 1.65\sqrt{ \frac{ab}{ (a+b)^2(a + b +1 ) } }$$ where \begin{align} & a = 1 + u \\ & b = 1 + d \\ \end{align} $u$ is the number of upvotes, and $d$ is the number of downvotes. 
The formula is a shortcut in Bayesian inference, which will be further explained in Chapter 6 when we discuss priors in more detail.
End of explanation
"""
r_order = order[::-1][-40:]
plt.errorbar(posterior_mean[r_order], np.arange(len(r_order)),
             xerr=std_err[r_order], capsize=0, fmt="o",
             color="#7A68A6")
plt.xlim(0.3, 1)
plt.yticks(np.arange(len(r_order) - 1, -1, -1),
           map(lambda x: x[:30].replace("\n", ""), ordered_contents));
"""
Explanation: In the graphic above, you can see why sorting by mean would be sub-optimal.
Extension to Starred rating systems
The above procedure works well for upvote-downvote schemes, but what about systems that use star ratings, e.g. 5-star rating systems? Similar problems apply with simply taking the average: an item with two perfect ratings would beat an item with thousands of perfect ratings, but a single sub-perfect rating.
We can consider the upvote-downvote problem above as binary: 0 is a downvote, 1 is an upvote. An $N$-star rating system can be seen as a more continuous version of the above, and we can say that a rating of $n$ stars is equivalent to rewarding $\frac{n}{N}$. For example, in a 5-star system, a 2-star rating corresponds to 0.4. A perfect rating is a 1. We can use the same formula as before, but with $a,b$ defined differently:
$$ \frac{a}{a + b} - 1.65\sqrt{ \frac{ab}{ (a+b)^2(a + b +1 ) } }$$
where
\begin{align}
& a = 1 + S \\
& b = 1 + N - S \\
\end{align}
where $N$ is the number of users who rated, and $S$ is the sum of all the ratings, under the equivalence scheme mentioned above.
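The star-rating formula above can be sketched as a small helper; the function name and the example ratings below are my own, only the formula itself comes from the text:

```python
import numpy as np

def star_lower_bound(ratings, n_stars=5):
    """Approximate 95% lower bound on the true rating of an item,
    given integer star ratings on an n_stars scale."""
    ratings = np.asarray(ratings, dtype=float)
    N = len(ratings)
    S = (ratings / n_stars).sum()   # n stars rewarded is equivalent to n/N_stars
    a = 1. + S
    b = 1. + N - S
    return a / (a + b) - 1.65 * np.sqrt((a * b) /
                                        ((a + b) ** 2 * (a + b + 1.)))

# Two perfect ratings vs. many near-perfect ratings: the larger sample
# earns a higher lower bound, even though its raw average is lower.
print(star_lower_bound([5, 5]))
print(star_lower_bound([5] * 100 + [4]))
```

Note how the pseudo-counts $a = 1 + S$ and $b = 1 + N - S$ penalize items with few ratings, exactly as in the upvote/downvote case.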
Example: Counting Github stars
What is the average number of stars a Github repository has? How would you calculate this? There are over 6 million repositories, so there is more than enough data to invoke the Law of Large numbers. Let's start pulling some data. TODO
Conclusion
While the Law of Large Numbers is cool, it is only true so much as its name implies: with large sample sizes. We have seen how our inference can be affected by not considering how the data is shaped.
By (cheaply) drawing many samples from the posterior distributions, we can ensure that the Law of Large Numbers applies as we approximate expected values (which we will do in the next chapter).
Bayesian inference understands that with small sample sizes, we can observe wild randomness. Our posterior distribution will reflect this by being more spread rather than tightly concentrated. Thus, our inference should be correctable.
There are major implications to ignoring the sample size: trying to sort objects that are unstable leads to pathological orderings. The method provided above solves this problem.
Appendix
Derivation of sorting comments formula
Basically what we are doing is using a Beta prior (with parameters $a=1, b=1$, which is a uniform distribution), and using a Binomial likelihood with observations $u, N = u+d$. This means our posterior is a Beta distribution with parameters $a' = 1 + u, b' = 1 + (N - u) = 1+d$. We then need to find the value $x$ such that the probability of being less than $x$ is 0.05. This is usually done by inverting the CDF (Cumulative Distribution Function), but the CDF of the beta, for integer parameters, is known but is a large sum [3].
We instead use a Normal approximation. The mean of the Beta is $\mu = a'/(a'+b')$ and the variance is
$$\sigma^2 = \frac{a'b'}{ (a' + b')^2(a'+b'+1) }$$
Hence we solve the following equation for $x$ and have an approximate lower bound.
$$ 0.05 = \Phi\left( \frac{(x - \mu)}{\sigma}\right) $$ $\Phi$ being the cumulative distribution for the normal distribution Exercises 1. How would you estimate the quantity $E\left[ \cos{X} \right]$, where $X \sim \text{Exp}(4)$? What about $E\left[ \cos{X} | X \lt 1\right]$, i.e. the expected value given we know $X$ is less than 1? Would you need more samples than the original samples size to be equally accurate? End of explanation """ from IPython.core.display import HTML def css_styling(): styles = open("../styles/custom.css", "r").read() return HTML(styles) css_styling() """ Explanation: 2. The following table was located in the paper "Going for Three: Predicting the Likelihood of Field Goal Success with Logistic Regression" [2]. The table ranks football field-goal kickers by their percent of non-misses. What mistake have the researchers made? Kicker Careers Ranked by Make Percentage <table><tbody><tr><th>Rank </th><th>Kicker </th><th>Make % </th><th>Number of Kicks</th></tr><tr><td>1 </td><td>Garrett Hartley </td><td>87.7 </td><td>57</td></tr><tr><td>2</td><td> Matt Stover </td><td>86.8 </td><td>335</td></tr><tr><td>3 </td><td>Robbie Gould </td><td>86.2 </td><td>224</td></tr><tr><td>4 </td><td>Rob Bironas </td><td>86.1 </td><td>223</td></tr><tr><td>5</td><td> Shayne Graham </td><td>85.4 </td><td>254</td></tr><tr><td>… </td><td>… </td><td>…</td><td> </td></tr><tr><td>51</td><td> Dave Rayner </td><td>72.2 </td><td>90</td></tr><tr><td>52</td><td> Nick Novak </td><td>71.9 </td><td>64</td></tr><tr><td>53 </td><td>Tim Seder </td><td>71.0 </td><td>62</td></tr><tr><td>54 </td><td>Jose Cortez </td><td>70.7</td><td> 75</td></tr><tr><td>55 </td><td>Wade Richey </td><td>66.1</td><td> 56</td></tr></tbody></table> In August 2013, a popular post on the average income per programmer of different languages was trending. Here's the summary chart: (reproduced without permission, cause when you lie with stats, you gunna get the hammer). What do you notice about the extremes? 
Average household income by programming language <table > <tr><td>Language</td><td>Average Household Income ($)</td><td>Data Points</td></tr> <tr><td>Puppet</td><td>87,589.29</td><td>112</td></tr> <tr><td>Haskell</td><td>89,973.82</td><td>191</td></tr> <tr><td>PHP</td><td>94,031.19</td><td>978</td></tr> <tr><td>CoffeeScript</td><td>94,890.80</td><td>435</td></tr> <tr><td>VimL</td><td>94,967.11</td><td>532</td></tr> <tr><td>Shell</td><td>96,930.54</td><td>979</td></tr> <tr><td>Lua</td><td>96,930.69</td><td>101</td></tr> <tr><td>Erlang</td><td>97,306.55</td><td>168</td></tr> <tr><td>Clojure</td><td>97,500.00</td><td>269</td></tr> <tr><td>Python</td><td>97,578.87</td><td>2314</td></tr> <tr><td>JavaScript</td><td>97,598.75</td><td>3443</td></tr> <tr><td>Emacs Lisp</td><td>97,774.65</td><td>355</td></tr> <tr><td>C#</td><td>97,823.31</td><td>665</td></tr> <tr><td>Ruby</td><td>98,238.74</td><td>3242</td></tr> <tr><td>C++</td><td>99,147.93</td><td>845</td></tr> <tr><td>CSS</td><td>99,881.40</td><td>527</td></tr> <tr><td>Perl</td><td>100,295.45</td><td>990</td></tr> <tr><td>C</td><td>100,766.51</td><td>2120</td></tr> <tr><td>Go</td><td>101,158.01</td><td>231</td></tr> <tr><td>Scala</td><td>101,460.91</td><td>243</td></tr> <tr><td>ColdFusion</td><td>101,536.70</td><td>109</td></tr> <tr><td>Objective-C</td><td>101,801.60</td><td>562</td></tr> <tr><td>Groovy</td><td>102,650.86</td><td>116</td></tr> <tr><td>Java</td><td>103,179.39</td><td>1402</td></tr> <tr><td>XSLT</td><td>106,199.19</td><td>123</td></tr> <tr><td>ActionScript</td><td>108,119.47</td><td>113</td></tr> </table> References Wainer, Howard. The Most Dangerous Equation. American Scientist, Volume 95. Clarck, Torin K., Aaron W. Johnson, and Alexander J. Stimpson. "Going for Three: Predicting the Likelihood of Field Goal Success with Logistic Regression." (2013): n. page. Web. 20 Feb. 2013. http://en.wikipedia.org/wiki/Beta_function#Incomplete_beta_function End of explanation """
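As a quick numerical sanity check of the Normal approximation derived in the appendix (a sketch with made-up vote counts; the 1.645 constant is the 5% quantile of the standard Normal, and is an assumption of ours rather than text from the chapter):

```python
import numpy as np

u, d = 30, 10                # hypothetical upvotes and downvotes
a, b = 1 + u, 1 + d          # posterior Beta parameters from the appendix

mu = a / (a + b)
sigma = np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
approx = mu - 1.645 * sigma  # solve 0.05 = Phi((x - mu) / sigma) for x

# Monte Carlo estimate of the exact 0.05 quantile of the Beta posterior.
rng = np.random.default_rng(0)
mc = np.quantile(rng.beta(a, b, size=200_000), 0.05)
```

The approximate lower bound lands within a percent or so of the Monte Carlo quantile, which is why the shortcut formula is good enough for ranking.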
cgrudz/cgrudz.github.io
teaching/stat_775_2021_fall/activities/activity-2021-08-30.ipynb
mit
import numpy as np # load the data below specifying the correct path in the load text with the correct commands for a csv file """ Explanation: Introduction to Python part III (And a discussion of orthgonality) Activity 1: Discussion of orthogonality One of the primary concepts discussed in the review of inner product spaces is the concept of orthogonality. We will discuss as a class the following points: How is orthogonality derived from the vector inner product? What does this represent? How is orthogonality used to construct bases for a vector space / sub-space? What are what is the Gram-Schidt process and what is the consequence of this as a matrix decomposition? What is the geometric interpretation of an orthogonal matrix? Activity 2: Basic data analysis and manipulation We will continue from here with analyzing the patient data in inflammation-01.csv. Use the loadtxt command from numpy to load the data into the namespace as a variable data. Notice that the # denotes a comment, i.e., a note that will not be run by the Python kernel and instead serves as a reference to ourself and future programmers about the use of the code. End of explanation """ import scipy as sp """ Explanation: NumPy has several useful functions that take an array as input to perform operations on its values. If we want to find the average inflammation for all patients on all days, for example, we can ask NumPy to compute data’s mean value with the function np.mean. Exercise 1 The standard five number summary to analyze data in terms of the center, spread, skewness and extreme values is given in terms of: the minumum value the maximum value the median the upper quartile the lower quartile In addition, the mean, mode and standard deviation are useful descriptors of the data. In the following, look for the methods needed to compute the above statistics on the data array. Furthermore, load these into variables such that the below print statements will work. 
Note that some of these may be better computed with methods in the complementary scientific computing library scipy. This is imported in the standard convention below: End of explanation """ print('maximum inflammation:', maxval) print('minimum inflammation:', minval) print('standard deviation:', stdval) print('median inflammation:', medval) print('mean inflammation:', meanval) print('upper quartiile inflammation', up_quartval) print('lower quartiile inflammation', lo_quartval) """ Explanation: Note that you may need to use a command such as conda install scipy for the above to work. How do we know what functions NumPy and SciPy have and how to use them? If you are working in IPython or in a Jupyter Notebook, there is an easy way to find out. If you type the name of something followed by a dot, then you can use tab completion (e.g. type numpy. and then press Tab) to see a list of all functions and attributes that you can use. After selecting one, you can also add a question mark (e.g. np.cumprod?), and IPython will return an explanation of the method! End of explanation """ A = np.array([[1,2,3], [4,5,6], [7, 8, 9]]) print('A = ') print(A) B = np.hstack([A, A]) print('B = ') print(B) C = np.vstack([A, A]) print('C = ') print(C) """ Explanation: Exercise 2: When analyzing data, we often want to look at variations in statistical values, such as the maximum inflammation per patient or the average inflammation per day. We note that our data example is formatted in the following row / column orientation: With the above conventions, compute the patient max inflammation and the average inflamation. Use the keyword arguments above in the np.max and np.mean functions to specify over which dimension we make a calculation. Use this to verify a second calculation, slicing the array with the : notation in one of the axes to compute the patient max for the third patient and the daily average on the second day. 
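As a sketch of what Exercise 2 is asking for (using a small stand-in array with patients as rows and days as columns, since the real inflammation data may not be loaded here):

```python
import numpy as np

# A small stand-in for the inflammation data: 3 patients (rows) x 4 days (columns).
data = np.array([[0., 1., 2., 1.],
                 [1., 3., 2., 4.],
                 [2., 2., 5., 3.]])

patient_max = np.max(data, axis=1)   # one max per patient (reduce over days)
daily_mean = np.mean(data, axis=0)   # one mean per day (reduce over patients)

# Cross-check with slicing: max for the third patient, mean on the second day.
third_patient_max = np.max(data[2, :])
second_day_mean = np.mean(data[:, 1])
```

The axis keyword names the dimension that is collapsed, so axis=1 reduces across columns (days) and axis=0 reduces across rows (patients).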
Exercise 3: Arrays can be concatenated and stacked on top of one another, using NumPy’s vstack and hstack functions for vertical and horizontal stacking, respectively. End of explanation """ D = np.hstack((A[:, :1], A[:, -1:])) print('D = ') print(D) """ Explanation: Write some additional code that slices the first and last columns of A, and stacks them into a 3x2 array. Make sure to print the results to verify your solution. Note a ‘gotcha’ with array indexing is that singleton dimensions are dropped by default. That means A[:, 0] is a one dimensional array, which won’t stack as desired. To preserve singleton dimensions, the index itself can be a slice or array. For example, A[:, :1] returns a two dimensional array with one singleton dimension (i.e. a column vector). End of explanation """ patient3_week1 = data[3, :7] print(patient3_week1) """ Explanation: An alternative way to achieve the same result is to use Numpy’s delete function to remove the second column of A. Use the search function for the documentation on the np.delete function to find the syntax for constructing such an array. Exercise 4: The patient data is longitudinal in the sense that each row represents a series of observations relating to one individual. This means that the change in inflammation over time is a meaningful concept. Let’s find out how to calculate changes in the data contained in an array with NumPy. The np.diff function takes an array and returns the differences between two successive values. Let’s use it to examine the changes each day across the first week of patient 3 from our inflammation dataset. End of explanation """ np.diff(patient3_week1) """ Explanation: Calling np.diff(patient3_week1) would do the following calculations [ 0 - 0, 2 - 0, 0 - 2, 4 - 0, 2 - 4, 2 - 2 ] and return the 6 difference values in a new array. End of explanation """
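The same idea extends to the whole dataset at once: np.diff accepts an axis argument, so the day-to-day change for every patient can be computed in one call (a sketch on a toy array, since the real data may not be loaded here):

```python
import numpy as np

# Toy stand-in for the patient data (rows: patients, columns: days).
data = np.array([[0., 0., 2., 0., 4.],
                 [1., 2., 2., 3., 1.]])

# Day-to-day change for every patient at once: one column fewer than the input.
daily_change = np.diff(data, axis=1)
```

Note the shape: differencing along an axis of length n yields n - 1 values along that axis.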
halexan/cs231n
assignment1/two_layer_net.ipynb
mit
# A bit of setup import numpy as np import matplotlib.pyplot as plt from cs231n.classifiers.neural_net import TwoLayerNet %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 def rel_error(x, y): """ returns relative error """ return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y)))) """ Explanation: Implementing a Neural Network In this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset. End of explanation """ # Create a small net and some toy data to check your implementations. # Note that we set the random seed for repeatable experiments. input_size = 4 hidden_size = 10 num_classes = 3 num_inputs = 5 def init_toy_model(): np.random.seed(0) return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1) def init_toy_data(): np.random.seed(1) X = 10 * np.random.randn(num_inputs, input_size) y = np.array([0, 1, 2, 2, 1]) return X, y net = init_toy_model() X, y = init_toy_data() """ Explanation: We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation. 
End of explanation """ scores = net.loss(X) print 'Your scores:' print scores print print 'correct scores:' correct_scores = np.asarray([ [-0.81233741, -1.27654624, -0.70335995], [-0.17129677, -1.18803311, -0.47310444], [-0.51590475, -1.01354314, -0.8504215 ], [-0.15419291, -0.48629638, -0.52901952], [-0.00618733, -0.12435261, -0.15226949]]) print correct_scores print # The difference should be very small. We get < 1e-7 print 'Difference between your scores and correct scores:' print np.sum(np.abs(scores - correct_scores)) """ Explanation: Forward pass: compute scores Open the file cs231n/classifiers/neural_net.py and look at the method TwoLayerNet.loss. This function is very similar to the loss functions you have written for the SVM and Softmax exercises: It takes the data and weights and computes the class scores, the loss, and the gradients on the parameters. Implement the first part of the forward pass which uses the weights and biases to compute the scores for all inputs. End of explanation """ loss, _ = net.loss(X, y, reg=0.1) correct_loss = 1.30378789133 # should be very small, we get < 1e-12 print 'Difference between your loss and correct loss:' print np.sum(np.abs(loss - correct_loss)) """ Explanation: Forward pass: compute loss In the same function, implement the second part that computes the data and regularizaion loss. End of explanation """ from cs231n.gradient_check import eval_numerical_gradient # Use numeric gradient checking to check your implementation of the backward pass. # If your implementation is correct, the difference between the numeric and # analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2. 
loss, grads = net.loss(X, y, reg=0.1) # these should all be less than 1e-8 or so for param_name in grads: f = lambda W: net.loss(X, y, reg=0.1)[0] param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False) print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])) """ Explanation: Backward pass Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check: End of explanation """ net = init_toy_model() stats = net.train(X, y, X, y, learning_rate=1e-1, reg=1e-5, num_iters=100, verbose=False) print 'Final training loss: ', stats['loss_history'][-1] # plot the loss history plt.plot(stats['loss_history']) plt.xlabel('iteration') plt.ylabel('training loss') plt.title('Training Loss history') plt.show() """ Explanation: Train the network To train the network we will use stochastic gradient descent (SGD), similar to the SVM and Softmax classifiers. Look at the function TwoLayerNet.train and fill in the missing sections to implement the training procedure. This should be very similar to the training procedure you used for the SVM and Softmax classifiers. You will also have to implement TwoLayerNet.predict, as the training process periodically performs prediction to keep track of accuracy over time while the network trains. Once you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2. End of explanation """ from cs231n.data_utils import load_CIFAR10 def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000): """ Load the CIFAR-10 dataset from disk and perform preprocessing to prepare it for the two-layer neural net classifier. These are the same steps as we used for the SVM, but condensed to a single function. 
""" # Load the raw CIFAR-10 data cifar10_dir = 'cs231n/datasets/cifar-10-batches-py' X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir) # Subsample the data mask = range(num_training, num_training + num_validation) X_val = X_train[mask] y_val = y_train[mask] mask = range(num_training) X_train = X_train[mask] y_train = y_train[mask] mask = range(num_test) X_test = X_test[mask] y_test = y_test[mask] # Normalize the data: subtract the mean image mean_image = np.mean(X_train, axis=0) X_train -= mean_image X_val -= mean_image X_test -= mean_image # Reshape data to rows X_train = X_train.reshape(num_training, -1) X_val = X_val.reshape(num_validation, -1) X_test = X_test.reshape(num_test, -1) return X_train, y_train, X_val, y_val, X_test, y_test # Invoke the above function to get our data. X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data() print 'Train data shape: ', X_train.shape print 'Train labels shape: ', y_train.shape print 'Validation data shape: ', X_val.shape print 'Validation labels shape: ', y_val.shape print 'Test data shape: ', X_test.shape print 'Test labels shape: ', y_test.shape """ Explanation: Load the data Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset. End of explanation """ input_size = 32 * 32 * 3 hidden_size = 50 num_classes = 10 net = TwoLayerNet(input_size, hidden_size, num_classes) # Train the network stats = net.train(X_train, y_train, X_val, y_val, num_iters=1000, batch_size=200, learning_rate=1e-4, learning_rate_decay=0.95, reg=0.5, verbose=True) # Predict on the validation set val_acc = np.mean(net.predict(X_val) == y_val) print 'Validation accuracy: ', val_acc """ Explanation: Train a network To train our network we will use SGD with momentum. 
In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate. End of explanation """ # Plot the loss function and train / validation accuracies plt.subplot(2, 1, 1) plt.plot(stats['loss_history']) plt.title('Loss history') plt.xlabel('Iteration') plt.ylabel('Loss') plt.subplot(2, 1, 2) plt.plot(stats['train_acc_history'], label='train') plt.plot(stats['val_acc_history'], label='val') plt.title('Classification accuracy history') plt.xlabel('Epoch') plt.ylabel('Classification accuracy') plt.show() from cs231n.vis_utils import visualize_grid # Visualize the weights of the network def show_net_weights(net): W1 = net.params['W1'] W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2) plt.imshow(visualize_grid(W1, padding=3).astype('uint8')) plt.gca().axis('off') plt.show() show_net_weights(net) """ Explanation: Debug the training With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good. One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization. Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized. End of explanation """ best_net = None # store the best model into this ################################################################################# # TODO: Tune hyperparameters using the validation set. Store your best trained # # model in best_net. # # # # To help debug your network, it may help to use visualizations similar to the # # ones we used above; these visualizations will have significant qualitative # # differences from the ones we saw above for the poorly tuned network.
# # # # Tweaking hyperparameters by hand can be fun, but you might find it useful to # # write code to sweep through possible combinations of hyperparameters # # automatically like we did on the previous exercises. # ################################################################################# # Hyperparameters including: # 1) learning_rate # 2) learning_rate_decay # 3) L2 regularization # 4) hidden layers # 5) momentum # 6) ... # Trying every combination of parameters can take a long time input_size = 32 * 32 * 3 num_classes = 10 best_acc = 0 best_parameters = {} attempt_times = 0 hidden_sizes = [100,150,200,250,300] learning_rate = [1e-3,1e-4,2e-3,2e-4,5e-5] learning_rate_decay = [0.8, 0.85, 0.9, 0.95, 1] regulations = [0.1, 0.3, 0.5, 0.7] for hd in hidden_sizes: net = TwoLayerNet(input_size, hd, num_classes) for lr in learning_rate: for lrd in learning_rate_decay: for reg in regulations: stats = net.train(X_train, y_train, X_val, y_val, num_iters=10, batch_size=200, learning_rate=lr, learning_rate_decay=lrd, reg=reg, verbose=False) val_acc = np.mean(net.predict(X_val) == y_val) # print '***************************************' # print 'Trial:', attempt_times, ' hidden_sizes =', hd, ' learning_rate =', lr, \ # ' learning_rate_decay =', lrd, ' regulations =', reg # print 'val_acc =', val_acc if best_acc < val_acc: best_acc = val_acc best_net = net best_parameters['reg'] = reg best_parameters['hidden_size'] = hd best_parameters['learning_rate'] = lr best_parameters['learning_rate_decay'] = lrd for key in best_parameters.keys(): print key, " : ", best_parameters[key] print 'Validation accuracy: ', best_acc ################################################################################# # END OF YOUR CODE # ################################################################################# # visualize the weights of the best network show_net_weights(best_net) """ Explanation: Tune your hyperparameters What's wrong?.
Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy. Tuning. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value. Approximate results. You should aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set. Experiment: Your goal in this exercise is to get as good of a result on CIFAR-10 as you can, with a fully-connected Neural Network. For every 1% above 52% on the Test set we will award you with one extra bonus point. Feel free to implement your own techniques (e.g. PCA to reduce dimensionality, or adding dropout, or adding features to the solver, etc.). End of explanation """ test_acc = (best_net.predict(X_test) == y_test).mean() print 'Test accuracy: ', test_acc """ Explanation: Run on the test set When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%. We will give you an extra bonus point for every 1% of accuracy above 52%. End of explanation """
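The update rule described above (SGD with momentum plus an exponential learning-rate schedule) can be sketched in plain NumPy on a toy quadratic objective; the function and constants below are illustrative, not the assignment's solver:

```python
import numpy as np

def sgd_momentum(grad_fn, w0, lr=0.1, mu=0.9, lr_decay=0.99, epochs=100):
    # Classic momentum update; after each epoch the learning rate is
    # multiplied by lr_decay (the exponential schedule described above).
    w = np.array(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(epochs):
        v = mu * v - lr * grad_fn(w)
        w = w + v
        lr *= lr_decay
    return w

# Toy objective f(w) = 0.5 * ||w||^2 with gradient w; the minimum is at 0.
w_final = sgd_momentum(lambda w: w, w0=[4.0, -2.0])
```

The velocity term v accumulates past gradients, which smooths the trajectory, while the decaying learning rate shrinks the step size as the iterate approaches the minimum.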
adolfoguimaraes/machinelearning
Tensorflow/Tutorial01_IntroducaoTensorflow.ipynb
mit
import tensorflow as tf tf.__version__ """ Explanation: Tutorial 1 This tutorial aims to explore the basic concepts of TensorFlow. Details on how to install TensorFlow can be found at: https://www.tensorflow.org/. Reference links for this material: Tutorial do Tensorflow Curso básico do Tensorflow TensorFlow Examples The code below checks that TensorFlow is installed and prints the current version End of explanation """ #Creating the nodes: node1 = tf.constant(3.0, tf.float32) #Creates node "a" with the value 3 of type float node2 = tf.constant(4.0) # Creates node "b" with the value 4 (implicitly it is also of type float) node3 = tf.add(node1, node2) # Creates the root node, applying the addition operation to nodes a and b print("node1:", node1, "node2:", node2) print("node3:", node3) #Let's create a TensorFlow session to run the required operations: sess = tf.Session() sess.run(node3) """ Explanation: To understand a bit of how TensorFlow works, let's build the following "network": <img src="http://adolfo.data2learning.com/ludiico/images/diagrama1.png" /> End of explanation """ a = tf.placeholder(tf.float32) b = tf.placeholder(tf.float32) adder_node = a + b # this command plays the same role as tf.add(node1, node2) in the previous example print(sess.run(adder_node, feed_dict={a: 3, b: 4})) print(sess.run(adder_node, feed_dict={a: [1, 3], b:[2, 4]})) """ Explanation: The TensorFlow workflow can be divided into 3 parts: Build the graph using TensorFlow operations In[3] Feed in the data and run the graph operations (sess.run(op)) In[6] Update the variables in the graph (and return the output values) In[6] In the first solution, we built the graph with predefined values. However, it is more useful to feed the graph with input values. For that, we create placeholders.
Details on feed and placeholder can be found at: https://www.tensorflow.org/programmers_guide/reading_data#feeding End of explanation """ # Create the variables weights = tf.Variable(tf.random_normal([784, 200], stddev=0.35), name="weights") biases = tf.Variable(tf.zeros([200]), name="biases") # Add an operation that initializes the variables used in the model init_op = tf.global_variables_initializer() # When running the model, execute the initialization operation with tf.Session() as sess: sess.run(init_op) # ... # Use the model # ... """ Explanation: Another important basic concept is that of Variables in TensorFlow. When we build a model, variables are used to hold and update the parameters of that model. Details about variables can be found at: https://www.tensorflow.org/programmers_guide/variables The example below briefly shows how variables are used in code. End of explanation """ # Building the graph using TensorFlow operations # Input data x_train = [1, 2, 3] y_train = [1, 2, 3] # Creating the Hypothesis operation W = tf.Variable(tf.random_normal([1]), name="weight") b = tf.Variable(tf.random_normal([1]), name="bias") # XW + b hypothesis = x_train * W + b # cost function cost = tf.reduce_mean(tf.square(hypothesis - y_train)) # Gradient Descent optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01) train = optimizer.minimize(cost) """ Explanation: Another important concept is the Tensor. In TensorFlow everything is a Tensor (Everything is Tensor). You can think of a Tensor as an n-dimensional array or a list. A tensor has a static type, and its dimension is determined dynamically. An important note is that only tensors can be passed between the nodes of a TensorFlow graph. Details can be found at: https://www.tensorflow.org/programmers_guide/dims_types To illustrate these components, let's work with a model.
The purpose is to build a linear regression model using TensorFlow. To train it we will use the following model: Hypothesis $H(x) = Wx + b$ Cost function $cost(W, b) = \frac{1}{m}\sum_{i=1}^{m}(H(x^{(i)})-y^{(i)})^2$ Gradient descent algorithm End of explanation """ # Create the graph session sess = tf.Session() # Initialize the variables in the graph sess.run(tf.global_variables_initializer()) #Training for step in range(2001): sess.run(train) if step % 20 == 0: print(step, sess.run(cost), sess.run(W), sess.run(b)) """ Explanation: Once the model has been built, the next step is to run and update the graph and get the final result. End of explanation """ #Graphic display import matplotlib.pyplot as plt plt.plot(x_train, y_train, 'ro', label='Original data') plt.plot(x_train, sess.run(W) * x_train + sess.run(b), label='Fitted line') plt.legend() plt.show() """ Explanation: Let's display the original data and the trained model. End of explanation """ x_train = [3.3,4.4,5.5,6.71,6.93,4.168,9.779,6.182,7.59,2.167, 7.042,10.791,5.313,7.997,5.654,9.27,3.1] y_train = [1.7,2.76,2.09,3.19,1.694,1.573,3.366,2.596,2.53,1.221, 2.827,3.465,1.65,2.904,2.42,2.94,1.3] plt.plot(x_train, y_train, 'ro', label='Original data') plt.show() # Complete model using the placeholder W = tf.Variable(tf.random_normal([1]), name="weight") b = tf.Variable(tf.random_normal([1]), name="bias") X = tf.placeholder(tf.float32) Y = tf.placeholder(tf.float32) hypothesis = X * W + b cost = tf.reduce_mean(tf.square(hypothesis - Y)) optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01) train = optimizer.minimize(cost) sess = tf.Session() sess.run(tf.global_variables_initializer()) for step in range(2001): sess.run(train, feed_dict={X: x_train, Y: y_train}) if step % 20 == 0: print(step, sess.run(cost, feed_dict={X: x_train, Y: y_train}), sess.run(W), sess.run(b)) #Graphic display plt.plot(x_train, y_train, 'ro', label='Original data') plt.plot(x_train, sess.run(W) * x_train
+ sess.run(b), label='Fitted line') plt.legend() plt.show() """ Explanation: Let's change our implementation to use the PlaceHolder concept so that we can feed our model with data. In this step we will use more data. The lists below and the plot show the distribution of the data we are going to create. x_train = [3.3,4.4,5.5,6.71,6.93,4.168,9.779,6.182,7.59,2.167, 7.042,10.791,5.313,7.997,5.654,9.27,3.1] y_train = [1.7,2.76,2.09,3.19,1.694,1.573,3.366,2.596,2.53,1.221, 2.827,3.465,1.65,2.904,2.42,2.94,1.3] End of explanation """
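The gradient-descent fit above should approach the ordinary least-squares line for the same data; as a quick cross-check (a NumPy sketch, not part of the original tutorial):

```python
import numpy as np

x_train = [3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59, 2.167,
           7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1]
y_train = [1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53, 1.221,
           2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3]

# Closed-form least-squares line y = W*x + b; gradient descent on the MSE
# cost should converge toward these values.
W, b = np.polyfit(x_train, y_train, deg=1)
residuals = np.asarray(y_train) - (W * np.asarray(x_train) + b)
```

At the least-squares optimum the residuals are orthogonal to both the inputs and the intercept, which is a handy way to confirm the training loop has converged.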
tomwallis/PsyUtils
psydata_demo.ipynb
mit
import seaborn as sns import psyutils as pu %load_ext autoreload %autoreload 2 %matplotlib inline sns.set_style("white") sns.set_style("ticks") """ Explanation: Short demo of psydata functions End of explanation """ # load data: dat = pu.psydata.load_psy_data() dat.info() """ Explanation: I'll demo some functions here using a dataset I simulated earlier. End of explanation """ pu.psydata.binomial_binning(dat, y='correct', grouping_variables=['contrast', 'sf']) """ Explanation: Bin bernoulli trials, compute binomial statistics Compute binomial trials for each combination of contrast and sf, averaging over subjects: End of explanation """ pu.psydata.binomial_binning(dat, y='correct', grouping_variables=['contrast', 'sf'], rule_of_succession=True) """ Explanation: Notice that you can't compute binomial confidence intervals if the proportion success is 0 or 1. We can fix this using Laplace's Rule of Succession -- add one success and one failure to each observation (basically a prior that says that both successes and failures are possible). End of explanation """ g = pu.psydata.plot_psy(dat, 'contrast', 'correct', function='weibull', hue='sf', col='subject', log_x=True, col_wrap=3, errors=False, fixed={'gam': .5, 'lam':.02}, inits={'m': 0.01, 'w': 3}) g.add_legend() g.set(xlabel='Log Contrast', ylabel='Prop correct') g.fig.subplots_adjust(wspace=.8, hspace=.8); """ Explanation: Fit and plot a psychometric function to each subject, sf: End of explanation """ g = pu.psydata.plot_psy_params(dat, 'contrast', 'correct', x="sf", y="m", function='weibull', hue='subject', fixed={'gam': .5, 'lam':.02}) g.set(xlabel='Spatial Frequency', ylabel='Contrast threshold'); """ Explanation: Some kind of wonky fits (unrealistic slopes), but hey, that's what you get with a simple ML fit with no pooling / shrinkage / priors. End of explanation """
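The rule-of-succession adjustment can be illustrated in plain NumPy (a sketch; psyutils' internal implementation may differ, and the helper name binomial_ci is ours):

```python
import numpy as np

def binomial_ci(n_success, n_trials, z=1.96, rule_of_succession=True):
    # Laplace's rule of succession: add one success and one failure so the
    # estimated proportion can never be exactly 0 or 1.
    if rule_of_succession:
        n_success, n_trials = n_success + 1, n_trials + 2
    p = n_success / n_trials
    half_width = z * np.sqrt(p * (1 - p) / n_trials)
    # Clip the normal-approximation interval to the valid [0, 1] range.
    return p, max(p - half_width, 0.0), min(p + half_width, 1.0)

p, lo, hi = binomial_ci(10, 10)                              # all trials correct
p_raw = binomial_ci(10, 10, rule_of_succession=False)[0]     # degenerate: exactly 1
```

Without the adjustment the estimated proportion is exactly 1 and the interval collapses; with it, the estimate stays inside (0, 1) and the interval has non-zero width.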
sbyrnes321/tmm
examples.ipynb
mit
from __future__ import division, print_function, absolute_import from tmm import (coh_tmm, unpolarized_RT, ellips, position_resolved, find_in_structure_with_inf) from numpy import pi, linspace, inf, array from scipy.interpolate import interp1d import matplotlib.pyplot as plt %matplotlib inline """ Explanation: Examples of plots and calculations using the tmm package Imports End of explanation """ try: import colorpy.illuminants import colorpy.colormodels from tmm import color colors_were_imported = True except ImportError: # without colorpy, you can't run sample5(), but everything else is fine. colors_were_imported = False # "5 * degree" is 5 degrees expressed in radians # "1.2 / degree" is 1.2 radians expressed in degrees degree = pi/180 """ Explanation: Set up End of explanation """ # list of layer thicknesses in nm d_list = [inf,100,300,inf] # list of refractive indices n_list = [1,2.2,3.3+0.3j,1] # list of wavenumbers to plot in nm^-1 ks=linspace(0.0001,.01,num=400) # initialize lists of y-values to plot Rnorm=[] R45=[] for k in ks: # For normal incidence, s and p polarizations are identical. # I arbitrarily decided to use 's'. Rnorm.append(coh_tmm('s',n_list, d_list, 0, 1/k)['R']) R45.append(unpolarized_RT(n_list, d_list, 45*degree, 1/k)['R']) kcm = ks * 1e7 #ks in cm^-1 rather than nm^-1 plt.figure() plt.plot(kcm,Rnorm,'blue',kcm,R45,'purple') plt.xlabel('k (cm$^{-1}$)') plt.ylabel('Fraction reflected') plt.title('Reflection of unpolarized light at 0$^\circ$ incidence (blue), ' '45$^\circ$ (purple)'); """ Explanation: Sample 1 Here's a thin non-absorbing layer, on top of a thick absorbing layer, with air on both sides. Plotting reflected intensity versus wavenumber, at two different incident angles. End of explanation """ #index of refraction of my material: wavelength in nm versus index. 
material_nk_data = array([[200, 2.1+0.1j], [300, 2.4+0.3j], [400, 2.3+0.4j], [500, 2.2+0.4j], [750, 2.2+0.5j]]) material_nk_fn = interp1d(material_nk_data[:,0].real, material_nk_data[:,1], kind='quadratic') d_list = [inf,300,inf] #in nm lambda_list = linspace(200,750,400) #in nm T_list = [] for lambda_vac in lambda_list: n_list = [1, material_nk_fn(lambda_vac), 1] T_list.append(coh_tmm('s',n_list,d_list,0,lambda_vac)['T']) plt.figure() plt.plot(lambda_list,T_list) plt.xlabel('Wavelength (nm)') plt.ylabel('Fraction of power transmitted') plt.title('Transmission at normal incidence'); """ Explanation: Sample 2 Here's the transmitted intensity versus wavelength through a single-layer film which has some complicated wavelength-dependent index of refraction. (I made these numbers up, but in real life they could be read out of a graph / table published in the literature.) Air is on both sides of the film, and the light is normally incident. End of explanation """ n_list=[1,1.46,3.87+0.02j] ds=linspace(0,1000,num=100) #in nm psis=[] Deltas=[] for d in ds: e_data=ellips(n_list, [inf,d,inf], 70*degree, 633) #in nm psis.append(e_data['psi']/degree) # angle in degrees Deltas.append(e_data['Delta']/degree) # angle in degrees plt.figure() plt.plot(ds,psis,ds,Deltas) plt.xlabel('SiO2 thickness (nm)') plt.ylabel('Ellipsometric angles (degrees)') plt.title('Ellipsometric parameters for air/SiO2/Si, varying ' 'SiO2 thickness.\n' '@ 70$^\circ$, 633nm. ' 'Should agree with Handbook of Ellipsometry Fig. 1.14'); """ Explanation: Sample 3 Here is a calculation of the psi and Delta parameters measured in ellipsometry. This reproduces Fig. 1.14 in Handbook of Ellipsometry by Tompkins, 2005. 
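The psi and Delta being computed have a compact definition in terms of the complex amplitude reflection coefficients for the two polarizations: $\tan\psi \, e^{i\Delta} = r_p / r_s$. Here is a standalone sketch of that conversion (independent of the tmm package's own `ellips` routine, which handles the full multilayer calculation):

```python
import numpy as np

def psi_delta(r_p, r_s):
    # Ellipsometric angles from the complex amplitude reflection
    # coefficients: rho = r_p / r_s = tan(psi) * exp(1j * Delta)
    rho = r_p / r_s
    psi = np.arctan(np.abs(rho))
    delta = np.angle(rho)
    return psi, delta

# Made-up coefficients, just to exercise the conversion:
psi, delta = psi_delta(0.5 * np.exp(0.3j), 1.0)
print(np.rad2deg(psi), np.rad2deg(delta))
```

Ellipsometers measure psi and Delta directly, which is why plots like the one above are the natural way to compare a model stack against instrument output.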
End of explanation """ d_list = [inf, 100, 300, inf] #in nm n_list = [1, 2.2+0.2j, 3.3+0.3j, 1] th_0=pi/4 lam_vac=400 pol='p' coh_tmm_data = coh_tmm(pol,n_list,d_list,th_0,lam_vac) ds = linspace(0,400,num=1000) #position in structure poyn=[] absor=[] for d in ds: layer, d_in_layer = find_in_structure_with_inf(d_list,d) data=position_resolved(layer,d_in_layer,coh_tmm_data) poyn.append(data['poyn']) absor.append(data['absor']) # convert data to numpy arrays for easy scaling in the plot poyn = array(poyn) absor = array(absor) plt.figure() plt.plot(ds,poyn,'blue',ds,200*absor,'purple') plt.xlabel('depth (nm)') plt.ylabel('AU') plt.title('Local absorption (purple), Poynting vector (blue)'); """ Explanation: Sample 4 Here is an example where we plot absorption and Poynting vector as a function of depth. End of explanation """ if not colors_were_imported: print('Colorpy was not detected (or perhaps an error occurred when', 'loading it). You cannot do color calculations, sorry!', 'http://pypi.python.org/pypi/colorpy') else: # Crystalline silicon refractive index. Data from Palik via # http://refractiveindex.info, I haven't checked it, but this is just for # demonstration purposes anyway. 
Si_n_data = [[400, 5.57 + 0.387j], [450, 4.67 + 0.145j], [500, 4.30 + 7.28e-2j], [550, 4.08 + 4.06e-2j], [600, 3.95 + 2.57e-2j], [650, 3.85 + 1.64e-2j], [700, 3.78 + 1.26e-2j]] Si_n_data = array(Si_n_data) Si_n_fn = interp1d(Si_n_data[:,0], Si_n_data[:,1], kind='linear') # SiO2 refractive index (approximate): 1.46 regardless of wavelength SiO2_n_fn = lambda wavelength : 1.46 # air refractive index air_n_fn = lambda wavelength : 1 n_fn_list = [air_n_fn, SiO2_n_fn, Si_n_fn] th_0 = 0 # Print the colors, and show plots, for the special case of 300nm-thick SiO2 d_list = [inf, 300, inf] reflectances = color.calc_reflectances(n_fn_list, d_list, th_0) illuminant = colorpy.illuminants.get_illuminant_D65() spectrum = color.calc_spectrum(reflectances, illuminant) color_dict = color.calc_color(spectrum) print('air / 300nm SiO2 / Si --- rgb =', color_dict['rgb'], ', xyY =', color_dict['xyY']) plt.figure() color.plot_reflectances(reflectances, title='air / 300nm SiO2 / Si -- ' 'Fraction reflected at each wavelength') plt.figure() color.plot_spectrum(spectrum, title='air / 300nm SiO2 / Si -- ' 'Reflected spectrum under D65 illumination') # Calculate irgb color (i.e. gamma-corrected sRGB display color rounded to # integers 0-255) versus thickness of SiO2 max_SiO2_thickness = 600 SiO2_thickness_list = linspace(0,max_SiO2_thickness,num=80) irgb_list = [] for SiO2_d in SiO2_thickness_list: d_list = [inf, SiO2_d, inf] reflectances = color.calc_reflectances(n_fn_list, d_list, th_0) illuminant = colorpy.illuminants.get_illuminant_D65() spectrum = color.calc_spectrum(reflectances, illuminant) color_dict = color.calc_color(spectrum) irgb_list.append(color_dict['irgb']) # Plot those colors print('Making color vs SiO2 thickness graph. 
Compare to (for example)') print('http://www.htelabs.com/appnotes/sio2_color_chart_thermal_silicon_dioxide.htm') plt.figure() plt.plot([0,max_SiO2_thickness],[1,1]) plt.xlim(0,max_SiO2_thickness) plt.ylim(0,1) plt.xlabel('SiO2 thickness (nm)') plt.yticks([]) plt.title('Air / SiO2 / Si color vs SiO2 thickness') for i in range(len(SiO2_thickness_list)): # One strip of each color, centered at x=SiO2_thickness_list[i] if i==0: x0 = 0 else: x0 = (SiO2_thickness_list[i] + SiO2_thickness_list[i-1]) / 2 if i == len(SiO2_thickness_list) - 1: x1 = max_SiO2_thickness else: x1 = (SiO2_thickness_list[i] + SiO2_thickness_list[i+1]) / 2 y0 = 0 y1 = 1 poly_x = [x0, x1, x1, x0] poly_y = [y0, y0, y1, y1] color_string = colorpy.colormodels.irgb_string_from_irgb(irgb_list[i]) plt.fill(poly_x, poly_y, color_string, edgecolor=color_string) """ Explanation: Sample 5 Color calculations: What color is a air / thin SiO2 / Si wafer? End of explanation """ # list of layer thicknesses in nm d_list = [inf, 5, 30, inf] # list of refractive indices n_list = [1.517, 3.719+4.362j, 0.130+3.162j, 1] # wavelength in nm lam_vac = 633 # list of angles to plot theta_list = linspace(30*degree, 60*degree, num=300) # initialize lists of y-values to plot Rp = [] for theta in theta_list: Rp.append(coh_tmm('p', n_list, d_list, theta, lam_vac)['R']) plt.figure() plt.plot(theta_list/degree, Rp, 'blue') plt.xlabel('theta (degree)') plt.ylabel('Fraction reflected') plt.xlim(30, 60) plt.ylim(0, 1) plt.title('Reflection of p-polarized light with Surface Plasmon Resonance\n' 'Compare with http://doi.org/10.2320/matertrans.M2010003 Fig 6a'); """ Explanation: Sample 6 An example reflection plot with a surface plasmon resonance (SPR) dip. Compare with http://doi.org/10.2320/matertrans.M2010003 ("Spectral and Angular Responses of Surface Plasmon Resonance Based on the Kretschmann Prism Configuration") Fig 6a End of explanation """
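A handy sanity check for any of the transfer-matrix results above is the single-interface Fresnel limit, which can be computed by hand. This is a standalone sketch, not part of the tmm package: at normal incidence the reflectance of a bare interface is $R = |n_1 - n_2|^2 / |n_1 + n_2|^2$, and a stack whose internal layers vanish should approach it.

```python
def fresnel_R_normal(n1, n2):
    # Normal-incidence reflectance at a single interface:
    # R = |n1 - n2|^2 / |n1 + n2|^2  (works for complex indices too)
    return abs(n1 - n2)**2 / abs(n1 + n2)**2

# Air to glass (n = 1.5): the familiar ~4% reflection
print(fresnel_R_normal(1.0, 1.5))  # 0.04
# Air to an absorbing medium (values roughly like silicon at 633 nm)
print(fresnel_R_normal(1.0, 3.87 + 0.02j))
```

Checking a full multilayer code against limits like this one is a cheap way to catch sign conventions or units going wrong.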
brian-rose/ClimateModeling_courseware
Lectures/Lecture19 -- Seasonal cycle and heat capacity.ipynb
mit
# Ensure compatibility with Python 2 and 3 from __future__ import print_function, division """ Explanation: ATM 623: Climate Modeling Brian E. J. Rose, University at Albany Lecture 19: Modeling the seasonal cycle of surface temperature Warning: content out of date and not maintained You really should be looking at The Climate Laboratory book by Brian Rose, where all the same content (and more!) is kept up to date. Here you are likely to find broken links and broken code. About these notes: This document uses the interactive Jupyter notebook format. The notes can be accessed in several different ways: The interactive notebooks are hosted on github at https://github.com/brian-rose/ClimateModeling_courseware The latest versions can be viewed as static web pages rendered on nbviewer A complete snapshot of the notes as of May 2017 (end of spring semester) are available on Brian's website. Also here is a legacy version from 2015. Many of these notes make use of the climlab package, available at https://github.com/brian-rose/climlab End of explanation """ %matplotlib inline import numpy as np import matplotlib.pyplot as plt import xarray as xr import climlab from climlab import constants as const import cartopy.crs as ccrs # use cartopy to make some maps ## The NOAA ESRL server is shutdown! 
January 2019 ncep_url = "http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis.derived/" ncep_Ts = xr.open_dataset(ncep_url + "surface_gauss/skt.sfc.mon.1981-2010.ltm.nc", decode_times=False) #url = "http://apdrc.soest.hawaii.edu:80/dods/public_data/Reanalysis_Data/NCEP/NCEP/clima/" #ncep_Ts = xr.open_dataset(url + 'surface_gauss/skt') lat_ncep = ncep_Ts.lat; lon_ncep = ncep_Ts.lon Ts_ncep = ncep_Ts.skt print( Ts_ncep.shape) """ Explanation: Contents The observed seasonal cycle from NCEP Reanalysis data Analytical toy model of the seasonal cycle Exploring the amplitude of the seasonal cycle with an EBM The seasonal cycle for a planet with 90º obliquity <a id='section1'></a> 1. The observed seasonal cycle from NCEP Reanalysis data Look at the observed seasonal cycle in the NCEP reanalysis data. Read in the necessary data from the online server. The catalog is here: http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis.derived/catalog.html End of explanation """ maxTs = Ts_ncep.max(dim='time') minTs = Ts_ncep.min(dim='time') meanTs = Ts_ncep.mean(dim='time') fig = plt.figure( figsize=(16,6) ) ax1 = fig.add_subplot(1,2,1, projection=ccrs.Robinson()) cax1 = ax1.pcolormesh(lon_ncep, lat_ncep, meanTs, cmap=plt.cm.seismic , transform=ccrs.PlateCarree()) cbar1 = plt.colorbar(cax1) ax1.set_title('Annual mean surface temperature ($^\circ$C)', fontsize=14 ) ax2 = fig.add_subplot(1,2,2, projection=ccrs.Robinson()) cax2 = ax2.pcolormesh(lon_ncep, lat_ncep, maxTs - minTs, transform=ccrs.PlateCarree() ) cbar2 = plt.colorbar(cax2) ax2.set_title('Seasonal temperature range ($^\circ$C)', fontsize=14) for ax in [ax1,ax2]: #ax.contour( lon_cesm, lat_cesm, topo.variables['LANDFRAC'][:], [0.5], colors='k'); #ax.set_xlabel('Longitude', fontsize=14 ); ax.set_ylabel('Latitude', fontsize=14 ) ax.coastlines() """ Explanation: Make two maps: one of annual mean surface temperature, another of the seasonal range (max minus min). 
End of explanation """ Tmax = 65; Tmin = -Tmax; delT = 10 clevels = np.arange(Tmin,Tmax+delT,delT) fig_zonobs, ax = plt.subplots( figsize=(10,6) ) cax = ax.contourf(np.arange(12)+0.5, lat_ncep, Ts_ncep.mean(dim='lon').transpose(), levels=clevels, cmap=plt.cm.seismic, vmin=Tmin, vmax=Tmax) ax.set_xlabel('Month', fontsize=16) ax.set_ylabel('Latitude', fontsize=16 ) cbar = plt.colorbar(cax) ax.set_title('Zonal mean surface temperature (degC)', fontsize=20) """ Explanation: Make a contour plot of the zonal mean temperature as a function of time End of explanation """ omega = 2*np.pi / const.seconds_per_year omega B = 2. Hw = np.linspace(0., 100.) Ctilde = const.cw * const.rho_w * Hw * omega / B amp = 1./((Ctilde**2+1)*np.cos(np.arctan(Ctilde))) Phi = np.arctan(Ctilde) color1 = 'b' color2 = 'r' fig = plt.figure(figsize=(8,6)) ax1 = fig.add_subplot(111) ax1.plot(Hw, amp, color=color1) ax1.set_xlabel('water depth (m)', fontsize=14) ax1.set_ylabel('Seasonal amplitude ($Q^* / B$)', fontsize=14, color=color1) for tl in ax1.get_yticklabels(): tl.set_color(color1) ax2 = ax1.twinx() ax2.plot(Hw, np.rad2deg(Phi), color=color2) ax2.set_ylabel('Seasonal phase shift (degrees)', fontsize=14, color=color2) for tl in ax2.get_yticklabels(): tl.set_color(color2) ax1.set_title('Dependence of seasonal cycle phase and amplitude on water depth', fontsize=16) ax1.grid() ax1.plot([2.5, 2.5], [0, 1], 'k-'); """ Explanation: <a id='section2'></a> 2. Analytical toy model of the seasonal cycle What factors determine the above pattern of seasonal temperatures? How large is the winter-to-summer variation in temperature? What is its phasing relative to the seasonal variations in insolation? We will start to examine this in a very simple zero-dimensional EBM. Suppose the seasonal cycle of insolation at a point is $$ Q = Q^* \sin\omega t + Q_0$$ where $\omega = 2\pi ~ \text{year}^{-1}$, $Q_0$ is the annual mean insolation, and $Q^*$ is the amplitude of the seasonal variations. 
Here $\omega ~ t=0$ is spring equinox, $\omega~t = \pi/2$ is summer solstice, $\omega~t = \pi$ is fall equinox, and $ \omega ~t = 3 \pi/2$ is winter solstice. Now suppose the temperature is governed by $$ C \frac{d T}{d t} = Q - (A + B~T) $$ so that we have a simple model $$ C \frac{d T}{d t} = Q^* \sin\omega t + Q_0 - (A + B~T) $$ We want to ask two questions: What is the amplitude of the seasonal temperature variation? When does the temperature maximum occur? We will look for an oscillating solution $$ T(t) = T_0 + T^* \sin(\omega t - \Phi) $$ where $\Phi$ is an unknown phase shift and $T^*$ is the unknown amplitude of seasonal temperature variations. The annual mean: Integrate over one year to find $$ \overline{T} = T_0 $$ $$ Q_0 = A + B ~ \overline{T} $$ so that $$T_0 = \frac{Q_0 - A}{B} $$ The seasonal problem Now we need to solve for $T^*$ and $\Phi$. Take the derivative $$ \frac{d T}{dt} = T^* \omega \cos(\omega t - \Phi) $$ and plug into the model equation to get \begin{align*} C~ T^* \omega \cos(\omega t - \Phi) &= Q^* \sin\omega t + Q_0 \\ & - \left( A + B~(T_0 + T^* \sin(\omega t - \Phi) )\right) \end{align*} Subtracting out the annual mean leaves us with $$ C~ T^* \omega \cos(\omega t - \Phi) = Q^* \sin\omega t - B ~ T^* \sin(\omega t - \Phi) $$ Zero heat capacity: the radiative equilibrium solution It's instructive to first look at the case with $C=0$, which means that the system is not capable of storing heat, and the temperature must always be in radiative equilibrium with the insolation. In this case we would have $$ Q^* \sin\omega t = B ~ T^* \sin(\omega t - \Phi) $$ which requires that the phase shift is $$ \Phi = 0 $$ and the amplitude is $$ T^* = \frac{Q^*}{B} $$ With no heat capacity, there can be no phase shift! The temperature goes up and down in lockstep with the insolation. As we will see, the amplitude of the temperature variations is maximum in this limit. 
As a practical example: at 45ºN the amplitude of the seasonal insolation cycle is about 180 W m$^{-2}$ (see the Insolation notes -- the difference between insolation at summer and winter solstice is about 360 W m$^{-2}$, which we divide by two to get the amplitude of seasonal variations). We will follow our previous EBM work and take $B = 2$ W m$^{-2}$ K$^{-1}$. This would give a seasonal temperature amplitude of 90ºC! This highlights the important role of heat capacity in buffering the seasonal variations in sunlight. Non-dimensional heat capacity parameter We can rearrange the seasonal equation to give $$ \frac{C~\omega}{B} \cos(\omega t - \Phi) + \sin(\omega t - \Phi) = \frac{Q^*}{B~T^*} \sin\omega t $$ The heat capacity appears in our equation through the non-dimensional ratio $$ \tilde{C} = \frac{C~\omega}{B} $$ This parameter measures the efficiency of heat storage versus damping of energy anomalies through longwave radiation to space in our system. We will now use the trigonometric identities \begin{align} \cos(\omega t - \Phi) &= \cos\omega t \cos\Phi + \sin\omega t \sin\Phi \\ \sin(\omega t - \Phi) &= \sin\omega t \cos\Phi - \cos\omega t \sin\Phi \end{align} to express our equation as \begin{align} \frac{Q^*}{B~T^*} \sin\omega t = &\tilde{C} \cos\omega t \cos\Phi \\ + &\tilde{C} \sin\omega t \sin\Phi \\ + &\sin\omega t \cos\Phi \\ - &\cos\omega t \sin\Phi \end{align} Now gathering together all terms in $\cos\omega t$ and $\sin\omega t$: $$ \cos\omega t \left( \tilde{C} \cos\Phi - \sin\Phi \right) = \sin\omega t \left( \frac{Q^*}{B~T^*} - \tilde{C} \sin\Phi - \cos\Phi \right) $$ Solving for the phase shift The equation above must be true for all $t$, which means that the sum of terms in each set of parentheses must be zero. 
We therefore have an equation for the phase shift $$ \tilde{C} \cos\Phi - \sin\Phi = 0 $$ which means that the phase shift is $$ \Phi = \arctan \tilde{C} $$ Solving for the amplitude The other equation is $$ \frac{Q^*}{B~T^*} - \tilde{C} \sin\Phi - \cos\Phi = 0 $$ or $$ \frac{Q^*}{B~T^*} - \cos\Phi \left( 1+ \tilde{C}^2 \right) = 0 $$ which we solve for $T^*$ to get $$ T^* = \frac{Q^*}{B} \frac{1}{\left( 1+ \tilde{C}^2 \right) \cos\left(\arctan \tilde{C} \right) } = \frac{Q^*}{B} \frac{1}{\sqrt{1+ \tilde{C}^2}} $$ Shallow water limit: In the low heat capacity limit, $$ \tilde{C} << 1 $$ the phase shift is $$ \Phi \approx \tilde{C} $$ and the amplitude is $$ T^* \approx \frac{Q^*}{B} \left( 1 - \frac{\tilde{C}^2}{2} \right) $$ Notice that for a system with very little heat capacity, the phase shift approaches zero and the amplitude approaches its maximum value $T^* = \frac{Q^*}{B}$. In the shallow water limit the temperature maximum will occur just slightly after the insolation maximum, and the seasonal temperature variations will be large. Deep water limit: Suppose instead we have an infinitely large heat reservoir (e.g. very deep ocean mixed layer). In the limit $\tilde{C} \rightarrow \infty$, the phase shift tends toward $$ \Phi \rightarrow \frac{\pi}{2} $$ so the warming is nearly perfectly out of phase with the insolation -- peak temperature would occur at fall equinox. But the amplitude in this limit is very small! $$ T^* \rightarrow 0 $$ What values of $\tilde{C}$ are realistic? We need to evaluate $$ \tilde{C} = \frac{C~\omega}{B} $$ for reasonable values of $C$ and $B$. $B$ is the longwave radiative feedback in our system: a measure of how efficiently a warm anomaly is radiated away to space. We have previously chosen $B = 2$ W m$^{-2}$ K$^{-1}$. $C$ is the heat capacity of the whole column, a number in J m$^{-2}$ K$^{-1}$. 
Heat capacity of the atmosphere Integrating from the surface to the top of the atmosphere, we can write $$ C_a = \int_0^{p_s} c_p \frac{dp}{g} $$ where $c_p = 10^3$ J kg$^{-1}$ K$^{-1}$ is the specific heat at constant pressure for a unit mass of air, and $dp/g$ is a mass element. This gives $C_a \approx 10^7$ J m$^{-2}$ K$^{-1}$. Heat capacity of a water surface As we wrote back in Lecture 2, the heat capacity for a well-mixed column of water is $$C_w = c_w \rho_w H_w $$ where $c_w = 4 \times 10^3$ J kg$^{-1}$ $^\circ$C$^{-1}$ is the specific heat of water, $\rho_w = 10^3$ kg m$^{-3}$ is the density of water, and $H_w $ is the depth of the water column The heat capacity of the entire atmosphere is thus equivalent to 2.5 meters of water. $\tilde{C}$ for a dry land surface A dry land surface has very little heat capacity and $C$ is actually dominated by the atmosphere. So we can take $C = C_a = 10^7$ J m$^{-2}$ K$^{-1}$ as a reasonable lower bound. So our lower bound on $\tilde{C}$ is thus, taking $B = 2$ W m$^{-2}$ K$^{-1}$ and $\omega = 2\pi ~ \text{year}^{-1} = 2 \times 10^{-7} \text{ s}^{-1}$: $$ \tilde{C} = 1 $$ $\tilde{C}$ for a 100 meter ocean mixed layer Setting $H_w = 100$ m gives $C_w = 4 \times 10^8$ J m$^{-2}$ K$^{-1}$. Then our non-dimensional parameter is $$ \tilde{C} = 40 $$ The upshot: $\tilde{C}$ is closer to the deep water limit Even for a dry land surface, $\tilde{C}$ is not small. This means that there is always going to be a substantial phase shift in the timing of the peak temperatures, and a reduction in the seasonal amplitude. 
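The two estimates above are easy to check numerically. This standalone sketch uses only numpy, with the same constants quoted in the text ($B = 2$ W m$^{-2}$ K$^{-1}$, one cycle per year, and 3.15e7 taken as the approximate number of seconds in a year):

```python
import numpy as np

B = 2.0                      # longwave feedback, W m-2 K-1 (as above)
omega = 2 * np.pi / 3.15e7   # one cycle per year, in s-1

for label, C in [('dry land (atmosphere only)', 1e7),
                 ('100 m ocean mixed layer', 4e8)]:
    Ctilde = C * omega / B
    lag_days = np.arctan(Ctilde) / omega / 86400.
    # 1/((1 + Ctilde**2) * cos(arctan(Ctilde))) simplifies to 1/sqrt(1 + Ctilde**2)
    amp = 1.0 / np.sqrt(1.0 + Ctilde**2)
    print('%s: Ctilde = %.1f, phase lag = %.0f days, amplitude = %.2f of Q*/B'
          % (label, Ctilde, lag_days, amp))
```

This reproduces the $\tilde{C} \approx 1$ and $\tilde{C} \approx 40$ values quoted above, with phase lags of roughly one-eighth and one-quarter of a year respectively.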
Plot the full solution for a range of water depths End of explanation """ fig, ax = plt.subplots() years = np.linspace(0,2) Harray = np.array([0., 2.5, 10., 50.]) for Hw in Harray: Ctilde = const.cw * const.rho_w * Hw * omega / B Phi = np.arctan(Ctilde) ax.plot(years, np.sin(2*np.pi*years - Phi)/np.cos(Phi)/(1+Ctilde**2), label=Hw) ax.set_xlabel('Years', fontsize=14) ax.set_ylabel('Seasonal amplitude ($Q^* / B$)', fontsize=14) ax.set_title('Solution of toy seasonal model for several different water depths', fontsize=14) ax.legend(); ax.grid() """ Explanation: The blue line shows the amplitude of the seasonal cycle of temperature, expressed as a fraction of its maximum value $\frac{Q^*}{B}$ (the value that would occur if the system had zero heat capacity so that temperatures were always in radiative equilibrium with the instantaneous insolation). The red line shows the phase lag (in degrees) of the temperature cycle relative to the insolation cycle. The vertical black line indicates 2.5 meters of water, which is the heat capacity of the atmosphere and thus our effective lower bound on total column heat capacity. The seasonal phase shift Even for the driest surfaces the phase shift is about 45º and the amplitude is half of its theoretical maximum. For most wet surfaces the cycle is damped out and delayed further. Of course we are already familiar with this phase shift from our day-to-day experience. Our calendar says that summer "begins" at the solstice and last until the equinox. End of explanation """ # for convenience, set up a dictionary with our reference parameters param = {'A':210, 'B':2, 'a0':0.354, 'a2':0.25, 'D':0.6} param # We can pass the entire dictionary as keyword arguments using the ** notation model1 = climlab.EBM_seasonal(**param, name='Seasonal EBM') print( model1) """ Explanation: The blue curve in this figure is in phase with the insolation. <a id='section3'></a> 3. 
Exploring the amplitude of the seasonal cycle with an EBM Something important is missing from this toy model: heat transport! The amplitude of the seasonal cycle of insolation increases toward the poles, but the seasonal temperature variations are partly mitigated by heat transport from lower, warmer latitudes. Our 1D diffusive EBM is the appropriate tool for exploring this further. We are looking at the 1D (zonally averaged) energy balance model with diffusive heat transport. The equation is $$ C \frac{\partial T_s}{\partial t} = (1-\alpha) ~ Q - \left( A + B~T_s \right) + \frac{D}{\cos⁡\phi } \frac{\partial }{\partial \phi} \left( \cos⁡\phi ~ \frac{\partial T_s}{\partial \phi} \right) $$ with the albedo given by $$ \alpha(\phi) = \alpha_0 + \alpha_2 P_2(\sin\phi) $$ and we will use climlab.EBM_seasonal to solve this model numerically. One handy feature of climlab process code: the function integrate_years() automatically calculates the time averaged temperature. So if we run it for exactly one year, we get the annual mean temperature (and many other diagnostics) saved in the dictionary timeave. We will look at the seasonal cycle of temperature in three different models with different heat capacities (which we express through an equivalent depth of water in meters). All other parameters will be as chosen in Lecture 16 (which focussed on tuning the EBM to the annual mean energy budget). End of explanation """ # We will try three different water depths water_depths = np.array([2., 10., 50.]) num_depths = water_depths.size Tann = np.empty( [model1.lat.size, num_depths] ) models = [] for n in range(num_depths): ebm = climlab.EBM_seasonal(water_depth=water_depths[n], **param) models.append(ebm) models[n].integrate_years(20., verbose=False ) models[n].integrate_years(1., verbose=False) Tann[:,n] = np.squeeze(models[n].timeave['Ts']) """ Explanation: Notice that this model has an insolation subprocess called DailyInsolation, rather than AnnualMeanInsolation. 
These should be fairly self-explanatory. End of explanation """ lat = model1.lat fig, ax = plt.subplots() ax.plot(lat, Tann) ax.set_xlim(-90,90) ax.set_xlabel('Latitude') ax.set_ylabel('Temperature (degC)') ax.set_title('Annual mean temperature in the EBM') ax.legend( water_depths ) """ Explanation: All models should have the same annual mean temperature: End of explanation """ num_steps_per_year = int(model1.time['num_steps_per_year']) Tyear = np.empty((lat.size, num_steps_per_year, num_depths)) for n in range(num_depths): for m in range(num_steps_per_year): models[n].step_forward() Tyear[:,m,n] = np.squeeze(models[n].Ts) """ Explanation: There is no automatic function in the climlab code to keep track of minimum and maximum temperatures (though we might add that in the future!) Instead we'll step through one year "by hand" and save all the temperatures. End of explanation """ fig = plt.figure( figsize=(16,10) ) ax = fig.add_subplot(2,num_depths,2) cax = ax.contourf(np.arange(12)+0.5, lat_ncep, Ts_ncep.mean(dim='lon').transpose(), levels=clevels, cmap=plt.cm.seismic, vmin=Tmin, vmax=Tmax) ax.set_xlabel('Month') ax.set_ylabel('Latitude') cbar = plt.colorbar(cax) ax.set_title('Zonal mean surface temperature - observed (degC)', fontsize=20) for n in range(num_depths): ax = fig.add_subplot(2,num_depths,num_depths+n+1) cax = ax.contourf(4*np.arange(num_steps_per_year), lat, Tyear[:,:,n], levels=clevels, cmap=plt.cm.seismic, vmin=Tmin, vmax=Tmax) cbar1 = plt.colorbar(cax) ax.set_title('water depth = %.0f m' %models[n].param['water_depth'], fontsize=20 ) ax.set_xlabel('Days of year', fontsize=14 ) ax.set_ylabel('Latitude', fontsize=14 ) """ Explanation: Make a figure to compare the observed zonal mean seasonal temperature cycle to what we get from the EBM with different heat capacities: End of explanation """ def initial_figure(models): fig, axes = plt.subplots(1,len(models), figsize=(15,4)) lines = [] for n in range(len(models)): ax = axes[n] c1 = 'b' Tsline = 
ax.plot(lat, models[n].Ts, c1)[0] ax.set_title('water depth = %.0f m' %models[n].param['water_depth'], fontsize=20 ) ax.set_xlabel('Latitude', fontsize=14 ) if n == 0: ax.set_ylabel('Temperature', fontsize=14, color=c1 ) ax.set_xlim([-90,90]) ax.set_ylim([-60,60]) for tl in ax.get_yticklabels(): tl.set_color(c1) ax.grid() c2 = 'r' ax2 = ax.twinx() Qline = ax2.plot(lat, models[n].insolation, c2)[0] if n == 2: ax2.set_ylabel('Insolation (W m$^{-2}$)', color=c2, fontsize=14) for tl in ax2.get_yticklabels(): tl.set_color(c2) ax2.set_xlim([-90,90]) ax2.set_ylim([0,600]) lines.append([Tsline, Qline]) return fig, axes, lines def animate(step, models, lines): for n, ebm in enumerate(models): ebm.step_forward() # The rest of this is just updating the plot lines[n][0].set_ydata(ebm.Ts) lines[n][1].set_ydata(ebm.insolation) return lines # Plot initial data fig, axes, lines = initial_figure(models) # Some imports needed to make and display animations from IPython.display import HTML from matplotlib import animation num_steps = int(models[0].time['num_steps_per_year']) ani = animation.FuncAnimation(fig, animate, frames=num_steps, interval=80, fargs=(models, lines), ) HTML(ani.to_html5_video()) """ Explanation: Which one looks more realistic? Depends a bit on where you look. But overall, the observed seasonal cycle matches the 10 meter case best. The effective heat capacity governing the seasonal cycle of the zonal mean temperature is closer to 10 meters of water than to either 2 or 50 meters. Making an animation of the EBM solutions Let's animate the seasonal cycle of insolation and temperature in our models with the three different water depths End of explanation """ orb_highobl = {'ecc':0., 'obliquity':90., 'long_peri':0.} print( orb_highobl) model_highobl = climlab.EBM_seasonal(orb=orb_highobl, **param) print( model_highobl.param['orb']) """ Explanation: <a id='section4'></a> 4. 
The seasonal cycle for a planet with 90º obliquity The EBM code uses our familiar insolation.py code to calculate insolation, and therefore it's easy to set up a model with different orbital parameters. Here is an example with very different orbital parameters: 90º obliquity. We looked at the distribution of insolation by latitude and season for this type of planet in the last homework. End of explanation """ Tann_highobl = np.empty( [lat.size, num_depths] ) models_highobl = [] for n in range(num_depths): model = climlab.EBM_seasonal(water_depth=water_depths[n], orb=orb_highobl, **param) models_highobl.append(model) models_highobl[n].integrate_years(40., verbose=False ) models_highobl[n].integrate_years(1., verbose=False) Tann_highobl[:,n] = np.squeeze(models_highobl[n].timeave['Ts']) Tyear_highobl = np.empty([lat.size, num_steps_per_year, num_depths]) for n in range(num_depths): for m in range(num_steps_per_year): models_highobl[n].step_forward() Tyear_highobl[:,m,n] = np.squeeze(models_highobl[n].Ts) """ Explanation: Repeat the same procedure to calculate and store temperature throughout one year, after letting the models run out to equilibrium. 
End of explanation """ fig = plt.figure( figsize=(16,5) ) Tmax_highobl = 125; Tmin_highobl = -Tmax_highobl; delT_highobl = 10 clevels_highobl = np.arange(Tmin_highobl, Tmax_highobl+delT_highobl, delT_highobl) for n in range(num_depths): ax = fig.add_subplot(1,num_depths,n+1) cax = ax.contourf( 4*np.arange(num_steps_per_year), lat, Tyear_highobl[:,:,n], levels=clevels_highobl, cmap=plt.cm.seismic, vmin=Tmin_highobl, vmax=Tmax_highobl ) cbar1 = plt.colorbar(cax) ax.set_title('water depth = %.0f m' %models[n].param['water_depth'], fontsize=20 ) ax.set_xlabel('Days of year', fontsize=14 ) ax.set_ylabel('Latitude', fontsize=14 ) """ Explanation: And plot the seasonal temperature cycle same as we did above: End of explanation """ lat2 = np.linspace(-90, 90, 181) days = np.linspace(1.,50.)/50 * const.days_per_year Q_present = climlab.solar.insolation.daily_insolation( lat2, days ) Q_highobl = climlab.solar.insolation.daily_insolation( lat2, days, orb_highobl ) Q_present_ann = np.mean( Q_present, axis=1 ) Q_highobl_ann = np.mean( Q_highobl, axis=1 ) fig, ax = plt.subplots() ax.plot( lat2, Q_present_ann, label='Earth' ) ax.plot( lat2, Q_highobl_ann, label='90deg obliquity' ) ax.grid() ax.legend(loc='lower center') ax.set_xlabel('Latitude', fontsize=14 ) ax.set_ylabel('W m$^{-2}$', fontsize=14 ) ax.set_title('Annual mean insolation for two different obliquities', fontsize=16) """ Explanation: Note that the temperature range is much larger than for the Earth-like case above (but same contour interval, 10 degC). Why is the temperature so uniform in the north-south direction with 50 meters of water? To see the reason, let's plot the annual mean insolation at 90º obliquity, alongside the present-day annual mean insolation: End of explanation """ %load_ext version_information %version_information numpy, xarray, climlab """ Explanation: Though this is a bit misleading, because our model prescribes an increase in albedo from the equator to the pole. 
So the absorbed shortwave gradients look even more different. If you are interested in how ice-albedo feedback might work on a high-obliquity planet with a cold equator, then I suggest you take a look at this paper: Rose, Cronin and Bitz (2017): Ice Caps and Ice Belts: The Effects of Obliquity on Ice−Albedo Feedback, The Astrophysical Journal 846, doi:10.3847/1538-4357/aa8306 <div class="alert alert-success"> [Back to ATM 623 notebook home](../index.ipynb) </div> Version information End of explanation """
dchud/warehousing-course
lectures/week-11-20151124-redis-intro.ipynb
cc0-1.0
import redis """ Explanation: A brief tour of Redis A one-hour or less tour of Redis. tl:dr version: If you don't have time to read/run this, go to Try Redis and try it yourself. Redis is a data structure server. Not quite a database, not quite a key-value store. It is very fast and is a great tool for rapid analysis and other cases when you need something more than "just python" or "just R" but don't want to take the time to define and implement an RDBMS schema, etc. You have it on your VM. Redis stands for REmote DIctionary Server. The Try Redis app is easy for a quick tour; for a few more details, read the introduction to data types. Getting started Redis is a server process that you connect to with a client. On your VM, you can start it with the redis-server command, but it's best to run it in its own terminal, or to start it with the server. For our VM, the server is already running. You just need to connect to it with a client. You can do this in the shell using redis-cli, at which point you can send and receive commands directly to Redis. Here, though, let's use it with Python. We'll probably want some of Python's other facilities to read files, control flow, manage variables, etc. Note: this is a python 2 notebook. End of explanation """ import redis """ Explanation: Note: If this happens to you, just do this in the shell: % sudo apt-get install python-redis (Your password is "vagrant".) End of explanation """ r = redis.StrictRedis() """ Explanation: Remember we need to connect to the server, using Python as the client, just like we would connect to a database server. This will connect using the default port and host, which the Redis server on our VMs uses. End of explanation """ r.set('hi', 5) r.get('hi') r.get('bye') r.set('bye', 500) r.get('bye') """ Explanation: The simplest use of Redis is as a key-value store. We can use the get and set commands to stash values for arbitrary keys. 
End of explanation """ r2 = redis.StrictRedis() r2.get('bye') r2.set('new key', 10) r.get('new key') """ Explanation: Not particularly fancy, but useful. Why is this different from just using Python variables? For one thing, it's a server, so you can have multiple clients connecting. End of explanation """ r.get('hi') # increment the key 'hi' r.incr('hi') r.incr('hi') r.incr('hi', 20) r.decr('hi') r.decr('hi', 3) r.get('hi') """ Explanation: r and r2 could be different programs, or different users, or different languages. Much like a full RDBMS environment, the server backend supports multiple concurrent users. Unlike an RDBMS, though, Redis doesn't have the same sophisticated notion of access controls, so any connecting client can access, change, or delete data. More than just keys and values - basic data structures Just storing keys and values on a server still isn't terribly exciting. Keep in mind that Redis is a data structure server. With that in mind, it's more interesting to look at some of its data structures, such as counters, which (unsurprisingly) track and update counts of things. End of explanation """ r.get('hi') * 5 int(r.get('hi')) * 5 """ Explanation: Internally, Redis stores strings, so keep in mind that you'll have to cast values before doing math. End of explanation """ r.sadd('my set', 'thing one') r.sadd('my set', 'thing two', 'thing three', 'something else') r.smembers('my set') r.sadd('another set', 'thing two', 'thing three', 55, 'thing six') r.smembers('another set') r.sinter('my set', 'another set') r.sunion('my set', 'another set') """ Explanation: Counters are just the beginning. 
Next, we have sets: End of explanation """ len(r.smembers('my set')) [x.upper() for x in r.smembers('my set')] """ Explanation: And it's python, so we can do obvious things like: End of explanation """ r.zadd('sorted', 5, 'blue') r.zadd('sorted', 3, 'red') r.zadd('sorted', 7, 'purple') r.zadd('sorted', 10, 'pink') r.zadd('sorted', 6, 'grey') r.zrangebyscore('sorted', 0, 10) r.zrevrangebyscore('sorted', 100, 0, withscores=True) r.zrank('sorted', 'red') r.zincrby('sorted', 'red', 5) r.zrevrangebyscore('sorted', 100, 0, withscores=True) r.zrank('sorted', 'red') """ Explanation: See what's going on here? Redis stores data structures as a server, but you can still manipulate those structures as if they were any other python variable. The differences are that they live on the server, so can be shared, and that this requires communication overhead between the client and the server. So doesn't that slow things down? Doesn't python already have a set() built-in type? (Yes, it does.) Why is it worth the overhead? More interesting data structures More interesting, perhaps, are sorted sets. End of explanation """ r.zadd('sales:10pm', 3, 'p1') r.zadd('sales:10pm', 1, 'p3') r.zadd('sales:10pm', 12, 'p1') r.zadd('sales:10pm', 5, 'p2') r.zadd('sales:11pm', 4, 'p1') r.zadd('sales:11pm', 8, 'p2') r.zadd('sales:11pm', 5, 'p2') r.zadd('sales:11pm', 2, 'p1') r.zadd('sales:11pm', 7, 'p1') """ Explanation: Here we've created a set that stores scores and automatically sorts the set members by scores. You can add new items or update the scores at any time, and fetch the rank order as well. Think "top ten anything". A note on keys The keys we used are named as you wish. So, for example, you can define key naming conventions that add identifiers to the keys for easy programmatic use. Let's say you're churning through a log of product sales orders and want to count the top sales for a given hour.
End of explanation """ r.zrevrangebyscore('sales:10pm', 100, 0, withscores=True) r.zrevrangebyscore('sales:11pm', 100, 0, withscores=True) r.zunionstore('sales:combined', ['sales:10pm', 'sales:11pm']) r.zrevrangebyscore('sales:combined', 100, 0, withscores=True) """ Explanation: csvkit alone won't do the math for you, though csvsql could help. You could load your orders into R and do it, but perhaps you don't remember R and dplyr commands. In a little loop of python, you can throw all this data at Redis and it will return answers to useful questions. End of explanation """ import csv MAX_COUNT = 10000 count = 0 fp = open('bikeshare-q1.csv', 'rb') reader = csv.DictReader(fp) while count < MAX_COUNT: ride = reader.next() r.zincrby('start_station', ride['start_station'], 1) r.zincrby('end_station', ride['end_station'], 1) r.rpush('bike:%s' % ride['bike_id'], ride['end_station']) count += 1 r.zrevrangebyscore('start_station', 10000, 0, start=0, num=10, withscores=True, score_cast_func=int) print 'last bike seen:', ride['bike_id'] r.lrange('bike:%s' % ride['bike_id'], 0, 50) """ Explanation: Starting to get pretty cool, right? A practical example Let's look at something more concrete, using a familiar source: bikeshare data. What if we want to count station use and track bike movements? End of explanation """
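If no Redis server is handy, the same hourly top-sellers bookkeeping can be cross-checked in plain Python with collections.Counter. This is a rough analogue of the `sales:<hour>` key convention above, using a hypothetical mini-batch of orders rather than the real bikeshare file:

```python
from collections import Counter

# Hypothetical mini-batch of (hour, product) order records,
# standing in for the real log parsed in the notebook.
orders = [('10pm', 'p1'), ('10pm', 'p1'), ('10pm', 'p3'),
          ('11pm', 'p2'), ('11pm', 'p2'), ('11pm', 'p1')]

# One counter per hour, mirroring the 'sales:<hour>' key convention.
sales = {}
for hour, product in orders:
    sales.setdefault(hour, Counter())[product] += 1

# Counter.most_common plays the role of ZREVRANGEBYSCORE.
print(sales['10pm'].most_common(1))

# Summing counters mirrors ZUNIONSTORE.
combined = sales['10pm'] + sales['11pm']
print(combined.most_common())
```

Of course this loses what makes Redis interesting — the counters live in one process instead of being shared between clients — but it is a handy way to check the counting logic itself.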
karlstroetmann/Algorithms
Python/Chapter-07/ArrayMap.ipynb
gpl-2.0
class ArrayMap: def __init__(self, n): self.mArray = [None] * n def find(self, k): return self.mArray[k] def insert(self, k, v): self.mArray[k] = v def delete(self, k): self.mArray[k] = None def __repr__(self): result = '{ ' for key, value in enumerate(self.mArray): if value is not None: result += f'{key}: {value}, ' if result == '{ ': return '{}' result = result[:-2] + ' }' return result squares = ArrayMap(10) squares for i in range(10): squares.insert(i, i * i) squares for i in range(10): print(f'find({i}) = {squares.find(i)}'); for i in range(10): squares.delete(i) squares """ Explanation: Implementing Maps as Arrays If the keys are natural numbers less than a given natural number n that is not too big, a map can be implemented via an array. The class ArrayMap shows how this is done. The constructor __init__ takes a second argument n. This argument specifies the size of the array. End of explanation """ n = 10000 %%time S = ArrayMap(n+1) for i in range(2, n+1): S.insert(i, i) for i in range(2, n // 2 + 1): for j in range(i, n+1): if i * j > n: break S.delete(i * j) for k in range(2, n+1): if S.find(k): print(k, end=' ') """ Explanation: Let's compute the primes up to $n = 10000$. End of explanation """
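As a quick sanity check (my addition, not part of the chapter), the deletion scheme used by the ArrayMap sieve can be reproduced on a plain list and compared against straightforward trial division:

```python
def primes_sieve(n):
    # Same deletion scheme as the ArrayMap version, but on a plain list:
    # delete i*j for j >= i until the product exceeds n.
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, n // 2 + 1):
        for j in range(i, n + 1):
            if i * j > n:
                break
            is_prime[i * j] = False
    return [k for k in range(2, n + 1) if is_prime[k]]

def primes_trial(n):
    # Reference implementation: trial division up to sqrt(k).
    return [k for k in range(2, n + 1)
            if all(k % d != 0 for d in range(2, int(k ** 0.5) + 1))]

assert primes_sieve(100) == primes_trial(100)
print(primes_sieve(30))
```

Every composite m = p*q with p <= q is deleted in the pass i = p, which is why stopping the outer loop at n // 2 is sufficient.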
benwaugh/NuffieldProject2016
notebooks/SimplifiedZZAnalysis.ipynb
mit
from ROOT import TChain, TH1F, TLorentzVector, TCanvas """ Explanation: Simplified ZZ analysis This is based on the ZZ analysis in the ATLAS outreach paper, but including all possible pairs of muons rather than selecting the combination closest to the Z mass. This time we will use ROOT histograms instead of Matplotlib: End of explanation """ data = TChain("mini"); # "mini" is the name of the TTree stored in the data files data.Add("http://atlas-opendata.web.cern.ch/atlas-opendata/release/samples/MC/mc_105986.ZZ.root") #data.Add("http://atlas-opendata.web.cern.ch/atlas-opendata/release/samples/Data/DataMuons.root") """ Explanation: Use some Monte Carlo ZZ events for testing before running on real data: End of explanation """ class Particle: ''' Represents a particle with a known type, charge and four-momentum ''' def __init__(self, four_momentum, pdg_code, charge): self.four_momentum = four_momentum self.typ = abs(pdg_code) self.charge = charge def leptons_from_event(event, pt_min=0.0): ''' Gets list of leptons from an event, subject to an optional minimum pT cut. ''' leptons = [] for i in xrange(event.lep_n): pt = event.lep_pt[i] if pt > pt_min: # only add lepton to output if it has enough pt p = TLorentzVector() p.SetPtEtaPhiE(pt, event.lep_eta[i], event.lep_phi[i], event.lep_E[i]) particle = Particle(p, event.lep_type[i], event.lep_charge[i]) leptons.append(particle) return leptons def pairs_from_leptons(leptons): ''' Get list of four-momenta for all possible opposite-charge pairs. 
''' neg = [] pos = [] for lepton in leptons: if lepton.charge > 0: pos.append(lepton) elif lepton.charge < 0: neg.append(lepton) else: print("Warning: unexpected neutral particle") pairs = [] for p in pos: pp = p.four_momentum for n in neg: if p.typ == n.typ: # only combine if they are same type (e or mu) pn = n.four_momentum ptot = pp + pn pairs.append(ptot) return pairs """ Explanation: Define a class and some functions that we can use for extracting the information we want from the events: End of explanation """ c1 = TCanvas("TheCanvas","Canvas for plotting histograms",800,600) h1 = TH1F("h1","Dilepton mass",200,0,200) num_events = data.GetEntries() for event_num in xrange(1000): # loop over the events data.GetEntry(event_num) # read the next event into memory leptons = leptons_from_event(data,10000) # pt cut of 10 GeV if len(leptons) == 4: # require exactly 4 "good" leptons pairs = pairs_from_leptons(leptons) for pair in pairs: m = pair.M()/ 1000. # convert from MeV to GeV h1.Fill(m) h1.Draw('E') c1.Draw() """ Explanation: Now we can look for events with exactly four "good" leptons (those with a big enough pT) and combine them in pairs to make Z candidates: End of explanation """
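The dilepton mass filled into the histogram is the Minkowski norm of the summed four-momenta, which TLorentzVector's M() computes. The same arithmetic can be checked without ROOT — a small sketch with toy numbers, not real event data:

```python
import math

def invariant_mass(p1, p2):
    """Invariant mass of two four-momenta given as (E, px, py, pz), in MeV."""
    E = p1[0] + p2[0]
    px = p1[1] + p2[1]
    py = p1[2] + p2[2]
    pz = p1[3] + p2[3]
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

# Two back-to-back 45.6 GeV massless leptons: the pair mass
# should come out at ~91.2 GeV, i.e. a Z boson.
lep_pos = (45600.0, 0.0, 0.0, 45600.0)   # along +z, MeV
lep_neg = (45600.0, 0.0, 0.0, -45600.0)  # along -z, MeV
print(invariant_mass(lep_pos, lep_neg) / 1000.0)  # in GeV
```

This is exactly what `(pp + pn).M()` does in the analysis loop, including the MeV-to-GeV division before filling the histogram.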
HarshaDevulapalli/foundations-homework
14/14 - TF-IDF Homework.ipynb
mit
# If you'd like to download it through the command line... !curl -O http://www.cs.cornell.edu/home/llee/data/convote/convote_v1.1.tar.gz # And then extract it through the command line... !tar -zxf convote_v1.1.tar.gz """ Explanation: Homework 14 (or so): TF-IDF text analysis and clustering Hooray, we kind of figured out how text analysis works! Some of it is still magic, but at least the TF and IDF parts make a little sense. Kind of. Somewhat. No, just kidding, we're professionals now. Investigating the Congressional Record The Congressional Record is more or less what happened in Congress every single day. Speeches and all that. A good large source of text data, maybe? Let's pretend it's totally secret but we just got it leaked to us in a data dump, and we need to check it out. It was leaked from this page here. End of explanation """ # glob finds files matching a certain filename pattern import glob import pandas as pd # Give me all the text files paths = glob.glob('convote_v1.1/data_stage_one/development_set/*') paths[:5] len(paths) """ Explanation: You can explore the files if you'd like, but we're going to get the ones from convote_v1.1/data_stage_one/development_set/. It's a bunch of text files. End of explanation """ speeches = [] for path in paths: with open(path) as speech_file: speech = { 'pathname': path, 'filename': path.split('/')[-1], 'content': speech_file.read() } speeches.append(speech) speeches_df = pd.DataFrame(speeches) speeches_df.head() """ Explanation: So great, we have 702 of them. Now let's import them. End of explanation """ speeches_df['content'].head(5) """ Explanation: In class we had the texts variable. For the homework you can just do speeches_df['content'] to get the same sort of list of stuff.
Take a look at the contents of the first 5 speeches End of explanation """ from sklearn.feature_extraction.text import CountVectorizer count_vectorizer = CountVectorizer(stop_words='english') X = count_vectorizer.fit_transform(speeches_df['content']) X.toarray() pd.DataFrame(X.toarray()) pd.DataFrame(X.toarray(), columns=count_vectorizer.get_feature_names()) """ Explanation: Doing our analysis Use the sklearn package and a plain boring CountVectorizer to get a list of all of the tokens used in the speeches. If it won't list them all, that's ok! Make a dataframe with those terms as columns. Be sure to include English-language stopwords. End of explanation """ from sklearn.feature_extraction.text import CountVectorizer count_vectorizer = CountVectorizer(stop_words='english',max_features=100) X = count_vectorizer.fit_transform(speeches_df['content']) X.toarray() pd.DataFrame(X.toarray()) pd.DataFrame(X.toarray(), columns=count_vectorizer.get_feature_names()) """ Explanation: Okay, it's far too big to even look at. Let's try to get a list of features from a new CountVectorizer that only takes the top 100 words. End of explanation """ df=pd.DataFrame(X.toarray(), columns=count_vectorizer.get_feature_names()) """ Explanation: Now let's push all of that into a dataframe with nicely named columns. End of explanation """ df['chairman'].value_counts().head(5) #Chairman is NOT mentioned in 250 speeches. len(df[df['chairman']==0]) len(df[(df['chairman']==0) & (df['mr']==0)]) """ Explanation: Everyone seems to start their speeches with "mr chairman" - how many speeches are there total, how many don't mention "chairman" and how many mention neither "mr" nor "chairman"? End of explanation """ df['thank'].max() df[df['thank']==9] #Speech No 9 """ Explanation: What is the index of the speech that is the most thankful, a.k.a. includes the word 'thank' the most times?
End of explanation """ ctdf=df[(df['china']!=0) & (df['trade']!=0)] nctdf=pd.DataFrame([ctdf['china'], ctdf['trade'], ctdf['china'] + ctdf['trade']], index=["china", "trade", "china+trade"]).T nctdf.sort_values(by='china+trade',ascending=False).head(3) """ Explanation: If I'm searching for China and trade, what are the top 3 speeches to read according to the CountVectoriser? End of explanation """ import re from nltk.stem.porter import PorterStemmer porter_stemmer = PorterStemmer() def stemming_tokenizer(str_input): words = re.sub(r"[^A-Za-z0-9\-]", " ", str_input).lower().split() words = [porter_stemmer.stem(word) for word in words] return words from sklearn.feature_extraction.text import TfidfVectorizer tfidf_vectorizer = TfidfVectorizer(stop_words='english', tokenizer=stemming_tokenizer, use_idf=False, norm='l1') X = tfidf_vectorizer.fit_transform(speeches_df['content']) newdf=pd.DataFrame(X.toarray(), columns=tfidf_vectorizer.get_feature_names()) newdf.head(5) """ Explanation: Now what if I'm using a TfidfVectorizer? End of explanation """ # index 0 is the first speech, which was the first one imported. paths[0] # Pass that into 'cat' using { } which lets you put variables in shell commands # that way you can pass the path to cat !type "convote_v1.1\data_stage_one\development_set\052_400011_0327014_DON.txt" #!type "{paths[0].replace("/","\\")}" !type "{paths[0].replace("/","\\")}" """ Explanation: What's the content of the speeches? Here's a way to get them: End of explanation """ ecdf=pd.DataFrame([newdf['elect'], newdf['chao'], newdf['elect'] + newdf['chao']], index=["elections", "chaos", "elections+chaos"]).T ecdf.sort_values(by='elections+chaos',ascending=False).head(5) """ Explanation: Now search for something else! Another two terms that might show up. elections and chaos? Whatever you think might be interesting.
End of explanation """ #SIMPLE COUNTING VECTORIZER count_vectorizer = CountVectorizer(stop_words='english',max_features=100) X = count_vectorizer.fit_transform(speeches_df['content']) from sklearn.cluster import KMeans number_of_clusters = 8 km = KMeans(n_clusters=number_of_clusters) km.fit(X) print("Top terms per cluster:") order_centroids = km.cluster_centers_.argsort()[:, ::-1] terms = count_vectorizer.get_feature_names() for i in range(number_of_clusters): top_ten_words = [terms[ind] for ind in order_centroids[i, :5]] print("Cluster {}: {}".format(i, ' '.join(top_ten_words))) #SIMPLE TERM FREQUENCY VECTORIZER vectorizer = TfidfVectorizer(use_idf=False, tokenizer=stemming_tokenizer, stop_words='english') X = vectorizer.fit_transform(speeches_df['content']) from sklearn.cluster import KMeans number_of_clusters = 8 km = KMeans(n_clusters=number_of_clusters) km.fit(X) print("Top terms per cluster:") order_centroids = km.cluster_centers_.argsort()[:, ::-1] terms = vectorizer.get_feature_names() for i in range(number_of_clusters): top_ten_words = [terms[ind] for ind in order_centroids[i, :5]] print("Cluster {}: {}".format(i, ' '.join(top_ten_words))) # SIMPLE TFIDF VECTORIZER vectorizer = TfidfVectorizer(tokenizer=stemming_tokenizer,use_idf=True,stop_words='english') X = vectorizer.fit_transform(speeches_df['content']) from sklearn.cluster import KMeans number_of_clusters = 8 km = KMeans(n_clusters=number_of_clusters) km.fit(X) print("Top terms per cluster:") order_centroids = km.cluster_centers_.argsort()[:, ::-1] terms = vectorizer.get_feature_names() for i in range(number_of_clusters): top_ten_words = [terms[ind] for ind in order_centroids[i, :5]] print("Cluster {}: {}".format(i, ' '.join(top_ten_words))) """ Explanation: Enough of this garbage, let's cluster Using a simple counting vectorizer, cluster the documents into eight categories, telling me what the top terms are per category.
Using a term frequency vectorizer, cluster the documents into eight categories, telling me what the top terms are per category. Using a term frequency inverse document frequency vectorizer, cluster the documents into eight categories, telling me what the top terms are per category. End of explanation """ paths = glob.glob('hp/*') paths[0] len(paths) speeches = [] for path in paths: with open(path) as speech_file: speech = { 'pathname': path, 'filename': path.split('/')[-1], 'content': speech_file.read() } speeches.append(speech) hpfanfic_df = pd.DataFrame(speeches) hpfanfic_df.head() hpfanfic_df['content'].head(5) vectorizer = TfidfVectorizer(use_idf=True, max_features=10000, stop_words='english') X = vectorizer.fit_transform(hpfanfic_df['content']) print(vectorizer.get_feature_names()[:10]) df = pd.DataFrame(X.toarray(), columns=vectorizer.get_feature_names()) df.head(5) # re-fit k-means on the fanfic matrix, with 2 clusters for the two types number_of_clusters = 2 km = KMeans(n_clusters=number_of_clusters) km.fit(X) print("Top terms per cluster:") order_centroids = km.cluster_centers_.argsort()[:, ::-1] terms = vectorizer.get_feature_names() for i in range(number_of_clusters): top_ten_words = [terms[ind] for ind in order_centroids[i, :5]] print("Cluster {}: {}".format(i, ' '.join(top_ten_words))) hpfanfic_df['category'] = km.labels_ hpfanfic_df.head() """ Explanation: Which one do you think works the best? IDF works the best Harry Potter time I have a scraped collection of Harry Potter fanfiction at https://github.com/ledeprogram/courses/raw/master/algorithms/data/hp.zip. I want you to read them in, vectorize them and cluster them. Use this process to find out the two types of Harry Potter fanfiction. What is your hypothesis? End of explanation """
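To demystify what the vectorizers in this homework compute, here is a from-scratch sketch of raw term frequency and plain inverse document frequency on a toy corpus. Note that scikit-learn's TfidfVectorizer uses a smoothed idf and L2 normalization by default, so its numbers won't match these exactly:

```python
import math

# Toy corpus: three tiny "documents", each a list of tokens.
corpus = [['china', 'trade', 'china'],
          ['trade', 'deal'],
          ['china', 'policy']]

def tf(term, doc):
    # Raw term frequency: count of the term in one document.
    return doc.count(term)

def idf(term, docs):
    # Plain inverse document frequency: log(N / document frequency).
    df = sum(1 for d in docs if term in d)
    return math.log(len(docs) / df)

print(tf('china', corpus[0]))          # 'china' appears twice in doc 0
print(round(idf('deal', corpus), 3))   # rare term -> higher idf
print(round(idf('china', corpus), 3))  # common term -> lower idf
```

The tf-idf weight of a term in a document is just the product of the two, which is why a word like "chairman" that appears everywhere gets down-weighted relative to a rarer word like "china".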
fcollonval/coursera_data_visualization
Chi-Square_Test.ipynb
mit
# Magic command to insert the graph directly in the notebook %matplotlib inline # Load a useful Python libraries for handling data import pandas as pd import numpy as np import statsmodels.formula.api as smf import seaborn as sns import scipy.stats as stats import matplotlib.pyplot as plt from IPython.display import Markdown, display """ Explanation: Data Analysis Tools Assignment: Running a Chi-Square Test of Independence Following is the Python program I wrote to fulfill the second assignment of the Data Analysis Tools online course. I decided to use Jupyter Notebook as it is a pretty way to write code and present results. As the previous assignment brought me to conclude my initial research question, I will look at a possible relationship between ethnicity (explanatory variable) and use of cannabis (response variable) from the NESARC database. As both variables are categoricals, the Chi-Square Test of Independence is the method to use. End of explanation """ nesarc = pd.read_csv('nesarc_pds.csv', low_memory=False) races = {1 : 'White', 2 : 'Black', 3 : 'American India \n Alaska', 4 : 'Asian \n Native Hawaiian \n Pacific', 5 : 'Hispanic or Latino'} subnesarc = (nesarc[['S3BQ1A5', 'ETHRACE2A']] .assign(S3BQ1A5=lambda x: pd.to_numeric(x['S3BQ1A5'].replace((2, 9), (0, np.nan)), errors='coerce')) .assign(ethnicity=lambda x: pd.Categorical(x['ETHRACE2A'].map(races)), use_cannabis=lambda x: pd.Categorical(x['S3BQ1A5'])) .dropna()) subnesarc.use_cannabis.cat.rename_categories(('No', 'Yes'), inplace=True) """ Explanation: Data management End of explanation """ g = sns.countplot(subnesarc['ethnicity']) _ = plt.title('Distribution of the ethnicity') g = sns.countplot(subnesarc['use_cannabis']) _ = plt.title('Distribution of ever use cannabis') """ Explanation: First, the distribution of both the use of cannabis and the ethnicity will be shown. 
End of explanation """ g = sns.factorplot(x='ethnicity', y='S3BQ1A5', data=subnesarc, kind="bar", ci=None) g.set_xticklabels(rotation=90) plt.ylabel('Ever use cannabis') _ = plt.title('Average number of cannabis users depending on the ethnicity') ct1 = pd.crosstab(subnesarc.use_cannabis, subnesarc.ethnicity) display(Markdown("Contingency table of observed counts")) ct1 # Note: normalize keyword is available starting from pandas version 0.18.1 ct2 = ct1/ct1.sum(axis=0) display(Markdown("Contingency table of observed counts normalized over each column")) ct2 """ Explanation: Variance analysis Now that the univariate distribution has been plotted and described, the bivariate graphics will be plotted in order to test our research hypothesis. From the bivariate graphic below, it seems that there are some differences. For example American Indian versus Asian seems quite different. End of explanation """ stats.chi2_contingency(ct1) """ Explanation: The Chi-Square test will be applied on all the data to test the following hypothesis: The null hypothesis is There is no relationship between the use of cannabis and the ethnicity. The alternate hypothesis is There is a relationship between the use of cannabis and the ethnicity. End of explanation """ list_races = list(races.keys()) p_values = dict() for i in range(len(list_races)): for j in range(i+1, len(list_races)): race1 = races[list_races[i]] race2 = races[list_races[j]] subethnicity = subnesarc.ETHRACE2A.map(dict(((list_races[i], race1),(list_races[j], race2)))) comparison = pd.crosstab(subnesarc.use_cannabis, subethnicity) display(Markdown("Crosstable to compare {} and {}".format(race1, race2))) display(comparison) display(comparison/comparison.sum(axis=0)) chi_square, p, _, expected_counts = stats.chi2_contingency(comparison) p_values[(race1, race2)] = p """ Explanation: The p-value of 3.7e-91 confirms that the null hypothesis can be safely rejected.
The next obvious question is which ethnic groups have a statistically significant difference regarding the use of cannabis. For that, the Chi-Square test will be performed on each pair of groups thanks to the following code. End of explanation """ df = pd.DataFrame(p_values, index=['p-value', ]) (df.stack(level=[0, 1])['p-value'] .rename('p-value') .to_frame() .assign(Ha=lambda x: x['p-value'] < 0.05 / len(p_values))) """ Explanation: If we put together all p-value results and test them against our threshold of 0.005, we get the table below. The threshold is the standard 0.05 threshold divided by the number of pairs in the explanatory variables (here 10). End of explanation """
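The χ² statistic that scipy returns can be reproduced by hand, which makes the Σ(O−E)²/E formula concrete — a sketch on toy counts, not the NESARC data. (One caveat: scipy.stats.chi2_contingency applies Yates' continuity correction to 2×2 tables by default, so its value on a 2×2 table differs slightly from this uncorrected statistic.)

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for a 2D contingency table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence: row total * column total / grand total.
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Toy 2x2 table: rows = cannabis use (no/yes), columns = two groups.
print(chi_square_stat([[30, 10], [20, 40]]))
```

A larger statistic means the observed counts sit further from what independence would predict, which is then converted to a p-value via the chi-square distribution with (rows−1)(cols−1) degrees of freedom.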
empet/Plotly-plots
Plotly-Slice-in-volumetric-data.ipynb
gpl-3.0
import numpy as np import plotly.graph_objects as go from IPython.display import IFrame """ Explanation: Slice in volumetric data, via Plotly A volume included in a parallelepiped is described by the values of a scalar field, $f(x,y,z)$, with $x\in[a,b]$, $y\in [c,d]$, $z\in[e,f]$. A slice in this volume is visualized by coloring the surface of the slice, according to the values of the function f, restricted to that surface. In order to plot a planar or a nonlinear slice of equation z=s(x,y) one proceeds as follows: define a meshgrid in x,y; evaluate z=s(x,y); define an instance of the Plotly Surface class, that represents the surface z=s(x,y); this surface is colored according to the values, f(x,y,z), at its points. More precisely, the normalized values of the function f are mapped to a colormap/colorscale. With obvious modifications we get slices of equation $x=s(y,z), y=s(z,x)$. End of explanation """ def get_the_slice(x,y,z, surfacecolor): return go.Surface(x=x, y=y, z=z, surfacecolor=surfacecolor, coloraxis='coloraxis') def get_lims_colors(surfacecolor):# color limits for a slice return np.min(surfacecolor), np.max(surfacecolor) """ Explanation: Define a function that returns a slice as a Plotly Surface: End of explanation """ scalar_f = lambda x,y,z: x*np.exp(-x**2-y**2-z**2) x = np.linspace(-2,2, 50) y = np.linspace(-2,2, 50) x, y = np.meshgrid(x,y) z = np.zeros(x.shape) surfcolor_z = scalar_f(x,y,z) sminz, smaxz = get_lims_colors(surfcolor_z) slice_z = get_the_slice(x, y, z, surfcolor_z) x = np.linspace(-2,2, 50) z = np.linspace(-2,2, 50) x, z = np.meshgrid(x,z) y = -0.5 * np.ones(x.shape) surfcolor_y = scalar_f(x,y,z) sminy, smaxy = get_lims_colors(surfcolor_y) vmin = min([sminz, sminy]) vmax = max([smaxz, smaxy]) slice_y = get_the_slice(x, y, z, surfcolor_y) """ Explanation: Let us plot the slices z=0 and y=-0.5 in the volume defined by: End of explanation """ def colorax(vmin, vmax): return dict(cmin=vmin, cmax=vmax) fig1 = go.Figure(data=[slice_z, slice_y]) fig1.update_layout(
title_text='Slices in volumetric data', title_x=0.5, width=700, height=700, scene_zaxis_range=[-2,2], coloraxis=dict(colorscale='BrBG', colorbar_thickness=25, colorbar_len=0.75, **colorax(vmin, vmax))) #fig1.show() from IPython.display import IFrame IFrame('https://chart-studio.plotly.com/~empet/13862', width=700, height=700) """ Explanation: In order to be able to compare the two slices, we choose a unique interval of values to be mapped to the colorscale: End of explanation """ alpha = np.pi/4 x = np.linspace(-2, 2, 50) y = np.linspace(-2, 2, 50) x, y = np.meshgrid(x,y) z = -x * np.tan(alpha) surfcolor_obl = scalar_f(x,y,z) smino, smaxo = get_lims_colors(surfcolor_obl) vmin = min([sminz, smino]) vmax = max([smaxz, smaxo]) slice_obl = get_the_slice(x,y,z, surfcolor_obl) fig2 = go.Figure(data=[slice_z, slice_obl], layout=fig1.layout) fig2.update_layout( coloraxis=colorax(vmin, vmax)) #fig2.show() IFrame('https://chart-studio.plotly.com/~empet/13864', width=700, height=700) from IPython.core.display import HTML def css_styling(): styles = open("./custom.css", "r").read() return HTML(styles) css_styling() """ Explanation: Oblique slice in volumetric data As an example we plot comparatively two slices: a slice through $z=0$ and an oblique planar slice, that is defined by rotating the plane z=0 by $\alpha=\pi/4$, about Oy. Rotating the plane $z=c$ about Oy (from Oz towards Ox) with $\alpha$ radians we get the plane of equation $z=c/\cos(\alpha)-x\tan(\alpha)$ End of explanation """
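The oblique-plane equation $z=c/\cos(\alpha)-x\tan(\alpha)$ can be verified numerically: rotating any point of the plane z=c about Oy (from Oz toward Ox) by α must land it on that surface. A quick math-only check:

```python
import math

def rotate_about_Oy(x, z, alpha):
    # Rotate the point (x, z) about the y-axis, taking Oz toward Ox by alpha radians.
    return (x * math.cos(alpha) + z * math.sin(alpha),
            -x * math.sin(alpha) + z * math.cos(alpha))

def oblique_z(x, c, alpha):
    # Equation of the rotated plane: z = c/cos(alpha) - x*tan(alpha).
    return c / math.cos(alpha) - x * math.tan(alpha)

alpha, c = math.pi / 4, 0.7
for x in (-2.0, 0.0, 1.3):
    xr, zr = rotate_about_Oy(x, c, alpha)
    assert abs(zr - oblique_z(xr, c, alpha)) < 1e-12
print('rotated points satisfy z = c/cos(a) - x*tan(a)')
```

With c = 0 and α = π/4 this reduces to z = −x, which is exactly the surface evaluated for the oblique slice above.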
Ccaccia73/semimonocoque
02_Triangular_Section.ipynb
mit
from pint import UnitRegistry import sympy import networkx as nx import numpy as np import matplotlib.pyplot as plt import sys %matplotlib inline from IPython.display import display """ Explanation: Semi-Monocoque Theory End of explanation """ from Section import Section """ Explanation: Import Section class, which contains all calculations End of explanation """ ureg = UnitRegistry() sympy.init_printing() """ Explanation: Initialization of sympy symbolic tool and pint for dimension analysis (not really implemented right now as not directly compatible with sympy) End of explanation """ A, A0, t, t0, a, b, h, L = sympy.symbols('A A_0 t t_0 a b h L', positive=True) """ Explanation: Define sympy parameters used for geometric description of sections End of explanation """ values = [(A, 150 * ureg.millimeter**2),(A0, 250 * ureg.millimeter**2),(a, 80 * ureg.millimeter), \ (b, 20 * ureg.millimeter),(h, 35 * ureg.millimeter),(L, 2000 * ureg.millimeter)] datav = [(v[0],v[1].magnitude) for v in values] """ Explanation: We also define numerical values for each symbol in order to plot scaled section and perform calculations End of explanation """ stringers = {1:[(sympy.Integer(0),h),A], 2:[(sympy.Integer(0),sympy.Integer(0)),A], 3:[(a,sympy.Integer(0)),A]} panels = {(1,2):t, (2,3):t, (3,1):t} """ Explanation: Triangular section Define graph describing the section: 1) stringers are nodes with parameters: - x coordinate - y coordinate - Area 2) panels are oriented edges with parameters: - thickness - length which is automatically calculated End of explanation """ S1 = Section(stringers, panels) S1.cycles """ Explanation: Define section and perform first calculations End of explanation """ start_pos={ii: [float(S1.g.node[ii]['ip'][i].subs(datav)) for i in range(2)] for ii in S1.g.nodes() } plt.figure(figsize=(12,8),dpi=300) nx.draw(S1.g,with_labels=True, arrows= True, pos=start_pos) plt.arrow(0,0,20,0) plt.arrow(0,0,0,20) #plt.text(0,0, 'CG', fontsize=24) plt.axis('equal')
plt.title("Section in starting reference Frame",fontsize=16); """ Explanation: Plot of S1 section in original reference frame Define a dictionary of coordinates used by Networkx to plot section as a Directed graph. Note that arrows are actually just thicker stubs End of explanation """ S1.Ixx0, S1.Iyy0, S1.Ixy0, S1.α0 """ Explanation: Expression of Inertial properties wrt Center of Gravity with original rotation End of explanation """ positions={ii: [float(S1.g.node[ii]['pos'][i].subs(datav)) for i in range(2)] for ii in S1.g.nodes() } x_ct, y_ct = S1.ct.subs(datav) plt.figure(figsize=(12,8),dpi=300) nx.draw(S1.g,with_labels=True, pos=positions) plt.plot([0],[0],'o',ms=12,label='CG') plt.plot([x_ct],[y_ct],'^',ms=12, label='SC') #plt.text(0,0, 'CG', fontsize=24) #plt.text(x_ct,y_ct, 'SC', fontsize=24) plt.legend(loc='lower right', shadow=True) plt.axis('equal')
Center of Gravity and Shear Center are drawn End of explanation """ sympy.simplify(S1.Ixx), sympy.simplify(S1.Iyy), sympy.simplify(S1.Ixy), sympy.simplify(S1.θ) """ Explanation: Expression of inertial properties in principal reference frame End of explanation """ sympy.N(S1.ct.subs(datav)) """ Explanation: Shear center expression Expressions can be messy, so we evaluate them to numerical values End of explanation """ Tx, Ty, Nz, Mx, My, Mz, F, ry, ry, mz = sympy.symbols('T_x T_y N_z M_x M_y M_z F r_y r_x m_z') S1.set_loads(_Tx=0, _Ty=Ty, _Nz=0, _Mx=Mx, _My=0, _Mz=0) #S1.compute_stringer_actions() #S1.compute_panel_fluxes(); """ Explanation: Analisys of Loads We define some symbols End of explanation """ #S1.N """ Explanation: Axial Loads End of explanation """ #S1.q """ Explanation: Panel Fluxes End of explanation """ S1.set_loads(_Tx=0, _Ty=0, _Nz=0, _Mx=0, _My=0, _Mz=Mz) S1.compute_stringer_actions() S1.compute_panel_fluxes(); """ Explanation: Example 2: twisting moment in z direction End of explanation """ S1.N """ Explanation: Axial Loads End of explanation """ {k:sympy.N(S1.q[k].subs(datav)) for k in S1.q } """ Explanation: Panel Fluxes evaluated to numerical values End of explanation """ S1.compute_Jt() sympy.N(S1.Jt.subs(datav)) """ Explanation: Torsional moment of Inertia End of explanation """
srcole/qwm
hcmst/process_raw_data.ipynb
mit
import numpy as np import pandas as pd %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns pd.options.display.max_columns=1000 """ Explanation: Data info Data notes Wave I, the main survey, was fielded between February 21 and April 2, 2009. Wave 2 was fielded March 12, 2010 to June 8, 2010. Wave 3 was fielded March 22, 2011 to August 29, 2011. Wave 4 was fielded between March and November of 2013. Wave 5 was fielded between November, 2014 and March, 2015. End of explanation """ df = pd.read_stata('/gh/data/hcmst/1.dta') # df2 = pd.read_stata('/gh/data/hcmst/2.dta') # df3 = pd.read_stata('/gh/data/hcmst/3.dta') # df = df1.merge(df2, on='caseid_new') # df = df.merge(df3, on='caseid_new') df.head(2) """ Explanation: Load raw data End of explanation """ rename_cols_dict = {'ppage': 'age', 'ppeducat': 'education', 'ppethm': 'race', 'ppgender': 'sex', 'pphouseholdsize': 'household_size', 'pphouse': 'house_type', 'hhinc': 'income', 'ppmarit': 'marital_status', 'ppmsacat': 'in_metro', 'ppreg4': 'usa_region', 'pprent': 'house_payment', 'children_in_hh': 'N_child', 'ppwork': 'work', 'ppnet': 'has_internet', 'papglb_friend': 'has_gay_friendsfam', 'pppartyid3': 'politics', 'papreligion': 'religion', 'qflag': 'in_relationship', 'q9': 'partner_age', 'duration': 'N_minutes_survey', 'glbstatus': 'is_lgb', 's1': 'is_married', 'partner_race': 'partner_race', 'q7b': 'partner_religion', 'q10': 'partner_education', 'US_raised': 'USA_raised', 'q17a': 'N_marriages', 'q17b': 'N_marriages2', 'coresident': 'cohabit', 'q21a': 'age_first_met', 'q21b': 'age_relationship_begin', 'q21d': 'age_married', 'q23': 'relative_income', 'q25': 'same_high_school', 'q26': 'same_college', 'q27': 'same_hometown', 'age_difference': 'age_difference', 'q34':'relationship_quality', 'q24_met_online': 'met_online', 'met_through_friends': 'met_friends', 'met_through_family': 'met_family', 'met_through_as_coworkers': 'met_work'} df = df[list(rename_cols_dict.keys())] 
df.rename(columns=rename_cols_dict, inplace=True) # Process number of marriages df['N_marriages'] = df['N_marriages'].astype(str).replace({'nan':''}) + df['N_marriages2'].astype(str).replace({'nan':''}) df.drop('N_marriages2', axis=1, inplace=True) df['N_marriages'] = df['N_marriages'].replace({'':np.nan, 'once (this is my first marriage)': 'once', 'refused':np.nan}) df['N_marriages'] = df['N_marriages'].astype('category') # Clean entries to make simpler df['in_metro'] = df['in_metro']=='metro' df['relationship_excellent'] = df['relationship_quality'] == 'excellent' df['house_payment'].replace({'owned or being bought by you or someone in your household': 'owned', 'rented for cash': 'rent', 'occupied without payment of cash rent': 'free'}, inplace=True) df['race'].replace({'white, non-hispanic': 'white', '2+ races, non-hispanic': 'other, non-hispanic', 'black, non-hispanic': 'black'}, inplace=True) df['house_type'].replace({'a one-family house detached from any other house': 'house', 'a building with 2 or more apartments': 'apartment', 'a one-family house attached to one or more houses': 'house', 'a mobile home': 'mobile', 'boat, rv, van, etc.': 'mobile'}, inplace=True) df['is_not_working'] = df['work'].str.contains('not working') df['has_internet'] = df['has_internet'] == 'yes' df['has_gay_friends'] = np.logical_or(df['has_gay_friendsfam']=='yes, friends', df['has_gay_friendsfam']=='yes, both') df['has_gay_family'] = np.logical_or(df['has_gay_friendsfam']=='yes, relatives', df['has_gay_friendsfam']=='yes, both') df['religion_is_christian'] = df['religion'].isin(['protestant (e.g., methodist, lutheran, presbyterian, episcopal)', 'catholic', 'baptist-any denomination', 'other christian', 'pentecostal', 'mormon', 'eastern orthodox']) df['religion_is_none'] = df['religion'].isin(['none']) df['in_relationship'] = df['in_relationship']=='partnered' df['is_lgb'] = df['is_lgb']=='glb' df['is_married'] = df['is_married']=='yes, i am married' df['partner_race'].replace({'NH 
white': 'white', ' NH black': 'black', ' NH Asian Pac Islander':'other', ' NH Other': 'other', ' NH Amer Indian': 'other'}, inplace=True) df['partner_religion_is_christian'] = df['partner_religion'].isin(['protestant (e.g., methodist, lutheran, presbyterian, episcopal)', 'catholic', 'baptist-any denomination', 'other christian', 'pentecostal', 'mormon', 'eastern orthodox']) df['partner_religion_is_none'] = df['partner_religion'].isin(['none']) df['partner_education'] = df['partner_education'].map({'hs graduate or ged': 'high school', 'some college, no degree': 'some college', "associate degree": "some college", "bachelor's degree": "bachelor's degree or higher", "master's degree": "bachelor's degree or higher", "professional or doctorate degree": "bachelor's degree or higher"}) df['partner_education'].fillna('less than high school', inplace=True) df['USA_raised'] = df['USA_raised']=='raised in US' df['N_marriages'] = df['N_marriages'].map({'never married': '0', 'once': '1', 'twice': '2', 'three times': '3+', 'four or more times':'3+'}) df['relative_income'].replace({'i earned more': 'more', 'partner earned more': 'less', 'we earned about the same amount': 'same', 'refused': np.nan}, inplace=True) df['same_high_school'] = df['same_high_school']=='same high school' df['same_college'] = df['same_college']=='attended same college or university' df['same_hometown'] = df['same_hometown']=='yes' df['cohabit'] = df['cohabit']=='yes' df['met_online'] = df['met_online']=='met online' df['met_friends'] = df['met_friends']=='meet through friends' df['met_family'] = df['met_family']=='met through family' df['met_work'] = df['met_family']==1 df['age'] = df['age'].astype(int) for c in df.columns: if str(type(df[c])) == 'object': df[c] = df[c].astype('category') df.head() df.to_csv('/gh/data/hcmst/1_cleaned.csv') """ Explanation: Select and rename columns End of explanation """ for c in df.columns: print(df[c].value_counts()) # Countplot if categorical; distplot if numeric from 
pandas.api.types import is_numeric_dtype plt.figure(figsize=(40,40)) for i, c in enumerate(df.columns): plt.subplot(7,7,i+1) if is_numeric_dtype(df[c]): sns.distplot(df[c].dropna(), kde=False) else: sns.countplot(y=c, data=df) plt.savefig('temp.png') sns.barplot(x='income', y='race', data=df) """ Explanation: Distributions End of explanation """
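Almost every cleaning step above follows the same pattern: collapse a raw categorical survey answer into a boolean flag with `isin` or `==`. That pattern can be captured in one small helper — a sketch, where the sample answers below are hypothetical stand-ins for the real survey values, not taken from the data:

```python
import pandas as pd

def flag_from_answers(series, true_values):
    # True when the raw survey answer is one of the values that should
    # count as a positive flag; missing answers become False
    return series.isin(true_values)

# hypothetical mini-sample standing in for a raw survey column
raw = pd.Series(['yes, friends', 'no', 'yes, both', None])
has_gay_friends = flag_from_answers(raw, ['yes, friends', 'yes, both'])
```

Each `np.logical_or(df[c]=='a', df[c]=='b')` line then becomes a single `flag_from_answers(df[c], ['a', 'b'])` call.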
AdityaSoni19031997/Machine-Learning
Classifying_datasets/MNIST/mnist-with-keras-for-beginners-99457.ipynb
mit
import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import matplotlib.pyplot as plt #for plotting from collections import Counter from sklearn.metrics import confusion_matrix import itertools import seaborn as sns from subprocess import check_output print(check_output(["ls", "../input"]).decode("utf8")) %matplotlib inline """ Explanation: Importing Necessary Modules To discover something new is to explore where it has never been explored. Added Conv Visuals Also (Working) End of explanation """ #loading the dataset.......(Train) train = pd.read_csv("../input/train.csv") print(train.shape) train.head() z_train = Counter(train['label']) z_train sns.countplot(train['label']) #loading the dataset.......(Test) test= pd.read_csv("../input/test.csv") print(test.shape) test.head() x_train = (train.ix[:,1:].values).astype('float32') # all pixel values y_train = train.ix[:,0].values.astype('int32') # only labels i.e targets digits x_test = test.values.astype('float32') %matplotlib inline # preview the images first plt.figure(figsize=(12,10)) x, y = 10, 4 for i in range(40): plt.subplot(y, x, i+1) plt.imshow(x_train[i].reshape((28,28)),interpolation='nearest') plt.show() """ Explanation: Loading The Dataset End of explanation """ x_train = x_train/255.0 x_test = x_test/255.0 y_train """ Explanation: Normalising The Data End of explanation """ print('x_train shape:', x_train.shape) print(x_train.shape[0], 'train samples') print(x_test.shape[0], 'test samples') """ Explanation: Printing the shape of the Datasets End of explanation """ X_train = x_train.reshape(x_train.shape[0], 28, 28,1) X_test = x_test.reshape(x_test.shape[0], 28, 28,1) import keras from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D from keras.layers.normalization import BatchNormalization from keras.preprocessing.image import ImageDataGenerator from keras.callbacks import ReduceLROnPlateau from 
sklearn.model_selection import train_test_split batch_size = 64 num_classes = 10 epochs = 20 input_shape = (28, 28, 1) # convert class vectors to binary class matrices One Hot Encoding y_train = keras.utils.to_categorical(y_train, num_classes) X_train, X_val, Y_train, Y_val = train_test_split(X_train, y_train, test_size = 0.1, random_state=42) """ Explanation: ## Reshape To Match The Keras's Expectations End of explanation """ model = Sequential() model.add(Conv2D(32, kernel_size=(3, 3),activation='relu',kernel_initializer='he_normal',input_shape=input_shape)) model.add(Conv2D(32, kernel_size=(3, 3),activation='relu',kernel_initializer='he_normal')) model.add(MaxPool2D((2, 2))) model.add(Dropout(0.20)) model.add(Conv2D(64, (3, 3), activation='relu',padding='same',kernel_initializer='he_normal')) model.add(Conv2D(64, (3, 3), activation='relu',padding='same',kernel_initializer='he_normal')) model.add(MaxPool2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Conv2D(128, (3, 3), activation='relu',padding='same',kernel_initializer='he_normal')) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(BatchNormalization()) model.add(Dropout(0.25)) model.add(Dense(num_classes, activation='softmax')) model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.RMSprop(), metrics=['accuracy']) learning_rate_reduction = ReduceLROnPlateau(monitor='val_acc', patience=3, verbose=1, factor=0.5, min_lr=0.0001) datagen = ImageDataGenerator( featurewise_center=False, # set input mean to 0 over the dataset samplewise_center=False, # set each sample mean to 0 featurewise_std_normalization=False, # divide inputs by std of the dataset samplewise_std_normalization=False, # divide each input by its std zca_whitening=False, # apply ZCA whitening rotation_range=15, # randomly rotate images in the range (degrees, 0 to 180) zoom_range = 0.1, # Randomly zoom image width_shift_range=0.1, # randomly shift images horizontally 
(fraction of total width) height_shift_range=0.1, # randomly shift images vertically (fraction of total height) horizontal_flip=False, # randomly flip images vertical_flip=False) # randomly flip images model.summary() datagen.fit(X_train) h = model.fit_generator(datagen.flow(X_train,Y_train, batch_size=batch_size), epochs = epochs, validation_data = (X_val,Y_val), verbose = 1, steps_per_epoch=X_train.shape[0] // batch_size , callbacks=[learning_rate_reduction],) """ Explanation: Linear Model End of explanation """ final_loss, final_acc = model.evaluate(X_val, Y_val, verbose=0) print("Final loss: {0:.6f}, final accuracy: {1:.6f}".format(final_loss, final_acc)) # Look at confusion matrix #Note, this code is taken straight from the SKLEARN website, an nice way of viewing confusion matrix. def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): """ This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. """ plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] thresh = cm.max() / 2. 
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, cm[i, j], horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.tight_layout() plt.ylabel('True label') plt.xlabel('Predicted label') # Predict the values from the validation dataset Y_pred = model.predict(X_val) # Convert predictions classes to one hot vectors Y_pred_classes = np.argmax(Y_pred, axis = 1) # Convert validation observations to one hot vectors Y_true = np.argmax(Y_val, axis = 1) # compute the confusion matrix confusion_mtx = confusion_matrix(Y_true, Y_pred_classes) # plot the confusion matrix plot_confusion_matrix(confusion_mtx, classes = range(10)) print(h.history.keys()) accuracy = h.history['acc'] val_accuracy = h.history['val_acc'] loss = h.history['loss'] val_loss = h.history['val_loss'] epochs = range(len(accuracy)) plt.plot(epochs, accuracy, 'bo', label='Training accuracy') plt.plot(epochs, val_accuracy, 'b', label='Validation accuracy') plt.title('Training and validation accuracy') plt.legend() plt.show() plt.figure() plt.plot(epochs, loss, 'bo', label='Training loss') plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.legend() plt.show() # Errors are difference between predicted labels and true labels errors = (Y_pred_classes - Y_true != 0) Y_pred_classes_errors = Y_pred_classes[errors] Y_pred_errors = Y_pred[errors] Y_true_errors = Y_true[errors] X_val_errors = X_val[errors] def display_errors(errors_index,img_errors,pred_errors, obs_errors): """ This function shows 6 images with their predicted and real labels""" n = 0 nrows = 2 ncols = 3 fig, ax = plt.subplots(nrows,ncols,sharex=True,sharey=True) for row in range(nrows): for col in range(ncols): error = errors_index[n] ax[row,col].imshow((img_errors[error]).reshape((28,28))) ax[row,col].set_title("Predicted label :{}\nTrue label :{}".format(pred_errors[error],obs_errors[error])) n += 1 # Probabilities of the wrong predicted 
numbers Y_pred_errors_prob = np.max(Y_pred_errors,axis = 1) # Predicted probabilities of the true values in the error set true_prob_errors = np.diagonal(np.take(Y_pred_errors, Y_true_errors, axis=1)) # Difference between the probability of the predicted label and the true label delta_pred_true_errors = Y_pred_errors_prob - true_prob_errors # Sorted list of the delta prob errors sorted_dela_errors = np.argsort(delta_pred_true_errors) # Top 6 errors most_important_errors = sorted_dela_errors[-6:] # Show the top 6 errors display_errors(most_important_errors, X_val_errors, Y_pred_classes_errors, Y_true_errors) """ Explanation: Basic Simple Plot And Evaluation End of explanation """ test_im = X_train[154] plt.imshow(test_im.reshape(28,28), cmap='viridis', interpolation='none') """ Explanation: Activations Look Like What? It looks like diversity of the similar patterns present on multiple classes effect the performance of the classifier although CNN is a robust architechture. End of explanation """ from keras import models layer_outputs = [layer.output for layer in model.layers[:8]] activation_model = models.Model(input=model.input, output=layer_outputs) activations = activation_model.predict(test_im.reshape(1,28,28,1)) first_layer_activation = activations[0] plt.matshow(first_layer_activation[0, :, :, 4], cmap='viridis') """ Explanation: Let's see the activation of the 2nd channel of the first layer: Had taken help from the keras docs, this answer on StackOverFlow End of explanation """ model.layers[:-1]# Droping The Last Dense Layer layer_names = [] for layer in model.layers[:-1]: layer_names.append(layer.name) images_per_row = 16 for layer_name, layer_activation in zip(layer_names, activations): if layer_name.startswith('conv'): n_features = layer_activation.shape[-1] size = layer_activation.shape[1] n_cols = n_features // images_per_row display_grid = np.zeros((size * n_cols, images_per_row * size)) for col in range(n_cols): for row in range(images_per_row): 
channel_image = layer_activation[0,:, :, col * images_per_row + row] channel_image -= channel_image.mean() channel_image /= channel_image.std() channel_image *= 64 channel_image += 128 channel_image = np.clip(channel_image, 0, 255).astype('uint8') display_grid[col * size : (col + 1) * size, row * size : (row + 1) * size] = channel_image scale = 1. / size plt.figure(figsize=(scale * display_grid.shape[1], scale * display_grid.shape[0])) plt.title(layer_name) plt.grid(False) plt.imshow(display_grid, aspect='auto', cmap='viridis') layer_names = [] for layer in model.layers[:-1]: layer_names.append(layer.name) images_per_row = 16 for layer_name, layer_activation in zip(layer_names, activations): if layer_name.startswith('max'): n_features = layer_activation.shape[-1] size = layer_activation.shape[1] n_cols = n_features // images_per_row display_grid = np.zeros((size * n_cols, images_per_row * size)) for col in range(n_cols): for row in range(images_per_row): channel_image = layer_activation[0,:, :, col * images_per_row + row] channel_image -= channel_image.mean() channel_image /= channel_image.std() channel_image *= 64 channel_image += 128 channel_image = np.clip(channel_image, 0, 255).astype('uint8') display_grid[col * size : (col + 1) * size, row * size : (row + 1) * size] = channel_image scale = 1. 
/ size plt.figure(figsize=(scale * display_grid.shape[1], scale * display_grid.shape[0])) plt.title(layer_name) plt.grid(False) plt.imshow(display_grid, aspect='auto', cmap='viridis') layer_names = [] for layer in model.layers[:-1]: layer_names.append(layer.name) images_per_row = 16 for layer_name, layer_activation in zip(layer_names, activations): if layer_name.startswith('drop'): n_features = layer_activation.shape[-1] size = layer_activation.shape[1] n_cols = n_features // images_per_row display_grid = np.zeros((size * n_cols, images_per_row * size)) for col in range(n_cols): for row in range(images_per_row): channel_image = layer_activation[0,:, :, col * images_per_row + row] channel_image -= channel_image.mean() channel_image /= channel_image.std() channel_image *= 64 channel_image += 128 channel_image = np.clip(channel_image, 0, 255).astype('uint8') display_grid[col * size : (col + 1) * size, row * size : (row + 1) * size] = channel_image scale = 1. / size plt.figure(figsize=(scale * display_grid.shape[1], scale * display_grid.shape[0])) plt.title(layer_name) plt.grid(False) plt.imshow(display_grid, aspect='auto', cmap='viridis') """ Explanation: Let's plot the activations of the other conv layers as well. End of explanation """ #get the predictions for the test data predicted_classes = model.predict_classes(X_test) #get the indices to be plotted y_true = test.iloc[:, 0] correct = np.nonzero(predicted_classes==y_true)[0] incorrect = np.nonzero(predicted_classes!=y_true)[0] from sklearn.metrics import classification_report target_names = ["Class {}".format(i) for i in range(num_classes)] print(classification_report(y_true, predicted_classes, target_names=target_names)) submissions=pd.DataFrame({"ImageId": list(range(1,len(predicted_classes)+1)), "Label": predicted_classes}) submissions.to_csv("asd.csv", index=False, header=True) model.save('my_model_1.h5') json_string = model.to_json() """ Explanation: Classifcation Report End of explanation """
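The three activation-plotting loops earlier in this notebook (for the `conv`, `max` and `drop` layers) are identical except for the layer-name prefix; the grid-building logic can be factored into one helper. A sketch in plain NumPy, with a small guard added for flat (zero-variance) channels that the original normalization would divide by zero on:

```python
import numpy as np

def activation_grid(layer_activation, images_per_row=16):
    # Tile the channels of one layer's activation, shaped
    # (1, size, size, n_features), into a single 2-D image grid,
    # normalizing each channel into the 0-255 range as the loops above do
    n_features = layer_activation.shape[-1]
    size = layer_activation.shape[1]
    n_cols = n_features // images_per_row
    grid = np.zeros((size * n_cols, images_per_row * size))
    for col in range(n_cols):
        for row in range(images_per_row):
            channel = layer_activation[0, :, :, col * images_per_row + row].astype('float64')
            if channel.std() > 0:  # guard: skip normalization of flat channels
                channel = (channel - channel.mean()) / channel.std()
            channel = np.clip(channel * 64 + 128, 0, 255).astype('uint8')
            grid[col * size:(col + 1) * size,
                 row * size:(row + 1) * size] = channel
    return grid
```

Each of the three loops then reduces to `plt.imshow(activation_grid(layer_activation), aspect='auto', cmap='viridis')` behind the appropriate `layer_name.startswith(...)` check.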
jobovy/gaia_tools
examples/make_gaia_query_examples.ipynb
mit
circle = """ --Selections: Cluster RA 1=CONTAINS(POINT('ICRS',gaia.ra,gaia.dec), CIRCLE('ICRS',{ra:.4f},{dec:.4f},{rad:.2f})) """.format(ra=230, dec=0, rad=4) df = make_simple_query( WHERE=circle, # The WHERE part of the SQL random_index=1e4, # a shortcut to use the random_index in 'WHERE' ORDERBY='gaia.parallax', # setting the data ordering pprint=True, # print the query do_query=True, # perform the query using gaia_tools.query local=False, # whether to perform the query locally units=True # to fill in missing units from 'defaults' file ) df """ Explanation: Simple Query This is a simple single-level query. Default Columns End of explanation """ df = make_simple_query( WHERE=circle, # The WHERE part of the SQL random_index=1e4, # a shortcut to use the random_index in 'WHERE' ORDERBY='gaia.parallax', # setting the data ordering panstarrs1=True, twomass=True, do_query=True, # perform the query using gaia_tools.query local=False, # whether to perform the query locally units=True # to fill in missing units from 'defaults' file ) df """ Explanation: The system also supports Pan-STARRS1 and 2MASS cross-matches using the panstarrs1 and twomass keywords End of explanation """ df = make_simple_query( WHERE=circle, random_index=1e4, ORDERBY='gaia.parallax', do_query=True, local=False, units=True, defaults='empty', ) df """ Explanation: Different Defaults If you want fewer default columns, this is an option through the defaults keyword. End of explanation """ df = make_simple_query( WHERE=circle, random_index=1e4, ORDERBY='gaia.parallax', do_query=True, local=False, units=True, defaults='full' ) df """ Explanation: Likewise, there's an option for much greater detail. 
End of explanation """ ########### # Custom Calculations # Innermost Level l0cols = """ --Rotation Matrix {K00}*cos(radians(dec))*cos(radians(ra))+ {K01}*cos(radians(dec))*sin(radians(ra))+ {K02}*sin(radians(dec)) AS cosphi1cosphi2, {K10}*cos(radians(dec))*cos(radians(ra))+ {K11}*cos(radians(dec))*sin(radians(ra))+ {K12}*sin(radians(dec)) AS sinphi1cosphi2, {K20}*cos(radians(dec))*cos(radians(ra))+ {K21}*cos(radians(dec))*sin(radians(ra))+ {K22}*sin(radians(dec)) AS sinphi2, --c1, c2 {sindecngp}*cos(radians(dec)){mcosdecngp:+}*sin(radians(dec))*cos(radians(ra{mrangp:+})) as c1, {cosdecngp}*sin(radians(ra{mrangp:+})) as c2 """ # Inner Level l1cols = """ gaia.cosphi1cosphi2, gaia.sinphi1cosphi2, gaia.sinphi2, gaia.c1, gaia.c2, atan2(sinphi1cosphi2, cosphi1cosphi2) AS phi1, atan2(sinphi2, sinphi1cosphi2 / sin(atan2(sinphi1cosphi2, cosphi1cosphi2))) AS phi2""" # Inner Level l2cols = """ gaia.sinphi1cosphi2, gaia.cosphi1cosphi2, gaia.sinphi2, gaia.phi1, gaia.phi2, gaia.c1, gaia.c2, ( c1*pmra+c2*pmdec)/cos(phi2) AS pmphi1, (-c2*pmra+c1*pmdec)/cos(phi2) AS pmphi2""" # Outer Level l3cols = """ gaia.phi1, gaia.phi2, gaia.pmphi1, gaia.pmphi2""" ########### # Custom Selection l3sel = """ phi1 > {phi1min:+} AND phi1 < {phi1max:+} AND phi2 > {phi2min:+} AND phi2 < {phi2max:+} """ ########### # Custom substitutions l3userasdict = { 'K00': .656, 'K01': .755, 'K02': .002, 'K10': .701, 'K11': .469, 'K12': .537, 'K20': .53, 'K21': .458, 'K22': .713, 'sindecngp': -0.925, 'cosdecngp': .382, 'mcosdecngp': -.382, 'mrangp': -0, 'phi1min': -0.175, 'phi1max': 0.175, 'phi2min': -0.175, 'phi2max': 0.175} ########### # Making Query df = make_query( gaia_mags=True, panstarrs1=True, # doing a Pan-STARRS1 crossmatch user_cols=l3cols, use_AS=True, user_ASdict=l3userasdict, # Inner Query FROM=make_query( gaia_mags=True, user_cols=l2cols, # Inner Query FROM=make_query( gaia_mags=True, user_cols=l1cols, # Innermost Query FROM=make_query( gaia_mags=True, inmostquery=True, # telling system this is the 
innermost level user_cols=l0cols, random_index=1e4 # quickly specifying random index ) ) ), WHERE=l3sel, ORDERBY='gaia.source_id', pprint=True, # doing query do_query=True, local=False, units=True ) df """ Explanation: <br><br><br><br> <br><br><br><br> Complex Nested Query A complex query like this one shows the real utility of this package. Instead of keeping track of the complex SQL, we only need to pay close attention to the custom calculated columns. This ADQL queries for data within a rectangular area on a sky rotated by a rotation matrix and specified North Galactic Pole angles. The specifics aren't important -- the real takeaway is that the sky rotation and calculation are written in a clear format, with all the parts of the query close together. Running the query is trivial after that. End of explanation """
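The custom-column algebra above is easier to sanity-check off-database. Below is a NumPy transcription of the same rotation — not part of gaia_tools, just a sketch that mirrors the ADQL term by term (unit vector from ra/dec, rotation by K, then atan2 recovery of phi1/phi2):

```python
import numpy as np

def icrs_to_stream(ra_deg, dec_deg, K):
    # Mirror of the ADQL columns above: rotate the ICRS unit vector by K,
    # then phi1 = atan2(sinphi1cosphi2, cosphi1cosphi2) and
    # phi2 = atan2(sinphi2, sinphi1cosphi2 / sin(phi1))
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    vec = np.array([np.cos(dec) * np.cos(ra),
                    np.cos(dec) * np.sin(ra),
                    np.sin(dec)])
    cosphi1cosphi2, sinphi1cosphi2, sinphi2 = K @ vec
    phi1 = np.arctan2(sinphi1cosphi2, cosphi1cosphi2)
    phi2 = np.arctan2(sinphi2, sinphi1cosphi2 / np.sin(phi1))
    return np.degrees(phi1), np.degrees(phi2)
```

With K set to the identity matrix the stream frame coincides with ICRS, which gives a quick local check before trusting the server-side version of the rotation.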
Luke035/dlnd-lessons
transfer-learning/image-net/Preprocess ImageNet.ipynb
mit
!rm -rf /tmp/ImageNetTrainTransfer
#Import
import pandas as pd
import numpy as np
import os
import tensorflow as tf
import random
from PIL import Image
#Inception preprocessing code from https://github.com/tensorflow/models/blob/master/slim/preprocessing/inception_preprocessing.py
#useful to maintain training dimension
from utils import inception_preprocessing
import sys
#from inception import inception
'''
Use slim and nets_factory (as in TensorFlow SLIM https://github.com/tensorflow/models/blob/master/slim/train_image_classifier.py)
to restore the network. Networks must be registered in nets_factory (see the file layout in this notebook's directory)
'''
slim = tf.contrib.slim
from nets import nets_factory
#Global Variables
IMAGE_NET_ROOT_PATH = '/home/carnd/transfer-learning-utils/tiny-imagenet-200/'
#IMAGE_NET_ROOT_PATH = '/data/lgrazioli/'
IMAGE_NET_LABELS_PATH = IMAGE_NET_ROOT_PATH + 'words.txt'
IMAGE_NET_TRAIN_PATH = IMAGE_NET_ROOT_PATH + 'train/'
TRAINING_CHECKPOINT_DIR = '/tmp/ImageNetTrainTransfer' #Transfer learning CHECKPOINT PATH
#The network's ckpt file
CHECKPOINT_PATH = '/home/carnd/transfer-learning-utils/inception_v4.ckpt'
"""
Explanation: Image Net Preprocessing
Notebook for preprocessing the ImageNet images. The goal is to build a batch input that, using the queue mechanism described in <a href=https://www.tensorflow.org/programmers_guide/reading_data>Tensorflow</a>, supplies batches of the desired size for the desired number of epochs.
The <a href=https://github.com/tensorflow/models/blob/master/slim/preprocessing/inception_preprocessing.py>Inception preprocessing</a> algorithm is also used to feed in images of the correct size, with the pre-training corrections provided by TensorFlow
End of explanation
"""
#Reading label file as Panda dataframe
labels_df = pd.read_csv(IMAGE_NET_LABELS_PATH, sep='\\t', header=None, names=['id','labels'])
labels_df.head(5)
labels_df.count()
"""
Explanation: Reading the ImageNet words file
Reading the ImageNet words file as a pandas DataFrame. Labels are assigned to each id (the folder containing the images for the given classes)
End of explanation
"""
#new_labels = []
labels_lengths = []
for idx, row in labels_df.iterrows():
    #Convert to string because some entries are floats
    current_labels = tuple(str(row['labels']).split(','))
    #new_labels.append(current_labels)
    labels_lengths.append(len(current_labels))
labels_df['labels_length'] = labels_lengths
labels_indices = [idx for idx, _ in labels_df.iterrows()]
labels_df['indices'] = labels_indices
labels_df.head(20)
"""
Explanation: Adding a column with the label length (how many classes each label contains).
End of explanation
"""
train_paths = []
for idx, label_dir in enumerate(os.listdir(IMAGE_NET_TRAIN_PATH)):
    image_dir_path = IMAGE_NET_TRAIN_PATH + label_dir + '/images/'
    print("Processing label {0}".format(label_dir))
    for image in os.listdir(image_dir_path):
        #Extract the class_id
        class_id = image.split('.')[0].split('_')[0]
        #Look it up in labels_df
        target_label = labels_df[labels_df['id'] == class_id] #=> pass to tf.nn.one_hot
        #Extract the label
        target_label = target_label['labels'].values[0]
        train_paths.append((image_dir_path + image, class_id, image.split('.')[0].split('_')[1], target_label))
    if idx == 10:
        break
train_df = pd.DataFrame(train_paths, columns=['im_path','class', 'im_class_id', 'target_label'])
print(train_df.count())
train_df.head()
"""
Explanation: Train DF
Pandas DataFrame holding the paths of all the images, the corresponding class, the image id and the label. The class is obtained via a lookup on labels_df (<b>this operation is very expensive in terms of execution time</b>).
<b>It may take a while. To run on a sample, you can stop at a given value of idx</b>
End of explanation
"""
#Remove black and white images
uncorrect_images = 0
#Store the indices of the images to drop
to_remove_indexes = []
for idx, record in train_df.iterrows():
    #Read the image as an np.array
    im_array = np.array(Image.open(record['im_path']))
    #If it does not have 3 channels, add it to the ones to drop
    if im_array.shape[-1] != 3:
        uncorrect_images += 1
        to_remove_indexes.append(idx)
    if idx % 20 == 0:
        sys.stdout.write("\rProcessed {0} images".format(idx))
        sys.stdout.flush()
#Drop the identified rows
train_df = train_df.drop(train_df.index[to_remove_indexes])
print("New size: {0}".format(len(train_df)))
print("Removed {0} images".format(uncorrect_images))
#Optional sampling to pass to the input generator
example_file_list = list(train_df.im_path)
print(len(example_file_list))
"""
Explanation: Dropping the images that are not in the format expected by inception_preprocessing (3 channels). <b>Slow operation!</b>
End of explanation
"""
labels_dict = {}
unique_labels = set(labels_df['labels'])
for idx, target in enumerate(unique_labels):
    labels_dict[target] = idx
num_classes = len(labels_dict)
num_classes
"""
Explanation: Defining the labels dictionary {label: index}
End of explanation
"""
example_label_list = []
for idx, value in train_df.iterrows():
    example_label_list.append(labels_dict[value['target_label']])
len(example_label_list)
num_classes = len(set(example_label_list))
num_classes
reducted_label_dict = {}
for idx,value in enumerate(set(example_label_list)):
    reducted_label_dict[value] = idx
for idx,label in enumerate(example_label_list):
    example_label_list[idx] = reducted_label_dict[label]
"""
Explanation: Building the label list (same order as the file list)
End of explanation
"""
'''
get_network_fn returns the corresponding network function.
If num_classes needs to change, set is_training to True.
It returns the function defined in the corresponding network file
'''
model_name = 'inception_v4'
inception_net_fn = nets_factory.get_network_fn(model_name, num_classes=1001, is_training = False)
'''
with tf.device('/gpu:0'):
    sampl_input = tf.placeholder(tf.float32, [None, 300,300, 3], name='incpetion_input_placeholder')
    #Call the model fn to define the network's variables
    #Use these tensors, which are the ones the model flows through
    #Needed to restore the graph
    print(inception_net_fn(sampl_input))
'''
"""
Explanation: Transfer Learning
Restoring the Inception v4 model
End of explanation
"""
EPOCHS = 50
BATCH_SIZE = 32
#Used to tell when the generator has moved on to batches belonging to a new epoch
BATCH_PER_EPOCH = np.ceil(len(example_file_list) / BATCH_SIZE)

def parse_single_image(filename_queue):
    #Dequeue a file name from the file name queue
    #filename, y = filename_queue.dequeue()
    #No need to call dequeue: the function's argument is already the dequeued element
    filename, y = filename_queue[0], filename_queue[1]
    #y only needs the one-hot encoding
    y = tf.one_hot(y, num_classes)
    #Read image
    raw = tf.read_file(filename)
    #convert to jpg (on the GPU!)
jpeg_image = tf.image.decode_jpeg(raw) #Preprocessing with inception preprocessing jpeg_image = inception_preprocessing.preprocess_image(jpeg_image, 300, 300, is_training=True) return jpeg_image, y #jpeg_image = parse_single_image(filename_queue) def get_batch(filenames, labels, batch_size, num_epochs=None): #Coda lettura file, slice_input_producer accetta una lista di liste (stessa dimensione) #Risultato dello scodamento è l'elemento corrente di ciascuna delle liste #Le liste sono rispettivamente la lista di file e la lista dei label filename_queue = tf.train.slice_input_producer([filenames, labels]) #Lettura singolo record jpeg_image,y = parse_single_image(filename_queue) # min_after_dequeue defines how big a buffer we will randomly sample #   from -- bigger means better shuffling but slower start up and more #   memory used. # capacity must be larger than min_after_dequeue and the amount larger #   determines the maximum we will prefetch.  Recommendation: #   min_after_dequeue + (num_threads + a small safety margin) * batch_size min_after_dequeue = 10 capacity = min_after_dequeue + 3 * batch_size #tensors è la lista dei tensori delle single feature e immagini. Esegue batch_size volte i tensori example e label per ottenere il batch #num_threads incrementa effettivamente l'utilizzo della CPU (confermato dal throughput visisible sul cloudera manager, #resta comunque un throughput lento .... 
example_batch = tf.train.shuffle_batch( tensors=[jpeg_image, y], batch_size=batch_size, capacity=capacity, min_after_dequeue=min_after_dequeue, allow_smaller_final_batch=True, num_threads=2) return example_batch #TF Graph, per ora recupera solamente un batch with tf.device('/cpu:0'): with tf.name_scope('preprocessing') as scope: x,y = get_batch(example_file_list, example_label_list, batch_size=BATCH_SIZE) #x = tf.contrib.layers.flatten(x) with tf.device('/gpu:0'): #inception prelogits inception_net_fn(x) #prelogits = tf.placeholder(tf.float32, [None, 1536], name='prelogits_placeholder') prelogits = tf.get_default_graph().get_tensor_by_name("InceptionV4/Logits/PreLogitsFlatten/Reshape:0") with tf.device('/gpu:0'): with tf.variable_scope('trainable'): '''with tf.variable_scope('hidden') as scope: hidden = tf.layers.dense( prelogits, units=128, activation=tf.nn.relu )''' #Kenerl init None = glooroot initializers (sttdev = 1/sqrt(n)) with tf.variable_scope('readout') as scope: output = tf.layers.dense( prelogits, units=num_classes, activation=None ) with tf.variable_scope('train_op') as scope: # Define loss and optimizer targetvars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, "trainable") cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=output, labels=y)) optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost, var_list=targetvars) # Accuracy correct_pred = tf.equal(tf.argmax(output, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) tf.summary.scalar('accuracy', accuracy) tf.summary.scalar('loss', cost) init = tf. 
init = tf.global_variables_initializer() merged_summeries = tf.summary.merge_all() #GPU config config = tf.ConfigProto(log_device_placement=True) config.gpu_options.allow_growth = True #Saver for restoring the inception net saver = tf.train.Saver() with tf.Session(config=config) as sess: sess.run(init) writer = tf.summary.FileWriter(TRAINING_CHECKPOINT_DIR, sess.graph) #Start populating the filename queue. coord = tf.train.Coordinator() #Without this call the threads that populate the queue (and make the read possible) never start threads = tf.train.start_queue_runners(coord=coord) #current_epoch and current_step track when to switch epoch and when to stop current_epoch = 0 current_step = 0 while current_epoch < EPOCHS: x_batch, y_batch = sess.run([x,y]) #Forward pass in the inception net #inception_pre_logits = sess.run(tf.get_default_graph().get_tensor_by_name("InceptionV4/Logits/PreLogitsFlatten/Reshape:0"), #feed_dict={sampl_input: x_batch}) sess.run(optimizer, feed_dict={x: x_batch, y: y_batch}) #print(x_batch.shape) if current_step % 10 == 0: #print("Batch shape {}".format(x_batch.shape)) print("Current step: {0}".format(current_step)) train_loss, train_accuracy, train_summ = sess.run([cost,accuracy,merged_summeries], feed_dict={x: x_batch, y: y_batch}) print("Loss: {0} accuracy {1}".format(train_loss, train_accuracy)) writer.add_summary(train_summ, current_epoch * current_step + 1) #Switch epoch once the maximum for the current epoch is reached if current_step == (BATCH_PER_EPOCH - 1): current_epoch += 1 current_step = 0 print("EPOCH {0}".format(current_epoch)) #All epochs finished -> shut down if current_epoch >= EPOCHS: break if current_step == 0 and current_epoch == 0: writer.add_graph(sess.graph) #train_summary = sess.run([merged_summeries], feed_dict={x: x_batch, y: y_batch}) #writer.add_summary(train_summary, current_step) current_step += 1 #for i in range(10): #converted_im = sess.run(jpeg_image) #print(converted_im.shape) #Shut down the coordinator (close the
reader threads) coord.request_stop() coord.join(threads) sess.close() tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, "trainable") """ Explanation: Input pipeline Definition of the input pipeline into the TF model. <b>NB: GPU memory must NEVER exceed 100MB!</b> End of explanation """
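The epoch/step bookkeeping of the training loop above can be replayed in isolation. Below is a hedged pure-Python sketch (`run_schedule` is a made-up helper; `EPOCHS` and `BATCH_PER_EPOCH` become plain arguments). Replaying it shows that, as written, every epoch after the first processes one batch fewer, because `current_step` is incremented immediately after being reset to 0:

```python
def run_schedule(epochs, batches_per_epoch):
    """Replay of the loop's epoch/step bookkeeping: returns how many
    batches are processed before training stops."""
    processed = 0
    current_epoch = 0
    current_step = 0
    while current_epoch < epochs:
        processed += 1  # stands in for one sess.run(optimizer) call
        # switch epoch once the last batch of the current epoch is reached
        if current_step == batches_per_epoch - 1:
            current_epoch += 1
            current_step = 0
            # all epochs finished -> stop
            if current_epoch >= epochs:
                break
        # note: this also runs right after the reset above, so every epoch
        # after the first starts at step 1 and processes one batch fewer
        current_step += 1
    return processed

print(run_schedule(3, 5))  # 13, not 15: each epoch after the first loses one batch
```

Moving the `current_step += 1` into an `else` branch of the epoch-switch check would make every epoch process exactly `BATCH_PER_EPOCH` batches.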
Brett777/Predict-Risk
Automatic Machine Learning.ipynb
apache-2.0
%%capture import h2o from h2o.automl import H2OAutoML import os import plotly import cufflinks import plotly.plotly as py import plotly.graph_objs as go import plotly.figure_factory as ff plotly.offline.init_notebook_mode(connected=True) myPlotlyKey = os.environ['SECRET_ENV_BRETTS_PLOTLY_KEY'] py.sign_in(username='bretto777',api_key=myPlotlyKey) # Suppress unwanted warnings import warnings warnings.filterwarnings('ignore') import pandas as pd import numpy as np %%capture #h2o.init(nthreads=1, max_mem_size="256M") h2o.connect(ip="35.199.178.30") #h2o.no_progress() # Import some data from Amazon S3 h2oDF = h2o.import_file("https://s3-us-west-1.amazonaws.com/dsclouddata/LendingClubData/LoansGoodBad.csv") # Stratified Split into Train/Test stratsplit = h2oDF["Bad_Loan"].stratified_split(test_frac=0.3, seed=12349453) train = h2oDF[stratsplit=="train"] test = h2oDF[stratsplit=="test"] dfSum = h2oDF.group_by(by="State").sum().frame dfMean = h2oDF.group_by(by="State").mean().frame stateData = dfSum.merge(dfMean).as_data_frame(use_pandas=True, header=True) stateData = stateData.iloc[1:] train.head(10) for col in stateData.columns: stateData[col] = stateData[col].astype(str) scl = [[0.0, 'rgb(164, 182, 216)'],[0.2, 'rgb(116, 141, 188)'],[0.4, 'rgb(69, 102, 165)'],\ [0.6, 'rgb(45, 82, 153)'],[0.8, 'rgb(26, 62, 132)'],[1.0, 'rgb(4, 37, 99)']] stateData['text'] = 'Avg Interest_Rate '+stateData['mean_Interest_Rate']+ '<br>' +\ 'Total Loan_Amount '+stateData['sum_Loan_Amount']+'<br>'+\ 'Avg Term '+stateData['mean_Term']+ '<br>' +\ 'Avg Income ' + stateData['mean_Annual_Income'] data = [ dict( type='choropleth', colorscale = scl, autocolorscale = False, locations = stateData['State'], z = stateData['sum_Bad_Loan'].astype(float), locationmode = 'USA-states', text = stateData['text'], marker = dict( line = dict ( color = 'rgb(255,255,255)', width = 2 ) ), colorbar = dict( title = "# Bad Loans") ) ] layout = dict( title = 'Bad Loans by State<br>(Hover for breakdown)', geo = dict(
scope='usa', projection=dict( type='albers usa' ), showlakes = True, lakecolor = 'rgb(255, 255, 255)'), ) fig = dict( data=data, layout=layout ) py.iplot( fig, filename='d3-cloropleth-map' ) # Identify predictors and response x = train.columns y = "Bad_Loan" x.remove(y) # For binary classification, response should be a factor train[y] = train[y].asfactor() test[y] = test[y].asfactor() # Run AutoML, building 11 models autoModel = H2OAutoML(max_models=11) autoModel.train(x = x, y = y, training_frame = train, leaderboard_frame = test) """ Explanation: Loan Approval Model Created with H2O Automatic Machine Learning This notebook ingests a dataset, and trains many machine learning models intelligently searching the hyper-parameter space for optimal values. A leaderboard is maintained. Finally, an ensemble is created stacking together some of the base learners and the result is added to the leaderboard. The best model is deployed to production. End of explanation """ leaders = autoModel.leaderboard leaders """ Explanation: Leaderboard Display the best models, sorted by descending AUC End of explanation """ leaders[1, 0] importances = h2o.get_model(leaders[2, 0]).varimp(use_pandas=True) importances importances = h2o.get_model(leaders[2, 0]).varimp(use_pandas=True) importances = importances.loc[:,['variable','relative_importance']].groupby('variable').mean() importances.sort_values(by="relative_importance", ascending=False).iplot(kind='bar', colors='#5AC4F2', theme='white') """ Explanation: Variable Importance - Best Model End of explanation """ Model0 = np.array(h2o.get_model(leaders[0, 0]).roc(valid=True)) Model1 = np.array(h2o.get_model(leaders[1, 0]).roc(valid=True)) Model2 = np.array(h2o.get_model(leaders[2, 0]).roc(valid=True)) Model3 = np.array(h2o.get_model(leaders[3, 0]).roc(valid=True)) Model4 = np.array(h2o.get_model(leaders[4, 0]).roc(valid=True)) Model5 = np.array(h2o.get_model(leaders[5, 0]).roc(valid=True)) Model6 = np.array(h2o.get_model(leaders[6, 
0]).roc(valid=True)) Model7 = np.array(h2o.get_model(leaders[7, 0]).roc(valid=True)) Model8 = np.array(h2o.get_model(leaders[8, 0]).roc(valid=True)) Model9 = np.array(h2o.get_model(leaders[9, 0]).roc(valid=True)) layout = go.Layout(autosize=False, width=725, height=575, xaxis=dict(title='False Positive Rate', titlefont=dict(family='Arial, sans-serif', size=15, color='grey')), yaxis=dict(title='True Positive Rate', titlefont=dict(family='Arial, sans-serif', size=15, color='grey'))) Model0Trace = go.Scatter(x = Model0[0], y = Model0[1], mode = 'lines', name = 'Leader', line = dict(color = ('rgb(26, 58, 126)'), width = 3)) Model1Trace = go.Scatter(x = Model1[0], y = Model1[1], mode = 'lines', name = 'Model 1', line = dict(color = ('rgb(135, 160, 216)'), width = 3)) Model2Trace = go.Scatter(x = Model2[0], y = Model2[1], mode = 'lines', name = 'Model 2', line = dict(color = ('rgb(156, 190, 241)'), width = 1)) Model3Trace = go.Scatter(x = Model3[0], y = Model3[1], mode = 'lines', name = 'Model 3', line = dict(color = ('rgb(156, 190, 241)'), width = 1)) Model4Trace = go.Scatter(x = Model4[0], y = Model4[1], mode = 'lines', name = 'Model 4', line = dict(color = ('rgb(156, 190, 241)'), width = 1)) Model5Trace = go.Scatter(x = Model5[0], y = Model5[1], mode = 'lines', name = 'Model 5', line = dict(color = ('rgb(156, 190, 241)'), width = 1)) Model6Trace = go.Scatter(x = Model6[0], y = Model6[1], mode = 'lines', name = 'Model 6', line = dict(color = ('rgb(156, 190, 241)'), width = 1)) Model7Trace = go.Scatter(x = Model7[0], y = Model7[1], mode = 'lines', name = 'Model 7', line = dict(color = ('rgb(156, 190, 241)'), width = 1)) Model8Trace = go.Scatter(x = Model8[0], y = Model8[1], mode = 'lines', name = 'Model 8', line = dict(color = ('rgb(156, 190, 241)'), width = 1)) Model9Trace = go.Scatter(x = Model9[0], y = Model9[1], mode = 'lines', name = 'Model 9', line = dict(color = ('rgb(156, 190, 241)'), width = 1)) traceChanceLine = go.Scatter(x = [0,1], y = [0,1], mode = 
'lines+markers', name = 'chance', line = dict(color = ('rgb(136, 140, 150)'), width = 4, dash = 'dash')) fig = go.Figure(data=[Model0Trace,Model1Trace,Model2Trace,Model3Trace,Model4Trace,Model5Trace,Model6Trace,Model7Trace,Model8Trace,Model9Trace,traceChanceLine], layout=layout) py.iplot(fig) """ Explanation: Leaderboard ROC Curves End of explanation """ cm = autoModel.leader.confusion_matrix(xval=True) cm = cm.table.as_data_frame() cm confusionMatrix = ff.create_table(cm) confusionMatrix.layout.height=300 confusionMatrix.layout.width=800 confusionMatrix.layout.font.size=17 py.iplot(confusionMatrix) """ Explanation: Confusion Matrix End of explanation """ CorrectPredictBad = cm.loc[0,'BAD'] CorrectPredictBadImpact = 500 cm1 = CorrectPredictBad*CorrectPredictBadImpact IncorrectPredictBad = cm.loc[1,'BAD'] IncorrectPredictBadImpact = -100 cm2 = IncorrectPredictBad*IncorrectPredictBadImpact IncorrectPredictGood = cm.loc[0,'GOOD'] IncorrectPredictGoodImpact = -1000 cm3 = IncorrectPredictGood*IncorrectPredictGoodImpact CorrectPredictGood = cm.loc[1,'GOOD'] CorrectPredictGoodImpact = 800 cm4 = CorrectPredictGood*CorrectPredictGoodImpact data_matrix = [['Business Impact', '($) Predicted BAD', '($) Predicted GOOD', '($) Total'], ['($) Actual BAD', cm1, cm3, '' ], ['($) Actual GOOD', cm2, cm4, ''], ['($) Total', cm1+cm2, cm3+cm4, cm1+cm2+cm3+cm4]] impactMatrix = ff.create_table(data_matrix, height_constant=20, hoverinfo='weight') impactMatrix.layout.height=300 impactMatrix.layout.width=800 impactMatrix.layout.font.size=17 py.iplot(impactMatrix) h2o.save_model(model=autoModel.leader) def approve_loan(Loan_Amount,Term,Interest_Rate,Employment_Years,Home_Ownership,Annual_Income,Verification_Status,Loan_Purpose,State, Debt_to_Income,Delinquent_2yr,Revolving_Cr_Util,Total_Accounts,Longest_Credit_Length): # connect to the model scoring service h2o.connect() # open the downloaded model ChurnPredictor = h2o.load_model(path='DRF_model_1496459915419_4') # define a feature vector to evaluate with
the model newData = pd.DataFrame({'Loan_Amount' : Loan_Amount, 'Term' : Term, 'Interest_Rate' : Interest_Rate, 'Employment_Years' : Employment_Years, 'Home_Ownership' : Home_Ownership, 'Annual_Income' : Annual_Income, 'Verification_Status' : Verification_Status, 'Loan_Purpose' : Loan_Purpose, 'State' : State, 'Debt_to_Income' : Debt_to_Income, 'Delinquent_2yr' : Delinquent_2yr, 'Revolving_Cr_Util' : Revolving_Cr_Util, 'Total_Accounts' : Total_Accounts, 'Longest_Credit_Length' : Longest_Credit_Length}, index=[0]) # evaluate the feature vector using the model predictions = ChurnPredictor.predict(h2o.H2OFrame(newData)) predictionsOut = h2o.as_list(predictions, use_pandas=False) prediction = predictionsOut[1][0] probabilityBad = predictionsOut[1][1] probabilityGood = predictionsOut[1][2] return "Prediction: " + str(prediction) + " |Probability of Bad Loan: " + str(probabilityBad) + " |Probability of Good Loan: " + str(probabilityGood) Loan_Amount = 5000 Term = "60 months" Interest_Rate=13 Employment_Years=5 Home_Ownership="RENT" Annual_Income=75000 Verification_Status="VERIFIED - income" Loan_Purpose="credit_card" State="CA" Debt_to_Income="16.12" Delinquent_2yr="0" Revolving_Cr_Util=37 Total_Accounts=6 Longest_Credit_Length=97 approve_loan(Loan_Amount,Term,Interest_Rate,Employment_Years,Home_Ownership,Annual_Income,Verification_Status,Loan_Purpose,State,Debt_to_Income,Delinquent_2yr,Revolving_Cr_Util,Total_Accounts,Longest_Credit_Length) """ Explanation: Business Impact Matrix Weighting Predictions With a Dollar Value - Correctly predicting GOOD: +\$800 - Correctly predicting BAD: +\$500 - Incorrectly predicting GOOD: -\$1000 - Incorrectly predicting BAD: -\$100 End of explanation """
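The dollar-weighted matrix above generalizes to any binary confusion matrix: multiply each cell count by its per-outcome dollar value and sum. A minimal self-contained sketch (the counts below are illustrative stand-ins, not the model's actual confusion matrix; the dollar values mirror the impact figures in the code cell):

```python
def business_impact(counts, values):
    """Total expected dollar impact: sum of count * value over every
    (actual, predicted) cell of a binary confusion matrix."""
    return sum(counts[cell] * values[cell] for cell in counts)

# cells are (actual, predicted); impact values follow the code cell above
values = {
    ("BAD", "BAD"): 500,     # correctly predicted BAD
    ("GOOD", "BAD"): -100,   # incorrectly predicted BAD
    ("BAD", "GOOD"): -1000,  # incorrectly predicted GOOD
    ("GOOD", "GOOD"): 800,   # correctly predicted GOOD
}
# illustrative counts only
counts = {("BAD", "BAD"): 40, ("GOOD", "BAD"): 10,
          ("BAD", "GOOD"): 5, ("GOOD", "GOOD"): 145}
print(business_impact(counts, values))  # 40*500 + 10*-100 + 5*-1000 + 145*800 = 130000
```

Framing model selection this way lets you pick the leaderboard model that maximizes dollar impact rather than raw AUC.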
modin-project/modin
examples/tutorial/jupyter/execution/pandas_on_dask/local/exercise_2.ipynb
apache-2.0
import modin.pandas as pd import pandas import time from IPython.display import Markdown, display def printmd(string): display(Markdown(string)) """ Explanation: <center><h2>Scale your pandas workflows by changing one line of code</h2> Exercise 2: Speed improvements GOAL: Learn about common functionality that Modin speeds up by using all of your machine's cores. Concept for Exercise: read_csv speedups The most commonly used data ingestion method used in pandas is CSV files (link to pandas survey). This concept is designed to give an idea of the kinds of speedups possible, even on a non-distributed filesystem. Modin also supports other file formats for parallel and distributed reads, which can be found in the documentation. We will import both Modin and pandas so that the speedups are evident. Note: Rerunning the read_csv cells many times may result in degraded performance, depending on the memory of the machine End of explanation """ path = "s3://dask-data/nyc-taxi/2015/yellow_tripdata_2015-01.csv" """ Explanation: Dataset: 2015 NYC taxi trip data We will be using a version of this data already in S3, originally posted in this blog post: https://matthewrocklin.com/blog/work/2017/01/12/dask-dataframes Size: ~1.8GB End of explanation """ import modin.config as cfg cfg.Engine.put("dask") """ Explanation: Modin execution engine setting: End of explanation """ start = time.time() pandas_df = pandas.read_csv(path, parse_dates=["tpep_pickup_datetime", "tpep_dropoff_datetime"], quoting=3) end = time.time() pandas_duration = end - start print("Time to read with pandas: {} seconds".format(round(pandas_duration, 3))) """ Explanation: pandas.read_csv End of explanation """ start = time.time() modin_df = pd.read_csv(path, parse_dates=["tpep_pickup_datetime", "tpep_dropoff_datetime"], quoting=3) end = time.time() modin_duration = end - start print("Time to read with Modin: {} seconds".format(round(modin_duration, 3))) printmd("### Modin is {}x faster than pandas at 
`read_csv`!".format(round(pandas_duration / modin_duration, 2))) """ Explanation: Expect pandas to take >3 minutes on EC2, longer locally This is a good time to chat with your neighbor Discussion topics - Do you work with a large amount of data daily? - How big is your data? - What’s the common use case of your data? - Do you use any big data analytics tools? - Do you use any interactive analytics tool? - What are some drawbacks of your current interactive analytics tools today? modin.pandas.read_csv End of explanation """ pandas_df modin_df """ Explanation: Are they equal? End of explanation """ start = time.time() pandas_count = pandas_df.count() end = time.time() pandas_duration = end - start print("Time to count with pandas: {} seconds".format(round(pandas_duration, 3))) start = time.time() modin_count = modin_df.count() end = time.time() modin_duration = end - start print("Time to count with Modin: {} seconds".format(round(modin_duration, 3))) printmd("### Modin is {}x faster than pandas at `count`!".format(round(pandas_duration / modin_duration, 2))) """ Explanation: Concept for exercise: Reduces In pandas, a reduce would be something along the lines of a sum or count. It computes some summary statistics about the rows or columns. We will be using count. End of explanation """ pandas_count modin_count """ Explanation: Are they equal? 
End of explanation """ start = time.time() pandas_isnull = pandas_df.isnull() end = time.time() pandas_duration = end - start print("Time to isnull with pandas: {} seconds".format(round(pandas_duration, 3))) start = time.time() modin_isnull = modin_df.isnull() end = time.time() modin_duration = end - start print("Time to isnull with Modin: {} seconds".format(round(modin_duration, 3))) printmd("### Modin is {}x faster than pandas at `isnull`!".format(round(pandas_duration / modin_duration, 2))) """ Explanation: Concept for exercise: Map operations In pandas, map operations are operations that do a single pass over the data and do not change its shape. Operations like isnull and applymap are included in this. We will be using isnull. End of explanation """ pandas_isnull modin_isnull """ Explanation: Are they equal? End of explanation """ start = time.time() rounded_trip_distance_pandas = pandas_df["trip_distance"].apply(round) end = time.time() pandas_duration = end - start print("Time to apply with pandas: {} seconds".format(round(pandas_duration, 3))) start = time.time() rounded_trip_distance_modin = modin_df["trip_distance"].apply(round) end = time.time() modin_duration = end - start print("Time to apply with Modin: {} seconds".format(round(modin_duration, 3))) printmd("### Modin is {}x faster than pandas at `apply` on one column!".format(round(pandas_duration / modin_duration, 2))) """ Explanation: Concept for exercise: Apply over a single column Sometimes we want to compute some summary statistics on a single column from our dataset. End of explanation """ rounded_trip_distance_pandas rounded_trip_distance_modin """ Explanation: Are they equal? 
End of explanation """ start = time.time() pandas_df["rounded_trip_distance"] = rounded_trip_distance_pandas end = time.time() pandas_duration = end - start print("Time to add a column with pandas: {} seconds".format(round(pandas_duration, 3))) start = time.time() modin_df["rounded_trip_distance"] = rounded_trip_distance_modin end = time.time() modin_duration = end - start print("Time to add a column with Modin: {} seconds".format(round(modin_duration, 3))) printmd("### Modin is {}x faster than pandas at adding a column!".format(round(pandas_duration / modin_duration, 2))) """ Explanation: Concept for exercise: Add a column It is common to need to add a new column to an existing dataframe; here we show that this is significantly faster in Modin due to metadata management and an efficient zero-copy implementation. End of explanation """ pandas_df modin_df """ Explanation: Are they equal? End of explanation """
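The pandas-vs-Modin timing pattern repeated in the cells above can be factored into a small helper; this is a sketch under the assumption that wall-clock `time.time()` is precise enough here (`timed` and `speedup` are hypothetical names, not part of the Modin tutorial):

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn(*args, **kwargs) and return (result, elapsed_seconds)."""
    start = time.time()
    result = fn(*args, **kwargs)
    return result, time.time() - start

def speedup(baseline_seconds, candidate_seconds):
    """How many times faster the candidate is than the baseline.
    The tiny floor guards against a zero reading from a coarse timer."""
    return round(baseline_seconds / max(candidate_seconds, 1e-9), 2)

# works with any pair of comparable calls, e.g. a pandas vs. Modin read_csv
_, base = timed(lambda: sum(i * i for i in range(10**6)))
_, cand = timed(lambda: sum(i * i for i in range(10**5)))
print("speedup:", speedup(base, cand))
```

With this helper each exercise cell collapses to two `timed(...)` calls and one `speedup(...)` call.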
BenEfrati/ex1
loan-prediction/HW4.ipynb
mit
%pylab inline """ Explanation: Exercise 2: Data Analysis with Python Based on this great tutorial: https://www.analyticsvidhya.com/blog/2016/01/complete-tutorial-learn-data-science-python-scratch-2/ Our Task: Loan Prediction Practice Problem From the challenge hosted at: https://datahack.analyticsvidhya.com/contest/practice-problem-loan-prediction-iii/ Dream Housing Finance company deals in all home loans. They have a presence across all urban, semi-urban and rural areas. Customers first apply for a home loan; after that, the company validates the customer's eligibility for the loan. The company wants to automate the loan eligibility process (real time) based on the customer details provided while filling in the online application form. These details are Gender, Marital Status, Education, Number of Dependents, Income, Loan Amount, Credit History and others. To automate this process, they have given a problem to identify the customer segments that are eligible for a loan amount, so that they can specifically target these customers. Here they have provided a partial data set. The Data

Variable | Description
----------|--------------
Loan_ID | Unique Loan ID
Gender | Male/ Female
Married | Applicant married (Y/N)
Dependents | Number of dependents
Education | Applicant Education (Graduate/ Under Graduate)
Self_Employed | Self employed (Y/N)
ApplicantIncome | Applicant income
CoapplicantIncome | Coapplicant income
LoanAmount | Loan amount in thousands
Loan_Amount_Term | Term of loan in months
Credit_History | credit history meets guidelines
Property_Area | Urban/ Semi Urban/ Rural
Loan_Status | Loan approved (Y/N)

Evaluation Metric is accuracy, i.e. the percentage of loan approvals you correctly predict. 
You may upload the solution in the format of "sample_submission.csv" Setups To begin, start the iPython interface in Inline Pylab mode by typing the following on your terminal / windows command prompt: End of explanation """ plot(arange(5)) """ Explanation: This opens up iPython notebook in pylab environment, which has a few useful libraries already imported. Also, you will be able to plot your data inline, which makes this a really good environment for interactive data analysis. You can check whether the environment has loaded correctly, by typing the following command (and getting the output as seen in the figure below): End of explanation """ import pandas as pd import numpy as np import matplotlib as plt """ Explanation: Following are the libraries we will use during this task: - numpy - matplotlib - pandas Please note that you do not need to import matplotlib and numpy because of the Pylab environment. I have still kept them in the code, in case you use the code in a different environment. End of explanation """ df = pd.read_csv("./data/train.csv") #Reading the dataset in a dataframe using Pandas test_df = pd.read_csv("./data/test.csv") #Reading the dataset in a dataframe using Pandas """ Explanation: After importing the library, you read the dataset using the function read_csv(). The file is assumed to be downloaded from Moodle to the data folder in your working directory. End of explanation """ df.head(10) """ Explanation: Let’s begin with exploration Quick Data Exploration Once you have read the dataset, you can have a look at a few top rows by using the function head() End of explanation """ df.describe() # get the summary of numerical variables """ Explanation: This should print 10 rows. Alternately, you can also look at more rows by printing the dataset. 
Next, you can look at a summary of numerical fields by using the describe() function End of explanation """ test_df.describe() """ Explanation: The describe() function provides count, mean, standard deviation (std), min, quartiles and max in its output. Try to learn also from the test set End of explanation """ df['Property_Area'].value_counts() test_df['Property_Area'].value_counts() """ Explanation: As we can see, there is no significant difference between test and train, so for the numeric values we will do the same preprocessing. So it is not necessary to combine train and test data for filling missing values. For the non-numerical values (e.g. Property_Area, Credit_History etc.), we can look at the frequency distribution to understand whether they make sense or not. The frequency table can be printed by the following command: End of explanation """ df['ApplicantIncome'].hist(bins=50) test_df['ApplicantIncome'].hist(bins=50) """ Explanation: Distribution analysis Now that we are familiar with basic data characteristics, let us study the distribution of various variables. Let us start with numeric variables – namely ApplicantIncome and LoanAmount. Let's start by plotting the histogram of ApplicantIncome using the following commands: End of explanation """ df.boxplot(column='ApplicantIncome') """ Explanation: Here we observe that there are a few extreme values. This is also the reason why 50 bins are required to depict the distribution clearly. Next, we look at box plots to understand the distributions. A box plot can be plotted by: End of explanation """ df.boxplot(column='ApplicantIncome', by = 'Education') """ Explanation: This confirms the presence of a lot of outliers/extreme values. This can be attributed to the income disparity in the society. Part of this can be driven by the fact that we are looking at people with different education levels. 
Let us segregate them by Education: End of explanation """ df['LoanAmount'].hist(bins=50) test_df['LoanAmount'].hist(bins=50) df.boxplot(column='LoanAmount') """ Explanation: We can see that there is no substantial difference between the mean income of graduates and non-graduates. But there are a higher number of graduates with very high incomes, which appear to be the outliers. Task 2: Distribution Analysis Plot the histogram and boxplot of LoanAmount Check yourself: End of explanation """ temp1 = df['Credit_History'].value_counts(ascending=True) temp1 temp2 = test_df['Credit_History'].value_counts(ascending=True) temp2 """ Explanation: Again, there are some extreme values. Clearly, both ApplicantIncome and LoanAmount require some amount of data munging. LoanAmount has missing as well as extreme values, while ApplicantIncome has a few extreme values, which demand deeper understanding. We will take this up in the coming sections. Categorical variable analysis Frequency Table for Credit History: End of explanation """ temp3 = df.pivot_table(values='Loan_Status',index=['Credit_History'],aggfunc=lambda x: x.map({'Y':1,'N':0}).mean()) temp3 """ Explanation: Probability of getting loan for each Credit History class: End of explanation """ import matplotlib.pyplot as plt fig = plt.figure(figsize=(8,4)) ax1 = fig.add_subplot(121) ax1.set_xlabel('Credit_History') ax1.set_ylabel('Count of Applicants') ax1.set_title("Applicants by Credit_History") temp1.plot(kind='bar') ax2 = fig.add_subplot(122) temp2.plot(kind = 'bar') ax2.set_xlabel('Credit_History') ax2.set_ylabel('Probability of getting loan') ax2.set_title("Probability of getting loan by credit history") """ Explanation: This can be plotted as a bar chart using the “matplotlib” library with the following code: End of explanation """ temp3 = pd.crosstab(df['Credit_History'], df['Loan_Status']) temp3.plot(kind='bar', stacked=True, color=['red','blue'], grid=False) """ Explanation: This shows that the chances of 
getting a loan are eight-fold if the applicant has a valid credit history. You can plot similar graphs by Married, Self-Employed, Property_Area, etc. Alternately, these two plots can also be visualized by combining them in a stacked chart: End of explanation """ df.apply(lambda x: sum(x.isnull()),axis=0) test_df.apply(lambda x: sum(x.isnull()),axis=0) """ Explanation: We just saw how we can do exploratory analysis in Python using Pandas. I hope your love for pandas (the animal) would have increased by now – given the amount of help the library can provide you in analyzing datasets. Next let’s explore ApplicantIncome and LoanStatus variables further, perform data munging and create a dataset for applying various modeling techniques. I would strongly urge that you take another dataset and problem and go through an independent example before reading further. Data Munging in Python : Using Pandas Data munging – recap of the need During our exploration of the data, we found a few problems in the data set, which need to be solved before the data is ready for a good model. This exercise is typically referred to as “Data Munging”. Here are the problems we are already aware of: There are missing values in some variables. We should estimate those values wisely depending on the amount of missing values and the expected importance of variables. While looking at the distributions, we saw that ApplicantIncome and LoanAmount seemed to contain extreme values at either end. Though they might make intuitive sense, they should be treated appropriately. In addition to these problems with numerical fields, we should also look at the non-numerical fields i.e. Gender, Property_Area, Married, Education and Dependents to see if they contain any useful information. Check missing values in the dataset Let us look at missing values in all the variables because most of the models don’t work with missing data and even if they do, imputing them helps more often than not. 
So, let us check the number of nulls / NaNs in the dataset. This command should tell us the number of missing values in each column as isnull() returns 1, if the value is null. End of explanation """ # df['LoanAmount'].fillna(df['LoanAmount'].mean(), inplace=True) """ Explanation: Though the missing values are not very high in number, many variables have them and each one of these should be estimated and added in the data. Note: Remember that missing values may not always be NaNs. For instance, if the Loan_Amount_Term is 0, does it make sense or would you consider that missing? I suppose your answer is missing and you’re right. So we should check for values which are impractical. How to fill missing values in LoanAmount? There are numerous ways to fill the missing values of loan amount – the simplest being replacement by mean, which can be done by the following code: End of explanation """ df['Self_Employed'].value_counts() """ Explanation: The other extreme could be to build a supervised learning model to predict loan amount on the basis of other variables and then use it, along with other variables, to predict the missing values. Since the purpose now is to bring out the steps in data munging, I’ll rather take an approach which lies somewhere in between these 2 extremes. A key hypothesis is that whether a person is educated or self-employed can combine to give a good estimate of loan amount. But first, we have to ensure that each of the Self_Employed and Education variables should not have missing values. As we saw earlier, Self_Employed has some missing values. Let’s look at the frequency table: End of explanation """ #df['Self_Employed'].fillna('No',inplace=True) """ Explanation: Since ~86% values are “No”, it is safe to impute the missing values as “No” as there is a high probability of success. 
This can be done using the following code: End of explanation """ from numpy.random import choice missing = df['Self_Employed'].isnull() df.loc[missing, 'Self_Employed'] = choice(["Yes","No"], missing.sum(), p=[0.14,0.86]) """ Explanation: Self_Employed missing values If we replace all NAs with "No", the ratio will be more than 86% for "No". So we want to preserve this ratio, so we randomly set ~86% of the missing values to No and the remaining to Yes End of explanation """ table = df.pivot_table(values='LoanAmount', index='Self_Employed' ,columns='Education', aggfunc=np.median) table """ Explanation: Now, we will create a Pivot table, which provides us median values for all the groups of unique values of Self_Employed and Education features. Next, we define a function, which returns the values of these cells and apply it to fill the missing values of loan amount: End of explanation """ def fage(x): return table.loc[x['Self_Employed'],x['Education']] """ Explanation: Define function to return value of this pivot_table: End of explanation """ df['LoanAmount'].fillna(df[df['LoanAmount'].isnull()].apply(fage, axis=1), inplace=True) """ Explanation: Replace missing values: End of explanation """ df['LoanAmount_log'] = np.log(df['LoanAmount']) df['LoanAmount_log'].hist(bins=20) """ Explanation: This should provide you a good way to impute missing values of loan amount. How to treat extreme values in the distribution of LoanAmount and ApplicantIncome? Let’s analyze LoanAmount first. Since the extreme values are practically possible, i.e. some people might apply for high value loans due to specific needs, instead of treating them as outliers, let’s try a log transformation to nullify their effect: End of explanation """ df['TotalIncome'] = df['ApplicantIncome'] + df['CoapplicantIncome'] df['TotalIncome_log'] = np.log(df['TotalIncome']) df['LoanAmount_log'].hist(bins=20) """ Explanation: Now the distribution looks much closer to normal and the effect of extreme values has been significantly subsided. Coming to ApplicantIncome. 
One intuition can be that some applicants have lower income but strong support from co-applicants. So it might be a good idea to combine both incomes as total income and take a log transformation of the same. End of explanation """ df['Loan_Amount_Term'].value_counts() df['Loan_Amount_Term'].fillna(360, inplace=True) df['Credit_History'].fillna(1, inplace=True) df['Dependents'].value_counts() df['Dependents'].fillna(0, inplace=True) df['Married'].value_counts() #df['Married'].fillna('Yes', inplace=True) #As in the self employed case, preserve the ratio missing = df['Married'].isnull() df.loc[missing, 'Married'] = choice(["Yes","No"], missing.sum(), p=[0.66,0.34]) df['Gender'].value_counts() #df['Gender'].fillna('Male', inplace=True) #Preserve the ratio missing = df['Gender'].isnull() df.loc[missing, 'Gender'] = choice(["Male","Female"], missing.sum(), p=[0.82,0.18]) """ Explanation: Now we see that the distribution is much better than before. Task 3: Remove nulls Impute the missing values for Gender, Married, Dependents, Loan_Amount_Term, Credit_History Also, I encourage you to think about possible additional information which can be derived from the data. For example, creating a column for LoanAmount/TotalIncome might make sense as it gives an idea of how well the applicant is suited to pay back his loan. Check yourself End of explanation """ df.dtypes from sklearn.preprocessing import LabelEncoder var_mod = ['Gender','Married','Dependents','Education','Self_Employed','Property_Area','Loan_Status'] le = LabelEncoder() for i in var_mod: df[i] = le.fit_transform(df[i].astype(str)) df.dtypes """ Explanation: Next, we will look at making predictive models. Building a Predictive Model in Python Now that we have made the data useful for modeling, let’s look at the python code to create a predictive model on our data set. Scikit-learn (sklearn) is the most commonly used library in Python for this purpose and we will follow the trail. 
Since sklearn requires all inputs to be numeric, we should convert all our categorical variables into numeric by encoding the categories. This can be done using the following code: End of explanation """ #Import models from the scikit-learn module: from sklearn.linear_model import LogisticRegression from sklearn.model_selection import KFold #For K-fold cross validation from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, GradientBoostingClassifier from sklearn.tree import DecisionTreeClassifier, export_graphviz from sklearn import metrics #Generic function for making a classification model and assessing performance: def classification_model(model, data, predictors, outcome): #Fit the model: model.fit(data[predictors],data[outcome]) #Make predictions on training set: predictions = model.predict(data[predictors]) #Print accuracy accuracy = metrics.accuracy_score(predictions,data[outcome]) print("Accuracy : %s" % "{0:.3%}".format(accuracy)) #Perform k-fold cross-validation with 5 folds kf = KFold(n_splits=5) error = [] for train, test in kf.split(data): # Filter training data train_predictors = (data[predictors].iloc[train,:]) # The target we're using to train the algorithm. train_target = data[outcome].iloc[train] # Training the algorithm using the predictors and target. model.fit(train_predictors, train_target) #Record error from each cross-validation run error.append(model.score(data[predictors].iloc[test,:], data[outcome].iloc[test])) print("Cross-Validation Score : %s" % "{0:.3%}".format(np.mean(error))) #Fit the model again so that it can be referred to outside the function: model.fit(data[predictors],data[outcome]) """ Explanation: Next, we will import the required modules. Then we will define a generic classification function, which takes a model as input and determines the Accuracy and Cross-Validation scores. 
End of explanation
"""

outcome_var = 'Loan_Status'
model = LogisticRegression()
predictor_var = ['Credit_History']
classification_model(model, df,predictor_var,outcome_var)
"""
Explanation: Logistic Regression
Let's make our first Logistic Regression model. One way would be to take all the variables into the model, but this might result in overfitting. In simple words, taking all variables might result in the model understanding complex relations specific to the data, so it will not generalize well.
We can easily make some intuitive hypotheses to set the ball rolling. The chances of getting a loan will be higher for:
Applicants having a credit history (remember we observed this in exploration?)
Applicants with higher applicant and co-applicant incomes
Applicants with higher education level
Properties in urban areas with high growth perspectives
So let's make our first model with 'Credit_History'.
End of explanation
"""

predictor_var = ['Credit_History','Education','Married','Self_Employed','Property_Area']
classification_model(model, df,predictor_var,outcome_var)
#write_predict(model.predict(test_df[predictor_var]), test_df)
"""
Explanation: We can try different combinations of variables:
End of explanation
"""

model = DecisionTreeClassifier()
predictor_var = ['Credit_History','Gender','Married','Education']
classification_model(model, df,predictor_var,outcome_var)
"""
Explanation: Generally we expect the accuracy to increase on adding variables. But this is a more challenging case. The accuracy and cross-validation score are not getting impacted by less important variables. Credit_History is dominating the model. We have two options now:
Feature Engineering: derive new information and try to predict those. I will leave this to your creativity.
Better modeling techniques. Let's explore this next.
Decision Tree
Decision tree is another method for making a predictive model. It is known to provide higher accuracy than the logistic regression model.
End of explanation
"""

#We can try different combinations of variables:
predictor_var = ['Credit_History','Loan_Amount_Term','LoanAmount_log']
classification_model(model, df,predictor_var,outcome_var)
"""
Explanation: Here the model based on categorical variables is unable to have an impact because Credit History is dominating over them. Let's try a few numerical variables:
Task 4: Modeling the Data
Train a decision tree classifier that accepts the following attributes as input:
Credit_History, Loan_Amount_Term, LoanAmount_log
Is it better than the previous decision tree?
Check yourself:
End of explanation
"""

model = RandomForestClassifier(n_estimators=100)
predictor_var = ['Gender', 'Married', 'Dependents', 'Education',
       'Self_Employed', 'Loan_Amount_Term', 'Credit_History', 'Property_Area',
        'LoanAmount_log','TotalIncome_log']
classification_model(model, df,predictor_var,outcome_var)
"""
Explanation: 2. Not much better, according to the test score.
Here we observed that although the accuracy went up on adding variables, the cross-validation score went down. This is the result of the model over-fitting the data. Let's try an even more sophisticated algorithm and see if it helps:
Random Forest
Random forest is another algorithm for solving the classification problem. An advantage with Random Forest is that we can make it work with all the features, and it returns a feature importance matrix which can be used to select features.
End of explanation
"""

#Create a series with feature importances:
featimp = pd.Series(model.feature_importances_, index=predictor_var).sort_values(ascending=False)
print(featimp)
"""
Explanation: Here we see that the accuracy is 100% for the training set. This is the ultimate case of overfitting and can be resolved in two ways:
Reducing the number of predictors
Tuning the model parameters
Let's try both of these. First we see the feature importance matrix from which we'll take the most important features.
End of explanation """ model = RandomForestClassifier(n_estimators=100, min_samples_split=25) predictor_var = ['TotalIncome_log','LoanAmount_log','Credit_History','Gender','Married','Self_Employed','Dependents','Property_Area'] classification_model(model, df,predictor_var,outcome_var) #write_predict(model.predict(test_df[predictor_var]), test_df) """ Explanation: Let’s use the top 5 variables for creating a model. Also, we will modify the parameters of random forest model a little bit: End of explanation """ from numpy.random import choice from sklearn.preprocessing import LabelEncoder def preprocess(df, is_test): draw = choice(["Yes","No"], 1, p=[0.14,0.86])[0] df['Self_Employed'].fillna(draw,inplace=True) df['LoanAmount'].fillna(df[df['LoanAmount'].isnull()].apply(fage, axis=1), inplace=True) df['LoanAmount_log'] = np.log(df['LoanAmount']) df['TotalIncome'] = df['ApplicantIncome'] + df['CoapplicantIncome'] df['TotalIncome_log'] = np.log(df['TotalIncome']) term_list = df['Loan_Amount_Term'].value_counts().index.tolist() term_mean = np.mean(term_list) df['Loan_Amount_Term'].fillna(term_mean, inplace=True) draw = choice([1,0], 1, p=[0.14,0.86])[0] df['Credit_History'].fillna(1, inplace=True) df['Dependents'].fillna(0, inplace=True) draw = choice(["Yes","No"], 1, p=[0.66,0.34])[0] df['Married'].fillna(draw,inplace=True) draw = choice(["Male","Female"], 1, p=[0.82,0.18])[0] df['Gender'].fillna(draw,inplace=True) if is_test: var_mod = ['Gender','Married','Dependents','Education','Self_Employed','Property_Area'] else: var_mod = ['Gender','Married','Dependents','Education','Self_Employed','Property_Area','Loan_Status'] le = LabelEncoder() for i in var_mod: df[i] = le.fit_transform(df[i].astype(str)) def write_predict(result, df): df["Loan_Status"] = ['Y' if x==1 else 'N' for x in result] df.to_csv('./data/result.csv',columns=['Loan_ID','Loan_Status'],index=False) """ Explanation: Notice that although accuracy reduced, but the cross-validation score is improving showing 
that the model is generalizing well. Remember that random forest models are not exactly repeatable. Different runs will result in slight variations because of randomization. But the output should stay in the ballpark.
You would have noticed that even after some basic parameter tuning on random forest, we have reached a cross-validation accuracy only slightly better than the original logistic regression model. This exercise gives us some very interesting and unique learnings:
Using a more sophisticated model does not guarantee better results.
Avoid using complex modeling techniques as a black box without understanding the underlying concepts. Doing so would increase the tendency of overfitting, thus making your models less interpretable.
Feature engineering is the key to success. Everyone can use an XGBoost model, but the real art and creativity lies in enhancing your features to better suit the model.
Be proud of yourself for getting this far! You are invited to improve your result and submit to the site to test your place in the leaderboard.
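The run-to-run variation mentioned above comes from the bootstrap sampling and random feature selection inside the forest; fixing the random_state parameter makes runs repeatable. A small sketch on synthetic data (illustrative, not the loan dataset):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=6, random_state=0)

# Two forests trained with the same seed produce identical predictions,
# so results become repeatable from run to run.
a = RandomForestClassifier(n_estimators=25, random_state=42).fit(X, y).predict(X)
b = RandomForestClassifier(n_estimators=25, random_state=42).fit(X, y).predict(X)
print((a == b).all())
```

Leaving random_state unset restores the "stay in the ballpark" behavior described above.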
Data preprocessing
Same as above, just in a function
End of explanation
"""

test_df = pd.read_csv("./data/test.csv") #Reading the dataset in a dataframe using Pandas
preprocess(test_df,True)
df = pd.read_csv("./data/train.csv") #Reading the dataset in a dataframe using Pandas
preprocess(df, False)
"""
Explanation: Test Set
Read the data again and apply the same preprocessing to both train and test before building the classification model.
End of explanation
"""

model = ExtraTreesClassifier(n_estimators=20)
predictor_var = ['TotalIncome_log','LoanAmount_log','Credit_History','Property_Area']
classification_model(model, df,predictor_var,outcome_var)
write_predict(model.predict(test_df[predictor_var]), test_df)

featimp = pd.Series(model.feature_importances_, index=predictor_var).sort_values(ascending=False)
print(featimp)
"""
Explanation: Model classifier parameters:
n_estimators
According to articles in this domain, the square of the number of trees should roughly equal the number of features. In addition, there is a relation between the dataset size (which is small here) and the number of trees: enlarging the number of trees affects the improvement of the model. We also ran some empirical experiments and found that the best value for our case is 20-100.
Extra Tree
This class implements a meta estimator that fits a number of randomized decision trees (a.k.a. extra-trees) on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting.
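One caveat worth flagging before moving on: the preprocess function above fits a fresh LabelEncoder on each dataframe, so the same category can end up mapped to different integers in train and test. A safer sketch (with made-up category values) fits the encoder once on the training column and reuses the mapping:

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

train = pd.DataFrame({'Property_Area': ['Urban', 'Rural', 'Semiurban', 'Urban']})
test  = pd.DataFrame({'Property_Area': ['Rural', 'Urban']})

le = LabelEncoder()
train['Property_Area'] = le.fit_transform(train['Property_Area'])
# transform (not fit_transform): reuse the training mapping on the test set
test['Property_Area'] = le.transform(test['Property_Area'])
print(dict(zip(le.classes_, range(len(le.classes_)))))
```

LabelEncoder sorts the classes it sees, so here Rural maps to 0, Semiurban to 1 and Urban to 2 in both frames.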
End of explanation """ model = GradientBoostingClassifier(n_estimators=100) predictor_var = ['Gender', 'Married', 'Dependents', 'Education', 'Self_Employed', 'Loan_Amount_Term', 'Credit_History', 'Property_Area', 'LoanAmount_log','TotalIncome_log'] classification_model(model, df,predictor_var,outcome_var) write_predict(model.predict(test_df[predictor_var]), test_df) #With different loss function predictor_var = ['Self_Employed', 'Credit_History', 'Property_Area'] model = GradientBoostingClassifier(n_estimators=10,loss='exponential') classification_model(model, df,predictor_var,outcome_var) write_predict(model.predict(test_df[predictor_var]), test_df) model = RandomForestClassifier(n_estimators=30,min_samples_split=15) predictor_var = ['TotalIncome_log','LoanAmount_log','Loan_Amount_Term','Credit_History','Property_Area'] classification_model(model, df,predictor_var,outcome_var) write_predict(model.predict(test_df[predictor_var]), test_df) """ Explanation: Gradient Boosting GB builds an additive model in a forward stage-wise fashion; it allows for the optimization of arbitrary differentiable loss functions. End of explanation """
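The forward stage-wise fitting described above can be observed directly with staged_predict, which yields predictions after each boosting stage. A sketch on synthetic data (not the loan set):

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
gb = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X, y)

# Training accuracy after each added tree: later stages fit the
# training data at least as well as the first stage.
staged_acc = [accuracy_score(y, pred) for pred in gb.staged_predict(X)]
print(staged_acc[0], staged_acc[-1])
```

The same mechanism is what makes the choice of loss (deviance vs. exponential, as tried above) matter: each stage fits the gradient of that loss.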
jingr1/SelfDrivingCar
AStarSearch/project_notebook.ipynb
mit
# Run this cell first!

from helpers import Map, load_map, show_map
from student_code import shortest_path

%load_ext autoreload
%autoreload 2
"""
Explanation: Implementing a Route Planner
In this project you will use A* search to implement a "Google-maps" style route planning algorithm.
End of explanation
"""

map_10 = load_map('map-10.pickle')
show_map(map_10)
"""
Explanation: Map Basics
End of explanation
"""

map_10.intersections
"""
Explanation: The map above (run the code cell if you don't see it) shows a disconnected network of 10 intersections. The two intersections on the left are connected to each other but they are not connected to the rest of the road network. These Map objects have two properties you will want to use to implement A* search: intersections and roads
Intersections
The intersections are represented as a dictionary.
In this example, there are 10 intersections, each identified by an x,y coordinate. The coordinates are listed below. You can hover over each dot in the map above to see the intersection number.
End of explanation
"""

# this shows the intersections that intersection 8 connects to
map_10.roads[8]

# This shows the full connectivity of the map
map_10.roads

# map_40 is a bigger map than map_10
map_40 = load_map('map-40.pickle')
show_map(map_40)
map_40.roads[24]
"""
Explanation: Roads
The roads property is a list where roads[i] contains a list of the intersections that intersection i connects to.
End of explanation
"""

# run this code, note the effect of including the optional
# parameters in the function call.
show_map(map_40, start=5, goal=34, path=[5,16,37,12,34])
"""
Explanation: Advanced Visualizations
The map above shows a network of roads which spans 40 different intersections (labeled 0 through 39).
The show_map function which generated this map also takes a few optional parameters which might be useful for visualizing the output of the search algorithm you will write.
start - The "start" node for the search algorithm.
goal - The "goal" node.
path - An array of integers which corresponds to a valid sequence of intersection visits on the map.
End of explanation
"""

path = shortest_path(map_40, 5, 34)
if path == [5, 16, 37, 12, 34]:
    print("Great! Your code works for these inputs!")
else:
    print("Something is off, your code produced the following:")
    print(path)
"""
Explanation: Writing your algorithm
You should open the file student_code.py in another tab and work on your algorithm there. Do that by selecting File &gt; Open and then selecting the appropriate file.
The algorithm you write will be responsible for generating a path like the one passed into show_map above. In fact, when called with the same map, start and goal as above, your algorithm should produce the path [5, 16, 37, 12, 34].
```bash
shortest_path(map_40, 5, 34)
[5, 16, 37, 12, 34]
```
End of explanation
"""

from test import test
test(shortest_path)
"""
Explanation: Testing your Code
If the code below produces no errors, your algorithm is behaving correctly. You are almost ready to submit! Before you submit, go through the following submission checklist:
Submission Checklist
Does my code pass all tests?
Does my code implement A* search and not some other search algorithm?
Do I use an admissible heuristic to direct search efforts towards the goal?
Do I use data structures which avoid unnecessarily slow lookups?
When you can answer "yes" to all of these questions, submit by pressing the Submit button in the lower right!
End of explanation
"""
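A minimal sketch of the A* search described in the checklist, assuming straight-line (Euclidean) distance to the goal as the admissible heuristic and the intersections/roads interface shown above. This is an illustrative outline only; the graded solution still belongs in student_code.py:

```python
import heapq
from math import sqrt

def shortest_path_sketch(M, start, goal):
    """A* over a Map-like object with .intersections {id: (x, y)} and .roads lists."""
    def dist(a, b):
        (x1, y1), (x2, y2) = M.intersections[a], M.intersections[b]
        return sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)

    def h(node):  # admissible heuristic: straight-line distance to the goal
        return dist(node, goal)

    frontier = [(h(start), start)]   # min-heap ordered by f = g + h
    g = {start: 0.0}                 # cheapest known cost from start
    came_from = {start: None}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:             # reconstruct the path backwards
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nbr in M.roads[node]:
            cand = g[node] + dist(node, nbr)
            if cand < g.get(nbr, float("inf")):
                g[nbr] = cand
                came_from[nbr] = node
                heapq.heappush(frontier, (cand + h(nbr), nbr))
    return None                      # goal unreachable (disconnected map)
```

Because the heuristic never overestimates the remaining road distance, the first time the goal is popped from the heap the reconstructed path is optimal. The sketch has not been run against the project's helpers.Map class.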
InsightLab/data-science-cookbook
2020/05-geographic-information-system/Notebook_Geometric_Operations.ipynb
mit
# Import necessary modules import pandas as pd import geopandas as gpd from shapely.geometry import Point # Filepath fp = r"data/roubos.csv" # Read the data data = pd.read_csv(fp, sep=',') data """ Explanation: 1. Geocoding no Geopandas O Geocoding é o processo de transformar um endereço em coordenadas geográficas (formato numérico). Em contrapartida a geocodificação reversa transforma coordenadas em um endereço. Utilizando o geopandas, podemos fazer operação de geocoding através da função geocode(), que recebe uma lista de endereços (string) e retorna um GeoDataFrame contendo o resultado em objetos Point na coluna geometry. Nós geocodificaremos os endereços armazenados em um arquivo de texto chamado roubos.csv, que é uma pequena amostra com apenas 5 tuplas contendo informações de eventos de roubos que aconteceram na cidade de Fortaleza. vamos carregar os dados utilizando pandas com a função read_csv() e mostrá-los. End of explanation """ data['endereco'] = data['logradouro'] + ', ' + data['localNumero'].apply(str) data.head() """ Explanation: Perceba que apesar de possuírmos informação de endereço, não temos coordenadas dos eventos, o que dificulta qualquer tipo de análise. Para obtermos as coordenadas vamos fazer o geocoding dos endereços. Mas antes, vamos unir todas as informações de endereço em uma coluna só chamada de endereco. End of explanation """ # Import the geocoding tool from geopandas.tools import geocode # Geocode addresses with Nominatim backend geo = geocode(data['endereco'], provider = 'nominatim', user_agent ='carlos') geo """ Explanation: Agora vamos transformar os endereços em coordenadas usando geocode() com a ferramente de busca de dados Nominatim que realiza consultas no OpenStreetMap. 
Antes será necessário instalar a biblioteca geopy com o pip, para isso utilize o comando: pip install geopy End of explanation """ data['geometry'] = geo['geometry'] data.head() """ Explanation: Como resultado, temos um GeoDataFrame que contém nosso endereço e uma coluna 'geometry' contendo objeto Point que podemos usar para exportar os endereços para um Shapefile por exemplo. Como os indices das duas tabelas são iguais, podemos unir facilmente. End of explanation """ from shapely.geometry import Point, Polygon # Create Point objects p1 = Point(24.952242, 60.1696017) p2 = Point(24.976567, 60.1612500) # Create a Polygon coords = [(24.950899, 60.169158), (24.953492, 60.169158), (24.953510, 60.170104), (24.950958, 60.169990)] poly = Polygon(coords) # Let's check what we have print(p1) print(p2) print(poly) """ Explanation: Notas sobre a ferramenta Nominatim Nominatim funciona relativamente bem se você tiver endereços bem definidos e bem conhecidos, como os que usamos neste tutorial. No entanto, em alguns casos, talvez você não tenha endereços bem definidos e você pode ter, por exemplo, apenas o nome de um shopping ou uma lanchonete. Nesses casos, a Nominatim pode não fornecer resultados tão bons e, porém você pode utilizar outras APIs como o Google Geocoding API (V3). 2. Operações entre geometrias Descobrir se um certo ponto está localizado dentro ou fora de uma área, ou descobrir se uma linha cruza com outra linha ou polígono são operações geoespaciais fundamentais que são frequentemente usadas, e selecionar dados baseados na localização. Tais consultas espaciais são uma das primeiras etapas do fluxo de trabalho ao fazer análise espacial. 2.1 Como verificar se o ponto está dentro de um polígono? Computacionalmente, detectar se um ponto está dentro de um polígono é mais comumente feito utilizando uma fórmula específica chamada algoritmo Ray Casting. 
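The Ray Casting algorithm mentioned above can be sketched in pure Python: cast a horizontal ray from the point and count how many polygon edges it crosses (an odd count means the point is inside). This even-odd version is for illustration; it is not the implementation Shapely uses internally:

```python
def point_in_polygon(x, y, vertices):
    """Even-odd ray casting; vertices is a list of (x, y) tuples."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Does this edge straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:  # crossing lies to the right of the point
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square), point_in_polygon(5, 2, square))  # True False
```

Each toggle of the inside flag corresponds to the ray entering or leaving the polygon.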
Em vez disso, podemos tomar vantagem dos predicados binários de Shapely que podem avaliar as relações topológicas com os objetos. Existem basicamente duas maneiras de conduzir essa consulta com o Shapely: usando uma função chamada    .within()    que verifica se um ponto está dentro de um polígono usando uma função chamada    .contains ()    que verifica se um polígono contém um ponto Aviso: apesar de estarmos falando aqui sobre a operação de Point dentro de um Polygon, também é possível verificar se um LineString ou Polygon esta dentro de outro Polygon. Vamos primeiro criar um polígono usando uma lista de coordenadas-tuplas e um    par de objetos pontuais End of explanation """ # Check if p1 is within the polygon using the within function print(p1.within(poly)) # Check if p2 is within the polygon print(p2.within(poly)) """ Explanation: Vamos verificar se esses pontos estão dentro do polígono End of explanation """ # Our point print(p1) # The centroid print(poly.centroid) """ Explanation: Então podemos ver que o primeiro ponto parece estar dentro do polígono e o segundo não. Na verdade, o primeiro ponto é perto do centro do polígono, como nós     podemos ver se compararmos a localização do ponto com o centróide do polígono: End of explanation """ from shapely.geometry import LineString, MultiLineString # Create two lines line_a = LineString([(0, 0), (1, 1)]) line_b = LineString([(1, 1), (0, 2)]) """ Explanation: 2.2 Interseção Outra operação geoespacial típica é ver se uma geometria intercepta ou toca outra geometria. A diferença entre esses dois é que: Se os objetos se cruzam, o limite e o interior de um objeto precisa interceptar com os do outro objeto. Se um objeto tocar o outro, só é necessário ter (pelo menos) um ponto único de suas fronteiras em comum, mas seus interiores não se cruzam. Vamos tentar isso. 
Vamos criar dois LineStrings End of explanation """ line_a.intersects(line_b) """ Explanation: Vamos ver se eles se interceptam End of explanation """ line_a.touches(line_b) """ Explanation: Eles também tocam um ao outro? End of explanation """ # Create a MultiLineString from line_a and line_b multi_line = MultiLineString([line_a, line_b]) multi_line """ Explanation: Sim, as duas operações são verdade e podemos ver isso plotando os dois objetos juntos. End of explanation """ ais_filep = 'data/ais.shp' ais_gdf = gpd.read_file(ais_filep) ais_gdf.crs ais_gdf.head() import matplotlib.pyplot as plt fig, ax = plt.subplots(1,1, figsize=(15,8)) ais_gdf.plot(ax=ax) plt.show() """ Explanation: 2.3 Ponto dentro de polygon usando o geopandas Uma das estratégias adotadas pela Secretaria da Segurança Pública e Defesa Social (SSPDS) para o aperfeiçoamento de trabalhos policiais, periciais e bombeirísticos em território cearense é a delimitação do Estado em Áreas Integradas de Segurança (AIS). A cidade de fortaleza por si só é dividida em cerca de 10 áreas integradas de segurança (AIS). Vamos carregar estas divisões administrativas e visualizar elas. End of explanation """ data_gdf = gpd.GeoDataFrame(data) data_gdf.crs = ais_gdf.crs data_gdf.head() """ Explanation: Agora vamos mostrar somente as fronteiras das AIS e os nosso eventos de crimes. Mas antes bora transformar os nosso dados de roubo em um GeoDataFrame. End of explanation """ fig, ax = plt.subplots(1,1, figsize=(15,8)) for idx, ais in ais_gdf.iterrows(): ax.plot(*ais['geometry'].exterior.xy, color='black') data_gdf.plot(ax=ax, color='red') plt.show() """ Explanation: Agora sim, vamos mostrar as fronteiras de cada AIS juntamente com os eventos de roubo. End of explanation """ ais_gdf """ Explanation: Relembrando o endereço dos nosso dados, dois roubos aconteceram na avenida bezerra de menezes próximos ao north shopping. 
Sabendo que a AIS que contém o shopping é a de número 6, vamos selecionar somente os eventos de roubo dentro da AIS 6. Primeiro vamos separar somente a geometria da AIS 6. Antes vamos visualizar os dados e verificar qual coluna pode nos ajudar nessa tarefa. End of explanation """ ais6 = ais_gdf[ais_gdf['AIS'] == 6] ais6.plot() plt.show() ais6_geometry = ais6.iloc[0].geometry ais6_geometry type(ais6) """ Explanation: Existem duas colunas que podem nos ajudar a filtrar a AIS desejada, a coluna AIS e a coluna NM_AIS. Vamos utilizar a primeira por ser necessário utilizar apenas o número. End of explanation """ mask = data_gdf.within(ais6.geometry[0]) mask data_gdf_ais6 = data_gdf[mask] data_gdf_ais6 """ Explanation: Agora podemos utilizar a função within() para selecionar apenas os eventos que aconteceram dentro da AIS 6. End of explanation """ import folium map_fortal = folium.Map(location=[data_gdf_ais6.loc[0, 'geometry'].y, data_gdf_ais6.loc[0, 'geometry'].x], zoom_start = 14) folium.Marker([data_gdf_ais6.loc[0, 'geometry'].y, data_gdf_ais6.loc[0, 'geometry'].x]).add_to(map_fortal) folium.Marker([data_gdf_ais6.loc[1, 'geometry'].y, data_gdf_ais6.loc[1, 'geometry'].x]).add_to(map_fortal) border_layer = folium.features.GeoJson(ais6_geometry, style_function=lambda feature: { 'color': 'red', 'weight' : 2, 'fillOpacity' : 0.2, 'opacity': 1, }).add_to(map_fortal) map_fortal """ Explanation: Vamos ver os nosso dados em um mapa utilizando a o módulo Folium: conda install -c conda-forge folium End of explanation """
tensorflow/docs-l10n
site/ko/tutorials/load_data/unicode.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #@title MIT License # # Copyright (c) 2017 François Chollet # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the "Software"), # to deal in the Software without restriction, including without limitation # the rights to use, copy, modify, merge, publish, distribute, sublicense, # and/or sell copies of the Software, and to permit persons to whom the # Software is furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER # DEALINGS IN THE SOFTWARE. """ Explanation: Copyright 2018 The TensorFlow Authors. 
End of explanation """ import tensorflow as tf """ Explanation: 유니코드 문자열 <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/load_data/unicode"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" /> TensorFlow.org에서 보기</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/load_data/unicode.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />구글 코랩(Colab)에서 실행하기</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/load_data/unicode.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />깃허브(GitHub) 소스 보기</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/load_data/unicode.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Note: 이 문서는 텐서플로 커뮤니티에서 번역했습니다. 커뮤니티 번역 활동의 특성상 정확한 번역과 최신 내용을 반영하기 위해 노력함에도 불구하고 공식 영문 문서의 내용과 일치하지 않을 수 있습니다. 이 번역에 개선할 부분이 있다면 tensorflow/docs-l10n 깃헙 저장소로 풀 리퀘스트를 보내주시기 바랍니다. 문서 번역이나 리뷰에 참여하려면 docs-ko@tensorflow.org로 메일을 보내주시기 바랍니다. 소개 자연어 처리 모델은 종종 다른 문자 집합을 갖는 다양한 언어를 다루게 됩니다. 유니코드(unicode)는 거의 모든 언어의 문자를 표현할 수 있는 표준 인코딩 시스템입니다. 각 문자는 0부터 0x10FFFF 사이의 고유한 정수 코드 포인트(code point)를 사용해서 인코딩됩니다. 유니코드 문자열은 0개 또는 그 이상의 코드 포인트로 이루어진 시퀀스(sequence)입니다. 이 튜토리얼에서는 텐서플로(Tensorflow)에서 유니코드 문자열을 표현하고, 표준 문자열 연산의 유니코드 버전을 사용해서 유니코드 문자열을 조작하는 방법에 대해서 소개합니다. 또한 스크립트 감지(script detection)를 활용하여 유니코드 문자열을 토큰으로 분리해 보겠습니다. End of explanation """ tf.constant(u"Thanks 😊") """ Explanation: tf.string 데이터 타입 텐서플로의 기본 tf.string dtype은 바이트 문자열로 이루어진 텐서를 만듭니다. 유니코드 문자열은 기본적으로 utf-8로 인코딩 됩니다. End of explanation """ tf.constant([u"You're", u"welcome!"]).shape """ Explanation: tf.string 텐서는 바이트 문자열을 최소 단위로 다루기 때문에 다양한 길이의 바이트 문자열을 다룰 수 있습니다. 문자열 길이는 텐서 차원(dimensions)에 포함되지 않습니다. 
End of explanation """ # UTF-8로 인코딩된 string 스칼라로 표현한 유니코드 문자열입니다. text_utf8 = tf.constant(u"语言处理") text_utf8 # UTF-16-BE로 인코딩된 string 스칼라로 표현한 유니코드 문자열입니다. text_utf16be = tf.constant(u"语言处理".encode("UTF-16-BE")) text_utf16be # 유니코드 코드 포인트의 벡터로 표현한 유니코드 문자열입니다. text_chars = tf.constant([ord(char) for char in u"语言处理"]) text_chars """ Explanation: 노트: 파이썬을 사용해 문자열을 만들 때 버전 2와 버전 3에서 유니코드를 다루는 방식이 다릅니다. 버전 2에서는 위와 같이 "u" 접두사를 사용하여 유니코드 문자열을 나타냅니다. 버전 3에서는 유니코드 인코딩된 문자열이 기본값입니다. 유니코드 표현 텐서플로에서 유니코드 문자열을 표현하기 위한 두 가지 방법이 있습니다: string 스칼라 — 코드 포인트의 시퀀스가 알려진 문자 인코딩을 사용해 인코딩됩니다. int32 벡터 — 위치마다 개별 코드 포인트를 포함합니다. 예를 들어, 아래의 세 가지 값이 모두 유니코드 문자열 "语言处理"(중국어로 "언어 처리"를 의미함)를 표현합니다. End of explanation """ tf.strings.unicode_decode(text_utf8, input_encoding='UTF-8') tf.strings.unicode_encode(text_chars, output_encoding='UTF-8') tf.strings.unicode_transcode(text_utf8, input_encoding='UTF8', output_encoding='UTF-16-BE') """ Explanation: 표현 간의 변환 텐서플로는 다른 표현으로 변환하기 위한 연산을 제공합니다. tf.strings.unicode_decode: 인코딩된 string 스칼라를 코드 포인트의 벡터로 변환합니다. tf.strings.unicode_encode: 코드 포인트의 벡터를 인코드된 string 스칼라로 변환합니다. tf.strings.unicode_transcode: 인코드된 string 스칼라를 다른 인코딩으로 변환합니다. End of explanation """ # UTF-8 인코딩된 문자열로 표현한 유니코드 문자열의 배치입니다. batch_utf8 = [s.encode('UTF-8') for s in [u'hÃllo', u'What is the weather tomorrow', u'Göödnight', u'😊']] batch_chars_ragged = tf.strings.unicode_decode(batch_utf8, input_encoding='UTF-8') for sentence_chars in batch_chars_ragged.to_list(): print(sentence_chars) """ Explanation: 배치(batch) 차원 여러 개의 문자열을 디코딩 할 때 문자열마다 포함된 문자의 개수는 동일하지 않습니다. 반환되는 값은 tf.RaggedTensor로 가장 안쪽 차원의 크기가 문자열에 포함된 문자의 개수에 따라 결정됩니다. End of explanation """ batch_chars_padded = batch_chars_ragged.to_tensor(default_value=-1) print(batch_chars_padded.numpy()) batch_chars_sparse = batch_chars_ragged.to_sparse() """ Explanation: tf.RaggedTensor를 바로 사용하거나, 패딩(padding)을 사용해 tf.Tensor로 변환하거나, tf.RaggedTensor.to_tensor 와 tf.RaggedTensor.to_sparse 메서드를 사용해 tf.SparseTensor로 변환할 수 있습니다. 
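What to_tensor(default_value=-1) does can be sketched without TensorFlow: pad every row of a ragged list of code points out to the longest row's length.

```python
def pad_ragged(rows, default=-1):
    """Pad variable-length rows of code points to a rectangular list of lists."""
    width = max(len(r) for r in rows)
    return [r + [default] * (width - len(r)) for r in rows]

rows = [[ord(c) for c in s] for s in ["hÃllo", "😊"]]
padded = pad_ragged(rows)
print(padded)
```

The padded result is rectangular, which is exactly what a dense tf.Tensor requires.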
End of explanation """ tf.strings.unicode_encode([[99, 97, 116], [100, 111, 103], [ 99, 111, 119]], output_encoding='UTF-8') """ Explanation: 길이가 같은 여러 문자열을 인코딩할 때는 tf.Tensor를 입력으로 사용합니다. End of explanation """ tf.strings.unicode_encode(batch_chars_ragged, output_encoding='UTF-8') """ Explanation: 길이가 다른 여러 문자열을 인코딩할 때는 tf.RaggedTensor를 입력으로 사용해야 합니다. End of explanation """ tf.strings.unicode_encode( tf.RaggedTensor.from_sparse(batch_chars_sparse), output_encoding='UTF-8') tf.strings.unicode_encode( tf.RaggedTensor.from_tensor(batch_chars_padded, padding=-1), output_encoding='UTF-8') """ Explanation: 패딩된 텐서나 희소(sparse) 텐서는 unicode_encode를 호출하기 전에 tf.RaggedTensor로 바꿉니다. End of explanation """ # UTF8에서 마지막 문자는 4바이트를 차지합니다. thanks = u'Thanks 😊'.encode('UTF-8') num_bytes = tf.strings.length(thanks).numpy() num_chars = tf.strings.length(thanks, unit='UTF8_CHAR').numpy() print('{} 바이트; {}개의 UTF-8 문자'.format(num_bytes, num_chars)) """ Explanation: 유니코드 연산 길이 tf.strings.length 연산은 계산해야 할 길이를 나타내는 unit 인자를 가집니다. unit의 기본 단위는 "BYTE"이지만 인코딩된 string에 포함된 유니코드 코드 포인트의 수를 파악하기 위해 "UTF8_CHAR"나 "UTF16_CHAR"같이 다른 값을 설정할 수 있습니다. End of explanation """ # 기본: unit='BYTE'. len=1이면 바이트 하나를 반환합니다. tf.strings.substr(thanks, pos=7, len=1).numpy() # unit='UTF8_CHAR'로 지정하면 4 바이트인 문자 하나를 반환합니다. print(tf.strings.substr(thanks, pos=7, len=1, unit='UTF8_CHAR').numpy()) """ Explanation: 부분 문자열 이와 유사하게 tf.strings.substr 연산은 "unit" 매개변수 값을 사용해 "pos"와 "len" 매개변수로 지정된 문자열의 종류를 결정합니다. End of explanation """ tf.strings.unicode_split(thanks, 'UTF-8').numpy() """ Explanation: 유니코드 문자열 분리 tf.strings.unicode_split 연산은 유니코드 문자열의 개별 문자를 부분 문자열로 분리합니다. End of explanation """ codepoints, offsets = tf.strings.unicode_decode_with_offsets(u"🎈🎉🎊", 'UTF-8') for (codepoint, offset) in zip(codepoints.numpy(), offsets.numpy()): print("바이트 오프셋 {}: 코드 포인트 {}".format(offset, codepoint)) """ Explanation: 문자 바이트 오프셋 tf.strings.unicode_decode로 만든 문자 텐서를 원본 문자열과 위치를 맞추려면 각 문자의 시작 위치의 오프셋(offset)을 알아야 합니다. 
tf.strings.unicode_decode_with_offsets은 unicode_decode와 비슷하지만 각 문자의 시작 오프셋을 포함한 두 번째 텐서를 반환합니다. End of explanation """ uscript = tf.strings.unicode_script([33464, 1041]) # ['芸', 'Б'] print(uscript.numpy()) # [17, 8] == [USCRIPT_HAN, USCRIPT_CYRILLIC] """ Explanation: 유니코드 스크립트 각 유니코드 코드 포인트는 스크립트(script)라 부르는 하나의 코드 포인트의 집합(collection)에 속합니다. 문자의 스크립트는 문자가 어떤 언어인지 결정하는 데 도움이 됩니다. 예를 들어, 'Б'가 키릴(Cyrillic) 스크립트라는 것을 알고 있으면 이 문자가 포함된 텍스트는 아마도 (러시아어나 우크라이나어 같은) 슬라브 언어라는 것을 알 수 있습니다. 텐서플로는 주어진 코드 포인트가 어떤 스크립트를 사용하는지 판별하기 위해 tf.strings.unicode_script 연산을 제공합니다. 스크립트 코드는 International Components for Unicode (ICU) UScriptCode 값과 일치하는 int32 값입니다. End of explanation """ print(tf.strings.unicode_script(batch_chars_ragged)) """ Explanation: tf.strings.unicode_script 연산은 코드 포인트의 다차원 tf.Tensor나 tf.RaggedTensor에 적용할 수 있습니다: End of explanation """ # dtype: string; shape: [num_sentences] # # 처리할 문장들 입니다. 이 라인을 수정해서 다른 입력값을 시도해 보세요! sentence_texts = [u'Hello, world.', u'世界こんにちは'] """ Explanation: 예제: 간단한 분할 분할(segmentation)은 텍스트를 단어와 같은 단위로 나누는 작업입니다. 공백 문자가 단어를 나누는 구분자로 사용되는 경우는 쉽지만, (중국어나 일본어 같이) 공백을 사용하지 않는 언어나 (독일어 같이) 단어를 길게 조합하는 언어는 의미를 분석하기 위한 분할 과정이 꼭 필요합니다. 웹 텍스트에는 "NY株価"(New York Stock Exchange)와 같이 여러 가지 언어와 스크립트가 섞여 있는 경우가 많습니다. 스크립트의 변화를 단어 경계로 근사하여 (ML 모델 사용 없이) 대략적인 분할을 수행할 수 있습니다. 위에서 언급된 "NY株価"의 예와 같은 문자열에 적용됩니다. 다양한 스크립트의 공백 문자를 모두 USCRIPT_COMMON(실제 텍스트의 스크립트 코드와 다른 특별한 스크립트 코드)으로 분류하기 때문에 공백을 사용하는 대부분의 언어들에서도 역시 적용됩니다. End of explanation """ # dtype: int32; shape: [num_sentences, (num_chars_per_sentence)] # # sentence_char_codepoint[i, j]는 # i번째 문장 안에 있는 j번째 문자에 대한 코드 포인트 입니다. sentence_char_codepoint = tf.strings.unicode_decode(sentence_texts, 'UTF-8') print(sentence_char_codepoint) # dtype: int32; shape: [num_sentences, (num_chars_per_sentence)] # # sentence_char_codepoint[i, j]는 # i번째 문장 안에 있는 j번째 문자의 유니코드 스크립트 입니다. 
sentence_char_script = tf.strings.unicode_script(sentence_char_codepoint) print(sentence_char_script) """ Explanation: 먼저 문장을 문자 코드 포인트로 디코딩하고 각 문자에 대한 스크립트 식별자를 찾습니다. End of explanation """ # dtype: bool; shape: [num_sentences, (num_chars_per_sentence)] # # sentence_char_starts_word[i, j]는 # i번째 문장 안에 있는 j번째 문자가 단어의 시작이면 True 입니다. sentence_char_starts_word = tf.concat( [tf.fill([sentence_char_script.nrows(), 1], True), tf.not_equal(sentence_char_script[:, 1:], sentence_char_script[:, :-1])], axis=1) # dtype: int64; shape: [num_words] # # word_starts[i]은 (모든 문장의 문자를 일렬로 펼친 리스트에서) # i번째 단어가 시작되는 문자의 인덱스 입니다. word_starts = tf.squeeze(tf.where(sentence_char_starts_word.values), axis=1) print(word_starts) """ Explanation: 그다음 스크립트 식별자를 사용하여 단어 경계가 추가될 위치를 결정합니다. 각 문장의 시작과 이전 문자와 스크립트가 다른 문자에 단어 경계를 추가합니다. End of explanation """ # dtype: int32; shape: [num_words, (num_chars_per_word)] # # word_char_codepoint[i, j]은 # i번째 단어 안에 있는 j번째 문자에 대한 코드 포인트 입니다. word_char_codepoint = tf.RaggedTensor.from_row_starts( values=sentence_char_codepoint.values, row_starts=word_starts) print(word_char_codepoint) """ Explanation: 이 시작 오프셋을 사용하여 전체 배치에 있는 단어 리스트를 담은 RaggedTensor를 만듭니다. End of explanation """ # dtype: int64; shape: [num_sentences] # # sentence_num_words[i]는 i번째 문장 안에 있는 단어의 수입니다. sentence_num_words = tf.reduce_sum( tf.cast(sentence_char_starts_word, tf.int64), axis=1) # dtype: int32; shape: [num_sentences, (num_words_per_sentence), (num_chars_per_word)] # # sentence_word_char_codepoint[i, j, k]는 i번째 문장 안에 있는 # j번째 단어 안의 k번째 문자에 대한 코드 포인트입니다. sentence_word_char_codepoint = tf.RaggedTensor.from_row_lengths( values=word_char_codepoint, row_lengths=sentence_num_words) print(sentence_word_char_codepoint) """ Explanation: 마지막으로 단어 코드 포인트 RaggedTensor를 문장으로 다시 나눕니다. End of explanation """ tf.strings.unicode_encode(sentence_word_char_codepoint, 'UTF-8').to_list() """ Explanation: 최종 결과를 읽기 쉽게 utf-8 문자열로 다시 인코딩합니다. End of explanation """
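The script-change segmentation above can be mimicked without TensorFlow by giving each character a crude script label from its code-point range and splitting whenever the label changes. The ranges below are a rough stand-in for ICU's script data, for illustration only:

```python
def crude_script(ch):
    """Very rough script label from code-point ranges (not real ICU data)."""
    cp = ord(ch)
    if 0x4E00 <= cp <= 0x9FFF:
        return "Han"
    if 0x3040 <= cp <= 0x30FF:
        return "Kana"
    return "Latin" if ch.isalpha() else "Common"

def segment_by_script(text):
    """Start a new token whenever the script label changes."""
    words, current = [], ""
    for ch in text:
        if current and crude_script(ch) != crude_script(current[-1]):
            words.append(current)
            current = ""
        current += ch
    if current:
        words.append(current)
    return words

print(segment_by_script("Hello, world."))
print(segment_by_script("世界こんにちは"))
```

On these two inputs the crude version reproduces the same token boundaries as the tf.strings.unicode_script pipeline above; for real multilingual text the ICU script data used by TensorFlow is far more reliable.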
daniel-koehn/Theory-of-seismic-waves-II
05_2D_acoustic_FD_modelling/7_fdac2d_sensitivity_kernels.ipynb
gpl-3.0
# Execute this cell to load the notebook's style sheet, then ignore it from IPython.core.display import HTML css_file = '../style/custom.css' HTML(open(css_file, "r").read()) """ Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 by D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi End of explanation """ # Import Libraries # ---------------- import numpy as np from numba import jit import matplotlib import matplotlib.pyplot as plt from pylab import rcParams # Ignore Warning Messages # ----------------------- import warnings warnings.filterwarnings("ignore") from mpl_toolkits.axes_grid1 import make_axes_locatable """ Explanation: Computation of Sensitivity Kernels by 2D acoustic FD modelling Beside the modelling of seismic surveys, our 2D acoustic FD code can be used as the core of a seismic full waveform inversion (FWI) approach. A very efficient implementation is possible in the frequency domain. The aim of acoustic frequency domain FWI is to minimize the data residuals $\mathbf{\delta \tilde{P} = \tilde{P}^{mod} - \tilde{P}^{obs}}$ between the modelled frequency domain data $\mathbf{\tilde{P}^{mod}}$ and field data $\mathbf{\tilde{P}^{obs}}$ to deduce high resolution models of the P-wave velocity distribution in the subsurface. To solve this non-linear inversion problem, an objective function E, as a measure of the data misfit, has to be defined. 
The classical choice is the L2-norm of the data residuals \begin{equation} E = \frac{1}{2}||\mathbf{\delta \tilde{P}}||_2^2 = \frac{1}{2}\mathbf{\delta \tilde{P}}^\dagger \mathbf{\delta \tilde{P}} = \frac{1}{2} \sum_{k=1}^{n_\omega} \sum_{i=1}^{ns} \sum_{j=1}^{nr} \delta \tilde{P}^*(\mathbf{x_s}_i, \mathbf{x_r}_j, \omega_k) \delta \tilde{P}(\mathbf{x_s}_i, \mathbf{x_r}_j, \omega_k) \notag \end{equation} where ns and nr are the number of shots and receivers, $n_\omega$ the number of discrete frequencies, $\dagger$ the conjugate transpose, $*$ the complex conjugate, and $\mathbf{x_s},\; \mathbf{x_r}$ the source and receiver positions, respectively. The objective function can be minimized by iteratively updating the P-wave velocity $\mathbf{Vp}$ at iteration step n, starting with an initial background model $\mathbf{Vp_0}$, along a search direction using the Newton method: \begin{equation} \mathbf{Vp}_{n+1} = \mathbf{Vp}_{n} - \mu_n \mathbf{H}_n^{-1} \left(\mathbf{\frac{\partial E}{\partial Vp}}\right)_n, \notag \end{equation} where $\mu_n$ denotes the step length, $\mathbf{\frac{\partial E}{\partial Vp}}$ the gradient and $\mathbf{H}$ the second derivative (Hessian) of the objective function with respect to $\mathbf{Vp}$. The step length $\mu_{n}$ can be estimated by an inexact parabolic line search.
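As a quick numerical sanity check, the misfit definition can be evaluated directly with NumPy on a toy set of complex residuals (the values below are made up purely for illustration):

```python
import numpy as np

# toy complex-valued residuals for 2 shots x 3 receivers at a single frequency
dP = np.array([[1.0 + 2.0j, 0.5 - 1.0j, -0.3 + 0.1j],
               [2.0 - 0.5j, 0.0 + 1.0j,  0.2 + 0.2j]])

# E = 1/2 * dP^dagger dP, summed over all shot-receiver pairs
E = 0.5 * np.sum(np.conj(dP) * dP).real

# the same value via the squared (Frobenius) L2-norm
print(E, 0.5 * np.linalg.norm(dP)**2)  # both ≈ 5.84
```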
The gradient $\mathbf{\frac{\partial E}{\partial Vp}}$ can be calculated by \begin{equation} \mathbf{\frac{\partial E}{\partial Vp}} = - \mathbf{K}^\dagger \mathbf{\delta \tilde{P}}, \notag \end{equation} where $\mathbf{K}$ denotes the Sensitivity Kernel: \begin{equation} K(x,z,\omega) = {\cal{Re}}\biggl\{\frac{2 \omega^2}{Vp^3} \frac{\tilde{P}(x,z,\omega,\mathbf{x_{s}}) \tilde{G}(x,z,\omega,\mathbf{x_{r}})}{max(\tilde{P}(x,z,\omega,\mathbf{x_{s}}))}\biggr\} \end{equation} with the monochromatic forward wavefield $\tilde{P}(x,z,\omega,\mathbf{x_s})$ excited at the source position $\mathbf{x_{s}}$ and the Green's function $\tilde{G}(x,z,\omega,\mathbf{x_r})$ excited at the receiver position $\mathbf{x_{r}}$; ${\cal{Re}}$ denotes the real part. Computation of monochromatic frequency domain wavefields from time-domain wavefields by Discrete Fourier Transform (DFT) To compute the sensitivity kernel eq. (1), we first need to estimate monochromatic frequency domain wavefields from the time domain wavefields. This can be easily implemented in our 2D acoustic FD code by applying the Discrete Fourier Transform (DFT) within the time loop of the FD code. We approximate the continuous Fourier transform \begin{equation} \tilde{f}(\omega) = \frac{1}{2 \pi} \int_{-\infty}^{\infty} f(t) \exp(-i \omega t) dt \notag \end{equation} by \begin{equation} \tilde{f_i}(\omega) \approx \frac{1}{2 \pi} \sum_{n=0}^{nt} f_n(t_n) \biggl(\cos(\omega t_n)-i\; \sin(\omega t_n)\biggr) dt \notag \end{equation} with $nt$ the number of time steps in the FD code, $\omega = 2 \pi f$ the circular frequency based on the frequency $f$, and $i^2 = -1$.
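This discrete sum is just a running accumulation over the time steps, and it can be checked in isolation on a made-up test signal (independent of the FD code): a pure cosine of duration T at the analysis frequency should contribute T/2 to the real part and nothing to the imaginary part.

```python
import numpy as np

# assumed test signal: a pure 5 Hz cosine sampled like a time loop would
dt, nt, freq = 1.0e-3, 2000, 5.0          # 2 s of data = 10 full periods
t = np.arange(nt) * dt
f_t = np.cos(2.0 * np.pi * freq * t)

# running-sum DFT, accumulated exactly as in the discrete approximation above
re = np.sum(f_t * np.cos(2.0 * np.pi * freq * t)) * dt / (2.0 * np.pi)
im = -np.sum(f_t * np.sin(2.0 * np.pi * freq * t)) * dt / (2.0 * np.pi)

# a length-T cosine at the analysis frequency contributes T/2 to the real part
print(2.0 * np.pi * re, im)  # ≈ 1.0 (= T/2), ≈ 0.0
```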
The wavefield can be further decomposed into the real part \begin{equation} Re\{\tilde{f_i}(\omega)\} \approx \frac{1}{2 \pi} \sum_{n=0}^{nt} f_n(t_n) \biggl(\cos(2 \pi f t_n)\biggr) dt \notag \end{equation} and the imaginary part \begin{equation} Im\{\tilde{f_i}(\omega)\} \approx -\frac{1}{2 \pi} \sum_{n=0}^{nt} f_n(t_n) \biggl(\sin(2 \pi f t_n)\biggr) dt \notag \end{equation} The implementation in the FD code is quite straightforward: during the time-stepping we have to multiply the time-domain pressure wavefield by a trigonometric factor and add up the contributions. Let's implement this in our 2D acoustic FD code and try to compute the frequency domain wavefields for a homogeneous background model and a first arrival traveltime tomography result of the Marmousi-2 model ... End of explanation """
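The two real-valued accumulators are nothing but the real and imaginary parts of a single complex DFT sample, since $\exp(-i\omega t_n) = \cos(\omega t_n) - i\,\sin(\omega t_n)$; a quick check on an arbitrary stand-in trace (random test data, not a real wavefield):

```python
import numpy as np

# arbitrary stand-in for the pressure time series at one grid point
dt, nt, freq = 1.0e-3, 1000, 5.0
t = np.arange(nt) * dt
p = np.random.default_rng(0).standard_normal(nt)

# the two running sums used for the real and imaginary parts ...
re = np.sum(p * np.cos(2.0 * np.pi * freq * t)) * dt / (2.0 * np.pi)
im = -np.sum(p * np.sin(2.0 * np.pi * freq * t)) * dt / (2.0 * np.pi)

# ... equal one complex sum with exp(-i*omega*t)
cplx = np.sum(p * np.exp(-1j * 2.0 * np.pi * freq * t)) * dt / (2.0 * np.pi)
print(np.isclose(re, cplx.real), np.isclose(im, cplx.imag))  # True True
```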
End of explanation """ @jit(nopython=True) # use JIT for C-performance def update_d2px_d2pz_3pt(p, dx, dz, nx, nz, d2px, d2pz): for i in range(1, nx - 1): for j in range(1, nz - 1): d2px[i,j] = (p[i + 1,j] - 2 * p[i,j] + p[i - 1,j]) / dx**2 d2pz[i,j] = (p[i,j + 1] - 2 * p[i,j] + p[i,j - 1]) / dz**2 return d2px, d2pz """ Explanation: ... define a JIT-ed function for the spatial FD approximation ... End of explanation """ # Define simple absorbing boundary frame based on wavefield damping # according to Cerjan et al., 1985, Geophysics, 50, 705-708 def absorb(nx,nz): FW = 60 # thickness of absorbing frame (gridpoints) a = 0.0053 coeff = np.zeros(FW) # define coefficients in absorbing frame for i in range(FW): coeff[i] = np.exp(-(a**2 * (FW-i)**2)) # initialize array of absorbing coefficients absorb_coeff = np.ones((nx,nz)) # compute coefficients for left grid boundaries (x-direction) zb=0 for i in range(FW): ze = nz - i - 1 for j in range(zb,ze): absorb_coeff[i,j] = coeff[i] # compute coefficients for right grid boundaries (x-direction) zb=0 for i in range(FW): ii = nx - i - 1 ze = nz - i - 1 for j in range(zb,ze): absorb_coeff[ii,j] = coeff[i] # compute coefficients for bottom grid boundaries (z-direction) xb=0 for j in range(FW): jj = nz - j - 1 xb = j xe = nx - j for i in range(xb,xe): absorb_coeff[i,jj] = coeff[j] return absorb_coeff """ Explanation: ... initialize the absorbing boundary frame ... 
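The time step used in the modelling cells below follows a Courant-type (CFL) stability criterion, dt = dx / (sqrt(2) * vp_max), for this 2D scheme; with the grid spacing defined above and the homogeneous model velocity of 2500 m/s this gives:

```python
import numpy as np

# CFL-type stability limit for the 2D 3-point scheme, as applied later
dx, vp_max, tmax = 20.0, 2500.0, 6.0
dt = dx / (np.sqrt(2.0) * vp_max)
nt = int(tmax / dt)
print(dt, nt)  # ≈ 5.66e-3 s and 1060 time steps for tmax = 6 s
```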
End of explanation """ # FD_2D_acoustic code with JIT optimization # ----------------------------------------- def FD_2D_acoustic_JIT(vp,dt,dx,dz,f0,xsrc,zsrc,op,freq): # calculate number of time steps nt # --------------------------------- nt = (int)(tmax/dt) # locate source on Cartesian FD grid # ---------------------------------- isrc = (int)(xsrc/dx) # source location in grid in x-direction jsrc = (int)(zsrc/dz) # source location in grid in x-direction # Source time function (Gaussian) # ------------------------------- src = np.zeros(nt + 1) time = np.linspace(0 * dt, nt * dt, nt) # 1st derivative of Gaussian src = -2. * (time - t0) * (f0 ** 2) * (np.exp(- (f0 ** 2) * (time - t0) ** 2)) # define clip value: 0.1 * absolute maximum value of source wavelet clip = 0.1 * max([np.abs(src.min()), np.abs(src.max())]) / (dx*dz) * dt**2 # Define absorbing boundary frame # ------------------------------- absorb_coeff = absorb(nx,nz) # Define squared vp-model # ----------------------- vp2 = vp**2 # Initialize empty pressure arrays # -------------------------------- p = np.zeros((nx,nz)) # p at time n (now) pold = np.zeros((nx,nz)) # p at time n-1 (past) pnew = np.zeros((nx,nz)) # p at time n+1 (present) d2px = np.zeros((nx,nz)) # 2nd spatial x-derivative of p d2pz = np.zeros((nx,nz)) # 2nd spatial z-derivative of p # INITIALIZE ARRAYS FOR REAL AND IMAGINARY PARTS OF MONOCHROMATIC WAVEFIELDS HERE! 
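The exponential damping profile of the Cerjan-type frame defined in the absorb function above can also be inspected on its own (same constants FW and a as in the function):

```python
import numpy as np

# damping coefficients of the absorbing frame, strongest at the outer edge
FW, a = 60, 0.0053
coeff = np.exp(-(a**2 * (FW - np.arange(FW))**2))

# the outermost grid point is damped hardest, the innermost hardly at all;
# the damping accumulates because it is applied at every time step
print(coeff[0], coeff[-1])  # ≈ 0.904 at the boundary, ≈ 1.0 at the inner edge
```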
    # --------------------------------------------------------------------------------
    p_real = np.zeros((nx,nz)) # real part of the monochromatic wavefield
    p_imag = np.zeros((nx,nz)) # imaginary part of the monochromatic wavefield
    # computation of DFT
    p_real = p_real / (2.0 * np.pi)
    p_imag = - p_imag / (2.0 * np.pi)

    # Return real and imaginary parts of the monochromatic wavefield
    return p_real, p_imag
Let's start with a homogeneous model: End of explanation """ %matplotlib notebook # Plot real and imaginary parts of monochromatic wavefields clip_seis = 5e-10 extent_seis = [0.0,xmax/1000,zmax/1000,0.0] ax = plt.subplot(211) plt.imshow(p_hom_re.T, cmap=plt.cm.RdBu, aspect=1, vmin=-clip_seis, vmax=clip_seis, extent=extent_seis) plt.title('Real part of monochromatic wavefield') #plt.xlabel('x [km]') ax.set_xticks([]) plt.ylabel('z [km]') plt.subplot(212) plt.imshow(p_hom_im.T, cmap=plt.cm.RdBu, aspect=1, vmin=-clip_seis, vmax=clip_seis, extent=extent_seis) plt.title('Imaginary part of monochromatic wavefield') plt.xlabel('x [km]') plt.ylabel('z [km]') plt.tight_layout() plt.show() """ Explanation: The time domain wavefields seem to be correct. Let's take a look at the frequency domain wavefield: End of explanation """ # Import FATT result for Marmousi-2 Vp model # ------------------------------------------ # Define model filename name_vp = "marmousi-2/marmousi_II_fatt.vp" # Open file and write binary data to vp f = open(name_vp) data_type = np.dtype ('float32').newbyteorder ('<') vp_fatt = np.fromfile (f, dtype=data_type) # Reshape (1 x nx*nz) vector to (nx x nz) matrix vp_fatt = vp_fatt.reshape(nx,nz) # Plot Marmousi-2 vp-model # ------------------------ %matplotlib notebook extent = [0, xmax/1000, zmax/1000, 0] fig = plt.figure(figsize=(7,3)) # define figure size image = plt.imshow((vp_fatt.T)/1000, cmap=plt.cm.viridis, interpolation='nearest', extent=extent) cbar = plt.colorbar(aspect=12, pad=0.02) cbar.set_label('Vp [km/s]', labelpad=10) plt.title('Marmousi-2 FATT model') plt.xlabel('x [km]') plt.ylabel('z [km]') plt.show() """ Explanation: Modelling monochromatic frequency domain wavefields for the Marmousi-2 FATT model In the next step, we calculate monochromatic wavefields for the first arrival traveltime tomography (FATT) result of the Marmousi-2 model, which could be an initial model for a subsequent FWI. 
First, we load the FATT model into Python: End of explanation """
clip = 4e-18
extent = [0.0,xmax/1000,zmax/1000,0.0] # define model extension
fig = plt.figure(figsize=(7,3)) # define figure size

# Plot Vp-model
image = plt.imshow((vp_hom.T)/1000, cmap=plt.cm.gray, interpolation='nearest', extent=extent)

# Plot Sensitivity Kernel
image1 = plt.imshow(K_hom.T, cmap="RdBu", alpha=.75, extent=extent, interpolation='nearest', vmin=-clip, vmax=clip)
plt.title('Sensitivity Kernel (homogeneous model)')
plt.xlabel('x [km]')
plt.ylabel('z [km]')
plt.show()
The frequency domain forward wavefields $\tilde{P}(x,z,\omega,\mathbf{x_{s}})$ for a source at $x_{s} = 2000.0\; m$ and $z_{s} = 40.0\; m$ were already computed in the previous sections of the notebook (p_hom_re, p_hom_im, p_fatt_re, p_fatt_im). You only have to compute the receiver Green's functions $\tilde{G}(x,z,\omega,\mathbf{x_{r}})$ by placing a source at the receiver position $x_{r} = 8000.0\; m$ and $z_{r} = 40.0\; m$. Compute the sensitivity kernels $K(x,z,\omega)$. Hint: In Python complex numbers are defined as real_part + 1j*imag_part. This can also be applied to NumPy arrays. Plot, describe and interpret the sensitivity kernels for the homogeneous and Marmousi-2 FATT model. Where would you expect model updates in a subsequent FWI? End of explanation """
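One possible sketch for the kernel computation in the exercise cells above — a candidate way to build K_hom (and, analogously, the Marmousi-2 FATT kernel) from the real and imaginary wavefield parts; the helper name and the interpretation of max as the maximum absolute amplitude of the forward wavefield are assumptions, and the arrays below are random stand-ins used only to exercise the function:

```python
import numpy as np

def sensitivity_kernel(p_re, p_im, g_re, g_im, vp, freq):
    # K = Re{ 2*omega^2 / vp^3 * P * G / max(P) }, eq. (1), where max is read
    # here as the maximum absolute amplitude of the forward wavefield
    omega = 2.0 * np.pi * freq
    p_fw = p_re + 1j * p_im      # forward wavefield P~
    g_rec = g_re + 1j * g_im     # receiver Green's function G~
    return np.real(2.0 * omega**2 / vp**3 * p_fw * g_rec / np.max(np.abs(p_fw)))

# toy stand-in arrays with the model shape, just to exercise the function
rng = np.random.default_rng(1)
nx, nz = 500, 174
vp = 2500.0 * np.ones((nx, nz))
K = sensitivity_kernel(rng.standard_normal((nx, nz)), rng.standard_normal((nx, nz)),
                       rng.standard_normal((nx, nz)), rng.standard_normal((nx, nz)),
                       vp, freq=5.0)
print(K.shape)  # (500, 174)
```

In the notebook this would be called as, e.g., K_hom = sensitivity_kernel(p_hom_re, p_hom_im, g_hom_re, g_hom_im, vp_hom, freq) before the corresponding plotting cell.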