Given the following text description, write Python code to implement the functionality described below step by step Description: H2O Tutorial Author Step1: Enable inline plotting in the Jupyter Notebook Step2: Intro to H2O Data Munging Read csv data into H2O. This loads the data into the H2O column compressed, in-memory, key-value store. Step3: View the top of the H2O frame. Step4: View the bottom of the H2O Frame Step5: Select a column fr["VAR_NAME"] Step6: Select a few columns Step7: Select a subset of rows Unlike in Pandas, columns may be identified by index or column name. Therefore, when subsetting by rows, you must also pass the column selection. Step8: Key attributes Step9: Select rows based on value Step10: Boolean masks can be used to subselect rows based on a criteria. Step11: Get summary statistics of the data and additional data distribution information. Step12: Set up the predictor and response column names Using H2O algorithms, it's easier to reference predictor and response columns by name in a single frame (i.e., don't split up X and y) Step13: Machine Learning With H2O H2O is a machine learning library built in Java with interfaces in Python, R, Scala, and Javascript. It is open source and well-documented. Unlike Scikit-learn, H2O allows for categorical and missing data. The basic work flow is as follows Step14: The performance of the model can be checked using the holdout dataset Step15: Train-Test Split Instead of taking the first 400 observations for training, we can use H2O to create a random test train split of the data. Step16: There was a massive jump in the R^2 value. This is because the original data is not shuffled. Cross validation H2O's machine learning algorithms take an optional parameter nfolds to specify the number of cross-validation folds to build. H2O's cross-validation uses an internal weight vector to build the folds in an efficient manner (instead of physically building the splits). 
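The idea of tagging each row with a fold id (rather than physically splitting the data) can be sketched in plain Python. This is a toy illustration of the concept, not H2O's actual weight-vector implementation:

```python
import random

def assign_folds(n_rows, nfolds, scheme="Modulo", seed=42):
    """Toy fold assignment: tag each row with a fold id instead of
    physically copying the data into separate splits."""
    if scheme == "Modulo":
        # row i goes to fold i % nfolds
        return [i % nfolds for i in range(n_rows)]
    # "Random": each row has an equal 1/nfolds chance of landing in any fold
    rng = random.Random(seed)
    return [rng.randrange(nfolds) for _ in range(n_rows)]

print(assign_folds(10, 3))  # [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]
```

A Modulo-style assignment is deterministic and reproducible, while a Random-style assignment gives each row an equal 1/nfolds chance of landing in any fold.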
In conjunction with the nfolds parameter, a user may specify the way in which observations are assigned to each fold with the fold_assignment parameter, which can be set to either Step17: However, you can still make use of the cross_val_score from Scikit-Learn Cross validation Step18: You still must use H2O to make the folds. Currently, there is no H2OStratifiedKFold. Additionally, the H2ORandomForestEstimator is analogous to the scikit-learn RandomForestRegressor object with its own fit method Step19: There isn't much difference in the R^2 value since the fold strategy is exactly the same. However, there was a major difference in terms of computation time and memory usage. Since the progress bar printout gets annoying, let's disable it Step20: Grid Search Grid search in H2O is still under active development and it will be available very soon. However, it is possible to make use of Scikit's grid search infrastructure (with some performance penalties) Randomized grid search Step21: If you have 0.16.1, then your system can't handle complex randomized grid searches (it works in every other version of sklearn, including the soon-to-be-released 0.16.2 and the older versions). The steps to perform a randomized grid search Step24: We might be tempted to think that we just had a large improvement; however, we must be cautious. The function below creates a more detailed report. Step25: Based on the grid search report, we can narrow the parameters to search and rerun the analysis. The parameters below were chosen after a few runs Step26: Transformations Rule of machine learning Step27: Normalize Data Step28: Then, we can apply PCA and keep the top 5 components. Step29: Although this is MUCH simpler than keeping track of all of these transformations manually, it gets to be somewhat of a burden when you want to chain together multiple transformers. Pipelines "Transformers unite!" 
If your raw data is a mess and you have to perform several transformations before using it, use a pipeline to keep things simple. Steps Step30: This is so much easier!!! But, wait a second, we did worse after applying these transformations! We might wonder how different hyperparameters for the transformations impact the final score. Combining randomized grid search and pipelines "Yo dawg, I heard you like models, so I put models in your models to model models." Steps Step31: Currently Under Development (drop-in scikit-learn pieces)
Python Code: import pandas as pd import numpy from numpy.random import choice from sklearn.datasets import load_boston import h2o h2o.init() # transfer the boston data from pandas to H2O boston_data = load_boston() X = pd.DataFrame(data=boston_data.data, columns=boston_data.feature_names) X["Median_value"] = boston_data.target X = h2o.H2OFrame.from_python(X.to_dict("list")) # select 10% for validation r = X.runif(seed=123456789) train = X[r < 0.9,:] valid = X[r >= 0.9,:] h2o.export_file(train, "Boston_housing_train.csv", force=True) h2o.export_file(valid, "Boston_housing_test.csv", force=True) Explanation: H2O Tutorial Author: Spencer Aiello Contact: spencer@h2oai.com This tutorial steps through a quick introduction to H2O's Python API. The goal of this tutorial is to introduce, through a complete example, H2O's capabilities from Python. Also, to help those that are accustomed to Scikit-Learn and Pandas, the demo will include specific call-outs for differences between H2O and those packages; this is intended to help anyone that needs to do machine learning on really Big Data make the transition. It is not meant to be a tutorial on machine learning or algorithms. Detailed documentation about H2O and the Python API is available at http://docs.h2o.ai. Setting up your system for this demo The following code creates two csv files using data from the Boston Housing dataset which is built into scikit-learn and adds them to the local directory End of explanation %matplotlib inline import matplotlib.pyplot as plt Explanation: Enable inline plotting in the Jupyter Notebook End of explanation fr = h2o.import_file("Boston_housing_train.csv") Explanation: Intro to H2O Data Munging Read csv data into H2O. This loads the data into the H2O column-compressed, in-memory, key-value store. End of explanation fr.head() Explanation: View the top of the H2O frame. 
End of explanation fr.tail() Explanation: View the bottom of the H2O Frame End of explanation fr["CRIM"].head() # Tab completes Explanation: Select a column fr["VAR_NAME"] End of explanation columns = ["CRIM", "RM", "RAD"] fr[columns].head() Explanation: Select a few columns End of explanation fr[2:7,:] # explicitly select all columns with : Explanation: Select a subset of rows Unlike in Pandas, columns may be identified by index or column name. Therefore, when subsetting by rows, you must also pass the column selection. End of explanation # The columns attribute is exactly like Pandas print "Columns:", fr.columns, "\n" print "Columns:", fr.names, "\n" print "Columns:", fr.col_names, "\n" # There are a number of attributes to get at the shape print "length:", str( len(fr) ), "\n" print "shape:", fr.shape, "\n" print "dim:", fr.dim, "\n" print "nrow:", fr.nrow, "\n" print "ncol:", fr.ncol, "\n" # Use the "types" attribute to list the column types print "types:", fr.types, "\n" Explanation: Key attributes: * columns, names, col_names * len, shape, dim, nrow, ncol * types Note: Since the data is not in local python memory there is no "values" attribute. If you want to pull all of the data into the local python memory then do so explicitly with h2o.export_file and reading the data into python memory from disk. End of explanation fr.shape Explanation: Select rows based on value End of explanation mask = fr["CRIM"]>1 fr[mask,:].shape Explanation: Boolean masks can be used to subselect rows based on a criteria. End of explanation fr.describe() Explanation: Get summary statistics of the data and additional data distribution information. 
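The kind of per-column summary that describe() reports can be approximated for a single numeric column with the standard library. This is a rough analogue for intuition, not H2O's implementation:

```python
import statistics

def summarize(column):
    # a rough per-column summary, similar in spirit to fr.describe()
    return {
        "min": min(column),
        "max": max(column),
        "mean": statistics.mean(column),
        "sigma": statistics.pstdev(column),
    }

print(summarize([1, 2, 3, 4, 5]))  # mean is 3, sigma is sqrt(2)
```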
End of explanation x = fr.names[:] y="Median_value" x.remove(y) Explanation: Set up the predictor and response column names Using H2O algorithms, it's easier to reference predictor and response columns by name in a single frame (i.e., don't split up X and y) End of explanation model = h2o.random_forest(x=fr[:400,x],y=fr[:400,y],seed=42) # Define and fit first 400 points model.predict(fr[400:fr.nrow,:]) # Predict the rest Explanation: Machine Learning With H2O H2O is a machine learning library built in Java with interfaces in Python, R, Scala, and Javascript. It is open source and well-documented. Unlike Scikit-learn, H2O allows for categorical and missing data. The basic work flow is as follows: * Fit the training data with a machine learning algorithm * Predict on the testing data Simple model End of explanation perf = model.model_performance(fr[400:fr.nrow,:]) perf.r2() # get the r2 on the holdout data perf.mse() # get the mse on the holdout data perf # display the performance object Explanation: The performance of the model can be checked using the holdout dataset End of explanation r = fr.runif(seed=12345) # build random uniform column over [0,1] train= fr[r<0.75,:] # perform a 75-25 split test = fr[r>=0.75,:] model = h2o.random_forest(x=train[x],y=train[y],seed=42) perf = model.model_performance(test) perf.r2() Explanation: Train-Test Split Instead of taking the first 400 observations for training, we can use H2O to create a random test train split of the data. End of explanation model = h2o.random_forest(x=fr[x],y=fr[y], nfolds=10) # build a 10-fold cross-validated model scores = numpy.array([m.r2() for m in model.xvals]) # iterate over the xval models using the xvals attribute print "Expected R^2: %.2f +/- %.2f \n" % (scores.mean(), scores.std()*1.96) print "Scores:", scores.round(2) Explanation: There was a massive jump in the R^2 value. This is because the original data is not shuffled. 
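A toy example (independent of H2O) shows why ordering matters: a take-the-first-rows split on sorted data never sees the upper range of values, while a random split does:

```python
import random

data = list(range(100))      # a deliberately ordered "dataset"
head_train = data[:75]       # "first rows"-style split
random.seed(12345)
shuffled = data[:]
random.shuffle(shuffled)
rand_train = shuffled[:75]   # runif-style random split

print(max(head_train))  # 74 -- the head split never sees the top quartile
```

With ordered targets, the head-style split trains on a systematically different distribution than it is tested on, which depresses the holdout score.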
Cross validation H2O's machine learning algorithms take an optional parameter nfolds to specify the number of cross-validation folds to build. H2O's cross-validation uses an internal weight vector to build the folds in an efficient manner (instead of physically building the splits). In conjunction with the nfolds parameter, a user may specify the way in which observations are assigned to each fold with the fold_assignment parameter, which can be set to either: * AUTO: Perform random assignment * Random: Each row has a equal (1/nfolds) chance of being in any fold. * Modulo: Observations are in/out of the fold based by modding on nfolds End of explanation from sklearn.cross_validation import cross_val_score from h2o.cross_validation import H2OKFold from h2o.estimators.random_forest import H2ORandomForestEstimator from h2o.model.regression import h2o_r2_score from sklearn.metrics.scorer import make_scorer Explanation: However, you can still make use of the cross_val_score from Scikit-Learn Cross validation: H2O and Scikit-Learn End of explanation model = H2ORandomForestEstimator(seed=42) scorer = make_scorer(h2o_r2_score) # make h2o_r2_score into a scikit_learn scorer custom_cv = H2OKFold(fr, n_folds=10, seed=42) # make a cv scores = cross_val_score(model, fr[x], fr[y], scoring=scorer, cv=custom_cv) print "Expected R^2: %.2f +/- %.2f \n" % (scores.mean(), scores.std()*1.96) print "Scores:", scores.round(2) Explanation: You still must use H2O to make the folds. Currently, there is no H2OStratifiedKFold. Additionally, the H2ORandomForestEstimator is analgous to the scikit-learn RandomForestRegressor object with its own fit method End of explanation h2o.__PROGRESS_BAR__=False h2o.no_progress() Explanation: There isn't much difference in the R^2 value since the fold strategy is exactly the same. However, there was a major difference in terms of computation time and memory usage. 
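The contract an object like H2OKFold fulfils for cross_val_score is simply to produce (train_indices, test_indices) pairs. A minimal pure-Python version of that protocol might look like this (a sketch for intuition, not the actual H2OKFold source):

```python
def kfold_indices(n_rows, n_folds):
    """Yield (train_idx, test_idx) pairs covering every row exactly once."""
    indices = list(range(n_rows))
    fold_size, extra = divmod(n_rows, n_folds)
    start = 0
    for fold in range(n_folds):
        # earlier folds absorb the remainder rows one at a time
        stop = start + fold_size + (1 if fold < extra else 0)
        test_idx = indices[start:stop]
        train_idx = indices[:start] + indices[stop:]
        yield train_idx, test_idx
        start = stop

splits = list(kfold_indices(10, 3))
print([len(test) for _, test in splits])  # [4, 3, 3]
```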
Since the progress bar print out gets annoying let's disable that End of explanation from sklearn import __version__ sklearn_version = __version__ print sklearn_version Explanation: Grid Search Grid search in H2O is still under active development and it will be available very soon. However, it is possible to make use of Scikit's grid search infrastructure (with some performance penalties) Randomized grid search: H2O and Scikit-Learn End of explanation %%time from h2o.estimators.random_forest import H2ORandomForestEstimator # Import model from sklearn.grid_search import RandomizedSearchCV # Import grid search from scipy.stats import randint, uniform model = H2ORandomForestEstimator(seed=42) # Define model params = {"ntrees": randint(20,50), "max_depth": randint(1,10), "min_rows": randint(1,10), # scikit's min_samples_leaf "mtries": randint(2,fr[x].shape[1]),} # Specify parameters to test scorer = make_scorer(h2o_r2_score) # make h2o_r2_score into a scikit_learn scorer custom_cv = H2OKFold(fr, n_folds=10, seed=42) # make a cv random_search = RandomizedSearchCV(model, params, n_iter=30, scoring=scorer, cv=custom_cv, random_state=42, n_jobs=1) # Define grid search object random_search.fit(fr[x], fr[y]) print "Best R^2:", random_search.best_score_, "\n" print "Best params:", random_search.best_params_ Explanation: If you have 0.16.1, then your system can't handle complex randomized grid searches (it works in every other version of sklearn, including the soon to be released 0.16.2 and the older versions). The steps to perform a randomized grid search: 1. Import model and RandomizedSearchCV 2. Define model 3. Specify parameters to test 4. Define grid search object 5. Fit data to grid search object 6. Collect scores All the steps will be repeated from above. Because 0.16.1 is installed, we use scipy to define specific distributions ADVANCED TIP: Turn off reference counting for spawning jobs in parallel (n_jobs=-1, or n_jobs > 1). 
We'll turn it back on again in the aftermath of a Parallel job. If you don't want to run jobs in parallel, don't turn off the reference counting. Pattern is: >>> h2o.turn_off_ref_cnts() >>> .... parallel job .... >>> h2o.turn_on_ref_cnts() End of explanation def report_grid_score_detail(random_search, charts=True): Input fit grid search estimator. Returns df of scores with details df_list = [] for line in random_search.grid_scores_: results_dict = dict(line.parameters) results_dict["score"] = line.mean_validation_score results_dict["std"] = line.cv_validation_scores.std()*1.96 df_list.append(results_dict) result_df = pd.DataFrame(df_list) result_df = result_df.sort("score", ascending=False) if charts: for col in get_numeric(result_df): if col not in ["score", "std"]: plt.scatter(result_df[col], result_df.score) plt.title(col) plt.show() for col in list(result_df.columns[result_df.dtypes == "object"]): cat_plot = result_df.score.groupby(result_df[col]).mean()[0] cat_plot.sort() cat_plot.plot(kind="barh", xlim=(.5, None), figsize=(7, cat_plot.shape[0]/2)) plt.show() return result_df def get_numeric(X): Return list of numeric dtypes variables return X.dtypes[X.dtypes.apply(lambda x: str(x).startswith(("float", "int", "bool")))].index.tolist() report_grid_score_detail(random_search).head() Explanation: We might be tempted to think that we just had a large improvement; however we must be cautious. The function below creates a more detailed report. End of explanation %%time params = {"ntrees": randint(30,40), "max_depth": randint(4,10), "mtries": randint(4,10),} custom_cv = H2OKFold(fr, n_folds=5, seed=42) # In small datasets, the fold size can have a big # impact on the std of the resulting scores. 
More random_search = RandomizedSearchCV(model, params, # folds --> Less examples per fold --> higher n_iter=10, # variation per sample scoring=scorer, cv=custom_cv, random_state=43, n_jobs=1) random_search.fit(fr[x], fr[y]) print "Best R^2:", random_search.best_score_, "\n" print "Best params:", random_search.best_params_ report_grid_score_detail(random_search) Explanation: Based on the grid search report, we can narrow the parameters to search and rerun the analysis. The parameters below were chosen after a few runs: End of explanation from h2o.transforms.preprocessing import H2OScaler from h2o.transforms.decomposition import H2OPCA Explanation: Transformations Rule of machine learning: Don't use your testing data to inform your training data. Unfortunately, this happens all the time when preparing a dataset for the final model. But on smaller datasets, you must be especially careful. At the moment, there are no classes for managing data transformations. On the one hand, this requires the user to tote around some extra state, but on the other, it allows the user to be more explicit about transforming H2OFrames. Basic steps: Remove the response variable from transformations. Import transformer Define transformer Fit train data to transformer Transform test and train data Re-attach the response variable. First let's normalize the data using the means and standard deviations of the training data. Then let's perform a principal component analysis on the training data and select the top 5 components. Using these components, let's use them to reduce the train and test design matrices. End of explanation y_train = train.pop("Median_value") y_test = test.pop("Median_value") norm = H2OScaler() norm.fit(train) X_train_norm = norm.transform(train) X_test_norm = norm.transform(test) print X_test_norm.shape X_test_norm Explanation: Normalize Data: Use the means and standard deviations from the training data. 
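What the scaling step amounts to can be sketched for one column: fit the mean and standard deviation on the training data only, then apply those same statistics to every split (a simplified illustration, not the H2OScaler source):

```python
import statistics

def fit_scaler(train_col):
    # learn the statistics from the training data only
    return statistics.mean(train_col), statistics.pstdev(train_col)

def transform(col, mean, sigma):
    # apply *training* statistics to any split, train or test
    return [(v - mean) / sigma for v in col]

train_col = [2.0, 4.0, 6.0, 8.0]
mean, sigma = fit_scaler(train_col)            # 5.0, sqrt(5)
scaled_train = transform(train_col, mean, sigma)
scaled_test = transform([10.0], mean, sigma)   # uses train stats, not its own
```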
End of explanation pca = H2OPCA(k=5) pca.fit(X_train_norm) X_train_norm_pca = pca.transform(X_train_norm) X_test_norm_pca = pca.transform(X_test_norm) # prop of variance explained by top 5 components? print X_test_norm_pca.shape X_test_norm_pca[:5] model = H2ORandomForestEstimator(seed=42) model.fit(X_train_norm_pca,y_train) y_hat = model.predict(X_test_norm_pca) h2o_r2_score(y_test,y_hat) Explanation: Then, we can apply PCA and keep the top 5 components. End of explanation from h2o.transforms.preprocessing import H2OScaler from h2o.transforms.decomposition import H2OPCA from h2o.estimators.random_forest import H2ORandomForestEstimator from sklearn.pipeline import Pipeline # Import Pipeline <other imports not shown> model = H2ORandomForestEstimator(seed=42) pipe = Pipeline([("standardize", H2OScaler()), # Define pipeline as a series of steps ("pca", H2OPCA(k=5)), ("rf", model)]) # Notice the last step is an estimator pipe.fit(train, y_train) # Fit training data y_hat = pipe.predict(test) # Predict testing data (due to last step being an estimator) h2o_r2_score(y_test, y_hat) # Notice the final score is identical to before Explanation: Although this is MUCH simpler than keeping track of all of these transformations manually, it gets to be somewhat of a burden when you want to chain together multiple transformers. Pipelines "Tranformers unite!" If your raw data is a mess and you have to perform several transformations before using it, use a pipeline to keep things simple. Steps: Import Pipeline, transformers, and model Define pipeline. The first and only argument is a list of tuples where the first element of each tuple is a name you give the step and the second element is a defined transformer. The last step is optionally an estimator class (like a RandomForest). 
Fit the training data to pipeline Either transform or predict the testing data End of explanation pipe = Pipeline([("standardize", H2OScaler()), ("pca", H2OPCA()), ("rf", H2ORandomForestEstimator(seed=42))]) params = {"standardize__center": [True, False], # Parameters to test "standardize__scale": [True, False], "pca__k": randint(2, 6), "rf__ntrees": randint(50,80), "rf__max_depth": randint(4,10), "rf__min_rows": randint(5,10), } # "rf__mtries": randint(1,4),} # gridding over mtries is # problematic with pca grid over # k above from sklearn.grid_search import RandomizedSearchCV from h2o.cross_validation import H2OKFold from h2o.model.regression import h2o_r2_score from sklearn.metrics.scorer import make_scorer custom_cv = H2OKFold(fr, n_folds=5, seed=42) random_search = RandomizedSearchCV(pipe, params, n_iter=30, scoring=make_scorer(h2o_r2_score), cv=custom_cv, random_state=42, n_jobs=1) random_search.fit(fr[x],fr[y]) results = report_grid_score_detail(random_search) results.head() Explanation: This is so much easier!!! But, wait a second, we did worse after applying these transformations! We might wonder how different hyperparameters for the transformations impact the final score. Combining randomized grid search and pipelines "Yo dawg, I heard you like models, so I put models in your models to model models." Steps: Import Pipeline, grid search, transformers, and estimators <Not shown below> Define pipeline Define parameters to test in the form: "(Step name)__(argument name)" A double underscore separates the two words. 
Define grid search Fit to grid search End of explanation best_estimator = random_search.best_estimator_ # fetch the pipeline from the grid search h2o_model = h2o.get_model(best_estimator._final_estimator._id) # fetch the model from the pipeline save_path = h2o.save_model(h2o_model, path=".", force=True) print save_path # assumes new session my_model = h2o.load_model(path=save_path) my_model.predict(X_test_norm_pca) Explanation: Currently Under Development (drop-in scikit-learn pieces): * Richer set of transforms (only PCA and Scale are implemented) * Richer set of estimators (only RandomForest is available) * Full H2O Grid Search Other Tips: Model Save/Load It is useful to save constructed models to disk and reload them between H2O sessions. Here's how: End of explanation
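As an aside, the step-name__parameter convention used when gridding over a pipeline above can be unpacked in a few lines of plain Python (an illustrative helper; scikit-learn does this internally via set_params):

```python
params = {"standardize__center": True, "pca__k": 5, "rf__ntrees": 60}

nested = {}
for key, value in params.items():
    step, _, arg = key.partition("__")  # split on the double underscore
    nested.setdefault(step, {})[arg] = value

print(nested)
# {'standardize': {'center': True}, 'pca': {'k': 5}, 'rf': {'ntrees': 60}}
```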
Given the following text problem statement, write Python code to implement the functionality described below in the problem statement Problem: Is there any way for me to preserve the punctuation marks !, ?, " and ' from my text documents using CountVectorizer parameters in scikit-learn?
Problem: import numpy as np import pandas as pd from sklearn.feature_extraction.text import CountVectorizer text = load_data() vent = CountVectorizer(token_pattern=r"(?u)\b\w\w+\b|!|\?|\"|\'") transformed_text = vent.fit_transform([text])
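The effect of that token_pattern can be checked with the standard re module alone, which is roughly what CountVectorizer does per (lowercased) document:

```python
import re

pattern = re.compile(r"(?u)\b\w\w+\b|!|\?|\"|\'")
tokens = pattern.findall("don't stop! really?")
print(tokens)  # ['don', "'", 'stop', '!', 'really', '?']
```

Note that single-character tokens, such as the t left after the apostrophe, are dropped because \w\w+ requires two or more word characters.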
Given the following text description, write Python code to implement the functionality described below step by step Description: A full-fledged scraper Import our modules or packages that we will need to scrape a website, including requests and bs4 and csv Step1: Make a request to the webpage url that we are scraping. The url is Step2: Assign the html code from that site to a variable Step3: Alternatively, to access this from local file in html/ dir, uncomment the next line r= open('../project2/html/movies.html', 'r') html = r.read() Parse the html Step4: Isolate the table Step5: Find the rows, at the same time we are going to use slicing to skip the first two header rows. Step6: We are going to the csv module's DictWriter to write out our results. The DictWriter requires two things when we create it - the file and the fieldnames. First open our output file Step7: Next specify the fieldnames. Step8: Point our csv.DictWriter at the output file and specify the fieldnames along with other necessary parameters. Step9: close the csv file to officially finish writing to it
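Before the full scraper code below, the row-extraction idea can be sketched with only the standard library's html.parser, as a simplified stand-in for BeautifulSoup's table.find_all('tr'):

```python
from html.parser import HTMLParser

class RowCounter(HTMLParser):
    """Count <tr> elements, a stdlib stand-in for find_all('tr')."""
    def __init__(self):
        super().__init__()
        self.rows = 0

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self.rows += 1

parser = RowCounter()
parser.feed("<table><tr><td>a</td></tr><tr><td>b</td></tr></table>")
print(parser.rows)  # 2
```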
Python Code: import requests from bs4 import BeautifulSoup import csv Explanation: A full-fledged scraper Import our modules or packages that we will need to scrape a website, including requests and bs4 and csv End of explanation r = requests.get('https://s3-us-west-2.amazonaws.com/nicar-2015/Weekly+Rankings+-+Weekend+Box+Office+Results+++Rentrak.html') Explanation: Make a request to the webpage url that we are scraping. The url is: https://s3-us-west-2.amazonaws.com/nicar-2015/Weekly+Rankings+-+Weekend+Box+Office+Results+++Rentrak.html End of explanation html = r.text print(html) Explanation: Assign the html code from that site to a variable End of explanation soup = BeautifulSoup(html, "html.parser") print(soup) Explanation: Alternatively, to access this from local file in html/ dir, uncomment the next line r= open('../project2/html/movies.html', 'r') html = r.read() Parse the html End of explanation table = soup.find('table',{'class':'entChartTable'}) print(table) Explanation: Isolate the table End of explanation rows = table.find_all('tr') # print(rows) #skip the blank rows rows = rows[2:] # print(rows) Explanation: Find the rows, at the same time we are going to use slicing to skip the first two header rows. End of explanation csvfile = open("../project2/data/movies.csv","w", newline="") Explanation: We are going to the csv module's DictWriter to write out our results. The DictWriter requires two things when we create it - the file and the fieldnames. First open our output file: End of explanation fieldnames = [ "title", "world_box_office", "international_box_office", "domestic_box_office", "world_cume", "international_cume", "domestic_cume", "international_distributor", "number_territories", "domestic_distributor" ] Explanation: Next specify the fieldnames. 
End of explanation output = csv.DictWriter(csvfile, fieldnames=fieldnames, delimiter=',',quotechar='"',quoting=csv.QUOTE_MINIMAL) output.writeheader() #loop through the rows for row in rows: #grab the table cells from each row cells = row.find_all('td') #create a dictionary and assign the cell values to keys in our dictionary result = { "title" : cells[0].text.strip(), "world_box_office" : cells[1].text.strip(), "international_box_office" : cells[2].text.strip(), "domestic_box_office" : cells[3].text.strip(), "world_cume" : cells[4].text.strip(), "international_cume" : cells[5].text.strip(), "domestic_cume" : cells[6].text.strip(), "international_distributor" : cells[7].text.strip(), "number_territories" : cells[8].text.strip(), "domestic_distributor" : cells[9].text.strip() } #write the variables out to a csv file output.writerow(result) Explanation: Point our csv.DictWriter at the output file and specify the fieldnames along with other necessary parameters. End of explanation csvfile.close() #win Explanation: close the csv file to officially finish writing to it End of explanation
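The csv.DictWriter pattern used above can be exercised in isolation with an in-memory buffer. This is a self-contained sketch using only the standard library (the movie title and values are made up):

```python
import csv
import io

fieldnames = ["title", "domestic_box_office"]
buffer = io.StringIO()
output = csv.DictWriter(buffer, fieldnames=fieldnames,
                        delimiter=",", quotechar='"',
                        quoting=csv.QUOTE_MINIMAL)
output.writeheader()
output.writerow({"title": "Example Movie", "domestic_box_office": "$1,000,000"})

print(buffer.getvalue())
# title,domestic_box_office
# Example Movie,"$1,000,000"
```

QUOTE_MINIMAL only quotes fields that contain the delimiter, which is why the dollar amount with commas gets quoted while the title does not.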
Given the following text description, write Python code to implement the functionality described below step by step Description: By deploying or using this software you agree to comply with the AI Hub Terms of Service and the Google APIs Terms of Service. To the extent of a direct conflict of terms, the AI Hub Terms of Service will control. Overview This notebook provides an example workflow of using the Tabular data inspection for a visual inspection of datasets. Dataset The notebook uses the Boston housing price regression dataset. It containers 506 observations with 13 features describing a house in Boston and a corresponding house price, stored in a 506x14 table. Objective The goal of this notebook is to go through a common training workflow Step1: Authenticate your GCP account If you are using AI Platform Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps Step2: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. You need to have a "workspace" bucket that will hold the dataset and the output from the ML Container. Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets. You may also change the REGION variable, which is used for operations throughout the rest of this notebook. Make sure to choose a region where Cloud AI Platform services are available. You may not use a Multi-Regional Storage bucket for training with AI Platform. 
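Because the bucket name must be unique and follow Cloud Storage naming rules, a quick local sanity check can catch obvious mistakes before running gsutil. This is a rough sketch covering only the basic constraints (lowercase letters, digits, dashes, underscores and dots; 3 to 63 characters; starting and ending with a letter or digit), not Google's complete rule set:

```python
import re

_BUCKET_RE = re.compile(r"[a-z0-9][a-z0-9._-]{1,61}[a-z0-9]")

def looks_like_valid_bucket_name(name):
    # basic constraints only; the service enforces additional rules
    return bool(_BUCKET_RE.fullmatch(name))

print(looks_like_valid_bucket_name("my-ml-workspace-123"))  # True
print(looks_like_valid_bucket_name("My_Bucket"))            # False (uppercase)
```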
Step3: Only if your bucket doesn't already exist Step4: Finally, validate access to your Cloud Storage bucket by examining its contents Step5: Import libraries and define constants Step6: Create a dataset Step7: Cloud Run Accelerator and distribution support | GPU | Multi-GPU Node | TPU | Workers | Parameter Server | |---|---|---|---|---| | No | No | No | No | No | AI Platform training Tabular data inspection. AI Platform training documentation. Local Run Step8: Local training snippet Note that the training can also be done locally with Docker bash docker run \ -v /tmp Step9: Cleaning up To clean up all GCP resources used in this project, you can delete the GCP project you used for the tutorial.
Python Code: PROJECT_ID = "[your-project-id]" #@param {type:"string"} ! gcloud config set project $PROJECT_ID Explanation: By deploying or using this software you agree to comply with the AI Hub Terms of Service and the Google APIs Terms of Service. To the extent of a direct conflict of terms, the AI Hub Terms of Service will control. Overview This notebook provides an example workflow of using the Tabular data inspection for a visual inspection of datasets. Dataset The notebook uses the Boston housing price regression dataset. It containers 506 observations with 13 features describing a house in Boston and a corresponding house price, stored in a 506x14 table. Objective The goal of this notebook is to go through a common training workflow: - Create a dataset - Use AI Platform Training service to create a visual "Run Report" from the dataset - Inspect the dataset by looking at the generated "Run Report" Costs This tutorial uses billable components of Google Cloud Platform (GCP): Cloud AI Platform Cloud Storage Learn about Cloud AI Platform pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Set up your local development environment If you are using Colab or AI Platform Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step. Otherwise, make sure your environment meets this notebook's requirements. You need the following: The Google Cloud SDK Git Python 3 virtualenv Jupyter notebook running in a virtual environment with Python 3 The Google Cloud guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions: Install and initialize the Cloud SDK. Install Python 3. Install virtualenv and create a virtual environment that uses Python 3. 
Activate that environment and run pip install jupyter in a shell to install Jupyter. Run jupyter notebook in a shell to launch Jupyter. Open this notebook in the Jupyter Notebook Dashboard. Set up your GCP project The following steps are required, regardless of your notebook environment. Select or create a GCP project.. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the AI Platform APIs and Compute Engine APIs. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands. End of explanation import sys # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. if 'google.colab' in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. else: %env GOOGLE_APPLICATION_CREDENTIALS '' Explanation: Authenticate your GCP account If you are using AI Platform Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps: In the GCP Console, go to the Create service account key page. From the Service account drop-down list, select New service account. In the Service account name field, enter a name. From the Role drop-down list, select Machine Learning Engine > AI Platform Admin and Storage > Storage Object Admin. 
Click Create. A JSON file that contains your key downloads to your local environment. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell. End of explanation BUCKET_NAME = "[your-bucket-name]" #@param {type:"string"} REGION = 'us-central1' #@param {type:"string"} Explanation: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. You need to have a "workspace" bucket that will hold the dataset and the output from the ML Container. Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets. You may also change the REGION variable, which is used for operations throughout the rest of this notebook. Make sure to choose a region where Cloud AI Platform services are available. You may not use a Multi-Regional Storage bucket for training with AI Platform. End of explanation ! gsutil mb -l $REGION gs://$BUCKET_NAME Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket. End of explanation ! 
gsutil ls -al gs://$BUCKET_NAME Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents: End of explanation from __future__ import absolute_import from __future__ import division from __future__ import print_function import os import time import pandas as pd import tensorflow as tf from IPython.core.display import HTML Explanation: Import libraries and define constants End of explanation bh = tf.keras.datasets.boston_housing (X_train, y_train), (X_eval, y_eval) = bh.load_data() training = pd.DataFrame(X_train) training['target'] = y_train validation = pd.DataFrame(X_eval) validation['target'] = y_eval print('Data head:') display(training.head(2)) data = os.path.join('gs://', BUCKET_NAME, 'data.csv') print('Copy the data in bucket ...') with tf.io.gfile.GFile(data, 'w') as f: training.append(validation).to_csv(f, index=False) Explanation: Create a dataset End of explanation output_location = os.path.join('gs://', BUCKET_NAME, 'output') job_name = "data_inspection_{}".format(time.strftime("%Y%m%d%H%M%S")) !gcloud ai-platform jobs submit training $job_name \ --master-image-uri gcr.io/aihub-c2t-containers/kfp-components/oob_algorithm/tabular_data_inspection:latest \ --region $REGION \ --scale-tier CUSTOM \ --master-machine-type standard \ -- \ --output-location {output_location} \ --data {data} \ --data-type csv Explanation: Cloud Run Accelerator and distribution support | GPU | Multi-GPU Node | TPU | Workers | Parameter Server | |---|---|---|---|---| | No | No | No | No | No | AI Platform training Tabular data inspection. AI Platform training documentation. Local Run End of explanation if not tf.io.gfile.exists(os.path.join(output_location, 'report.html')): raise RuntimeError('The file report.html was not found. 
Did the training job finish?') with tf.io.gfile.GFile(os.path.join(output_location, 'report.html')) as f: display(HTML(f.read())) Explanation: Local training snippet Note that the training can also be done locally with Docker bash docker run \ -v /tmp:/tmp \ -it gcr.io/aihub-c2t-containers/kfp-components/oob_algorithm/tabular_data_inspection:latest \ --output-location /tmp/tabular_data_inspection \ --data /tmp/data.csv \ --data-type csv Inspect the Run Report The "Run Report" will help you identify if the model was successfully trained. End of explanation # If training job is still running, cancel it ! gcloud ai-platform jobs cancel $job_name --quiet # Delete Cloud Storage objects that were created ! gsutil -m rm -r $BUCKET_NAME Explanation: Cleaning up To clean up all GCP resources used in this project, you can delete the GCP project you used for the tutorial. End of explanation
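The upload cell above concatenates the training and validation splits into one CSV before copying it to the bucket. The same concatenation can be sketched without pandas or TensorFlow, using only the standard library — the column names below are placeholders of mine (the Boston housing frame in the notebook just uses integer column labels plus `target`):

```python
import csv
import io

def splits_to_csv(header, train_rows, validation_rows):
    # mirror training.append(validation).to_csv(f, index=False) without pandas:
    # one header row, then every training row, then every validation row
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(train_rows)
    writer.writerows(validation_rows)
    return buf.getvalue()

header = ["feature_0", "feature_1", "target"]   # placeholder column names
train = [[0.1, 0.2, 24.0], [0.3, 0.4, 21.6]]
validation = [[0.5, 0.6, 34.7]]
text = splits_to_csv(header, train, validation)
print(text.splitlines()[0])  # feature_0,feature_1,target
```

The resulting string could then be written to Cloud Storage with `tf.io.gfile.GFile(data, 'w')` exactly as in the cell above.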
7,504
Given the following text description, write Python code to implement the functionality described below step by step Description: Variable Coefficient Poisson Derive the form of a test variable-coefficient elliptic equation with periodic boundary conditions for testing the variable-coefficient multigrid solver. We want to solve an equation of the form $\nabla \cdot (\alpha \nabla \phi) = f$ Note Step1: we want to compute $\nabla \cdot (\alpha \nabla \phi)$ Step2: General Elliptic Derive the form of a test variable-coefficient elliptic equation with periodic boundary conditions for testing the variable-coefficient multigrid solver. We are solving $$\alpha \phi + \nabla \cdot (\beta \nabla \phi ) + \gamma \cdot \nabla \phi = f$$ Note Step3: Inhomogeneous BCs Step4: boundary conditions
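For reference while reading the SymPy cells below, the quantity being computed expands by the product rule — this identity is textbook vector calculus, not something the notebook states explicitly:

```latex
\nabla \cdot (\alpha \nabla \phi)
    = \alpha \nabla^2 \phi + \nabla\alpha \cdot \nabla\phi
    = \alpha\,(\phi_{xx} + \phi_{yy}) + \alpha_x \phi_x + \alpha_y \phi_y
```

With $\alpha$ and $\phi$ both periodic on the unit square, integrating the left-hand side over the domain gives zero by the divergence theorem, which is exactly the solvability condition the notebook checks.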
Python Code: %pylab inline from sympy import init_session init_session() alpha = 2.0 + cos(2*pi*x)*cos(2*pi*y) phi = sin(2*pi*x)*sin(2*pi*y) Explanation: Variable Coefficient Poisson Derive the form of a test variable-coefficient elliptic equation with periodic boundary conditions for testing the variable-coefficient multigrid solver. We want to solve an equation of the form $\nabla \cdot (\alpha \nabla \phi) = f$ Note: it is important for solvability that the RHS, f, integrate to zero over our domain. It seems sufficient to ensure that phi integrates to zero to have this condition met. End of explanation phi_x = diff(phi, x) phi_y = diff(phi, y) f = diff(alpha*phi_x, x) + diff(alpha*phi_y, y) f = simplify(f) f print(f) phi.subs(x, 0) phi.subs(x, 1) phi.subs(y, 0) phi.subs(y, 1) Explanation: we want to compute $\nabla \cdot (\alpha \nabla \phi)$ End of explanation phi = sin(2*pi*x)*sin(2*pi*y) phi alpha = 1.0 beta = 2.0 + cos(2*pi*x)*cos(2*pi*y) gamma_x = sin(2*pi*x) gamma_y = sin(2*pi*y) alpha beta gamma_x gamma_y phi_x = diff(phi, x) phi_y = diff(phi, y) f = alpha*phi + diff(beta*phi_x, x) + diff(beta*phi_y, y) + gamma_x*phi_x + gamma_y*phi_y f = simplify(f) f print(f) print(alpha) print(beta) print(gamma_x) print (gamma_y) Explanation: General Elliptic Derive the form of a test variable-coefficient elliptic equation with periodic boundary conditions for testing the variable-coefficient multigrid solver. We are solving $$\alpha \phi + \nabla \cdot (\beta \nabla \phi ) + \gamma \cdot \nabla \phi = f$$ Note: it is important for solvability that the RHS, f, integrate to zero over our domain. It seems sufficient to ensure that phi integrates to zero to have this condition met. 
End of explanation phi = cos(Rational(1,2)*pi*x)*cos(Rational(1,2)*pi*y) phi alpha = 10 gamma_x = 1 gamma_y = 1 beta = x*y + 1 phi_x = diff(phi, x) phi_y = diff(phi, y) f = alpha*phi + diff(beta*phi_x, x) + diff(beta*phi_y, y) + gamma_x*phi_x + gamma_y*phi_y f simplify(f) Explanation: Inhomogeneous BCs End of explanation phi.subs(x,0) phi.subs(x,1) phi.subs(y,0) phi.subs(y,1) Explanation: boundary conditions End of explanation
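The solvability note above — the RHS must integrate to zero over the periodic domain — can also be checked numerically, without SymPy. The sketch below is my own (the function names are not from the notebook): it discretizes $\nabla\cdot(\alpha\nabla\phi)$ in the standard conservative flux form on a periodic grid, so the integral telescopes to zero up to floating-point roundoff:

```python
import math

def alpha(x, y):
    # variable coefficient from the notebook: 2 + cos(2*pi*x)*cos(2*pi*y)
    return 2.0 + math.cos(2 * math.pi * x) * math.cos(2 * math.pi * y)

def phi(x, y):
    return math.sin(2 * math.pi * x) * math.sin(2 * math.pi * y)

def discrete_rhs_integral(n):
    # conservative flux discretization of div(alpha * grad(phi)) on a
    # periodic n x n grid; each interface flux appears once with + and
    # once with -, so the sum over one full period telescopes to ~0
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            x, y = i * h, j * h
            fx = (alpha(x + h / 2, y) * (phi(x + h, y) - phi(x, y))
                  - alpha(x - h / 2, y) * (phi(x, y) - phi(x - h, y))) / h ** 2
            fy = (alpha(x, y + h / 2) * (phi(x, y + h) - phi(x, y))
                  - alpha(x, y - h / 2) * (phi(x, y) - phi(x, y - h))) / h ** 2
            total += (fx + fy) * h * h
    return total

print(abs(discrete_rhs_integral(32)) < 1e-8)  # True: solvability holds
```

The same check applied to the inhomogeneous-BC problem would not return zero, which is fine — that problem is not periodic, so the compatibility condition does not apply.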
7,505
Given the following text description, write Python code to implement the functionality described below step by step Description: Tests overlaying rule 18 values on top of spacetime diagram Step1: Tests overlaying inferred states on top of rule 18 spacetime diagram
Python Code: overlay_test(rule_18.get_spacetime(),rule_18.get_spacetime(),t_max=20, x_max=20, text_color='red') overlay_test(rule_18.get_spacetime(),rule_18.get_spacetime(),t_max=20, x_max=20, colors=plt.cm.Set2, text_color='black') overlay_test(rule_18.get_spacetime(),rule_18.get_spacetime(),t_max=20, x_max=20, colorbar=True) Explanation: Tests overlaying rule 18 values on top of spacetime diagram End of explanation rule_18 = ECA(18, domain_18(500)) rule_18.evolve(500) states_18 = epsilon_field(rule_18.get_spacetime()) states_18.estimate_states(3,3,1,alpha=0) states_18.filter_data() overlay_test(rule_18.get_spacetime(), states_18.get_causal_field(), t_min=200, t_max=240, x_min=200, x_max=240) overlay_test(rule_18.get_spacetime(), states_18.get_causal_field(), t_min=200, t_max=220, x_min=200, x_max=220) print(states_18.get_causal_field()[200:220, 200:220]) Explanation: Tests overlaying inferred states on top of rule 18 spacetime diagram End of explanation
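The ECA and epsilon_field classes above come from the author's own library, so they cannot be run standalone. As an illustration of the update those cells rely on, here is a generic elementary-cellular-automaton step (the function name is mine, not the library's): each cell's 3-cell neighborhood is read as a 3-bit index into the rule number's bits.

```python
def eca_step(cells, rule):
    # one synchronous update of an elementary CA with periodic boundaries;
    # the (left, center, right) neighborhood selects a bit of `rule`
    n = len(cells)
    nxt = []
    for i in range(n):
        left = cells[(i - 1) % n]
        center = cells[i]
        right = cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right
        nxt.append((rule >> idx) & 1)
    return nxt

row = [0, 0, 0, 1, 0, 0, 0]   # single seed cell
print(eca_step(row, 18))      # [0, 0, 1, 0, 1, 0, 0]
```

Rule 18 sets only bits 1 (001) and 4 (100), so a cell turns on exactly when it is off and exactly one neighbor is on — which is why a single seed spreads into the two-armed pattern shown on the first step above.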
7,506
Given the following text description, write Python code to implement the functionality described below step by step Description: Exercise from Think Stats, 2nd Edition (thinkstats2.com)<br> Allen Downey Read the female respondent file. Step1: Make a PMF of <tt>numkdhh</tt>, the number of children under 18 in the respondent's household. Step2: Display the PMF. Step4: Define <tt>BiasPmf</tt>. Step5: Make the biased Pmf of children in the household, as observed if you surveyed the children instead of the respondents. Step6: Display the actual Pmf and the biased Pmf on the same axes. Step7: Compute the means of the two Pmfs.
Python Code: %matplotlib inline import chap01soln resp = chap01soln.ReadFemResp() Explanation: Exercise from Think Stats, 2nd Edition (thinkstats2.com)<br> Allen Downey Read the female respondent file. End of explanation import thinkstats2 as ts children_PMF = ts.Pmf(resp.numkdhh) Explanation: Make a PMF of <tt>numkdhh</tt>, the number of children under 18 in the respondent's household. End of explanation import thinkplot as tp tp.Pmf(children_PMF, label='numkdhh') tp.Show() Explanation: Display the PMF. End of explanation def BiasPmf(pmf, label=''): Returns the Pmf with oversampling proportional to value. If pmf is the distribution of true values, the result is the distribution that would be seen if values are oversampled in proportion to their values; for example, if you ask students how big their classes are, large classes are oversampled in proportion to their size. Args: pmf: Pmf object. label: string label for the new Pmf. Returns: Pmf object new_pmf = pmf.Copy(label=label) for x, p in pmf.Items(): new_pmf.Mult(x, x) new_pmf.Normalize() return new_pmf Explanation: Define <tt>BiasPmf</tt>. End of explanation biasChildrenPmf = BiasPmf(children_PMF, label='biased') Explanation: Make the biased Pmf of children in the household, as observed if you surveyed the children instead of the respondents. End of explanation tp.preplot(2) tp.Pmfs([children_PMF, biasChildrenPmf]) tp.Show() Explanation: Display the actual Pmf and the biased Pmf on the same axes. End of explanation children_PMF.Mean() biasChildrenPmf.Mean() Explanation: Compute the means of the two Pmfs. End of explanation
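The length-biasing idea above can be sketched without thinkstats2, using plain dictionaries as PMFs (the numbers below are toy values I made up, not the NSFG data):

```python
def bias_pmf(pmf):
    # oversample each value x in proportion to x, then renormalize --
    # the dict analogue of BiasPmf's Mult(x, x) + Normalize()
    weighted = {x: p * x for x, p in pmf.items()}
    total = sum(weighted.values())
    return {x: w / total for x, w in weighted.items()}

def pmf_mean(pmf):
    return sum(x * p for x, p in pmf.items())

actual = {0: 0.5, 1: 0.3, 2: 0.2}   # toy "children per household" PMF
observed = bias_pmf(actual)          # what surveying the children would see
print(round(pmf_mean(actual), 3), round(pmf_mean(observed), 3))  # 0.7 1.571
```

Note that households with zero children get zero probability in the biased view — no child can report living in one — which is exactly why the child-survey estimate of the mean comes out higher than the respondent-survey estimate.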
7,507
Given the following text description, write Python code to implement the functionality described below step by step Description: Repairing artifacts with SSP This tutorial covers the basics of signal-space projection (SSP) and shows how SSP can be used for artifact repair; extended examples illustrate use of SSP for environmental noise reduction, and for repair of ocular and heartbeat artifacts. We begin as always by importing the necessary Python modules. To save ourselves from repeatedly typing mne.preprocessing we'll directly import a handful of functions from that submodule Step1: <div class="alert alert-info"><h4>Note</h4><p>Before applying SSP (or any artifact repair strategy), be sure to observe the artifacts in your data to make sure you choose the right repair tool. Sometimes the right tool is no tool at all — if the artifacts are small enough you may not even need to repair them to get good analysis results. See `tut-artifact-overview` for guidance on detecting and visualizing various types of artifact.</p></div> What is SSP? Signal-space projection (SSP) Step2: The example data &lt;sample-dataset&gt; also includes an "empty room" recording taken the same day as the recording of the subject. This will provide a more accurate estimate of environmental noise than the projectors stored with the system (which are typically generated during annual maintenance and tuning). Since we have this subject-specific empty-room recording, we'll create our own projectors from it and discard the system-provided SSP projectors (saving them first, for later comparison with the custom ones) Step3: Notice that the empty room recording itself has the system-provided SSP projectors in it — we'll remove those from the empty room file too. Step4: Visualizing the empty-room noise Let's take a look at the spectrum of the empty room noise. 
We can view an individual spectrum for each sensor, or an average (with confidence band) across sensors Step5: Creating the empty-room projectors We create the SSP vectors using ~mne.compute_proj_raw, and control the number of projectors with parameters n_grad and n_mag. Once created, the field pattern of the projectors can be easily visualized with ~mne.viz.plot_projs_topomap. We include the parameter vlim='joint' so that the colormap is computed jointly for all projectors of a given channel type; this makes it easier to compare their relative smoothness. Note that for the function to know the types of channels in a projector, you must also provide the corresponding ~mne.Info object Step6: Notice that the gradiometer-based projectors seem to reflect problems with individual sensor units rather than a global noise source (indeed, planar gradiometers are much less sensitive to distant sources). This is the reason that the system-provided noise projectors are computed only for magnetometers. Comparing the system-provided projectors to the subject-specific ones, we can see they are reasonably similar (though in a different order) and the left-right component seems to have changed polarity. Step7: Visualizing how projectors affect the signal We could visualize the different effects these have on the data by applying each set of projectors to different copies of the ~mne.io.Raw object using ~mne.io.Raw.apply_proj. However, the ~mne.io.Raw.plot method has a proj parameter that allows us to temporarily apply projectors while plotting, so we can use this to visualize the difference without needing to copy the data. Because the projectors are so similar, we need to zoom in pretty close on the data to see any differences Step8: The effect is sometimes easier to see on averaged data. Here we use an interactive feature of mne.Evoked.plot_topomap to turn projectors on and off to see the effect on the data. 
Of course, the interactivity won't work on the tutorial website, but you can download the tutorial and try it locally Step9: Plotting the ERP/F using evoked.plot() or evoked.plot_joint() with and without projectors applied can also be informative, as can plotting with proj='reconstruct', which can reduce the signal bias introduced by projections (see tut-artifact-ssp-reconstruction below). Example Step10: Repairing ECG artifacts with SSP MNE-Python provides several functions for detecting and removing heartbeats from EEG and MEG data. As we saw in tut-artifact-overview, ~mne.preprocessing.create_ecg_epochs can be used to both detect and extract heartbeat artifacts into an ~mne.Epochs object, which can be used to visualize how the heartbeat artifacts manifest across the sensors Step11: Looks like the EEG channels are pretty spread out; let's baseline-correct and plot again Step12: To compute SSP projectors for the heartbeat artifact, you can use ~mne.preprocessing.compute_proj_ecg, which takes a ~mne.io.Raw object as input and returns the requested number of projectors for magnetometers, gradiometers, and EEG channels (default is two projectors for each channel type). ~mne.preprocessing.compute_proj_ecg also returns an Step13: The first line of output tells us that ~mne.preprocessing.compute_proj_ecg found three existing projectors already in the ~mne.io.Raw object, and will include those in the list of projectors that it returns (appending the new ECG projectors to the end of the list). If you don't want that, you can change that behavior with the boolean no_proj parameter. 
Since we've already run the computation, we can just as easily separate out the ECG projectors by indexing the list of projectors Step14: Just like with the empty-room projectors, we can visualize the scalp distribution Step15: Since no dedicated ECG sensor channel was detected in the ~mne.io.Raw object, by default ~mne.preprocessing.compute_proj_ecg used the magnetometers to estimate the ECG signal (as stated on the third line of output, above). You can also supply the ch_name parameter to restrict which channel to use for ECG artifact detection; this is most useful when you had an ECG sensor but it is not labeled as such in the ~mne.io.Raw file. The next few lines of the output describe the filter used to isolate ECG events. The default settings are usually adequate, but the filter can be customized via the parameters ecg_l_freq, ecg_h_freq, and filter_length (see the documentation of ~mne.preprocessing.compute_proj_ecg for details). .. TODO what are the cases where you might need to customize the ECG filter? infants? Heart murmur? Once the ECG events have been identified, ~mne.preprocessing.compute_proj_ecg will also filter the data channels before extracting epochs around each heartbeat, using the parameter values given in l_freq, h_freq, filter_length, filter_method, and iir_params. Here again, the default parameter values are usually adequate. .. TODO should advice for filtering here be the same as advice for filtering raw data generally? (e.g., keep high-pass very low to avoid peak shifts? what if your raw data is already filtered?) By default, the filtered epochs will be averaged together before the projection is computed; this can be controlled with the boolean average parameter. In general this improves the signal-to-noise (where "signal" here is our artifact!) ratio because the artifact temporal waveform is fairly similar across epochs and well time locked to the detected events. 
To get a sense of how the heartbeat affects the signal at each sensor, you can plot the data with and without the ECG projectors Step16: Finally, note that above we passed reject=None to the ~mne.preprocessing.compute_proj_ecg function, meaning that all detected ECG epochs would be used when computing the projectors (regardless of signal quality in the data sensors during those epochs). The default behavior is to reject epochs based on signal amplitude Step17: Just like we did with the heartbeat artifact, we can compute SSP projectors for the ocular artifact using ~mne.preprocessing.compute_proj_eog, which again takes a ~mne.io.Raw object as input and returns the requested number of projectors for magnetometers, gradiometers, and EEG channels (default is two projectors for each channel type). This time, we'll pass no_proj parameter (so we get back only the new EOG projectors, not also the existing projectors in the ~mne.io.Raw object), and we'll ignore the events array by assigning it to _ (the conventional way of handling unwanted return elements in Python). Step18: Just like with the empty-room and ECG projectors, we can visualize the scalp distribution Step19: Now we repeat the plot from above (with empty room and ECG projectors) and compare it to a plot with empty room, ECG, and EOG projectors, to see how well the ocular artifacts have been repaired Step20: Notice that the small peaks in the first to magnetometer channels (MEG 1411 and MEG 1421) that occur at the same time as the large EEG deflections have also been removed. Choosing the number of projectors In the examples above, we used 3 projectors (all magnetometer) to capture empty room noise, and saw how projectors computed for the gradiometers failed to capture global patterns (and thus we discarded the gradiometer projectors). Then we computed 3 projectors (1 for each channel type) to capture the heartbeat artifact, and 3 more to capture the ocular artifact. How did we choose these numbers? 
The short answer is "based on experience" — knowing how heartbeat artifacts typically manifest across the sensor array allows us to recognize them when we see them, and recognize when additional projectors are capturing something else other than a heartbeat artifact (and thus may be removing brain signal and should be discarded). Visualizing SSP sensor-space bias via signal reconstruction .. sidebar
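Before the full MNE workflow below, the linear algebra that SSP delegates to the library can be sketched in plain Python: a rank-1 SSP operator is $P = I - uu^\top$ for a unit-norm noise field pattern $u$, and applying $P$ removes exactly the component along $u$ while leaving orthogonal (brain-like) components untouched. The three-sensor numbers here are toy values of mine, not real MEG data:

```python
def ssp_projector(pattern):
    # rank-1 SSP operator P = I - u u^T for the unit-norm noise pattern u
    norm = sum(c * c for c in pattern) ** 0.5
    u = [c / norm for c in pattern]
    n = len(u)
    return [[(1.0 if i == j else 0.0) - u[i] * u[j] for j in range(n)]
            for i in range(n)]

def project(matrix, vec):
    # plain matrix-vector product
    return [sum(m * v for m, v in zip(row, vec)) for row in matrix]

noise = [1.0, 2.0, 2.0]    # toy artifact field pattern over 3 sensors
brain = [0.0, 2.0, -2.0]   # toy signal, orthogonal to the artifact
P = ssp_projector(noise)
print(max(abs(v) for v in project(P, noise)) < 1e-12)  # True: artifact removed
print(all(abs(a - b) < 1e-9
          for a, b in zip(project(P, brain), brain)))  # True: signal kept
```

Real brain signals are of course not perfectly orthogonal to the artifact pattern, which is why projection also removes some signal — the bias that `proj='reconstruct'` (discussed below) tries to mitigate.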
Python Code: import os import numpy as np import matplotlib.pyplot as plt import mne from mne.preprocessing import (create_eog_epochs, create_ecg_epochs, compute_proj_ecg, compute_proj_eog) Explanation: Repairing artifacts with SSP This tutorial covers the basics of signal-space projection (SSP) and shows how SSP can be used for artifact repair; extended examples illustrate use of SSP for environmental noise reduction, and for repair of ocular and heartbeat artifacts. We begin as always by importing the necessary Python modules. To save ourselves from repeatedly typing mne.preprocessing we'll directly import a handful of functions from that submodule: End of explanation sample_data_folder = mne.datasets.sample.data_path() sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample', 'sample_audvis_raw.fif') # here we crop and resample just for speed raw = mne.io.read_raw_fif(sample_data_raw_file).crop(0, 60) raw.load_data().resample(100) Explanation: <div class="alert alert-info"><h4>Note</h4><p>Before applying SSP (or any artifact repair strategy), be sure to observe the artifacts in your data to make sure you choose the right repair tool. Sometimes the right tool is no tool at all — if the artifacts are small enough you may not even need to repair them to get good analysis results. See `tut-artifact-overview` for guidance on detecting and visualizing various types of artifact.</p></div> What is SSP? Signal-space projection (SSP) :footcite:UusitaloIlmoniemi1997 is a technique for removing noise from EEG and MEG signals by :term:projecting &lt;projector&gt; the signal onto a lower-dimensional subspace. The subspace is chosen by calculating the average pattern across sensors when the noise is present, treating that pattern as a "direction" in the sensor space, and constructing the subspace to be orthogonal to the noise direction (for a detailed walk-through of projection see tut-projectors-background). 
The most common use of SSP is to remove noise from MEG signals when the noise comes from environmental sources (sources outside the subject's body and the MEG system, such as the electromagnetic fields from nearby electrical equipment) and when that noise is stationary (doesn't change much over the duration of the recording). However, SSP can also be used to remove biological artifacts such as heartbeat (ECG) and eye movement (EOG) artifacts. Examples of each of these are given below. Example: Environmental noise reduction from empty-room recordings The example data &lt;sample-dataset&gt; was recorded on a Neuromag system, which stores SSP projectors for environmental noise removal in the system configuration (so that reasonably clean raw data can be viewed in real-time during acquisition). For this reason, all the ~mne.io.Raw data in the example dataset already includes SSP projectors, which are noted in the output when loading the data: End of explanation system_projs = raw.info['projs'] raw.del_proj() empty_room_file = os.path.join(sample_data_folder, 'MEG', 'sample', 'ernoise_raw.fif') # cropped to 60 sec just for speed empty_room_raw = mne.io.read_raw_fif(empty_room_file).crop(0, 30) Explanation: The example data &lt;sample-dataset&gt; also includes an "empty room" recording taken the same day as the recording of the subject. This will provide a more accurate estimate of environmental noise than the projectors stored with the system (which are typically generated during annual maintenance and tuning). Since we have this subject-specific empty-room recording, we'll create our own projectors from it and discard the system-provided SSP projectors (saving them first, for later comparison with the custom ones): End of explanation empty_room_raw.del_proj() Explanation: Notice that the empty room recording itself has the system-provided SSP projectors in it — we'll remove those from the empty room file too. 
End of explanation for average in (False, True): empty_room_raw.plot_psd(average=average, dB=False, xscale='log') Explanation: Visualizing the empty-room noise Let's take a look at the spectrum of the empty room noise. We can view an individual spectrum for each sensor, or an average (with confidence band) across sensors: End of explanation empty_room_projs = mne.compute_proj_raw(empty_room_raw, n_grad=3, n_mag=3) mne.viz.plot_projs_topomap(empty_room_projs, colorbar=True, vlim='joint', info=empty_room_raw.info) Explanation: Creating the empty-room projectors We create the SSP vectors using ~mne.compute_proj_raw, and control the number of projectors with parameters n_grad and n_mag. Once created, the field pattern of the projectors can be easily visualized with ~mne.viz.plot_projs_topomap. We include the parameter vlim='joint' so that the colormap is computed jointly for all projectors of a given channel type; this makes it easier to compare their relative smoothness. Note that for the function to know the types of channels in a projector, you must also provide the corresponding ~mne.Info object: End of explanation fig, axs = plt.subplots(2, 3) for idx, _projs in enumerate([system_projs, empty_room_projs[3:]]): mne.viz.plot_projs_topomap(_projs, axes=axs[idx], colorbar=True, vlim='joint', info=empty_room_raw.info) Explanation: Notice that the gradiometer-based projectors seem to reflect problems with individual sensor units rather than a global noise source (indeed, planar gradiometers are much less sensitive to distant sources). This is the reason that the system-provided noise projectors are computed only for magnetometers. Comparing the system-provided projectors to the subject-specific ones, we can see they are reasonably similar (though in a different order) and the left-right component seems to have changed polarity. 
End of explanation mags = mne.pick_types(raw.info, meg='mag') for title, projs in [('system', system_projs), ('subject-specific', empty_room_projs[3:])]: raw.add_proj(projs, remove_existing=True) with mne.viz.use_browser_backend('matplotlib'): fig = raw.plot(proj=True, order=mags, duration=1, n_channels=2) fig.subplots_adjust(top=0.9) # make room for title fig.suptitle('{} projectors'.format(title), size='xx-large', weight='bold') Explanation: Visualizing how projectors affect the signal We could visualize the different effects these have on the data by applying each set of projectors to different copies of the ~mne.io.Raw object using ~mne.io.Raw.apply_proj. However, the ~mne.io.Raw.plot method has a proj parameter that allows us to temporarily apply projectors while plotting, so we can use this to visualize the difference without needing to copy the data. Because the projectors are so similar, we need to zoom in pretty close on the data to see any differences: End of explanation events = mne.find_events(raw, stim_channel='STI 014') event_id = {'auditory/left': 1} # NOTE: appropriate rejection criteria are highly data-dependent reject = dict(mag=4000e-15, # 4000 fT grad=4000e-13, # 4000 fT/cm eeg=150e-6, # 150 µV eog=250e-6) # 250 µV # time range where we expect to see the auditory N100: 50-150 ms post-stimulus times = np.linspace(0.05, 0.15, 5) epochs = mne.Epochs(raw, events, event_id, proj='delayed', reject=reject) fig = epochs.average().plot_topomap(times, proj='interactive') Explanation: The effect is sometimes easier to see on averaged data. Here we use an interactive feature of mne.Evoked.plot_topomap to turn projectors on and off to see the effect on the data. 
Of course, the interactivity won't work on the tutorial website, but you can download the tutorial and try it locally: End of explanation # pick some channels that clearly show heartbeats and blinks regexp = r'(MEG [12][45][123]1|EEG 00.)' artifact_picks = mne.pick_channels_regexp(raw.ch_names, regexp=regexp) raw.plot(order=artifact_picks, n_channels=len(artifact_picks)) Explanation: Plotting the ERP/F using evoked.plot() or evoked.plot_joint() with and without projectors applied can also be informative, as can plotting with proj='reconstruct', which can reduce the signal bias introduced by projections (see tut-artifact-ssp-reconstruction below). Example: EOG and ECG artifact repair Visualizing the artifacts As mentioned in the ICA tutorial &lt;tut-artifact-ica&gt;, an important first step is visualizing the artifacts you want to repair. Here they are in the raw data: End of explanation ecg_evoked = create_ecg_epochs(raw).average() ecg_evoked.plot_joint() Explanation: Repairing ECG artifacts with SSP MNE-Python provides several functions for detecting and removing heartbeats from EEG and MEG data. As we saw in tut-artifact-overview, ~mne.preprocessing.create_ecg_epochs can be used to both detect and extract heartbeat artifacts into an ~mne.Epochs object, which can be used to visualize how the heartbeat artifacts manifest across the sensors: End of explanation ecg_evoked.apply_baseline((None, None)) ecg_evoked.plot_joint() Explanation: Looks like the EEG channels are pretty spread out; let's baseline-correct and plot again: End of explanation projs, events = compute_proj_ecg(raw, n_grad=1, n_mag=1, n_eeg=1, reject=None) Explanation: To compute SSP projectors for the heartbeat artifact, you can use ~mne.preprocessing.compute_proj_ecg, which takes a ~mne.io.Raw object as input and returns the requested number of projectors for magnetometers, gradiometers, and EEG channels (default is two projectors for each channel type). 
~mne.preprocessing.compute_proj_ecg also returns an :term:events array containing the sample numbers corresponding to the peak of the R wave_ of each detected heartbeat. End of explanation ecg_projs = projs[3:] print(ecg_projs) Explanation: The first line of output tells us that ~mne.preprocessing.compute_proj_ecg found three existing projectors already in the ~mne.io.Raw object, and will include those in the list of projectors that it returns (appending the new ECG projectors to the end of the list). If you don't want that, you can change that behavior with the boolean no_proj parameter. Since we've already run the computation, we can just as easily separate out the ECG projectors by indexing the list of projectors: End of explanation mne.viz.plot_projs_topomap(ecg_projs, info=raw.info) Explanation: Just like with the empty-room projectors, we can visualize the scalp distribution: End of explanation raw.del_proj() for title, proj in [('Without', empty_room_projs), ('With', ecg_projs)]: raw.add_proj(proj, remove_existing=False) with mne.viz.use_browser_backend('matplotlib'): fig = raw.plot(order=artifact_picks, n_channels=len(artifact_picks)) fig.subplots_adjust(top=0.9) # make room for title fig.suptitle('{} ECG projectors'.format(title), size='xx-large', weight='bold') Explanation: Since no dedicated ECG sensor channel was detected in the ~mne.io.Raw object, by default ~mne.preprocessing.compute_proj_ecg used the magnetometers to estimate the ECG signal (as stated on the third line of output, above). You can also supply the ch_name parameter to restrict which channel to use for ECG artifact detection; this is most useful when you had an ECG sensor but it is not labeled as such in the ~mne.io.Raw file. The next few lines of the output describe the filter used to isolate ECG events. 
The default settings are usually adequate, but the filter can be customized via the parameters ecg_l_freq, ecg_h_freq, and filter_length (see the documentation of ~mne.preprocessing.compute_proj_ecg for details). .. TODO what are the cases where you might need to customize the ECG filter? infants? Heart murmur? Once the ECG events have been identified, ~mne.preprocessing.compute_proj_ecg will also filter the data channels before extracting epochs around each heartbeat, using the parameter values given in l_freq, h_freq, filter_length, filter_method, and iir_params. Here again, the default parameter values are usually adequate. .. TODO should advice for filtering here be the same as advice for filtering raw data generally? (e.g., keep high-pass very low to avoid peak shifts? what if your raw data is already filtered?) By default, the filtered epochs will be averaged together before the projection is computed; this can be controlled with the boolean average parameter. In general this improves the signal-to-noise (where "signal" here is our artifact!) ratio because the artifact temporal waveform is fairly similar across epochs and well time locked to the detected events. To get a sense of how the heartbeat affects the signal at each sensor, you can plot the data with and without the ECG projectors: End of explanation eog_evoked = create_eog_epochs(raw).average() eog_evoked.apply_baseline((None, None)) eog_evoked.plot_joint() Explanation: Finally, note that above we passed reject=None to the ~mne.preprocessing.compute_proj_ecg function, meaning that all detected ECG epochs would be used when computing the projectors (regardless of signal quality in the data sensors during those epochs). The default behavior is to reject epochs based on signal amplitude: epochs with peak-to-peak amplitudes exceeding 50 µV in EEG channels, 250 µV in EOG channels, 2000 fT/cm in gradiometer channels, or 3000 fT in magnetometer channels. 
You can change these thresholds by passing a dictionary with keys eeg, eog, mag, and grad (though be sure to pass the threshold values in volts, teslas, or teslas/meter). Generally, it is a good idea to reject such epochs when computing the ECG projectors (since presumably the high-amplitude fluctuations in the channels are noise, not reflective of brain activity); passing reject=None above was done simply to avoid the dozens of extra lines of output (enumerating which sensor(s) were responsible for each rejected epoch) from cluttering up the tutorial. <div class="alert alert-info"><h4>Note</h4><p>`~mne.preprocessing.compute_proj_ecg` has a similar parameter ``flat`` for specifying the *minimum* acceptable peak-to-peak amplitude for each channel type.</p></div> While ~mne.preprocessing.compute_proj_ecg conveniently combines several operations into a single function, MNE-Python also provides functions for performing each part of the process. Specifically: mne.preprocessing.find_ecg_events for detecting heartbeats in a ~mne.io.Raw object and returning a corresponding :term:events array mne.preprocessing.create_ecg_epochs for detecting heartbeats in a ~mne.io.Raw object and returning an ~mne.Epochs object mne.compute_proj_epochs for creating projector(s) from any ~mne.Epochs object See the documentation of each function for further details. Repairing EOG artifacts with SSP Once again let's visualize our artifact before trying to repair it. 
We've seen above the large deflections in frontal EEG channels in the raw data; here is how the ocular artifact manifests across all the sensors: End of explanation eog_projs, _ = compute_proj_eog(raw, n_grad=1, n_mag=1, n_eeg=1, reject=None, no_proj=True) Explanation: Just like we did with the heartbeat artifact, we can compute SSP projectors for the ocular artifact using ~mne.preprocessing.compute_proj_eog, which again takes a ~mne.io.Raw object as input and returns the requested number of projectors for magnetometers, gradiometers, and EEG channels (default is two projectors for each channel type). This time, we'll pass the no_proj parameter (so we get back only the new EOG projectors, not also the existing projectors in the ~mne.io.Raw object), and we'll ignore the events array by assigning it to _ (the conventional way of handling unwanted return elements in Python). End of explanation mne.viz.plot_projs_topomap(eog_projs, info=raw.info) Explanation: Just like with the empty-room and ECG projectors, we can visualize the scalp distribution: End of explanation for title in ('Without', 'With'): if title == 'With': raw.add_proj(eog_projs) with mne.viz.use_browser_backend('matplotlib'): fig = raw.plot(order=artifact_picks, n_channels=len(artifact_picks)) fig.subplots_adjust(top=0.9) # make room for title fig.suptitle('{} EOG projectors'.format(title), size='xx-large', weight='bold') Explanation: Now we repeat the plot from above (with empty room and ECG projectors) and compare it to a plot with empty room, ECG, and EOG projectors, to see how well the ocular artifacts have been repaired: End of explanation evoked_eeg = epochs.average().pick('eeg') evoked_eeg.del_proj().add_proj(ecg_projs).add_proj(eog_projs) fig, axes = plt.subplots(1, 3, figsize=(8, 3), squeeze=False) for ii in range(axes.shape[0]): axes[ii, 0].get_shared_y_axes().join(*axes[ii]) for pi, proj in enumerate((False, True, 'reconstruct')): evoked_eeg.plot(proj=proj, axes=axes[:, pi], spatial_colors=True)
if pi == 0: for ax in axes[:, pi]: parts = ax.get_title().split('(') ax.set(ylabel=f'{parts[0]} ({ax.get_ylabel()})\n' f'{parts[1].replace(")", "")}') axes[0, pi].set(title=f'proj={proj}') for text in list(axes[0, pi].texts): text.remove() plt.setp(axes[1:, :].ravel(), title='') plt.setp(axes[:, 1:].ravel(), ylabel='') plt.setp(axes[:-1, :].ravel(), xlabel='') mne.viz.tight_layout() Explanation: Notice that the small peaks in the first two magnetometer channels (MEG 1411 and MEG 1421) that occur at the same time as the large EEG deflections have also been removed. Choosing the number of projectors In the examples above, we used 3 projectors (all magnetometer) to capture empty room noise, and saw how projectors computed for the gradiometers failed to capture global patterns (and thus we discarded the gradiometer projectors). Then we computed 3 projectors (1 for each channel type) to capture the heartbeat artifact, and 3 more to capture the ocular artifact. How did we choose these numbers? The short answer is "based on experience" — knowing how heartbeat artifacts typically manifest across the sensor array allows us to recognize them when we see them, and recognize when additional projectors are capturing something else other than a heartbeat artifact (and thus may be removing brain signal and should be discarded). Visualizing SSP sensor-space bias via signal reconstruction .. sidebar:: SSP reconstruction Internally, the reconstruction is performed by effectively using a minimum-norm source localization to a spherical source space with the projections accounted for, and then projecting the source-space data back out to sensor space. Because SSP performs an orthogonal projection, any spatial component in the data that is not perfectly orthogonal to the SSP spatial direction(s) will have its overall amplitude reduced by the projection operation. In other words, SSP typically introduces some amount of amplitude reduction bias in the sensor space data.
When performing source localization of M/EEG data, these projections are properly taken into account by being applied not just to the M/EEG data but also to the forward solution, and hence SSP should not bias the estimated source amplitudes. However, for sensor space analyses, it can be useful to visualize the extent to which SSP projection has biased the data. This can be explored by using proj='reconstruct' in evoked plotting functions, for example via evoked.plot() <mne.Evoked.plot>, here restricted to just EEG channels for speed: End of explanation
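The amplitude-reduction bias described in the sidebar is just linear algebra, so it can be demonstrated without MNE at all. Below is a small NumPy sketch (the artifact direction u and the sensor sample x are made up for illustration): an SSP-style orthogonal projector P = I - u uᵀ removes the component of any signal along u, so any signal that is not perfectly orthogonal to u comes out with reduced amplitude.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors = 5

# Hypothetical unit-norm spatial pattern of an artifact (e.g. a heartbeat)
u = rng.standard_normal(n_sensors)
u /= np.linalg.norm(u)

# SSP-style orthogonal projector that removes the component along u
P = np.eye(n_sensors) - np.outer(u, u)

# A hypothetical single time sample of sensor data
x = rng.standard_normal(n_sensors)
x_proj = P @ x

# After projection there is no component left along u,
# and the overall amplitude can only shrink (the "bias")
print(u @ x_proj)
print(np.linalg.norm(x), np.linalg.norm(x_proj))
```

This is why sensor-space amplitudes after SSP are systematically a bit smaller, and why the reconstruction option exists to visualize that effect.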
7,508
Given the following text description, write Python code to implement the functionality described below step by step Description: Transient Fickian Diffusion The package OpenPNM allows for the simulation of many transport phenomena in porous media such as Stokes flow, Fickian diffusion, advection-diffusion, transport of charged species, etc. Transient and steady-state simulations are both supported. An example of a transient Fickian diffusion simulation through a Cubic pore network is shown here. First, OpenPNM is imported. Step1: Define new workspace and project Step2: Generate a pore network An arbitrary Cubic 3D pore network is generated consisting of a layer of $29\times13$ pores with a constant pore-to-pore center spacing of ${10}^{-4}{m}$. Step3: Create a geometry Here, a geometry, corresponding to the created network, is created. The geometry contains information about the size of pores and throats in the network such as length and diameter, etc. OpenPNM has many prebuilt geometries that represent the microstructure of different materials such as Toray090 carbon papers, sandstone, electrospun fibers, etc. In this example, a simple geometry known as SpheresAndCylinders that assigns random diameter values to pores and throats, with certain constraints, is used. Step4: Add a phase Then, a phase (water in this example) is added to the simulation and assigned to the network. The phase contains the physical properties of the fluid considered in the simulation such as the viscosity, etc. Many predefined phases are available in OpenPNM. Step5: Add a physics Next, a physics object is defined. The physics object stores information about the different physical models used in the simulation and is assigned to specific network, geometry and phase objects. This ensures that the different physical models will only have access to information about the network, geometry and phase objects to which they are assigned.
In fact, models (such as Stokes flow or Fickian diffusion) require information about the network (such as the connectivity between pores), the geometry (such as the pore and throat diameters), and the phase (such as the diffusivity coefficient). Step6: The diffusivity coefficient of the considered chemical species in water is also defined. Step7: Defining a new model The physical model, consisting of Fickian diffusion, is defined and attached to the physics object previously defined. Step8: Define a transient Fickian diffusion algorithm Here, an algorithm for the simulation of transient Fickian diffusion is defined. It is assigned to the network and phase of interest to be able to retrieve all the information needed to build systems of linear equations. Step9: Add boundary conditions Next, Dirichlet boundary conditions are added over the back and front boundaries of the network. Step10: Define initial conditions Initial conditions must be specified when alg.run is called as alg.run(x0=x0), where x0 could either be a scalar (in which case it'll be broadcast to all pores), or an array. Set up the transient algorithm settings The settings of the transient algorithm are updated here. When calling alg.run, you can pass the following arguments Step11: Print the algorithm settings One can print the algorithm's settings as shown here. Step12: Note that the quantity corresponds to the quantity solved for. Run the algorithm The algorithm is run here. Step13: Post-process and export the results Once the simulation has successfully run, the solution at every time step is stored within the algorithm object. The algorithm's stored information is printed here. Step14: Note that the solutions at every exported time step contain the @ character followed by the time value. Here the solution is exported every $5s$, in addition to the final time step, which is not a multiple of $5$ in this example.
To print the solution at $t=10s$ Step15: The solution is here stored in the phase before export. Step16: Visualization using Matplotlib One can perform post-processing and visualization using the exported files in external software such as ParaView. Additionally, the Python library Matplotlib can be used as shown here to plot the concentration color map at steady-state.
Python Code: import numpy as np import openpnm as op %config InlineBackend.figure_formats = ['svg'] np.random.seed(10) %matplotlib inline np.set_printoptions(precision=5) Explanation: Transient Fickian Diffusion The package OpenPNM allows for the simulation of many transport phenomena in porous media such as Stokes flow, Fickian diffusion, advection-diffusion, transport of charged species, etc. Transient and steady-state simulations are both supported. An example of a transient Fickian diffusion simulation through a Cubic pore network is shown here. First, OpenPNM is imported. End of explanation ws = op.Workspace() ws.settings["loglevel"] = 40 proj = ws.new_project() Explanation: Define new workspace and project End of explanation shape = [13, 29, 1] net = op.network.Cubic(shape=shape, spacing=1e-4, project=proj) Explanation: Generate a pore network An arbitrary Cubic 3D pore network is generated consisting of a layer of $29\times13$ pores with a constant pore-to-pore center spacing of ${10}^{-4}{m}$. End of explanation geo = op.geometry.SpheresAndCylinders(network=net, pores=net.Ps, throats=net.Ts) Explanation: Create a geometry Here, a geometry, corresponding to the created network, is created. The geometry contains information about the size of pores and throats in the network such as length and diameter, etc. OpenPNM has many prebuilt geometries that represent the microstructure of different materials such as Toray090 carbon papers, sandstone, electrospun fibers, etc. In this example, a simple geometry known as SpheresAndCylinders that assigns random diameter values to pores and throats, with certain constraints, is used. End of explanation phase = op.phases.Water(network=net) Explanation: Add a phase Then, a phase (water in this example) is added to the simulation and assigned to the network. The phase contains the physical properties of the fluid considered in the simulation such as the viscosity, etc. Many predefined phases are available in OpenPNM.
End of explanation phys = op.physics.GenericPhysics(network=net, phase=phase, geometry=geo) Explanation: Add a physics Next, a physics object is defined. The physics object stores information about the different physical models used in the simulation and is assigned to specific network, geometry and phase objects. This ensures that the different physical models will only have access to information about the network, geometry and phase objects to which they are assigned. In fact, models (such as Stokes flow or Fickian diffusion) require information about the network (such as the connectivity between pores), the geometry (such as the pore and throat diameters), and the phase (such as the diffusivity coefficient). End of explanation phase['pore.diffusivity'] = 2e-09 Explanation: The diffusivity coefficient of the considered chemical species in water is also defined. End of explanation mod = op.models.physics.diffusive_conductance.ordinary_diffusion phys.add_model(propname='throat.diffusive_conductance', model=mod, regen_mode='normal') Explanation: Defining a new model The physical model, consisting of Fickian diffusion, is defined and attached to the physics object previously defined. End of explanation fd = op.algorithms.TransientFickianDiffusion(network=net, phase=phase) Explanation: Define a transient Fickian diffusion algorithm Here, an algorithm for the simulation of transient Fickian diffusion is defined. It is assigned to the network and phase of interest to be able to retrieve all the information needed to build systems of linear equations. End of explanation fd.set_value_BC(pores=net.pores('back'), values=0.5) fd.set_value_BC(pores=net.pores('front'), values=0.2) Explanation: Add boundary conditions Next, Dirichlet boundary conditions are added over the back and front boundaries of the network.
End of explanation x0 = 0.2 tspan = (0, 100) saveat = 5 Explanation: Define initial conditions Initial conditions must be specified when alg.run is called as alg.run(x0=x0), where x0 could either be a scalar (in which case it'll be broadcast to all pores), or an array. Set up the transient algorithm settings The settings of the transient algorithm are updated here. When calling alg.run, you can pass the following arguments: - x0: initial conditions - tspan: integration time span - saveat: the interval at which the solution is to be stored End of explanation print(fd.settings) Explanation: Print the algorithm settings One can print the algorithm's settings as shown here. End of explanation soln = fd.run(x0=x0, tspan=tspan, saveat=saveat) Explanation: Note that the quantity corresponds to the quantity solved for. Run the algorithm The algorithm is run here. End of explanation print(fd) Explanation: Post-process and export the results Once the simulation has successfully run, the solution at every time step is stored within the algorithm object. The algorithm's stored information is printed here. End of explanation soln(10) Explanation: Note that the solutions at every exported time step contain the @ character followed by the time value. Here the solution is exported every $5s$, in addition to the final time step, which is not a multiple of $5$ in this example. To print the solution at $t=10s$ End of explanation phase.update(fd.results()) Explanation: The solution is here stored in the phase before export.
End of explanation import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable c = fd.x.reshape(shape) fig, ax = plt.subplots(figsize=(6, 6)) im = ax.imshow(c[:,:,0]) ax.set_title('concentration (mol/m$^3$)') divider = make_axes_locatable(ax) cax = divider.append_axes("right", size="4%", pad=0.1) plt.colorbar(im, cax=cax); Explanation: Visualization using Matplotlib One can perform post-processing and visualization using the exported files in external software such as ParaView. Additionally, the Python library Matplotlib can be used as shown here to plot the concentration color map at steady-state. End of explanation
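OpenPNM assembles and solves the transient system internally; to make the underlying physics concrete, here is a minimal hand-rolled sketch of the same kind of problem: 1-D transient diffusion on a chain of pores with the Dirichlet values 0.5 and 0.2 from the example above (the chain length, step size, and step count are made up for illustration; this is not OpenPNM code):

```python
import numpy as np

n = 20                     # pores along a 1-D chain (illustrative)
c = np.full(n, 0.2)        # initial condition, matching x0 = 0.2 above
c[0], c[-1] = 0.5, 0.2     # Dirichlet boundary values from the example

alpha = 0.4                # dimensionless step size; < 0.5 keeps explicit Euler stable

for _ in range(2000):      # march in time toward steady state
    lap = c[:-2] - 2.0 * c[1:-1] + c[2:]   # discrete Laplacian on interior pores
    c[1:-1] += alpha * lap
    c[0], c[-1] = 0.5, 0.2                 # re-impose boundary conditions

# At steady state the concentration profile is linear between the boundaries
print(np.max(np.abs(c - np.linspace(0.5, 0.2, n))))
```

The transient solution interpolates between the flat initial condition and this linear steady-state profile, which is exactly what the saved time steps of the OpenPNM run show in 2-D.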
7,509
Given the following text description, write Python code to implement the functionality described below step by step Description: How to forecast time series in BigQuery ML This notebook accompanies the article How to do time series forecasting in BigQuery Install library and extensions if needed You don't need to do this if you use AI Platform Notebooks Step1: Helper plot functions Step2: Plot the time series The first step, as with any machine learning problem, is to gather the training data and explore it. Assume that we have the data on rentals until mid-June of 2015 and we'd like to predict for the rest of the month. We can gather the past 6 weeks of data using Step3: Train ARIMA model We can use this data to train an ARIMA model, telling BigQuery which column is the data column and which one the timestamp column Step4: We can get the forecast data using Step5: Forecasting a bunch of series So far, I have been forecasting the overall rental volume for all the bicycle stations in Hyde Park. How do we predict the rental volume for each individual station? Use the time_series_id_col Step6: Note that instead of training the series on 45 days (May 1 to June 15), I'm now training on a longer time period. That's because the aggregate time series will tend to be smoother and much easier to predict than the time series for individual stations. So, we have to show the model a longer trendline. Step7: As you would expect, the aggregated time series over all the stations is much smoother and more predictable than the time series of just one station (the single-station data will be noisier). So, some forecasts will be better than others. Step8: Evaluation As you can see from the graphs above, the prediction accuracy varies by station. Can we gauge how good the prediction for a station is going to be?
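One concrete way to answer "how good is the forecast" once the held-out actuals are available is a simple error metric over the forecast horizon. A minimal NumPy sketch with made-up forecast and actual arrays (these numbers are placeholders for illustration, not query results):

```python
import numpy as np

# Hypothetical values -- stand-ins for ML.FORECAST output and held-out actuals
forecast = np.array([210.0, 205.0, 198.0, 220.0, 215.0])
actual = np.array([200.0, 214.0, 190.0, 230.0, 205.0])

mae = np.mean(np.abs(forecast - actual))            # mean absolute error
mape = np.mean(np.abs(forecast - actual) / actual)  # mean absolute percentage error
print(mae, mape)
```

A metric like this, computed per station, is one way to compare the per-series forecasts that the rest of this notebook produces.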
Python Code: #!pip install google-cloud-bigquery %load_ext google.cloud.bigquery Explanation: How to forecast time series in BigQuery ML This notebook accompanies the article How to do time series forecasting in BigQuery Install library and extensions if needed You don't need to do this if you use AI Platform Notebooks End of explanation import matplotlib.pyplot as plt import pandas as pd def plot_historical_and_forecast(input_timeseries, timestamp_col_name, data_col_name, forecast_output=None, actual=None): input_timeseries = input_timeseries.sort_values(timestamp_col_name) plt.figure(figsize=(20,6)) plt.plot(input_timeseries[timestamp_col_name], input_timeseries[data_col_name], label = 'Historical') plt.xlabel(timestamp_col_name) plt.ylabel(data_col_name) if forecast_output is not None: forecast_output = forecast_output.sort_values('forecast_timestamp') forecast_output['forecast_timestamp'] = pd.to_datetime(forecast_output['forecast_timestamp']) x_data = forecast_output['forecast_timestamp'] y_data = forecast_output['forecast_value'] confidence_level = forecast_output['confidence_level'].iloc[0] * 100 low_CI = forecast_output['confidence_interval_lower_bound'] upper_CI = forecast_output['confidence_interval_upper_bound'] # Plot the data, set the linewidth, color and transparency of the # line, provide a label for the legend plt.plot(x_data, y_data, alpha = 1, label = 'Forecast', linestyle='--') # Shade the confidence interval plt.fill_between(x_data, low_CI, upper_CI, color = '#539caf', alpha = 0.4, label = str(confidence_level) + '% confidence interval') # actual if actual is not None: actual = actual.sort_values(timestamp_col_name) plt.plot(actual[timestamp_col_name], actual[data_col_name], label = 'Actual', linestyle='--') # Display legend plt.legend(loc = 'upper center', prop={'size': 16}) Explanation: Helper plot functions End of explanation %%bigquery df SELECT CAST(EXTRACT(date from start_date) AS TIMESTAMP) AS date , COUNT(*) AS numrentals FROM 
`bigquery-public-data`.london_bicycles.cycle_hire WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park GROUP BY date HAVING date BETWEEN '2015-05-01' AND '2015-06-15' ORDER BY date plot_historical_and_forecast(df, 'date', 'numrentals'); Explanation: Plot the time series The first step, as with any machine learning problem, is to gather the training data and explore it. Assume that we have the data on rentals until mid-June of 2015 and we'd like to predict for the rest of the month. We can gather the past 6 weeks of data using: End of explanation !bq ls ch09eu || bq mk --location EU ch09eu %%bigquery CREATE OR REPLACE MODEL ch09eu.numrentals_forecast OPTIONS(model_type='ARIMA', time_series_data_col='numrentals', time_series_timestamp_col='date') AS SELECT CAST(EXTRACT(date from start_date) AS TIMESTAMP) AS date , COUNT(*) AS numrentals FROM `bigquery-public-data`.london_bicycles.cycle_hire WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park GROUP BY date HAVING date BETWEEN '2015-05-01' AND '2015-06-15' Explanation: Train ARIMA model We can use this data to train an ARIMA model, telling BigQuery which column is the data column and which one the timestamp column: End of explanation %%bigquery fcst SELECT * FROM ML.FORECAST(MODEL ch09eu.numrentals_forecast, STRUCT(14 AS horizon, 0.9 AS confidence_level)) plot_historical_and_forecast(df, 'date', 'numrentals', fcst); %%bigquery actual SELECT CAST(EXTRACT(date from start_date) AS TIMESTAMP) AS date , COUNT(*) AS numrentals FROM `bigquery-public-data`.london_bicycles.cycle_hire WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park GROUP BY date HAVING date BETWEEN '2015-06-16' AND '2015-07-01' ORDER BY date plot_historical_and_forecast(df, 'date', 'numrentals', fcst, actual); Explanation: We can get the forecast data using: End of explanation %%bigquery CREATE OR REPLACE MODEL ch09eu.numrentals_forecast OPTIONS(model_type='ARIMA', time_series_data_col='numrentals',
time_series_timestamp_col='date', time_series_id_col='start_station_name') AS SELECT start_station_name , CAST(EXTRACT(date from start_date) AS TIMESTAMP) AS date , COUNT(*) AS numrentals FROM `bigquery-public-data`.london_bicycles.cycle_hire WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park GROUP BY start_station_name, date HAVING date BETWEEN '2015-01-01' AND '2015-06-15' Explanation: Forecasting a bunch of series So far, I have been forecasting the overall rental volume for all the bicycle stations in Hyde Park. How do we predict the rental volume for each individual station? Use the time_series_id_col: End of explanation %%bigquery SELECT * FROM ML.ARIMA_COEFFICIENTS(MODEL ch09eu.numrentals_forecast) ORDER BY start_station_name %%bigquery fcst SELECT * FROM ML.FORECAST(MODEL ch09eu.numrentals_forecast, STRUCT(14 AS horizon, 0.9 AS confidence_level)) ORDER By start_station_name, forecast_timestamp %%bigquery df SELECT start_station_name , CAST(EXTRACT(date from start_date) AS TIMESTAMP) AS date , COUNT(*) AS numrentals FROM `bigquery-public-data`.london_bicycles.cycle_hire WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park GROUP BY start_station_name, date HAVING date BETWEEN '2015-05-01' AND '2015-06-15' -- this is just for plotting, hence we'll keep this 45 days. %%bigquery actual SELECT start_station_name , CAST(EXTRACT(date from start_date) AS TIMESTAMP) AS date , COUNT(*) AS numrentals FROM `bigquery-public-data`.london_bicycles.cycle_hire WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park GROUP BY start_station_name, date HAVING date BETWEEN '2015-06-16' AND '2015-07-01' Explanation: Note that instead of training the series on 45 days (May 1 to June 15), I'm now training on a longer time period. That's because the aggregate time series will tend to be smoother and much easier to predict than the time series for individual stations. So, we have to show the model a longer trendline. 
End of explanation %%bigquery stations SELECT DISTINCT start_station_name FROM `bigquery-public-data`.london_bicycles.cycle_hire WHERE start_station_name LIKE '%Hyde%' -- all stations in Hyde Park ORDER BY start_station_name ASC stations station = stations['start_station_name'].iloc[3] # Hyde Park Corner print(station) plot_historical_and_forecast(df[df['start_station_name']==station], 'date', 'numrentals', fcst[fcst['start_station_name']==station], actual[actual['start_station_name']==station]); station = stations['start_station_name'].iloc[6] # Serpentine Car Park print(station) plot_historical_and_forecast(df[df['start_station_name']==station], 'date', 'numrentals', fcst[fcst['start_station_name']==station], actual[actual['start_station_name']==station]); station = stations['start_station_name'].iloc[4] # Knightsbridge print(station) plot_historical_and_forecast(df[df['start_station_name']==station], 'date', 'numrentals', fcst[fcst['start_station_name']==station], actual[actual['start_station_name']==station]); Explanation: As you would expect, the aggregated time series over all the stations is much smoother and more predictable than the time series of just one station (the single-station data will be noisier). So, some forecasts will be better than others. End of explanation %%bigquery SELECT * FROM ML.EVALUATE(MODEL ch09eu.numrentals_forecast) ORDER BY variance DESC Explanation: Evaluation As you can see from the graphs above, the prediction accuracy varies by station. Can we gauge how good the prediction for a station is going to be? End of explanation
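BigQuery ML fits the ARIMA parameters internally (ML.ARIMA_COEFFICIENTS above shows the result). To build some intuition for what is being estimated, here is a tiny pure-NumPy sketch, not BigQuery code, that recovers the autoregressive coefficient of a simulated AR(1) series by least squares and makes a one-step-ahead point forecast:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate an AR(1) series: y_t = phi * y_{t-1} + noise
phi_true = 0.8
y = np.zeros(500)
for t in range(1, len(y)):
    y[t] = phi_true * y[t - 1] + rng.standard_normal()

# Least-squares estimate of phi from the lagged pairs (y_{t-1}, y_t)
phi_hat = np.sum(y[:-1] * y[1:]) / np.sum(y[:-1] ** 2)

# One-step-ahead point forecast
next_value = phi_hat * y[-1]
print(phi_hat)
```

Real ARIMA models add differencing and moving-average terms (and BigQuery's implementation also handles seasonality), but the core idea is the same: fit coefficients that relate each value to its recent past, then roll the fitted recurrence forward to forecast.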
7,510
Given the following text description, write Python code to implement the functionality described below step by step Description: Machine Learning Using Python by ARCC What is Machine Learning? Machine Learning is the subfield of computer science, which is defined by Arthur Samuel as "Giving computers the ability to learn without being explicitly programmed". Generally speaking, it can be defined as the ability of machines to learn to perform a task efficiently based on experience. Two major types of Machine Learning problems The two major types of Machine Learning problems are Step1: Using the best fitting straight line, we can predict the next expected value. Hence Regression is a Machine Learning technique which helps to predict the output in a model that takes continuous values. Linear Classifier A classifier, for now, can be thought of as a program that uses an object's characteristics to identify which class it belongs to. For example, classifying a fruit as an orange or an apple. The following program is a simple Supervised Learning classifier that makes use of a decision tree classifier (An example of a decision tree is shown below). Step2: Support Vector Machines “Support Vector Machine” (SVM) is a supervised machine learning algorithm which can be used for both classification and regression challenges. However, it is mostly used in classification problems. An example of SVM would be using a linear hyperplane to separate two clusters of data points. The following code implements the same. Step3: Basic workflow when using Machine Learning algorithms Neural Networks Neural Networks can be thought of as one large composite function, which is comprised of other functions. Each layer is a function that is fed an input which is the result of the previous function's output. For example Step4: Optional
Python Code: #NumPy is the fundamental package for scientific computing with Python import numpy as np # Matplotlib is a Python 2D plotting library import matplotlib.pyplot as plt #Number of data points n=50 x=np.random.randn(n) y=np.random.randn(n) #Create a figure and a set of subplots fig, ax = plt.subplots() #Find best fitting straight line #This will return the coefficients of the best fitting straight line #i.e. m and c in the slope intercept form of a line-> y=m*x+c fit = np.polyfit(x, y, 1) #Plot the straight line ax.plot(x, fit[0] * x + fit[1], color='black') #scatter plot the data set ax.scatter(x, y) plt.ylabel('y axis') plt.xlabel('x axis') plt.show() #predict output for an input say x=5 x_input=5 predicted_value= fit[0] * x_input + fit[1] print(predicted_value) Explanation: Machine Learning Using Python by ARCC What is Machine Learning? Machine Learning is the subfield of computer science, which is defined by Arthur Samuel as "Giving computers the ability to learn without being explicitly programmed". Generally speaking, it can be defined as the ability of machines to learn to perform a task efficiently based on experience. Two major types of Machine Learning problems The two major types of Machine Learning problems are: 1. Supervised Learning : In this type of Machine Learning problem, the learning algorithms are provided with a data set that has a known label or result, such as classifying a bunch of emails as spam/not-spam emails. 2. Unsupervised Learning : In this type of Machine Learning problem, the learning algorithms are provided with a data set that has no known label or result. The algorithm without any supervision is expected to find some structure in the data by itself. For example search engines. In order to limit the scope of this boot camp, as well as save us some time, we will focus on Supervised Learning today. 
Machine Learning's "Hello World" programs Supervised Learning In this section we focus on three Supervised Learning algorithms, namely Linear Regression, Linear Classifier and Support Vector Machines. Linear Regression In order to explain what Linear Regression is, let's write a program that performs Linear Regression. Our goal is to find the best fitting straight line through a data set comprising 50 random points. Equation of a line in slope intercept form is: $y=m*x+c$ End of explanation #Import the decision tree classifier class from the scikit-learn machine learning library from sklearn import tree #List of features #Say we have 9 inputs each with two features i.e. [feature one=1:9, feature two=0 or 1] features=[[1,1],[8,0],[5,1],[2,1],[6,0],[9,1],[3,1],[4,1],[7,0]] #The 9 inputs are classified explicitly into three classes (0,1 and 2) by us # For example input 1,1 belongs to class 0 # input 2,1 belongs to class 1 # input 3,1 belongs to class 2 labels=[0,0,0,1,1,1,2,2,2] #Features are the inputs to the classifier and labels are the outputs #Create decision tree classifier clf=tree.DecisionTreeClassifier() #Training algorithm, included in the object, is executed clf=clf.fit(features,labels) #Fit is a synonym for "find patterns in data" #Predict which class an input belongs to #for example [2,1] print (clf.predict([[2,1]])) Explanation: Using the best fitting straight line, we can predict the next expected value. Hence Regression is a Machine Learning technique which helps to predict the output in a model that takes continuous values. Linear Classifier A classifier, for now, can be thought of as a program that uses an object's characteristics to identify which class it belongs to. For example, classifying a fruit as an orange or an apple. The following program is a simple Supervised Learning classifier that makes use of a decision tree classifier (An example of a decision tree is shown below).
End of explanation #import basic libraries for plotting and scientific computing import numpy as np import matplotlib.pyplot as plt from matplotlib import style #emulates the aesthetics of ggplot in R style.use("ggplot") #import class svm from scikit-learn from sklearn import svm #input data X = [1, 5, 1.5, 8, 1, 9] Y = [2, 8, 1.8, 8, 0.6, 11] #classes assigned to input data y = [0,1,0,1,0,1] #plot input data #plt.scatter(X,Y) #plt.show() #Create the Linear Support Vector Classification clf = svm.SVC(kernel='linear', C = 1.0) #input data in 2-D points=[[1,2],[5,8],[1.5,1.8],[8,8],[1,0.6],[9,11]]#,[2,2],[1,4] #Fit the data with Linear Support Vector Classification clf.fit(points,y) #Fit is a synonym for "find patterns in data" #Predict the class for the following two points depending on which side of the hyperplane they lie (predict expects a 2-D array, one row per point) print(clf.predict([[0.58,0.76]])) print(clf.predict([[10.58,10.76]])) #find coefficients of the linear svm w = clf.coef_[0] #print(w) #find slope of the line we wish to draw between the two classes #a = change in y / change in x a = -w[0] / w[1] #Draw the line #x points for a line #linspace -> Return evenly spaced numbers over a specified interval. xx = np.linspace(0,12) #equation of our SVM hyperplane yy = a * xx - clf.intercept_[0] / w[1] #plot the hyperplane h0 = plt.plot(xx, yy, 'k-', label="svm") #plot the data points as a scatter plot plt.scatter(X,Y) plt.legend() plt.show() Explanation: Support Vector Machines “Support Vector Machine” (SVM) is a supervised machine learning algorithm which can be used for both classification and regression challenges. However, it is mostly used in classification problems. An example of SVM would be using a linear hyperplane to separate two clusters of data points. The following code implements the same.
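The slope used for plotting in the SVM code above (a = -w[0]/w[1]) comes from rewriting the hyperplane equation w·x + b = 0 as y = -(w[0]/w[1])·x - b/w[1]. The classification itself is just the sign of w·x + b, which the following NumPy sketch illustrates with a made-up w and b that separate the two clusters (these are illustrative values, not the coefficients SVC actually learns):

```python
import numpy as np

# Hypothetical hyperplane parameters (illustrative, not fitted by SVC)
w = np.array([0.2, 0.2])
b = -2.0   # hyperplane: 0.2*x + 0.2*y - 2 = 0, i.e. the line y = 10 - x

points = np.array([[0.58, 0.76],     # near-origin cluster -> below the line
                   [10.58, 10.76]])  # far cluster -> above the line

scores = points @ w + b          # signed distance (up to scaling) from the line
labels = (scores > 0).astype(int)
print(labels)
```

Each point's predicted class is determined entirely by which side of the hyperplane it falls on, which is why the plotted line visually separates the two clusters.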
End of explanation #import numpy library import numpy as np # Sigmoid function which maps any value to between 0 and 1 # This is the function which our layers will comprise # It is used to convert numbers to probabilities def sigmoid(x): return 1/(1+np.exp(-x)) # input dataset # four input samples, each with three features x = np.array([[0.45,0,0.21],[0.5,0.32,0.21],[0.6,0.5,0.19],[0.7,0.9,0.19]]) # output dataset corresponding to our set of inputs # .T takes the transpose of the 1x4 matrix which will give us a 4x1 matrix y = np.array([[0.5,0.4,0.6,0.9]]).T #Makes our model deterministic so we can judge the outputs better #Numbers will still be randomly distributed, but in exactly the same way each time the model is trained np.random.seed(1) # initialize weights randomly with mean 0, weights lie within -1 to 1 # dimensions are 3x1 because we have three inputs and one output weights = 2*np.random.random((3,1))-1 #Network training code #Train our neural network (epoch avoids shadowing the built-in iter) for epoch in range(1000): #get input input_var = x #This is our prediction step #first predict given input, then study how it performs and adjust to get better #This line first multiplies the input by the weights and then passes it to the sigmoid function output = sigmoid(np.dot(input_var,weights)) #now we have guessed an output based on the provided input #subtract from the actual answer to see how much we missed error = y - output #based on the error, update our weights weights += np.dot(input_var.T,error) #The best-fit weights found by our neural net are as follows: print("The weights that the neural network found were:") print(weights) #Predict with new inputs i.e.
dot with weights and then send to our prediction function predicted_output = sigmoid(np.dot(np.array([0.3,0.9,0.1]),weights)) print("Predicted Output:") print(predicted_output) Explanation: Basic workflow when using Machine Learning algorithms Neural Networks Neural Networks can be thought of as one large composite function, which is composed of other functions. Each layer is a function that is fed an input which is the result of the previous function's output. For example: The example above was rather rudimentary. Let us look at a case where we have more than one input, fed to a prediction function that maps them to an output. This can be depicted by the following graph. Building a Neural Network The function carried out by our layer is termed the sigmoid. It takes the form: $1/(1+e^{-x})$ Steps to follow to create our neural network: <br> 1) Get a set of inputs <br> 2) dot with a set of weights i.e. weight1*input1 + weight2*input2 + weight3*input3 <br> 3) send the dot product to our prediction function i.e. sigmoid <br> 4) check how much we missed i.e.
calculate error <br> 5) adjust the weights accordingly <br> 6) Do this for all inputs, repeating for about 1000 iterations End of explanation import cv2 import numpy as np import matplotlib.pyplot as plt # Feature set containing (x,y) values of 25 known/training data points trainData = np.random.randint(0,100,(25,2)).astype(np.float32) # Label each one either Red or Blue with numbers 0 and 1 responses = np.random.randint(0,2,(25,1)).astype(np.float32) # Take Red families and plot them red = trainData[responses.ravel()==0] plt.scatter(red[:,0],red[:,1],80,'r','^') # Take Blue families and plot them blue = trainData[responses.ravel()==1] plt.scatter(blue[:,0],blue[:,1],80,'b','s') #New unknown data point newcomer = np.random.randint(0,100,(1,2)).astype(np.float32) #Make this unknown data point green plt.scatter(newcomer[:,0],newcomer[:,1],80,'g','o') #Carry out the K nearest neighbour classification knn = cv2.ml.KNearest_create() #Train the algorithm #the second argument is the sample layout: cv2.ml.ROW_SAMPLE (== 0) means each row is one sample knn.train(trainData, cv2.ml.ROW_SAMPLE, responses) #Find the 3 nearest neighbours of the newcomer (they may come from either class) ret, results, neighbours, dist = knn.findNearest(newcomer, 3) print("result: ", results, "\n") print("neighbours: ", neighbours, "\n") print("distance: ", dist) plt.show() Explanation: Optional : K-Nearest Neighbour End of explanation
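The OpenCV KNN demo above uses random data, so the majority vote changes on every run. As a cross-check, here is a hedged sketch of the same idea with scikit-learn's KNeighborsClassifier on a fixed toy set — the cluster coordinates are made up so the outcome is predictable:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Two small clusters standing in for the red (0) and blue (1) families
train = np.array([[1, 1], [2, 1], [1, 2], [8, 8], [9, 8], [8, 9]], dtype=float)
labels = np.array([0, 0, 0, 1, 1, 1])

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(train, labels)

# A newcomer near the second cluster should be voted into class 1
newcomer = np.array([[8.5, 8.5]])
print(knn.predict(newcomer))     # class label chosen by majority vote
print(knn.kneighbors(newcomer))  # distances and indices of the 3 nearest points
```

kneighbors plays the same role as OpenCV's findNearest: it returns the neighbours that cast the votes, which makes the classification easy to inspect.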
Given the following text description, write Python code to implement the functionality described below step by step Description: Exercises Electric Machinery Fundamentals Chapter 1 Problem 1-18 Step1: Description Assume that the voltage applied to a load is $\vec{V} = 208\,V\angle -30^\circ$ and the current flowing through the load is $\vec{I} = 2\,A\angle 20^\circ$. (a) Calculate the complex power $S$ consumed by this load. (b) Is this load inductive or capacitive? (c) Calculate the power factor of this load. (d) Calculate the reactive power consumed or supplied by this load. Does the load consume reactive power from the source or supply it to the source? Step2: SOLUTION (a) The complex power $S$ consumed by this load is Step3: (b) This is a capacitive load. (c) The power factor of this load is leading and Step4: (d) This load supplies reactive power to the source. The reactive power of the load is
Python Code: %pylab notebook Explanation: Exercises Electric Machinery Fundamentals Chapter 1 Problem 1-18 End of explanation V = 208.0 * exp(-1j*30/180*pi) # [V] I = 2.0 * exp( 1j*20/180*pi) # [A] Explanation: Description Assume that the voltage applied to a load is $\vec{V} = 208\,V\angle -30^\circ$ and the current flowing through the load is $\vec{I} = 2\,A\angle 20^\circ$. (a) Calculate the complex power $S$ consumed by this load. (b) Is this load inductive or capacitive? (c) Calculate the power factor of this load. (d) Calculate the reactive power consumed or supplied by this load. Does the load consume reactive power from the source or supply it to the source? End of explanation S = V * conjugate(I) # The complex conjugate of a complex number is # obtained by changing the sign of its imaginary part. S_angle = angle(S) # four-quadrant angle of S; safer than arctan(S.imag/S.real) print('S = {:.1f} VA ∠{:.1f}°'.format(abs(S), S_angle/pi*180)) print('====================') Explanation: SOLUTION (a) The complex power $S$ consumed by this load is: $$ S = V\cdot I^* $$ End of explanation PF = cos(S_angle) print('PF = {:.3f} leading'.format(PF)) print('==================') Explanation: (b) This is a capacitive load. (c) The power factor of this load is leading and: End of explanation Q = abs(S)*sin(S_angle) print('Q = {:.1f} var'.format(Q)) print('==============') Explanation: (d) This load supplies reactive power to the source. The reactive power of the load is: $$ Q = VI\sin\theta = S\sin\theta $$ End of explanation
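The relationships used in parts (a)-(d) can be sanity-checked with nothing but the standard-library cmath module: $|S|^2 = P^2 + Q^2$, $P = |S|\cos\theta$, $Q = |S|\sin\theta$. A sketch with the same numbers as above:

```python
import cmath
import math

V = 208.0 * cmath.exp(-1j * math.radians(30))  # volts, angle -30 deg
I = 2.0 * cmath.exp(1j * math.radians(20))     # amps, angle +20 deg

S = V * I.conjugate()      # complex power in VA
P, Q = S.real, S.imag      # real power (W) and reactive power (var)
theta = cmath.phase(S)     # power angle in radians (here -50 deg)

# Power triangle: apparent power squared equals P^2 + Q^2
assert math.isclose(abs(S) ** 2, P ** 2 + Q ** 2)

print(abs(S), math.degrees(theta), P, Q)
```

A negative Q confirms part (d): the load is capacitive and supplies reactive power to the source.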
Given the following text description, write Python code to implement the functionality described below step by step Description: Source alignment and coordinate frames The aim of this tutorial is to show how to visually assess that the data are well aligned in space for computing the forward solution, and understand the different coordinate frames involved in this process. Step1: Understanding coordinate frames For M/EEG source imaging, there are three coordinate frames that we must bring into alignment using two 3D transformation matrices <trans_matrices>_ that define how to rotate and translate points in one coordinate frame to their equivalent locations in another. Step2: Coordinate frame definitions ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. raw Step3: It is quite clear that the MRI surfaces (head, brain) are not well aligned to the head digitization points (dots). A good example Here is the same plot, this time with the trans properly defined (using a precomputed matrix). Step4: Defining the head↔MRI trans using the GUI You can try creating the head↔MRI transform yourself using Step5: Alignment without MRI The surface alignments above are possible if you have the surfaces available from Freesurfer.
Python Code: import os.path as op import numpy as np from mayavi import mlab import mne from mne.datasets import sample print(__doc__) data_path = sample.data_path() subjects_dir = op.join(data_path, 'subjects') raw_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif') trans_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw-trans.fif') raw = mne.io.read_raw_fif(raw_fname) trans = mne.read_trans(trans_fname) src = mne.read_source_spaces(op.join(subjects_dir, 'sample', 'bem', 'sample-oct-6-src.fif')) Explanation: Source alignment and coordinate frames The aim of this tutorial is to show how to visually assess that the data are well aligned in space for computing the forward solution, and understand the different coordinate frames involved in this process. :depth: 2 Let's start out by loading some data. End of explanation mne.viz.plot_alignment(raw.info, trans=trans, subject='sample', subjects_dir=subjects_dir, surfaces='head-dense', show_axes=True, dig=True, eeg=[], meg='sensors', coord_frame='meg') mlab.view(45, 90, distance=0.6, focalpoint=(0., 0., 0.)) print('Distance from head origin to MEG origin: %0.1f mm' % (1000 * np.linalg.norm(raw.info['dev_head_t']['trans'][:3, 3]))) print('Distance from head origin to MRI origin: %0.1f mm' % (1000 * np.linalg.norm(trans['trans'][:3, 3]))) Explanation: Understanding coordinate frames For M/EEG source imaging, there are three coordinate frames that we must bring into alignment using two 3D transformation matrices &lt;trans_matrices&gt;_ that define how to rotate and translate points in one coordinate frame to their equivalent locations in another. :func:mne.viz.plot_alignment is a very useful function for inspecting these transformations, and the resulting alignment of EEG sensors, MEG sensors, brain sources, and conductor models. If the subjects_dir and subject parameters are provided, the function automatically looks for the Freesurfer MRI surfaces to show from the subject's folder. 
We can use the show_axes argument to see the various coordinate frames given our transformation matrices. These are shown by axis arrows for each coordinate frame: shortest arrow is (R)ight/X medium is forward/(A)nterior/Y longest is up/(S)uperior/Z i.e., a RAS coordinate system in each case. We can also set the coord_frame argument to choose which coordinate frame the camera should initially be aligned with. Let's take a look: End of explanation mne.viz.plot_alignment(raw.info, trans=None, subject='sample', src=src, subjects_dir=subjects_dir, dig=True, surfaces=['head-dense', 'white'], coord_frame='meg') Explanation: Coordinate frame definitions ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. raw:: html <style> .pink {color:DarkSalmon; font-weight:bold} .blue {color:DeepSkyBlue; font-weight:bold} .gray {color:Gray; font-weight:bold} .magenta {color:Magenta; font-weight:bold} .purple {color:Indigo; font-weight:bold} .green {color:LimeGreen; font-weight:bold} .red {color:Red; font-weight:bold} </style> .. role:: pink .. role:: blue .. role:: gray .. role:: magenta .. role:: purple .. role:: green .. role:: red Neuromag head coordinate frame ("head", :pink:pink axes) Defined by the intersection of 1) the line between the LPA (:red:red sphere) and RPA (:purple:purple sphere), and 2) the line perpendicular to this LPA-RPA line one that goes through the Nasion (:green:green sphere). The axes are oriented as X origin→RPA, Y origin→Nasion, Z origin→upward (orthogonal to X and Y). .. note:: This gets defined during the head digitization stage during acquisition, often by use of a Polhemus or other digitizer. MEG device coordinate frame ("meg", :blue:blue axes) This is defined by the MEG manufacturers. From the Elekta user manual: The origin of the device coordinate system is located at the center of the posterior spherical section of the helmet with axis going from left to right and axis pointing front. The axis is, again normal to the plane with positive direction up. .. 
note:: The device is coregistered with the head coordinate frame during acquisition via emission of sinusoidal currents in head position indicator (HPI) coils (:magenta:magenta spheres) at the beginning of the recording. This is stored in raw.info['dev_head_t']. MRI coordinate frame ("mri", :gray:gray axes) Defined by Freesurfer, the MRI (surface RAS) origin is at the center of a 256×256×256 1mm anisotropic volume (may not be in the center of the head). .. note:: This is aligned to the head coordinate frame that we typically refer to in MNE as trans. A bad example Let's try using trans=None, which (incorrectly!) equates the MRI and head coordinate frames. End of explanation mne.viz.plot_alignment(raw.info, trans=trans, subject='sample', src=src, subjects_dir=subjects_dir, dig=True, surfaces=['head-dense', 'white'], coord_frame='meg') Explanation: It is quite clear that the MRI surfaces (head, brain) are not well aligned to the head digitization points (dots). A good example Here is the same plot, this time with the trans properly defined (using a precomputed matrix). End of explanation # mne.gui.coregistration(subject='sample', subjects_dir=subjects_dir) Explanation: Defining the head↔MRI trans using the GUI You can try creating the head↔MRI transform yourself using :func:mne.gui.coregistration. First you must load the digitization data from the raw file (Head Shape Source). The MRI data is already loaded if you provide the subject and subjects_dir. Toggle Always Show Head Points to see the digitization points. To set the landmarks, toggle Edit radio button in MRI Fiducials. Set the landmarks by clicking the radio button (LPA, Nasion, RPA) and then clicking the corresponding point in the image. After doing this for all the landmarks, toggle Lock radio button. You can omit outlier points, so that they don't interfere with the finetuning. .. note:: You can save the fiducials to a file and pass mri_fiducials=True to plot them in :func:mne.viz.plot_alignment. 
The fiducials are saved to the subject's bem folder by default. * Click Fit Head Shape. This will align the digitization points to the head surface. Sometimes the fitting algorithm doesn't find the correct alignment immediately. You can try first fitting using LPA/RPA or fiducials and then align according to the digitization. You can also finetune manually with the controls on the right side of the panel. * Click Save As... (lower right corner of the panel), set the filename and read it with :func:mne.read_trans. For more information, see step by step instructions in these slides &lt;http://www.slideshare.net/mne-python/mnepython-coregistration&gt;_. Uncomment the following line to align the data yourself. End of explanation sphere = mne.make_sphere_model(info=raw.info, r0='auto', head_radius='auto') src = mne.setup_volume_source_space(sphere=sphere, pos=10.) mne.viz.plot_alignment( raw.info, eeg='projected', bem=sphere, src=src, dig=True, surfaces=['brain', 'outer_skin'], coord_frame='meg', show_axes=True) Explanation: Alignment without MRI The surface alignments above are possible if you have the surfaces available from Freesurfer. :func:mne.viz.plot_alignment automatically searches for the correct surfaces from the provided subjects_dir. Another option is to use a spherical conductor model. It is passed through bem parameter. End of explanation
7,513
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Ocean MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. 
Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Model Family Is Required Step7: 1.4. Basic Approximations Is Required Step8: 1.5. Prognostic Variables Is Required Step9: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required Step10: 2.2. Eos Functional Temp Is Required Step11: 2.3. Eos Functional Salt Is Required Step12: 2.4. Eos Functional Depth Is Required Step13: 2.5. Ocean Freezing Point Is Required Step14: 2.6. Ocean Specific Heat Is Required Step15: 2.7. Ocean Reference Density Is Required Step16: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required Step17: 3.2. Type Is Required Step18: 3.3. Ocean Smoothing Is Required Step19: 3.4. Source Is Required Step20: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required Step21: 4.2. River Mouth Is Required Step22: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required Step23: 5.2. Code Version Is Required Step24: 5.3. Code Languages Is Required Step25: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required Step26: 6.2. Canonical Horizontal Resolution Is Required Step27: 6.3. Range Horizontal Resolution Is Required Step28: 6.4. Number Of Horizontal Gridpoints Is Required Step29: 6.5. Number Of Vertical Levels Is Required Step30: 6.6. Is Adaptive Grid Is Required Step31: 6.7. Thickness Level 1 Is Required Step32: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required Step33: 7.2. 
Global Mean Metrics Used Is Required Step34: 7.3. Regional Metrics Used Is Required Step35: 7.4. Trend Metrics Used Is Required Step36: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required Step37: 8.2. Scheme Is Required Step38: 8.3. Consistency Properties Is Required Step39: 8.4. Corrected Conserved Prognostic Variables Is Required Step40: 8.5. Was Flux Correction Used Is Required Step41: 9. Grid Ocean grid 9.1. Overview Is Required Step42: 10. Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required Step43: 10.2. Partial Steps Is Required Step44: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required Step45: 11.2. Staggering Is Required Step46: 11.3. Scheme Is Required Step47: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required Step48: 12.2. Diurnal Cycle Is Required Step49: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. Scheme Is Required Step50: 13.2. Time Step Is Required Step51: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required Step52: 14.2. Scheme Is Required Step53: 14.3. Time Step Is Required Step54: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required Step55: 15.2. Time Step Is Required Step56: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required Step57: 17. Advection Ocean advection 17.1. Overview Is Required Step58: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required Step59: 18.2. Scheme Name Is Required Step60: 18.3. ALE Is Required Step61: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required Step62: 19.2. 
Flux Limiter Is Required Step63: 19.3. Effective Order Is Required Step64: 19.4. Name Is Required Step65: 19.5. Passive Tracers Is Required Step66: 19.6. Passive Tracers Advection Is Required Step67: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required Step68: 20.2. Flux Limiter Is Required Step69: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required Step70: 21.2. Scheme Is Required Step71: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required Step72: 22.2. Order Is Required Step73: 22.3. Discretisation Is Required Step74: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required Step75: 23.2. Constant Coefficient Is Required Step76: 23.3. Variable Coefficient Is Required Step77: 23.4. Coeff Background Is Required Step78: 23.5. Coeff Backscatter Is Required Step79: 24. Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required Step80: 24.2. Submesoscale Mixing Is Required Step81: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required Step82: 25.2. Order Is Required Step83: 25.3. Discretisation Is Required Step84: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required Step85: 26.2. Constant Coefficient Is Required Step86: 26.3. Variable Coefficient Is Required Step87: 26.4. Coeff Background Is Required Step88: 26.5. Coeff Backscatter Is Required Step89: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required Step90: 27.2. 
Constant Val Is Required Step91: 27.3. Flux Type Is Required Step92: 27.4. Added Diffusivity Is Required Step93: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required Step94: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required Step95: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required Step96: 30.2. Closure Order Is Required Step97: 30.3. Constant Is Required Step98: 30.4. Background Is Required Step99: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required Step100: 31.2. Closure Order Is Required Step101: 31.3. Constant Is Required Step102: 31.4. Background Is Required Step103: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required Step104: 32.2. Tide Induced Mixing Is Required Step105: 32.3. Double Diffusion Is Required Step106: 32.4. Shear Mixing Is Required Step107: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required Step108: 33.2. Constant Is Required Step109: 33.3. Profile Is Required Step110: 33.4. Background Is Required Step111: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required Step112: 34.2. Constant Is Required Step113: 34.3. Profile Is Required Step114: 34.4. Background Is Required Step115: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required Step116: 35.2. Scheme Is Required Step117: 35.3. Embeded Seaice Is Required Step118: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. 
Overview Is Required Step119: 36.2. Type Of Bbl Is Required Step120: 36.3. Lateral Mixing Coef Is Required Step121: 36.4. Sill Overflow Is Required Step122: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required Step123: 37.2. Surface Pressure Is Required Step124: 37.3. Momentum Flux Correction Is Required Step125: 37.4. Tracers Flux Correction Is Required Step126: 37.5. Wave Effects Is Required Step127: 37.6. River Runoff Budget Is Required Step128: 37.7. Geothermal Heating Is Required Step129: 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required Step130: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required Step131: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required Step132: 40.2. Ocean Colour Is Required Step133: 40.3. Extinction Depth Is Required Step134: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required Step135: 41.2. From Sea Ice Is Required Step136: 41.3. Forced Mode Restoring Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'bnu', 'sandbox-1', 'ocean') Explanation: ES-DOC CMIP6 Model Properties - Ocean MIP Era: CMIP6 Institute: BNU Source ID: SANDBOX-1 Topic: Ocean Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. Properties: 133 (101 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:41 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. 
Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean model code (NEMO 3.6, MOM 5.0,...) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OGCM" # "slab ocean" # "mixed layer ocean" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Primitive equations" # "Non-hydrostatic" # "Boussinesq" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the ocean. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # "Salinity" # "U-velocity" # "V-velocity" # "W-velocity" # "SSH" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the ocean component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Wright, 1997" # "Mc Dougall et al." # "Jackett et al. 2006" # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. 
Eos Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # TODO - please enter value(s) Explanation: 2.2. Eos Functional Temp Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Temperature used in EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Practical salinity Sp" # "Absolute salinity Sa" # TODO - please enter value(s) Explanation: 2.3. Eos Functional Salt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Salinity used in EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pressure (dbars)" # "Depth (meters)" # TODO - please enter value(s) Explanation: 2.4. Eos Functional Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Depth or pressure used in EOS for sea water ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2.5. 
Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.6. Ocean Specific Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specific heat in ocean (cpocean) in J/(kg K) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.7. Ocean Reference Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boussinesq reference density (rhozero) in kg / m3 End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Present day" # "21000 years BP" # "6000 years BP" # "LGM" # "Pliocene" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date of bathymetry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the bathymetry fixed in time in the ocean ? 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Ocean Smoothing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any smoothing or hand editing of bathymetry in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.source') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.4. Source Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe source of bathymetry in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Nonoceanic Waters Non-oceanic waters treatment in ocean 4.1. Isolated Seas Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how isolated seas treatment is performed End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. River Mouth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how river mouth mixing or estuaries specific treatment is performed End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1.
Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.4. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.5. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.6. Is Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Default is False. Set true if grid resolution changes during execution. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.7. 
Thickness Level 1 Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Thickness of first surface ocean level (in meters) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and comment on possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state (e.g. THC, AABW, regional means, etc.) used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Brief description of conservation methodology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Enstrophy" # "Salt" # "Volume of ocean" # "Momentum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in the ocean by the numerical schemes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Consistency Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Corrected Conserved Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Set of variables which are conserved by more than the numerical scheme alone. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.5. Was Flux Correction Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does conservation involve flux correction ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Grid Ocean grid 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of grid in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Z-coordinate" # "Z*-coordinate" # "S-coordinate" # "Isopycnic - sigma 0" # "Isopycnic - sigma 2" # "Isopycnic - sigma 4" # "Isopycnic - other" # "Hybrid / Z+S" # "Hybrid / Z+isopycnic" # "Hybrid / other" # "Pressure referenced (P)" # "P*" # "Z**" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical coordinates in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 10.2. Partial Steps Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Using partial steps with Z or Z* vertical coordinate in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Lat-lon" # "Rotated north pole" # "Two north poles (ORCA-style)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa E-grid" # "N/a" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Staggering Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal grid staggering type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite difference" # "Finite volumes" # "Finite elements" # "Unstructured grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of time stepping in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Via coupling" # "Specific treatment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.2. Diurnal Cycle Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Diurnal cycle type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time stepping scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Preconditioned conjugate gradient" # "Sub cyling" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.3. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Baroclinic time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "split explicit" # "implicit" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time splitting method End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.2. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Barotropic time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of vertical time stepping in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17. Advection Ocean advection 17.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of advection in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flux form" # "Vector form" # TODO - please enter value(s) Explanation: 18. Advection --&gt; Momentum Properties of lateral momentum advection scheme in ocean 18.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of lateral momentum advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2.
Scheme Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean momentum advection scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.ALE') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 18.3. ALE Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Using ALE for vertical advection ? (if vertical coordinates are sigma) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral tracer advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 19.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for lateral tracer advection scheme in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19.3. Effective Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Effective order of limited lateral tracer advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.4. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ideal age" # "CFC 11" # "CFC 12" # "SF6" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19.5. Passive Tracers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Passive tracers advected End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.6. Passive Tracers Advection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is advection of passive tracers different than active ? if so, describe. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 20.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for vertical tracer advection scheme in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lateral physics in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Eddy active" # "Eddy admitting" # TODO - please enter value(s) Explanation: 21.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transient eddy representation in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics momentum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics momentum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics momentum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean 23.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics momentum eddy viscosity coeff type in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 23.2.
Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.3. Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 24. Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1.
Mesoscale Closure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a mesoscale closure in the lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 24.2. Submesoscale Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a submesoscale mixing parameterisation (e.g. Fox-Kemper) in the lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics tracers eddy diffusity coeff type in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.3. 
Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 26.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "GM" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV in lateral physics tracers in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 27.2. 
Constant Val Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If EIV scheme for tracers is constant, specify coefficient value (M2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.3. Flux Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV flux (advective or skew) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.4. Added Diffusivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV added diffusivity (constant, flow dependent or none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vertical physics in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there Langmuir cells mixing in upper ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for tracers in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of tracers, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.4. 
Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of tracers coefficient, (schema and value in m2/s - may be none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for momentum in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 31.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 31.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of momentum, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 31.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of momentum coefficient, (schema and value in m2/s - may be none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Non-penetrative convective adjustment" # "Enhanced vertical diffusion" # "Included in turbulence closure" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical convection in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.2. Tide Induced Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how tide induced mixing is modelled (barotropic, baroclinic, none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.3. Double Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there double diffusion End of explanation # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.4. Shear Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there interior shear mixing End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for tracers in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 33.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of tracers, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of tracers coefficient, (schema and value in m2/s - may be none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for momentum in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 34.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of momentum, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of momentum coefficient, (schema and value in m2/s - may be none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of free surface in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear implicit" # "Linear filtered" # "Linear semi-explicit" # "Non-linear implicit" # "Non-linear filtered" # "Non-linear semi-explicit" # "Fully explicit" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 35.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Free surface scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 35.3. Embeded Seaice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the sea-ice embedded in the ocean model (instead of levitating) ? End of explanation # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of bottom boundary layer in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diffusive" # "Acvective" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 36.2. Type Of Bbl Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of bottom boundary layer in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 36.3. Lateral Mixing Coef Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36.4. Sill Overflow Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any specific treatment of sill overflows End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37. Boundary Forcing Ocean boundary forcing 37.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of boundary forcing in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.2. Surface Pressure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.3. Momentum Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.4. Tracers Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.5. Wave Effects Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how wave effects are modelled at ocean surface. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.6. River Runoff Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river runoff from land surface is routed to ocean and any global adjustment done. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.7. Geothermal Heating Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how geothermal heating is present at ocean bottom. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Non-linear" # "Non-linear (drag function of speed of tides)" # "Constant drag coefficient" # "None" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum bottom friction in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Free-slip" # "No-slip" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. 
Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum lateral friction in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "1 extinction depth" # "2 extinction depth" # "3 extinction depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of sunlight penetration scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 40.2. Ocean Colour Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the ocean sunlight penetration scheme ocean colour dependent ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 40.3. Extinction Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe and list extinctions depths for sunlight penetration scheme (if applicable). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41. 
Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from atmos in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Real salt flux" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41.2. From Sea Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from sea-ice in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 41.3. Forced Mode Restoring Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface salinity restoring in forced mode (OMIP) End of explanation
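Many of the cells above accept only one of the listed "Valid Choices". A minimal sketch of a guard one might run before calling DOC.set_value — the helper checked_value is hypothetical and not part of the pyesdoc API; the choices below are copied from the 41.1 From Atmopshere cell:

```python
# Hypothetical helper (not part of pyesdoc): validate an answer against the
# "Valid Choices" listed in a notebook cell before handing it to DOC.set_value.
VALID_CHOICES = [
    "Freshwater flux",
    "Virtual salt flux",
    "Other: [Please specify]",
]

def checked_value(value, choices):
    """Return value unchanged if it is a listed choice (or an 'Other: ...' entry)."""
    if value in choices or value.startswith("Other:"):
        return value
    raise ValueError(f"{value!r} is not one of {choices}")

# Usage would then be: DOC.set_value(checked_value("Freshwater flux", VALID_CHOICES))
```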
Given the following text description, write Python code to implement the functionality described below step by step Description: An IPython notebook that explores the relationship (correlation) between betweenness centrality and community membership of a number of mailing lists in a given time period. Step1: The following sets start month and end month, both inclusive. Step2: new_dict is a dictionary with keys as users' names, and values of their community membership (can have different interpretations). Here the community membership for a user is defined as the sum of log(Ni + 1), where Ni is the number of emails a user sent to mailing list i.
Python Code: %matplotlib inline from bigbang.archive import Archive import bigbang.parse as parse import bigbang.analysis.graph as graph import bigbang.ingress.mailman as mailman import bigbang.analysis.process as process import networkx as nx import matplotlib.pyplot as plt import pandas as pd from pprint import pprint as pp import pytz import numpy as np import math from itertools import repeat urls = ["https://mm.icann.org/pipermail/cc-humanrights/", "http://mm.icann.org/pipermail/future-challenges/"] archives = [Archive(url, archive_dir="../archives") for url in urls] Explanation: An IPython notebook that explores the relationship (correlation) between betweenness centrality and community membership of a number of mailing lists in a given time period. End of explanation min_date = [] max_date = [] for arch in archives: if min_date == [] and max_date == []: max_date = max(arch.data['Date']) min_date = min(arch.data['Date']) else: if max(arch.data['Date']) > max_date: max_date = max(arch.data['Date']) if min(arch.data['Date']) < min_date: min_date = min(arch.data['Date']) max_date = [int(str(max_date)[:4]), int(str(max_date)[5:7])] min_date = [int(str(min_date)[:4]), int(str(min_date)[5:7])] date_from_whole = [2015,7]#min_date #set date_from_whole based on actual time frame in mailing lists date_to_whole = max_date #set date_to_whole based on actual time frame in mailing lists print(date_from_whole) print(date_to_whole) total_month = (date_to_whole[0] - date_from_whole[0])*12 + (date_to_whole[1]-date_from_whole[1]+1) date_from = [] date_to = [] temp_year = date_from_whole[0] temp_month = date_from_whole[1] for i in range(total_month): date_from.append(pd.datetime(temp_year,temp_month,1,tzinfo=pytz.utc)) if temp_month == 12: temp_year += 1 temp_month = 0 date_to.append(pd.datetime(temp_year,temp_month+1,1,tzinfo=pytz.utc)) temp_month += 1 def filter_by_date(df,d_from,d_to): return df[(df['Date'] > d_from) & (df['Date'] < d_to)] IG = [] for k in
range(total_month): dfs = [filter_by_date(arx.data, date_from[k], date_to[k]) for arx in archives] bdf = pd.concat(dfs) IG.append(graph.messages_to_interaction_graph(bdf)) #RG = graph.messages_to_reply_graph(messages) #IG = graph.messages_to_interaction_graph(bdf) print(len(bdf)) bc = [] for j in range(total_month): bc.append(pd.Series(nx.betweenness_centrality(IG[j]))) len(bc) Explanation: The following sets start month and end month, both inclusive. End of explanation new_dict = [{} for i in repeat(None, total_month)] new_dict1 = [{} for i in repeat(None, total_month)] for t in range(total_month): filtered_activity = [] for i in range(len(archives)): df = archives[i].data fdf = filter_by_date(df,date_from[t],date_to[t]) if len(fdf) >0: filtered_activity.append(Archive(fdf).get_activity().sum()) for k in range(len(filtered_activity)): for g in range(len(filtered_activity[k])): original_key = list(filtered_activity[k].keys())[g] new_key = (original_key[original_key.index("(") + 1:original_key.rindex(")")]) if new_key not in new_dict[t]: new_dict[t][new_key] = 0 new_dict1[t][new_key] = 0 new_dict[t][new_key] += math.log(filtered_activity[k].get_values()[g]+1) #can define community membership by changing the above line. #example, direct sum of emails would be new_dict1[t][new_key] += filtered_activity[k].get_values()[g] for i in range(len(new_dict1)): [x+1 for x in list(new_dict1[i].values())] [np.log(x) for x in list(new_dict1[i].values())] #check if there's name difference, return nothing if perfect. 
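The month-window construction used above can be condensed into a single standard-library function (a standalone sketch, independent of bigbang and pandas; the name month_windows is illustrative):

```python
from datetime import datetime, timezone

def month_windows(start, end):
    """Build the monthly (from, to) boundaries used to slice the archives.

    start and end are (year, month) pairs, both inclusive; each window runs
    from the first of the month to the first of the following month.
    """
    windows = []
    year, month = start
    while (year, month) <= (end[0], end[1]):
        nxt = (year + 1, 1) if month == 12 else (year, month + 1)
        windows.append((datetime(year, month, 1, tzinfo=timezone.utc),
                        datetime(nxt[0], nxt[1], 1, tzinfo=timezone.utc)))
        year, month = nxt
    return windows
```

Each returned pair can be fed directly to a date filter such as filter_by_date above.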
for i in range(total_month): set(new_dict[i].keys()).difference(bc[i].index.values) set(bc[i].index.values).difference(list(new_dict[i].keys())) set(new_dict1[i].keys()).difference(bc[i].index.values) set(bc[i].index.values).difference(list(new_dict1[i].keys())) #A list of corresponding betweenness centrality and community membership for all users, monthly comparison = [] comparison1 = [] for i in range(len(new_dict)): comparison.append(pd.DataFrame([new_dict[i], bc[i]])) comparison1.append(pd.DataFrame([new_dict1[i], bc[i]])) corr = [] corr1 = [] for i in range(len(new_dict)): corr.append(np.corrcoef(comparison[i].get_values()[0],comparison[i].get_values()[1])[0,1]) corr1.append(np.corrcoef(comparison1[i].get_values()[0],comparison1[i].get_values()[1])[0,1]) corr1 #Blue as sum of log, red as log of sum, respect to community membership x = list(range(1,total_month+1)) y = corr plt.plot(x, y, marker='o') z = corr1 plt.plot(x, z, marker='o', linestyle='--', color='r') Explanation: new_dict is a dictionary with keys as users' names, and values of their community membership(can have different interpretation) Here the community membership for a user is defined as sum of log(Ni + 1), with Ni corresponds to the number of emails a user sent to Mailing list i. End of explanation
7,515
Given the following text description, write Python code to implement the functionality described below step by step Description: In TBtrans and TranSiesta one is capable of performing real space transport calculations by using real space self-energies (see here). Currently the real space self-energy calculation has to be performed in sisl since it is not implemented in TranSiesta. A real space self-energy is a $\mathbf k$ averaged self-energy which can emulate any 2D or 3D electrode, i.e. for an STM junction a tip and a surface. In such a system the surface could be modelled using the real space self-energy to remove mirror effects of STM tips. This is important since the distance between periodic images disturbs the calculation due to long range potential effects. The basic principle for calculating the real space self-energy is the Brillouin zone integral Step1: Once the minimal graphene unit-cell (here orthogonal) is created we now turn to the calculation of the real space self-energy. The construction of this object is somewhat complicated and has a set of required input options Step2: Now we can create the real space self-energy. In TBtrans (and TranSiesta) the electrode atomic indices must be in consecutive order. This is a little troublesome since the natural order in a device would be an order according to $x$, $y$ or $z$. To create the correct order we extract the real space coupling matrix, which is where the real space self-energy would live; the self-energy is calculated using Step3: The above yields the electrode region which contains the self-energies. The full device region is nothing but the H_minimal tiled $10\times10$ times with an attached STM tip on top. Here we need to arrange the electrode atoms first, then the final device region. The real_space_parent method returns the Hamiltonian that obeys the unfolded size. In this case $10\times10$ times larger.
One should always use this method to get the correct device order of atoms since the order of tiling is determined by the semi_axes and k_axis arguments. Step4: Lastly, we need to add the STM tip. Here we simply add a gold atom and manually add the hoppings. Since this is tight-binding we have full control over the self-energy and potential land-scape. Therefore we don't need to extend the electrode region to screen off the tip region. In DFT systems, a properly screened region is required. Step5: Before we can run calculations we need to create the real space self-energy for the graphene flake in sisl. Since the algorithm is not implemented in TBtrans (nor TranSiesta) it needs to be done here. This is somewhat complicated since the files requires a specific order. For ease this tutorial implements it for you. Step6: Exercises Calculate transport, density of state and bond-currents. Please search the manual on how to edit the RUN.fdf according to the following
Python Code: graphene = sisl.geom.graphene(orthogonal=True) # Graphene tight-binding parameters on, nn = 0, -2.7 H_minimal = sisl.Hamiltonian(graphene) H_minimal.construct([[0.1, 1.44], [on, nn]]) Explanation: In TBtrans and TranSiesta one is capable of performing real space transport calculations by using real space self-energies (see here). Currently the real space self-energy calculation has to be performed in sisl since it is not implemented in TranSiesta. A real space self-energy is a $\mathbf k$ averaged self-energy which can emulate any 2D or 3D electrode. I.e. for an STM junction a tip and a surface. In such a system the surface could be modelled using the real space self-energy to remove mirror effects of STM tips. This is important since the distance between periodic images disturbs the calculation due to long range potential effects. The basic principle for calculating the real space self-energy is the Brillouin zone integral: \begin{equation} \mathbf G_{\mathcal R}(E) = \int_{\mathrm{BZ}}\mathbf G_\mathbf k \end{equation} In this example we will construct an STM tip probing a graphene flake. This example is rather complicated and is the reason why basically everything is already done for you. Please try and understand each step. We start by creating the graphene tight-binding model. End of explanation # object = H_minimal # semi_axes = 0, x-axis uses recursive self-energy calculation # k_axis = 1, y-axis uses a Brillouin zone integral # unfold = (10, 10, 1), the full real-space green function is equivalent to the system # H_minimal.tile(10, 0).tile(10, 1) RSSE = sisl.RealSpaceSE(H_minimal, 0, 1, (10, 10, 1)) Explanation: Once the minimal graphene unit-cell (here orthogonal) is created we now turn to the calculation of the real space self-energy. 
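The Brillouin zone integral above can be illustrated on a toy model. The following sketch (plain NumPy, independent of sisl) averages $\mathbf G_\mathbf k$ for a 1D tight-binding chain over a k-grid; integrating the resulting density of states recovers one state per site:

```python
import numpy as np

# Toy 1D chain: H_k = 2*t*cos(k), so G_k(E) = 1/(E + i*eta - 2*t*cos(k)).
t, eta = 1.0, 0.05
ks = np.linspace(-np.pi, np.pi, 2001, endpoint=False)
E = np.linspace(-3, 3, 601)

# k-averaged (real space) Green function: mean over the Brillouin zone
Gk = 1.0 / (E[:, None] + 1j * eta - 2 * t * np.cos(ks)[None, :])
G_R = Gk.mean(axis=1)

dos = -G_R.imag / np.pi              # density of states from Im(G)
n_states = dos.sum() * (E[1] - E[0]) # should integrate to ~1 state per site
```

The same averaging idea, with matrix inversions instead of scalars, is what the real space self-energy machinery performs for the 2D graphene lattice.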
The construction of this object is somewhat complicated and has a set of required input options: - object: the Hamiltonian - semi_axes: which axes to use for the recursive self-energy - k_axis: which axis to integrate in the Brillouin zone - unfold: how many times the object needs to be unfolded along each lattice vector, this is an integer vector of length 3 End of explanation H_elec, elec_indices = RSSE.real_space_coupling(ret_indices=True) H_elec.write('GRAPHENE.nc') Explanation: Now we can create the real space self-energy. In TBtrans (and TranSiesta) the electrode atomic indices must be in consecutive order. This is a little troublesome since the natural order in a device would be an order according to $x$, $y$ or $z$. To create the correct order we extract the real space coupling matrix which is where the real space self-energy would live, the self-energy is calculated using: \begin{equation} \boldsymbol\Sigma^{\mathcal R} = E \mathbf S - \mathbf H - \Big[\int_{\mathrm{BZ}} \mathbf G\Big]^{-1}. \end{equation} Another way to calculate the self-energy would be to transfer the Green function from the infinite bulk into the region of interest: \begin{equation} \boldsymbol\Sigma^{\mathcal R} = \mathbf V_{\mathcal R\infty}\mathbf G_{\infty\setminus\mathcal R}\mathbf V_{\infty\mathcal R}. \end{equation} From the 2nd equation it is obvious that the self-energy only lives on the boundary that $\mathbf V_{\infty\mathcal R}$ couples to. Exactly this region is extracted using real_space_coupling as below. Take some time to draw a simple 2D lattice coupling and confirm the area that the real-space self energies couples to. In this example we also retrieve the indices for the electrode atoms, those that connect out to the infinite plane. 
End of explanation H = RSSE.real_space_parent() # Create the true device by re-arranging the atoms indices = np.arange(len(H)) indices = np.delete(indices, elec_indices) # first electrodes, then rest of device indices = np.concatenate([elec_indices, indices]) # Now re-arange matrix H = H.sub(indices) Explanation: The above yields the electrode region which contains the self-energies. Since the full device region is nothing but the H_minimal tiled $10\times10$ times with an attached STM tip on top. Here we need to arange the electrode atoms first, then the final device region. The real_space_parent method returns the Hamiltonian that obeys the unfolded size. In this case $10\times10$ times larger. One should always use this method to get the correct device order of atoms since the order of tiling is determined by the semi_axes and k_axis arguments. End of explanation STM = sisl.Geometry([0, 0, 0], atoms=sisl.Atom('Au', R=1.0001), sc=sisl.SuperCell([10, 10, 1], nsc=[1, 1, 3])) H_STM = sisl.Hamiltonian(STM) H_STM.construct([(0.1, 1.1), (0, -0.75)]) H_STM.write('STM.nc') mid_xyz = H.geometry.center() idx = H.close(mid_xyz, R=1.33)[0] H_device = H.add(H_STM, offset=H.geometry.xyz[idx] - H_STM.geometry.xyz[0] + [0, 0, 2]) na = len(H) idx = H_device.close(na, R=(0.1, 2.25))[1][0] H_device[na, idx] = -0.1 H_device[idx, na] = -0.1 H_device.write('DEVICE.nc') Explanation: Lastly, we need to add the STM tip. Here we simply add a gold atom and manually add the hoppings. Since this is tight-binding we have full control over the self-energy and potential land-scape. Therefore we don't need to extend the electrode region to screen off the tip region. In DFT systems, a properly screened region is required. 
End of explanation # A real space transport calculation ONLY needs the Gamma-point gamma = sisl.MonkhorstPack(H_elec, [1] * 3) # Energy contour dE = 0.04 E = np.arange(-2, 2 + dE / 2, dE) sisl.io.tableSile("contour.E", 'w').write_data(E, np.zeros(E.size) + dE) # Now create the file (should take around 3-4 minutes) eta = 0.001 * 1j with sisl.io.tbtgfSileTBtrans("GRAPHENE.TBTGF") as f: f.write_header(gamma, E + eta) for ispin, new_k, k, e in tqdm(f, unit="rsSE"): if new_k: f.write_hamiltonian(H_elec.Hk(format='array', dtype=np.complex128)) SeHSE = RSSE.self_energy(e + eta, bulk=True, coupling=True) f.write_self_energy(SeHSE) Explanation: Before we can run calculations we need to create the real space self-energy for the graphene flake in sisl. Since the algorithm is not implemented in TBtrans (nor TranSiesta) it needs to be done here. This is somewhat complicated since the files requires a specific order. For ease this tutorial implements it for you. End of explanation tbt = sisl.get_sile('siesta.TBT.nc') Explanation: Exercises Calculate transport, density of state and bond-currents. Please search the manual on how to edit the RUN.fdf according to the following: Force tbtrans to use the generated TBTGF file. This is the same as in TB_07 example, i.e. using out-of-core calculations Force tbtrans to use an energy grid defined in an external file (contour.E) Plot the bond-currents and check their symmetry, does the symmetry depend on the injection point? Is there a particular reason for choosing semi_axis and k_axes as they are chosen? Or could they be swapped? TIME Redo the calculations using 3 electrodes (left/right/tip) using k-points. Converge transmission and then plot the bond-currents. Do they look as the real space calculation? If they are the same, why? End of explanation
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: How does one convert a left-tailed p-value to a z-score from the Z-distribution (standard normal distribution, Gaussian distribution)? I have yet to find the magical function in SciPy's stats module to do this, but one must be there.
Problem:

import numpy as np
import scipy.stats

p_values = [0.1, 0.225, 0.5, 0.75, 0.925, 0.95]
z_scores = scipy.stats.norm.ppf(p_values)
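As a quick check, `scipy.stats.norm.ppf` (the percent-point function) is the inverse of `norm.cdf`, so a round trip recovers the p-values; the median p-value 0.5 maps to $z = 0$:

```python
import numpy as np
import scipy.stats

p_values = np.array([0.1, 0.225, 0.5, 0.75, 0.925, 0.95])
z_scores = scipy.stats.norm.ppf(p_values)    # left-tailed p-value -> z-score

round_trip = scipy.stats.norm.cdf(z_scores)  # inverse direction
print(z_scores)                              # z for p = 0.5 is 0.0
```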
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1 align="center">TensorFlow Neural Network Lab</h1> <img src="image/notmnist.png"> In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J). Step5: <img src="image/Mean_Variance_Image.png" style="height Step6: Checkpoint All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed. Step7: Problem 2 Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer. <img src="image/network_diagram.png" style="height Step8: <img src="image/Learn_Rate_Tune_Image.png" style="height Step9: Test You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
Python Code: import hashlib import os import pickle from urllib.request import urlretrieve import numpy as np from PIL import Image from sklearn.model_selection import train_test_split from sklearn.preprocessing import LabelBinarizer from sklearn.utils import resample from tqdm import tqdm from zipfile import ZipFile print('All modules imported.') Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1> <img src="image/notmnist.png"> In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts. The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in! To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported". End of explanation def download(url, file): Download file from <url> :param url: URL to file :param file: Local file path if not os.path.isfile(file): print('Downloading ' + file + '...') urlretrieve(url, file) print('Download Finished') # Download the training and test dataset. download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip') download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip') # Make sure the files aren't corrupted assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\ 'notMNIST_train.zip file is corrupted. Remove the file and try again.' 
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\ 'notMNIST_test.zip file is corrupted. Remove the file and try again.' # Wait until you see that all files have been downloaded. print('All files downloaded.') def uncompress_features_labels(file): Uncompress features and labels from a zip file :param file: The zip file to extract the data from features = [] labels = [] with ZipFile(file) as zipf: # Progress Bar filenames_pbar = tqdm(zipf.namelist(), unit='files') # Get features and labels from all files for filename in filenames_pbar: # Check if the file is a directory if not filename.endswith('/'): with zipf.open(filename) as image_file: image = Image.open(image_file) image.load() # Load image data as 1 dimensional array # We're using float32 to save on memory space feature = np.array(image, dtype=np.float32).flatten() # Get the the letter from the filename. This is the letter of the image. label = os.path.split(filename)[1][0] features.append(feature) labels.append(label) return np.array(features), np.array(labels) # Get the features and labels from the zip files train_features, train_labels = uncompress_features_labels('notMNIST_train.zip') test_features, test_labels = uncompress_features_labels('notMNIST_test.zip') # Limit the amount of data to work with a docker container docker_size_limit = 150000 train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit) # Set flags for feature engineering. This will prevent you from skipping an important step. is_features_normal = False is_labels_encod = False # Wait until you see that all features and labels have been uncompressed. print('All features and labels uncompressed.') Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J). 
End of explanation # Problem 1 - Implement Min-Max scaling for grayscale image data def normalize_grayscale(image_data): Normalize the image data with Min-Max scaling to a range of [0.1, 0.9] :param image_data: The image data to be normalized :return: Normalized image data # TODO: Implement Min-Max scaling for grayscale image data min = 0.1 max = 0.9 diff = max - min gray_min = 0 gray_max = 255 return min + (image_data - gray_min) * diff / (gray_max - gray_min) ### DON'T MODIFY ANYTHING BELOW ### # Test Cases np.testing.assert_array_almost_equal( normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])), [0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314, 0.125098039216, 0.128235294118, 0.13137254902, 0.9], decimal=3) np.testing.assert_array_almost_equal( normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])), [0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078, 0.896862745098, 0.9]) if not is_features_normal: train_features = normalize_grayscale(train_features) test_features = normalize_grayscale(test_features) is_features_normal = True print('Tests Passed!') if not is_labels_encod: # Turn labels into numbers and apply One-Hot Encoding encoder = LabelBinarizer() encoder.fit(train_labels) train_labels = encoder.transform(train_labels) test_labels = encoder.transform(test_labels) # Change to float32, so it can be multiplied against the features in TensorFlow, which are float32 train_labels = train_labels.astype(np.float32) test_labels = test_labels.astype(np.float32) is_labels_encod = True print('Labels One-Hot Encoded') assert is_features_normal, 'You skipped the step to normalize the features' assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels' # Get randomized datasets for training and validation train_features, valid_features, train_labels, valid_labels = train_test_split( 
train_features, train_labels, test_size=0.05, random_state=832289) print('Training features and labels randomized and split.') # Save the data for easy access pickle_file = 'notMNIST.pickle' if not os.path.isfile(pickle_file): print('Saving data to pickle file...') try: with open('notMNIST.pickle', 'wb') as pfile: pickle.dump( { 'train_dataset': train_features, 'train_labels': train_labels, 'valid_dataset': valid_features, 'valid_labels': valid_labels, 'test_dataset': test_features, 'test_labels': test_labels, }, pfile, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise print('Data cached in pickle file.') Explanation: <img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%"> Problem 1 The first problem involves normalizing the features for your training and test data. Implement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9. Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255. Min-Max Scaling: $ X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}} $ If you're having trouble solving problem 1, you can view the solution here. 
End of explanation %matplotlib inline # Load the modules import pickle import math import numpy as np import tensorflow as tf from tqdm import tqdm import matplotlib.pyplot as plt # Reload the data pickle_file = 'notMNIST.pickle' with open(pickle_file, 'rb') as f: pickle_data = pickle.load(f) train_features = pickle_data['train_dataset'] train_labels = pickle_data['train_labels'] valid_features = pickle_data['valid_dataset'] valid_labels = pickle_data['valid_labels'] test_features = pickle_data['test_dataset'] test_labels = pickle_data['test_labels'] del pickle_data # Free up memory print('Data and modules loaded.') Explanation: Checkpoint All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed. End of explanation # All the pixels in the image (28 * 28 = 784) features_count = 784 # All the labels labels_count = 10 # TODO: Set the features and labels tensors features = tf.placeholder(tf.float32) labels = tf.placeholder(tf.float32) # TODO: Set the weights and biases tensors weights = tf.Variable(tf.truncated_normal((features_count, labels_count))) biases = tf.Variable(tf.zeros(labels_count)) ### DON'T MODIFY ANYTHING BELOW ### #Test Cases from tensorflow.python.ops.variables import Variable assert features._op.name.startswith('Placeholder'), 'features must be a placeholder' assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder' assert isinstance(weights, Variable), 'weights must be a TensorFlow variable' assert isinstance(biases, Variable), 'biases must be a TensorFlow variable' assert features._shape == None or (\ features._shape.dims[0].value is None and\ features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect' assert labels._shape == None or (\ labels._shape.dims[0].value is None and\ labels._shape.dims[1].value in [None, 10]), 'The 
shape of labels is incorrect' assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect' assert biases._variable._shape == (10), 'The shape of biases is incorrect' assert features._dtype == tf.float32, 'features must be type float32' assert labels._dtype == tf.float32, 'labels must be type float32' # Feed dicts for training, validation, and test session train_feed_dict = {features: train_features, labels: train_labels} valid_feed_dict = {features: valid_features, labels: valid_labels} test_feed_dict = {features: test_features, labels: test_labels} # Linear Function WX + b logits = tf.matmul(features, weights) + biases prediction = tf.nn.softmax(logits) # Cross entropy cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1) # Training loss loss = tf.reduce_mean(cross_entropy) # Create an operation that initializes all variables init = tf.global_variables_initializer() # Test Cases with tf.Session() as session: session.run(init) session.run(loss, feed_dict=train_feed_dict) session.run(loss, feed_dict=valid_feed_dict) session.run(loss, feed_dict=test_feed_dict) biases_data = session.run(biases) assert not np.count_nonzero(biases_data), 'biases must be zeros' print('Tests Passed!') # Determine if the predictions are correct is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1)) # Calculate the accuracy of the predictions accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32)) print('Accuracy function created.') Explanation: Problem 2 Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer. <img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%"> For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. 
Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network. For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors: - features - Placeholder tensor for feature data (train_features/valid_features/test_features) - labels - Placeholder tensor for label data (train_labels/valid_labels/test_labels) - weights - Variable Tensor with random numbers from a truncated normal distribution. - See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help. - biases - Variable Tensor with all zeros. - See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help. If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here. 
End of explanation # Change if you have memory restrictions batch_size = 128 # TODO: Find the best parameters for each configuration epochs = 5 learning_rate = .2 ### DON'T MODIFY ANYTHING BELOW ### # Gradient Descent optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) # The accuracy measured against the validation set validation_accuracy = 0.0 # Measurements use for graphing loss and accuracy log_batch_step = 50 batches = [] loss_batch = [] train_acc_batch = [] valid_acc_batch = [] with tf.Session() as session: session.run(init) batch_count = int(math.ceil(len(train_features)/batch_size)) for epoch_i in range(epochs): # Progress bar batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches') # The training cycle for batch_i in batches_pbar: # Get a batch of training features and labels batch_start = batch_i*batch_size batch_features = train_features[batch_start:batch_start + batch_size] batch_labels = train_labels[batch_start:batch_start + batch_size] # Run optimizer and get loss _, l = session.run( [optimizer, loss], feed_dict={features: batch_features, labels: batch_labels}) # Log every 50 batches if not batch_i % log_batch_step: # Calculate Training and Validation accuracy training_accuracy = session.run(accuracy, feed_dict=train_feed_dict) validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict) # Log batches previous_batch = batches[-1] if batches else 0 batches.append(log_batch_step + previous_batch) loss_batch.append(l) train_acc_batch.append(training_accuracy) valid_acc_batch.append(validation_accuracy) # Check accuracy against Validation data validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict) loss_plot = plt.subplot(211) loss_plot.set_title('Loss') loss_plot.plot(batches, loss_batch, 'g') loss_plot.set_xlim([batches[0], batches[-1]]) acc_plot = plt.subplot(212) acc_plot.set_title('Accuracy') acc_plot.plot(batches, train_acc_batch, 'r', label='Training 
Accuracy') acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy') acc_plot.set_ylim([0, 1.0]) acc_plot.set_xlim([batches[0], batches[-1]]) acc_plot.legend(loc=4) plt.tight_layout() plt.show() print('Validation accuracy at {}'.format(validation_accuracy)) Explanation: <img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%"> Problem 3 Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best acccuracy. Parameter configurations: Configuration 1 * Epochs: 1 * Learning Rate: * 0.8 * 0.5 * 0.1 * 0.05 * 0.01 Configuration 2 * Epochs: * 1 * 2 * 3 * 4 * 5 * Learning Rate: 0.2 The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed. If you're having trouble solving problem 3, you can view the solution here. End of explanation ### DON'T MODIFY ANYTHING BELOW ### # The accuracy measured against the test set test_accuracy = 0.0 with tf.Session() as session: session.run(init) batch_count = int(math.ceil(len(train_features)/batch_size)) for epoch_i in range(epochs): # Progress bar batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches') # The training cycle for batch_i in batches_pbar: # Get a batch of training features and labels batch_start = batch_i*batch_size batch_features = train_features[batch_start:batch_start + batch_size] batch_labels = train_labels[batch_start:batch_start + batch_size] # Run optimizer _ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels}) # Check accuracy against Test data test_accuracy = session.run(accuracy, feed_dict=test_feed_dict) assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy) print('Nice Job! 
Test Accuracy is {}'.format(test_accuracy)) Explanation: Test You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%. End of explanation
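The linear layer, softmax and cross entropy used in this lab can be mirrored in plain NumPy. This is a hedged sketch with made-up batch data, not the lab's TensorFlow graph:

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.random((4, 784)).astype(np.float32)     # 4 flattened images
labels = np.eye(10, dtype=np.float32)[[3, 1, 4, 1]]    # one-hot labels

W = rng.normal(scale=0.1, size=(784, 10)).astype(np.float32)
b = np.zeros(10, dtype=np.float32)

logits = features @ W + b                                          # WX + b
prediction = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax
loss = -(labels * np.log(prediction)).sum(axis=1).mean()           # cross entropy
```

Each softmax row sums to one, and the loss is what the `GradientDescentOptimizer` drives down during training.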
Given the following text description, write Python code to implement the functionality described below step by step Description: Sample code to test features of 'NumPy', 'Matplotlib' and 'Scipy' Importing includes Step1: Testing variable assignment and operations in python Step2: Extra Step3: Simple Linear Regression Let $y = 3x + 2 + 10*n$ be the equation of a line, where $n \sim \eta(0,1)$ is standard normal distribution. Let x = [1 Step5: From linear regression using the model $$p(y_i/\mathbf{x_i}) = \eta(y_i/\mathbf{w}^T\mathbf{x_i},\sigma^2)$$ We have $$ \mathbf{w} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y} $$ Now we will plot the original points and the line fitted using linear regression.
Python Code: %matplotlib inline %pylab inline from __future__ import print_function import matplotlib.pyplot as plt import matplotlib.animation as animation import theano import numpy as np from theano import tensor as T from numpy.linalg import inv Explanation: Sample code to test features of 'NumPy', 'Matplotlib' and 'Scipy' Importing includes End of explanation x = 2 print(x) y = x**2 print(y) Explanation: Testing variable assignment and operations in python End of explanation # Theano symbolic gradient example B = T.scalar('E') R = T.sqr(B) A = T.grad(R,B) Z = theano.function([B], A) # Theano symbolic gradient example - Numeric a = range(10) da= range(10) for idx,x in enumerate(a): da[idx] = Z(x) plt.plot(a,da) plt.xlabel('x') plt.ylabel('dx') plt.title('Gradient of $f(x)=x^2$') plt.show() Explanation: Extra: Testing Theano capabalities in handling symbolic variables End of explanation a = 3 b = 2 N = 100 # y = ax+b x = np.reshape(range(N),(N,1)) y = a*x + b + 10*np.random.randn(N,1) #plot plt.scatter(x,y) plt.xlabel('x') plt.ylabel('y') plt.title('Plot of $y = 3x+2 + 10*\eta (0,1)$') plt.show() Explanation: Simple Linear Regression Let $y = 3x + 2 + 10*n$ be the equation of a line, where $n \sim \eta(0,1)$ is standard normal distribution. Let x = [1:100]. We will plot the scatter plot of y vs x End of explanation # Linear regression (MSE) # Augment x with 1 X = np.hstack((np.ones((N,1)),x)) w = np.dot(inv(X.T.dot(X)),X.T.dot(y)) print('a = ',w[1],'b = ',w[0]) plt.scatter(x,y) plt.plot(x,X.dot(w)) plt.xlabel('x') plt.ylabel('y') plt.legend(['Fitted model','Input']) plt.title('Plot of $y = 3x+2 + 10*\eta (0,1)$') plt.show() from tempfile import NamedTemporaryFile VIDEO_TAG = <video controls> <source src="data:video/x-m4v;base64,{0}" type="video/mp4"> Your browser does not support the video tag. 
</video> def anim_to_html(anim): if not hasattr(anim, '_encoded_video'): with NamedTemporaryFile(suffix='.mp4') as f: anim.save(f.name, fps=20, extra_args=['-vcodec', 'libx264']) video = open(f.name, "rb").read() anim._encoded_video = video.encode("base64") return VIDEO_TAG.format(anim._encoded_video) from IPython.display import HTML def display_animation(anim): plt.close(anim._fig) return HTML(anim_to_html(anim)) # Animation: MSE gradient descent fig1 = plt.figure() def init(): line.set_data([], []) return line, def update_w(i): global w off = 2*a*X.T.dot((X.dot(w)-y)) w = w - off line.set_data(x,X.dot(w)) return line, X = np.hstack((np.ones((N,1)),x)) w = np.random.rand(X.shape[1],1) ax = plt.axes(xlim=(-20, 120), ylim=(-50, 350)) line, = ax.plot([], [], lw=2) a = 0.0000001 plt.scatter(x,y) plt.xlabel('x') plt.ylabel('y') plt.legend(['Fitted model','Input']) plt.title('Plot of $y = 3x+2 + 10*\eta (0,1)$') line_ani = animation.FuncAnimation(fig1, update_w,init_func=init, frames=100, interval=25, blit=True) #plt.show() display_animation(line_ani) Explanation: From linear regression using the model $$p(y_i/\mathbf{x_i}) = \eta(y_i/\mathbf{w}^T\mathbf{x_i},\sigma^2)$$ We have $$ \mathbf{w} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y} $$ Now we will plot the original points and the line fitted using linear regression. End of explanation
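The closed-form solution $\mathbf{w} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$ used above can be sanity-checked against NumPy's least-squares solver on freshly generated data (a standalone check; the seed is arbitrary):

```python
import numpy as np
from numpy.linalg import inv, lstsq

np.random.seed(832289)   # arbitrary seed, for reproducibility only
N = 100
x = np.arange(N, dtype=float).reshape(N, 1)
y = 3 * x + 2 + 10 * np.random.randn(N, 1)

X = np.hstack((np.ones((N, 1)), x))      # augment with a bias column
w_normal = inv(X.T @ X) @ (X.T @ y)      # normal equations
w_lstsq = lstsq(X, y, rcond=None)[0]     # SVD-based least squares
```

Both approaches agree to numerical precision here; `lstsq` is preferable in practice when $\mathbf{X}^T\mathbf{X}$ is ill-conditioned.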
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright Jana Schaich Borg/Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) MySQL Exercise 3 Step1: 1. Use AS to change the titles of the columns in your output The AS clause allows you to assign an alias (a temporary name) to a table or a column in a table. Aliases can be useful for increasing the readability of queries, for abbreviating long names, and for changing column titles in query outputs. To implement the AS clause, include it in your query code immediately after the column or table you want to rename. For example, if you wanted to change the name of the time stamp field of the completed_tests table from "created_at" to "time_stamp" in your output, you could take advantage of the AS clause and execute the following query Step2: 2. Use DISTINCT to remove duplicate rows Especially in databases like the Dognition database where no primary keys were declared in each table, sometimes entire duplicate rows can be entered in error. Even with no duplicate rows present, sometimes your queries correctly output multiple instances of the same value in a column, but you are interested in knowing what the different possible values in the column are, not what each value in each row is. In both of these cases, the best way to arrive at the clean results you want is to instruct the query to return only values that are distinct, or different from all the rest. The SQL keyword that allows you to do this is called DISTINCT. To use it in a query, place it directly after the word SELECT in your query. For example, if we wanted a list of all the breeds of dogs in the Dognition database, we could try the following query from a previous exercise Step3: If you scroll through the output, you will see that no two entries are the same. 
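The behaviour of DISTINCT can be reproduced outside MySQL with Python's built-in sqlite3 module (a hedged stand-in with made-up sample data, not the Dognition database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dogs (breed TEXT)")
conn.executemany("INSERT INTO dogs VALUES (?)",
                 [("Akita",), ("Beagle",), ("Akita",), ("Corgi",), ("Beagle",)])

# Duplicate breeds collapse to one row each
rows = conn.execute("SELECT DISTINCT breed FROM dogs ORDER BY breed").fetchall()
print(rows)   # [('Akita',), ('Beagle',), ('Corgi',)]
```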
Of note, if you use the DISTINCT clause on a column that has NULL values, MySQL will include one NULL value in the DISTINCT output from that column. <mark> When the DISTINCT clause is used with multiple columns in a SELECT statement, the combination of all the columns together is used to determine the uniqueness of a row in a result set.</mark> For example, if you wanted to know all the possible combinations of states and cities in the users table, you could query Step4: If you examine the query output carefully, you will see that there are many rows with California (CA) in the state column and four rows that have Gainesville in the city column (Georgia, Arkansas, Florida, and Virginia all have cities named Gainesville in our user table), but no two rows have the same state and city combination. When you use the DISTINCT clause with the LIMIT clause in a statement, MySQL stops searching when it finds the number of unique rows specified in the LIMIT clause, not when it goes through the number of rows in the LIMIT clause. For example, if the first 6 entries of the breed column in the dogs table were Step5: 3. Use ORDER BY to sort the output of your query As you might have noticed already when examining the output of the queries you have executed thus far, databases do not have built-in sorting mechanisms that automatically sort the output of your query. However, SQL permits the use of the powerful ORDER BY clause to allow you to sort the output according to your own specifications. Let's look at how you would implement a simple ORDER BY clause. Recall our query outline Step6: (You might notice that some of the breeds start with a hyphen; we'll come back to that later.) The default is to sort the output in ascending order. However, you can tell SQL to sort the output in descending order as well Step7: You might notice that some of the rows have null values in the state field. 
You could revise your query to only select rows that do not have null values in either the state or membership_type column Step8: 4. Export your query results to a text file Next week, we will learn how to complete some basic forms of data analysis in SQL. However, if you know how to use other analysis or visualization software like Excel or Tableau, you can implement these analyses with the SQL skills you have gained already, as long as you can export the results of your SQL queries in a format other software packages can read. Almost every database interface has a different method for exporting query results, so you will need to look up how to do it every time you try a new interface (another place where having a desire to learn new things will come in handy!). There are two ways to export your query results using our Jupyter interface. You can select and copy the output you see in an output window, and paste it into another program. Although this strategy is very simple, it only works if your output is very limited in size (since you can only paste 1000 rows at a time). You can tell MySQL to put the results of a query into a variable (for our purposes consider a variable to be a temporary holding place), and then use Python code to format the data in the variable as a CSV file (comma separated value file, a .CSV file) that can be downloaded. When you use this strategy, all of the results of a query will be saved into the variable, not just the first 1000 rows as displayed in Jupyter, even if we have set up Jupyter to only display 1000 rows of the output. Let's see how we could export query results using the second method. To tell MySQL to put the results of a query into a variable, use the following syntax Step9: Once your variable is created, using the above command tell Jupyter to format the variable as a csv file using the following syntax Step10: You should see a link in the output line that says "CSV results." 
You can click on this link to see the text file in a tab in your browser or to download the file to your computer (exactly how this works will differ depending on your browser and settings, but your options will be the same as if you were trying to open or download a file from any other website.) You can also open the file directly from the home page of your Jupyter account. Behind the scenes, your csv file was written to your directory on the Jupyter server, so you should now see this file listed in your Jupyter account landing page along with the list of your notebooks. Just like a notebook, you can copy it, rename it, or delete it from your directory by clicking on the check box next to the file and clicking the "duplicate," "rename," or trash can buttons at the top of the page. <img src="https Step11: That was helpful, but you'll still notice some issues with the output. First, the leading dashes are indeed removed in the breed_fixed column, but now the dashes used to separate breeds in entries like 'French Bulldog-Boston Terrier Mix' are missing as well. So REPLACE isn't the right choice to selectively remove leading dashes. Perhaps we could try using the TRIM function Step12: That certainly gets us a lot closer to the list we might want, but there are still some entries in the breed_fixed column that are conceptual duplicates of each other, due to poor consistency in how the breed names were entered. For example, one entry is "Beagle Mix" while another is "Beagle- Mix". These entries are clearly meant to refer to the same breed, but they will be counted as separate breeds as long as their breed names are different. Cleaning up all of the entries in the breed column would take quite a bit of work, so we won't go through more details about how to do it in this lesson. 
Instead, use this exercise as a reminder for why it's so important to always look at the details of your data, and as motivation to explore the MySQL functions we won't have time to discuss in the course. If you push yourself to learn new SQL functions and embrace the habit of getting to know your data by exploring its raw values and outputs, you will find that SQL provides very efficient tools to clean real-world messy data sets, and you will arrive at the correct conclusions about what your data indicate your company should do. Now it's time to practice using AS, DISTINCT, and ORDER BY in your own queries. Question 4 Step13: Question 5 Step14: Question 6 Step15: Question 7 Step16: Question 8
Python Code: %load_ext sql %sql mysql://studentuser:studentpw@mysqlserver/dognitiondb %sql USE dognitiondb %config SqlMagic.displaylimit=25 Explanation: Copyright Jana Schaich Borg/Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) MySQL Exercise 3: Formatting Selected Data In this lesson, we are going to learn about three SQL clauses or functionalities that will help you format and edit the output of your queries. We will also learn how to export the results of your formatted queries to a text file so that you can analyze them in other software packages such as Tableau or Excel. Begin by loading the SQL library into Jupyter, connecting to the Dognition database, and setting Dognition as the default database. python %load_ext sql %sql mysql://studentuser:studentpw@mysqlserver/dognitiondb %sql USE dognitiondb End of explanation %%sql SELECT start_time as 'exam start time' FROM exam_answers LIMIT 0, 5; Explanation: 1. Use AS to change the titles of the columns in your output The AS clause allows you to assign an alias (a temporary name) to a table or a column in a table. Aliases can be useful for increasing the readability of queries, for abbreviating long names, and for changing column titles in query outputs. To implement the AS clause, include it in your query code immediately after the column or table you want to rename. 
For example, if you wanted to change the name of the time stamp field of the completed_tests table from "created_at" to "time_stamp" in your output, you could take advantage of the AS clause and execute the following query: mySQL SELECT dog_guid, created_at AS time_stamp FROM complete_tests Note that if you use an alias that includes a space, the alias must be surrounded in quotes: mySQL SELECT dog_guid, created_at AS "time stamp" FROM complete_tests You could also make an alias for a table: mySQL SELECT dog_guid, created_at AS "time stamp" FROM complete_tests AS tests Since aliases are strings, again, MySQL accepts both double and single quotation marks, but some database systems only accept single quotation marks. It is good practice to avoid using SQL keywords in your aliases, but if you have to use an SQL keyword in your alias for some reason, the string must be enclosed in backticks instead of quotation marks. Question 1: How would you change the title of the "start_time" field in the exam_answers table to "exam start time" in a query output? Try it below: End of explanation %%sql SELECT DISTINCT breed FROM dogs; Explanation: 2. Use DISTINCT to remove duplicate rows Especially in databases like the Dognition database where no primary keys were declared in each table, sometimes entire duplicate rows can be entered in error. Even with no duplicate rows present, sometimes your queries correctly output multiple instances of the same value in a column, but you are interested in knowing what the different possible values in the column are, not what each value in each row is. In both of these cases, the best way to arrive at the clean results you want is to instruct the query to return only values that are distinct, or different from all the rest. The SQL keyword that allows you to do this is called DISTINCT. To use it in a query, place it directly after the word SELECT in your query. 
For example, if we wanted a list of all the breeds of dogs in the Dognition database, we could try the following query from a previous exercise: mySQL SELECT breed FROM dogs; However, the output of this query would not be very helpful, because it would output the entry for every single row in the breed column of the dogs table, regardless of whether it duplicated the breed of a previous entry. Fortunately, we could arrive at the list we want by executing the following query with the DISTINCT modifier: mySQL SELECT DISTINCT breed FROM dogs; Try it yourself (If you do not limit your output, you should get 2006 rows in your output): End of explanation %%sql SELECT DISTINCT state, city FROM users; Explanation: If you scroll through the output, you will see that no two entries are the same. Of note, if you use the DISTINCT clause on a column that has NULL values, MySQL will include one NULL value in the DISTINCT output from that column. <mark> When the DISTINCT clause is used with multiple columns in a SELECT statement, the combination of all the columns together is used to determine the uniqueness of a row in a result set.</mark> For example, if you wanted to know all the possible combinations of states and cities in the users table, you could query: mySQL SELECT DISTINCT state, city FROM users; Try it (if you don't limit your output you'll see 3999 rows in the query result, of which the first 1000 are displayed): End of explanation %%sql SELECT DISTINCT test_name, subcategory_name FROM complete_tests; Explanation: If you examine the query output carefully, you will see that there are many rows with California (CA) in the state column and four rows that have Gainesville in the city column (Georgia, Arkansas, Florida, and Virginia all have cities named Gainesville in our user table), but no two rows have the same state and city combination. 
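The multi-column DISTINCT behavior described above can be reproduced outside of the course's MySQL server; here is a minimal self-contained sketch using Python's built-in sqlite3 module and a made-up users table (the rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (state TEXT, city TEXT)")
rows = [("CA", "San Diego"), ("CA", "San Diego"),   # exact duplicate row
        ("CA", "Fresno"),
        ("GA", "Gainesville"), ("FL", "Gainesville")]
conn.executemany("INSERT INTO users VALUES (?, ?)", rows)

# DISTINCT applies to the (state, city) combination, not to each column separately
pairs = conn.execute(
    "SELECT DISTINCT state, city FROM users ORDER BY state, city").fetchall()
print(pairs)
```

The duplicate ('CA', 'San Diego') row collapses to one, but the two different Gainesville rows both survive because their state differs.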
When you use the DISTINCT clause with the LIMIT clause in a statement, MySQL stops searching when it finds the number of unique rows specified in the LIMIT clause, not when it goes through the number of rows in the LIMIT clause. For example, if the first 6 entries of the breed column in the dogs table were: Labrador Retriever Shetland Sheepdog Golden Retriever Golden Retriever Shih Tzu Siberian Husky The output of the following query: mySQL SELECT DISTINCT breed FROM dogs LIMIT 5; would be the first 5 different breeds: Labrador Retriever Shetland Sheepdog Golden Retriever Shih Tzu Siberian Husky not the distinct breeds in the first 5 rows: Labrador Retriever Shetland Sheepdog Golden Retriever Shih Tzu Question 2: How would you list all the possible combinations of test names and subcategory names in complete_tests table? (If you do not limit your output, you should retrieve 45 possible combinations) End of explanation %%sql SELECT DISTINCT breed FROM dogs ORDER BY breed Explanation: 3. Use ORDER BY to sort the output of your query As you might have noticed already when examining the output of the queries you have executed thus far, databases do not have built-in sorting mechanisms that automatically sort the output of your query. However, SQL permits the use of the powerful ORDER BY clause to allow you to sort the output according to your own specifications. Let's look at how you would implement a simple ORDER BY clause. Recall our query outline: <img src="https://duke.box.com/shared/static/l9v2khefe7er98pj1k6oyhmku4tz5wpf.jpg" width=400 alt="SELECT FROM WHERE ORDER BY" /> Your ORDER BY clause will come after everything else in the main part of your query, but before a LIMIT clause. 
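The DISTINCT-plus-LIMIT behavior described above (the limit counts unique values found, not rows scanned) can be verified with the same six example breeds. This standalone sketch uses Python's built-in sqlite3, which behaves like MySQL on this point:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dogs (breed TEXT)")
breeds = ["Labrador Retriever", "Shetland Sheepdog", "Golden Retriever",
          "Golden Retriever", "Shih Tzu", "Siberian Husky"]
conn.executemany("INSERT INTO dogs VALUES (?)", [(b,) for b in breeds])

# LIMIT 5 returns the first 5 *unique* breeds, so "Siberian Husky" is included
# even though it sits in the 6th row of the table
result = [r[0] for r in conn.execute("SELECT DISTINCT breed FROM dogs LIMIT 5")]
```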
If you wanted the breeds of dogs in the dog table sorted in alphabetical order, you could query: mySQL SELECT DISTINCT breed FROM dogs ORDER BY breed Try it yourself: End of explanation %%sql SELECT DISTINCT user_guid, state, membership_type FROM users WHERE country="US" ORDER BY state ASC, membership_type ASC Explanation: (You might notice that some of the breeds start with a hyphen; we'll come back to that later.) The default is to sort the output in ascending order. However, you can tell SQL to sort the output in descending order as well: mySQL SELECT DISTINCT breed FROM dogs ORDER BY breed DESC Combining ORDER BY with LIMIT gives you an easy way to select the "top 10" and "last 10" in a list or column. For example, you could select the User IDs and Dog IDs of the 5 customer-dog pairs who spent the least median amount of time between their Dognition tests: mySQL SELECT DISTINCT user_guid, median_ITI_minutes FROM dogs ORDER BY median_ITI_minutes LIMIT 5 or the greatest median amount of time between their Dognition tests: mySQL SELECT DISTINCT user_guid, median_ITI_minutes FROM dogs ORDER BY median_ITI_minutes DESC LIMIT 5 You can also sort your output based on a derived field. If you wanted your inter-test interval to be expressed in seconds instead of minutes, you could incorporate a derived column and an alias into your last query to get the 5 customer-dog pairs who spent the greatest median amount of time between their Dognition tests in seconds: mySQL SELECT DISTINCT user_guid, (median_ITI_minutes * 60) AS median_ITI_sec FROM dogs ORDER BY median_ITI_sec DESC LIMIT 5 Note that the parentheses are important in that query; without them, the database would try to make an alias for 60 instead of median_ITI_minutes * 60. 
SQL queries also allow you to sort by multiple fields in a specified order, similar to how Excel allows to include multiple levels in a sort (see image below): <img src="https://duke.box.com/shared/static/lbubaw9rkqoyv5xd61y57o3lpqkvrj10.jpg" width=600 alt="SELECT FROM WHERE" /> To achieve this in SQL, you include all the fields (or aliases) by which you want to sort the results after the ORDER BY clause, separated by commas, in the order you want them to be used for sorting. You can then specify after each field whether you want the sort using that field to be ascending or descending. If you wanted to select all the distinct User IDs of customers in the United States (abbreviated "US") and sort them according to the states they live in in alphabetical order first, and membership type second, you could query: mySQL SELECT DISTINCT user_guid, state, membership_type FROM users WHERE country="US" ORDER BY state ASC, membership_type ASC Go ahead and try it yourself (if you do not limit the output, you should get 9356 rows in your output): End of explanation %%sql SELECT DISTINCT user_guid, state, membership_type FROM users WHERE country="US" ORDER BY membership_type DESC, state ASC Explanation: You might notice that some of the rows have null values in the state field. You could revise your query to only select rows that do not have null values in either the state or membership_type column: mySQL SELECT DISTINCT user_guid, state, membership_type FROM users WHERE country="US" AND state IS NOT NULL and membership_type IS NOT NULL ORDER BY state ASC, membership_type ASC Question 3: Below, try executing a query that would sort the same output as described above by membership_type first in descending order, and state second in ascending order: End of explanation breed_list = %sql SELECT DISTINCT breed FROM dogs ORDER BY breed; Explanation: 4. Export your query results to a text file Next week, we will learn how to complete some basic forms of data analysis in SQL. 
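The multi-level sort described above (primary key first, then the secondary key within ties) can be sketched with sqlite3 on a toy table — the user IDs, states, and membership types below are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (user_guid TEXT, state TEXT, membership_type INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [("u1", "NC", 2), ("u2", "CA", 1),
                  ("u3", "CA", 3), ("u4", "NC", 1)])

# Sort by state first (ascending); membership_type only breaks ties within a state
ordered = conn.execute(
    "SELECT user_guid FROM users "
    "ORDER BY state ASC, membership_type ASC").fetchall()
```

Both CA rows come before both NC rows, and inside each state the lower membership type comes first.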
However, if you know how to use other analysis or visualization software like Excel or Tableau, you can implement these analyses with the SQL skills you have gained already, as long as you can export the results of your SQL queries in a format other software packages can read. Almost every database interface has a different method for exporting query results, so you will need to look up how to do it every time you try a new interface (another place where having a desire to learn new things will come in handy!). There are two ways to export your query results using our Jupyter interface. You can select and copy the output you see in an output window, and paste it into another program. Although this strategy is very simple, it only works if your output is very limited in size (since you can only paste 1000 rows at a time). You can tell MySQL to put the results of a query into a variable (for our purposes consider a variable to be a temporary holding place), and then use Python code to format the data in the variable as a CSV file (comma separated value file, a .CSV file) that can be downloaded. When you use this strategy, all of the results of a query will be saved into the variable, not just the first 1000 rows as displayed in Jupyter, even if we have set up Jupyter to only display 1000 rows of the output. Let's see how we could export query results using the second method. To tell MySQL to put the results of a query into a variable, use the following syntax: python variable_name_of_your_choice = %sql [your full query goes here]; In this case, you must execute your SQL query all on one line. 
So if you wanted to export the list of dog breeds in the dogs table, you could begin by executing: python breed_list = %sql SELECT DISTINCT breed FROM dogs ORDER BY breed; Go ahead and try it: End of explanation breed_list.csv('breed_list.csv') Explanation: Once your variable is created, using the above command tell Jupyter to format the variable as a csv file using the following syntax: python the_output_name_you_want.csv('the_output_name_you_want.csv') Since this line is being run in Python, do NOT include the %sql prefix when trying to execute the line. We could therefore export the breed list by executing: python breed_list.csv('breed_list.csv') When you do this, all of the results of the query will be saved in the text file but the results will not be displayed in your notebook. This is a convenient way to retrieve large amounts of data from a query without taxing your browser or the server. Try it yourself: End of explanation %%sql SELECT DISTINCT breed, REPLACE(breed,'-','') AS breed_fixed FROM dogs ORDER BY breed_fixed LIMIT 0, 5; Explanation: You should see a link in the output line that says "CSV results." You can click on this link to see the text file in a tab in your browser or to download the file to your computer (exactly how this works will differ depending on your browser and settings, but your options will be the same as if you were trying to open or download a file from any other website.) You can also open the file directly from the home page of your Jupyter account. Behind the scenes, your csv file was written to your directory on the Jupyter server, so you should now see this file listed in your Jupyter account landing page along with the list of your notebooks. Just like a notebook, you can copy it, rename it, or delete it from your directory by clicking on the check box next to the file and clicking the "duplicate," "rename," or trash can buttons at the top of the page. 
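A standalone way to produce the same kind of file without the `.csv()` helper is Python's standard csv module; this minimal sketch writes to an in-memory buffer instead of a file on the Jupyter server, and the rows are invented stand-ins for a query result:

```python
import csv
import io

# Pretend these rows came back from a query such as SELECT DISTINCT breed FROM dogs
rows = [("Beagle",), ("Golden Retriever",), ("Shih Tzu",)]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["breed"])   # header line
writer.writerows(rows)       # one line per result row
csv_text = buf.getvalue()
```

Replacing `io.StringIO()` with `open('breed_list.csv', 'w', newline='')` would write the same content to disk.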
<img src="https://duke.box.com/shared/static/0k33vrxct1k03iz5u0cunfzf81vyn3ns.jpg" width=400 alt="JUPYTER SCREEN SHOT" /> 5. A Bird's Eye View of Other Functions You Might Want to Explore When you open your breed list results file, you will notice the following: 1) All of the rows of the output are included, even though you can only see 1000 of those rows when you run the query through the Jupyter interface. 2) There are some strange values in the breed list. Some of the entries in the breed column seem to have a dash included before the name. This is an example of what real business data sets look like...they are messy! We will use this as an opportunity to highlight why it is so important to be curious and explore MySQL functions on your own. If you needed an accurate list of all the dog breeds in the dogs table, you would have to find some way to "clean up" the breed list you just made. Let's examine some of the functions that could help you achieve this cleaning using SQL syntax rather than another program or language outside of the database. I included these links to MySQL functions in an earlier notebook: http://dev.mysql.com/doc/refman/5.7/en/func-op-summary-ref.html http://www.w3resource.com/mysql/mysql-functions-and-operators.php The following description of a function called REPLACE is included in that resource: "REPLACE(str,from_str,to_str) Returns the string str with all occurrences of the string from_str replaced by the string to_str. REPLACE() performs a case-sensitive match when searching for from_str." One thing we could try is using this function to replace any dashes included in the breed names with no character: mySQL SELECT DISTINCT breed, REPLACE(breed,'-','') AS breed_fixed FROM dogs ORDER BY breed_fixed In this query, we put the field/column name in the replace function where the syntax instructions listed "str" in order to tell the REPLACE function to act on the entire column. 
The "-" was the "from_str", which is the string we wanted to replace. The "" was the to_str, which is the character with which we want to replace the "from_str". Try looking at the output: End of explanation %%sql SELECT DISTINCT breed, TRIM(LEADING '-' FROM breed) AS breed_fixed FROM dogs ORDER BY breed_fixed Explanation: That was helpful, but you'll still notice some issues with the output. First, the leading dashes are indeed removed in the breed_fixed column, but now the dashes used to separate breeds in entries like 'French Bulldog-Boston Terrier Mix' are missing as well. So REPLACE isn't the right choice to selectively remove leading dashes. Perhaps we could try using the TRIM function: http://www.w3resource.com/mysql/string-functions/mysql-trim-function.php sql SELECT DISTINCT breed, TRIM(LEADING '-' FROM breed) AS breed_fixed FROM dogs ORDER BY breed_fixed Try the query written above yourself, and inspect the output carefully: End of explanation %%sql SELECT DISTINCT subcategory_name FROM complete_tests ORDER BY subcategory_name; Explanation: That certainly gets us a lot closer to the list we might want, but there are still some entries in the breed_fixed column that are conceptual duplicates of each other, due to poor consistency in how the breed names were entered. For example, one entry is "Beagle Mix" while another is "Beagle- Mix". These entries are clearly meant to refer to the same breed, but they will be counted as separate breeds as long as their breed names are different. Cleaning up all of the entries in the breed column would take quite a bit of work, so we won't go through more details about how to do it in this lesson. Instead, use this exercise as a reminder for why it's so important to always look at the details of your data, and as motivation to explore the MySQL functions we won't have time to discuss in the course. 
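The difference between REPLACE (which removes every dash) and trimming only leading dashes can be reproduced with plain Python string operations; the leading-dash breed below is an invented stand-in for the entries discussed above:

```python
breeds = ["-German Shepherd Dog", "French Bulldog-Boston Terrier Mix"]

# REPLACE(breed, '-', '') removes *all* dashes, including the separator dash
replaced = [b.replace("-", "") for b in breeds]

# TRIM(LEADING '-' FROM breed) only strips dashes at the start of the string
trimmed = [b.lstrip("-") for b in breeds]
```

This mirrors what the two queries do: the first mangles multi-breed names, the second leaves them intact.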
If you push yourself to learn new SQL functions and embrace the habit of getting to know your data by exploring its raw values and outputs, you will find that SQL provides very efficient tools to clean real-world messy data sets, and you will arrive at the correct conclusions about what your data indicate your company should do. Now it's time to practice using AS, DISTINCT, and ORDER BY in your own queries. Question 4: How would you get a list of all the subcategories of Dognition tests, in alphabetical order, with no test listed more than once (if you do not limit your output, you should retrieve 16 rows)? End of explanation %%sql SELECT DISTINCT country FROM users WHERE country != 'US' ORDER BY country; %%sql Describe users Explanation: Question 5: How would you create a text file with a list of all the non-United States countries of Dognition customers with no country listed more than once? End of explanation %%sql SELECT user_guid, dog_guid, test_name, created_at FROM complete_tests complete_tests LIMIT 0, 10; Explanation: Question 6: How would you find the User ID, Dog ID, and test name of the first 10 tests to ever be completed in the Dognition database? End of explanation %%sql Describe users %%sql SELECT user_guid, state, created_at FROM users WHERE membership_type=2 AND state='NC' AND created_at >= '2014-03-01' ORDER BY created_at DESC; Explanation: Question 7: How would create a text file with a list of all the customers with yearly memberships who live in the state of North Carolina (USA) and joined Dognition after March 1, 2014, sorted so that the most recent member is at the top of the list? End of explanation %%sql SELECT DISTINCT breed, UPPER(TRIM(LEADING '-' FROM breed)) AS breed_fixed FROM dogs ORDER BY breed; Explanation: Question 8: See if you can find an SQL function from the list provided at: http://www.w3resource.com/mysql/mysql-functions-and-operators.php that would allow you to output all of the distinct breed names in UPPER case. 
Create a query that would output a list of these names in upper case, sorted in alphabetical order. End of explanation
Given the following text description, write Python code to implement the functionality described below step by step Description: A Network Tour of Data Science, EPFL 2016 Project Step1: Load the data from *.csv file The first step is to load the data from the .csv file. <br> The format of the csv line is<br> class{0,1,2,3,4,5,6},pix0 pix2304,DataUsage(not used)<br> e.g.<br> 2,234 1 34 23 ..... 234 256 0,Training<br> The picture is always 48x48 pixels, 0-255 greyscale. Remove strange data In the database there are some images that are not good (e.g. some images are pixelated, irrelevant, or taken from animations). We tried to filter them out by looking at the maximum of the histogram: if an image is very homogeneous, the maximum value of its histogram will be very high (that is to say, above a certain threshold), and the image is filtered out. Of course, some relevant images are also removed this way, but it is better for the CNN not to consider these images. Merge class 0 and 1 We discovered that class 1 has a very small number of occurrences in the test dataset. This class (disgust) is very similar to anger, which is why we merged classes 0 and 1 together. Therefore, the recognized emotions and labels are 0-Angry + Disgust 1-Fear 2-Happy 3-Sad 4-Surprise 5-Neutral Step2: Explore the correct data Plot some random pictures from each class. Step3: Prepare the Data for CNN Here the initial data have been divided to create train and test data. <br> These two subsets both have associated labels, used to train the neural network and to test its accuracy on the test data. The number of images used for each category of emotions is shown for both the train and the test data. Step4: Prepare the data for CNN Convert, normalize, subtract the const mean value from the data images. 
Step5: Model 1 - Overfitting the data TODO not overfitting with 35k data In the first model we implemented a baseline softmax classifier using a single convolutional layer and one fully connected layer. For this initial baseline no regularization, dropout, or batch normalization was used. The equation of the classifier is simply Step6: As this result shows, the model already overfits the training data at iteration 400, while reaching a test accuracy of only 28%. To prevent overfitting, different techniques such as dropout and pooling have been applied in the following models, and we also tried to implement a neural network with more layers. This should improve the model, since the first convolutional layer only extracts the simplest characteristics of the image, such as edges, lines and curves. Adding layers should improve performance because deeper layers detect higher-level features, which could be really relevant here since we are dealing with facial expressions. Advanced computational graphs - functions Step7: Model 2 - 4 x Convolutional Layers, 1x Fully Connected Step8: Computational graph - 6 Layers, Conv-Relu-Maxpool, 1 Fully Connected Layer $$x = \mathrm{maxpool}_{2\times 2}(\mathrm{ReLU}(\mathrm{ReLU}(x * W_1 + b_1) * W_2 + b_2))$$ applied twice (also for $W_3, b_3$ and $W_4, b_4$), then $$x = \mathrm{ReLU}(x * W_5 + b_5)$$ (one additional conv layer), then $$y = \mathrm{softmax}(x W_6 + b_6)$$ (fully connected layer at the end) Step9: saving the trained graph in TF file https Step10: Feeding the CNN with some data (camera/file) Finally, to test whether the model really works, we need to feed some new raw and unlabeled data into the neural network. To do so, some images are taken from the internet, or they could be taken directly from the camera.
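Whatever the source (file or camera), a new image has to go through exactly the same preprocessing as the training data before being fed to the network: 48x48 grayscale, per-image mean subtraction, division by the norm, and flattening to 2304 values. Below is a standalone numpy sketch of that step, using a random array as a stand-in for a real photo (the helper name is ours, not from the original utils module):

```python
import numpy as np

def preprocess_face(img48):
    # Apply the same normalization used on the training images:
    # subtract the per-image mean, divide by the norm, flatten to 48*48 values
    xx = img48.astype("float32")
    xx -= np.mean(xx)
    xx /= np.linalg.norm(xx)
    return xx.reshape(48 * 48)

rng = np.random.RandomState(1)
fake_image = rng.randint(0, 256, size=(48, 48))   # stand-in for a real photo
vec = preprocess_face(fake_image)
```

Feeding an image normalized any other way would give meaningless predictions, since the network was trained on inputs in this exact representation.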
Python Code: import random import numpy as np import tensorflow as tf import matplotlib.pyplot as plt import csv import scipy.misc import time import collections import os import utils as ut import importlib import copy importlib.reload(ut) # This is a bit of magic to make matplotlib figures appear inline in the notebook # rather than in a new window. %matplotlib inline plt.rcParams['figure.figsize'] = (20.0, 20.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' #Data Visualization # Load the raw CSV data with labeled pictures emotions_dataset_dir = 'fer2013_full.csv' #obtaining the number of lines of the csv file file = open(emotions_dataset_dir) numline = len(file.readlines()) file.close() print('Number of data in the dataset:', numline) Explanation: A Network Tour of Data Science, EPFL 2016 Project: Facial Emotion Recognition Dataset taken from: kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge <br> <br> students: Patryk Oleniuk, Carmen Galotta The project presented here is an algorithm to recognize and detect emotions from a face picture. Of course, the task of recognizing face emotions is very easy for humans, even if it is sometimes hard to tell how a person feels; but what the human brain does easily is difficult to emulate by a machine. The aim of this project is to classify faces into discrete human emotions. Due to the success of Neural Networks in image classification tasks, we thought that employing them could also be a good idea for face emotion recognition. The dataset has been taken from the kaggle competition and consists of 48x48 grey images already labeled with a number coding for classes of emotions, namely: 0-Angry<br> 1-Disgust<br> 2-Fear<br> 3-Happy<br> 4-Sad<br> 5-Surprise<br> 6-Neutral<br> The faces are mostly centered in the image. 
Configuration, dataset file End of explanation #Load the file in csv ifile = open(emotions_dataset_dir, "rt") reader = csv.reader(ifile) hist_threshold = 350 # images above this threshold will be removed hist_div = 100 #parameter of the histogram print('Loading Images. It may take a while, depending on the database size.') images, emotions, strange_im, num_strange, num_skipped = ut.load_dataset(reader, numline, hist_div, hist_threshold) ifile.close() print('Skipped', num_skipped, 'happy class images.') print(str( len(images) ) + ' are left after \'strange images\' removal.') print('Deleted ' + str( num_strange ) + ' strange images. Images are shown below') # showing strange images plt.rcParams['figure.figsize'] = (5.0, 5.0) # set default size of plots idxs = np.random.choice(range(1,num_strange ), 6, replace=False) for i, idx in enumerate(idxs): plt_idx = i plt.subplot(1, 6, plt_idx+1) plt.imshow(strange_im[idx]) plt.axis('off') if(i == 0): plt.title('Some of the images removed from dataset (max(histogram) thresholded)') plt.show() Explanation: Load the data from *.csv file The first step is to load the data from the .csv file. <br> The format of the csv line is<br> class{0,1,2,3,4,5,6},pix0 pix2304,DataUsage(not used)<br> e.g.<br> 2,234 1 34 23 ..... 234 256 0,Training<br> The picture is always 48x48 pixels, 0-255 greyscale. Remove strange data In the database there are some images that are not good (e.g. some images are pixelated, irrelevant, or taken from animations). We tried to filter them out by looking at the maximum of the histogram: if an image is very homogeneous, the maximum value of its histogram will be very high (that is to say, above a certain threshold), and the image is filtered out. Of course, some relevant images are also removed this way, but it is better for the CNN not to consider them. Merge class 0 and 1 We discovered that class 1 has a very small number of occurrences in the test dataset. 
This class, (disgust) is very similar to anger and that is why we merger class 0 and 1 together. Therefore, the recognized emotions and labels are 0-Angry + Disgust 1-Fear 2-Happy 3-Sad 4-Surprise 5-Neutral End of explanation classes = [0,1,2,3,4,5] str_emotions = ['angry','scared','happy','sad','surprised','normal'] num_classes = len(classes) samples_per_class = 6 plt.rcParams['figure.figsize'] = (10.0, 10.0) # set default size of plots for y, cls in enumerate(classes): idxs = np.flatnonzero(emotions == y) idxs = np.random.choice(idxs, samples_per_class, replace=False) for i, idx in enumerate(idxs): plt_idx = i * num_classes + y + 1 plt.subplot(samples_per_class, num_classes, plt_idx) plt.imshow(images[idx]) y_h, x_h = np.histogram( images[idx], hist_div ); plt.axis('off') if(i == 0): plt.title(str_emotions[y] ) plt.show() Explanation: Explore the correct data Plot some random pictures from each class. End of explanation print('number of clean data:' + str(images.shape[0]) + ' 48x48 pix , 0-255 greyscale images') n_all = images.shape[0]; n_train = 64; # number of data for training and for batch # dividing the input data train_data_orig = images[0:n_all-n_train,:,:] train_labels = emotions[0:n_all-n_train] test_data_orig = images[n_all-n_train:n_all,:,:] test_labels = emotions[n_all-n_train:n_all] # Convert to float train_data_orig = train_data_orig.astype('float32') y_train = train_labels.astype('float32') test_data_orig = test_data_orig.astype('float32') y_test = test_labels.astype('float32') print('orig train data ' + str(train_data_orig.shape)) print('orig train labels ' + str(train_labels.shape) + 'from ' + str(train_labels.min()) + ' to ' + str(train_labels.max()) ) print('orig test data ' + str(test_data_orig.shape)) print('orig test labels ' + str(test_labels.shape)+ 'from ' + str(test_labels.min()) + ' to ' + str(test_labels.max()) ) for i in range (0, 5): print('TRAIN: number of' , i, 'labels',len(train_labels[train_labels == i])) for i in range (0, 5): 
print('TEST: number of', i, 'labels',len(test_labels[test_labels == i])) Explanation: Prepare the Data for CNN Here the initial data have been divided to create train and test data. <bv> This two subsets have both an associated label to train the neural network and to test its accuracy with the test data. The number of images used for each category of emotions is shown both for the train as for the test data. End of explanation # Data pre-processing n = train_data_orig.shape[0]; train_data = np.zeros([n,48**2]) for i in range(n): xx = train_data_orig[i,:,:] xx -= np.mean(xx) xx /= np.linalg.norm(xx) train_data[i,:] = xx.reshape(2304); #np.reshape(xx,[-1]) n = test_data_orig.shape[0] test_data = np.zeros([n,48**2]) for i in range(n): xx = test_data_orig[i,:,:] xx -= np.mean(xx) xx /= np.linalg.norm(xx) test_data[i] = np.reshape(xx,[-1]) #print(train_data.shape) #print(test_data.shape) #print(train_data_orig[0][2][2]) #print(test_data[0][2]) plt.rcParams['figure.figsize'] = (2.0, 2.0) # set default size of plots plt.imshow(train_data[4].reshape([48,48])); plt.title('example image after processing'); # Convert label values to one_hot vector train_labels = ut.convert_to_one_hot(train_labels,num_classes) test_labels = ut.convert_to_one_hot(test_labels,num_classes) print('train labels shape',train_labels.shape) print('test labels shape',test_labels.shape) Explanation: Prepare the data for CNN Convert, normalize, subtract the const mean value from the data images. 
End of explanation # Define computational graph (CG) batch_size = n_train # batch size d = train_data.shape[1] # data dimensionality nc = 6 # number of classes # CG inputs xin = tf.placeholder(tf.float32,[batch_size,d]); #print('xin=',xin,xin.get_shape()) y_label = tf.placeholder(tf.float32,[batch_size,nc]); #print('y_label=',y_label,y_label.get_shape()) #d = tf.placeholder(tf.float32); # Convolutional layer K0 = 8 # size of the patch F0 = 64 # number of filters ncl0 = K0*K0*F0 Wcl0 = tf.Variable(tf.truncated_normal([K0,K0,1,F0], stddev=tf.sqrt(2./tf.to_float(ncl0)) )); print('Wcl=',Wcl0.get_shape()) #bcl0 = tf.Variable(tf.zeros([F0])); print('bcl=',bcl0.get_shape()) bcl0 = bias_variable([F0]); print('bcl0=',bcl0.get_shape()) #in ReLu case, small positive bias added to prevent killing of gradient when input is negative. x_2d0 = tf.reshape(xin, [-1,48,48,1]); print('x_2d=',x_2d0.get_shape()) x = tf.nn.conv2d(x_2d0, Wcl0, strides=[1, 1, 1, 1], padding='SAME') x += bcl0; print('x2=',x.get_shape()) # ReLU activation x = tf.nn.relu(x) # Dropout #x = tf.nn.dropout(x, 0.25) # Fully Connected layer nfc = 48*48*F0 x = tf.reshape(x, [batch_size,-1]); print('x3=',x.get_shape()) Wfc = tf.Variable(tf.truncated_normal([nfc,nc], stddev=tf.sqrt(2./tf.to_float(nfc+nc)) )); print('Wfc=',Wfc.get_shape()) bfc = tf.Variable(tf.zeros([nc])); print('bfc=',bfc.get_shape()) y = tf.matmul(x, Wfc); print('y1=',y.get_shape()) y += bfc; print('y2=',y.get_shape()) # Softmax y = tf.nn.softmax(y); print('y3(SOFTMAX)=',y.get_shape()) # Loss cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_label * tf.log(y), 1)) total_loss = cross_entropy # Optimization scheme #train_step = tf.train.GradientDescentOptimizer(0.02).minimize(total_loss) train_step = tf.train.AdamOptimizer(0.004).minimize(total_loss) # Accuracy correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_label,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # Run Computational Graph n = train_data.shape[0] indices = 
collections.deque() init = tf.initialize_all_variables() sess = tf.Session() sess.run(init) for i in range(1001): # Batch extraction if len(indices) < batch_size: indices.extend(np.random.permutation(n)) idx = [indices.popleft() for i in range(batch_size)] batch_x, batch_y = train_data[idx,:], train_labels[idx] #print(batch_x.shape,batch_y.shape) # Run CG for vao to increase the test acriable training _,acc_train,total_loss_o = sess.run([train_step,accuracy,total_loss], feed_dict={xin: batch_x, y_label: batch_y}) # Run CG for test set if not i%100: print('\nIteration i=',i,', train accuracy=',acc_train,', loss=',total_loss_o) acc_test = sess.run(accuracy, feed_dict={xin: test_data, y_label: test_labels}) print('test accuracy=',acc_test) Explanation: Model 1 - Overfitting the data TODO not overfitting with 35k data In the first model it has been implemented a baseline softmax classifier using a single convolutional layer and a one fully connected layer. For the initial baseline it has not be used any regularization, dropout, or batch normalization. 
The equation of the classifier is simply: $$ y=\textrm{softmax}(ReLU( x \ast W_1+b_1)W_2+b_2) $$ End of explanation d = train_data.shape[1] #Défining network def weight_variable2(shape, nc10): initial2 = tf.random_normal(shape, stddev=tf.sqrt(2./tf.to_float(ncl0)) ) return tf.Variable(initial2) def conv2dstride2(x,W): return tf.nn.conv2d(x,W,strides=[1, 2, 2, 1], padding='SAME') def conv2d(x,W): return tf.nn.conv2d(x,W,strides=[1, 1, 1, 1], padding='SAME') def max_pool_2x2(x): return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') def weight_variable(shape): initial = tf.truncated_normal(shape, stddev=1/np.sqrt(d/2) ) return tf.Variable(initial) def bias_variable(shape): initial = tf.constant(0.01,shape=shape) return tf.Variable(initial) Explanation: As it is possible to see from this result, the model overfits the training data already at iteration 400, while getting a test accuracy of only 28%. In order to prevent overfitting in the following model have been applied different techniques such as dropout and pool, as well as tried to implement a neural network of more layers. This should help and improve the model since the first convolutional layer will just extract some simplest characteristics of the image such as edges, lines and curves. Adding layers will improve the performances because they will detect some high level feature which in this case could be really relevant since it's about face expressions. 
Advanced computational graphs - functions End of explanation tf.reset_default_graph() # Define computational graph (CG) batch_size = n_train # batch size d = train_data.shape[1] # data dimensionality nc = 6 # number of classes # CG inputs xin = tf.placeholder(tf.float32,[batch_size,d]); #print('xin=',xin,xin.get_shape()) y_label = tf.placeholder(tf.float32,[batch_size,nc]); #print('y_label=',y_label,y_label.get_shape()) #d = tf.placeholder(tf.float32); # Convolutional layer K0 = 7 # size of the patch F0 = 16 # number of filters ncl0 = K0*K0*F0 K1 = 5 # size of the patch F1 = 16 # number of filters ncl0 = K1*K1*F1 K2 = 3 # size of the patch F2 = 2 # number of filters ncl0 = K2*K2*F2 nfc = int(48*48*F0/4) nfc1 = int(48*48*F1/4) nfc2 = int(48*48*F2/4) keep_prob_input=tf.placeholder(tf.float32) #First set of conv followed by conv stride 2 operation and dropout 0.5 W_conv1=weight_variable([K0,K0,1,F0]); print('W_conv1=',W_conv1.get_shape()) b_conv1=bias_variable([F0]); print('b_conv1=',b_conv1.get_shape()) x_2d0 = tf.reshape(xin, [-1,48,48,1]); print('x_2d0=',x_2d0.get_shape()) h_conv1=tf.nn.relu(conv2d(x_2d0,W_conv1)+b_conv1); print('h_conv1=',h_conv1.get_shape()) h_conv1= tf.nn.dropout(h_conv1,keep_prob_input); # 2nd convolutional layer W_conv2=weight_variable([K0,K0,F0,F0]); print('W_conv2=',W_conv2.get_shape()) b_conv2=bias_variable([F0]); print('b_conv2=',b_conv2.get_shape()) h_conv2 = tf.nn.relu(conv2d(h_conv1,W_conv2)+b_conv2); print('h_conv2=',h_conv2.get_shape()) h_conv2_pooled = max_pool_2x2(h_conv2); print('h_conv2_pooled=',h_conv2_pooled.get_shape()) # reshaping for fully connected h_conv2_pooled_rs = tf.reshape(h_conv2_pooled, [batch_size,-1]); print('x_rs',h_conv2_pooled_rs.get_shape()); W_norm3 = weight_variable([nfc1, nfc]); print('W_norm3=',W_norm3.get_shape()) b_conv3 = bias_variable([nfc1]); print('b_conv3=',b_conv3.get_shape()) # fully connected layer h_full3 = tf.matmul( W_norm3, tf.transpose(h_conv2_pooled_rs) ); 
print('h_full3=',h_full3.get_shape()) h_full3 = tf.transpose(h_full3); print('h_full3=',h_full3.get_shape()) h_full3 += b_conv3; print('h_full3=',h_full3.get_shape()) h_full3=tf.nn.relu(h_full3); print('h_full3=',h_full3.get_shape()) h_full3=tf.nn.dropout(h_full3,keep_prob_input); print('h_full3_dropout=',h_full3.get_shape()) #reshaping back to conv h_full3_rs = tf.reshape(h_full3, [batch_size, 24,24,-1]); print('h_full3_rs=',h_full3_rs.get_shape()) #Second set of conv followed by conv stride 2 operation W_conv4=weight_variable([K1,K1,F1,F1]); print('W_conv4=',W_conv4.get_shape()) b_conv4=bias_variable([F1]); print('b_conv4=',b_conv4.get_shape()) h_conv4=tf.nn.relu(conv2d(h_full3_rs,W_conv4)+b_conv4); print('h_conv4=',h_conv4.get_shape()) h_conv4 = max_pool_2x2(h_conv4); print('h_conv4_pooled=',h_conv4.get_shape()) # reshaping for fully connected h_conv4_pooled_rs = tf.reshape(h_conv4, [batch_size,-1]); print('x2_rs',h_conv4_pooled_rs.get_shape()); W_norm4 = weight_variable([ 2304, nc]); print('W_norm4=',W_norm4.get_shape()) b_conv4 = tf.Variable(tf.zeros([nc])); print('b_conv4=',b_conv4.get_shape()) # fully connected layer h_full4 = tf.matmul( h_conv4_pooled_rs, W_norm4 ); print('h_full4=',h_full4.get_shape()) h_full4 += b_conv4; print('h_full4=',h_full4.get_shape()) y = h_full4; ## Softmax y = tf.nn.softmax(y); print('y(SOFTMAX)=',y.get_shape()) # Loss cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_label * tf.log(y), 1)) total_loss = cross_entropy # Optimization scheme #train_step = tf.train.GradientDescentOptimizer(0.02).minimize(total_loss) train_step = tf.train.AdamOptimizer(0.001).minimize(total_loss) # Accuracy correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_label,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # Run Computational Graph n = train_data.shape[0] indices = collections.deque() init = tf.initialize_all_variables() sess = tf.Session() sess.run(init) for i in range(15001): # Batch extraction if len(indices) < 
batch_size: indices.extend(np.random.permutation(n)) idx = [indices.popleft() for i in range(batch_size)] batch_x, batch_y = train_data[idx,:], train_labels[idx] #print(batch_x.shape,batch_y.shape) # Run CG for vao to increase the test acriable training _,acc_train,total_loss_o = sess.run([train_step,accuracy,total_loss], feed_dict={xin: batch_x, y_label: batch_y, keep_prob_input: 0.2}) # Run CG for test set if not i%50: print('\nIteration i=',i,', train accuracy=',acc_train,', loss=',total_loss_o) acc_test = sess.run(accuracy, feed_dict = {xin: test_data, y_label: test_labels, keep_prob_input: 1.0}) print('test accuracy=',acc_test) Explanation: Model 2 - 4 x Convolutional Layers, 1x Fully Connected End of explanation tf.reset_default_graph() # implementation of Conv-Relu-COVN-RELU - pool # based on : http://cs231n.github.io/convolutional-networks/ # Define computational graph (CG) batch_size = n_train # batch size d = train_data.shape[1] # data dimensionality nc = 6 # number of classes # CG inputs xin = tf.placeholder(tf.float32,[batch_size,d]); #print('xin=',xin,xin.get_shape()) y_label = tf.placeholder(tf.float32,[batch_size,nc]); #print('y_label=',y_label,y_label.get_shape()) #d = tf.placeholder(tf.float32); #for the first conc-conv # Convolutional layer K0 = 8 # size of the patch F0 = 22 # number of filters ncl0 = K0*K0*F0 #for the second conc-conv K1 = 4 # size of the patch F1 = F0 # number of filters ncl1 = K1*K1*F1 #drouput probability keep_prob_input=tf.placeholder(tf.float32) #1st set of conv followed by conv2d operation and dropout 0.5 W_conv1=weight_variable([K0,K0,1,F0]); print('W_conv1=',W_conv1.get_shape()) b_conv1=bias_variable([F0]); print('b_conv1=',b_conv1.get_shape()) x_2d1 = tf.reshape(xin, [-1,48,48,1]); print('x_2d1=',x_2d1.get_shape()) #conv2d h_conv1=tf.nn.relu(conv2d(x_2d1, W_conv1) + b_conv1); print('h_conv1=',h_conv1.get_shape()) #h_conv1= tf.nn.dropout(h_conv1,keep_prob_input); # 2nd convolutional layer + max pooling 
W_conv2=weight_variable([K0,K0,F0,F0]); print('W_conv2=',W_conv2.get_shape()) b_conv2=bias_variable([F0]); print('b_conv2=',b_conv2.get_shape()) # conv2d + max pool h_conv2 = tf.nn.relu(conv2d(h_conv1,W_conv2)+b_conv2); print('h_conv2=',h_conv2.get_shape()) h_conv2_pooled = max_pool_2x2(h_conv2); print('h_conv2_pooled=',h_conv2_pooled.get_shape()) #3rd set of conv W_conv3=weight_variable([K0,K0,F0,F0]); print('W_conv3=',W_conv3.get_shape()) b_conv3=bias_variable([F1]); print('b_conv3=',b_conv3.get_shape()) x_2d3 = tf.reshape(h_conv2_pooled, [-1,24,24,F0]); print('x_2d3=',x_2d3.get_shape()) #conv2d h_conv3=tf.nn.relu(conv2d(x_2d3, W_conv3) + b_conv3); print('h_conv3=',h_conv3.get_shape()) # 4th convolutional layer W_conv4=weight_variable([K1,K1,F1,F1]); print('W_conv4=',W_conv4.get_shape()) b_conv4=bias_variable([F1]); print('b_conv4=',b_conv4.get_shape()) #conv2d + max pool 4x4 h_conv4 = tf.nn.relu(conv2d(h_conv3,W_conv4)+b_conv4); print('h_conv4=',h_conv4.get_shape()) h_conv4_pooled = max_pool_2x2(h_conv4); print('h_conv4_pooled=',h_conv4_pooled.get_shape()) h_conv4_pooled = max_pool_2x2(h_conv4_pooled); print('h_conv4_pooled=',h_conv4_pooled.get_shape()) #5th set of conv W_conv5=weight_variable([K1,K1,F1,F1]); print('W_conv5=',W_conv5.get_shape()) b_conv5=bias_variable([F1]); print('b_conv5=',b_conv5.get_shape()) x_2d5 = tf.reshape(h_conv4_pooled, [-1,6,6,F1]); print('x_2d5=',x_2d5.get_shape()) #conv2d h_conv5=tf.nn.relu(conv2d(x_2d5, W_conv5) + b_conv5); print('h_conv5=',h_conv5.get_shape()) # 6th convolutional layer W_conv6=weight_variable([K1,K1,F1,F1]); print('W_con6=',W_conv6.get_shape()) b_conv6=bias_variable([F1]); print('b_conv6=',b_conv6.get_shape()) b_conv6= tf.nn.dropout(b_conv6,keep_prob_input); #conv2d + max pool 4x4 h_conv6 = tf.nn.relu(conv2d(h_conv5,W_conv6)+b_conv6); print('h_conv6=',h_conv6.get_shape()) h_conv6_pooled = max_pool_2x2(h_conv6); print('h_conv6_pooled=',h_conv6_pooled.get_shape()) # reshaping for fully connected h_conv6_pooled_rs = 
tf.reshape(h_conv6, [batch_size,-1]); print('x2_rs',h_conv6_pooled_rs.get_shape()); W_norm6 = weight_variable([ 6*6*F1, nc]); print('W_norm6=',W_norm6.get_shape()) b_norm6 = bias_variable([nc]); print('b_conv6=',b_norm6.get_shape()) # fully connected layer h_full6 = tf.matmul( h_conv6_pooled_rs, W_norm6 ); print('h_full6=',h_full6.get_shape()) h_full6 += b_norm6; print('h_full6=',h_full6.get_shape()) y = h_full6; ## Softmax y = tf.nn.softmax(y); print('y3(SOFTMAX)=',y.get_shape()) # Loss cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_label * tf.log(y), 1)) total_loss = cross_entropy # Optimization scheme #train_step = tf.train.GradientDescentOptimizer(0.02).minimize(total_loss) train_step = tf.train.AdamOptimizer(0.001).minimize(total_loss) # Accuracy correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_label,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # Run Computational Graph n = train_data.shape[0] indices = collections.deque() init = tf.initialize_all_variables() sess = tf.Session() sess.run(init) for i in range(20001): # Batch extraction if len(indices) < batch_size: indices.extend(np.random.permutation(n)) idx = [indices.popleft() for i in range(batch_size)] batch_x, batch_y = train_data[idx,:], train_labels[idx] #print(batch_x.shape,batch_y.shape) # Run CG for vao to increase the test acriable training _,acc_train,total_loss_o = sess.run([train_step,accuracy,total_loss], feed_dict={xin: batch_x, y_label: batch_y, keep_prob_input: 0.5}) # Run CG for test set if not i%100: print('\nIteration i=',i,', train accuracy=',acc_train,', loss=',total_loss_o) acc_test = sess.run(accuracy, feed_dict = {xin: test_data, y_label: test_labels, keep_prob_input: 1.0}) print('test accuracy=',acc_test) Explanation: Computational graph - 6 Layers, Conv-Relu-Maxpool, 1 Fully Connected L. 
$$ x= maxpool2x2( ReLU( ReLU( x* W_1+b_1) * W_2+b_2))$$ 2 times (also for $W_3,b_3 W_4,b_4$) $$ then -> ReLU( x* W_5+b_5) $$ (1 additional conv layer) $$ then-> y=\textrm{softmax} {( x W_6+b_1)}$$(fully connected layer at the end) End of explanation # Add ops to save and restore all the variables. saver = tf.train.Saver() # Save the variables to disk. save_path = saver.save(sess, "model_6layers.ckpt") print("Model saved in file: %s" % save_path) # calculating accuracy for each class separately for the test set result_cnn = sess.run([y], feed_dict = {xin: test_data, keep_prob_input: 1.0}) #result = sess.run(y, feed_dict={xin: test_data, keep_prob_input: 1.0}) tset = test_labels.argmax(1); result = np.asarray(result_cnn[:][0]).argmax(1); for i in range (0,nc): print('accuracy',str_emotions[i]+str(' '), '\t',ut.calc_partial_accuracy(tset, result, i)) Explanation: saving the trained graph in TF file https://www.tensorflow.org/how_tos/variables/ End of explanation faces, marked_img = ut.get_faces_from_img('diff_emotions.jpg'); #faces, marked_img = ut.get_faces_from_img('big_bang.png'); #faces, marked_img = ut.get_faces_from_img('camera'); # if some face was found in the image if(len(faces)): #creating the blank test vector data_orig = np.zeros([n_train, 48,48]) #putting face data into the vector (only first few) for i in range(0, len(faces)): data_orig[i,:,:] = ut.contrast_stretch(faces[i,:,:]); #preparing image and putting it into the batch n = data_orig.shape[0]; data = np.zeros([n,48**2]) for i in range(n): xx = data_orig[i,:,:] xx -= np.mean(xx) xx /= np.linalg.norm(xx) data[i,:] = xx.reshape(2304); #np.reshape(xx,[-1]) result = sess.run([y], feed_dict={xin: data, keep_prob_input: 1.0}) plt.rcParams['figure.figsize'] = (10.0, 10.0) # set default size of plots for i in range(0, len(faces)): emotion_nr = np.argmax(result[0][i]); plt_idx = (2*i)+1; plt.subplot( 5, 2*len(faces)/5+1, plt_idx) plt.imshow(np.reshape(data[i,:], (48,48))) plt.axis('off') 
plt.title(str_emotions[emotion_nr]) ax = plt.subplot(5, 2*len(faces)/5+1, plt_idx +1) ax.bar(np.arange(nc) , result[0][i]) ax.set_xticklabels(str_emotions, rotation=45, rotation_mode="anchor") ax.set_yticks([]) plt.show() Explanation: Feeding the CNN with some data (camera/file) Finally to test if the model really works it's needed to feed some new row and unlabeled data into the neural network. To do so some images are taken from the internet or could be taken directly from the camera. End of explanation
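For readers who want to sanity-check the Model 1 forward pass outside TensorFlow, here is a minimal NumPy sketch of $y=\textrm{softmax}(ReLU(xW_1+b_1)W_2+b_2)$. It is only illustrative: the convolutional layer is replaced by a dense weight matrix, and the shapes, names and layer sizes below are assumptions rather than the notebook's actual variables.

```python
import numpy as np

def softmax(z):
    # subtract the row-wise max for numerical stability before exponentiating
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def baseline_forward(x, W1, b1, W2, b2):
    # y = softmax(ReLU(x @ W1 + b1) @ W2 + b2), with a dense layer
    # standing in for the notebook's convolution
    h = np.maximum(x @ W1 + b1, 0.0)
    return softmax(h @ W2 + b2)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 48 * 48))            # 4 flattened 48x48 faces
W1 = 0.01 * rng.standard_normal((48 * 48, 64))   # hypothetical hidden size
b1 = np.zeros(64)
W2 = 0.01 * rng.standard_normal((64, 6))         # 6 emotion classes
b2 = np.zeros(6)
probs = baseline_forward(x, W1, b1, W2, b2)
print(probs.shape)  # (4, 6): one probability vector per image
```

Each row of `probs` is a valid probability distribution over the six emotion classes, which is the same output contract the TensorFlow graph above produces.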
Given the following text description, write Python code to implement the functionality described below step by step Description: SAM Registry Hive Handle Request Metadata | | | | Step1: Download & Process Mordor Dataset Step2: Analytic I Monitor for any handle requested for the SAM registry hive | Data source | Event Provider | Relationship | Event | |
Python Code: from openhunt.mordorutils import * spark = get_spark() Explanation: SAM Registry Hive Handle Request Metadata | | | |:------------------|:---| | collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] | | creation date | 2019/07/25 | | modification date | 2020/09/20 | | playbook related | ['WIN-190625024610'] | Hypothesis Adversaries might be getting a handle to the SAM database to extract credentials in my environment Technical Context Every computer that runs Windows has its own local domain; that is, it has an account database for accounts that are specific to that computer. Conceptually,this is an account database like any other with accounts, groups, SIDs, and so on. These are referred to as local accounts, local groups, and so on. Because computers typically do not trust each other for account information, these identities stay local to the computer on which they were created. Offensive Tradecraft Adversaries might use tools like Mimikatz with lsadump::sam commands or scripts such as Invoke-PowerDump to get the SysKey to decrypt Security Account Mannager (SAM) database entries (from registry or hive) and get NTLM, and sometimes LM hashes of local accounts passwords. In addition, adversaries can use the built-in Reg.exe utility to dump the SAM hive in order to crack it offline. 
Additional reading * https://github.com/OTRF/ThreatHunter-Playbook/tree/master/docs/library/windows/security_account_manager_database.md * https://github.com/OTRF/ThreatHunter-Playbook/tree/master/docs/library/windows/syskey.md Mordor Test Data | | | |:----------|:----------| | metadata | https://mordordatasets.com/notebooks/small/windows/06_credential_access/SDWIN-190625103712.html | | link | https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/credential_access/host/empire_mimikatz_sam_access.zip | Analytics Initialize Analytics Engine End of explanation mordor_file = "https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/credential_access/host/empire_mimikatz_sam_access.zip" registerMordorSQLTable(spark, mordor_file, "mordorTable") Explanation: Download & Process Mordor Dataset End of explanation df = spark.sql( ''' SELECT `@timestamp`, Hostname, SubjectUserName, ProcessName, ObjectName, AccessMask FROM mordorTable WHERE LOWER(Channel) = "security" AND EventID = 4656 AND ObjectType = "Key" AND lower(ObjectName) LIKE "%sam" ''' ) df.show(10,False) Explanation: Analytic I Monitor for any handle requested for the SAM registry hive | Data source | Event Provider | Relationship | Event | |:------------|:---------------|--------------|-------| | Windows registry | Microsoft-Windows-Security-Auditing | Process requested access Windows registry key | 4656 | | Windows registry | Microsoft-Windows-Security-Auditing | User requested access Windows registry key | 4656 | End of explanation
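The Spark SQL predicate in Analytic I can also be expressed as a plain-Python filter, which is convenient for unit-testing the detection logic without a Spark session. The sample events below are synthetic stand-ins, not records from the Mordor dataset.

```python
def is_sam_handle_request(event):
    """Mirror of the SQL WHERE clause: Security channel, EventID 4656,
    ObjectType 'Key', ObjectName ending in 'sam' (case-insensitive)."""
    return (
        event.get("Channel", "").lower() == "security"
        and event.get("EventID") == 4656
        and event.get("ObjectType") == "Key"
        and event.get("ObjectName", "").lower().endswith("sam")
    )

events = [
    {"Channel": "Security", "EventID": 4656, "ObjectType": "Key",
     "ObjectName": "\\REGISTRY\\MACHINE\\SAM"},
    {"Channel": "Security", "EventID": 4656, "ObjectType": "File",
     "ObjectName": "C:\\Windows\\System32\\config\\SAM"},
    {"Channel": "Security", "EventID": 4663, "ObjectType": "Key",
     "ObjectName": "\\REGISTRY\\MACHINE\\SOFTWARE"},
]
hits = [e for e in events if is_sam_handle_request(e)]
print(len(hits))  # 1: only the registry-key handle request matches
```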
Given the following text description, write Python code to implement the functionality described below step by step Description: dataset We generate a dataset, y and a sub-sampled version y_sub Step1: Regularisation We wish to solve problems that involve the term
Python Code: # pull a dataset and make it 1D for now url='https://upload.wikimedia.org/wikipedia/en/0/04/TCF_centre.jpg' im = np.array(Image.open(urllib.request.urlopen(url)).convert("L")).astype(float) # y = im/im.max() plt.imshow(y,cmap='gray') plt.colorbar() # sub-sample # how many samples to mask? nmask = int(y.size * 1/3) # mask imin = np.random.choice(y.size,nmask,replace=False) mask = np.unravel_index(imin,y.shape) # apply mask y_sub = y.copy() y_sub[mask] = np.nan # plot plt.imshow(y_sub,cmap='gray') plt.colorbar() Explanation: dataset We generate a dataset, y and a sub-sampled version y_sub: End of explanation gamma = 1. f = fastDiff(y,axis=(0,1),gamma=gamma) diff = f.diff() # plot plt.imshow(diff,cmap='gray') plt.colorbar() # filter in Frequency domain plt.imshow(filt,cmap='gray') plt.colorbar() from scipy.fftpack.realtransforms import dct,idct r = f.dctND(1/(1+filt),f=idct,axis=(0,1)) plt.plot(r[int(r.shape[0]/2)]) plt.plot(r[int(3 *r.shape[0]/2/4)]) res.keys() plt.imshow(res['jac'].reshape(x.shape)) Explanation: Regularisation We wish to solve problems that involve the term: $$H = \left( I_n + s D^T D\right)^{-1}$$ where $I_n$ is an identity matrix of size $n$ by $n$, $D$ is a first-order differential operator on a grid with unit spacing and $^T$ stands for the matrix transpose. $s$ is a smoothness parameter. Garcia D, Robust smoothing of gridded data in one and higher dimensions with missing values. Computational Statistics & Data Analysis, 2010;54:1167-1178. For example, define a vector of observations $y = \hat{y} + \varepsilon$ where $\varepsilon$ is independent Gaussian noise with zero mean and vector $\sigma_y^2$. 
Then we can estimate the noise-free signal $\hat{y}$ from: $$ \hat{y} = H y $$ In that context, we can quantify discrepency a $J_y$ as: $$ J_y = \frac{1}{2 \sigma_y^2} \left( \hat{y} - {y} \right)^T \left( \hat{y} - {y} \right) $$ Assuming a reflexive boundary condition to the construction of $D$, this can be conveniently expressed in the frequency domain, using a discrete cosine transform (DCT) (Type II). Writing $D = U \Lambda U^{−1}$ where $U$ is a forward DCT matrix and $U^{-1}$ the inverse and $\Lambda = diag(\lambda_1,...,\lambda_n)$ with $\lambda_i =−2+2\cos\alpha_i$ with $\alpha_i=(i−1)\pi/n$. Here, $\lambda_i$ are the eigenvalues of $D$. $U$ is a unitary matrix, so $U^{-1} = U^T$ and: $$H = \left( I_n + s U \Lambda^2 U^T\right)^{-1} \equiv U \Gamma U^T $$ The matrix $\Gamma$ is diagonal, with elements all zero other than the leading diagonal which is defined by $\Gamma_{i,i} \equiv \gamma_i = \left(1+s \lambda_i^2\right)^{-1}$. For the evenly-spaced data we have here, the trace of $H$ is $Tr(H) = \sum_{i=1}^{n} \gamma_i$. Garcia notes that for large $n$: $$ \frac{Tr(H)}{n} \approx \frac{\sqrt{ 1 + s'}}{ \sqrt{2} s' } $$ with $s' = \sqrt{1 + 16 s}$. Also: $$ J_y = \frac{1}{2 \sigma_y^2} \sum_{i=1}^{n} \left( \gamma_i - 1\right)^2 DCT^2_i(y) $$ where $DCT_i(y)$ refers to the $i^{th}$ component of the DCT of $y$. filter develop the filters End of explanation
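The spectral form of $H$ above maps directly onto a few lines of SciPy. The sketch below is a 1D stand-in (the `fastDiff` class used earlier is not reproduced): it builds $\gamma_i = (1+s\lambda_i^2)^{-1}$ from $\lambda_i = -2+2\cos\alpha_i$ and applies $\hat{y}=U\Gamma U^T y$ with an orthonormal type-II DCT. The smoothing parameter and the test signal are arbitrary choices for illustration.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct_smooth(y, s):
    # hat_y = U Gamma U^T y, computed with the type-II DCT:
    # lambda_i = -2 + 2*cos((i-1)*pi/n), gamma_i = 1/(1 + s*lambda_i**2)
    n = y.size
    lam = -2.0 + 2.0 * np.cos(np.arange(n) * np.pi / n)
    gamma = 1.0 / (1.0 + s * lam ** 2)
    return idct(gamma * dct(y, norm='ortho'), norm='ortho')

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
clean = np.cos(2.0 * np.pi * t)                    # smooth ground truth
noisy = clean + 0.3 * rng.standard_normal(t.size)  # add Gaussian noise
smooth = dct_smooth(noisy, s=50.0)                 # damp high frequencies
```

With $s=0$ every $\gamma_i$ equals 1 and the data pass through unchanged; increasing $s$ suppresses high-frequency content while leaving the low-frequency signal nearly intact.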
Given the following text description, write Python code to implement the functionality described below step by step Description: This is functionally similar to the other notebook. All the operations here have been vectorized. This results in much faster code, but it is also harder to read. The vectorization also necessitated the replacement of the Gauss-Seidel smoother with under-relaxed Jacobi. That change has had some effect, since GS is "twice as good" as Jacobi. The Making of a Preconditioner ---Vectorized Version This is a demonstration of a multigrid preconditioned Krylov solver in python3. The code and more examples are present on github here. The problem solved is a Poisson equation on a rectangular domain with homogeneous Dirichlet boundary conditions. Finite difference with cell-centered discretization is used to get a second-order accurate solution, which is further improved to 4th order using deferred correction. The first step is a multigrid algorithm. This is the simplest 2D geometric multigrid solver. 1. Multigrid algorithm We need some terminology before going further. - Approximation Step1: 1.1 Smoothing operator This can be a certain number of Jacobi or Gauss-Seidel iterations. Below is defined a smoother that does under-relaxed Jacobi sweeps and returns the result along with the residual. Step2: 1.2 Interpolation Operator This operator takes values on a coarse grid and transfers them onto a fine grid. It is also called prolongation. The function below uses bilinear interpolation for this purpose. 'v' is on a coarse grid and we want to interpolate it onto a fine grid and store it in v_f. Step3: 1.3 Restriction This is exactly the opposite of the interpolation. It takes values from the fine grid and transfers them onto the coarse grid. It is a kind of averaging process, fundamentally different from interpolation. Each coarse grid point is surrounded by four fine grid points.
So quite simply we take the value of the coarse point to be the average of the 4 fine points. Here 'v' is the fine grid quantity and 'v_c' is the coarse grid quantity Step4: 1.4 Bottom Solver Note that we have looped over the coarse grid in both the cases above. It is easier to access the variables this way. The last part is the Bottom Solver. This must be something that gives us the exact/converged solution to whatever we feed it. What we feed to the bottom solver is the problem at the coarsest level. This generally has very few points (e.g. 2x2=4 in our case) and can be solved exactly by the smoother itself with a few iterations. That is what we do here, but any other direct method can also be used. 50 iterations are used here. If we coarsify to just one point, then just one iteration will solve it exactly. 1.5 V-cycle Now that we have all the parts, we are ready to build our multigrid algorithm. First we will look at a V-cycle. It is self-explanatory. It is a recursive function, i.e., it calls itself. It takes as input an initial guess 'u', the rhs 'f', and the number of multigrid levels 'num_levels', among other things. At each level the V-cycle calls another V-cycle. At the lowest level the solving is exact. Step5: That's it! Now we can see it in action. We can use a problem with a known solution to test our code. The following functions set up a rhs for a problem with homogeneous Dirichlet BC on the unit square. Step6: Let us set up the problem, discretization and solver details. The number of divisions along each dimension is given as a power-of-two function of the number of levels. In principle this is not required, but having it makes the inter-grid transfers easy. The coarsest problem is going to have a 2-by-2 grid. Step7: Now we can call the solver Step8: True error is the difference of the approximation with the analytical solution. It is largely the discretization error.
This is what would be present when we solve the discrete equation with a direct/exact method like Gaussian elimination. We see that the true error stops reducing at the 5th cycle. The approximation is not getting any better after this point. So we can stop after 5 cycles. But, in general, we don't know the true error. In practice we use the norm of the (relative) residual as a stopping criterion. As the cycles progress, the floating point round-off error limit is reached and the residual also stops decreasing. This was the multigrid V-cycle. We can use this as a preconditioner to a Krylov solver. But before we get to that, let's complete the multigrid introduction by looking at the Full Multi-Grid algorithm. You can skip this section safely. 1.6 Full Multi-Grid We started with a zero initial guess for the V-cycle. Presumably, if we had a better initial guess we would get better results. So we solve a coarse problem exactly, interpolate it onto the fine grid and use that as the initial guess for the V-cycle. The result of doing this recursively is the Full Multi-Grid (FMG) Algorithm. Unlike the V-cycle, which was an iterative procedure, FMG is a direct solver. There is no successive improvement of the approximation. It straight away gives us an approximation that is within the discretization error. The FMG algorithm is given below. Step9: Let's call the FMG solver for the same problem Step10: It works wonderfully. The residual is large but the true error is within the discretization level. FMG is said to be scalable because the amount of work needed is linearly proportional to the size of the problem. In big-O notation, FMG is $\mathcal{O}(N)$, where N is the number of unknowns. Exact methods (Gaussian elimination, LU decomposition) are typically $\mathcal{O}(N^3)$. 2. Stationary iterative methods as preconditioners A preconditioner reduces the condition number of the coefficient matrix, thereby making it easier to solve.
We don't explicitly need a matrix, be it the coefficient matrix or the preconditioner, because we never access its elements by index. What we do need is the action of the matrix on a vector. That is, we need only the matrix-vector product. The coefficient matrix can be defined as a function that takes in a vector and returns the matrix-vector product. Any stationary method has an iteration matrix associated with it. This is easily seen for the Jacobi or GS methods. This iteration matrix can be used as a preconditioner, but we don't explicitly need it either. A stationary iterative method for solving an equation can be written as a Richardson iteration. When the initial guess is set to zero and one iteration is performed, what you get is the action of the preconditioner on the RHS vector. That is, we get a preconditioner-vector product, which is what we want. This allows us to use any black-box stationary iterative method as a preconditioner. To repeat: if there is a stationary iterative method that you want to use as a preconditioner, set the initial guess to zero, set the RHS to the vector you want to multiply the preconditioner with, and perform one iteration of the stationary method. We can use the multigrid V-cycle as a preconditioner this way. We can't use FMG because it is not an iterative method. The matrix as a function can be defined using LinearOperator from scipy.sparse.linalg. It gives us an object that works like a matrix insofar as the product with a vector is concerned. It can be used like a regular 2D numpy array in multiplication with a vector. This can be passed to CG(), GMRES() or BiCGStab() as a preconditioner. Having a symmetric preconditioner would be nice because it will retain the symmetry if the original problem is symmetric and we can still use CG. If the preconditioner is not symmetric, CG will not converge, and we would have to use a more general solver. Below is the code for defining a V-cycle preconditioner. The default is one V-cycle.
In the V-cycle, the defaults are one pre-sweep and one post-sweep. Step11: Let us define the Poisson matrix also as a LinearOperator Step12: The nested function is required because "matvec" in LinearOperator takes only one argument, the vector, but we require the grid details and boundary condition information to create the Poisson matrix. Now we will use these to solve a problem. Unlike earlier, where we used an analytical solution and RHS, we will start with a random vector, which will be our exact solution, and multiply it with the Poisson matrix to get the RHS vector for the problem. There is no analytical equation associated with the matrix equation. The scipy sparse solve routines do not return the number of iterations performed. We can use this wrapper to get the number of iterations Step13: Let's look at what happens with and without the preconditioner. Step14: Without the preconditioner ~150 iterations were needed, whereas with the V-cycle preconditioner the solution was obtained in far fewer iterations. Let's try with CG
Python Code: import numpy as np Explanation: This is functionally similar to the other notebook. All the operations here have been vectorized. This results in much faster code, but also in much less readable code. The vectorization also necessitated the replacement of the Gauss-Seidel smoother with under-relaxed Jacobi. That change has had some effect, since GS converges roughly twice as fast as Jacobi. The Making of a Preconditioner --- Vectorized Version This is a demonstration of a multigrid preconditioned Krylov solver in Python 3. The code and more examples are present on GitHub here. The problem solved is a Poisson equation on a rectangular domain with homogeneous Dirichlet boundary conditions. A finite difference method with a cell-centered discretization is used to get a second-order accurate solution, which is further improved to 4th order using deferred correction. The first step is a multigrid algorithm. This is the simplest 2D geometric multigrid solver. 1. Multigrid algorithm We need some terminology before going further. - Approximation - Residual - Exact solution (of the discrete problem) - Correction This is a geometric multigrid algorithm, where a series of nested grids is used. There are four parts to a multigrid algorithm: - Smoothing Operator (a.k.a. Relaxation) - Restriction Operator - Interpolation Operator (a.k.a. Prolongation Operator) - Bottom solver We will define each of these in sequence. These operators act on different quantities that are stored at the cell center. We will get to exactly what later on. To begin, import numpy.
End of explanation def Jacrelax(nx,ny,u,f,iters=1): ''' under-relaxed Jacobi iteration ''' dx=1.0/nx; dy=1.0/ny Ax=1.0/dx**2; Ay=1.0/dy**2 Ap=1.0/(2.0*(Ax+Ay)) #Dirichlet BC u[ 0,:] = -u[ 1,:] u[-1,:] = -u[-2,:] u[:, 0] = -u[:, 1] u[:,-1] = -u[:,-2] for it in range(iters): u[1:nx+1,1:ny+1] = 0.8*Ap*(Ax*(u[2:nx+2,1:ny+1] + u[0:nx,1:ny+1]) + Ay*(u[1:nx+1,2:ny+2] + u[1:nx+1,0:ny]) - f[1:nx+1,1:ny+1])+0.2*u[1:nx+1,1:ny+1] #Dirichlet BC u[ 0,:] = -u[ 1,:] u[-1,:] = -u[-2,:] u[:, 0] = -u[:, 1] u[:,-1] = -u[:,-2] res=np.zeros([nx+2,ny+2]) res[1:nx+1,1:ny+1]=f[1:nx+1,1:ny+1]-(( Ax*(u[2:nx+2,1:ny+1]+u[0:nx,1:ny+1]) + Ay*(u[1:nx+1,2:ny+2]+u[1:nx+1,0:ny]) - 2.0*(Ax+Ay)*u[1:nx+1,1:ny+1])) return u,res Explanation: 1.1 Smoothing operator This can be a certain number of Jacobi or a Gauss-Seidel iterations. Below is defined smoother that does under-relaxed Jacobi sweeps and returns the result along with the residual. End of explanation def prolong(nx,ny,v): ''' interpolate 'v' to the fine grid ''' v_f=np.zeros([2*nx+2,2*ny+2]) v_f[1:2*nx:2 ,1:2*ny:2 ] = 0.5625*v[1:nx+1,1:ny+1]+0.1875*(v[0:nx ,1:ny+1]+v[1:nx+1,0:ny] )+0.0625*v[0:nx ,0:ny ] v_f[2:2*nx+1:2,1:2*ny:2 ] = 0.5625*v[1:nx+1,1:ny+1]+0.1875*(v[2:nx+2,1:ny+1]+v[1:nx+1,0:ny] )+0.0625*v[2:nx+2,0:ny ] v_f[1:2*nx:2 ,2:2*ny+1:2] = 0.5625*v[1:nx+1,1:ny+1]+0.1875*(v[0:nx ,1:ny+1]+v[1:nx+1,2:ny+2])+0.0625*v[0:nx ,2:ny+2] v_f[2:2*nx+1:2,2:2*ny+1:2] = 0.5625*v[1:nx+1,1:ny+1]+0.1875*(v[2:nx+2,1:ny+1]+v[1:nx+1,2:ny+2])+0.0625*v[2:nx+2,2:ny+2] return v_f Explanation: 1.2 Interpolation Operator This operator takes values on a coarse grid and transfers them onto a fine grid. It is also called prolongation. The function below uses bilinear interpolation for this purpose. 'v' is on a coarse grid and we want to interpolate it on a fine grid and store it in v_f. 
End of explanation def restrict(nx,ny,v): ''' restrict 'v' to the coarser grid ''' v_c=np.zeros([nx+2,ny+2]) v_c[1:nx+1,1:ny+1]=0.25*(v[1:2*nx:2,1:2*ny:2]+v[1:2*nx:2,2:2*ny+1:2]+v[2:2*nx+1:2,1:2*ny:2]+v[2:2*nx+1:2,2:2*ny+1:2]) return v_c Explanation: 1.3 Restriction This is exactly the opposite of the interpolation. It takes values from the find grid and transfers them onto the coarse grid. It is kind of an averaging process. This is fundamentally different from interpolation. Each coarse grid point is surrounded by four fine grid points. So quite simply we take the value of the coarse point to be the average of 4 fine points. Here 'v' is the fine grid quantity and 'v_c' is the coarse grid quantity End of explanation def V_cycle(nx,ny,num_levels,u,f,level=1): if(level==num_levels):#bottom solve u,res=Jacrelax(nx,ny,u,f,iters=50) return u,res #Step 1: Relax Au=f on this grid u,res=Jacrelax(nx,ny,u,f,iters=1) #Step 2: Restrict residual to coarse grid res_c=restrict(nx//2,ny//2,res) #Step 3:Solve A e_c=res_c on the coarse grid. (Recursively) e_c=np.zeros_like(res_c) e_c,res_c=V_cycle(nx//2,ny//2,num_levels,e_c,res_c,level+1) #Step 4: Interpolate(prolong) e_c to fine grid and add to u u+=prolong(nx//2,ny//2,e_c) #Step 5: Relax Au=f on this grid u,res=Jacrelax(nx,ny,u,f,iters=1) return u,res Explanation: 1.4 Bottom Solver Note that we have looped over the coarse grid in both the cases above. It is easier to access the variables this way. The last part is the Bottom Solver. This must be something that gives us the exact/converged solution to what ever we feed it. What we feed to the bottom solver is the problem at the coarsest level. This has generally has very few points (e.g 2x2=4 in our case) and can be solved exactly by the smoother itself with few iterations. That is what we do here but, any other direct method can also be used. 50 Iterations are used here. If we coarsify to just one point, then just one iteration will solve it exactly. 
1.5 V-cycle Now that we have all the parts, we are ready to build our multigrid algorithm. First we will look at a V-cycle. It is self explanatory. It is a recursive function ,i.e., it calls itself. It takes as input an initial guess 'u', the rhs 'f', the number of multigrid levels 'num_levels' among other things. At each level the V cycle calls another V-cycle. At the lowest level the solving is exact. End of explanation #analytical solution def Uann(x,y): return (x**3-x)*(y**3-y) #RHS corresponding to above def source(x,y): return 6*x*y*(x**2+ y**2 - 2) Explanation: Thats it! Now we can see it in action. We can use a problem with a known solution to test our code. The following functions set up a rhs for a problem with homogenous dirichlet BC on the unit square. End of explanation #input max_cycles = 30 nlevels = 6 NX = 2*2**(nlevels-1) NY = 2*2**(nlevels-1) tol = 1e-15 #the grid has one layer of ghost cellss uann=np.zeros([NX+2,NY+2])#analytical solution u =np.zeros([NX+2,NY+2])#approximation f =np.zeros([NX+2,NY+2])#RHS #calcualte the RHS and exact solution DX=1.0/NX DY=1.0/NY xc=np.linspace(0.5*DX,1-0.5*DX,NX) yc=np.linspace(0.5*DY,1-0.5*DY,NY) XX,YY=np.meshgrid(xc,yc,indexing='ij') uann[1:NX+1,1:NY+1]=Uann(XX,YY) f[1:NX+1,1:NY+1] =source(XX,YY) Explanation: Let us set up the problem, discretization and solver details. The number of divisions along each dimension is given as a power of two function of the number of levels. In principle this is not required, but having it makes the inter-grid transfers easy. The coarsest problem is going to have a 2-by-2 grid. 
End of explanation print('mgd2d.py solver:') print('NX:',NX,', NY:',NY,', tol:',tol,'levels: ',nlevels) for it in range(1,max_cycles+1): u,res=V_cycle(NX,NY,nlevels,u,f) rtol=np.max(np.max(np.abs(res))) if(rtol<tol): break error=uann[1:NX+1,1:NY+1]-u[1:NX+1,1:NY+1] print(' cycle: ',it,', L_inf(res.)= ',rtol,',L_inf(true error): ',np.max(np.max(np.abs(error)))) error=uann[1:NX+1,1:NY+1]-u[1:NX+1,1:NY+1] print('L_inf (true error): ',np.max(np.max(np.abs(error)))) Explanation: Now we can call the solver End of explanation def FMG(nx,ny,num_levels,f,nv=1,level=1): if(level==num_levels):#bottom solve u=np.zeros([nx+2,ny+2]) u,res=Jacrelax(nx,ny,u,f,iters=50) return u,res #Step 1: Restrict the rhs to a coarse grid f_c=restrict(nx//2,ny//2,f) #Step 2: Solve the coarse grid problem using FMG u_c,_=FMG(nx//2,ny//2,num_levels,f_c,nv,level+1) #Step 3: Interpolate u_c to the fine grid u=prolong(nx//2,ny//2,u_c) #step 4: Execute 'nv' V-cycles for _ in range(nv): u,res=V_cycle(nx,ny,num_levels-level,u,f) return u,res Explanation: True error is the difference of the approximation with the analytical solution. It is largely the discretization error. This what would be present when we solve the discrete equation with a direct/exact method like gaussian elimination. We see that true error stops reducing at the 5th cycle. The approximation is not getting any better after this point. So we can stop after 5 cycles. But, in general we dont know the true error. In practice we use the norm of the (relative) residual as a stopping criterion. As the cycles progress the floating point round-off error limit is reached and the residual also stops decreasing. This was the multigrid V cycle. We can use this as preconditioner to a Krylov solver. But before we get to that let's complete the multigrid introduction by looking at the Full Multi-Grid algorithm. You can skip this section safely. 1.6 Full Multi-Grid We started with a zero initial guess for the V-cycle. 
Presumably, if we had a better initial guess we would get better results. So we solve a coarse problem exactly and interpolate it onto the fine grid and use that as the initial guess for the V-cycle. The result of doing this recursively is the Full Multi-Grid(FMG) Algorithm. Unlike the V-cycle which was an iterative procedure, FMG is a direct solver. There is no successive improvement of the approximation. It straight away gives us an approximation that is within the discretization error. The FMG algorithm is given below. End of explanation print('mgd2d.py FMG solver:') print('NX:',NX,', NY:',NY,', levels: ',nlevels) u,res=FMG(NX,NY,nlevels,f,nv=1) rtol=np.max(np.max(np.abs(res))) print(' FMG L_inf(res.)= ',rtol) error=uann[1:NX+1,1:NY+1]-u[1:NX+1,1:NY+1] print('L_inf (true error): ',np.max(np.max(np.abs(error)))) Explanation: Lets call the FMG solver for the same problem End of explanation from scipy.sparse.linalg import LinearOperator,bicgstab,cg def MGVP(nx,ny,num_levels): ''' Multigrid Preconditioner. Returns a (scipy.sparse.linalg.) LinearOperator that can be passed to Krylov solvers as a preconditioner. ''' def pc_fn(v): u =np.zeros([nx+2,ny+2]) f =np.zeros([nx+2,ny+2]) f[1:nx+1,1:ny+1] =v.reshape([nx,ny]) #in practice this copying can be avoived #perform one V cycle u,res=V_cycle(nx,ny,num_levels,u,f) return u[1:nx+1,1:ny+1].reshape(v.shape) M=LinearOperator((nx*ny,nx*ny), matvec=pc_fn) return M Explanation: It works wonderfully. The residual is large but the true error is within the discretization level. FMG is said to be scalable because the amount of work needed is linearly proportional to the the size of the problem. In big-O notation, FMG is $\mathcal{O}(N)$. Where N is the number of unknowns. Exact methods (Gaussian Elimination, LU decomposition ) are typically $\mathcal{O}(N^3)$ 2. Stationary iterative methods as preconditioners A preconditioner reduces the condition number of the coefficient matrix, thereby making it easier to solve. 
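How bad the conditioning actually is can be seen directly: for the discrete Laplacian the condition number grows roughly like the square of the number of points per dimension, which is each doubling of the grid costing a factor of about 4 in conditioning. A small sketch on the 1D analogue (added here for illustration; it is not part of the original solver):

```python
import numpy as np

def laplacian_1d(n):
    # Tridiagonal 1D Laplacian with Dirichlet BCs; the 1/dx**2 scaling
    # is omitted since a scalar factor does not change the condition number.
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

for n in (8, 16, 32):
    print('n = %2d  cond = %8.1f' % (n, np.linalg.cond(laplacian_1d(n))))
# each doubling of n multiplies the condition number by roughly 4
```

This quadratic growth is the thing a good preconditioner is supposed to flatten out.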
We don't explicitly need a matrix, be it the coefficient matrix or the preconditioner, because we never access its elements by index. What we do need is the action of the matrix on a vector. That is, we need only the matrix-vector product. The coefficient matrix can be defined as a function that takes in a vector and returns the matrix-vector product. Any stationary method has an iteration matrix associated with it. This is easily seen for the Jacobi or GS methods. This iteration matrix can be used as a preconditioner, but we don't explicitly need it either. A stationary iterative method for solving an equation can be written as a Richardson iteration. When the initial guess is set to zero and one iteration is performed, what you get is the action of the preconditioner on the RHS vector. That is, we get a preconditioner-vector product, which is what we want. This allows us to use any black-box stationary iterative method as a preconditioner. To repeat: if there is a stationary iterative method that you want to use as a preconditioner, set the initial guess to zero, set the RHS to the vector you want to multiply the preconditioner with, and perform one iteration of the stationary method. We can use the multigrid V-cycle as a preconditioner this way. We can't use FMG because it is not an iterative method. The matrix as a function can be defined using LinearOperator from scipy.sparse.linalg. It gives us an object that works like a matrix insofar as the product with a vector is concerned. It can be used like a regular 2D numpy array in multiplication with a vector. This can be passed to CG(), GMRES() or BiCGStab() as a preconditioner. Having a symmetric preconditioner would be nice because it will retain the symmetry if the original problem is symmetric and we can still use CG. If the preconditioner is not symmetric, CG will not converge, and we would have to use a more general solver. Below is the code for defining a V-cycle preconditioner. The default is one V-cycle.
In the V-cycle, the defaults are one pre-sweep, one post-sweep. End of explanation def Laplace(nx,ny): ''' Action of the Laplace matrix on a vector v ''' def mv(v): u =np.zeros([nx+2,ny+2]) u[1:nx+1,1:ny+1]=v.reshape([nx,ny]) dx=1.0/nx; dy=1.0/ny Ax=1.0/dx**2; Ay=1.0/dy**2 #BCs. Needs to be generalized! u[ 0,:] = -u[ 1,:] u[-1,:] = -u[-2,:] u[:, 0] = -u[:, 1] u[:,-1] = -u[:,-2] ut = (Ax*(u[2:nx+2,1:ny+1]+u[0:nx,1:ny+1]) + Ay*(u[1:nx+1,2:ny+2]+u[1:nx+1,0:ny]) - 2.0*(Ax+Ay)*u[1:nx+1,1:ny+1]) return ut.reshape(v.shape) A = LinearOperator((nx*ny,nx*ny), matvec=mv) return A Explanation: Let us define the Poisson matrix also as a LinearOperator End of explanation def solve_sparse(solver,A, b,tol=1e-10,maxiter=500,M=None): num_iters = 0 def callback(xk): nonlocal num_iters num_iters+=1 x,status=solver(A, b,tol=tol,maxiter=maxiter,callback=callback,M=M) return x,status,num_iters Explanation: The nested function is required because "matvec" in LinearOperator takes only one argument-- the vector. But we require the grid details and boundary condition information to create the Poisson matrix. Now will use these to solve a problem. Unlike earlier where we used an analytical solution and RHS, we will start with a random vector which will be our exact solution, and multiply it with the Poisson matrix to get the Rhs vector for the problem. There is no analytical equation associated with the matrix equation. The scipy sparse solve routines do not return the number of iterations performed. We can use this wrapper to get the number of iterations End of explanation A = Laplace(NX,NY) #Exact solution and RHS uex=np.random.rand(NX*NY,1) b=A*uex #Multigrid Preconditioner M=MGVP(NX,NY,nlevels) u,info,iters=solve_sparse(bicgstab,A,b,tol=1e-10,maxiter=500) print('Without preconditioning. status:',info,', Iters: ',iters) error=uex-u print('error :',np.max(np.abs(error))) u,info,iters=solve_sparse(bicgstab,A,b,tol=1e-10,maxiter=500,M=M) print('With preconditioning. 
status:',info,', Iters: ',iters) error=uex-u print('error :',np.max(np.abs(error))) Explanation: Lets look at what happens with and without the preconditioner. End of explanation u,info,iters=solve_sparse(cg,A,b,tol=1e-10,maxiter=500) print('Without preconditioning. status:',info,', Iters: ',iters) error=uex-u print('error :',np.max(np.abs(error))) u,info,iters=solve_sparse(cg,A,b,tol=1e-10,maxiter=500,M=M) print('With preconditioning. status:',info,', Iters: ',iters) error=uex-u print('error :',np.max(np.abs(error))) Explanation: Without the preconditioner ~150 iterations were needed, where as with the V-cycle preconditioner the solution was obtained in far fewer iterations. Let's try with CG: End of explanation
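The "zero initial guess, one iteration" idea used for the V-cycle preconditioner above can be sanity-checked in isolation with plain Jacobi: one sweep started from zero returns D^{-1} times the input vector, which is exactly the Jacobi preconditioner applied to it. A minimal sketch on a generic SPD test matrix (the matrix and sizes here are made up for illustration; this is not the Poisson problem above):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n = 50
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)          # SPD, diagonally dominant test matrix
d = np.diag(A).copy()

def jacobi_precond(v):
    # One Jacobi sweep from a zero initial guess:
    # x1 = x0 + D^-1 (v - A x0) = D^-1 v, i.e. the preconditioner-vector product.
    x0 = np.zeros_like(v)
    return x0 + (v - A @ x0) / d

v = rng.standard_normal(n)
assert np.allclose(jacobi_precond(v), v / d)  # exactly the action of D^-1

M = LinearOperator((n, n), matvec=jacobi_precond)
b = rng.standard_normal(n)
x, info = cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))        # info == 0 means converged
```

Swapping `jacobi_precond` for one V-cycle (as `MGVP` does above) is the same pattern with a much stronger stationary method inside.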
7,524
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Nikola Static site generator Features It’s just a bunch of HTML files and assets. Incremental builds/rebuild using doit, so Nikola is fast. Multilingual Extensible Friendly CLI Multiple input formats such as reStructuredText, Markdown, HTML and Jupyter Notebooks (out of the box as part of the core!!) The core of the Nikola / Jupyter integration https Step4: Some other gems Step5: Let see it in action! Step6: ``` Creating New Post Title
Python Code: from nbconvert.exporters import HTMLExporter ... def _compile_string(self, nb_json): Export notebooks as HTML strings. self._req_missing_ipynb() c = Config(self.site.config['IPYNB_CONFIG']) c.update(get_default_jupyter_config()) exportHtml = HTMLExporter(config=c) body, _ = exportHtml.from_notebook_node(nb_json) return body Explanation: Nikola Static site generator Features It’s just a bunch of HTML files and assets. Incremental builds/rebuild using doit, so Nikola is fast. Multilingual Extensible Friendly CLI Multiple input formats such as reStructuredText, Markdown, HTML and Jupyter Notebooks (out of the box as part of the core!!) The core of the Nikola / Jupyter integration https://github.com/getnikola/nikola/blob/master/nikola/plugins/compile/ipynb.py End of explanation def read_metadata(self, post, lang=None): Read metadata directly from ipynb file. As ipynb files support arbitrary metadata as json, the metadata used by Nikola will be assume to be in the 'nikola' subfield. self._req_missing_ipynb() if lang is None: lang = LocaleBorg().current_lang source = post.translated_source_path(lang) with io.open(source, "r", encoding="utf8") as in_file: nb_json = nbformat.read(in_file, current_nbformat) # Metadata might not exist in two-file posts or in hand-crafted # .ipynb files. return nb_json.get('metadata', {}).get('nikola', {}) def create_post(self, path, **kw): Create a new post. ... if content.startswith("{"): # imported .ipynb file, guaranteed to start with "{" because it’s JSON. nb = nbformat.reads(content, current_nbformat) else: nb = nbformat.v4.new_notebook() nb["cells"] = [nbformat.v4.new_markdown_cell(content)] Explanation: Some other gems End of explanation cd /media/data/devel/damian_blog/ !ls title = "We are above 1000 stars!" tags_list = ['Jupyter', 'python', 'reveal', 'RISE', 'slideshow'] tags = ', '.join(tags_list) !nikola new_post -f ipynb -t "{title}" --tags="{tags}" Explanation: Let see it in action! 
End of explanation !nikola build !nikola deploy from IPython.display import IFrame IFrame("http://www.damian.oquanta.info/", 980, 600) Explanation: ``` Creating New Post Title: We are above 1000 stars! Scanning posts......done! [2017-07-12T16:45:00Z] NOTICE: compile_ipynb: No kernel specified, assuming "python3". [2017-07-12T16:45:01Z] INFO: new_post: Your post's text is at: posts/we-are-above-1000-stars.ipynb ``` End of explanation
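The read_metadata() hook above works because an .ipynb file is plain JSON and Nikola tucks its per-post fields under metadata['nikola']. A stdlib-only sketch of that layout (the concrete field values here are invented for illustration; real files come from nbformat):

```python
import json

# Minimal .ipynb-shaped document; Nikola keeps its post metadata
# in the 'nikola' subfield, which read_metadata() above extracts.
nb = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {"nikola": {"title": "We are above 1000 stars!",
                            "tags": "Jupyter, python, reveal"}},
    "cells": [{"cell_type": "markdown", "metadata": {}, "source": "Hello"}],
}
on_disk = json.dumps(nb)                  # what would sit in the posts/ folder
meta = json.loads(on_disk).get("metadata", {}).get("nikola", {})
print(meta["title"])                      # -> We are above 1000 stars!
```

The chained .get() calls with empty-dict defaults are what keep hand-crafted or two-file posts (which may lack the subfield) from raising.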
7,525
Given the following text description, write Python code to implement the functionality described below step by step Description: Chapter 1 Introduction Step1: History
Python Code: show_image('fig1_5.png', figsize=[12, 10]) show_image('fig1_4.png', figsize=[10, 8]) Explanation: Chapter 1 Introduction End of explanation show_image('fig1_11.png', figsize=[10, 8]) Explanation: History: distributed representation back-propagation long short-term memory (LSTM) network: used for many sequence modeling tasks, including natural language processing. Recurrent neural networks: sequence-to-sequence learning, such as machine translation. self-programming technology reinforcement learning: an autonomous agent must learn to perform a task by trial and error, without any guidance from the human operator. Trend: + Increasing Dataset Sizes 1. Deep learning has become more useful as the amount of available training data has increased. 2. Fortunately, the amount of skill required reduces as the amount of training data increases. 3. rule of thumb for supervised learning: + acceptable performance: 5,000 labeled examples per category. + match or exceed human performance: at least 10 million labeled examples. + Increasing Model Sizes End of explanation
7,526
Given the following text description, write Python code to implement the functionality described below step by step Description: =============================================================== Model selection with Probabilistic PCA and Factor Analysis (FA) =============================================================== Probabilistic PCA and Factor Analysis are probabilistic models. The consequence is that the likelihood of new data can be used for model selection and covariance estimation. Here we compare PCA and FA with cross-validation on low rank data corrupted with homoscedastic noise (noise variance is the same for each feature) or heteroscedastic noise (noise variance is the different for each feature). In a second step we compare the model likelihood to the likelihoods obtained from shrinkage covariance estimators. One can observe that with homoscedastic noise both FA and PCA succeed in recovering the size of the low rank subspace. The likelihood with PCA is higher than FA in this case. However PCA fails and overestimates the rank when heteroscedastic noise is present. Under appropriate circumstances the low rank models are more likely than shrinkage models. The automatic estimation from Automatic Choice of Dimensionality for PCA. NIPS 2000 Step1: Create the data Step2: Fit the models
Python Code: # Authors: Alexandre Gramfort # Denis A. Engemann # License: BSD 3 clause import numpy as np import matplotlib.pyplot as plt from scipy import linalg from sklearn.decomposition import PCA, FactorAnalysis from sklearn.covariance import ShrunkCovariance, LedoitWolf from sklearn.model_selection import cross_val_score from sklearn.model_selection import GridSearchCV print(__doc__) Explanation: =============================================================== Model selection with Probabilistic PCA and Factor Analysis (FA) =============================================================== Probabilistic PCA and Factor Analysis are probabilistic models. The consequence is that the likelihood of new data can be used for model selection and covariance estimation. Here we compare PCA and FA with cross-validation on low rank data corrupted with homoscedastic noise (noise variance is the same for each feature) or heteroscedastic noise (noise variance is the different for each feature). In a second step we compare the model likelihood to the likelihoods obtained from shrinkage covariance estimators. One can observe that with homoscedastic noise both FA and PCA succeed in recovering the size of the low rank subspace. The likelihood with PCA is higher than FA in this case. However PCA fails and overestimates the rank when heteroscedastic noise is present. Under appropriate circumstances the low rank models are more likely than shrinkage models. The automatic estimation from Automatic Choice of Dimensionality for PCA. NIPS 2000: 598-604 by Thomas P. Minka is also compared. End of explanation n_samples, n_features, rank = 1000, 50, 10 sigma = 1. rng = np.random.RandomState(42) U, _, _ = linalg.svd(rng.randn(n_features, n_features)) X = np.dot(rng.randn(n_samples, rank), U[:, :rank].T) # Adding homoscedastic noise X_homo = X + sigma * rng.randn(n_samples, n_features) # Adding heteroscedastic noise sigmas = sigma * rng.rand(n_features) + sigma / 2. 
X_hetero = X + rng.randn(n_samples, n_features) * sigmas Explanation: Create the data End of explanation n_components = np.arange(0, n_features, 5) # options for n_components def compute_scores(X): pca = PCA(svd_solver='full') fa = FactorAnalysis() pca_scores, fa_scores = [], [] for n in n_components: pca.n_components = n fa.n_components = n pca_scores.append(np.mean(cross_val_score(pca, X))) fa_scores.append(np.mean(cross_val_score(fa, X))) return pca_scores, fa_scores def shrunk_cov_score(X): shrinkages = np.logspace(-2, 0, 30) cv = GridSearchCV(ShrunkCovariance(), {'shrinkage': shrinkages}) return np.mean(cross_val_score(cv.fit(X).best_estimator_, X)) def lw_score(X): return np.mean(cross_val_score(LedoitWolf(), X)) for X, title in [(X_homo, 'Homoscedastic Noise'), (X_hetero, 'Heteroscedastic Noise')]: pca_scores, fa_scores = compute_scores(X) n_components_pca = n_components[np.argmax(pca_scores)] n_components_fa = n_components[np.argmax(fa_scores)] pca = PCA(svd_solver='full', n_components='mle') pca.fit(X) n_components_pca_mle = pca.n_components_ print("best n_components by PCA CV = %d" % n_components_pca) print("best n_components by FactorAnalysis CV = %d" % n_components_fa) print("best n_components by PCA MLE = %d" % n_components_pca_mle) plt.figure() plt.plot(n_components, pca_scores, 'b', label='PCA scores') plt.plot(n_components, fa_scores, 'r', label='FA scores') plt.axvline(rank, color='g', label='TRUTH: %d' % rank, linestyle='-') plt.axvline(n_components_pca, color='b', label='PCA CV: %d' % n_components_pca, linestyle='--') plt.axvline(n_components_fa, color='r', label='FactorAnalysis CV: %d' % n_components_fa, linestyle='--') plt.axvline(n_components_pca_mle, color='k', label='PCA MLE: %d' % n_components_pca_mle, linestyle='--') # compare with other covariance estimators plt.axhline(shrunk_cov_score(X), color='violet', label='Shrunk Covariance MLE', linestyle='-.') plt.axhline(lw_score(X), color='orange', label='LedoitWolf MLE' % n_components_pca_mle, 
linestyle='-.') plt.xlabel('nb of components') plt.ylabel('CV scores') plt.legend(loc='lower right') plt.title(title) plt.show() Explanation: Fit the models End of explanation
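The ShrunkCovariance estimator used in the comparison has a simple closed form, a convex combination of the empirical covariance with a scaled identity: (1 - shrinkage) * cov + shrinkage * (trace(cov)/p) * I. A numpy-only sketch of why that helps conditioning (synthetic numbers, not this example's data):

```python
import numpy as np

rng = np.random.RandomState(42)
X = rng.randn(30, 10)                   # few samples -> noisy covariance estimate
emp = np.cov(X, rowvar=False)

def shrunk(cov, alpha):
    # (1 - alpha) * cov + alpha * (trace(cov)/p) * I :
    # eigenvalues are pulled toward their mean, the trace is unchanged.
    p = cov.shape[0]
    mu = np.trace(cov) / p
    return (1.0 - alpha) * cov + alpha * mu * np.eye(p)

S = shrunk(emp, 0.2)
print(np.linalg.cond(emp), np.linalg.cond(S))   # conditioning improves
print(np.trace(emp), np.trace(S))               # total variance preserved
```

Because extreme eigenvalues are contracted toward the mean eigenvalue, the shrunk estimate is better conditioned, which is why such estimators compete with the low-rank models on noisy data.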
7,527
Given the following text description, write Python code to implement the functionality described below step by step Description: Analysing PPG signals from smart rings There's a range of smart rings that have recently hit the market. Among other things they also record PPG signals on the finger, so let's dive into how to analyse these with HeartPy. The example file I've included contains only the PPG signal in order to keep the file size low. Many smart rings also record skin response, acceleration and gyroscopic data. It was recorded at 32 Hz Step1: First let's take a look at the whole signal Step2: I spot a few gaps. This means periods where the device outputs NaN values. Let's replace these with numerical data (0's), otherwise some numerical parts of the analysis will fail. Also let's plot the first 5 minutes and get a look at the signal Step3: Doesn't seem like much of a signal is going on. Let's go to the second five minutes and zoom in a bit Step4: It's not great but we definitely got a signal in this segment! Analysing is relatively straightforward. We need to do a few standard signal processing tricks Step5: Well that does look a lot better Step6: Let's zoom in on the 5 minute segment minute by minute Step7: Even though the signal is quite faint, it is still possible to extract the vast majority of peaks from it. Now the last segment is noisy even after filtering, which in combination with the smart ring as a source (low amplitude, low sample rate!) results in some incorrect acceptances. This causes high values of RMSSD, SDNN, SDSD, pNN20 and pNN50, which are sensitive to outliers. HeartPy comes with an option to clean the peak-peak intervals prior to analysis. It will attempt to identify and reject outliers. This works better the longer the signal is! After all, if there are many 'good' peak-peak intervals relative to few 'bad' ones, outlier detection works better. Available methods to detect outliers are 'quotient-filter', 'iqr' (based on the interquartile range) and 'z-score' (based on the modified z-score method). Default is 'quotient-filter'
Available methods to detect outiers are 'quotient-filter', 'iqr' (based on the interquartile range) and 'z-score' (based on the modified z-score method). Default is 'quotient-filter'
Python Code: #Let's import some packages first import numpy as np import matplotlib.pyplot as plt import heartpy as hp sample_rate = 32 #load the example file data = hp.get_data('ring_data.csv') Explanation: Analysing PPG signals from smart rings There's a range of Smart Rings that recently hit the market. Among other things they also record PPG signals on the finger, so let's dive into how to analyse these with HeartPy. The example file I've included contains only the PPG signal in order to keep filesize low. Many smart rings also record skin response, acceleration and gyroscopic data. It was recorded at 32Hz End of explanation plt.figure(figsize=(12,6)) plt.plot(data) plt.show() Explanation: First let's take a look at the whole signal End of explanation #missing pieces! Let's replace data = np.nan_to_num(data) #plot first 5 minutes plt.figure(figsize=(12,6)) plt.plot(data[0:(5 * 60) * sample_rate]) plt.ylim(15000, 17000) plt.show() Explanation: I spot a few gaps. This means periods where the device outputs NaN values. Let's replace with numerical data (0's) otherwise some numerical parts of the analysis will fail. Also let's plot the first 5 minutes and get a look at the signal End of explanation plt.figure(figsize=(12,6)) plt.plot(data[(5 * 60) * sample_rate:(10 * 60) * sample_rate]) plt.show() plt.figure(figsize=(12,6)) plt.title('zoomed in!') plt.plot(data[(5 * 60) * sample_rate:(6 * 60) * sample_rate]) plt.show() Explanation: Doesn't seem like much of a signal is going on. Let's go to the second five minutes and zoom in a bit End of explanation #First let's filter, there's a standard butterworth #filter implementations available in HeartPy under #filtersignal(). We will use the bandpass variant. 
#we filter out frequencies below 0.8Hz (<= 48 bpm) #and above 3Hz (>= 180 bpm) filtered_ppg = hp.filter_signal(data[(5 * 60) * sample_rate: (10 * 60) * sample_rate], cutoff = [0.8, 2.5], filtertype = 'bandpass', sample_rate = sample_rate, order = 3, return_top = False) #And let's plot the same segment as under 'zoomed in!' above plt.figure(figsize=(12,6)) plt.plot(filtered_ppg[0:((2*60)*32)]) plt.show() Explanation: It's not great but we definitely got a signal in this segment! Analysing is relatively straightforward. We need to do a few standard signal processing tricks: filter the signal run the analysis visualise the analysis End of explanation #Run the analysis. Using 'high_precision' means a spline will #be fitted to all peaks and then the maximum determined. #this means we can have a much higher peak position accuracy than #the 32Hz would allow wd, m = hp.process(filtered_ppg, sample_rate=sample_rate, high_precision = True) plt.figure(figsize=(12,6)) hp.plotter(wd, m) for key in m.keys(): print('%s: %f' %(key, m[key])) Explanation: Well that does look a lot better End of explanation plt.figure(figsize=(12,6)) plt.xlim(0, (1 * 60) * sample_rate) hp.plotter(wd, m, title='first minute') plt.figure(figsize=(12,6)) plt.xlim((1 * 60) * sample_rate, (2 * 60) * sample_rate) hp.plotter(wd, m, title='second minute') plt.figure(figsize=(12,6)) plt.xlim((2 * 60) * sample_rate, (3 * 60) * sample_rate) hp.plotter(wd, m, title='third minute') plt.figure(figsize=(12,6)) plt.xlim((3 * 60) * sample_rate, (4 * 60) * sample_rate) hp.plotter(wd, m, title='fourth minute') plt.figure(figsize=(12,6)) plt.xlim((4 * 60) * sample_rate, (5 * 60) * sample_rate) hp.plotter(wd, m, title='fifth minute') Explanation: Let's zoom in on the 5 minute segment minute by minute End of explanation wd, m = hp.process(filtered_ppg, sample_rate=sample_rate, high_precision = True, clean_rr = True) plt.figure(figsize=(12,6)) hp.plotter(wd, m) for key in m.keys(): print('%s: %f' %(key, m[key])) #and plot 
hp.plot_poincare(wd, m) Explanation: Even though the signal is quite faint, it is still possible to extract the vast majority of peaks from it. Now the last segment is noisy even after filtering, which in combination with the smart ring as a source (low amplitude, low sample rate!) results in some incorrect acceptances. This causes high values of RMSSD, SDNN and SDSD, pNN20 and pNN50, which are sensitive to outliers. HeartPy comes with an option to clean the peak-peak intervals prior to analysis. It will attempt to identify and reject outliers. This works better the longer the signal is! After all, if there's many 'good' peak-peak intervals to relatively few 'bad' ones, outlier detection works better. Available methods to detect outliers are 'quotient-filter', 'iqr' (based on the interquartile range) and 'z-score' (based on the modified z-score method). Default is 'quotient-filter' End of explanation
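The 0.8–2.5 Hz bandpass step used above can also be illustrated with a crude FFT "brick-wall" filter (an illustration of the idea only; HeartPy's filter_signal() uses a Butterworth filter instead):

```python
import numpy as np

def fft_bandpass(signal, low_hz, high_hz, sample_rate):
    #zero out every frequency bin outside [low_hz, high_hz]
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[(freqs < low_hz) | (freqs > high_hz)] = 0
    return np.fft.irfft(spectrum, n=len(signal))

#synthetic PPG: a 1.2Hz (72 bpm) pulse plus strong 0.1Hz baseline wander
fs = 32
t = np.arange(0, 60, 1.0 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 5 * np.sin(2 * np.pi * 0.1 * t)
filtered = fft_bandpass(ppg, 0.8, 2.5, fs)
```

After filtering, the slow baseline wander is gone and only the pulse component remains, which is exactly what the bandpass step buys us before peak detection.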
Given the following text description, write Python code to implement the functionality described below step by step Description: Target Connectivity Configurable logging system All LISA modules have been updated to use a more consistent logging which can be configured using a single configuraton file Step1: Each module has a unique name which can be used to assign a priority level for messages generated by that module. Step2: The default logging level for a notebook can also be easily configured using this few lines Step3: Removed Juno/Juno2 distinction Juno R0 and Juno R2 boards are now accessible by specifying "juno" in the target configuration. The previous distinction was required because of a different way for the two boards to report HWMON channels. This distinction is not there anymore and thus Juno boards can now be connected using the same platform data. Step4: Executor Module Simplified tests definition using in-code configurations Automated LISA tests previously configured the Executor using JSON files. This is still possible, but the existing tests now use Python dictionaries directly in the code. In the short term, this allows de-duplicating configuration elements that are shared between multiple tests. It will later allow more flexible test configuration. See tests/eas/acceptance.py for an example of how this is currently used. Support to write files from Executor configuration https Step5: can be used to run a test where the platform is configured to - disable the "sched_is_big_little" flag (if present) - set to 50ms the "sched_migration_cost_ns" Nortice that a value written in a file is verified only if the file path is prefixed by a '/'. Otherwise, the write never fails, e.g. if the file does not exists. 
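The write semantics just described can be sketched as a small helper (a hypothetical illustration of the documented behaviour, not LISA's actual code): a leading '!' marks a write as best-effort, while a plain '/' path must succeed.

```python
def apply_target_files(files, write_fn):
    #hypothetical sketch of the described semantics (not LISA's code):
    #'!'-prefixed paths are written best-effort, plain '/' paths must
    #succeed or the configuration step fails
    for path, value in files.items():
        best_effort = path.startswith('!')
        real_path = path.lstrip('!')
        try:
            write_fn(real_path, value)
        except IOError:
            if not best_effort:
                raise

#fake target filesystem: only sched_is_big_little exists
target_fs = {'/proc/sys/kernel/sched_is_big_little': None}
def fake_write(path, value):
    if path not in target_fs:
        raise IOError(path)
    target_fs[path] = value

apply_target_files({
    '/proc/sys/kernel/sched_is_big_little': '0',
    '!/proc/sys/kernel/sched_migration_cost_ns': '500000',
}, fake_write)
```

The best-effort write to the missing sched_migration_cost_ns file is silently skipped, while the plain-path write is applied and would raise on failure.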
Support to freeze user-space across a test https Step6: Android Support Added support for Pixel Phones A new platform definition file has been added which allows you to easily set up a connection with a Pixel device Step7: Added UiBench workload A new Android benchmark has been added to run UiBench provided tests. Here is a notebook which provides an example of how to run this test on your Android target Step8: This folder is configured to be ignored by git, thus it's the best place to place your work-in-progress notebooks. Example notebook restructuring Example notebooks have been consolidated and better organized by topic
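The per-module levels that the single logging.conf file provides can be mimicked with the standard library alone; the module names below are only examples of the idea:

```python
import logging
import logging.config

logging.config.dictConfig({
    'version': 1,
    'formatters': {
        'lisa': {'format': '%(asctime)s %(name)12s : %(levelname)-8s : %(message)s'},
    },
    'handlers': {
        'console': {'class': 'logging.StreamHandler', 'formatter': 'lisa'},
    },
    #one logger entry per module name, each with its own priority level
    'loggers': {
        'TestEnv': {'level': 'INFO'},
        'Executor': {'level': 'DEBUG'},
    },
    'root': {'handlers': ['console'], 'level': 'WARNING'},
})

logging.getLogger('Executor').debug('shown: this module is at DEBUG')
logging.getLogger('TestEnv').debug('hidden: this module is at INFO')
```

Each named logger overrides the root level, so one module can be made verbose without flooding the notebook with messages from every other module.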
Python Code: !head -n12 $LISA_HOME/logging.conf Explanation: Target Connectivity Configurable logging system All LISA modules have been updated to use a more consistent logging which can be configured using a single configuraton file: End of explanation !head -n30 $LISA_HOME/logging.conf | tail -n5 Explanation: Each module has a unique name which can be used to assign a priority level for messages generated by that module. End of explanation import logging from conf import LisaLogging LisaLogging.setup(level=logging.INFO) Explanation: The default logging level for a notebook can also be easily configured using this few lines End of explanation from env import TestEnv te = TestEnv({ 'platform' : 'linux', 'board' : 'juno', 'host' : '10.1.210.45', 'username' : 'root' }) target = te.target Explanation: Removed Juno/Juno2 distinction Juno R0 and Juno R2 boards are now accessible by specifying "juno" in the target configuration. The previous distinction was required because of a different way for the two boards to report HWMON channels. This distinction is not there anymore and thus Juno boards can now be connected using the same platform data. End of explanation tests_conf = { "confs" : [ { "tag" : "base", "flags" : "ftrace", "sched_features" : "NO_ENERGY_AWARE", "cpufreq" : { "governor" : "performance", }, "files" : { '/proc/sys/kernel/sched_is_big_little' : '0', '!/proc/sys/kernel/sched_migration_cost_ns' : '500000' }, } ] } Explanation: Executor Module Simplified tests definition using in-code configurations Automated LISA tests previously configured the Executor using JSON files. This is still possible, but the existing tests now use Python dictionaries directly in the code. In the short term, this allows de-duplicating configuration elements that are shared between multiple tests. It will later allow more flexible test configuration. See tests/eas/acceptance.py for an example of how this is currently used. 
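The de-duplication that in-code dictionaries enable can be sketched like this (an illustration only, not the actual test code; the tags and flags are made up):

```python
import copy

#configuration elements shared between multiple tests live in one place
BASE_CONF = {
    'flags': 'ftrace',
    'cpufreq': {'governor': 'performance'},
}

def make_conf(tag, **overrides):
    #derive a per-test configuration from the shared base
    conf = copy.deepcopy(BASE_CONF)
    conf['tag'] = tag
    conf.update(overrides)
    return conf

eas_conf = make_conf('eas', sched_features='ENERGY_AWARE')
noeas_conf = make_conf('noeas', sched_features='NO_ENERGY_AWARE')
```

Each test only states what differs from the base, which is exactly the duplication that separate JSON files could not avoid.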
Support to write files from Executor configuration https://github.com/ARM-software/lisa/pull/209 A new "files" attribute can be added to Executor configurations which allows to specify a list files (e.g. sysfs and procfs) and values to be written to that files. For example, the following test configuration: End of explanation from trace import Trace import json with open('/home/patbel01/Code/lisa/results/LisaInANutshell_Backup/platform.json', 'r') as fh: platform = json.load(fh) trace = Trace(platform, '/home/patbel01/Code/lisa/results/LisaInANutshell_Backup/trace.dat', events=['sched_switch'] ) logging.info("%d tasks loaded from trace", len(trace.getTasks())) logging.info("The rt-app task in this trace has these PIDs:") logging.info(" %s", trace.getTasks()['rt-app']) Explanation: can be used to run a test where the platform is configured to - disable the "sched_is_big_little" flag (if present) - set to 50ms the "sched_migration_cost_ns" Nortice that a value written in a file is verified only if the file path is prefixed by a '/'. Otherwise, the write never fails, e.g. if the file does not exists. Support to freeze user-space across a test https://github.com/ARM-software/lisa/pull/227 Executor learned the "freeze_userspace" conf flag. When this flag is present, LISA uses the devlib freezer to freeze as much of userspace as possible while the experiment workload is executing, in order to reduce system noise. The Executor example notebook: https://github.com/ARM-software/lisa/blob/master/ipynb/examples/utils/executor_example.ipynb gives an example of using this feature. 
Trace module Tasks name pre-loading When the Trace module is initialized, by default all the tasks in that trace are identified and exposed via the usual getTask() method: End of explanation !cat $LISA_HOME/libs/utils/platforms/pixel.json from env import TestEnv te = TestEnv({ 'platform' : 'android', 'board' : 'pixel', 'ANDROID_HOME' : '/home/patbel01/Code/lisa/tools/android-sdk-linux/' }, force_new=True) target = te.target Explanation: Android Support Added support for Pixel Phones A new platform definition file has been added which allows to easily setup a connection with an Pixel device: End of explanation !tree -L 1 ~/Code/lisa/ipynb Explanation: Added UiBench workload A new Android benchmark has been added to run UiBench provided tests. Here is a notebook which provides an example of how to run this test on your android target: https://github.com/ARM-software/lisa/blob/master/ipynb/examples/android/benchmarks/Android_UiBench.ipynb Tests Intial version of the preliminary tests Preliminary tests aim at verifying some basic support required for a complete functional EAS solution. A initial version of these preliminary tests is now available: https://github.com/ARM-software/lisa/blob/master/tests/eas/preliminary.py and it will be extended in the future to include more and more tests. Capacity capping test A new test has been added to verify that capacity capping is working as expected: https://github.com/ARM-software/lisa/blob/master/tests/eas/capacity_capping.py Acceptance tests reworked The EAS acceptace test collects a set of platform independent tests to verify basic EAS beahviours. 
This test has been cleaned up and it's now avaiable with a detailed documentation: https://github.com/ARM-software/lisa/blob/master/tests/eas/acceptance.py Notebooks Added scratchpad notebooks A new scratchpad folder has been added under the ipynb folder which collects the available notebooks: End of explanation !tree -L 1 ~/Code/lisa/ipynb/examples Explanation: This folder is configured to be ignored by git, thus it's the best place to place your work-in-progress notebooks. Example notebook restructoring Example notebooks has been consolidated and better organized by topic: End of explanation
7,529
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction This tutorial shows how to use Google Cloud Platform technologies to work with structured healthcare data to build a predictive model. Synthea Synthea is a data generator that simulates the lives of patients based on several medical modules. Each module models a different medical condition based on some real world statistics. Each patient in the Synthea dataset dies either due to medical reasons or non-medical random events not modeled by the generator. Problem definition Given the patient records generated by Synthea, predict the probability of the patient dying due to medical reasons. Overview Setup Step1: Library imports Step2: Setup Step16: Feature extraction Like many healthcare datasets, the Synthea dataset contains both "vertical/longitudinal" and "horizontal" tables. Horizontal tables Horizontal tables contain one row per patient. Each column provides a piece of information about that patient. A good example of this is the patients table where each column provides a demographic feature of the patient such as gender, race, and so forth. A more technical term for horizontal tables is first normal form (1NF). Vertical or longitudinal tables Vertical tables are usually used to store logitudinal measurements and observations. Unlike horizontal tables, each patient can have multiple rows representing different measurements/observations at different times. An example is the observations table where each row can contain the result of a specific lab test. The description column provides the name of the lab test and the value column determines the outcome of the test. A more technical term for vertical tables is entity–attribute–value model. ML-ready table You cannot directly train the regression model on the dataset as-is in BigQuery. 
Instead, you will have a simple horizontal table where each training example corresponds to a single row containing all of the data for one patient and each column corresponds to a feature. The name for this type of table is an ML-ready table. Transformation The following code extracts data from the source vertical and horizontal tables, joins them based on patient ID, and creates a single ML-ready table. To extract features from the vertical tables, a query is built that groups the rows by their description and aggregates the corresponding values into an array sorted by time. For each description that occurs at least 100 times, a new column is added to the ML-ready table, which will be a feature for the model. The name of the column comes from the description and the value of the column is the last value in the aggregated array. Therefore, for each type of measurement, the last measurement is used as the value of the corresponding feature. Step17: Model training Now that the data is transformed into the ML-ready table, you are ready to train a model. At this point, you have several options to train a model including BigQuery ML, AutoML tables, and Cloud Machine Learning Engine. This tutorial focuses on the simplest and quickest tool, BigQuery ML (BQML), to train a linear logistic regression model to predict the probablity of death due to medical reasons. BQML automatically applies the required transformations depending on each variable's data type. For example, STRINGs are transformed into one-hot vectors, and TIMESTAMPs are standardized. BQML also let you apply regularization to help with the generalization error. In the example here, because you are using features that all resulted from verticalization of the observations and conditions table, you will use relatively large l1 regularization coefficients to avoid overfitting. This step takes ~5 minutes. 
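The verticalization idea described above (the last value per description becomes a feature column) can be sketched in pandas on a tiny made-up observations table; this is an illustration of the transformation, not the tutorial's BigQuery SQL:

```python
import pandas as pd

observations = pd.DataFrame({
    'PATIENT': ['p1', 'p1', 'p1', 'p2'],
    'DESCRIPTION': ['Body Weight', 'Body Weight', 'Heart rate', 'Body Weight'],
    'DATE': pd.to_datetime(['2019-01-01', '2019-06-01', '2019-06-01', '2019-03-01']),
    'VALUE': [70.0, 72.5, 64.0, 80.0],
})

# keep the most recent value per (patient, description), then pivot the
# descriptions into columns: one row per patient, one column per feature
last = (observations.sort_values('DATE')
        .groupby(['PATIENT', 'DESCRIPTION'], as_index=False)
        .last())
ml_ready = last.pivot(index='PATIENT', columns='DESCRIPTION', values='VALUE')
```

Patients that never had a given measurement end up with a missing value in that column, just as the LEFT JOINs in the real query produce NULLs.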
Set up the context for bigquery.magics which you will use in the following sections Step18: Next, run the following commands to perform the actual model training and evaluation using BigQuery ML. NOTE Step19: Exploring the results To see the training metrics, go to the BigQuery dashboard in Cloud Console and select the project that you are running this tutorial in. You can then find your model under the dataset you used in the model_name. BigQuery shows you useful plots on training and evaluation tabs. On the training tab, the training and validation loss are plotted as a function of iterations. You can also see the learning rate used at each iteration. The evaluation tab also provides useful accuracy metrics like F1 score, Log loss and ROC AUC. You should see an AUC of around 0.8. Inspecting the weights of the different features Because you converted all of the data in the conditions and observations tables to features, you might ask whether all of these features are required to train a model. Because Bigquery ML trains linear models, you can answer this question by inspecting the weights learned for the features. Run the following query to view the top 10 categorical features having the largest weight variance Step20: Run the following query to view the top 10 categorical features by maximum absolute weight value
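What the two weight-inspection queries compute can be mirrored in plain Python on a nested dict shaped like ML.WEIGHTS output (the feature names and weight values here are invented for illustration):

```python
import statistics

category_weights = {
    'gender': {'M': 0.31, 'F': -0.28},
    'city': {'Boston': 1.20, 'Lowell': -0.90, 'Quincy': 0.05},
}

def weight_stats(category_weights):
    #per categorical input: (stddev of its category weights,
    #maximum absolute weight), matching the two SQL queries
    return {
        name: (statistics.pstdev(weights.values()),
               max(abs(w) for w in weights.values()))
        for name, weights in category_weights.items()
    }

stats = weight_stats(category_weights)
```

Features whose weights barely vary (or stay near zero) contribute little to a linear model, which is why these two statistics are a quick filter for candidate features to drop.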
Python Code: from google.colab import auth auth.authenticate_user() credentials = auth._check_adc() print(credentials) Explanation: Introduction This tutorial shows how to use Google Cloud Platform technologies to work with structured healthcare data to build a predictive model. Synthea Synthea is a data generator that simulates the lives of patients based on several medical modules. Each module models a different medical condition based on some real world statistics. Each patient in the Synthea dataset dies either due to medical reasons or non-medical random events not modeled by the generator. Problem definition Given the patient records generated by Synthea, predict the probability of the patient dying due to medical reasons. Overview Setup: Authentication, importing libraries, and project and dataset naming. NOTE: You must execute these commands every time you reconnect to Colab. Data generation: Download Synthea and run it, and then export the data to BigQuery. Feature extraction: Pivot a vertical table, join with horizontal, simplify. Model: Identify columns with sufficient data, train the model using BigQuery ML. Explore: Check model weights and most important features. Model with Tables: Train a neural net using AutoML Tables and compare the results with the BigQuery ML model. Flow/reconnecting Whenever you restart or reconnect to this notebook, run the steps in Setup again. The remaining steps need to be performed in order, but they do not need to be repeated after you complete them once. Prerequisites This notebook is accompanied by another notebook that generates the Synthea dataset and imports it into BigQuery. Run that notebook before running this one. Requirements To run this tutorial, you will need a GCP project with a billing account. Costs There is a small cost associated with importing the dataset and storing it in BigQuery. 
However, if you run the AutoML Tables step at the end of the tutorial, the costs can reach up to $20 per hour of model training. Setup First, sign into your Google account to access Google Cloud Platform (GCP). You will also import some standard Python data analysis packages that you'll use later to extract features. Authentication: Run the following commands, click on the link that displays, and follow the instructions to authenticate. Scroll to the results box to the left to see where to paste the key you will copy from the browser. NOTE: You will need to repeat this step each time you reconnect to the notebook server. End of explanation import re import pandas as pd from google.cloud import bigquery from google.cloud.bigquery import magics Explanation: Library imports: NOTE: You will need to repeat this step each time you reconnect to the notebook server. End of explanation project = "" #@param {type:"string"} if not project: raise Exception("Project is empty.") !gcloud config set project $project dataset = "SYNMASS_2k" #@param {type:"string"} output_table = "ml_ready_table_1" #@param {type:"string"} ml_ready_table_name = "{}.{}.{}".format(project, dataset, output_table) model_name = "mortality_model_1" #@param {type:"string"} full_model_name = "{}.{}".format(dataset, model_name) Explanation: Setup: Enter the name of your GCP project. The dataset name, output table, and model names are supplied for you. Use the same GCP project and dataset that you used when importing Synthea data in the previous notebook. NOTE: You will need to repeat this step each time you reconnect to the notebook server. End of explanation _MIN_DESCRIPTION_OCCURENCES = 100 def GetFullTableName(name): Returns the full BQ table name for the given short table name. return "{}.{}.{}".format(project, dataset, name) def UpdateFieldNameToDesc(field_name_to_desc, prefix, descriptions_to_exclude, description): Updates a given field name to description dictionary with the given values. 
The description is converted to a valid BQ field name, and then the dictionary is updated to map the field name to the description. Args: field_name_to_desc: The map that should be updated. prefix: The prefix used for the normalized field name. This is required to differentiate between same descriptions in different tables. descriptions_to_exclude: A list of descriptions that should be excluded from the map. description: the description that should be added to the map. if description in descriptions_to_exclude: return pattern = re.compile(r"[\W_]+") field_name = pattern.sub(" ", description) field_name = field_name.replace(" ", "_") field_name = "{}_{}".format(prefix, field_name) field_name_to_desc[field_name] = description def BuildFieldNameToDesc(prefix, table, typ, descriptions_to_exclude): Reads a vertical table and returns a map of field name to description. The description value of the rows determines the column name of the ML-ready table. We extract all the unique descriptions and transform them to a valid name that can be used as BQ column names. Args: prefix: The prefix used for the normalized field names. table: The name of the table for which the map is built. typ: If this is set, the output is limited to the fields having this type. descriptions_to_exclude: A list of descriptions that should be excluded from the map. Returns: A map from normalized BQ field names to their corresponding description. type_constraint = "" if typ is not None: type_constraint = " WHERE TYPE='{}' ".format(typ) sql = SELECT DESCRIPTION as description, count(*) as occurences FROM `{}`{} GROUP BY 1 ORDER BY 2 DESC.format(table, type_constraint) data = pd.read_gbq(query=sql, project_id=project, dialect="standard") # Filter the data to contain descriptions that have at least # _MIN_DESCRIPTION_OCCURENCES occurences. 
data = data[data["occurences"] > _MIN_DESCRIPTION_OCCURENCES] field_name_to_desc = {} def UpdateFn(description): UpdateFieldNameToDesc(field_name_to_desc, prefix, descriptions_to_exclude, description) data["description"].apply(UpdateFn) return field_name_to_desc def BuildQueryToHorizontalize(prefix, table, typ, descriptions_to_exclude): Builds a query that horizontalizes the given table. The description column determines the feature name and the value column determines the value. In case of multiple values for a description, the last value is used. Args: prefix: The prefix used for the normalized field names. table: The name of the table for which the query is built. typ: Type of the values, it can be either "numeric" or "text". descriptions_to_exclude: descriptions that shouldn't be featurized. Returns: A sql query to horizontalize the table. field_name_to_desc = BuildFieldNameToDesc(prefix, table, typ, descriptions_to_exclude) columns_str = "" for field_name, desc in field_name_to_desc.items(): if typ == "numeric": columns_str += ( ", any_value(if(DESCRIPTION = \"{}\", CAST(values[OFFSET(0)] as " "float64), NULL)) AS {}\n").format(desc, field_name) else: columns_str += (", any_value(if(DESCRIPTION = \"{}\", values[OFFSET(0)], " "NULL)) AS {}\n").format(desc, field_name) sql = SELECT PATIENT {} FROM ( SELECT PATIENT, DESCRIPTION, ARRAY_AGG(VALUE order by DATE desc) as values FROM `{}` group by 1,2) group by 1.format(columns_str, table) return sql def BuildQueryToHorizontalizeBinaryFeatures(prefix, table): Builds a query to horizontalize the table. If a patient has no row with a given description the corresponding feature will have value 0, otherwise 1. Args: prefix: The prefix used for the normalized field names. table: The name of the table for which the query is built. Returns: A sql query to horizontalize the table. 
descriptions_to_exclude = set() field_name_to_desc = BuildFieldNameToDesc( prefix, table, typ=None, descriptions_to_exclude=descriptions_to_exclude) columns_str = "" for field_name, desc in field_name_to_desc.items(): columns_str += ", sum(if(DESCRIPTION = \"{}\", 1, 0)) AS {}\n".format( desc, field_name) sql = SELECT PATIENT {} FROM ( SELECT PATIENT, DESCRIPTION FROM `{}` group by 1,2) group by 1.format(columns_str, table) return sql def BuildQueryToExtractHorizontalFeatures(table, columns): Builds a query to extract a given set of features from a horizontal table. features_str = "" for col in columns: features_str += ", {}".format(col) sql = SELECT Id as PATIENT {} FROM `{}`.format(features_str, table) return sql def BuildLabelQuery(table): Returns the query to build a table with patient id and label columns. sql = SELECT PATIENT, sum(if(DESCRIPTION="Death Certification", 1, 0)) as LABEL FROM `{}` group by 1.format(table) return sql # Build queries to extract features from patients, observations, and conditions # tables. demographics = BuildQueryToExtractHorizontalFeatures( table=GetFullTableName("patients"), columns=["ethnicity", "gender", "city", "race"]) # Exclude the rows that contains the cause of death, otherwise it would be # cheating ;)."". numeric_observations = BuildQueryToHorizontalize( prefix="obs", table=GetFullTableName("observations"), typ="numeric", descriptions_to_exclude=set( ["Cause of Death [US Standard Certificate of Death]"])) text_observations = BuildQueryToHorizontalize( prefix="obs", table=GetFullTableName("observations"), typ="text", descriptions_to_exclude=set( ["Cause of Death [US Standard Certificate of Death]"])) # Conditions are modeled as binary features, the corresponding column is # true if and only if there is a row in conditions table with matching # description. conditions = BuildQueryToHorizontalizeBinaryFeatures( prefix="cond", table=GetFullTableName("conditions")) # Build the query for the label table. 
label = BuildLabelQuery(table=GetFullTableName("encounters")) # Build the main query that uses subqueries to extract the label, and # verticalize observations and conditions tables. The result of subqueries # are joined based on patient ID. sql_query = SELECT * FROM ({}) left join ({}) using (PATIENT) left join ({}) using (PATIENT) left join ({}) using (PATIENT) left join ({}) using (PATIENT).format(numeric_observations, text_observations, conditions, demographics, label) job_config = bigquery.QueryJobConfig() # Set the destination table table_name = ml_ready_table_name.split(".")[-1] bq_client = bigquery.Client(project=project) table_ref = bq_client.dataset(dataset).table(table_name) job_config.destination = table_ref job_config.write_disposition = "WRITE_TRUNCATE" # Start the query, passing in the extra configuration. query_job = bq_client.query( sql_query, # Location must match that of the dataset(s) referenced in the query # and of the destination table. location="US", job_config=job_config) # API request - starts the query query_job.result() # Waits for the query to finish print("Query results loaded to table {}".format(table_ref.path)) Explanation: Feature extraction Like many healthcare datasets, the Synthea dataset contains both "vertical/longitudinal" and "horizontal" tables. Horizontal tables Horizontal tables contain one row per patient. Each column provides a piece of information about that patient. A good example of this is the patients table where each column provides a demographic feature of the patient such as gender, race, and so forth. A more technical term for horizontal tables is first normal form (1NF). Vertical or longitudinal tables Vertical tables are usually used to store logitudinal measurements and observations. Unlike horizontal tables, each patient can have multiple rows representing different measurements/observations at different times. An example is the observations table where each row can contain the result of a specific lab test. 
The description column provides the name of the lab test and the value column determines the outcome of the test. A more technical term for vertical tables is entity–attribute–value model. ML-ready table You cannot directly train the regression model on the dataset as-is in BigQuery. Instead, you will have a simple horizontal table where each training example corresponds to a single row containing all of the data for one patient and each column corresponds to a feature. The name for this type of table is an ML-ready table. Transformation The following code extracts data from the source vertical and horizontal tables, joins them based on patient ID, and creates a single ML-ready table. To extract features from the vertical tables, a query is built that groups the rows by their description and aggregates the corresponding values into an array sorted by time. For each description that occurs at least 100 times, a new column is added to the ML-ready table, which will be a feature for the model. The name of the column comes from the description and the value of the column is the last value in the aggregated array. Therefore, for each type of measurement, the last measurement is used as the value of the corresponding feature. 
End of explanation # Set the default project for running queries bigquery.magics.context.project = project # Set up the substitution preprocessing injection # This is used to be able to configure bigquery magic with ml_ready_table_name # parameter sub_dict = dict() sub_dict["model_name"] = "{}.{}".format(dataset, model_name) sub_dict["ml_ready_table_name"] = ml_ready_table_name if globals().get('custom_run_query') is None: original_run_query = bigquery.magics._run_query def custom_run_query(client, query, job_config=None): query = query.format(**sub_dict) return original_run_query(client, query, job_config) bigquery.magics._run_query = custom_run_query print('done') Explanation: Model training Now that the data is transformed into the ML-ready table, you are ready to train a model. At this point, you have several options to train a model including BigQuery ML, AutoML tables, and Cloud Machine Learning Engine. This tutorial focuses on the simplest and quickest tool, BigQuery ML (BQML), to train a linear logistic regression model to predict the probablity of death due to medical reasons. BQML automatically applies the required transformations depending on each variable's data type. For example, STRINGs are transformed into one-hot vectors, and TIMESTAMPs are standardized. BQML also let you apply regularization to help with the generalization error. In the example here, because you are using features that all resulted from verticalization of the observations and conditions table, you will use relatively large l1 regularization coefficients to avoid overfitting. This step takes ~5 minutes. Set up the context for bigquery.magics which you will use in the following sections: End of explanation %%bigquery # BigQuery ML create model statement: CREATE OR REPLACE MODEL `{model_name}` OPTIONS( # Use logistic_reg for discrete predictions (classification) and linear_reg # for continuous predictions (forecasting). 
model_type = 'logistic_reg', early_stop = False, max_iterations = 25, l1_reg = 2, # Identify the column to use as the label. input_label_cols = ["LABEL"] ) AS SELECT * FROM `{ml_ready_table_name}` Explanation: Next, run the following commands to perform the actual model training and evaluation using BigQuery ML. NOTE: If the following command fails with a permission error, check your Cloud IAM settings and make sure that the default Compute Engine service account (PROJECT_NUMBER-compute@developer.gserviceaccount.com) has a role with BigQuery model creation permissions, such as roles/bigquery.dataEditor or BigQuery Data Editor. This happens only if you have intentionally changed the role for the Compute Engine default service account. The default role for this account has BigQuery Data Editor, which has all the required permissions. End of explanation %%bigquery SELECT processed_input, STDDEV(cws.weight) as stddev_w, max(cws.weight) as max_w, min(cws.weight) as min_w from ( SELECT processed_input, cws FROM ML.WEIGHTS(MODEL `{model_name}`) cross join unnest(category_weights) as cws ) group by 1 order by 2 desc limit 10 Explanation: Exploring the results To see the training metrics, go to the BigQuery dashboard in Cloud Console and select the project that you are running this tutorial in. You can then find your model under the dataset you used in the model_name. BigQuery shows you useful plots on training and evaluation tabs. On the training tab, the training and validation loss are plotted as a function of iterations. You can also see the learning rate used at each iteration. The evaluation tab also provides useful accuracy metrics like F1 score, Log loss and ROC AUC. You should see an AUC of around 0.8. Inspecting the weights of the different features Because you converted all of the data in the conditions and observations tables to features, you might ask whether all of these features are required to train a model. 
Because BigQuery ML trains linear models, you can answer this question by inspecting the weights learned for the features. Run the following query to view the top 10 categorical features having the largest weight variance: End of explanation %%bigquery SELECT processed_input, max(abs(weight)) FROM ML.WEIGHTS(MODEL `{model_name}`) group by 1 order by 2 desc limit 10 Explanation: Run the following query to view the top 10 categorical features by maximum absolute weight value: End of explanation
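Both rankings can also be reproduced off-line once the weights are in hand. A minimal pure-Python sketch of the same two orderings, assuming the category weights have already been fetched into a plain dict — the feature names and numbers below are invented for illustration, not real model output:

```python
import statistics

# Hypothetical weights as they might come back from ML.WEIGHTS;
# the values are illustrative only.
category_weights = {
    "reason_description": [0.8, -1.2, 0.05, 0.3],
    "observation_code": [0.1, 0.12, 0.09],
    "gender": [0.4, -0.4],
}

# Mirror of: SELECT ... STDDEV(cws.weight) ... GROUP BY 1 ORDER BY 2 DESC
by_spread = sorted(
    ((name, statistics.pstdev(ws)) for name, ws in category_weights.items()),
    key=lambda kv: kv[1],
    reverse=True,
)

# Mirror of: SELECT ... max(abs(weight)) ... GROUP BY 1 ORDER BY 2 DESC
by_max_abs = sorted(
    ((name, max(abs(w) for w in ws)) for name, ws in category_weights.items()),
    key=lambda kv: kv[1],
    reverse=True,
)

print(by_spread[0][0], by_max_abs[0][0])
```

A feature whose category weights all sit near zero barely moves the linear score and is a natural pruning candidate; a large spread or a large maximum magnitude marks a feature the model leans on.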
Given the following text description, write Python code to implement the functionality described below step by step Description: Taylor approximations to color conversion This notebook shows how to come up with all these magic constants that appear in the approximations to LinearRgb in my go-colorful library in order to speed them up at almost no loss in accuracy. The gist is to compute a Taylor expansion up to a degree that gives enough accuracy. Taylor expansions work well for relatively linear-ish functions, as LinearRgb is. Doing this is especially easy thanks to the SymPy library which has symbolic Taylor expansion built-in! Step1: The following is the conversion from RGB to linear RGB (aka. gamma-correction), where I'm dropping the conditional part for very small values of x as a first approximation. Step2: Now, we can use SymPy to create a symbolic version of that equation, and compute a symbolic Taylor expansion around $0.5$ (the middle of our target range) up to the fourth degree Step3: In order to use it numerically, we will "drop the O", which means do the actual approximation, and "lambdify" the function, which turns a symbolic function into a NumPy function Step4: As additional heuristic approximations, we'll include simply squaring the values, which should also be very fast, but quite wrong. Then, plot all these functions in order to see their behaviour, and compute errors Step5: The inverse function (for Lab->RGB) The inverse function is significantly more difficult, because its left part is highly non-linear and changes much faster than the rest of the function. So what we do here, in order to keep reasonable accuracy, is split it into three parts with three different approximations. You will notice that the leftmost part has quite large coefficients, which hints to the approximation being worse/"harder".
Python Code: %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib as mpl import matplotlib.pyplot as plt plt.style.use('ggplot') import numpy as np from sympy import * init_printing() Explanation: Taylor approximations to color conversion This notebook shows how to come up with all these magic constants that appear in the approximations to LinearRgb in my go-colorful library in order to speed them up at almost no loss in accuracy. The gist is to compute a Taylor expansion up to a degree that gives enough accuracy. Taylor expansions work well for relatively linear-ish functions, as LinearRgb is. Doing this is especially easy thanks to the SymPy library which has symbolic Taylor expansion built-in! End of explanation def linear_rgb(x): return ((x+0.055)/1.055)**2.4 Explanation: The following is the conversion from RGB to linear RGB (aka. gamma-correction), where I'm dropping the conditional part for very small values of x as a first approximation. End of explanation x = Symbol('x', real=True) series(linear_rgb(x), x, x0=0.5, n=4) Explanation: Now, we can use SymPy to create a symbolic version of that equation, and compute a symbolic Taylor expansion around $0.5$ (the middle of our target range) up to the fourth degree: End of explanation fast_linear_rgb = lambdify([x], series(linear_rgb(x), x, x0=0.5, n=4).removeO()) Explanation: In order to use it numerically, we will "drop the O", which means do the actual approximation, and "lambdify" the function, which turns a symbolic function into a NumPy function: End of explanation X = np.linspace(0,1,1001) ref = linear_rgb(X) # The (almost) correct implementation. fast = fast_linear_rgb(X) # The Taylor approximation square = X*X # The approximation by squaring. 
fig, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(20,4)) ax1.plot(X, ref, label='linear') ax1.plot(X, fast, label='fast (max/avg err: {:.4f} / {:.4f})'.format(np.max(np.abs(ref - fast)), np.mean(np.abs(ref - fast)))) ax1.plot(X, square, label='square (max/avg err: {:.4f} / {:.4f})'.format(np.max(np.abs(ref - square)), np.mean(np.abs(ref - square)))) ax2.plot(X, ref) ax2.plot(X, fast) ax2.plot(X, square) ax2.set_xlim(0, 0.1) ax2.set_ylim(0, 0.1) ax2.set_title("Left end") ax3.plot(X, ref) ax3.plot(X, fast, ls=':') ax3.plot(X, square) ax3.set_xlim(0.45, 0.55) ax3.set_ylim(0.15, 0.25) ax3.set_title("Middle") ax4.plot(X, ref) ax4.plot(X, fast) ax4.plot(X, square) ax4.set_xlim(0.9, 1) ax4.set_ylim(0.9, 1) ax4.set_title("Right end") ax1.legend(); Explanation: As additional heuristic approximations, we'll include simply squaring the values, which should also be very fast, but quite wrong. Then, plot all these functions in order to see their behaviour, and compute errors: End of explanation def delinear_rgb(x): return 1.055*(x**(1.0/2.4)) - 0.055 fast_delinear_rgb_part1 = lambdify([x], series(delinear_rgb(x), x, x0=0.015, n=6).removeO()) fast_delinear_rgb_part2 = lambdify([x], series(delinear_rgb(x), x, x0=0.03, n=6).removeO()) fast_delinear_rgb_part3 = lambdify([x], series(delinear_rgb(x), x, x0=0.6, n=6).removeO()) ref = delinear_rgb(X) fast1 = fast_delinear_rgb_part1(X) fast2 = fast_delinear_rgb_part2(X) fast3 = fast_delinear_rgb_part3(X) sqrt = np.sqrt(X) def plot(ax): ax.plot(X, ref, label='linear') l, = ax.plot(X, fast1, label='fast, part1', ls=':') ax.plot(X, fast2, label='fast, part2', c=l.get_color(), ls='--') ax.plot(X, fast3, label='fast, part3', c=l.get_color(), ls='-') ax.plot(X, sqrt, label='sqrt') fig, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(20,4)) plot(ax1) ax1.set_ylim(0, 1) plot(ax2) ax2.set_xlim(0, 0.05) ax2.set_ylim(0, 0.25) ax2.set_title("Left end") plot(ax3) ax3.set_xlim(0.45, 0.55) ax3.set_ylim(0.65, 0.75) ax3.set_title("Middle") 
plot(ax4) ax4.set_xlim(0.95, 1) ax4.set_ylim(0.95, 1) ax4.set_title("Right end") ax1.legend() Explanation: The inverse function (for Lab->RGB) The inverse function is significantly more difficult, because its left part is highly non-linear and changes much faster than the rest of the function. So what we do here, in order to keep reasonable accuracy, is split it into three parts with three different approximations. You will notice that the leftmost part has quite large coefficients, which hints at the approximation being worse/"harder". End of explanation
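The degree-3 polynomial SymPy produced can also be written out by hand — which is essentially what the hard-coded constants in go-colorful are. A dependency-free sketch of the same expansion (the derivatives follow from the chain rule on ((x+0.055)/1.055)**2.4; the error bound is checked empirically here, not proven):

```python
def linear_rgb(x):
    return ((x + 0.055) / 1.055) ** 2.4

def make_fast_linear_rgb(x0=0.5):
    # Chain-rule derivatives of u**2.4 with u = (x + 0.055)/1.055, u' = 1/1.055.
    u = (x0 + 0.055) / 1.055
    d1 = 2.4 / 1.055 * u ** 1.4
    d2 = 2.4 * 1.4 / 1.055 ** 2 * u ** 0.4
    d3 = 2.4 * 1.4 * 0.4 / 1.055 ** 3 * u ** -0.6
    c0, c1, c2, c3 = linear_rgb(x0), d1, d2 / 2.0, d3 / 6.0
    def fast(x):
        t = x - x0
        return c0 + t * (c1 + t * (c2 + t * c3))  # Horner evaluation
    return fast

fast_poly = make_fast_linear_rgb()
max_err = max(abs(fast_poly(i / 1000) - linear_rgb(i / 1000)) for i in range(1001))
print(max_err)  # largest near the ends of [0, 1], where the gamma curve bends hardest
```

Expanding around 0.5 keeps the worst-case error at the interval ends small while the polynomial is exact at the centre of the range.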
Given the following text description, write Python code to implement the functionality described below step by step Description: <table style="width Step1: Use debugging tools throughout! Don't forget all the fun debugging tools we covered while you work on these exercises. %debug %pdb import q;q.d() And (if necessary) %prun Exercise 1 You'll notice that our dataset actually has two different files, pumps_train_values.csv and pumps_train_labels.csv. We want to load both of these together in a single DataFrame for our exploratory analysis. Create a function that Step4: Exercise 2 Now that we've loaded our data, we want to do some pre-processing before we model. From inspection of the data, we've noticed that there are some numeric values that are probably not valid that we want to replace. Select the relevant columns for modeling. For the purposes of this exercise, we'll select Step6: Exercise 3 Now that we've got a feature matrix, let's train a model! Add a function as defined below to the src/model/train_model.py The function should use sklearn.linear_model.LogisticRegression to train a logistic regression model. In a dataframe with categorical variables pd.get_dummies will do encoding that can be passed to sklearn. The LogisticRegression class in sklearn handles muticlass models automatically, so no need to use get_dummies on status_group. Finally, this method should return a GridSearchCV object that has been run with the following parameters for a logistic regression model
Python Code: %matplotlib inline from __future__ import print_function import os import pandas as pd import matplotlib.pyplot as plt import seaborn as sns PROJ_ROOT = os.path.join(os.pardir, os.pardir) Explanation: <table style="width:100%; border: 0px solid black;"> <tr style="width: 100%; border: 0px solid black;"> <td style="width:75%; border: 0px solid black;"> <a href="http://www.drivendata.org"> <img src="https://s3.amazonaws.com/drivendata.org/kif-example/img/dd.png" /> </a> </td> </tr> </table> Data Science is Software Developer #lifehacks for the Jupyter Data Scientist Section 3: Refactoring for reusability End of explanation def load_pumps_data(values_path, labels_path): # YOUR CODE HERE pass values = os.path.join(PROJ_ROOT, "data", "raw", "pumps_train_values.csv") labels = os.path.join(PROJ_ROOT, "data", "raw", "pumps_train_labels.csv") df = load_pumps_data(values, labels) assert df.shape == (59400, 40) Explanation: Use debugging tools throughout! Don't forget all the fun debugging tools we covered while you work on these exercises. %debug %pdb import q;q.d() And (if necessary) %prun Exercise 1 You'll notice that our dataset actually has two different files, pumps_train_values.csv and pumps_train_labels.csv. We want to load both of these together in a single DataFrame for our exploratory analysis. Create a function that: - Reads both of the csvs - uses the id column as the index - parses dates of the date_recorded columns - joins the labels and the training set on the id - returns the complete dataframe End of explanation def clean_raw_data(df): Takes a dataframe and performs four steps: - Selects columns for modeling - For numeric variables, replaces 0 values with mean for that region - Fills invalid construction_year values with the mean construction_year - Converts strings to categorical variables :param df: A raw dataframe that has been read into pandas :returns: A dataframe with the preprocessing performed. 
pass def replace_value_with_grouped_mean(df, value, column, to_groupby): For a given numeric value (e.g., 0) in a particular column, take the mean of column (excluding value) grouped by to_groupby and return that column with the value replaced by that mean. :param df: The dataframe to operate on. :param value: The value in column that should be replaced. :param column: The column in which replacements need to be made. :param to_groupby: Groupby this variable and take the mean of column. Replace value with the group's mean. :returns: The data frame with the invalid values replaced pass cleaned_df = clean_raw_data(df) # verify construction year assert (cleaned_df.construction_year > 1000).all() # verify filled in other values for numeric_col in ["population", "longitude", "latitude"]: assert (cleaned_df[numeric_col] != 0).all() # verify the types are in the expected types assert (cleaned_df.dtypes .astype(str) .isin(["int64", "float64", "category"])).all() # check some actual values assert cleaned_df.latitude.mean() == -5.970642969008563 assert cleaned_df.longitude.mean() == 35.14119354200863 assert cleaned_df.population.mean() == 277.3070009774711 Explanation: Exercise 2 Now that we've loaded our data, we want to do some pre-processing before we model. From inspection of the data, we've noticed that there are some numeric values that are probably not valid that we want to replace. Select the relevant columns for modeling. For the purposes of this exercise, we'll select: useful_columns = ['amount_tsh', 'gps_height', 'longitude', 'latitude', 'region', 'population', 'construction_year', 'extraction_type_class', 'management_group', 'quality_group', 'source_type', 'waterpoint_type', 'status_group'] Replace longitude, and population where it is 0 with mean for that region. zero_is_bad_value = ['longitude', 'population'] Replace the latitude where it is -2E-8 (a different bad value) with the mean for that region. 
other_bad_value = ['latitude'] Replace construction_year less than 1000 with the mean construction year. Convert object type (i.e., string) variables to categoricals. Convert the label column into a categorical variable A skeleton for this work is below where clean_raw_data will call replace_value_with_grouped_mean internally. Copy and Paste the skeleton below into a Python file called preprocess.py in src/features/. Import and autoload the methods from that file to run tests on your changes in this notebook. End of explanation def logistic(df): Trains a multinomial logistic regression model to predict the status of a water pump given characteristics about the pump. :param df: The dataframe with the features and the label. :returns: A trained GridSearchCV classifier pass %%time clf = logistic(cleaned_df) assert clf.best_score_ > 0.5 # Just for fun, let's profile the whole stack and see what's slowest! %prun logistic(clean_raw_data(load_pumps_data(values, labels))) Explanation: Exercise 3 Now that we've got a feature matrix, let's train a model! Add a function as defined below to src/model/train_model.py. The function should use sklearn.linear_model.LogisticRegression to train a logistic regression model. In a dataframe with categorical variables pd.get_dummies will do encoding that can be passed to sklearn. The LogisticRegression class in sklearn handles multiclass models automatically, so no need to use get_dummies on status_group. Finally, this method should return a GridSearchCV object that has been run with the following parameters for a logistic regression model: params = {'C': [0.1, 1, 10]} End of explanation
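The heart of Exercise 2 — replace an invalid value with the mean of the valid values in the same group — is worth sketching without pandas to make the intended semantics unambiguous. A pure-Python sketch over (region, value) pairs; the toy rows are invented and the helper name is just for illustration:

```python
def replace_value_with_group_mean(records, bad_value):
    # records: list of (group, value) pairs. Every bad_value is replaced by
    # the mean of the *valid* values in its group (each group is assumed to
    # have at least one valid value, as in the pump dataset).
    sums, counts = {}, {}
    for group, value in records:
        if value != bad_value:
            sums[group] = sums.get(group, 0.0) + value
            counts[group] = counts.get(group, 0) + 1
    means = {g: sums[g] / counts[g] for g in sums}
    return [(g, means[g] if v == bad_value else v) for g, v in records]

rows = [("Iringa", 100.0), ("Iringa", 0.0), ("Iringa", 300.0),
        ("Mwanza", 0.0), ("Mwanza", 50.0)]
print(replace_value_with_group_mean(rows, 0.0))
# the two zero values become 200.0 (Iringa's valid mean) and 50.0 (Mwanza's)
```

The pandas version does the same thing with a groupby/transform, but the invariant to test is identical: valid values are untouched and every replaced value equals its group's valid mean.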
Given the following text description, write Python code to implement the functionality described below step by step Description: Using the C/C++ API This notebook shows how to use the OpenMC C/C++ API through the openmc.lib module. This module is particularly useful for multiphysics coupling because it allows you to update the density of materials and the temperatures of cells in memory, without stopping the simulation. Warning Step1: <b>Generate Input Files</b> Let's start by creating a fuel rod geometry. We will make 10 zones in the z-direction which will allow us to make changes to each zone. Changes in temperature have to be made on the cell, so will make 10 cells in the axial direction. Changes in density have to be made on the material, so we will make 10 water materials. Materials Step2: Cells Step3: If you are coupling this externally to a heat transfer solver, you will want to know the heat deposited by each fuel cell. So let's create a cell filter for the recoverable fission heat. Step4: Let's plot our geometry to make sure it looks like we expect. Since we made new water materials in each axial cell, and we have centered the plot at 150, we should see one color for the water material in the bottom half and a different color for the water material in the top half. Step5: Settings Step6: To run a regular simulation, just use openmc.run(). However, we want to run a simulation that we can stop in the middle and update the material and cell properties. So we will use openmc.lib. Step7: There are 10 inactive batches, so we need to run next_batch() at least 10 times before the tally is activated. Step8: Let's take a look at the tally. There are 10 entries, one for each cell in the fuel. Step9: Now, let's make some changes to the temperatures. For this, we need to identify each cell by its id. We can use get_temperature() to compare the temperatures of the cells before and after the change. Step10: Let's make a similar change for the water density. 
Again, we need to identify each material by its id. Step11: The new batches we run will use the new material and cell properties. Step12: When you're ready to end the simulation, use the following
Python Code: %matplotlib inline import openmc import openmc.lib Explanation: Using the C/C++ API This notebook shows how to use the OpenMC C/C++ API through the openmc.lib module. This module is particularly useful for multiphysics coupling because it allows you to update the density of materials and the temperatures of cells in memory, without stopping the simulation. Warning: these bindings are still somewhat experimental and may be subject to change in future versions of OpenMC. End of explanation material_list = [] uo2 = openmc.Material(material_id=1, name='UO2 fuel at 2.4% wt enrichment') uo2.set_density('g/cm3', 10.29769) uo2.add_element('U', 1., enrichment=2.4) uo2.add_element('O', 2.) material_list.append(uo2) helium = openmc.Material(material_id=2, name='Helium for gap') helium.set_density('g/cm3', 0.001598) helium.add_element('He', 2.4044e-4) material_list.append(helium) zircaloy = openmc.Material(material_id=3, name='Zircaloy 4') zircaloy.set_density('g/cm3', 6.55) zircaloy.add_element('Sn', 0.014, 'wo') zircaloy.add_element('Fe', 0.00165, 'wo') zircaloy.add_element('Cr', 0.001, 'wo') zircaloy.add_element('Zr', 0.98335, 'wo') material_list.append(zircaloy) for i in range(4, 14): water = openmc.Material(material_id=i) water.set_density('g/cm3', 0.7) water.add_element('H', 2.0) water.add_element('O', 1.0) water.add_s_alpha_beta('c_H_in_H2O') material_list.append(water) materials_file = openmc.Materials(material_list) materials_file.export_to_xml() Explanation: <b>Generate Input Files</b> Let's start by creating a fuel rod geometry. We will make 10 zones in the z-direction which will allow us to make changes to each zone. Changes in temperature have to be made on the cell, so we will make 10 cells in the axial direction. Changes in density have to be made on the material, so we will make 10 water materials. Materials: we will make a fuel, helium, zircaloy, and 10 water materials.
End of explanation pitch = 1.25984 fuel_or = openmc.ZCylinder(r=0.39218) clad_ir = openmc.ZCylinder(r=0.40005) clad_or = openmc.ZCylinder(r=0.4572) left = openmc.XPlane(x0=-pitch/2) right = openmc.XPlane(x0=pitch/2) back = openmc.YPlane(y0=-pitch/2) front = openmc.YPlane(y0=pitch/2) z = [0., 30., 60., 90., 120., 150., 180., 210., 240., 270., 300.] z_list = [openmc.ZPlane(z0=z_i) for z_i in z] left.boundary_type = 'reflective' right.boundary_type = 'reflective' front.boundary_type = 'reflective' back.boundary_type = 'reflective' z_list[0].boundary_type = 'vacuum' z_list[-1].boundary_type = 'vacuum' fuel_list = [] gap_list = [] clad_list = [] water_list = [] for i in range(1, 11): fuel_list.append(openmc.Cell(cell_id=i)) gap_list.append(openmc.Cell(cell_id=i+10)) clad_list.append(openmc.Cell(cell_id=i+20)) water_list.append(openmc.Cell(cell_id=i+30)) for j, fuels in enumerate(fuel_list): fuels.region = -fuel_or & +z_list[j] & -z_list[j+1] fuels.fill = uo2 fuels.temperature = 800. for j, gaps in enumerate(gap_list): gaps.region = +fuel_or & -clad_ir & +z_list[j] & -z_list[j+1] gaps.fill = helium gaps.temperature = 700. for j, clads in enumerate(clad_list): clads.region = +clad_ir & -clad_or & +z_list[j] & -z_list[j+1] clads.fill = zircaloy clads.temperature = 600. for j, waters in enumerate(water_list): waters.region = +clad_or & +left & -right & +back & -front & +z_list[j] & -z_list[j+1] waters.fill = material_list[j+3] waters.temperature = 500. root = openmc.Universe(name='root universe') root.add_cells(fuel_list) root.add_cells(gap_list) root.add_cells(clad_list) root.add_cells(water_list) geometry_file = openmc.Geometry(root) geometry_file.export_to_xml() Explanation: Cells: we will make a fuel cylinder, a gap cylinder, a cladding cylinder, and a water exterior. Each one will be broken into 10 cells which are the 10 axial zones. The z_list is the list of axial positions that delimit those 10 zones. 
To keep track of all the cells, we will create lists: fuel_list, gap_list, clad_list, and water_list. End of explanation cell_filter = openmc.CellFilter(fuel_list) t = openmc.Tally(tally_id=1) t.filters.append(cell_filter) t.scores = ['fission-q-recoverable'] tallies = openmc.Tallies([t]) tallies.export_to_xml() Explanation: If you are coupling this externally to a heat transfer solver, you will want to know the heat deposited by each fuel cell. So let's create a cell filter for the recoverable fission heat. End of explanation root.plot(basis='yz', width=[2, 10], color_by='material', origin=[0., 0., 150.], pixels=[400, 400]) Explanation: Let's plot our geometry to make sure it looks like we expect. Since we made new water materials in each axial cell, and we have centered the plot at 150, we should see one color for the water material in the bottom half and a different color for the water material in the top half. End of explanation lower_left = [-0.62992, -pitch/2, 0] upper_right = [+0.62992, +pitch/2, +300] uniform_dist = openmc.stats.Box(lower_left, upper_right, only_fissionable=True) settings_file = openmc.Settings() settings_file.batches = 100 settings_file.inactive = 10 settings_file.particles = 10000 settings_file.temperature = {'multipole': True, 'method': 'interpolation', 'range': [290, 2500]} settings_file.source = openmc.source.Source(space=uniform_dist) settings_file.export_to_xml() Explanation: Settings: everything will be standard except for the temperature settings. Since we will be working with specified temperatures, you will need temperature dependent data. I typically use the endf data found here: https://openmc.org/official-data-libraries/ Make sure your cross sections environment variable is pointing to temperature-dependent data before using the following settings. End of explanation openmc.lib.init() openmc.lib.simulation_init() Explanation: To run a regular simulation, just use openmc.run(). 
However, we want to run a simulation that we can stop in the middle and update the material and cell properties. So we will use openmc.lib. End of explanation for _ in range(14): openmc.lib.next_batch() Explanation: There are 10 inactive batches, so we need to run next_batch() at least 10 times before the tally is activated. End of explanation t = openmc.lib.tallies[1] print(t.mean) Explanation: Let's take a look at the tally. There are 10 entries, one for each cell in the fuel. End of explanation print("fuel temperature is: ") print(openmc.lib.cells[5].get_temperature()) print("gap temperature is: ") print(openmc.lib.cells[15].get_temperature()) print("clad temperature is: ") print(openmc.lib.cells[25].get_temperature()) print("water temperature is: ") print(openmc.lib.cells[35].get_temperature()) for i in range(1, 11): temp = 900.0 openmc.lib.cells[i].set_temperature(temp) print("fuel temperature is: ") print(openmc.lib.cells[5].get_temperature()) Explanation: Now, let's make some changes to the temperatures. For this, we need to identify each cell by its id. We can use get_temperature() to compare the temperatures of the cells before and after the change. End of explanation for i in range(4, 14): density = 0.65 openmc.lib.materials[i].set_density(density, units='g/cm3') Explanation: Let's make a similar change for the water density. Again, we need to identify each material by its id. End of explanation for _ in range(14): openmc.lib.next_batch() Explanation: The new batches we run will use the new material and cell properties. End of explanation openmc.lib.simulation_finalize() openmc.lib.finalize() Explanation: When you're ready to end the simulation, use the following: End of explanation
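For the heat-transfer coupling this tutorial targets, the fuel-cell tally usually gets turned into a normalized axial power shape before it is handed to the thermal solver between batches. A small bookkeeping sketch — the tally values and the rod power below are placeholders, not output from the run above:

```python
# Stand-in for openmc.lib.tallies[1].mean flattened to one value per fuel cell;
# the numbers are illustrative (arbitrary units), chosen symmetric about mid-core.
heat = [1.2, 1.9, 2.6, 3.1, 3.3, 3.3, 3.1, 2.6, 1.9, 1.2]

total = sum(heat)
fractions = [h / total for h in heat]  # axial power shape, sums to 1

rod_power = 66.0e3  # assumed total rod power in W, illustrative only
zone_power = [f * rod_power for f in fractions]  # source term per axial zone

print([round(f, 3) for f in fractions])
```

Each entry of zone_power lines up with one of the ten fuel cells, so the thermal solver's feedback (new cell temperatures and water densities) can be pushed back with set_temperature() and set_density() exactly as above before the next batch.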
Given the following text description, write Python code to implement the functionality described below step by step Description: Data exploration To start with, let us load the dataframe, summarize the columns, and plot a scatter matrix of the data to check for e.g. missing values, non-linear scaling, etc. Step1: Scatter matrix. None of the features appear to require rescaling transformations e.g. on log scales... Step2: Constructing a logistic regression classifier Intriguingly, the logistic Step3: Measuring precision/recall and ROC curves
Python Code: import pandas as pd # Sample code number: id number # Clump Thickness: 1 - 10 # 3. Uniformity of Cell Size: 1 - 10 # 4. Uniformity of Cell Shape: 1 - 10 # 5. Marginal Adhesion: 1 - 10 # 6. Single Epithelial Cell Size: 1 - 10 # 7. Bare Nuclei: 1 - 10 # 8. Bland Chromatin: 1 - 10 # 9. Normal Nucleoli: 1 - 10 # 10. Mitoses: 1 - 10 # 11. Class: (2 for benign, 4 for malignant) names = ['sampleid', 'clumpthickness', 'sizeuniformity', 'shapeuniformity', 'adhesion', 'epithelialsize', 'barenuclei', 'blandchromatin', 'normalnucleoli', 'mitoses', 'cellclass'] df = pd.read_csv('./breast-cancer-wisconsin.data', names=names) # df.drop('sampleid') df.drop('sampleid', axis=1, inplace=True) df.head(10) df.cellclass = (df.cellclass == 4).astype(int) # It turns out one column is a string, but should be an int... df.barenuclei = df.barenuclei.values.astype(int) df.describe() # Check the class balance. Turns out to be pretty good so we should have a relatively unbiased view # After the mapping above, benign is coded as 0 and malignant as 1 print 'Num Benign', (df.cellclass==0).sum(), 'Num Malignant', (df.cellclass==1).sum() Explanation: Data exploration To start with, let us load the dataframe, summarize the columns, and plot a scatter matrix of the data to check for e.g. missing values, non-linear scaling, etc. End of explanation from pandas.tools.plotting import scatter_matrix _ = scatter_matrix(df, figsize=(14,14), alpha=.4) Explanation: Scatter matrix. None of the features appear to require rescaling transformations e.g. on log scales...
End of explanation from sklearn.linear_model import LogisticRegression from sklearn import cross_validation from sklearn import svm LR = LogisticRegression(penalty='l1', dual=False, tol=0.0001, C=1, fit_intercept=True, intercept_scaling=1, class_weight=None, random_state=None, solver='liblinear', max_iter=100, multi_class='ovr', verbose=1, warm_start=False, n_jobs=1) X, Y = df.astype(np.float32).get_values()[:,:-1], df.get_values()[:,-1] X2 = np.append(X,X**2, axis=1) print X2.shape LR.fit(X, Y) print LR.score(X,Y) C_list = np.logspace(-1, 2, 15) CV_scores = [] CV_scores2 = [] for c in C_list: LR = LogisticRegression(penalty='l1', dual=False, tol=0.0001, C=c, fit_intercept=True, intercept_scaling=1, class_weight=None, random_state=None, solver='liblinear', max_iter=100, multi_class='ovr', verbose=1, warm_start=False, n_jobs=1) CV_scores.append(np.average(cross_validation.cross_val_score(LR, X, Y, cv=6, n_jobs=12))) svm_class = svm.SVC(C=c, kernel='linear', gamma='auto', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape=None, random_state=None) CV_scores2.append(np.average(cross_validation.cross_val_score(svm_class, X, Y, cv=6, n_jobs=12))) plt.plot(C_list, CV_scores, marker='o', label='Logistic Regression L1 loss') plt.plot(C_list, CV_scores2, marker='o', label='SVM-Linear') plt.xscale('log') plt.xlabel(r'C = 1/$\lambda$') plt.legend(loc=4) from sklearn.metrics import confusion_matrix LR = LogisticRegression(penalty='l1', dual=False, tol=0.0001, C=1e10, fit_intercept=True, intercept_scaling=1, class_weight=None, random_state=None, solver='liblinear', max_iter=100, multi_class='ovr', verbose=1, warm_start=False, n_jobs=1) LR.fit(X[:300],Y[:300]) svm_class = svm.SVC(C=10., kernel='linear', gamma='auto', coef0=0.0, shrinking=True, probability=True, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape=None, random_state=None) 
svm_class.fit(X[:300],Y[:300]) # Confusion matrix print print 'Confusion Matrix - LASSO Regression' print confusion_matrix(y_true=Y[300:], y_pred=LR.predict(X[300:])) print 'Confusion Matrix - SVM-Linear' print confusion_matrix(y_true=Y[300:], y_pred=svm_class.predict(X[300:])) Explanation: Constructing a logistic regression classifier Intriguingly, the logistic End of explanation from sklearn.metrics import roc_curve from sklearn.metrics import roc_auc_score from sklearn.metrics import precision_recall_curve plt.figure(figsize=(7,2)) plt.subplot(121) prec, rec, thresh = precision_recall_curve(y_true=Y[300:], probas_pred=LR.predict_proba(X[300:])[:,1]) plt.plot(rec, prec,) plt.xlabel('Recall') plt.ylabel('Precision') plt.xlim(0,1) plt.ylim(0,1) plt.subplot(122) fp, tp, thresh = roc_curve(y_true=Y[300:], y_score=LR.predict_proba(X[300:])[:,1]) AUC = roc_auc_score(y_true=Y[300:], y_score=LR.predict_proba(X[300:])[:,1]) roc_curve(y_true=Y[300:], y_score=LR.predict_proba(X[300:])[:,1]) plt.text(.05, .05, 'AUC=%1.3f'%AUC) plt.plot(fp, tp, linewidth=2) plt.xlabel('False Positives') plt.ylabel('True Positives') Explanation: Measuring precision/recall and ROC curves End of explanation
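The confusion matrices printed above contain everything needed for the headline metrics. A small sketch (compatible with this notebook's Python 2 style as well as Python 3) of turning one 2x2 matrix into precision, recall, specificity and F1 — the counts are illustrative, not the model's actual output:

```python
# sklearn's confusion_matrix layout for labels {0, 1}:
# [[tn, fp],
#  [fn, tp]]
cm = [[240, 10],
      [5, 135]]

(tn, fp), (fn, tp) = cm

precision = float(tp) / (tp + fp)    # of predicted malignant, truly malignant
recall = float(tp) / (tp + fn)       # of true malignant, how many were caught
specificity = float(tn) / (tn + fp)  # of true benign, how many stayed benign
f1 = 2 * precision * recall / (precision + recall)

print("precision=%.3f recall=%.3f specificity=%.3f f1=%.3f"
      % (precision, recall, specificity, f1))
```

Sweeping the decision threshold of predict_proba and recomputing these numbers at each cut is exactly what precision_recall_curve and roc_curve do internally.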
Given the following text description, write Python code to implement the functionality described below step by step Description: What is the True Normal Human Body Temperature? Background The mean normal body temperature was held to be 37$^{\circ}$C or 98.6$^{\circ}$F for more than 120 years since it was first conceptualized and reported by Carl Wunderlich in a famous 1868 book. But, is this value statistically correct? <h3>Exercises</h3> <p>In this exercise, you will analyze a dataset of human body temperatures and employ the concepts of hypothesis testing, confidence intervals, and statistical significance.</p> <p>Answer the following questions <b>in this notebook below and submit to your Github account</b>.</p> <ol> <li> Is the distribution of body temperatures normal? <ul> <li> Although this is not a requirement for CLT to hold (read CLT carefully), it gives us some peace of mind that the population may also be normally distributed if we assume that this sample is representative of the population. </ul> <li> Is the sample size large? Are the observations independent? <ul> <li> Remember that this is a condition for the CLT, and hence the statistical tests we are using, to apply. </ul> <li> Is the true population mean really 98.6 degrees F? <ul> <li> Would you use a one-sample or two-sample test? Why? <li> In this situation, is it appropriate to use the $t$ or $z$ statistic? <li> Now try using the other test. How is the result be different? Why? </ul> <li> Draw a small sample of size 10 from the data and repeat both tests. <ul> <li> Which one is the correct one to use? <li> What do you notice? What does this tell you about the difference in application of the $t$ and $z$ statistic? </ul> <li> At what temperature should we consider someone's temperature to be "abnormal"? <ul> <li> Start by computing the margin of error and confidence interval. </ul> <li> Is there a significant difference between males and females in normal temperature? 
<ul> <li> What test did you use and why? <li> Write a story with your conclusion in the context of the original problem. </ul> </ol> You can include written notes in notebook cells using Markdown Step1: Questions and Answers 1. Is the distribution of body temperatures normal? Yes. Based on the shape of the curve plotted with sample data, we have a normal distribution of body temperature. Step2: 2. Is the sample size large? Are the observations independent? Yes. We have 130 records in the sample data file (df.size returns 390 because it counts every cell across the three columns).<br> There is no connection or dependence between the measured temperature values, in other words, the observations are independent. Step3: What we know about population and what we get from sample dataset<br> Step4: 3. Is the true population mean really 98.6 degrees F? Ho or Null hypothesis Step5: a) Would you use a one-sample or two-sample test? Why?<br> * ??? b) In this situation, is it appropriate to use the t or z statistic?<br> * ??? c) Now try using the other test. How will the result be different? Why?<br> ? 4. Draw a small sample of size 10 from the data and repeat both tests. Step6: The histogram
Python Code: import pandas as pd df = pd.read_csv('data/human_body_temperature.csv') # Your work here. # Load Matplotlib + Seaborn and SciPy libraries import matplotlib.pyplot as plt import seaborn as sns import numpy as np from scipy import stats %matplotlib inline df.head(5) Explanation: What is the True Normal Human Body Temperature? Background The mean normal body temperature was held to be 37$^{\circ}$C or 98.6$^{\circ}$F for more than 120 years since it was first conceptualized and reported by Carl Wunderlich in a famous 1868 book. But, is this value statistically correct? <h3>Exercises</h3> <p>In this exercise, you will analyze a dataset of human body temperatures and employ the concepts of hypothesis testing, confidence intervals, and statistical significance.</p> <p>Answer the following questions <b>in this notebook below and submit to your Github account</b>.</p> <ol> <li> Is the distribution of body temperatures normal? <ul> <li> Although this is not a requirement for CLT to hold (read CLT carefully), it gives us some peace of mind that the population may also be normally distributed if we assume that this sample is representative of the population. </ul> <li> Is the sample size large? Are the observations independent? <ul> <li> Remember that this is a condition for the CLT, and hence the statistical tests we are using, to apply. </ul> <li> Is the true population mean really 98.6 degrees F? <ul> <li> Would you use a one-sample or two-sample test? Why? <li> In this situation, is it appropriate to use the $t$ or $z$ statistic? <li> Now try using the other test. How is the result be different? Why? </ul> <li> Draw a small sample of size 10 from the data and repeat both tests. <ul> <li> Which one is the correct one to use? <li> What do you notice? What does this tell you about the difference in application of the $t$ and $z$ statistic? </ul> <li> At what temperature should we consider someone's temperature to be "abnormal"? 
<ul> <li> Start by computing the margin of error and confidence interval. </ul> <li> Is there a significant difference between males and females in normal temperature? <ul> <li> What test did you use and why? <li> Write a story with your conclusion in the context of the original problem. </ul> </ol> You can include written notes in notebook cells using Markdown: - In the control panel at the top, choose Cell > Cell Type > Markdown - Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet Resources Information and data sources: http://www.amstat.org/publications/jse/datasets/normtemp.txt, http://www.amstat.org/publications/jse/jse_data_archive.htm Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet End of explanation ax = sns.distplot(df[['temperature']], rug=True, axlabel='Temperature (o F)') Explanation: Questions and Answers 1. Is the distribution of body temperatures normal? Yes. Based on the shape of the curve plotted with sample data, we have a normal distribution of body temperature. End of explanation # Sample (dataset) size df['temperature'].describe() # Population mean temperature POP_MEAN = 98.6 # Sample size, mean and standard deviation sample_size = df['temperature'].count() sample_mean = df['temperature'].mean() sample_std = df['temperature'].std(axis=0) Explanation: 2. Is the sample size large? Are the observations independent? Yes. We have 130 records in the sample data file (df.size returns 390 because it counts cells, not rows).<br> There is no connection or dependence between the measured temperature values, in other words, the observations are independent. 
End of explanation print("Population mean temperature (given): POP_MEAN = " + str(POP_MEAN)) print("Sample size: sample_size = " + str(sample_size)) print("Sample mean: sample_mean = "+ str(sample_mean)) print("Sample standard deviation: sample_std = "+ str(sample_std)) Explanation: What we know about population and what we get from sample dataset<br> End of explanation # t distribution t = ((sample_mean - POP_MEAN) / sample_std) * np.sqrt(sample_size) # degrees of freedom degree = sample_size - 1 # two-sided p-value p = 2 * stats.t.sf(abs(t), df=degree) # t-stats and p-value print("t = " + str(t)) print("p = " + str(p)) Explanation: 3. Is the true population mean really 98.6 degrees F? Ho or Null hypothesis: Average body temperature is 98.6 degrees F Ha or Alternative hypothesis: Average body temperature is not 98.6 degrees F ??? How to validate these hypotheses? End of explanation # A sample of 10 randomly drawn records from the original dataset df_sample10 = df.sample(n=10) Explanation: a) Would you use a one-sample or two-sample test?<br> * ??? b) In this situation, is it appropriate to use the t or z statistic?<br> * ??? c) Now try using the other test. How is the result different? Why?<br> ? 4. Draw a small sample of size 10 from the data and repeat both tests. End of explanation ax = sns.distplot(df_sample10[['temperature']], rug=True, axlabel='Temperature (o F)') Explanation: The histogram: End of explanation
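The one-sample test against POP_MEAN = 98.6 discussed above boils down to a few lines. The following is a standalone sketch: the summary numbers are placeholders standing in for `df['temperature'].mean()`, `.std()` and `.count()` (assumed values, not read from the CSV), and since n = 130 is large, the normal (z) approximation of the p-value is used so that only the standard library is needed.

```python
import math

# Placeholder summary statistics (assumed, not read from the CSV; with the
# real file use df['temperature'].mean(), .std(axis=0) and .count())
POP_MEAN = 98.6
sample_size = 130
sample_mean = 98.25
sample_std = 0.73

# one-sample test statistic: (x_bar - mu0) / (s / sqrt(n))
t = (sample_mean - POP_MEAN) / (sample_std / math.sqrt(sample_size))

def normal_sf(x):
    # survival function of the standard normal, via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# two-sided p-value; for n = 130 the t and z distributions are nearly identical
p_two_sided = 2.0 * normal_sf(abs(t))
print(t, p_two_sided)
```

With the real data, `scipy.stats.ttest_1samp(df['temperature'], 98.6)` gives the exact t-distribution answer for comparison.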
7,535
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1>2b. Machine Learning using tf.estimator </h1> In this notebook, we will create a machine learning model using tf.estimator and evaluate its performance. The dataset is rather small (7700 samples), so we can do it all in-memory. We will also simply pass the raw data in as-is. Step1: Read data created in the previous chapter. Step2: <h2> Train and eval input functions to read from Pandas Dataframe </h2> Step3: Our input function for predictions is the same except we don't provide a label Step4: Create feature columns for estimator Step5: <h3> Linear Regression with tf.Estimator framework </h3> Step6: Evaluate on the validation data (we should defer using the test data to after we have selected a final model). Step7: This is nowhere near our benchmark (RMSE of $6 or so on this data), but it serves to demonstrate what TensorFlow code looks like. Let's use this model for prediction. Step8: This explains why the RMSE was so high -- the model essentially predicts the same amount for every trip. Would a more complex model help? Let's try using a deep neural network. The code to do this is quite straightforward as well. <h3> Deep Neural Network regression </h3> Step11: We are not beating our benchmark with either model ... what's up? Well, we may be using TensorFlow for Machine Learning, but we are not yet using it well. That's what the rest of this course is about! But, for the record, let's say we had to choose between the two models. We'd choose the one with the lower validation error. Finally, we'd measure the RMSE on the test data with this chosen model. <h2> Benchmark dataset </h2> Let's do this on the benchmark dataset.
Python Code: !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst # Ensure the right version of Tensorflow is installed. !pip freeze | grep tensorflow==2.6 import tensorflow as tf import pandas as pd import numpy as np import shutil print(tf.__version__) Explanation: <h1>2b. Machine Learning using tf.estimator </h1> In this notebook, we will create a machine learning model using tf.estimator and evaluate its performance. The dataset is rather small (7700 samples), so we can do it all in-memory. We will also simply pass the raw data in as-is. End of explanation # In CSV, label is the first column, after the features, followed by the key CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key'] FEATURES = CSV_COLUMNS[1:len(CSV_COLUMNS) - 1] LABEL = CSV_COLUMNS[0] df_train = pd.read_csv('./taxi-train.csv', header = None, names = CSV_COLUMNS) df_valid = pd.read_csv('./taxi-valid.csv', header = None, names = CSV_COLUMNS) df_test = pd.read_csv('./taxi-test.csv', header = None, names = CSV_COLUMNS) Explanation: Read data created in the previous chapter. 
End of explanation def make_train_input_fn(df, num_epochs): return tf.compat.v1.estimator.inputs.pandas_input_fn( x = df, y = df[LABEL], batch_size = 128, num_epochs = num_epochs, shuffle = True, queue_capacity = 1000 ) def make_eval_input_fn(df): return tf.compat.v1.estimator.inputs.pandas_input_fn( x = df, y = df[LABEL], batch_size = 128, shuffle = False, queue_capacity = 1000 ) Explanation: <h2> Train and eval input functions to read from Pandas Dataframe </h2> End of explanation def make_prediction_input_fn(df): return tf.compat.v1.estimator.inputs.pandas_input_fn( x = df, y = None, batch_size = 128, shuffle = False, queue_capacity = 1000 ) Explanation: Our input function for predictions is the same except we don't provide a label End of explanation def make_feature_cols(): input_columns = [tf.feature_column.numeric_column(k) for k in FEATURES] return input_columns Explanation: Create feature columns for estimator End of explanation tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO) OUTDIR = 'taxi_trained' shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time model = tf.estimator.LinearRegressor( feature_columns = make_feature_cols(), model_dir = OUTDIR) model.train(input_fn = make_train_input_fn(df_train, num_epochs = 10)) Explanation: <h3> Linear Regression with tf.Estimator framework </h3> End of explanation def print_rmse(model, df): metrics = model.evaluate(input_fn = make_eval_input_fn(df)) print('RMSE on dataset = {}'.format(np.sqrt(metrics['average_loss']))) print_rmse(model, df_valid) Explanation: Evaluate on the validation data (we should defer using the test data to after we have selected a final model). End of explanation predictions = model.predict(input_fn = make_prediction_input_fn(df_test)) for items in predictions: print(items) Explanation: This is nowhere near our benchmark (RMSE of $6 or so on this data), but it serves to demonstrate what TensorFlow code looks like. Let's use this model for prediction. 
End of explanation tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO) shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time model = tf.estimator.DNNRegressor(hidden_units = [32, 8, 2], feature_columns = make_feature_cols(), model_dir = OUTDIR) model.train(input_fn = make_train_input_fn(df_train, num_epochs = 100)); print_rmse(model, df_valid) Explanation: This explains why the RMSE was so high -- the model essentially predicts the same amount for every trip. Would a more complex model help? Let's try using a deep neural network. The code to do this is quite straightforward as well. <h3> Deep Neural Network regression </h3> End of explanation from google.cloud import bigquery import numpy as np import pandas as pd def create_query(phase, EVERY_N): phase: 1 = train 2 = valid base_query = SELECT (tolls_amount + fare_amount) AS fare_amount, EXTRACT(DAYOFWEEK FROM pickup_datetime) * 1.0 AS dayofweek, EXTRACT(HOUR FROM pickup_datetime) * 1.0 AS hourofday, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count*1.0 AS passengers, CONCAT(CAST(pickup_datetime AS STRING), CAST(pickup_longitude AS STRING), CAST(pickup_latitude AS STRING), CAST(dropoff_latitude AS STRING), CAST(dropoff_longitude AS STRING)) AS key FROM `nyc-tlc.yellow.trips` WHERE trip_distance > 0 AND fare_amount >= 2.5 AND pickup_longitude > -78 AND pickup_longitude < -70 AND dropoff_longitude > -78 AND dropoff_longitude < -70 AND pickup_latitude > 37 AND pickup_latitude < 45 AND dropoff_latitude > 37 AND dropoff_latitude < 45 AND passenger_count > 0 if EVERY_N == None: if phase < 2: # Training query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 4)) < 2".format(base_query) else: # Validation query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 4)) = {1}".format(base_query, phase) else: query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS 
STRING)), {1})) = {2}".format(base_query, EVERY_N, phase) return query query = create_query(2, 100000) df = bigquery.Client().query(query).to_dataframe() print_rmse(model, df) Explanation: We are not beating our benchmark with either model ... what's up? Well, we may be using TensorFlow for Machine Learning, but we are not yet using it well. That's what the rest of this course is about! But, for the record, let's say we had to choose between the two models. We'd choose the one with the lower validation error. Finally, we'd measure the RMSE on the test data with this chosen model. <h2> Benchmark dataset </h2> Let's do this on the benchmark dataset. End of explanation
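For the record, the RMSE that `print_rmse` derives from `average_loss` above is just the root of the mean squared prediction error; a dependency-free sketch of the same computation on made-up numbers (not the taxi data):

```python
import math

def rmse(predictions, actuals):
    # root-mean-squared error: sqrt(mean((prediction - actual)^2))
    assert len(predictions) == len(actuals) and len(actuals) > 0
    squared_errors = [(p - a) ** 2 for p, a in zip(predictions, actuals)]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# toy fares: errors of 1, -1 and 2 dollars -> sqrt((1 + 1 + 4) / 3) = sqrt(2)
print(rmse([11.0, 9.0, 12.0], [10.0, 10.0, 10.0]))
```

This also shows why a model that predicts roughly the same fare for every trip can still report a finite-looking RMSE while being useless: the metric only averages squared errors over all trips.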
7,536
Given the following text description, write Python code to implement the functionality described below step by step Description: Geotransforms May-June, 2018, Mauro Alberti Last version Step1: Forward and backward transformation examples Step2: calculating the X, Y geographic coordinate arrays
Python Code: from pygsf.geometries.grids.geotransform import * gt1 = GeoTransform(1500, 3000, 10, 10) gt1 Explanation: Geotransforms May-June, 2018, Mauro Alberti Last version: 2021-04-24 Last running version: 2021-04-24 1. Examples End of explanation ijPixToxyGeogr(gt1, 0, 0) xyGeogrToijPix(gt1, 1500, 3000) ijPixToxyGeogr(gt1, 1, 1) xyGeogrToijPix(gt1, 1510, 2990) ijPixToxyGeogr(gt1, 10, 10) xyGeogrToijPix(gt1, 1600, 3100) Explanation: Forward and backward transformation examples End of explanation X, Y = gtToxyCellCenters( gt=gt1, num_rows=10, num_cols=5) print(X) print(Y) Explanation: calculating the X, Y geographic coordinate arrays End of explanation
7,537
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Landice MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Grid 4. Glaciers 5. Ice 6. Ice --&gt; Mass Balance 7. Ice --&gt; Mass Balance --&gt; Basal 8. Ice --&gt; Mass Balance --&gt; Frontal 9. Ice --&gt; Dynamics 1. Key Properties Land ice key properties 1.1. Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Ice Albedo Is Required Step7: 1.4. Atmospheric Coupling Variables Is Required Step8: 1.5. Oceanic Coupling Variables Is Required Step9: 1.6. Prognostic Variables Is Required Step10: 2. Key Properties --&gt; Software Properties Software properties of land ice code 2.1. Repository Is Required Step11: 2.2. Code Version Is Required Step12: 2.3. Code Languages Is Required Step13: 3. Grid Land ice grid 3.1. Overview Is Required Step14: 3.2. Adaptive Grid Is Required Step15: 3.3. Base Resolution Is Required Step16: 3.4. Resolution Limit Is Required Step17: 3.5. Projection Is Required Step18: 4. Glaciers Land ice glaciers 4.1. Overview Is Required Step19: 4.2. Description Is Required Step20: 4.3. Dynamic Areal Extent Is Required Step21: 5. Ice Ice sheet and ice shelf 5.1. Overview Is Required Step22: 5.2. Grounding Line Method Is Required Step23: 5.3. Ice Sheet Is Required Step24: 5.4. Ice Shelf Is Required Step25: 6. Ice --&gt; Mass Balance Description of the surface mass balance treatment 6.1. Surface Mass Balance Is Required Step26: 7. Ice --&gt; Mass Balance --&gt; Basal Description of basal melting 7.1. Bedrock Is Required Step27: 7.2. Ocean Is Required Step28: 8. Ice --&gt; Mass Balance --&gt; Frontal Description of claving/melting from the ice shelf front 8.1. 
Calving Is Required Step29: 8.2. Melting Is Required Step30: 9. Ice --&gt; Dynamics ** 9.1. Description Is Required Step31: 9.2. Approximation Is Required Step32: 9.3. Adaptive Timestep Is Required Step33: 9.4. Timestep Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'ncar', 'sandbox-3', 'landice') Explanation: ES-DOC CMIP6 Model Properties - Landice MIP Era: CMIP6 Institute: NCAR Source ID: SANDBOX-3 Topic: Landice Sub-Topics: Glaciers, Ice. Properties: 30 (21 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:22 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Grid 4. Glaciers 5. Ice 6. Ice --&gt; Mass Balance 7. Ice --&gt; Mass Balance --&gt; Basal 8. Ice --&gt; Mass Balance --&gt; Frontal 9. Ice --&gt; Dynamics 1. Key Properties Land ice key properties 1.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of land surface model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. 
Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of land surface model code End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.ice_albedo') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "prescribed" # "function of ice age" # "function of ice density" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Ice Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify how ice albedo is modelled End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.4. Atmospheric Coupling Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which variables are passed between the atmosphere and ice (e.g. orography, ice mass) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.5. Oceanic Coupling Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which variables are passed between the ocean and ice End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "ice velocity" # "ice thickness" # "ice temperature" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which variables are prognostically calculated in the ice model End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.landice.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Software Properties Software properties of land ice code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3. Grid Land ice grid 3.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the grid in the land ice scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3.2. Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is an adaptive grid being used? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.landice.grid.base_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.3. Base Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The base resolution (in metres), before any adaption End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.resolution_limit') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.4. Resolution Limit Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If an adaptive grid is being used, what is the limit of the resolution (in metres) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.projection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.5. Projection Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The projection of the land ice grid (e.g. albers_equal_area) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.glaciers.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Glaciers Land ice glaciers 4.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of glaciers in the land ice scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.glaciers.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of glaciers, if any End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 4.3. Dynamic Areal Extent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does the model include a dynamic glacial extent? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Ice Ice sheet and ice shelf 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the ice sheet and ice shelf in the land ice scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.grounding_line_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "grounding line prescribed" # "flux prescribed (Schoof)" # "fixed grid size" # "moving grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 5.2. Grounding Line Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.ice_sheet') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.3. Ice Sheet Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are ice sheets simulated? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.ice_shelf') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.4. 
Ice Shelf Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are ice shelves simulated? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Ice --&gt; Mass Balance Description of the surface mass balance treatment 6.1. Surface Mass Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Ice --&gt; Mass Balance --&gt; Basal Description of basal melting 7.1. Bedrock Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of basal melting over bedrock End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of basal melting over the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Ice --&gt; Mass Balance --&gt; Frontal Description of calving/melting from the ice shelf front 8.1. 
Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of calving from the front of the ice shelf End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Melting Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of melting from the front of the ice shelf End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Ice --&gt; Dynamics ** 9.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of ice sheet and ice shelf dynamics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.approximation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SIA" # "SAA" # "full stokes" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9.2. Approximation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Approximation type used in modelling ice dynamics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 9.3. Adaptive Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there an adaptive time scheme for the ice scheme? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.landice.ice.dynamics.timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 9.4. Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep. End of explanation
7,538
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1> Homework 2 </h1> Matt Buchovecky Astro 283 Step1: Set of measurements $\left{Y_{k}\right}$ at known locations $\left{X_{k}\right}$ Gaussian uncertainty $\sigma=1.0$ $p=30\%$ chance of adding constant factor $5.0$ fit a line $y=a_{0}+a_{1}x$ we want to find the parameters of the above function that will maximize the measured data given the known points $p\left(\vec{z}\bigr|\left{Y_{k},X_{k},\sigma_{k}\right}\right) \propto p\left(\left{Y_{k}\right}\bigr|\vec{a},\left{X_{k},\sigma_{k}\right}\right)p\left(\vec{a}\bigr|\left{\sigma_{k}\right}\right)$ we will assume a uniform or mostly flat prior, ignore the probability of the measurements, and drop the $\sigma$ and $X$ terms, so the likelihood will dominate, and we want to maximize $p\left(\vec{a}\bigr|\left{Y_{k}\right}\right) \propto p\left(\left{Y_{k}\right}\bigr|\vec{a},\right)$ if the data measurements are independent, then the probability density is separable Step2: <h3>Define the negative of the posterior, as defined above. This is the function to be minimized</h3> Step3: <h3>i)</h3> Step4: <h3>ii)</h3> Step5: <h3>iii)</h3> The points here are near the visible maxes on the 2D contour plot that have not been found yet Step6: Plot results Step7: The legend isn't working properly. The legend should read
Python Code: from scipy import random, optimize, std from matplotlib import pyplot %matplotlib inline import numpy import csv Explanation: <h1> Homework 2 </h1> Matt Buchovecky Astro 283 End of explanation sigma_meas = 1.0 # standard deviation of measurements p_err = 0.30 # probability of experimental mistake occurring err_add = 5.0 # additive increase error reader = csv.reader(open('hw2prob1-data.txt','rt'), delimiter = ' ') x_data = [ ] y_data = [ ] for row in reader: if len(row) == 0: continue try: float(row[0]) and float(row[1]) and x_data.append(float(row[0])) y_data.append(float(row[1])) except ValueError: continue x_data = numpy.asarray(x_data) y_data = numpy.asarray(y_data) print(x_data) print(y_data) Explanation: Set of measurements $\left{Y_{k}\right}$ at known locations $\left{X_{k}\right}$ Gaussian uncertainty $\sigma=1.0$ $p=30\%$ chance of adding constant factor $5.0$ fit a line $y=a_{0}+a_{1}x$ we want to find the parameters of the above function that will maximize the measured data given the known points $p\left(\vec{z}\bigr|\left{Y_{k},X_{k},\sigma_{k}\right}\right) \propto p\left(\left{Y_{k}\right}\bigr|\vec{a},\left{X_{k},\sigma_{k}\right}\right)p\left(\vec{a}\bigr|\left{\sigma_{k}\right}\right)$ we will assume a uniform or mostly flat prior, ignore the probability of the measurements, and drop the $\sigma$ and $X$ terms, so the likelihood will dominate, and we want to maximize $p\left(\vec{a}\bigr|\left{Y_{k}\right}\right) \propto p\left(\left{Y_{k}\right}\bigr|\vec{a},\right)$ if the data measurements are independent, then the probability density is separable: $p\left(\vec{a}\bigr|\left{Y_{k}\right}\right) \propto \prod_{k}p\left(y_{k}|\vec{a}\right)$ But instead of the standard standalone gaussian distribution, for each data point, we have two mutually exclusive cases. They can be added because they are disjoint, and since they form an exhaustive set, the sum of their probabilities, integrated over the whole parameter space, should equal 1. 
Since we're just doing a maximization we don't care about normalization, but this means we can write the likelihood for each data point as: $p\left(\vec{a}\bigr|y_{k}\right) \propto\left(1-p\right)\exp{\left[-\frac{\left(y_{k}-\left(a_{0}+a_{1}x_{k}\right)\right)^{2}}{2\sigma^{2}}\right]}+p\exp{\left[-\frac{\left(y_{k}-\left(5+a_{0}+a_{1}x_{k}\right)\right)^{2}}{2\sigma^{2}}\right]}$ <h3>Initialize variables and read in the data</h3> End of explanation def negative_posterior(a, x, y, sigma, p): prob = -1 for k in range(0, len(x)): prob *= ((1-p)*numpy.exp(-(y[k]-(a[0]+a[1]*x[k]))**2/(2*sigma**2)) + p*numpy.exp(-(y[k]-(5+a[0]+a[1]*x[k]))**2/(2*sigma**2))) return prob a0 = numpy.arange(-2.0, 14.0, 0.05) a1 = numpy.arange(-1.1, 1.4, 0.05) z = numpy.array([[-negative_posterior([i,j], x_data, y_data, sigma_meas, p_err) for i in a0] for j in a1]) contour_plot = pyplot.contour(a0, a1, z) pyplot.colorbar() pyplot.xlabel('$a_0$', fontsize=20) pyplot.ylabel('$a_1$', fontsize=20) pyplot.title('Posterior probability (unnormalized)', fontsize=15) Explanation: <h3>Define the negative of the posterior, as defined above. 
This is the function to be minimized</h3> End of explanation guess = [ [ 0.1, 0.1 ] ] a = [ optimize.fmin(negative_posterior, guess[0], args=(x_data, y_data, sigma_meas, p_err)) ] print(a) y = [ a[0][0]*numpy.ones(len(x_data)) + a[0][1]*x_data ] Explanation: <h3>i)</h3> End of explanation def chi_square(a, x, y): residuals = numpy.zeros(len(x)) for k in range(0, len(x)): residuals[k] = (y[k]-(a[0]+a[1]*x[k])) return numpy.sum(residuals**2) guess.append(optimize.fmin(chi_square, guess[0], args=(x_data, y_data))) print(guess[1]) a.append(optimize.fmin(negative_posterior, guess[1], args=(x_data, y_data, sigma_meas, p_err))) print(a[1]) y.append(a[1][0]*numpy.ones(len(x_data)) + a[1][1]*x_data) Explanation: <h3>ii)</h3> End of explanation guess.append( [ 1.9, 0.75 ] ) a.append(optimize.fmin(negative_posterior, guess[2], args=(x_data, y_data, sigma_meas, p_err))) print(a[2]) y.append(a[2][0]*numpy.ones(len(x_data)) + a[2][1]*x_data) guess[3] = [ 11.0, -0.6 ] a.append( optimize.fmin(negative_posterior, guess[3], args=(x_data, y_data, sigma_meas, p_err)) ) print(a[3]) y.append( a[3][0]*numpy.ones(len(x_data)) + a[3][1]*x_data ) Explanation: <h3>iii)</h3> The points here are near the visible maxes on the 2D contour plot that have not been found yet End of explanation pyplot.title("y vs x") pyplot.xlabel("x") pyplot.ylabel("y") plots = [ pyplot.plot(x_data, y_data, 'bo', label="Measured") ] plots.append( pyplot.plot(x_data, y_data - 5, 'mo', label="Measured (error)") ) plots.append( pyplot.plot(x_data, y[0], 'r-', label="Guess i") ) plots.append( pyplot.plot(x_data, y[1], 'g-', label="Guess ii") ) plots.append( pyplot.plot(x_data, y[2], 'y-', label="Guess iii") ) plots.append( pyplot.plot(x_data, y[3], 'c-', label="Guess iv") ) pyplot.axis([2.0, 10.0, 1.0, 10.0]) pyplot.legend(plots, loc='center left', bbox_to_anchor=(1., 0.5)) Explanation: Plot results End of explanation l = len(a) params_x = []*l params_y = []*l for i in range(0, l): params_x.append( a[i][0] ) 
params_y.append( a[i][1] ) pyplot.contourf(a0, a1, z) pyplot.plot(params_x, params_y, 'ko') pyplot.xlabel('$a_0$', fontsize=20) pyplot.ylabel('$a_1$', fontsize=20) pyplot.title("Fit param locations") Explanation: The legend isn't working properly. The legend should read: y_meas, y_remove_error, first guess, second, third, fourth End of explanation
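One practical caveat with the negative_posterior above: it multiplies many per-point likelihoods, which underflows toward zero as the dataset grows and flattens the surface the optimizer sees. A minimal sketch (my own variant, not part of the homework) of the same two-component mixture model rewritten in log space, with the same $\sigma$, $p$ and $+5$ shift; np.logaddexp combines the two components without leaving log space:

```python
import numpy as np

def negative_log_posterior(a, x, y, sigma=1.0, p=0.3, shift=5.0):
    # Per-point log-likelihood of the mixture:
    # (1-p) * N(y; a0 + a1*x, sigma) + p * N(y; a0 + a1*x + shift, sigma)
    r0 = y - (a[0] + a[1] * x)          # residuals, no experimental mistake
    r1 = r0 - shift                     # residuals, assuming the +5 mistake
    log0 = np.log(1.0 - p) - r0**2 / (2.0 * sigma**2)
    log1 = np.log(p) - r1**2 / (2.0 * sigma**2)
    # logaddexp(a, b) = log(exp(a) + exp(b)), computed stably
    return -np.sum(np.logaddexp(log0, log1))

# noiseless toy data on the line y = 1 + 2x (hypothetical, just for illustration)
x = np.linspace(0.0, 5.0, 50)
y = 1.0 + 2.0 * x
nlp_true = negative_log_posterior(np.array([1.0, 2.0]), x, y)
nlp_off = negative_log_posterior(np.array([0.0, 0.0]), x, y)
```

Because the logarithm is monotonic, the same optimize.fmin call works unchanged on this objective and finds the same minimizer locations.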
Given the following text description, write Python code to implement the functionality described below step by step Description: Brief look at Cartopy Cartopy is a Python package that provides easy creation of maps with matplotlib. Cartopy vs Basemap Cartopy is better integrated with matplotlib and in a more active development state Proper handling of datelines in cartopy - one of the bugs in basemap (example Step1: Then let's import the cartopy Step2: In addition, we import cartopy's coordinate reference system submodule Step3: Creating GeoAxes Cartopy-matplotlib interface is set up via the projection keyword when constructing Axes / SubAxes The resulting instance (cartopy.mpl.geoaxes.GeoAxesSubplot) has new methods specific to drawing cartographic data, e.g. coastlines Step4: Here we are using a Plate Carrée projection, which is one of equidistant cylindrical projections. A full list of Cartopy projections is available at http Step5: Notice that unless we specify a map extent (we did so via the set_global method in this case) the map will zoom into the range of the plotted data. Decorating the map We can add grid lines and tick labels to the map using the gridlines() method Step6: Unfortunately, gridline labels work only in PlateCarree and Mercator projections. We can control the specific tick values by using matplotlib's locator object, and the formatting can be controlled with matplotlib formatters Step7: Plotting layers directly from Web Map Service (WMS) and Web Map Tile Service (WMTS)
Python Code: import matplotlib.pyplot as plt %matplotlib inline Explanation: Brief look at Cartopy Cartopy is a Python package that provides easy creation of maps with matplotlib. Cartopy vs Basemap Cartopy is better integrated with matplotlib and in a more active development state Proper handling of datelines in cartopy - one of the bugs in basemap (example: Challenger circumnavigation) Cartopy offers powerful vector data handling by integrating shapefile reading with Shapely capabilities Basemap: gridline labels for any projection; limited to a few in cartopy (workaround for Lambert Conic) Basemap has a map scale bar feature (can be buggy); still not implemented in cartopy, but there are some messy workarounds As for the standard matplotlib plots, we first need to import pyplot submodule and make the graphical output appear in the notebook: In order to create a map with cartopy and matplotlib, we typically need to import pyplot from matplotlib and cartopy's crs (coordinate reference system) submodule. These are typically imported as follows: End of explanation import cartopy Explanation: Then let's import the cartopy End of explanation import cartopy.crs as ccrs Explanation: In addition, we import cartopy's coordinate reference system submodule: End of explanation ax = plt.axes(projection=ccrs.PlateCarree()) ax.coastlines() print('axes type:', type(ax)) Explanation: Creating GeoAxes Cartopy-matplotlib interface is set up via the projection keyword when constructing Axes / SubAxes The resulting instance (cartopy.mpl.geoaxes.GeoAxesSubplot) has new methods specific to drawing cartographic data, e.g. 
coastlines: End of explanation ax = plt.axes(projection=ccrs.PlateCarree()) ax.coastlines() ax.set_global() plt.plot([-100, 50], [25, 25], linewidth=4, color='r', transform=ccrs.PlateCarree()) plt.plot([-100, 50], [25, 25], linewidth=4, color='b', transform=ccrs.Geodetic()) Explanation: Here we are using a Plate Carrée projection, which is one of equidistant cylindrical projections. A full list of Cartopy projections is available at http://scitools.org.uk/cartopy/docs/latest/crs/projections.html. Putting georeferenced data on a map Use the standard matplotlib plotting routines with an additional transform keyword. The value of the transform argument should be the cartopy coordinate reference system of the data being plotted End of explanation ax = plt.axes(projection=ccrs.Mercator()) ax.coastlines() gl = ax.gridlines(draw_labels=True) Explanation: Notice that unless we specify a map extent (we did so via the set_global method in this case) the map will zoom into the range of the plotted data. 
Decorating the map We can add grid lines and tick labels to the map using the gridlines() method: End of explanation import matplotlib.ticker as mticker from cartopy.mpl.gridliner import LATITUDE_FORMATTER ax = plt.axes(projection=ccrs.PlateCarree()) ax.coastlines() gl = ax.gridlines(draw_labels=True) gl.xlocator = mticker.FixedLocator([-180, -45, 0, 45, 180]) gl.yformatter = LATITUDE_FORMATTER fig = plt.figure(figsize=(9, 6)) ax = fig.add_subplot(111, projection=ccrs.PlateCarree()) ax.set_global() lons = -75, 77.2, 151.2, -75 lats = 43, 28.6, -33.9, 43 ax.plot(lons, lats, color='green', linewidth=2, marker='o', ms=10, transform=ccrs.Geodetic()) # feature = cartopy.feature.LAND feature = cartopy.feature.NaturalEarthFeature(name='land', category='physical', scale='110m', edgecolor='red', facecolor='black') ax.add_feature(feature) _ = ax.add_feature(cartopy.feature.LAKES, facecolor='b') states = cartopy.feature.NaturalEarthFeature(category='cultural', scale='50m', facecolor='none', name='admin_1_states_provinces_lines') _ = ax.add_feature(states, edgecolor='gray') Explanation: Unfortunately, gridline labels work only in PlateCarree and Mercator projections. We can control the specific tick values by using matplotlib's locator object, and the formatting can be controlled with matplotlib formatters: End of explanation url = 'http://map1c.vis.earthdata.nasa.gov/wmts-geo/wmts.cgi' ax = plt.axes(projection=ccrs.PlateCarree()) ax.add_wmts(url, 'VIIRS_CityLights_2012') Explanation: Plotting layers directly from Web Map Service (WMS) and Web Map Tile Service (WMTS) End of explanation
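Why the straight PlateCarree segment and the curved Geodetic segment in the plotting example above disagree: the great circle between two points at the same latitude bows toward the pole. A numpy-only sketch (no cartopy required; helper name is my own) of the great-circle midpoint for the same endpoints, (-100, 25) and (50, 25):

```python
import numpy as np

def great_circle_midpoint(lon1, lat1, lon2, lat2):
    """Midpoint of the great-circle arc between two (lon, lat) points, degrees in/out."""
    def to_vec(lon, lat):
        lon, lat = np.radians(lon), np.radians(lat)
        # 3D unit vector on the sphere
        return np.array([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)])
    v = to_vec(lon1, lat1) + to_vec(lon2, lat2)  # chord midpoint...
    v /= np.linalg.norm(v)                       # ...projected back onto the sphere
    return (np.degrees(np.arctan2(v[1], v[0])),  # longitude
            np.degrees(np.arcsin(v[2])))         # latitude

mid_lon, mid_lat = great_circle_midpoint(-100.0, 25.0, 50.0, 25.0)
```

The midpoint latitude comes out near 61°N, far north of the constant-latitude 25°N line, which is exactly the bow the Geodetic transform draws.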
Given the following text description, write Python code to implement the functionality described below step by step Description: Step2: Optimization problems, objective functions and optimization benchmarks TODO Step3: Type of optimization problems continuous vs discrete problems (possibly combinatorial if the set of solutions is discrete and very big) unconstrained vs constrained problems deterministic vs stochastic problems convex vs non-convex problems unimodal vs multimodal problems single-objective vs multi-objective problems differentiable vs nondifferentiable problems linear vs nonlinear problems quadratic vs nonquadratic problems derivative-free problems multistage problems Linear Programming (LP) Mixed Integer Nonlinear Programming (MINLP) Quadratically Constrained Quadratic Programming (QCQP) Quadratic Programming (QP) https Step5: Benchmarks Here are some famous benchmarks Step6: Remark Step8: Test functions for non-convex deterministic unconstrained continuous single-objective optimization The (extended) Rosenbrock function The Rosenbrock function is a famous non-convex function used to test the performance of optimization algorithms. The classical two-dimensional version of this function is unimodal but its extended $n$-dimensional version (with $n \geq 4$) is multimodal [ref.]. $$ f(\boldsymbol{x}) = \sum_{i=1}^{n-1} \left[100 \left( x_{i+1} - x_{i}^{2} \right)^{2} + \left( x_{i} - 1 \right)^2 \right] $$ Global minimum Step10: The Himmelblau's function The Himmelblau's function is a two-dimensional multimodal function. $$ f(x_1, x_2) = (x_1^2 + x_2 - 11)^2 + (x_1 + x_2^2 - 7)^2 $$ The function has four global minima Step12: The Rastrigin function The Rastrigin function is a famous multimodal function. Finding the minimum of this function is a fairly difficult problem due to its large search space and its large number of local minima. The classical two-dimensional version of this function has been introduced by L. A. 
Rastrigin in Systems of extremal control Mir, Moscow (1974). Its generalized $n$-dimensional version has been proposed by H. Mühlenbein, D. Schomisch and J. Born in The Parallel Genetic Algorithm as Function Optimizer Parallel Computing, 17, pages 619–632, 1991. On an n-dimensional domain it is defined by Step14: Shekel function The Shekel function is a famous multimodal function. The mathematical form of a function in $n$ dimensions with $m$ local minima is Step16: Cross-in-tray function The Cross-in-tray function is a 2 dimensions multimodal function, with four global minima. $$ f(x_1, x_2) = -0.0001 \left( \left| \sin(x_1) \sin(x_2) \exp \left( \left| 100 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi} \right| \right)\right| + 1 \right)^{0.1} $$ Global minima Step18: Hölder table function The Hölder table function is a 2 dimensions multimodal function, with four global minima. $$ f(x_1, x_2) = -\left| \sin(x_1) \cos(x_2) \exp \left( \left| 1 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi} \right| \right) \right| $$ Global minima
Python Code: %matplotlib inline import numpy as np import matplotlib matplotlib.rcParams['figure.figsize'] = (8, 8) import math import matplotlib.pyplot as plt import matplotlib.colors as colors from mpl_toolkits.mplot3d import axes3d from matplotlib import cm import scipy.optimize def plot_2d_contour_solution_space(func, xmin=-np.ones(2), xmax=np.ones(2), xstar=None, title="", vmin=None, vmax=None, zlog=True, output_file_name=None): TODO fig, ax = plt.subplots(figsize=(12, 8)) x1_space = np.linspace(xmin[0], xmax[0], 200) x2_space = np.linspace(xmin[1], xmax[1], 200) x1_mesh, x2_mesh = np.meshgrid(x1_space, x2_space) zz = func(np.array([x1_mesh.ravel(), x2_mesh.ravel()])).reshape(x1_mesh.shape) ############################ if xstar.ndim == 1: min_value = func(xstar) else: min_value = min(func(xstar)) max_value = zz.max() if vmin is None: if zlog: vmin = 0.1 # TODO else: vmin = min_value if vmax is None: vmax = max_value if zlog: norm = colors.LogNorm() else: norm = None levels = np.logspace(0.1, 3., 5) # TODO im = ax.pcolormesh(x1_mesh, x2_mesh, zz, vmin=vmin, vmax=vmax, norm=norm, shading='gouraud', cmap='gnuplot2') # 'jet' # 'gnuplot2' plt.colorbar(im, ax=ax) cs = plt.contour(x1_mesh, x2_mesh, zz, levels, linewidths=(2, 2, 2, 2, 3), linestyles=('dotted', '-.', 'dashed', 'solid', 'solid'), alpha=0.5, colors='white') ax.clabel(cs, inline=False, fontsize=12) ############################ if xstar is not None: ax.scatter(xstar[0], xstar[1], c='red', label="$x^*$") ax.set_title(title) ax.set_xlabel(r"$x_1$") ax.set_ylabel(r"$x_2$") ax.legend(fontsize=12) if output_file_name is not None: plt.savefig(output_file_name, transparent=True) plt.show() def plot_2d_solution_space(func, xmin=-np.ones(2), xmax=np.ones(2), xstar=None, angle_view=None, title="", zlog=True, output_file_name=None): TODO fig = plt.figure(figsize=(12, 8)) ax = axes3d.Axes3D(fig) if angle_view is not None: ax.view_init(angle_view[0], angle_view[1]) x1_space = np.linspace(xmin[0], xmax[0], 100) x2_space 
= np.linspace(xmin[1], xmax[1], 100) x1_mesh, x2_mesh = np.meshgrid(x1_space, x2_space) zz = func(np.array([x1_mesh.ravel(), x2_mesh.ravel()])).reshape(x1_mesh.shape) # TODO ############################ if zlog: norm = colors.LogNorm() else: norm = None surf = ax.plot_surface(x1_mesh, x2_mesh, zz, cmap='gnuplot2', # 'jet' # 'gnuplot2' norm=norm, rstride=1, cstride=1, #color='b', shade=False) ax.set_zlabel(r"$f(x_1, x_2)$") fig.colorbar(surf, shrink=0.5, aspect=5) ############################ if xstar is not None: ax.scatter(xstar[0], xstar[1], func(xstar), #s=50, # TODO c='red', alpha=1, label="$x^*$") ax.set_title(title) ax.set_xlabel(r"$x_1$") ax.set_ylabel(r"$x_2$") ax.legend(fontsize=12) if output_file_name is not None: plt.savefig(output_file_name, transparent=True) plt.show() Explanation: Optimization problems, objective functions and optimization benchmarks TODO: Check for each function that the result is not altered when x.dtype is an integer (if it is, use x = x.astype(np.float)). Check for each function that multi-point evaluation gives the same result as single-point evaluation. Remark: to evaluate the points $x_1$, $x_2$ and $x_3$ simultaneously: $x_1 = \begin{pmatrix} 0 \ 0 \end{pmatrix}$ $x_2 = \begin{pmatrix} 1 \ 1 \end{pmatrix}$ $x_3 = \begin{pmatrix} 2 \ 2 \end{pmatrix}$ x must be encoded as x = np.array([[0, 1, 2], [0, 1, 2]]) and not as x = np.array([[0, 0], [1, 1], [2, 2]]) in the Python code, because this greatly simplifies the definition of the functions (with the first encoding there is no need to worry about the number of points). For example, for the sphere function, the 2nd encoding would require writing: if x.ndim == 1: return np.sum(x**2) else: return np.sum(x**2, axis=1) whereas with the 1st encoding it is enough to write: return np.sum(x**2, axis=0) or more simply: return sum(x**2) without having to worry about the dimension of x (i.e. we arrange for all aggregation operations to be performed on axis=0, so the same code works both when x.ndim=1 and when x.ndim=2). End of explanation def unimodal_but_not_convex_example(x): return np.sqrt(np.abs(x[0])) + np.sqrt(np.abs(x[1])) plot_2d_solution_space(unimodal_but_not_convex_example, xmin=-2*np.ones(2), xmax=2*np.ones(2), xstar=np.zeros(2), title=r"$f(x_1, x_2) = \sqrt{|x_1|} + \sqrt{|x_2|}$", output_file_name="unnamed1_3d.png") plot_2d_contour_solution_space(unimodal_but_not_convex_example, xmin=-2*np.ones(2), xmax=2*np.ones(2), xstar=np.zeros(2), title=r"$f(x_1, x_2) = \sqrt{|x_1|} + \sqrt{|x_2|}$", output_file_name="unnamed1.png") Explanation: Type of optimization problems continuous vs discrete problems (possibly combinatorial if the set of solutions is discrete and very big) unconstrained vs constrained problems deterministic vs stochastic problems convex vs non-convex problems unimodal vs multimodal problems single-objective vs multi-objective problems differentiable vs nondifferentiable problems linear vs nonlinear problems quadratic vs nonquadratic problems derivative-free problems multistage problems Linear Programming (LP) Mixed Integer Nonlinear Programming (MINLP) Quadratically Constrained Quadratic Programming (QCQP) Quadratic Programming (QP) https://neos-guide.org/content/optimization-tree-alphabetical https://en.wikipedia.org/wiki/Derivative-free_optimization https://en.wikipedia.org/wiki/List_of_types_of_functions Remark: unimodal does not imply convex... Unimodal does not imply convex. For instance, $f(x_1, x_2) = \sqrt{|x_1|} + \sqrt{|x_2|}$ is unimodal but not convex. See https://math.stackexchange.com/questions/1452657/how-to-prove-quasi-convex-if-and-only-if-unimodal for more information. End of explanation def sphere(x): rThe Sphere function. Example: single 2D point ------------------------ To evaluate $x = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$: >>> sphere( np.array([0, 0]) ) 0.0 The result should be $f(x) = 0$.
Example: single 3D point ------------------------ To evaluate $x = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$: >>> sphere( np.array([1, 1, 1]) ) 3.0 The result should be $f(x) = 3.0$. Example: multiple 2D points --------------------------- To evaluate $x_1 = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$, $x_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ and $x_3 = \begin{pmatrix} 2 \\ 2 \end{pmatrix}$ at once: >>> sphere( np.array([[0, 1, 2], [0, 1, 2]]) ) array([ 0., 2., 8.]) The result should be $f(x_1) = 0$, $f(x_2) = 1$ and $f(x_3) = 8$. Parameters ---------- x : array_like One dimension Numpy array of the point at which the Sphere function is to be computed or a two dimension Numpy array of points at which the Sphere function is to be computed. Returns ------- float or array_like The value(s) of the Sphere function for the given point(s) `x`. return sum(x**2.0) Explanation: Benchmarks Here are some famous benchmarks: COCO (COmparing Continuous Optimisers) B-BOB (see also its github repository) Test functions for optimization The following pages contain a lot of test functions for optimization: * https://en.wikipedia.org/wiki/Test_functions_for_optimization * http://www-optima.amp.i.kyoto-u.ac.jp/member/student/hedar/Hedar_files/TestGO.htm * https://www.cs.cmu.edu/afs/cs/project/jair/pub/volume24/ortizboyer05a-html/node6.html * http://benchmarkfcns.xyz/fcns * https://www.sfu.ca/~ssurjano/optimization.html Test functions for convex deterministic unconstrained continuous single-objective optimization The sphere function The Sphere function is a famous convex function used to test the performance of optimization algorithms. This function is very easy to optimize and can be used as a first test to check an optimization algorithm. 
$$ f(\boldsymbol{x}) = \sum_{i=1}^{n} x_{i}^2 $$ Global minimum: $$ f(\boldsymbol{0}) = 0 $$ Search domain: $$ \boldsymbol{x} \in \mathbb{R}^n $$ End of explanation plot_2d_solution_space(sphere, xmin=-2*np.ones(2), xmax=2*np.ones(2), xstar=np.zeros(2), angle_view=(55, 83), title="Sphere function", output_file_name="sphere_3d.png") plot_2d_contour_solution_space(sphere, xmin=-10*np.ones(2), xmax=10*np.ones(2), xstar=np.zeros(2), title="Sphere function", output_file_name="sphere.png") Explanation: Remark: sum(x**2.0) is equivalent to np.sum(x**2.0, axis=0) End of explanation def rosen(x): rThe Rosenbrock function. Example: single 2D point ------------------------ To evaluate $x = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$: >>> rosen( np.array([0, 0]) ) 1.0 The result should be $f(x) = 1$. Example: single 3D point ------------------------ To evaluate $x = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$: >>> rosen( np.array([1, 1, 1]) ) 0.0 The result should be $f(x) = 0$. Example: multiple 2D points --------------------------- To evaluate $x_1 = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$, $x_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ and $x_3 = \begin{pmatrix} 2 \\ 2 \end{pmatrix}$ at once: >>> rosen( np.array([[0, 1, 2], [0, 1, 2]]) ) array([ 1., 0., 401.]) The result should be $f(x_1) = 1$, $f(x_2) = 0$ and $f(x_3) = 401$. Parameters ---------- x : array_like One dimension Numpy array of the point at which the Rosenbrock function is to be computed or a two dimension Numpy array of points at which the Rosenbrock function is to be computed. Returns ------- float or array_like The value(s) of the Rosenbrock function for the given point(s) `x`. 
return sum(100.0*(x[1:] - x[:-1]**2.0)**2.0 + (1 - x[:-1])**2.0) scipy.optimize.rosen(np.array([1, 1, 1, 1])) scipy.optimize.rosen_der(np.array([1, 1, 1, 1])) plot_2d_solution_space(rosen, # scipy.optimize.rosen xmin=-2*np.ones(2), xmax=2*np.ones(2), xstar=np.ones(2), angle_view=(30, 45), title="Rosenbrock function", output_file_name="rosenbrock_3d.png") plot_2d_contour_solution_space(rosen, # scipy.optimize.rosen xmin=-2*np.ones(2), xmax=2*np.ones(2), xstar=np.ones(2), title="Rosenbrock function", output_file_name="rosenbrock.png") Explanation: Test functions for non-convex deterministic unconstrained continuous single-objective optimization The (extended) Rosenbrock function The Rosenbrock function is a famous non-convex function used to test the performance of optimization algorithms. The classical two-dimensional version of this function is unimodal but its extended $n$-dimensional version (with $n \geq 4$) is multimodal [ref.]. $$ f(\boldsymbol{x}) = \sum_{i=1}^{n-1} \left[100 \left( x_{i+1} - x_{i}^{2} \right)^{2} + \left( x_{i} - 1 \right)^2 \right] $$ Global minimum: $$ \min = \begin{cases} n = 2 & \rightarrow \quad f(1,1) = 0, \ n = 3 & \rightarrow \quad f(1,1,1) = 0, \ n > 3 & \rightarrow \quad f(\underbrace{1,\dots,1}_{n{\text{ times}}}) = 0 \ \end{cases} $$ Search domain: $$ \boldsymbol{x} \in \mathbb{R}^n $$ The Rosenbrock has exactly one (global) minimum $(\underbrace{1, \dots, 1}_{n{\text{ times}}})^\top$ for $n \leq 3$ and an additional local minimum for $n \geq 4$ near $(-1, 1, 1, \dots, 1)^\top$. See http://www.mitpressjournals.org/doi/abs/10.1162/evco.2006.14.1.119 (freely available at http://dl.acm.org/citation.cfm?id=1118014) and https://en.wikipedia.org/wiki/Rosenbrock_function#Multidimensional_generalisations for more information. TODO read and check the above article and get the exact value of the local minimum if it exists. 
See https://en.wikipedia.org/wiki/Rosenbrock_function and http://mathworld.wolfram.com/RosenbrockFunction.html for more information. The Rosenbrock function, its derivative (i.e. gradient) and its hessian matrix are also implemented in Scipy (scipy.optimize.rosen, scipy.optimize.rosen_der, scipy.optimize.rosen_hess and scipy.optimize.rosen_hess_prod). See Scipy documentation for more information. End of explanation def himmelblau(x): rThe Himmelblau's function. Example: single point --------------------- To evaluate $x = \begin{pmatrix} 3 \\ 2 \end{pmatrix}$: >>> himmelblau( np.array([3, 2]) ) 0.0 The result should be $f(x) = 0$. Example: multiple points ------------------------ To evaluate $x_1 = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$, $x_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ and $x_3 = \begin{pmatrix} 2 \\ 2 \end{pmatrix}$ at once: >>> himmelblau( np.array([[0, 1, 2], [0, 1, 2]]) ) array([ 170., 106., 26.]) The result should be $f(x_1) = 170$, $f(x_2) = 106$ and $f(x_3) = 26$. Parameters ---------- x : array_like One dimension Numpy array of the point at which the Himmelblau's function is to be computed or a two dimension Numpy array of points at which the Himmelblau's function is to be computed. Returns ------- float or array_like The value(s) of the Himmelblau's function for the given point(s) `x`.
assert x.shape[0] == 2, x.shape return (x[0]**2.0 + x[1] - 11.0)**2.0 + (x[0] + x[1]**2.0 - 7.0)**2.0 himmelblau( np.array([0, 0]) ) himmelblau( np.array([[0, 1, 2], [0, 1, 2]]) ) himmelblau(np.array([[3., 2.], [-2.805118, 3.131312]]).T) plot_2d_solution_space(himmelblau, xmin=-5*np.ones(2), xmax=5*np.ones(2), xstar=np.array([[3., 2.], [-2.805118, 3.131312], [-3.779310, -3.283186], [3.584428, -1.848126]]).T, angle_view=(55, -25), title="The Himmelblau's function", output_file_name="himmelblau_3d.png") plot_2d_contour_solution_space(himmelblau, xmin=-5*np.ones(2), xmax=5*np.ones(2), xstar=np.array([[3., 2.], [-2.805118, 3.131312], [-3.779310, -3.283186], [3.584428, -1.848126]]).T, title="The Himmelblau's function", output_file_name="himmelblau.png") Explanation: The Himmelblau's function The Himmelblau's function is a two-dimensional multimodal function. $$ f(x_1, x_2) = (x_1^2 + x_2 - 11)^2 + (x_1 + x_2^2 - 7)^2 $$ The function has four global minima: $$ \begin{eqnarray} f(3, 2) = 0 \ f(-2.805118, 3.131312) = 0 \ f(-3.779310, -3.283186) = 0 \ f(3.584428, -1.848126) = 0 \end{eqnarray} $$ Search domain: $$ \boldsymbol{x} \in \mathbb{R}^2 $$ It also has one local maximum at $f(-0.270845, -0.923039) = 181.617$. The locations of all the minima can be found analytically (roots of cubic polynomials) but expressions are somewhat complicated. The function is named after David Mautner Himmelblau, who introduced it in Applied Nonlinear Programming (1972), McGraw-Hill, ISBN 0-07-028921-2. See https://en.wikipedia.org/wiki/Himmelblau%27s_function for more information. End of explanation def rastrigin(x): rThe Rastrigin function. Example: single 2D point ------------------------ To evaluate $x = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$: >>> rastrigin( np.array([0, 0]) ) 0.0 The result should be $f(x) = 0$. 
Example: single 3D point ------------------------ To evaluate $x = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$: >>> rastrigin( np.array([0, 0, 0]) ) 0.0 The result should be $f(x) = 0$. Example: multiple 2D points --------------------------- To evaluate $x_1 = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$, $x_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ and $x_3 = \begin{pmatrix} 2 \\ 2 \end{pmatrix}$ at once: >>> rastrigin( np.array([[0, 1, 2], [0, 1, 2]]) ) array([ 0., 2., 8.]) The result should be $f(x_1) = 0$, $f(x_2) = 2$ and $f(x_3) = 8$. Parameters ---------- x : array_like One dimension Numpy array of the point at which the Rastrigin function is to be computed or a two dimension Numpy array of points at which the Rastrigin function is to be computed. Returns ------- float or array_like The value(s) of the Rastrigin function for the given point(s) `x`. A = 10. n = x.shape[0] return A * n + sum(x**2.0 - A * np.cos(2.0 * np.pi * x)) rastrigin(np.array([[0, np.pi], [0, np.pi]])) plot_2d_solution_space(rastrigin, xmin=-5.12*np.ones(2), xmax=5.12*np.ones(2), xstar=np.zeros(2), angle_view=(35, 25), title="Rastrigin function", output_file_name="rastrigin_3d.png") plot_2d_contour_solution_space(rastrigin, xmin=-5.12*np.ones(2), xmax=5.12*np.ones(2), xstar=np.zeros(2), title="Rastrigin function", output_file_name="rastrigin.png") Explanation: The Rastrigin function The Rastrigin function is a famous multimodal function. Finding the minimum of this function is a fairly difficult problem due to its large search space and its large number of local minima. The classical two-dimensional version of this function has been introduced by L. A. Rastrigin in Systems of extremal control Mir, Moscow (1974). Its generalized $n$-dimensional version has been proposed by H. Mühlenbein, D. Schomisch and J. Born in The Parallel Genetic Algorithm as Function Optimizer Parallel Computing, 17, pages 619–632, 1991.
On an n-dimensional domain it is defined by: $$ f(\boldsymbol{x}) = An + \sum_{i=1}^{n} \left[ x_{i}^{2} - A \cos(2 \pi x_{i}) \right] $$ where $A = 10$. Global minimum: $$ f(\boldsymbol{0}) = 0 $$ Search domain: $$ \boldsymbol{x} \in \mathbb{R}^n $$ See https://en.wikipedia.org/wiki/Rastrigin_function for more information. End of explanation def easom(x): rThe Easom function. Example: single 2D point ------------------------ To evaluate $x = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$: >>> easom( np.array([np.pi, np.pi]) ) -1.0 The result should be $f(x) = -1$. Example: multiple 2D points --------------------------- To evaluate $x_1 = \begin{pmatrix} \pi \\ \pi \end{pmatrix}$, $x_2 = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$ and $x_3 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ at once: >>> easom( np.array([[np.pi, 0, 1], [np.pi, 0, 1]]) ) array([ -1., -2.67528799e-09, -3.03082341e-05]) The result should be $f(x_1) = -1$, $f(x_2) \approx 0$ and $f(x_3) \approx 0$. Parameters ---------- x : array_like One dimension Numpy array of the point at which the Easom function is to be computed or a two dimension Numpy array of points at which the Easom function is to be computed. Returns ------- float or array_like The value(s) of the Easom function for the given point(s) `x`. assert x.shape[0] == 2, x.shape return -np.cos(x[0]) * np.cos(x[1]) * np.exp(-((x[0]-np.pi)**2.0 + (x[1]-np.pi)**2.0)) easom( np.array([[np.pi, 0, 1], [np.pi, 0, 1]]) ) plot_2d_solution_space(easom, xmin=-5*np.ones(2), xmax=5*np.ones(2), xstar=np.ones(2) * np.pi, #angle_view=(35, 25), title="Easom function", zlog=False, output_file_name="easom_3d.png") plot_2d_contour_solution_space(easom, xmin=-5*np.ones(2), xmax=5*np.ones(2), xstar=np.ones(2) * np.pi, zlog=False, title="Easom function", output_file_name="easom.png") Explanation: Shekel function The Shekel function is a famous multimodal function. 
The mathematical form of a function in $n$ dimensions with $m$ local minima is: $$ f(\boldsymbol{x}) = -\sum_{i=1}^{m} \left( \boldsymbol{b}_{i} + \sum_{j=1}^{n} (x_{j} - \boldsymbol{A}_{ji})^{2} \right)^{-1} $$ Global minimum: $$ f(\boldsymbol{0}) = 0 $$ Search domain: $$ \boldsymbol{x} \in \mathbb{R}^n $$ References: Shekel, J. 1971. Test Functions for Multimodal Search Techniques Fifth Annual Princeton Conference on Information Science and Systems. See https://en.wikipedia.org/wiki/Shekel_function and https://www.sfu.ca/~ssurjano/shekel.html for more information. Easom function The Easom function is a two-dimensional unimodal function. $$ f(x_1, x_2) = -\cos(x_1) \cos(x_2) \exp \left( -\left[ (x_1-\pi)^2 + (x_2-\pi)^2 \right] \right) $$ Global minimum: $$ f(\pi, \pi) = -1 $$ Search domain: $$ \boldsymbol{x} \in \mathbb{R}^2 $$ See https://www.sfu.ca/~ssurjano/easom.html for more information. End of explanation def crossintray(x): rThe Cross-in-tray function. Example: single 2D point ------------------------ To evaluate $x = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$: >>> crossintray( np.array([0, 0]) ) -0.0001 The result should be $f(x) = -0.0001$. Example: multiple 2D points --------------------------- To evaluate $x_1 = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$, $x_2 = \begin{pmatrix} 1.34941 \\ 1.34941 \end{pmatrix}$ and $x_3 = \begin{pmatrix} -1.34941 \\ -1.34941 \end{pmatrix}$ at once: >>> crossintray( np.array([[0, 1.34941, -1.34941], [0, 1.34941, -1.34941]]) ) array([ -0.0001, -2.06261, -2.06261]) The result should be $f(x_1) = -0.0001$, $f(x_2) = -2.06261$ and $f(x_3) = -2.06261$. Parameters ---------- x : array_like One dimension Numpy array of the point at which the Cross-in-tray function is to be computed or a two dimension Numpy array of points at which the Cross-in-tray function is to be computed. Returns ------- float or array_like The value(s) of the Cross-in-tray function for the given point(s) `x`.
assert x.shape[0] == 2, x.shape return -0.0001 * (np.abs(np.sin(x[0]) * np.sin(x[1]) * np.exp( np.abs( 100.0 - np.sqrt(x[0]**2.0 + x[1]**2.0)/np.pi ))) + 1.0)**0.1 crossintray(np.array([0., 0.])) plot_2d_solution_space(crossintray, xmin=-10*np.ones(2), xmax=10*np.ones(2), xstar=np.array([[1.34941, 1.34941, -1.34941, -1.34941], [1.34941, -1.34941, 1.34941, -1.34941]]), #angle_view=(35, 25), title="Cross-in-tray function", zlog=False, output_file_name="cross_in_tray_3d.png") plot_2d_contour_solution_space(crossintray, xmin=-10*np.ones(2), xmax=10*np.ones(2), xstar=np.array([[1.34941, 1.34941, -1.34941, -1.34941], [1.34941, -1.34941, 1.34941, -1.34941]]), title="Cross-in-tray function", #vmin=-2.0, #vmax=-0.5, zlog=False, output_file_name="cross_in_tray.png") Explanation: Cross-in-tray function The Cross-in-tray function is a 2 dimensions multimodal function, with four global minima. $$ f(x_1, x_2) = -0.0001 \left( \left| \sin(x_1) \sin(x_2) \exp \left( \left| 100 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi} \right| \right)\right| + 1 \right)^{0.1} $$ Global minima: $$ \text{Min} = \begin{cases} f(1.34941, -1.34941) &= -2.06261 \ f(1.34941, 1.34941) &= -2.06261 \ f(-1.34941, 1.34941) &= -2.06261 \ f(-1.34941, -1.34941) &= -2.06261 \ \end{cases} $$ Search domain: $$ -10 \leq x_1, x_2 \leq 10 $$ References: Test functions for optimization (Wikipedia): https://en.wikipedia.org/wiki/Test_functions_for_optimization. End of explanation def holder(x): rThe Hölder table function. Example: single 2D point ------------------------ To evaluate $x = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$: >>> holder( np.array([0, 0]) ) 0.0 The result should be $f(x) = 0$. Example: multiple 2D points --------------------------- To evaluate $x_1 = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$, $x_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$ and $x_3 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ at once: >>> holder( np.array([[0., 0., 1.], [0., 1., 0.]]) ) array([-0. , -0. 
, -1.66377043]) The result should be $f(x_1) = 0$, $f(x_2) = 0$ and $f(x_3) = -1.66377043$. Parameters ---------- x : array_like One dimension Numpy array of the point at which the Hölder table function is to be computed or a two dimension Numpy array of points at which the Hölder table function is to be computed. Returns ------- float or array_like The value(s) of the Hölder table function for the given point(s) `x`. assert x.shape[0] == 2, x.shape return -np.abs(np.sin(x[0]) * np.cos(x[1]) * np.exp(np.abs(1.0 - np.sqrt(x[0]**2.0 + x[1]**2.0)/np.pi ))) holder(np.array([[0., 0., 2.], [0., 1., 2.]])) plot_2d_solution_space(holder, xmin=-10.*np.ones(2), xmax=10.*np.ones(2), xstar=np.array([[8.05502, 8.05502, -8.05502, -8.05502], [9.66459, -9.66459, 9.66459, -9.66459]]), #angle_view=(35, 25), title="Hölder table function", zlog=False, output_file_name="holder_3d.png") plot_2d_contour_solution_space(holder, xmin=-10*np.ones(2), xmax=10*np.ones(2), xstar=np.array([[8.05502, 8.05502, -8.05502, -8.05502], [9.66459, -9.66459, 9.66459, -9.66459]]), zlog=False, title="Hölder table function", output_file_name="holder.png") Explanation: Hölder table function The Hölder table function is a 2 dimensions multimodal function, with four global minima. $$ f(x_1, x_2) = -\left| \sin(x_1) \cos(x_2) \exp \left( \left| 1 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi} \right| \right) \right| $$ Global minima: $$ \text{Min} = \begin{cases} f(8.05502, 9.66459) &= -19.2085 \ f(-8.05502, 9.66459) &= -19.2085 \ f(8.05502, -9.66459) &= -19.2085 \ f(-8.05502, -9.66459) &= -19.2085 \end{cases} $$ Search domain: $$ -10 \leq x_1, x_2 \leq 10 $$ References: Test functions for optimization (Wikipedia): https://en.wikipedia.org/wiki/Test_functions_for_optimization. End of explanation
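The Easom function is described above but, unlike the others, has no implementation cell in this notebook. A minimal sketch in the same style, written directly from the formula given earlier (the function name and vectorized form are my own choices, not from the original):

```python
import numpy as np

def easom(x):
    """The Easom function; expects a 2-D point (or a 2xN array of points)."""
    assert x.shape[0] == 2, x.shape
    return -np.cos(x[0]) * np.cos(x[1]) * np.exp(-((x[0] - np.pi)**2.0 + (x[1] - np.pi)**2.0))

easom(np.array([np.pi, np.pi]))   # the global minimum, -1.0
```

At any point far from $(\pi, \pi)$ the exponential factor is tiny, which is what makes this function hard for global optimizers.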
Given the following text description, write Python code to implement the functionality described below step by step Description: Trying MountainCar Step1: Let's try some arbitrary thetas And see what the ratio depends on. I've seen above that it's probably not the order of the Fourier FA, but the number of dimensions. Step2: If the bounds of the states are [0, n], the ratio between symbolic and numeric results is $1/n^{n_{dim}-1}$. Or this is at least what I think I see. This looks like there's a problem with normalization. (What also very strongly suggested it, was that numeric and symbolic results were equal over [0, 1], but started to differ when I changed to [0, 2].)
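The fourier_fa module used in the code below is not shown anywhere in this notebook. For context, a common construction such a module might implement is the Fourier basis for value-function approximation: features $\cos(\pi \, c \cdot s)$ over all integer coefficient vectors $c$ up to the chosen order, with the state scaled to $[0, 1]$. This sketch is an assumption about its behavior, not the actual module:

```python
import itertools
import numpy as np

def make_fourier_features(low, high, order):
    """Fourier basis features over the box [low, high], with the state scaled to [0, 1]."""
    low, high = np.asarray(low, dtype=float), np.asarray(high, dtype=float)
    # all integer coefficient vectors with entries in 0..order
    coeffs = np.array(list(itertools.product(range(order + 1), repeat=len(low))))
    def features(s):
        s01 = (np.asarray(s, dtype=float) - low) / (high - low)  # scale state to [0, 1]
        return np.cos(np.pi * coeffs @ s01)
    return len(coeffs), features

n_feats, phi = make_fourier_features([0.0, 0.0], [1.0, 2.0], order=2)
# n_feats == 9 for a 2-D state with order 2; phi at the lower corner is a vector of ones
```

Note that the normalization of the state to the unit box is exactly the kind of detail the rest of this notebook is probing.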
Python Code: mc_env = gym.make("MountainCar-v0") mc_n_weights, mc_feature_vec = fourier_fa.make_feature_vec( np.array([mc_env.low, mc_env.high]), n_acts=3, order=2) mc_experience = linfa.init(lmbda=0.9, init_alpha=1.0, epsi=0.1, feature_vec=mc_feature_vec, n_weights=mc_n_weights, act_space=mc_env.action_space, theta=None, is_use_alpha_bounds=True) mc_experience, mc_spe, mc_ape = driver.train(mc_env, linfa, mc_experience, n_episodes=400, max_steps=200, is_render=False) fig, ax1 = pyplot.subplots() ax1.plot(mc_spe, color='b') ax2 = ax1.twinx() ax2.plot(mc_ape, color='r') pyplot.show() def mc_Q_at_x(e, x, a): return scipy.integrate.quad( lambda x_dot: e.feature_vec(np.array([x, x_dot]), a).dot(e.theta), mc_env.low[1], mc_env.high[1]) def mc_Q_fun(x): return mc_Q_at_x(mc_experience, x, 0) sample_xs = np.arange(mc_env.low[0], mc_env.high[0], (mc_env.high[0] - mc_env.low[0]) / 8.0) mc_num_Qs = np.array( map(mc_Q_fun, sample_xs) ) mc_num_Qs mc_sym_Q_s0 = fourier_fa_int.make_sym_Q_s0( np.array([mc_env.low, mc_env.high]), 2) mc_sym_Qs = np.array( [mc_sym_Q_s0(mc_experience.theta, 0, s0) for s0 in sample_xs] ) mc_sym_Qs mc_sym_Qs - mc_num_Qs[:,0] Explanation: Trying MountainCar End of explanation # Credits: http://stackoverflow.com/a/1409496/5091738 def make_integrand(feature_vec, theta, s0, n_dim): argstr = ", ".join(["s{}".format(i) for i in xrange(1, n_dim)]) code = "def integrand({argstr}):\n" \ " return feature_vec(np.array([s0, {argstr}]), 0).dot(theta)\n" \ .format(argstr=argstr) #print code compiled = compile(code, "fakesource", "exec") fakeglobals = {'feature_vec': feature_vec, 'theta': theta, 's0': s0, 'np': np} fakelocals = {} eval(compiled, fakeglobals, fakelocals) return fakelocals['integrand'] print make_integrand(None, None, None, 4) for order in xrange(1,3): for n_dim in xrange(2, 4): print "\norder {} dims {}".format(order, n_dim) min_max = np.array([np.zeros(n_dim), 3 * np.ones(n_dim)]) n_weights, feature_vec = fourier_fa.make_feature_vec( min_max, 
n_acts=1, order=order) theta = np.cos(np.arange(0, 2*np.pi, 2*np.pi/n_weights)) sample_xs = np.arange(0, 3, 0.3) def num_Q_at_x(s0): integrand = make_integrand(feature_vec, theta, s0, n_dim) return scipy.integrate.nquad(integrand, min_max.T[1:]) num_Qs = np.array( map(num_Q_at_x, sample_xs) ) #print num_Qs sym_Q_at_x = fourier_fa_int.make_sym_Q_s0(min_max, order) sym_Qs = np.array( [sym_Q_at_x(theta, 0, s0) for s0 in sample_xs] ) #print sym_Qs print sym_Qs / num_Qs[:,0] Explanation: Let's try some arbitrary thetas And see what the ratio depends on. I've seen above that it's probably not the order of the Fourier FA, but the number of dimensions. End of explanation np.arange(0, 1, 10) import sympy as sp a, b, x, f = sp.symbols("a b x f") b_int = sp.Integral(1, (x, a, b)) sp.init_printing() u_int = sp.Integral((1-a)/(b-a), (x, 0, 1)) u_int (b_int / u_int).simplify() b_int.subs([(a,0), (b,2)]).doit() u_int.subs([(a,0), (b,2)]).doit() (u_int.doit()*b).simplify() Explanation: If the bounds of the states are [0, n], the ratio between symbolic and numeric results is $1/n^{n_{dim}-1}$. Or this is at least what I think I see. This looks like there's a problem with normalization. (What also very strongly suggested it, was that numeric and symbolic results were equal over [0, 1], but started to differ when I changed to [0, 2].) End of explanation
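The suspected normalization issue can be illustrated without any of the FA code. If the "symbolic" side implicitly averages over each integrated dimension while the numeric side truly integrates over $[0, n]$, a constant integrand already reproduces the observed $1/n^{n_{dim}-1}$ ratio. This is a toy model of the hypothesis, not the actual fourier_fa_int code:

```python
n, n_dim, c = 2.0, 3, 1.5               # bounds [0, n], state dimension, constant integrand

plain_integral = c * n ** (n_dim - 1)   # true integral of c over [0, n]^(n_dim - 1)
averaged = c                            # mean of c over the same domain

ratio = averaged / plain_integral
# ratio == 1 / n**(n_dim - 1), i.e. 0.25 for n = 2, n_dim = 3
```

This matches the behavior reported above: equality over $[0, 1]$ (where the volume factor is 1) and divergence over $[0, 2]$.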
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction to NumPy numpy is a powerful set of tools to perform mathematical operations on lists of numbers. It works faster than normal Python list operations and can manipulate high-dimensional arrays too. Additional material Step1: Why NumPy? Python has powerful data types. Why do we need another? Generate a list Step2: Basic Operations Step3: np.array Numpy Arrays are central to Numpy. They can be created in several ways. np.arange([start,] stop[, step,], dtype=None) Step4: np.array(object, dtype=None, ...) Step5: np.linspace(start, stop, num=50, dtype=None, ...) Step6: Examining np arrays Step7: Common methods zeros(shape, dtype=float, ...) Step8: np.linspace(start, stop, num=50, endpoint=True) Step9: random_sample(size=None) Step10: Statistics Step11: Reshaping Step12: Slicing Step13: Range Step14: Stepping
Python Code: import numpy as np Explanation: Introduction to NumPy numpy is a powerful set of tools to perform mathematical operations of on lists of numbers. It works faster than normal python lists operations and can manupilate high dimentional arrays too. Additional material: * Another tutorial * Numpy Reference Import NumPy End of explanation # Regular Python %timeit python_list_1 = list(range(1, 1000)) python_list_1 = list(range(1, 1000)) python_list_2 = list(range(1, 1000)) #Numpy %timeit numpy_list_1 = np.arange(1, 1000) numpy_list_1 = np.arange(1, 1000) numpy_list_2 = np.arange(1, 1000) Explanation: Why NumPy? Python has powerful data types. Why do we need another? Generate a list End of explanation %%timeit # Regular Python [(x + y) for x, y in zip(python_list_1, python_list_2)] [(x - y) for x, y in zip(python_list_1, python_list_2)] [(x * y) for x, y in zip(python_list_1, python_list_2)] [(x / y) for x, y in zip(python_list_1, python_list_2)] %%timeit #Numpy numpy_list_1 + numpy_list_2 numpy_list_1 - numpy_list_2 numpy_list_1 * numpy_list_2 numpy_list_1 / numpy_list_2 Explanation: Basic Operations End of explanation np.arange(10) np.arange(3, 10) np.arange(1, 10, 0.5) np.arange(1, 10, 2, dtype=np.float64) Explanation: np.array Numpy Arrays are central to Numpy. They ca be created in several ways. np.arange([start,] stop[, step,], dtype=None) End of explanation np.array([1,2,3]) np.array([1,2,3], dtype=np.float64) a = np.array([[1, 1], [1, 1]]) a a = np.array([[[1, 1], [1, 1]], [[1, 1], [1, 1]]]) a a.shape Explanation: np.array(object, dtype=None, ...) End of explanation np.linspace(1, 10) np.linspace(1, 10, num=12) Explanation: np.linspace(start, stop, num=50, dtype=None, ...) End of explanation ds = np.arange(1, 10, 2) ds.ndim ds.shape ds.size len(ds) ds.dtype ds Explanation: Examing np arrays End of explanation np.zeros((3, 4)) np.zeros((3, 4), dtype=np.int64) Explanation: Common methods zeros(shape, dtype=float, ...) 
End of explanation np.linspace(1, 5) np.linspace(0, 2, num=4) np.linspace(0, 2, num=4, endpoint=False) Explanation: np.linspace(start, stop, num=50, endpoint=True) End of explanation np.random.random((2,3)) np.random.choice(np.arange(10), 3) Explanation: random_sample(size=None) End of explanation data_set = np.random.random((3,4)) data_set np.max(data_set) np.max(data_set, axis=0) np.max(data_set, axis=1) np.min(data_set) np.mean(data_set) np.median(data_set) np.std(data_set) np.sum(data_set) np.argmax(data_set) np.argmin(data_set) np.argsort(data_set) Explanation: Statistics End of explanation np.reshape(data_set, (4, 3)) np.reshape(data_set, (12, 1)) np.reshape(data_set, (12)) np.ravel(data_set) Explanation: Reshaping End of explanation data_set = np.random.randint(0, 10, size=(5, 10)) data_set data_set[1] data_set[1][0] data_set[1, 0] Explanation: Slicing End of explanation data_set[2:4] data_set[2:4, 0] data_set[2:4, 0:2] data_set[:,0] Explanation: Range End of explanation data_set[2:4:1] data_set[::] data_set[::2] data_set[2:4,::2] data_set[2:4,::-2] Explanation: Stepping End of explanation
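One idiom the notebook does not cover is boolean-mask indexing, which combines naturally with the slicing and stepping shown above. This is standard NumPy, added here as a supplement:

```python
import numpy as np

data = np.arange(12).reshape(3, 4)
mask = data % 2 == 0               # boolean array with the same shape as data
evens = data[mask]                 # 1-D array of the entries where mask is True
top_rows = data[data[:, 0] > 0]    # rows whose first element is positive
```

Boolean masks can also be combined with `&` and `|` (with parentheses around each comparison) to express compound conditions.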
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction NetworKit provides an easy interface to Gephi that uses the Gephi graph streaming plugin. To be able to use it, install the Graph Streaming plugin using the Gephi plugin manager. Afterwards, open the Streaming window by selecting Windows/Streaming in the menu. Workflow Once the plugin is installed in gephi, create a new project and start the Master Server in the Streaming tab within gephi. The running server will be indicated by a green dot. As an example, we generate a random graph... Step1: ... and export it directly into the active gephi workspace. After executing the following code, the graph should be available in the first gephi workspace. Attention Step2: Exporting node values We now apply a community detection algorithm to the generated random graph and export the community as a node attribute to gephi. Any python list or Partition object can be exported. Please note that only the attribute itself is transferred, so make sure you called exportGraph(graph) first. Step3: The node attribute can now be selected and used in gephi, for partitioning or any other desired scenario. Exporting edge scores Just like node values, we can export edge values. After graph creation, each edge is assigned an integer id that is then used to access arbitrary attribute vectors, so any python list can be exported to gephi. In the following example, we assign an even number to each edge and export that score to gephi. Step4: Changing the server URL By default, the streaming client in NetworKit connects to http://localhost:8080/workspace0
Python Code: G = generators.ErdosRenyiGenerator(300, 0.2).generate() G.addEdge(0, 1) #We want to make sure this specific edge exists, for usage in an example later. Explanation: Introduction NetworKit provides an easy interface to Gephi that uses the Gephi graph streaming plugin. To be able to use it, install the Graph Streaming plugin using the Gephi plugin manager. Afterwards, open the Streaming window by selecting Windows/Streaming in the menu. Workflow Once the plugin is installed in gephi, create a new project and start the Master Server in the Streaming tab within gephi. The running server will be indicated by a green dot. As an example, we generate a random graph... End of explanation client = gephi.streaming.GephiStreamingClient(url='http://localhost:8080/workspace0') client.exportGraph(G) Explanation: ... and export it directly into the active gephi workspace. After executing the following code, the graph should be available in the first gephi workspace. Attention: Any graph currently contained in that workspace will be overridden. End of explanation communities = community.detectCommunities(G) client.exportNodeValues(G, communities, "community") Explanation: Exporting node values We now apply a community detection algorithm to the generated random graph and export the community as a node attribute to gephi. Any python list or Partition object can be exported. Please note that only the attribute itself is transferred, so make sure you called exportGraph(graph) first. End of explanation edgeScore = [2*x for x in range(0, G.upperEdgeIdBound())] client.exportEdgeValues(G, edgeScore, "myEdgeScore") Explanation: The node attribute can now be selected and used in gephi, for partitioning or any other desired scenario. Exporting edge scores Just like node values, we can export edge values. After graph creation, each edge is assigned an integer id that is then used to access arbitrary attribute vectors, so any python list can be exported to gephi.
In the following example, we assign an even number to each edge and export that score to gephi. End of explanation client = gephi.streaming.GephiStreamingClient(url='http://localhost:8080/workspace0') Explanation: Changing the server URL By default, the streaming client in NetworKit connects to http://localhost:8080/workspace0, i.e. the workspace called 'Workspace 0' of the local gephi instance. One might want to connect to a gephi instance running on a remote host or change the used port (this can be done in gephi by selecting Settings within the Streaming tab). To change the URL in NetworKit, simply pass it upon the creation of the client object: End of explanation
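Since exportEdgeValues accepts any Python list indexed by edge id, scores can be preprocessed freely before export. For instance, scaling a score vector to [0, 1] with plain NumPy (the export call itself would be unchanged; the score list here is a small stand-in for the one built above):

```python
import numpy as np

edge_score = [2 * x for x in range(10)]    # stand-in for the per-edge score list
arr = np.asarray(edge_score, dtype=float)
span = arr.max() - arr.min()
# min-max scale, guarding against a constant score vector
normalized = ((arr - arr.min()) / span).tolist() if span else [0.0] * len(arr)
# normalized runs from 0.0 to 1.0 and could be passed to exportEdgeValues instead
```

Normalized scores are often easier to map onto edge thickness or color ranges inside gephi.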
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2020 The TensorFlow Authors. Step1: Writing a training loop from scratch <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: Introduction Keras provides default training and evaluation loops, fit() and evaluate(). Their usage is covered in the guide Training & evaluation with the built-in methods. If you want to customize the learning algorithm of your model while still leveraging the convenience of fit() (for instance, to train a GAN using fit()), you can subclass the Model class and implement your own train_step() method, which is called repeatedly during fit(). This is covered in the guide Customizing what happens in fit(). Now, if you want very low-level control over training & evaluation, you should write your own training & evaluation loops from scratch. This is what this guide is about. Using the GradientTape Step3: Let's train it using mini-batch gradient with a custom training loop. First, we're going to need an optimizer, a loss function, and a dataset Step4: Here's our training loop Step5: Low-level handling of metrics Let's add metrics monitoring to this basic loop. You can readily reuse the built-in metrics (or custom ones you wrote) in such training loops written from scratch. Here's the flow Step6: Here's our training & evaluation loop Step7: Speeding-up your training step with tf.function The default runtime in TensorFlow 2 is eager execution. As such, our training loop above executes eagerly. This is great for debugging, but graph compilation has a definite performance advantage. Describing your computation as a static graph enables the framework to apply global performance optimizations. This is impossible when the framework is constrained to greedly execute one operation after another, with no knowledge of what comes next. 
You can compile into a static graph any function that takes tensors as input. Just add a @tf.function decorator on it, like this Step8: Let's do the same with the evaluation step Step9: Now, let's re-run our training loop with this compiled training step Step10: Much faster, isn't it? Low-level handling of losses tracked by the model Layers & models recursively track any losses created during the forward pass by layers that call self.add_loss(value). The resulting list of scalar loss values are available via the property model.losses at the end of the forward pass. If you want to be using these loss components, you should sum them and add them to the main loss in your training step. Consider this layer, that creates an activity regularization loss Step11: Let's build a really simple model that uses it Step12: Here's what our training step should look like now Step13: Summary Now you know everything there is to know about using built-in training loops and writing your own from scratch. To conclude, here's a simple end-to-end example that ties together everything you've learned in this guide Step14: Then let's create a generator network, that turns latent vectors into outputs of shape (28, 28, 1) (representing MNIST digits) Step15: Here's the key bit Step16: Let's train our GAN, by repeatedly calling train_step on batches of images. Since our discriminator and generator are convnets, you're going to want to run this code on a GPU.
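Before the TensorFlow code, it may help to see the skeleton of such a loop with the framework stripped away. This is plain-NumPy minibatch SGD on a toy linear-regression problem (the data, learning rate, and batch size here are made up for illustration); the GradientTape version below follows the same epoch / batch / forward / gradient / update structure:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                            # noiseless targets, so SGD can recover w_true

w = np.zeros(3)                           # trainable weights
lr = 0.1                                  # learning rate
for epoch in range(50):
    for i in range(0, len(X), 64):        # iterate over minibatches
        xb, yb = X[i:i + 64], y[i:i + 64]
        err = xb @ w - yb                 # forward pass (residuals)
        grad = 2.0 * xb.T @ err / len(xb) # gradient of the MSE loss
        w -= lr * grad                    # "optimizer" update step
# w ends up close to w_true
```

Everything GradientTape adds on top of this skeleton is automatic differentiation: the `grad` line is the only part you no longer write by hand.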
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2020 The TensorFlow Authors. End of explanation import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import numpy as np Explanation: Writing a training loop from scratch <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/guide/keras/writing_a_training_loop_from_scratch"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/snapshot-keras/site/en/guide/keras/writing_a_training_loop_from_scratch.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/keras-team/keras-io/blob/master/guides/writing_a_training_loop_from_scratch.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/keras/writing_a_training_loop_from_scratch.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Setup End of explanation inputs = keras.Input(shape=(784,), name="digits") x1 = layers.Dense(64, activation="relu")(inputs) x2 = layers.Dense(64, activation="relu")(x1) outputs = layers.Dense(10, name="predictions")(x2) 
model = keras.Model(inputs=inputs, outputs=outputs) Explanation: Introduction Keras provides default training and evaluation loops, fit() and evaluate(). Their usage is covered in the guide Training & evaluation with the built-in methods. If you want to customize the learning algorithm of your model while still leveraging the convenience of fit() (for instance, to train a GAN using fit()), you can subclass the Model class and implement your own train_step() method, which is called repeatedly during fit(). This is covered in the guide Customizing what happens in fit(). Now, if you want very low-level control over training & evaluation, you should write your own training & evaluation loops from scratch. This is what this guide is about. Using the GradientTape: a first end-to-end example Calling a model inside a GradientTape scope enables you to retrieve the gradients of the trainable weights of the layer with respect to a loss value. Using an optimizer instance, you can use these gradients to update these variables (which you can retrieve using model.trainable_weights). Let's consider a simple MNIST model: End of explanation # Instantiate an optimizer. optimizer = keras.optimizers.SGD(learning_rate=1e-3) # Instantiate a loss function. loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True) # Prepare the training dataset. batch_size = 64 (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() x_train = np.reshape(x_train, (-1, 784)) x_test = np.reshape(x_test, (-1, 784)) # Reserve 10,000 samples for validation. x_val = x_train[-10000:] y_val = y_train[-10000:] x_train = x_train[:-10000] y_train = y_train[:-10000] # Prepare the training dataset. train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size) # Prepare the validation dataset. 
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val)) val_dataset = val_dataset.batch(batch_size) Explanation: Let's train it using mini-batch gradient with a custom training loop. First, we're going to need an optimizer, a loss function, and a dataset: End of explanation epochs = 2 for epoch in range(epochs): print("\nStart of epoch %d" % (epoch,)) # Iterate over the batches of the dataset. for step, (x_batch_train, y_batch_train) in enumerate(train_dataset): # Open a GradientTape to record the operations run # during the forward pass, which enables auto-differentiation. with tf.GradientTape() as tape: # Run the forward pass of the layer. # The operations that the layer applies # to its inputs are going to be recorded # on the GradientTape. logits = model(x_batch_train, training=True) # Logits for this minibatch # Compute the loss value for this minibatch. loss_value = loss_fn(y_batch_train, logits) # Use the gradient tape to automatically retrieve # the gradients of the trainable variables with respect to the loss. grads = tape.gradient(loss_value, model.trainable_weights) # Run one step of gradient descent by updating # the value of the variables to minimize the loss. optimizer.apply_gradients(zip(grads, model.trainable_weights)) # Log every 200 batches. 
if step % 200 == 0: print( "Training loss (for one batch) at step %d: %.4f" % (step, float(loss_value)) ) print("Seen so far: %s samples" % ((step + 1) * batch_size)) Explanation: Here's our training loop: We open a for loop that iterates over epochs For each epoch, we open a for loop that iterates over the dataset, in batches For each batch, we open a GradientTape() scope Inside this scope, we call the model (forward pass) and compute the loss Outside the scope, we retrieve the gradients of the weights of the model with regard to the loss Finally, we use the optimizer to update the weights of the model based on the gradients End of explanation # Get model inputs = keras.Input(shape=(784,), name="digits") x = layers.Dense(64, activation="relu", name="dense_1")(inputs) x = layers.Dense(64, activation="relu", name="dense_2")(x) outputs = layers.Dense(10, name="predictions")(x) model = keras.Model(inputs=inputs, outputs=outputs) # Instantiate an optimizer to train the model. optimizer = keras.optimizers.SGD(learning_rate=1e-3) # Instantiate a loss function. loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True) # Prepare the metrics. train_acc_metric = keras.metrics.SparseCategoricalAccuracy() val_acc_metric = keras.metrics.SparseCategoricalAccuracy() Explanation: Low-level handling of metrics Let's add metrics monitoring to this basic loop. You can readily reuse the built-in metrics (or custom ones you wrote) in such training loops written from scratch. 
Here's the flow: Instantiate the metric at the start of the loop Call metric.update_state() after each batch Call metric.result() when you need to display the current value of the metric Call metric.reset_states() when you need to clear the state of the metric (typically at the end of an epoch) Let's use this knowledge to compute SparseCategoricalAccuracy on validation data at the end of each epoch: End of explanation import time epochs = 2 for epoch in range(epochs): print("\nStart of epoch %d" % (epoch,)) start_time = time.time() # Iterate over the batches of the dataset. for step, (x_batch_train, y_batch_train) in enumerate(train_dataset): with tf.GradientTape() as tape: logits = model(x_batch_train, training=True) loss_value = loss_fn(y_batch_train, logits) grads = tape.gradient(loss_value, model.trainable_weights) optimizer.apply_gradients(zip(grads, model.trainable_weights)) # Update training metric. train_acc_metric.update_state(y_batch_train, logits) # Log every 200 batches. if step % 200 == 0: print( "Training loss (for one batch) at step %d: %.4f" % (step, float(loss_value)) ) print("Seen so far: %d samples" % ((step + 1) * batch_size)) # Display metrics at the end of each epoch. train_acc = train_acc_metric.result() print("Training acc over epoch: %.4f" % (float(train_acc),)) # Reset training metrics at the end of each epoch train_acc_metric.reset_states() # Run a validation loop at the end of each epoch. 
for x_batch_val, y_batch_val in val_dataset: val_logits = model(x_batch_val, training=False) # Update val metrics val_acc_metric.update_state(y_batch_val, val_logits) val_acc = val_acc_metric.result() val_acc_metric.reset_states() print("Validation acc: %.4f" % (float(val_acc),)) print("Time taken: %.2fs" % (time.time() - start_time)) Explanation: Here's our training & evaluation loop: End of explanation @tf.function def train_step(x, y): with tf.GradientTape() as tape: logits = model(x, training=True) loss_value = loss_fn(y, logits) grads = tape.gradient(loss_value, model.trainable_weights) optimizer.apply_gradients(zip(grads, model.trainable_weights)) train_acc_metric.update_state(y, logits) return loss_value Explanation: Speeding-up your training step with tf.function The default runtime in TensorFlow 2 is eager execution. As such, our training loop above executes eagerly. This is great for debugging, but graph compilation has a definite performance advantage. Describing your computation as a static graph enables the framework to apply global performance optimizations. This is impossible when the framework is constrained to greedly execute one operation after another, with no knowledge of what comes next. You can compile into a static graph any function that takes tensors as input. Just add a @tf.function decorator on it, like this: End of explanation @tf.function def test_step(x, y): val_logits = model(x, training=False) val_acc_metric.update_state(y, val_logits) Explanation: Let's do the same with the evaluation step: End of explanation import time epochs = 2 for epoch in range(epochs): print("\nStart of epoch %d" % (epoch,)) start_time = time.time() # Iterate over the batches of the dataset. for step, (x_batch_train, y_batch_train) in enumerate(train_dataset): loss_value = train_step(x_batch_train, y_batch_train) # Log every 200 batches. 
if step % 200 == 0: print( "Training loss (for one batch) at step %d: %.4f" % (step, float(loss_value)) ) print("Seen so far: %d samples" % ((step + 1) * batch_size)) # Display metrics at the end of each epoch. train_acc = train_acc_metric.result() print("Training acc over epoch: %.4f" % (float(train_acc),)) # Reset training metrics at the end of each epoch train_acc_metric.reset_states() # Run a validation loop at the end of each epoch. for x_batch_val, y_batch_val in val_dataset: test_step(x_batch_val, y_batch_val) val_acc = val_acc_metric.result() val_acc_metric.reset_states() print("Validation acc: %.4f" % (float(val_acc),)) print("Time taken: %.2fs" % (time.time() - start_time)) Explanation: Now, let's re-run our training loop with this compiled training step: End of explanation class ActivityRegularizationLayer(layers.Layer): def call(self, inputs): self.add_loss(1e-2 * tf.reduce_sum(inputs)) return inputs Explanation: Much faster, isn't it? Low-level handling of losses tracked by the model Layers & models recursively track any losses created during the forward pass by layers that call self.add_loss(value). The resulting list of scalar loss values are available via the property model.losses at the end of the forward pass. If you want to be using these loss components, you should sum them and add them to the main loss in your training step. 
Consider this layer, that creates an activity regularization loss: End of explanation inputs = keras.Input(shape=(784,), name="digits") x = layers.Dense(64, activation="relu")(inputs) # Insert activity regularization as a layer x = ActivityRegularizationLayer()(x) x = layers.Dense(64, activation="relu")(x) outputs = layers.Dense(10, name="predictions")(x) model = keras.Model(inputs=inputs, outputs=outputs) Explanation: Let's build a really simple model that uses it: End of explanation @tf.function def train_step(x, y): with tf.GradientTape() as tape: logits = model(x, training=True) loss_value = loss_fn(y, logits) # Add any extra losses created during the forward pass. loss_value += sum(model.losses) grads = tape.gradient(loss_value, model.trainable_weights) optimizer.apply_gradients(zip(grads, model.trainable_weights)) train_acc_metric.update_state(y, logits) return loss_value Explanation: Here's what our training step should look like now: End of explanation discriminator = keras.Sequential( [ keras.Input(shape=(28, 28, 1)), layers.Conv2D(64, (3, 3), strides=(2, 2), padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2D(128, (3, 3), strides=(2, 2), padding="same"), layers.LeakyReLU(alpha=0.2), layers.GlobalMaxPooling2D(), layers.Dense(1), ], name="discriminator", ) discriminator.summary() Explanation: Summary Now you know everything there is to know about using built-in training loops and writing your own from scratch. To conclude, here's a simple end-to-end example that ties together everything you've learned in this guide: a DCGAN trained on MNIST digits. End-to-end example: a GAN training loop from scratch You may be familiar with Generative Adversarial Networks (GANs). GANs can generate new images that look almost real, by learning the latent distribution of a training dataset of images (the "latent space" of the images). 
A GAN is made of two parts: a "generator" model that maps points in the latent space to points in image space, a "discriminator" model, a classifier that can tell the difference between real images (from the training dataset) and fake images (the output of the generator network). A GAN training loop looks like this: 1) Train the discriminator. - Sample a batch of random points in the latent space. - Turn the points into fake images via the "generator" model. - Get a batch of real images and combine them with the generated images. - Train the "discriminator" model to classify generated vs. real images. 2) Train the generator. - Sample random points in the latent space. - Turn the points into fake images via the "generator" network. - Get a batch of real images and combine them with the generated images. - Train the "generator" model to "fool" the discriminator and classify the fake images as real. For a much more detailed overview of how GANs works, see Deep Learning with Python. Let's implement this training loop. First, create the discriminator meant to classify fake vs real digits: End of explanation latent_dim = 128 generator = keras.Sequential( [ keras.Input(shape=(latent_dim,)), # We want to generate 128 coefficients to reshape into a 7x7x128 map layers.Dense(7 * 7 * 128), layers.LeakyReLU(alpha=0.2), layers.Reshape((7, 7, 128)), layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding="same"), layers.LeakyReLU(alpha=0.2), layers.Conv2D(1, (7, 7), padding="same", activation="sigmoid"), ], name="generator", ) Explanation: Then let's create a generator network, that turns latent vectors into outputs of shape (28, 28, 1) (representing MNIST digits): End of explanation # Instantiate one optimizer for the discriminator and another for the generator. 
d_optimizer = keras.optimizers.Adam(learning_rate=0.0003) g_optimizer = keras.optimizers.Adam(learning_rate=0.0004) # Instantiate a loss function. loss_fn = keras.losses.BinaryCrossentropy(from_logits=True) @tf.function def train_step(real_images): # Sample random points in the latent space random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim)) # Decode them to fake images generated_images = generator(random_latent_vectors) # Combine them with real images combined_images = tf.concat([generated_images, real_images], axis=0) # Assemble labels discriminating real from fake images labels = tf.concat( [tf.ones((batch_size, 1)), tf.zeros((real_images.shape[0], 1))], axis=0 ) # Add random noise to the labels - important trick! labels += 0.05 * tf.random.uniform(labels.shape) # Train the discriminator with tf.GradientTape() as tape: predictions = discriminator(combined_images) d_loss = loss_fn(labels, predictions) grads = tape.gradient(d_loss, discriminator.trainable_weights) d_optimizer.apply_gradients(zip(grads, discriminator.trainable_weights)) # Sample random points in the latent space random_latent_vectors = tf.random.normal(shape=(batch_size, latent_dim)) # Assemble labels that say "all real images" misleading_labels = tf.zeros((batch_size, 1)) # Train the generator (note that we should *not* update the weights # of the discriminator)! with tf.GradientTape() as tape: predictions = discriminator(generator(random_latent_vectors)) g_loss = loss_fn(misleading_labels, predictions) grads = tape.gradient(g_loss, generator.trainable_weights) g_optimizer.apply_gradients(zip(grads, generator.trainable_weights)) return d_loss, g_loss, generated_images Explanation: Here's the key bit: the training loop. As you can see it is quite straightforward. The training step function only takes 17 lines. End of explanation import os # Prepare the dataset. We use both the training & test MNIST digits. 
batch_size = 64 (x_train, _), (x_test, _) = keras.datasets.mnist.load_data() all_digits = np.concatenate([x_train, x_test]) all_digits = all_digits.astype("float32") / 255.0 all_digits = np.reshape(all_digits, (-1, 28, 28, 1)) dataset = tf.data.Dataset.from_tensor_slices(all_digits) dataset = dataset.shuffle(buffer_size=1024).batch(batch_size) epochs = 1 # In practice you need at least 20 epochs to generate nice digits. save_dir = "./" for epoch in range(epochs): print("\nStart epoch", epoch) for step, real_images in enumerate(dataset): # Train the discriminator & generator on one batch of real images. d_loss, g_loss, generated_images = train_step(real_images) # Logging. if step % 200 == 0: # Print metrics print("discriminator loss at step %d: %.2f" % (step, d_loss)) print("adversarial loss at step %d: %.2f" % (step, g_loss)) # Save one generated image img = tf.keras.preprocessing.image.array_to_img( generated_images[0] * 255.0, scale=False ) img.save(os.path.join(save_dir, "generated_img" + str(step) + ".png")) # To limit execution time we stop after 10 steps. # Remove the lines below to actually train the model! if step > 10: break Explanation: Let's train our GAN, by repeatedly calling train_step on batches of images. Since our discriminator and generator are convnets, you're going to want to run this code on a GPU. End of explanation
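The data plumbing inside train_step — latent sampling, batch concatenation, and the label-noise trick — can be mirrored in plain numpy to sanity-check shapes; the arrays below are placeholders standing in for generator output and real MNIST batches, not the actual models:

```python
import numpy as np

rng = np.random.default_rng(0)
batch_size, latent_dim = 4, 8

random_latent_vectors = rng.normal(size=(batch_size, latent_dim))
generated = rng.normal(size=(batch_size, 28, 28, 1))   # placeholder for generator(z)
real = rng.normal(size=(batch_size, 28, 28, 1))        # placeholder for a real batch

combined = np.concatenate([generated, real], axis=0)
labels = np.concatenate(
    [np.ones((batch_size, 1)), np.zeros((batch_size, 1))], axis=0
)                                                      # 1 = fake, 0 = real, as above
labels += 0.05 * rng.uniform(size=labels.shape)        # the label-noise trick
print(combined.shape)                                  # (8, 28, 28, 1)
```

The noisy labels stay close to 0 and 1, which softens the discriminator's targets without flipping them.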
Given the following text description, write Python code to implement the functionality described below step by step Description: 1A.data - Decorrelation of random variables We build correlated Gaussian random variables and then try to construct decorrelated variables using matrix computations. Step1: This lab applies matrix computations to vectors of correlated normal variables, in other words the singular value decomposition. Building a dataset Q1 The first step is to build correlated normal random variables in an $N \times 3$ matrix. We want to build this matrix in numpy format. The following program is one way to build such a set using linear combinations. Fill in the lines containing ..... Step2: Q2 From the matrix npm, we want to build the correlation matrix. Step3: What does the matrix a correspond to? Correlation of matrices Q3 Build the correlation matrix from the matrix a. If needed, the copy module can be used. Step4: Q4 Write a function that takes the matrix npm as argument and returns the correlation matrix. This function will be used later to check that we have indeed managed to decorrelate. Step5: A bit of mathematics For what follows, a bit of mathematics. We denote the matrix npm by $M$. $V=\frac{1}{n}M'M$ is the covariance matrix and it is necessarily symmetric. It is a diagonal matrix if and only if the normal variables are independent. Like any symmetric matrix, it is diagonalizable. One can write
Python Code: from jyquickhelper import add_notebook_menu add_notebook_menu() Explanation: 1A.data - Decorrelation of random variables We build correlated Gaussian random variables and then try to construct decorrelated variables using matrix computations. End of explanation import random import numpy as np def combinaison () : x = random.gauss(0,1) # draws a random number y = random.gauss(0,1) # from a normal law z = random.gauss(0,1) # with zero mean and unit variance x2 = x y2 = 3*x + y z2 = -2*x + y + 0.2*z return [x2, y2, z2] # mat = [ ............. ] # npm = np.matrix ( mat ) Explanation: This lab applies matrix computations to vectors of correlated normal variables, in other words the singular value decomposition. Building a dataset Q1 The first step is to build correlated normal random variables in an $N \times 3$ matrix. We want to build this matrix in numpy format. The following program is one way to build such a set using linear combinations. Fill in the lines containing ..... End of explanation npm = ... # see the previous question t = npm.transpose () a = t * npm a /= npm.shape[0] Explanation: Q2 From the matrix npm, we want to build the correlation matrix. End of explanation import copy b = copy.copy (a) # replace this line by b = a b[0,0] = 44444444 print(b) # and compare the result here Explanation: What does the matrix a correspond to? Correlation of matrices Q3 Build the correlation matrix from the matrix a. If needed, the copy module can be used. End of explanation def correlation(npm): # .......... return "....." Explanation: Q4 Write a function that takes the matrix npm as argument and returns the correlation matrix. This function will be used later to check that we have indeed managed to decorrelate.
End of explanation def simulation (N, cov) : # simulates a sample of correlated variables # N : number of variables # cov : covariance matrix # ... return M Explanation: A bit of mathematics For what follows, a bit of mathematics. We denote the matrix npm by $M$. $V=\frac{1}{n}M'M$ is the covariance matrix and it is necessarily symmetric. It is a diagonal matrix if and only if the normal variables are independent. Like any symmetric matrix, it is diagonalizable. One can write : $$\frac{1}{n}M'M = P \Lambda P'$$ $P$ satisfies $P'P= PP' = I$. The matrix $\Lambda$ is diagonal and one can show that all its eigenvalues are positive ($\Lambda = \frac{1}{n}P'M'MP = \frac{1}{n}(MP)'(MP)$). The square root of the matrix $\Lambda$ is then defined by : $$\begin{array}{rcl} \Lambda &=& diag(\lambda_1,\lambda_2,\lambda_3) \\ \Lambda^{\frac{1}{2}} &=& diag\left(\sqrt{\lambda_1},\sqrt{\lambda_2},\sqrt{\lambda_3}\right)\end{array}$$ Next the square root of the matrix $V$ is defined : $$V^{\frac{1}{2}} = P \Lambda^{\frac{1}{2}} P'$$ One checks that $\left(V^{\frac{1}{2}}\right)^2 = P \Lambda^{\frac{1}{2}} P' P \Lambda^{\frac{1}{2}} P' = P \Lambda^{\frac{1}{2}}\Lambda^{\frac{1}{2}} P' = P \Lambda P' = V$. Computing the square root Q6 The numpy module provides a function that returns the matrix $P$ and the vector of eigenvalues $L$ : L,P = np.linalg.eig(a) Check that $P'P=I$. Is it rigorously equal to the identity matrix? Q7 What does the following instruction do : np.diag(L) ? Q8 Write a function that computes the square root of the matrix $\frac{1}{n}M'M$ (recall that $M$ is the matrix npm). See also Square root of a matrix. Decorrelation np.linalg.inv(a) returns the inverse of the matrix a. Q9 Each row of the matrix $M$ represents a vector of three correlated variables. The covariance matrix is $V=\frac{1}{n}M'M$.
Compute the covariance matrix of the matrix $N=M V^{-\frac{1}{2}}$ (mathematically). Q10 Check it numerically. Simulating correlated variables Q11 Using the previous result, propose a method to simulate a vector of correlated variables following a covariance matrix $V$ from a vector of independent normal laws. Q12 Write a function that creates this sample : End of explanation
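A possible answer to Q11 and Q12, sketched with plain numpy arrays rather than np.matrix (the function name simulate_correlated and the final covariance check are illustrative additions, not part of the original exercise):

```python
import numpy as np

def simulate_correlated(N, cov, seed=0):
    # Color N independent standard-normal vectors with the square root of cov
    cov = np.asarray(cov, dtype=float)
    L, P = np.linalg.eigh(cov)                    # cov = P diag(L) P'
    sqrt_cov = P @ np.diag(np.sqrt(L)) @ P.T      # V^(1/2) = P L^(1/2) P'
    rng = np.random.RandomState(seed)
    Z = rng.randn(N, cov.shape[0])                # independent N(0, 1) columns
    return Z @ sqrt_cov                           # rows now have covariance ~ cov

V = np.array([[1.0, 0.8], [0.8, 1.0]])
M = simulate_correlated(200000, V)
print(np.cov(M.T))                                # close to V
```

Multiplying by $V^{\frac{1}{2}}$ is the inverse of the decorrelation step $N=M V^{-\frac{1}{2}}$ seen in Q9.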
Given the following text description, write Python code to implement the functionality described below step by step Description: Intoduction to Pandas and Dataframes <hr> Venkat Malladi (Computational Biologist BICF) Agenda <hr> Introduction to Pandas DataSeries Exercise 1 Exercise 2 Dataframe Exercise 3 Exercise 4 Exercise 5 Import and Store Data Summarizing and Computing Descriptive Statistics Exercise 6 Exercise 7 Grouped and apply Exercise 8 Exercise 9 Data Transformation and Normalization Exercise 10 Exercise 11 Exercise 12 1.1 What is Pandas? <hr> Description Python library providing high-performance, easy-to-use structures and data analysis tools. Suitable for tabular data with heterogeneously-typed colums, as in an SQL table or Excel spreadsheet Provides data analysis feautures similar to Step1: 1.4 Data Structures Series Dataframe Agenda <hr> Introduction to Pandas Series Exercise 1 Exercise 2 Dataframe Exercise 3 Exercise 4 Exercise 5 Import and Store Data Summarizing and Computing Descriptive Statistics Exercise 6 Exercise 7 Grouped and apply Exercise 8 Exercise 9 Data Transformation and Normalization Exercise 10 Exercise 11 Exercise 12 2.1 Series <hr> One dimensional array-like object Contains array of data (of any NumPy data type) with an index that labels each element in the vector. Indexes can be Step2: What datatype is the counts object? Step3: 2.3 Series - String indexes <hr> We can assign meaningful lables to the indexes when making the Series object by specfing an array of index | Index | Value | | ------ | ------ | | CA | 35 | | TX | 50 | | OK | 25 | Make Series of count data with Gene Symbols Step4: 2.4 Series - Dictionary <hr> Can be thought of as a dict Can be constructed from a dict directly. 
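A minimal sketch of the dict-to-Series construction just described (the gene names and counts here are illustrative):

```python
import pandas as pd

# Keys become the index labels, values become the data
gene_counts = pd.Series({'BRCA2': 50, 'GATA2': 10, 'Myc': 12})
print(gene_counts['Myc'])   # label lookup works like a dict -> 12
```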
Construct second sample RNA-counts dict Step5: Make pandas Series from RNA-counts dict Step6: 2.5.1 Series - Referencing Elements - Integer <hr> <div style="background-color Step7: Get the 2nd through 4th elements Step8: 2.5.2 Series - Referencing Elements - String <hr> String Index Get the counts for the Myc Gene Step9: Get the counts for FOXA1, GATA2 and BRCA2 Step10: 2.5.3 Series - Referencing Elements - array/index values <hr> Can get the array representation and index object of the Series via its values and index atrributes, respectively. Get the values in the counts matrix Step11: Get the index of the rna_counts matrix Step12: 2.5.4 Series - Referencing Elements - labels <hr> Can give both the array of values and the index meaningful labels themselves Step13: 2.6 Series - array operations <hr> NumPy array operations can be applied to Series without losing the data structure Use boolean array to filter Series Select Genes that have greater than 20 counts Step14: Select genes that have greater than 20 counts Step15: 2.7 Series - null values <hr> Many times Series have missing values that you need to identify and clean Marvel Cinematic Universe Make Movie DataFrame with missing values Step16: Find movies with no opening revenue Step17: Find movies with opening revenue Step18: Display only movie with no opening revenue Step19: 2.8 Series - auto alignment <hr> Index labels are used to align (merge) data when used in operations with other Series objects Step20: Combine counts for 2 cells (rna_counts and rna_counts_cell2) Step21: Adding Series combined values with the same label in the resulting series Contrast this with arrays, where arrays of the same length will combine values element-wise Notice that the missing values. If one Series has a missing value you can't add it to the other value and a missing value results. 
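The alignment behaviour described above — shared labels are added, labels present in only one Series come out as NaN — can be sketched with two tiny Series (values illustrative):

```python
import pandas as pd

a = pd.Series({'BRCA2': 50, 'Myc': 12})
b = pd.Series({'BRCA1': 20, 'BRCA2': 5, 'Myc': 45})

total = a + b
print(total)   # BRCA1 is NaN: it only exists in b
```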
<div style="background-color Step22: <div style="background-color Step23: Agenda <hr> Introduction to Pandas Series Exercise 1 Exercise 2 Dataframe Exercise 3 Exercise 4 Exercise 5 Import and Store Data Summarizing and Computing Descriptive Statistics Exercise 6 Exercise 7 Grouped and apply Exercise 8 Exercise 9 Data Transformation and Normalization Exercise 10 Exercise 11 Exercise 12 3.1 Dataframe <hr> A DataFrame is a tabular data structure, comprised of rows and columns like in a spreadsheet Each column can be a different value type (numeric, string, boolean etc) | Title | Year | Studio | Rating | | Step24: What datatype is df_mcu? Step25: 3.3 Dataframe - specifying indices and columns <hr> Order of columns/rows can be specified using Step26: 3.4 Dataframe - from nested dict of dicts <hr> Outer dict keys as columns and inner dict keys as row indices Make Dataframe of population of states Step27: 3.5 Dataframe - number of rows and columns <hr> Get the number of rows in a Dataframe Step28: Get the (rows, cols) of the Dataframe Step29: 3.6 Dataframe - index, columns and values <hr> Get the column headers Step30: Get the row index values Step31: Get values of the Dataframe only Step32: 3.7 Dataframe - Selecting Columns and Rows <hr> There are three basic ways to access the data in the Dataframe Step33: Select values in a list of columns Step34: Use slice to get the first n rows (NumPy style indexing) Step35: Can combine slice and column selection to select the first n rows Step36: Order of column and slice doesn't matter Step37: 3.7.2 Dataframe - Selecting Columns and Rows - Integer based selection <hr> iloc is primarily an integer position based (from 0 to length-1 of the axis), when we include the starting from the end indexing, but may also be used with a boolean array. 
Allowed inputs are Step38: A list or array of integers Step39: A slice object with ints Step40: 3.7.3 Dataframe - Selecting Columns and Rows - Integer based selection <hr> loc is primarily label based, but may also be used with a boolean array. Allowed inputs are Step41: A list or array of labels Step42: A slice object with labels <div style="background-color Step43: 3.7.4 Dataframe - Selecting Columns and Rows - Data Filtering <hr> Data filtering using boolean Filter and select on single condition Step44: Filter and select on multiple conidtions Step45: 3.8 Dataframe - Adding and Deleting data <hr> Add a new column Step46: Add a new row Step47: Drop an existing column Step48: Drop an existing row Step49: Columns, Rows or individual elements can be modified similarly using loc or iloc. <div style="background-color Step50: <div style="background-color Step51: <div style="background-color Step52: Agenda <hr> Introduction to Pandas Series Exercise 1 Exercise 2 Dataframe Exercise 3 Exercise 4 Import and Store Data Summarizing and Computing Descriptive Statistics Exercise 6 Exercise 7 Grouped and apply Exercise 8 Exercise 9 Data Transformation and Normalization Exercise 10 Exercise 11 Exercise 12 4.1 Import and Store Data <hr> The first step in any problem is identifying what format your data is in, and then loading it into whatever framework you're using. Common formats are Step53: Agenda <hr> Introduction to Pandas Series Exercise 1 Exercise 2 Dataframe Exercise 3 Exercise 4 Import and Store Data Summarizing and Computing Descriptive Statistics Exercise 6 Exercise 7 Grouped and apply Exercise 8 Exercise 9 Data Transformation and Normalization Exercise 10 Exercise 11 Exercise 12 5.1 Summarizing and Computing Descriptive Statistics <hr> Pandas as a lot of built-in essential functionality common to the pandas data structures to help explore the data. 
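The helpers this section walks through (head, sort_values, describe-style statistics) can be previewed on a small made-up frame:

```python
import pandas as pd

df = pd.DataFrame({'Rating': [0.96, 0.92, 0.83], 'Year': [2018, 2017, 2017]})

print(df.head(2))                 # first rows
print(df.sort_values('Rating'))   # sort by a column
print(df['Rating'].mean())        # a column-wise statistic
```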
5.2 Summarizing and Computing Descriptive Statistics - Head and Tail <hr> To view a small sample of a Series or DataFrame object, use Step54: 5.2 Summarizing and Computing Descriptive Statistics - Sorting <hr> To sort data for exploring the data use Step55: Sort Dataframe by columns in descending order Step56: 5.3 Summarizing and Computing Descriptive Statistics - Descriptive statistics <hr> Built in functions to calculate the values over row or columns describe() Step57: mean() Step58: var() Step59: 5.4 Summarizing and Computing Descriptive Statistics - Missing Data <hr> Data comes in many shapes and forms and Pandas is very flexible in handling missing data Step60: - Fill Nan with default value Step61: - Use inplace to modify the dataframe instead of retunring a new object Step62: <div style="background-color Step63: <div style="background-color Step64: Agenda <hr> Introduction to Pandas Series Exercise 1 Exercise 2 Dataframe Exercise 3 Exercise 4 Import and Store Data Summarizing and Computing Descriptive Statistics Exercise 6 Exercise 7 Grouped and apply Exercise 8 Exercise 9 Data Transformation and Normalization Exercise 10 Exercise 11 Exercise 12 6.1 Grouped and apply <hr> The Grouped functionality is referring to a process involving one or more of the following steps Step65: 6.3 Grouped and apply - Apply <hr> Apply the same function to every column or row Step66: - Apply a new function that subtract max from 2 times min in every column Step67: <div style="background-color Step68: <div style="background-color Step69: Introduction to Pandas Series Exercise 1 Exercise 2 Dataframe Exercise 3 Exercise 4 Import and Store Data Summarizing and Computing Descriptive Statistics Exercise 6 Exercise 7 Grouped and apply Exercise 8 Exercise 9 Data Transformation and Normalization Exercise 10 Exercise 11 Exercise 12 7.1 Data Transformation and Normalization <hr> For Machine Learning (ML) and other data analysis it is important to Step70: Plot the Number of Medals in 
a bar chart by country using Step71: Other useful plots Step72: <div style="background-color Step73: <div style="background-color
Python Code: # Import Pandas and Numpy import pandas as pd import numpy as np Explanation: Intoduction to Pandas and Dataframes <hr> Venkat Malladi (Computational Biologist BICF) Agenda <hr> Introduction to Pandas DataSeries Exercise 1 Exercise 2 Dataframe Exercise 3 Exercise 4 Exercise 5 Import and Store Data Summarizing and Computing Descriptive Statistics Exercise 6 Exercise 7 Grouped and apply Exercise 8 Exercise 9 Data Transformation and Normalization Exercise 10 Exercise 11 Exercise 12 1.1 What is Pandas? <hr> Description Python library providing high-performance, easy-to-use structures and data analysis tools. Suitable for tabular data with heterogeneously-typed colums, as in an SQL table or Excel spreadsheet Provides data analysis feautures similar to: R, MATLAB, SAS Based on NumPy 1.2 Key Features <hr> The library is oriented towards table-like data structures that can be manipulated by a collection of methods: Easy handling of missing data Label-based slicing, indexing and subsetting of large data sets Powerful and flexible group by functionality to perform split-apply-combine operations on data sets Read/Write data from/to Excel, CSV, SQL databases, JSON 1.3 Import Pandas <hr> Before we explore the pandas package, let's import pandas. The convention is to use pd to refere to pandas when importing the package. End of explanation # Make Series of count data and visaulize series counts = pd.Series([223, 43, 53, 24, 43]) counts Explanation: 1.4 Data Structures Series Dataframe Agenda <hr> Introduction to Pandas Series Exercise 1 Exercise 2 Dataframe Exercise 3 Exercise 4 Exercise 5 Import and Store Data Summarizing and Computing Descriptive Statistics Exercise 6 Exercise 7 Grouped and apply Exercise 8 Exercise 9 Data Transformation and Normalization Exercise 10 Exercise 11 Exercise 12 2.1 Series <hr> One dimensional array-like object Contains array of data (of any NumPy data type) with an index that labels each element in the vector. 
Indexes can be: integers strings other data types <div style="background-color: #9999ff; padding: 10px;">NOTE: Not Numpy arrays but adds functionality to a Numpy array </div> 2.2 Series - Integer indexes <hr> If an index is not specified, a default sequence of integers is assigned as index. | Index | Value | | ------ | ------ | | 0 | 35 | | 1 | 50 | | 2 | 25 | Make a Series of count data End of explanation # What datatype is the counts object? type(counts) Explanation: What datatype is the counts object? End of explanation # Make Series of count data with Gene Symbols rna_counts = pd.Series([50, 10, 12, 29, 4], index=['BRCA2', 'GATA2', 'Myc', 'FOXA1', 'ERCC2']) rna_counts Explanation: 2.3 Series - String indexes <hr> We can assign meaningful lables to the indexes when making the Series object by specfing an array of index | Index | Value | | ------ | ------ | | CA | 35 | | TX | 50 | | OK | 25 | Make Series of count data with Gene Symbols End of explanation # Construct second sample RNA-counts dict cell2_counts = {'BRCA2':5, 'GATA2':20, 'Myc':45, 'FOXA1':10, 'ERCC2':0, 'BRCA1': 20} cell2_counts Explanation: 2.4 Series - Dictionary <hr> Can be thought of as a dict Can be constructed from a dict directly. 
Construct second sample RNA-counts dict End of explanation # Make pandas Series from RNA-counts dict rna_counts_cell2 = pd.Series(cell2_counts) rna_counts_cell2 Explanation: Make pandas Series from RNA-counts dict End of explanation # Access the 1st element of counts data counts[0] Explanation: 2.5.1 Series - Referencing Elements - Integer <hr> <div style="background-color: #9999ff; padding: 10px;">NOTE: We can access the values like an array text </div> Integer Index Access the 1st element of counts data End of explanation # Get the 2nd through 4th elements counts[1:4] Explanation: Get the 2nd through 4th elements End of explanation # Get the counts for Myc Gene rna_counts['Myc'] Explanation: 2.5.2 Series - Referencing Elements - String <hr> String Index Get the counts for the Myc Gene End of explanation # Get the Counts for FOXA1, GATA2 and BRCA2 rna_counts[['FOXA1', 'GATA2', 'BRCA2']] Explanation: Get the counts for FOXA1, GATA2 and BRCA2 End of explanation # Get the values in the counts matrix counts.values Explanation: 2.5.3 Series - Referencing Elements - array/index values <hr> Can get the array representation and index object of the Series via its values and index atrributes, respectively. 
Get the values in the counts matrix End of explanation # Get the index of the rna_counts matrix rna_counts.index Explanation: Get the index of the rna_counts matrix End of explanation rna_counts.name = 'RNA Counts' rna_counts.index.name = 'Symbol' rna_counts Explanation: 2.5.4 Series - Referencing Elements - labels <hr> Can give both the array of values and the index meaningful labels themselves End of explanation # Select Genes that have greater than 20 counts rna_counts > 20 Explanation: 2.6 Series - array operations <hr> NumPy array operations can be applied to Series without losing the data structure Use boolean array to filter Series Select Genes that have greater than 20 counts End of explanation # Select genes that have greater than 20 counts rna_counts[rna_counts > 20] Explanation: Select genes that have greater than 20 counts End of explanation # Make Movie Database with missing values mcu_opening = {'Black Panther': 202003951, 'Thor: Ragnarok': 122744989, 'Spider-Man: Homecoming': 117027503, 'Guardians of the Galaxy Vol. 2': 146510104, 'Doctor Strange': 85058311, 'Captain America: Civil War': 179139142} mcu_movies = ['Ant-Man and the Wasp', 'Avengers: Infinity War', 'Black Panther', 'Thor: Ragnarok', 'Spider-Man: Homecoming', 'Guardians of the Galaxy Vol. 
2', 'Doctor Strange', 'Captain America: Civil War'] mcu_series = pd.Series(mcu_opening, index=mcu_movies) mcu_series Explanation: 2.7 Series - null values <hr> Many times Series have missing values that you need to identify and clean Marvel Cinematic Universe Make Movie DataFrame with missing values End of explanation # Find movies with no opening revenue pd.isnull(mcu_series) # Good opportunity to use Boolean filter get index and only movie names mcu_series[pd.isnull(mcu_series)].index.values Explanation: Find movies with no opening revenue End of explanation # Find movies with opening revenue pd.notnull(mcu_series) Explanation: Find movies with opening revenue End of explanation # Display only movies with no opening revenue mcu_series[pd.isnull(mcu_series)].index.values Explanation: Display only movie with no opening revenue End of explanation rna_counts rna_counts_cell2 Explanation: 2.8 Series - auto alignment <hr> Index labels are used to align (merge) data when used in operations with other Series objects End of explanation # Combine counts for 2 cells rna_counts + rna_counts_cell2 Explanation: Combine counts for 2 cells (rna_counts and rna_counts_cell2) End of explanation # Sample Python data and labels: students = ['Anastasia', 'Dima', 'Katherine', 'James', 'Emily', 'Michael', 'Matthew', 'Laura', 'Kevin', 'Jonas'] test_scores = [12.5, 9, 16.5, np.nan, 9, 20, 14.5, np.nan, 8, 19] s_scores = pd.Series(test_scores, index=students) # Which Students have scores greater than 15? s_scores[s_scores > 15].index.values # Bonus: How would you use get the Students scores greater than 15 and less than 20? s_scores[(s_scores > 15) & (s_scores < 20)] Explanation: Adding Series combined values with the same label in the resulting series Contrast this with arrays, where arrays of the same length will combine values element-wise Notice that the missing values. If one Series has a missing value you can't add it to the other value and a missing value results. 
<div style="background-color:yellow; padding: 10px"><h3><span></span>Exercise 1</div> <hr> Create a Series from a specified list data which has the index labels as student names and test values. Which Students have scores greater than 15? Bonus: How would you use get the Students scores greater than 15 and less than 20? <div style="background-color: #9999ff; padding: 10px;">Hint: Will use the bitwise opperator & </div> | Names | Value | | ------ | ------ | | Anastasia | 12.5 | | Dima | 9 | | Katherine | 16.5 | | James | NaN | | Emily | 9 | | Michael | 20 | | Matthew | 14.5 | | Laura | NaN | | Kevin | 8 | | Jonas | 19 | End of explanation # What is the mean, median and max test scores? s_scores.mean() s_scores.median() s_scores.max() Explanation: <div style="background-color:yellow; padding: 10px"><h3><span></span>Exercise 2</div> What is the mean, median and max test scores? End of explanation # Make Dataframe of Marvel data mcu_data = {'Title': ['Ant-Man and the Wasp', 'Avengers: Infinity War', 'Black Panther', 'Thor: Ragnarok', 'Spider-Man: Homecoming', 'Guardians of the Galaxy Vol. 
2'], 'Year':[2018, 2018, 2018, 2017, 2017, 2017], 'Studio':['Beuna Vista', 'Beuna Vista', 'Beuna Vista', 'Beuna Vista', 'Sony', 'Beuna Vista'], 'Rating': [np.nan, np.nan, 0.96, 0.92, 0.92, 0.83]} df_mcu = pd.DataFrame(mcu_data) df_mcu Explanation: Agenda <hr> Introduction to Pandas Series Exercise 1 Exercise 2 Dataframe Exercise 3 Exercise 4 Exercise 5 Import and Store Data Summarizing and Computing Descriptive Statistics Exercise 6 Exercise 7 Grouped and apply Exercise 8 Exercise 9 Data Transformation and Normalization Exercise 10 Exercise 11 Exercise 12 3.1 Dataframe <hr> A DataFrame is a tabular data structure, comprised of rows and columns like in a spreadsheet Each column can be a different value type (numeric, string, boolean etc) | Title | Year | Studio | Rating | | :------: | :------: | :------: | :------: | | Ant-Man and the Wasp | 2018| Beuna Vista | NaN| | Avengers: Infinity War | 2018 | Beuna Vista | NaN| | Black Panther | 2018 | Beuna Vista | 0.96 | | Thor: Ragnarok | 2017 | Beuna Vista | 0.92| | Spider-Man: Homecoming| 2017 | Sony | 0.92| | Guardians of the Galaxy Vol. 2| 2017| Beuna Vista | 0.83 | 3.2 Dataframe - from dict of lists <hr> dict keys: columns dict values (arrays): rows Make Dataframe of Marvel data End of explanation # What datatype is df_mcu? type(df_mcu) Explanation: What datatype is df_mcu? 
End of explanation # Assign column order and index based on Marvel Cinemetic Universe Movie Number mcu_index = ['mcu_20','mcu_19', 'mcu_18', 'mcu_17', 'mcu_16', 'mcu_15'] mcu_columns = ['Title', 'Year', 'Studio', 'Rating'] df_mcu = pd.DataFrame(mcu_data, columns = mcu_columns, index = mcu_index) df_mcu Explanation: 3.3 Dataframe - specifying indices and columns <hr> Order of columns/rows can be specified using: columns array index array End of explanation # Make Dataframe of population pop = {'Nevada': {2001: 2.9, 2002: 2.9}, 'Ohio': {2002: 3.6, 2001: 1.7, 2000: 1.5}} df_pop = pd.DataFrame(pop) df_pop Explanation: 3.4 Dataframe - from nested dict of dicts <hr> Outer dict keys as columns and inner dict keys as row indices Make Dataframe of population of states End of explanation # Get the number of rows in a Dataframe len(df_mcu) Explanation: 3.5 Dataframe - number of rows and columns <hr> Get the number of rows in a Dataframe End of explanation # Get the (rows, cols) of the Dataframe df_mcu.shape Explanation: Get the (rows, cols) of the Dataframe End of explanation # Get the column headers df_mcu.columns Explanation: 3.6 Dataframe - index, columns and values <hr> Get the column headers End of explanation # Get the row index values df_mcu.index Explanation: Get the row index values End of explanation # Get values of the Dataframe only df_mcu.values Explanation: Get values of the Dataframe only End of explanation # Select values in a single column df_mcu['Title'] Explanation: 3.7 Dataframe - Selecting Columns and Rows <hr> There are three basic ways to access the data in the Dataframe: Quick Access: DataFrame[] Integer position based selection method: DataFrame.iloc[row, col] Label based selection method: DataFrame.loc[row, col] 3.7.1 Dataframe - Selecting Columns and Rows - Quick Access <hr> Select values in a single column End of explanation # Select values in a list of columns df_mcu[['Title', 'Rating']] Explanation: Select values in a list of columns End of 
explanation # Use slice to get the first n rows (NumPy style indexing) df_mcu[:2] Explanation: Use slice to get the first n rows (NumPy style indexing) End of explanation # Can combine slice and column selection to select the first n rows df_mcu['Title'][:2] Explanation: Can combine slice and column selection to select the first n rows End of explanation df_mcu[:4]['Year'] Explanation: Order of column and slice doesn't matter End of explanation # Return values in the first row df_mcu.iloc[0] # Return values in the first row and second column df_mcu.iloc[0,1] Explanation: 3.7.2 Dataframe - Selecting Columns and Rows - Integer based selection <hr> iloc is primarily an integer position based (from 0 to length-1 of the axis), when we include the starting from the end indexing, but may also be used with a boolean array. Allowed inputs are: Integer End of explanation # Return values in the 3,5 and 6th rows df_mcu.iloc[[2,4,5]] Explanation: A list or array of integers End of explanation # Return values in the first row and columns 2 and 3 df_mcu.iloc[:2, 1:3] Explanation: A slice object with ints End of explanation # Select all values of the 20th Movie df_mcu.loc['mcu_20'] Explanation: 3.7.3 Dataframe - Selecting Columns and Rows - Integer based selection <hr> loc is primarily label based, but may also be used with a boolean array. 
Allowed inputs are: A single label End of explanation # Select all values of the 20th, 17th and 15th movies, which uses row index values, # Not to be confused with df_mcu[['Title', 'Rating']] which uses column headers df_mcu.loc[['mcu_20', 'mcu_17', 'mcu_15']] Explanation: A list or array of labels End of explanation # Select the Year and Rating df_mcu.loc[:, ['Year', 'Rating']] Explanation: A slice object with labels <div style="background-color: #9999ff; padding: 10px;"> NOTE: Unlike numeric index python slices, both the start and the stop are included!</div> End of explanation # Filter for Rating < .95 df_mcu.loc[df_mcu['Rating'] < .95, :] Explanation: 3.7.4 Dataframe - Selecting Columns and Rows - Data Filtering <hr> Data filtering using booleans Filter and select on a single condition End of explanation # Filter for Rating < .95 or Studio is Sony # Reuse the bitwise comparator seen earlier but with OR (|) instead of AND (&). df_mcu.loc[(df_mcu['Rating'] < .95) | (df_mcu['Studio'] == 'Sony'), :] Explanation: Filter and select on multiple conditions End of explanation # Add new predicted rating to Dataframe df_mcu['Predicted Rating'] = np.random.random(len(df_mcu)) df_mcu Explanation: 3.8 Dataframe - Adding and Deleting data <hr> Add a new column End of explanation # Add a new row for a new movie new_row = pd.Series(['Captain Marvel', 2019, 'BeunaVista', np.nan, np.random.random(1)[0]], index=df_mcu.columns, name= 'mcu_21' ) df_mcu.append(new_row) Explanation: Add a new row End of explanation # Drop the Rating Column df_mcu.drop('Rating', axis=1) Explanation: Drop an existing column End of explanation # Drop the 15th and 17th movies df_mcu.drop(['mcu_15', 'mcu_17']) Explanation: Drop an existing row End of explanation # Sample Python data: exam_data = {'Names': ['Anastasia', 'Dima', 'Katherine', 'James', 'Emily', 'Michael', 'Matthew', 'Laura', 'Kevin', 'Jonas'], 'Scores': [12.5, 9, 16.5, np.nan, 9, 20, 14.5, np.nan, 8, 19], 'Attempts': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1],
'Qualify': ['yes', 'no', 'yes', 'no', 'no', 'yes', 'yes', 'no', 'no', 'yes']} student_data = pd.DataFrame(exam_data) student_data # Select the students that qualify, using the Qualify column == 'yes' student_data[student_data['Qualify'] == 'yes'] Explanation: Columns, Rows or individual elements can be modified similarly using loc or iloc. <div style="background-color: #9999ff; padding: 10px;"> NOTE: These return a new Dataframe object, without changing the original dataframe (append, drop).</div> <div style="background-color:yellow; padding: 10px"><h3><span></span>Exercise 3</div> <hr> Create a Dataframe from the specified dictionary of exam data below. Which students qualify (Qualify == 'yes')? | Names | Scores | Attempts | Qualify | | ------ | ------ | ------ | ------ | | Anastasia | 12.5 | 1 | 'yes' | | Dima | 9 | 3 | 'no' | | Katherine | 16.5 | 2 | 'yes' | | James | NaN | 3 | 'no' | | Emily | 9 | 2 | 'no' | | Michael | 20 | 3 | 'yes' | | Matthew | 14.5 | 1 | 'yes' | | Laura | NaN | 1 | 'no' | | Kevin | 8 | 2 | 'no' | | Jonas | 19 | 1 | 'yes' | End of explanation # Add a new column, Grade Level, to indicate which grade in high school the students are in grade_level = ['9th', '9th', '10th', '11th', '12th', '9th', '10th', '11th', '12th', '11th'] student_data['Grade'] = grade_level student_data student_data.columns # Add a new student named Jack (Hint: need to use ignore_index=True) jack = pd.Series(['Jack', 17, 2, 'yes', '9th'], index=student_data.columns) student_data.append(jack, ignore_index=True) Explanation: <div style="background-color:yellow; padding: 10px"><h3><span></span>Exercise 4</div> <hr> Add a new column, Grade Level, to indicate which grade in high school the students are in. Add a new student named Jack. (Hint: need to use ignore_index=True) End of explanation # Add a new column of Pass that is either 0 or 1 based on the column Qualify.
(Hint: use numpy.where) student_data['Pass'] = np.where(student_data['Qualify'] == 'yes', 1, 0) student_data Explanation: <div style="background-color:yellow; padding: 10px"><h3><span></span>Exercise 5</div> <hr> Add a new column of Pass that is either 0 or 1 based on the column Qualify. (Hint: use numpy.where) - 'yes': 1 - 'no': 0 End of explanation # Read in Winter Olympic Medal Winners winter_olympics = pd.read_csv('data/winter_olympics.csv') winter_olympics.head() Explanation: Agenda <hr> Introduction to Pandas Series Exercise 1 Exercise 2 Dataframe Exercise 3 Exercise 4 Import and Store Data Summarizing and Computing Descriptive Statistics Exercise 6 Exercise 7 Grouped and apply Exercise 8 Exercise 9 Data Transformation and Normalization Exercise 10 Exercise 11 Exercise 12 4.1 Import and Store Data <hr> The first step in any problem is identifying what format your data is in, and then loading it into whatever framework you're using. Common formats are: - Text Files: Text files with a common delimiter to separate values (e.g. CSV uses ,) - JSON (JavaScript Object Notation): a standard format for sending data by HTTP requests - Web Page: XML and HTML - Binary: "pickle" format and HDF5 - Database: MySQL, PostgreSQL <div style="background-color: #9999ff; padding: 10px;"> NOTE: Most common are CSV or text files with different delimiters.
</div> 4.2 Import and Store Data - Text Files <hr> Reading read_csv: Use comma separated (,) delimiter to read file read_table: Use tab (\t) delimiter to read file Writing to_csv: Use comma separated (,) delimiter to write file to_csv(sep='\t'): Use tab (\t) delimiter to write file Read in Winter Olympic Medal Winners from Kaggle End of explanation # Get the First 3 lines of a Dataframe winter_olympics.head(3) Explanation: Agenda <hr> Introduction to Pandas Series Exercise 1 Exercise 2 Dataframe Exercise 3 Exercise 4 Import and Store Data Summarizing and Computing Descriptive Statistics Exercise 6 Exercise 7 Grouped and apply Exercise 8 Exercise 9 Data Transformation and Normalization Exercise 10 Exercise 11 Exercise 12 5.1 Summarizing and Computing Descriptive Statistics <hr> Pandas has a lot of built-in essential functionality common to the pandas data structures to help explore the data. 5.2 Summarizing and Computing Descriptive Statistics - Head and Tail <hr> To view a small sample of a Series or DataFrame object, use: head() tail() <div style="background-color: #9999ff; padding: 10px;"> NOTE: The default number of elements to display is five, but you may pass a custom number.
</div> Get the First 3 lines of Dataframe End of explanation # Sort Dataframe by rows in ascending order df_mcu.sort_index(axis=0, ascending=True) Explanation: 5.2 Summarizing and Computing Descriptive Statistics - Sorting <hr> To sort data for exploring the data use: sort_index(): object by labels (along an axis) sort_values(): by the values along either axis <div style="background-color: #9999ff; padding: 10px;"> NOTE: axis: 0 or ‘index’, 1 or ‘columns’ and default is 0 </div> Sort Dataframe by rows in ascending order End of explanation # Sort Dataframe by column in descending order df_mcu.sort_values(by=['Rating', 'Predicted Rating'], ascending=False) Explanation: Sort Dataframe by columns in descending order End of explanation # Summary Statistics for the Dataframe df_mcu.describe() Explanation: 5.3 Summarizing and Computing Descriptive Statistics - Descriptive statistics <hr> Built in functions to calculate the values over row or columns describe(): return summary statistics of each column - for numeric data: mean, std, max, min, 25%, 50%, 75%, etc. 
- For non-numeric data: count, unique, most-frequent item, etc End of explanation # Mean of the Rating and Predicted Rating Columns df_mcu.loc[:,['Rating', 'Predicted Rating']].mean() Explanation: mean() End of explanation # Get the variance of the Rating column df_mcu.loc[:,['Rating']].var() Explanation: var() End of explanation # Drop rows with NaN values df_mcu.dropna() Explanation: 5.4 Summarizing and Computing Descriptive Statistics - Missing Data <hr> Data comes in many shapes and forms and Pandas is very flexible in handling missing data: NaN is the default missing value marker; however, Python None will also arise and we wish to consider it "missing" or "not available" or "NA" as well Drop rows with NaN values End of explanation # Fill NaN in Dataframe with default value df_mcu.fillna(0) Explanation: - Fill NaN with default value End of explanation # Fill NaN in Dataframe with default value in place df_mcu.fillna(0, inplace=True) df_mcu Explanation: - Use inplace to modify the dataframe instead of returning a new object End of explanation # What is the median score of the students on the exam? student_data['Scores'].median() Explanation: <div style="background-color:yellow; padding: 10px"><h3><span></span>Exercise 6</div> <hr> What is the median score of the students on the exam? End of explanation # Deduct 4 points from everyone that attempted the exam 2 or more times. Replace all NaN scores with 0. (Passing is 12 points) student_data.loc[student_data['Attempts'] >= 2, 'Scores'] -= 4 student_data['Scores'] = student_data['Scores'].fillna(0) student_data # Compute the mean. Would the class as a whole pass the test? student_data['Scores'].mean() # Are there any students that will fail now? student_data[(student_data['Qualify'] == 'yes') & (student_data['Scores'] < 12)] Explanation: <div style="background-color:yellow; padding: 10px"><h3><span></span>Exercise 7</div> <hr> Deduct 4 points from everyone that attempted the exam 2 or more times. Replace all NaN scores with 0. (Passing is 12 points) Compute the mean.
Would the class as a whole pass the test? Are there any students that will fail now? End of explanation # Groupby year of release and get mean Rating df_mcu.groupby('Year').mean() Explanation: Agenda <hr> Introduction to Pandas Series Exercise 1 Exercise 2 Dataframe Exercise 3 Exercise 4 Import and Store Data Summarizing and Computing Descriptive Statistics Exercise 6 Exercise 7 Grouped and apply Exercise 8 Exercise 9 Data Transformation and Normalization Exercise 10 Exercise 11 Exercise 12 6.1 Grouped and apply <hr> The groupby functionality refers to a process involving one or more of the following steps: - Splitting the data into groups based on some criteria - Applying a function to each group independently 6.2 Grouped and apply - Splitting <hr> Pandas objects can be split on any of their axes. - a 'grouping' is provided as a mapping of labels to group names in a grouped object - groupby() End of explanation # Apply square to every value in a dataframe test_data = np.arange(9).reshape(3,-1) df_test = pd.DataFrame(test_data, index=['r1', 'r2', 'r3'], columns=['c1', 'c2', 'c3']) df_test df_test.applymap(np.square) Explanation: 6.3 Grouped and apply - Apply <hr> Apply the same function to every column or row: - applymap: Apply same function across every cell - apply: Apply same function to every column (default) or row - Apply square to every value in a dataframe End of explanation # Define a function that subtracts 2 times the min from the max def max_minus_min(x): return max(x)-(2*min(x)) # Apply the new function to every column df_test.apply(max_minus_min) Explanation: - Apply a new function that subtracts 2 times the min from the max in every column End of explanation # Group students by attempts and find the average score? student_data.groupby('Attempts')['Scores'].mean() Explanation: <div style="background-color:yellow; padding: 10px"><h3><span></span>Exercise 8</div> <hr> Group students by attempts and find the average score?
End of explanation # Group students by their pass result and report the variance in scores? student_data.groupby('Pass')['Scores'].var() Explanation: <div style="background-color:yellow; padding: 10px"><h3><span></span>Exercise 9</div> <hr> Group students by their pass result and report the variance in scores? End of explanation # Import maplotlib and setup to display plots notebook import matplotlib.pyplot as plt %matplotlib inline Explanation: Introduction to Pandas Series Exercise 1 Exercise 2 Dataframe Exercise 3 Exercise 4 Import and Store Data Summarizing and Computing Descriptive Statistics Exercise 6 Exercise 7 Grouped and apply Exercise 8 Exercise 9 Data Transformation and Normalization Exercise 10 Exercise 11 Exercise 12 7.1 Data Transformation and Normalization <hr> For Machine Learning (ML) and other data analysis it is important to: - Explore your data - Standardize data (Transform/Normalize) to obtain so different columns became comparable / compatible 7.1 Data Transformation and Normalization - Data Exploration <hr> Pandas Dataframes have a built in plot functionality: - Dataframe.plot() <div style="background-color: #9999ff; padding: 10px;"> NOTE: We will go into more plotting libraries later in the course. </div> End of explanation # Plot the Number of Medals in barchart by country using: plot.bar() winter_olympics.groupby(['Country'])['Medal'].count().plot.bar() Explanation: Plot the Number of Medals in barchart by country using: plot.bar() End of explanation # In the Winter olympics which country has the most Biathlon medals? winter_sport_medal = winter_olympics.groupby(['Country','Sport'])['Medal'].count() winter_sport_medal.head(10) winter_sport_medal.loc[:,'Biathlon',:].sort_values(ascending=False).head() # In the Winter olympics which country has the most Skiing medals? winter_sport_medal.loc[:,'Skiing',:].sort_values(ascending=False).head() # And in which event do they have the most Gold medals? 
winter_sport_disc_medal = winter_olympics.groupby(['Country','Sport', 'Discipline', 'Medal'])['Medal'].count() winter_sport_disc_medal.head() winter_sport_disc_medal['NOR','Skiing'].sort_values(ascending=False) Explanation: Other useful plots: hist: plot.hist() scatter: plot.scatter() boxplot: plot.box() 7.2 Data Transformation and Normalization - Normalization <hr> Why Normalize (re-scale)? - Transform data to obtain a certain distribution e.g. from lognormal to normal - Normalize data so different columns became comparable / compatible Typical normalization approach: - Z-score transformation - Scale to between 0 and 1 - Trimmed mean normalization - Vector length transformation - Quantilenorm Further Resources: - scikit-learn Further Resources <hr> Resources: - Pandas 10 min - Pandas Tutorals - Pandas Cookbook <div style="background-color:yellow; padding: 10px"><h3><span></span>Exercise 10</div> <hr> In the Winter olympics which country has the most Biathlon medals? In the Winter olympics which country has the most Skiing medals? And in which event do they have the most Gold medals? End of explanation # Import the Summer Olympic dataset located in ('data/summer_olypmics.csv') summer_olympics = pd.read_csv('data/summer_olympics.csv') #Which Olympian has the most medals? summer_olympics.groupby(['Athlete'])['Medal'].count().sort_values(ascending=False).head() # Which Olympian has the most Gold medals and for which Country? summer_olympics.groupby(['Athlete','Medal','Country'])['Medal'].count().sort_values(ascending=False).head() # Which Olympian has the most Gold medals and for which Sport? summer_olympics.groupby(['Athlete','Medal','Discipline'])['Medal'].count().sort_values(ascending=False).head() # Which rows have no values and why? 
summer_olympics[pd.isnull(summer_olympics).any(axis=1)] Explanation: <div style="background-color:yellow; padding: 10px"><h3><span></span>Exercise 11</div> <hr> Import the Summer Olympic dataset located in ('data/summer_olympics.csv') Which Olympian has the most medals? Which Olympian has the most Gold medals and for which Country and Sport? Which rows have no values and why? End of explanation # Import the example RNA-seq Count Data in ('data/RNAseq_count_table.txt') rna_counts = pd.read_csv('data/RNAseq_count_table.txt', sep = '\t', index_col = 0) rna_counts.head() # Calculate CPM for each Sample. rna_cpm = rna_counts.divide(rna_counts.sum(axis=0)).multiply(1000000) rna_cpm.head() # Which Gene has the highest average CPM? rna_cpm.mean(axis=1).sort_values(ascending=False).head() # What is the Correlation between SRR1550986 and SRR1550987? rna_cpm.corr() Explanation: <div style="background-color:yellow; padding: 10px"><h3><span></span>Exercise 12</div> <hr> Import the example RNA-seq Count Data in ('data/RNAseq_count_table.txt') Calculate CPM for each Sample. (CPM, Counts Per Million) Formula for CPM = readsMappedToGene x 1/totalNumReads x 10^6 totalNumReads - total number of mapped reads of a sample readsMappedToGene - number of reads mapped to a selected gene Which Gene has the highest average CPM? What is the Correlation between SRR1550986 and SRR1550987? End of explanation
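Section 7.2 above lists typical normalization approaches (z-score transformation, scaling to between 0 and 1) without showing code. A minimal sketch of two of them on a toy DataFrame; the column names and values here are made up for illustration:

```python
import numpy as np
import pandas as pd

def zscore(df):
    # Z-score transformation: subtract each column's mean, divide by its std
    return (df - df.mean()) / df.std()

def minmax(df):
    # Scale each column to the [0, 1] range
    return (df - df.min()) / (df.max() - df.min())

# Toy data standing in for real measurements
scores = pd.DataFrame({"a": [1.0, 2.0, 3.0], "b": [10.0, 20.0, 40.0]})
print(zscore(scores))
print(minmax(scores))
```

Both operate column-wise, so columns on different scales become comparable, which is exactly the motivation given in section 7.2.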
Given the following text description, write Python code to implement the functionality described below step by step Description: &larr; Back to Index Jupyter Basics You are looking at a Jupyter Notebook, an interactive Python shell inside of a web browser. With it, you can run individual Python commands and immediately view their output. It's basically like the Matlab Desktop or Mathematica Notebook but for Python. To start an interactive Jupyter notebook on your local machine, read the instructions at the GitHub README for this repository. If you are reading this notebook on http Step1: Modes The Jupyter Notebook has two different keyboard input modes. In Edit Mode, you type code/text into a cell. Edit Mode is indicated by a green cell border. To enter Edit Mode from Command Mode, press Enter. You can also double-click on a cell. To execute the code inside of a cell and move to the next cell, press Shift-Enter. (Ctrl-Enter will run the current cell without moving to the next cell. This is useful for rapidly tweaking the current cell.) In Command Mode, you can perform notebook level actions such as navigating among cells, selecting cells, moving cells, saving notebooks, displaying help. Command Mode is indicated by a grey cell border. To enter Command Mode from Edit Mode, press Esc. Other commands can also enter Command Mode, e.g. Shift-Enter. To display the Help Menu from Command Mode, press h. Use it often; h is your best friend. Saving Your code goes directly into a Jupyter notebook. To save your changes, click on the "Save" icon in the menu bar, or type s in command mode. If this notebook is in a Git repo, use git checkout -- &lt;file&gt; to revert a saved edit. Writing Text in Markdown Markdown is simply a fancy way of formatting plain text. It is a markup language that is a superset of HTML. 
The Markdown specification is found here Step2: You can also combine imports on one line Step3: Tab Autocompletion Tab autocompletion works in Command Window and the Editor. After you type a few letters, press the Tab key and a popup will appear and show you all of the possible completions, including variable names and functions. This prevents you from mistyping the names of variables -- a big time saver! For example, type scipy. and then press Tab. You should see a list of members in the Python package scipy. Or type scipy.sin, then press Tab to view members that begin with sin. Step4: Inline Documentation To get help on a certain Python object, type ? after the object name, and run the cell Step5: In addition, if you press Shift-Tab in a code cell, a help dialog will also appear. For example, in the cell above, place your cursor after int, and press Shift-Tab. Press Shift-Tab twice to expand the help dialog. More Documentation
Python Code: 1+2 Explanation: &larr; Back to Index Jupyter Basics You are looking at a Jupyter Notebook, an interactive Python shell inside of a web browser. With it, you can run individual Python commands and immediately view their output. It's basically like the Matlab Desktop or Mathematica Notebook but for Python. To start an interactive Jupyter notebook on your local machine, read the instructions at the GitHub README for this repository. If you are reading this notebook on http://musicinformationretrieval.com, you are viewing a read-only version of the notebook, not an interactive version. Therefore, the instructions below do not apply. Tour If you're new, we recommend that you take the User Interface Tour in the Help Menu above. Cells A Jupyter Notebook is comprised of cells. Cells are just small units of code or text. For example, the text that you are reading is inside a Markdown cell. (More on that later.) Code cells allow you to edit, execute, and analyze small portions of Python code at a time. Here is a code cell: End of explanation import numpy import scipy import pandas import sklearn import seaborn import matplotlib import matplotlib.pyplot as plt import librosa import librosa.display import IPython.display as ipd Explanation: Modes The Jupyter Notebook has two different keyboard input modes. In Edit Mode, you type code/text into a cell. Edit Mode is indicated by a green cell border. To enter Edit Mode from Command Mode, press Enter. You can also double-click on a cell. To execute the code inside of a cell and move to the next cell, press Shift-Enter. (Ctrl-Enter will run the current cell without moving to the next cell. This is useful for rapidly tweaking the current cell.) In Command Mode, you can perform notebook level actions such as navigating among cells, selecting cells, moving cells, saving notebooks, displaying help. Command Mode is indicated by a grey cell border. To enter Command Mode from Edit Mode, press Esc. 
Other commands can also enter Command Mode, e.g. Shift-Enter. To display the Help Menu from Command Mode, press h. Use it often; h is your best friend. Saving Your code goes directly into a Jupyter notebook. To save your changes, click on the "Save" icon in the menu bar, or type s in command mode. If this notebook is in a Git repo, use git checkout -- &lt;file&gt; to revert a saved edit. Writing Text in Markdown Markdown is simply a fancy way of formatting plain text. It is a markup language that is a superset of HTML. The Markdown specification is found here: http://daringfireball.net/projects/markdown/basics/ A cell may contain Python code or Markdown code. To convert any Python cell to a Markdown cell, press m. To convert from a Markdown cell to a Python cell, press y. For headings, we recommend that you use Jupyter's keyboard shortcuts. To change the text in a cell to a level-3 header, simply press 3. For similar commands, press h to view the Help menu. Writing Text in $\LaTeX$ In a Markdown cell, you can also use $\LaTeX$ syntax. Example input: $$ \max_{||w||=1} \sum_{i=1}^{N} \big| \langle w, x_i - m \rangle \big|^2 $$ Output: $$ \max_{||w||=1} \sum_{i=1}^{N} \big| \langle w, x_i - m \rangle \big|^2 $$ Imports You may encounter the following imports while using this website: End of explanation import numpy, scipy, pandas Explanation: You can also combine imports on one line: End of explanation # Press Tab at the end of the following line scipy.sin Explanation: Tab Autocompletion Tab autocompletion works in Command Window and the Editor. After you type a few letters, press the Tab key and a popup will appear and show you all of the possible completions, including variable names and functions. This prevents you from mistyping the names of variables -- a big time saver! For example, type scipy. and then press Tab. You should see a list of members in the Python package scipy. Or type scipy.sin, then press Tab to view members that begin with sin. 
End of explanation # Run this cell. int? Explanation: Inline Documentation To get help on a certain Python object, type ? after the object name, and run the cell: End of explanation x = scipy.arange(50) # Try these too: # x = scipy.randn(50) # x = scipy.linspace(0, 1, 50, endpoint=False) x Explanation: In addition, if you press Shift-Tab in a code cell, a help dialog will also appear. For example, in the cell above, place your cursor after int, and press Shift-Tab. Press Shift-Tab twice to expand the help dialog. More Documentation: NumPy, SciPy, Matplotlib In the top menu bar, click on Help, and you'll find a prepared set of documentation links for IPython, NumPy, SciPy, Matplotlib, and Pandas. Experimenting Code cells are meant to be interactive. We may present you with several options for experimentation, e.g. choices of variables, audio files, and algorithms. For example, if you see a cell like this, then try all of the possible options by uncommenting the desired line(s) of code. (To run the cell, select "Cell" and "Run" from the top menu, or press Shift-Enter.) End of explanation
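The experimentation cell above calls scipy.arange, scipy.randn and scipy.linspace; newer SciPy releases no longer re-export these NumPy functions, so the equivalent cell today goes through NumPy directly (a sketch, not part of the original notebook):

```python
import numpy as np

# NumPy equivalents of the scipy.* calls used in the cell above
x = np.arange(50)                          # 0, 1, ..., 49
y = np.random.randn(50)                    # 50 samples from N(0, 1)
z = np.linspace(0, 1, 50, endpoint=False)  # 0.00, 0.02, ..., 0.98
print(x[:5], z[:5])
```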
Given the following text description, write Python code to implement the functionality described below step by step Description: Load station data based on NetCDF files In this example we show how to load station data based on NetCDF files. The data is loaded with the pymepps package. Thanks to Ingo Lange we could use original data from the Wettermast for this example. In the following the data is loaded, plotted and saved as json file. Step1: We could use the global pymepps open_station_dataset function to open the Wettermast data. We have to specify the data path and the data type. Step2: Now we could extract the temperature in 2 m height. For this we use the select method of the resulted dataset. Step3: We could see that the resulting temperature is a normal pandas.Series. So it is possible to use all pandas methods, e.g. plotting of the Series. Step4: Pymepps uses an accessor to extend the pandas functionality. The accessor could be accessed with Series.pp. At the moment there is only a lonlat attribute, update, save and load method defined, but it is planned to expand the number of additional methods.
Python Code: import pymepps import matplotlib.pyplot as plt Explanation: Load station data based on NetCDF files In this example we show how to load station data based on NetCDF files. The data is loaded with the pymepps package. Thanks to Ingo Lange we could use original data from the Wettermast for this example. In the following the data is loaded, plotted and saved as json file. End of explanation wm_ds = pymepps.open_station_dataset('../data/station/wettermast.nc', 'nc') print(wm_ds) Explanation: We could use the global pymepps open_station_dataset function to open the Wettermast data. We have to specify the data path and the data type. End of explanation t2m = wm_ds.select('TT002_M10') print(type(t2m)) print(t2m.describe()) Explanation: Now we could extract the temperature in 2 m height. For this we use the select method of the resulted dataset. End of explanation t2m.plot() plt.xlabel('Date') plt.ylabel('Temperature in °C') plt.title('Temperature at the Wettermast Hamburg') plt.show() Explanation: We could see that the resulting temperature is a normal pandas.Series. So it is possible to use all pandas methods, e.g. plotting of the Series. End of explanation print(t2m.pp.lonlat) Explanation: Pymepps uses an accessor to extend the pandas functionality. The accessor could be accessed with Series.pp. At the moment there is only a lonlat attribute, update, save and load method defined, but it is planned to expand the number of additional methods. End of explanation
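Since the extracted temperature is an ordinary pandas.Series, the usual pandas machinery applies beyond plotting. A sketch on a synthetic 10-minute series that stands in for the Wettermast data (the real values require the NetCDF file above):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the Wettermast series: two days of 10-minute
# temperatures with a simple daily cycle (144 samples per day)
idx = pd.date_range("2017-01-01", periods=288, freq="10min")
temps = 15 + 5 * np.sin(np.arange(288) * 2 * np.pi / 144)
t2m = pd.Series(temps, index=idx, name="TT002_M10")

print(t2m.describe())                  # same kind of summary as in the example
daily_mean = t2m.resample("D").mean()  # aggregate the 10-minute data to days
print(daily_mean)
```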
Given the following text description, write Python code to implement the functionality described below step by step Description: AMPLPY Step1: Google Colab & Kaggle integration Step2: Use %%ampl_eval to pass the model to AMPL Step3: Set data Step4: Use %%ampl_eval to display values Step5: Use amplpy to retrieve values Step6: Use %%ampl_eval to solve the model
Python Code: !pip install -q amplpy ampltools Explanation: AMPLPY: Jupyter Notebook Integration Documentation: http://amplpy.readthedocs.io GitHub Repository: https://github.com/ampl/amplpy PyPI Repository: https://pypi.python.org/pypi/amplpy Jupyter Notebooks: https://github.com/ampl/amplpy/tree/master/notebooks Setup End of explanation MODULES=['ampl', 'gurobi'] from ampltools import cloud_platform_name, ampl_notebook from amplpy import AMPL, register_magics if cloud_platform_name() is None: ampl = AMPL() # Use local installation of AMPL else: ampl = ampl_notebook(modules=MODULES) # Install AMPL and use it register_magics(ampl_object=ampl) # Evaluate %%ampl_eval cells with ampl.eval() Explanation: Google Colab & Kaggle integration End of explanation %%ampl_eval set SIZES; param capacity >= 0; param value {SIZES}; var Qty {SIZES} binary; maximize TotVal: sum {s in SIZES} value[s] * Qty[s]; subject to Cap: sum {s in SIZES} s * Qty[s] <= capacity; Explanation: Use %%ampl_eval to pass the model to AMPL End of explanation ampl.set['SIZES'] = [5, 4, 6, 3] ampl.param['value'] = [10, 40, 30, 50] ampl.param['capacity'] = 10 Explanation: Set data End of explanation %%ampl_eval display SIZES; display value; display capacity; Explanation: Use %%ampl_eval to display values End of explanation print('SIZES:', ampl.set['SIZES'].getValues().toList()) print('value:', ampl.param['value'].getValues().toDict()) print('capacity:', ampl.param['capacity'].value()) Explanation: Use amplpy to retrieve values End of explanation %%ampl_eval option solver gurobi; option gurobi_options 'outlev=1'; solve; Explanation: Use %%ampl_eval to solve the model End of explanation
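The knapsack instance above is small enough to check by brute force in pure Python, a handy sanity check on the AMPL model that needs no solver (a sketch; the item sizes double as identifiers, exactly as in set SIZES):

```python
from itertools import combinations

# Same data as passed to AMPL above
sizes = [5, 4, 6, 3]
value = {5: 10, 4: 40, 6: 30, 3: 50}
capacity = 10

best_value, best_pick = 0, ()
for r in range(len(sizes) + 1):
    for pick in combinations(sizes, r):
        if sum(pick) <= capacity:
            total = sum(value[s] for s in pick)
            if total > best_value:
                best_value, best_pick = total, pick

# The solver should report the same optimal objective
print(best_value, best_pick)
```

For this data the optimum takes the items of size 4 and 3 for a total value of 90, which the Gurobi run above should reproduce.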
Given the following text description, write Python code to implement the functionality described below step by step Description: Create an auto transform to scale numeric columns automatically. Step1: Create an XGBoost classifier and run 5-fold cross validation on the data.
Python Code: df["target"] = df["target"] - 1 t_auto = auto.Auto_transform(exclude=["target"]) df2 = t_auto.fit_transform(df) df2.head() Explanation: Create an auto transform to scale numeric columns automatically. End of explanation from seldon import xgb import seldon.pipeline.cross_validation as cf xgb = xgb.XGBoostClassifier(target="target") cv = cf.Seldon_KFold(xgb, 5) cv.fit(df2) print("Average accuracy ", cv.get_score()) Explanation: Create an XGBoost classifier and run 5-fold cross validation on the data. End of explanation
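Seldon_KFold above performs 5-fold cross validation; the splitting it relies on can be sketched in plain NumPy (seldon's own implementation may differ; this just illustrates the idea of disjoint, exhaustive test folds):

```python
import numpy as np

# Shuffle the row indices, cut them into k disjoint folds, and use each fold
# once as the test set while the remaining folds form the training set
def kfold_indices(n, k, seed=0):
    rng = np.random.RandomState(seed)
    order = rng.permutation(n)
    folds = np.array_split(order, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

splits = list(kfold_indices(150, 5))
for train, test in splits:
    assert len(train) + len(test) == 150
    assert set(train).isdisjoint(test)
print(len(splits), "folds, test fold size", len(splits[0][1]))
```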
Given the following text description, write Python code to implement the functionality described below step by step Description: Convergence on low-rank manifolds Ivan Oseledets Skolkovo Institute of Science and Technology Based on joint work with C. Lubich, H. Walach, D. Kolesnikov The topic of this talk Recently, much attention has been paid to solution of optimization problems on low-rank matrix and tensor manifolds. Low-rank matrix manifold Step1: Linear test case $$\Phi (X) = X + A(X) - f,$$ where $A$ is a linear operator on matrix space, $\Vert A - I\Vert = \delta$, $f$ is known right-hand side and $X_$ is the solution of linear equation $A(X_) = f$. This problem is equivalent to the minimization problem of the quadratic functional $F(X) = \langle A(X) - f, X \rangle.$ Setting up the experiment Generate some random matrices... Step2: Generic case In the generic case, the convergence is as follows. Step3: Bad curvature Now let us design bad singular values for the solution, $$\sigma_k = 10^{2-2k}.$$ Step4: Consequences The typical convergence Step5: Theory status We are still working on the theory. Even the case $n = 2$, $r = 1$ is non-trivial, but yesterday (Tuesday 19th) the estimate was obtained. Actually, I think it can be generalized to arbitrary $n, r$ by block matrix argument. Conclusions For the algorithms
Python Code: #2d case functions
# imports assumed by the helpers below (added; the notebook loads these elsewhere)
import math
import numpy as np
import numpy.linalg as la
import matplotlib.pyplot as plt

def grad(A, x, x0):
    #u, s, v = x
    #u0, s0, v0 = x0
    #u_new = np.linalg.qr(np.hstack((u, u0)))[0]
    #v_new = np.linalg.qr(np.hstack((v, v0)))[0]
    #s_new = u_new.T.dot(u).dot(s).dot(v.T.dot(v_new)) - u_new.T.dot(u0).dot(s0).dot(v0.T.dot(v_new))
    return x0 - A.dot(full(x).flatten()).reshape(x0.shape)
    #return (u_new, s_new, v_new)

#it is Frobenius norm
def get_norm(x):
    u, s, v = x
    return la.norm(s)
    #return math.sqrt(np.trace((u.T.dot(u)).T.dot(v.T.dot(v))))

def check_orthogonality(u):
    if la.norm(u.T.dot(u) - np.eye(u.shape[1])) / math.sqrt(u.shape[1]) < 1e-12:
        return True
    else:
        return False

def orthogonalize(x):
    u, s, v = x
    u_new, ru = np.linalg.qr(u)
    v_new, lv = np.linalg.qr(v)  # fixed: was qr(u), which orthogonalized the wrong factor
    s_new = ru.dot(s).dot(lv.T)  # fixed: transpose lv so that u_new s_new v_new^T == u s v^T
    return (u_new, s_new, v_new)

def diagonalize_core(x):
    u, s, v = x
    ls, s_diag, rs = la.svd(s)
    return (u.dot(ls), np.diag(s_diag), v.dot(rs.T))  # fixed: la.svd returns V^T, so take rs.T

def func(A, x, f):
    # fixed: grad needs the operator and returns a matrix, so take its norm directly
    return la.norm(grad(A, x, f))

def full(dx):
    return dx[0].dot(dx[1].dot(dx[2].T))

def projector_splitting_2d(x, dx, flag_dual=False):
    n, r = x[0].shape
    u, s, v = x[0].copy(), x[1].copy(), x[2].copy()
    if not flag_dual:
        u, s = np.linalg.qr(u.dot(s) + dx.dot(v))
        s = s - u.T.dot(dx).dot(v)
        v, s = np.linalg.qr(v.dot(s.T) + dx.T.dot(u))
        s = s.T
    else:
        v, s = np.linalg.qr(v.dot(s.T) + dx.T.dot(u))
        s = s.T
        s = s - u.T.dot(dx).dot(v)
        u, s = np.linalg.qr(u.dot(s) + dx.dot(v))
    return u, s, v

def inter_point(x, dx):
    u, s, v = x
    u, s = np.linalg.qr(u.dot(s) + dx.dot(v))
    #dx - (I - uu') dx (I - vv') = uu' dx + dx vv' - uu' dx vv'
    dx_tangent = u.dot(u.T.dot(dx)) + (dx.dot(v)).dot(v.T) - u.dot(u.T.dot(dx).dot(v)).dot(v.T)
    return u.copy(), v.copy(), dx_tangent

def minus(x1, x2):
    u1, s1, v1 = x1
    u2, s2, v2 = x2
    u_new = np.linalg.qr(np.hstack((u1, u2)))[0]
    v_new = np.linalg.qr(np.hstack((v1, v2)))[0]
    s_new = u_new.T.dot(u1).dot(s1).dot(v1.T.dot(v_new)) - u_new.T.dot(u2).dot(s2).dot(v2.T.dot(v_new))
    return u_new, s_new, v_new

def ps_proj(x, dx):
    return full(projector_splitting_2d(x, dx)) - full(x)

def rotation(u):
    return la.qr(u)[0]
Explanation: Convergence on low-rank manifolds Ivan Oseledets Skolkovo Institute of Science and Technology Based on joint work with C. Lubich, H. Walach, D. Kolesnikov The topic of this talk Recently, much attention has been paid to the solution of optimization problems on low-rank matrix and tensor manifolds. Low-rank matrix manifold: $A \in \mathbb{R}^{n \times m}$, $A = UV^{\top}$. Low-rank tensor-train manifold, $\mathrm{rank}~A_k = r_k.$ These methods are really important, since they lead to huge complexity reduction, and there are recent theoretical results for the approximability of solutions. Optimization problems examples Linear systems: $F(X) = \langle A(X), X \rangle - 2 \langle F, X\rangle.$ Eigenvalue problems $F(X) = \langle A(X), X \rangle, \mbox{s.t.} \Vert X \Vert = 1.$ Tensor completion: $F(X) = \Vert W \circ (A - X)\Vert.$ My claim is that we need a better understanding of how these methods converge! General scheme The general scheme is that we have some (convergent) method in a full space: $$X_{k+1} = \Phi(X_k),$$ and we know that the solution $X_*$ is on the manifold $\mathcal{M}$. Then we introduce a Riemannian projected method $$X_{k+1} = R(X_k + P_{\mathcal{T}} \Phi(X_k)),$$ where $P_{\mathcal{T}}$ is a projection onto the tangent space, and $R$ is a retraction. Riemannian optimization is easy The projection onto the tangent space is trivial for low-rank matrices (and is not difficult for TT): $$P_{T}(X) = X - (I - UU^{\top}) X (I - VV^{\top}).$$ For the retraction, there are many choices (see the review Absil, O., 2014). Projector-splitting scheme One of the simplest second-order retractions is the projector-splitting (or KSL) scheme Initially proposed as a time integrator for the dynamical low-rank approximation (C. Lubich, I. Oseledets, A projector-splitting... 2014) for matrices and (C. Lubich, I. Oseledets, B. Vandereycken, Time integration of tensor trains, SINUM, 2015).
Reformulated as a retraction in (Absil, Oseledets) Has a trivial form: a half (or full) step of the Alternating Least Squares (ALS). The simplest retraction possible: The projector-splitting scheme can be implemented in a very simple "half-ALS" step. $$ U_1, S_1 = \mathrm{QR}(A V_0), \quad V_1, S_2^{\top} = \mathrm{QR}(A^{\top} U_1).$$ Projection onto the tangent space is not needed! $$X_{k+1} = I(X_k, F), $$ where $I$ is the projector-splitting integrator, and $F$ is the step for the full method. What about the convergence? Now, instead of $$X_{k+1} = \Phi(X_k), \quad X_* = \Phi(X_*),$$ and $X_*$ is on the manifold, we have the manifold-projected process $$Y_{k+1} = I(Y_k, \Phi(Y_k) - Y_k).$$ Linear manifold Suppose we are in the linear case, $$\Phi(X) = X + Q(X), \quad \Vert Q \Vert < 1,$$ and $\mathcal{M}$ is a linear subspace. Then the projected method is never slower. On a curved manifold, the curvature of the manifold plays a crucial role in the convergence estimates. Curvature of fixed-rank manifold The curvature of the manifold of matrices with rank $r$ at a point $X$ is equal to $$\Vert X^{-1} \Vert_2,$$ i.e. the inverse of the minimal singular value. In practice we know that zero singular values of the best rank-$r$ approximation do no harm: approximation of a rank-$1$ matrix with a rank-$2$ matrix is ok (block power method). Experimental convergence study Our numerical experiments confirm that the low-rank matrix manifold typically behaves like a linear manifold, i.e. the projected gradient is almost always faster, independent of the curvature. Projector-splitting as a projection onto the "middle point". Lemma. We can write down one step of the projector-splitting scheme as a projection onto the middle point: $$X_1 = I(X_0, F) = P_{\mathcal{T}(X_m)}( X_0 + F).$$ The proof is simple. Let $X_0 = U_0 S_0 V^{\top}_0$ and $X_1 = U_1 S_1 V^{\top}_1.$ Then take any matrix of the form $U_1, V_0$ as the middle point.
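The "half-ALS" step above is short enough to sketch directly in numpy (a standalone illustration, independent of the helper functions defined in this notebook; the sanity check uses the fact that for a matrix of exact rank $r$ the step reproduces the matrix itself):

```python
import numpy as np

np.random.seed(1)

def half_als_step(A, v):
    # U_1, S_1 = QR(A V_0);  V_1, S_2^T = QR(A^T U_1)
    u1, _ = np.linalg.qr(A.dot(v))
    v1, s_t = np.linalg.qr(A.T.dot(u1))
    return u1, s_t.T, v1

# sanity check: for a matrix of exact rank r, one step reproduces it exactly
n, m, r = 30, 20, 4
X = np.random.randn(n, r).dot(np.random.randn(r, m))
v0, _ = np.linalg.qr(np.random.randn(m, r))  # arbitrary orthonormal right factor
u1, s, v1 = half_als_step(X, v0)
print(np.linalg.norm(u1.dot(s).dot(v1.T) - X))  # essentially zero
```

Note that only two tall QR factorizations are required per step, which is exactly why no explicit projection onto the tangent space is needed.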
Decomposition of the error We can decompose the error into the normal component and the tangent component. Indeed, consider one step. $$Y_1 = I(Y_0, F), \quad Y_0 + F = X_* + H, \quad \Vert H \Vert \leq \delta \Vert Y_0 - X_* \Vert.$$ Then, $$Y_1 = P(Y_0 + F) = P(X_* + H).$$ We have $$E_1 = Y_1 - X_* = P(X_* + H) - X_* = -P_{\perp}(X_*) + P(H).$$ This is the decomposition of the error into the tangent and normal components. End of explanation
#Init sizes
n, r, r0 = 40, 7, 7
M = n * n
Q = np.random.randn(M, M)
Q = Q + Q.T
Q = (Q/np.linalg.norm(Q, 2)) * 0.8  # contraction coefficient
A = np.eye(M) + Q
Explanation: Linear test case $$\Phi(X) = X + A(X) - f,$$ where $A$ is a linear operator on matrix space, $\Vert A - I\Vert = \delta$, $f$ is a known right-hand side and $X_*$ is the solution of the linear equation $A(X_*) = f$. This problem is equivalent to the minimization problem of the quadratic functional $F(X) = \langle A(X) - f, X \rangle.$ Setting up the experiment Generate some random matrices...
End of explanation
# Case 1: Projector-Splitting versus Gradient descent
#Random initialization
x_orig = np.random.randn(n, r0), np.random.randn(r0, r0), np.random.randn(n, r0)
x_start = np.random.randn(n, r0), np.random.randn(r0, r0), np.random.randn(n, r0)
x_start, x_orig = orthogonalize(x_start), orthogonalize(x_orig)
x_orig = diagonalize_core(x_orig)
print 'The singular values of fixed point matrix are \n', np.diag(x_orig[1])
f = full(x_orig)
f = A.dot(f.flatten()).reshape(f.shape)
grad_dist, orth_proj_norm, tangent_proj_norm = [], [], []
k = 50
# Gradient Descent Convergence
x = full(x_start)
for i in xrange(k):
    grad_dist.append(la.norm(x - full(x_orig)))
    dx = f - (A.dot(x.flatten())).reshape(x.shape)
    x = x + dx
# Projector Splitting Convergence
x = x_start
dx_orig = full(x)-full(x_orig)
for i in xrange(k):
    dx = grad(A, x, f)
    u1, v, dx_tangent = inter_point(x, dx_orig)
    dx_orig = full(x)-full(x_orig)
    dx_orig_tangent = u1.dot(u1.T.dot(dx_orig)) + (dx_orig.dot(v)).dot(v.T) - u1.dot(u1.T.dot(dx_orig).dot(v)).dot(v.T)
    orth_proj_norm.append(la.norm(dx_orig - dx_orig_tangent))
    tangent_proj_norm.append(la.norm(dx_orig_tangent))
    x = projector_splitting_2d(x, dx)
# Plotting
plt.semilogy(grad_dist, marker='x', label="GD")
plt.semilogy(orth_proj_norm, marker='o', label="Orthogonal projection")
plt.semilogy(tangent_proj_norm, marker='o', label="Tangent projection")
plt.legend(bbox_to_anchor=(0.40, 1), loc=2)#(1.05, 1), loc=2
plt.xlabel('Iteration')
plt.ylabel('Error')
Explanation: Generic case In the generic case, the convergence is as follows.
End of explanation
# Case 2: Stair convergence
x_orig = np.random.randn(n, r0), np.random.randn(r0, r0), np.random.randn(n, r0)
x_start = np.random.randn(n, r0), np.random.randn(r0, r0), np.random.randn(n, r0)
x_start, x_orig = orthogonalize(x_start), orthogonalize(x_orig)
x_orig = diagonalize_core(x_orig)
u, s, v = x_orig
s_diag = [10**(2-2*i) for i in xrange(r)]
x_orig = (u, np.diag(s_diag), v)
print 'The singular values of fixed point matrix are \n', s_diag
f = full(x_orig)
f = A.dot(f.flatten()).reshape(f.shape)
grad_dist, orth_proj_norm, tangent_proj_norm = [], [], []
k = 50
# Gradient Descent Convergence
x = full(x_start)
for i in xrange(k):
    grad_dist.append(la.norm(x - full(x_orig)))
    dx = f - (A.dot(x.flatten())).reshape(x.shape)
    x = x + dx
# Projector Splitting Convergence
x = x_start
dx_orig = full(x)-full(x_orig)  # fixed: initialise before the loop (was undefined on the first iteration)
for i in xrange(k):
    dx = grad(A, x, f)
    u1, v, dx_tangent = inter_point(x, dx_orig)
    dx_orig = full(x)-full(x_orig)
    dx_orig_tangent = u1.dot(u1.T.dot(dx_orig)) + (dx_orig.dot(v)).dot(v.T) - u1.dot(u1.T.dot(dx_orig).dot(v)).dot(v.T)
    orth_proj_norm.append(la.norm(dx_orig - dx_orig_tangent))
    tangent_proj_norm.append(la.norm(dx_orig_tangent))
    x = projector_splitting_2d(x, dx)
# Plotting
plt.semilogy(grad_dist, marker='x', label="GD")
plt.semilogy(orth_proj_norm, marker='o', label="Orthogonal projection")
plt.semilogy(tangent_proj_norm, marker='o', label="Tangent projection")
plt.legend(bbox_to_anchor=(0.40, 1), loc=2)#(1.05, 1), loc=2
plt.xlabel('Iteration')
plt.ylabel('Error')
Explanation: Bad curvature Now let us design bad singular values for the solution, $$\sigma_k = 10^{2-2k}.$$ End of explanation
# Case 3: Stair convergence
x_orig = np.random.randn(n, r0), np.random.randn(r0, r0), np.random.randn(n, r0)
x_start = np.random.randn(n, r0), np.random.randn(r0, r0), np.random.randn(n, r0)
x_start, x_orig = orthogonalize(x_start), orthogonalize(x_orig)
eps = 1e-12
u, s, v = x_orig
u1, s1, v1 = x_start
u1 = u1 - u.dot(u.T.dot(u1))
v1 = v1 - v.dot(v.T.dot(v1))
x_start = u1, s1, v1
x_orig = diagonalize_core(x_orig)
print 'The singular values of fixed point matrix are \n', np.diag(x_orig[1])  # (was the stale s_diag from Case 2)
f = full(x_orig)
f = A.dot(f.flatten()).reshape(f.shape)
grad_dist, proj_dist = [], []
k = 10
# Gradient Descent Convergence
x = full(x_start)
for i in xrange(k):
    grad_dist.append(la.norm(x - full(x_orig)))
    dx = f - (A.dot(x.flatten())).reshape(x.shape)
    x = x + dx
# Projector Splitting Convergence
x = x_start
dx_orig = full(x)-full(x_orig)  # fixed: initialise before the loop (was a stale value from the previous cell)
for i in xrange(k):
    dx = grad(A, x, f)
    u1, v, dx_tangent = inter_point(x, dx_orig)
    dx_orig = full(x)-full(x_orig)
    proj_dist.append(la.norm(dx_orig))
    x = projector_splitting_2d(x, dx)
# Plotting
plt.semilogy(grad_dist, marker='x', label="GD")
plt.semilogy(proj_dist, marker='o', label="Projector Splitting")
plt.legend(bbox_to_anchor=(0.40, 1), loc=2)#(1.05, 1), loc=2
plt.xlabel('Iteration')
plt.ylabel('Error')
Explanation: Consequences The typical convergence: The tangent component converges linearly. The normal component decays quadratically until it hits the next singular value, and then waits for the tangent component to catch up. The convergence is monotone: first the first singular vector converges, then the second, and so on. Adversarial examples are still possible (similar to the convergence of the power method). Case when Riemannian is worse You can make up an example where it is worse on the first iteration: it is basically the angle between $V_*$ and $V_0$. End of explanation
from IPython.core.display import HTML
def css_styling():
    styles = open("custom.css", "r").read()
    return HTML(styles)
css_styling()
Explanation: Theory status We are still working on the theory. Even the case $n = 2$, $r = 1$ is non-trivial, but yesterday (Tuesday 19th) the estimate was obtained. Actually, I think it can be generalized to arbitrary $n, r$ by a block matrix argument. Conclusions For the algorithms: this "gradual convergence" may give a hint on how to adapt the ranks during the iteration.
As for applications, there are many (and we are working on machine learning applications, stay tuned). The TT-case is in progress as well. Papers and software Software - TT-Toolbox: https://github.com/oseledets/ttpy (Python) - https://github.com/oseledets/TT-Toolbox (MATLAB) Papers - Christian Lubich and Ivan V. Oseledets. A projector-splitting integrator for dynamical low-rank approximation. BIT, 54(1):171–188, 2014. - Christian Lubich, Ivan Oseledets, and Bart Vandereycken. Time integration of tensor trains. arXiv preprint 1407.2042, 2014 (published in SINUM). - Jutho Haegeman, Christian Lubich, Ivan Oseledets, Bart Vandereycken, and Frank Verstraete. Unifying time evolution and optimization with matrix product states. arXiv preprint 1408.5056, 2014. End of explanation
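As a small numerical appendix to the error decomposition discussed earlier in this talk: the tangent and normal components produced by the projector $P_T(X) = X - (I - UU^{\top}) X (I - VV^{\top})$ are orthogonal in the Frobenius inner product, which can be checked directly (a sketch with random data, not part of the original experiments):

```python
import numpy as np

np.random.seed(0)
n, r = 30, 4
u, _ = np.linalg.qr(np.random.randn(n, r))
v, _ = np.linalg.qr(np.random.randn(n, r))

def tangent_project(h, u, v):
    # P_T(H) = H - (I - U U^T) H (I - V V^T)
    m = h.shape[0]
    return h - (h - u.dot(u.T.dot(h))).dot(np.eye(m) - v.dot(v.T))

H = np.random.randn(n, n)
Ht = tangent_project(H, u, v)   # tangent component P(H)
Hn = H - Ht                     # normal component P_perp(H)
print(abs(np.sum(Ht * Hn)))     # Frobenius inner product: essentially zero
```

The projector is also idempotent, so applying it twice leaves the tangent component unchanged.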
7,552
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: The function below generate_desc() implements this behavior and generates a textual description given a trained model, and a given prepared photo as input.
Python Code::
# imports needed by this snippet (keras-style utilities used below)
from numpy import argmax
from keras.preprocessing.sequence import pad_sequences

# map an integer to a word
def word_for_id(integer, tokenizer):
    for word, index in tokenizer.word_index.items():
        if index == integer:
            return word
    return None

# generate a description for an image
def generate_desc(model, tokenizer, photo, max_length):
    # seed the generation process
    in_text = 'startseq'
    # iterate over the whole length of the sequence
    for i in range(max_length):
        # integer encode input sequence
        sequence = tokenizer.texts_to_sequences([in_text])[0]
        # pad input
        sequence = pad_sequences([sequence], maxlen=max_length)
        # predict next word
        yhat = model.predict([photo, sequence], verbose=0)
        # convert probability to integer
        yhat = argmax(yhat)
        # map integer to word
        word = word_for_id(yhat, tokenizer)
        # stop if we cannot map the word
        if word is None:
            break
        # append as input for generating the next word
        in_text += ' ' + word
        # stop if we predict the end of the sequence
        if word == 'endseq':
            break
    return in_text
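A minimal check of the word_for_id lookup logic, using a hypothetical stand-in for the tokenizer (a real keras Tokenizer builds word_index by fitting on text; this hand-made object exists only to exercise the reverse lookup):

```python
# stand-in tokenizer with a hand-made vocabulary (hypothetical)
class FakeTokenizer(object):
    word_index = {'startseq': 1, 'dog': 2, 'endseq': 3}

def word_for_id(integer, tokenizer):
    for word, index in tokenizer.word_index.items():
        if index == integer:
            return word
    return None

tok = FakeTokenizer()
print(word_for_id(2, tok))   # dog
print(word_for_id(99, tok))  # None (id not in the vocabulary)
```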
7,553
Given the following text description, write Python code to implement the functionality described below step by step Description: Building a neural network with TensorFlow In this module we are going to build a neural network for regression. Regression is the prediction of a real-valued number given some inputs. Step1: Let's generate some data, in this case, a noisy sine wave as plotted below Step2: We are going to use placeholders from now on. Placeholders for X and Y are as follows Step3: We need two parameters, weight W and bias B, for our model Step4: We need to define the model and a cost function Step5: Now that we have defined the variables, we need to run the code. Before we run the code, we must also initialize the variables using tf.initialize_all_variables() to give an initial value to W and B.
Python Code: import tensorflow as tf import numpy as np import matplotlib.pyplot as plt %matplotlib inline Explanation: Building a neural network with TensorFlow In this module we are going to build a neural network for regression. Regression is the prediction of a real-valued number given some inputs. End of explanation n_observations = 1000 xs = np.linspace(-3.0, 3.0, n_observations) ys = np.sin(xs) + np.random.uniform(-0.5, 0.5, n_observations) plt.scatter(xs, ys, alpha=0.15, marker = '+') plt.show() # alpha makes the points transparent and marker changes it from dots to +'s Explanation: Let's generate some data, in this case, a noisy sine wave as plotted below End of explanation X = tf.placeholder(tf.float32, name = 'X') Y = tf.placeholder(tf.float32, name = 'Y') sess = tf.InteractiveSession() n = tf.random_normal([1000]).eval() n_ = tf.random_normal([1000], stddev = 0.1).eval() plt.hist(n) # plt.hist(n, 20) gives answer with 20 buckets plt.hist(n_) # We need initial values much closer to 0 for initializing the weights Explanation: We are going to use placeholders from now on. 
Placeholders for X and Y are as follows End of explanation
W = tf.Variable(tf.random_normal([1], stddev=0.1), name = 'weight')
B = tf.Variable(0.0, name = 'bias')
Explanation: We need two parameters, weight W and bias B, for our model End of explanation
# Perceptron model (or Linear regression)
Y_ = X*W + B
def distance(y, y_):
    return tf.abs(y-y_)
# cost = distance(Y_, tf.sin(X))
cost = tf.reduce_mean(distance(Y_, Y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate = 0.01).minimize(cost)
Explanation: We need to define the model and a cost function End of explanation
n_iterations = 100
sess.run(tf.initialize_all_variables())
for _ in range(n_iterations):
    sess.run(optimizer, feed_dict = {X:xs, Y:ys})
    training_cost = sess.run(cost, feed_dict = {X:xs, Y:ys})
    # This is how to print the values mid execution
    print training_cost, sess.run(W), sess.run(B)
Explanation: Now that we have defined the variables, we need to run the code. Before we run the code, we must also initialize the variables using tf.initialize_all_variables() to give an initial value to W and B. End of explanation
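The training loop above can also be sketched without TensorFlow, as plain numpy (sub)gradient descent on the same L1 cost — a hedged illustration of what GradientDescentOptimizer is doing under the hood, not part of the original notebook:

```python
import numpy as np

np.random.seed(0)
xs = np.linspace(-3.0, 3.0, 1000)
ys = np.sin(xs) + np.random.uniform(-0.5, 0.5, 1000)

W, B = 0.1, 0.0            # same small initial scale as the tf.Variable above
learning_rate = 0.01

def cost(W, B):
    # mean absolute error, matching tf.reduce_mean(tf.abs(Y_ - Y))
    return np.mean(np.abs(xs * W + B - ys))

initial_cost = cost(W, B)
for _ in range(100):
    sign = np.sign(xs * W + B - ys)       # subgradient of the L1 cost
    W -= learning_rate * np.mean(sign * xs)
    B -= learning_rate * np.mean(sign)

print(initial_cost, cost(W, B))  # the cost drops as W approaches the best linear fit
```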
7,554
Given the following text description, write Python code to implement the functionality described below step by step Description: Data management in the ocean, weather and climate sciences If you use this lesson in a bootcamp or workshop, please let me know (irving.damien (at) gmail.com). I'd love to keep track of where it's being used so I can update and improve the content accordingly. Our previous lessons have shown us how to write programs that ingest a list of data files, perform some calculations on those data, and then print a final result to the screen. While this was a useful exercise in learning the principles of scripting and parsing the command line, in most cases the output of our programs will not be so simple. Instead, programs typically take data as input, manipulate that data, and then output yet more data. Over the course of a multi-year research project, most reseachers will write many different programs that produce many different output datasets. We want to Step1: The first thing to notice is the distinctive Data Reference Syntax (DRS) associated with the file. The staff at IMOS have archived the data according to the following directory structure Step2: The great thing about netCDF files is that they contain metadata - that is, data about the data. There are global attributes that give information about the file as a whole (shown above - we will come back to these later), while each variable also has its own attributes. Step3: (The 'u' means each variable name is represented by a Unicode string.) Step4: The raw time values are fairly meaningless, but we can use the time attributes to convert them to a more meaningful format... Step5: Climate and Forecast (CF) metadata convention When performing simple data analysis tasks on netCDF files, command line tools like the Climate Data Operators (CDO) are often a better alternative to writing your own functions in Python. However, let's put ourselves in the shoes of the developers of CDO for a minute. 
In order to calculate the time mean of a dataset for a given start and end date (for example), CDO must first identify the units of the time axis. This isn't as easy as you'd think, since the creator of the netCDF file could easily have called the units attribute measure, or scale, or something else completely unpredictable. They could also have defined the units as weeks since 1-01-01 00 Step6: Both uData and vData are a special type of numpy array (which we have met previously) known as a masked array, whereby some of the points in the time/latitude/longitude grid have missing (or masked) values. Just as with a normal numpy array, we can check the shape of our data (in fact, masked arrays can do everything normal numpy arrays can do and more). Step7: In other words, 493 time steps, 55 latitudes and 57 longitudes. We can now go ahead and calculate the current speed. Step8: Viewing the result It's a good idea to regularly view your data throughout the code development process, just to ensure nothing that crazy has happened along the way. Below is a code except from this example in the AODN user code library, which simply plots one of the 493 timesteps. Step9: Plotting options Quite a few lines of code were required to create our publication quality figure using matplotlib, and there would have been even more had we wanted to use the basemap library to plot coastlines or change the map projection. Recognising this burden, the team at the UK Met Office have developed Iris and Cartopy, which build on matplotlib and basemap to provide a more convenient interface (read Step10: This practice of recording the history of the file ensures the provenance of the data. In other words, a complete record of everything that has been done to the data is stored with the data, which avoids any confusion in the event that the data is ever moved, passed around to different users, or viewed by its creator many months later. 
If we want to create our own entry for the history attribute, we'll need to be able to create a Step11: The strftime function can be used to customise the appearance of a datetime object; in this case we've made it look just like the other time stamps in our data file. Command line record In the Software Carpentry lesson on command line programs we met sys.argv, which contains all the arguments entered by the user at the command line Step12: In launching this IPython notebook, you can see that a number of command line arguments were used. To join all these list elements up, we can use the join function that belongs to Python strings Step13: While this list of arguments is very useful, it doesn't tell us which Python installation was used to execute those arguments. The sys library can help us out here too Step14: Git hash In the Software Carpentry lessons on git we learned that each commit is associated with a unique 40-character identifier known as a hash. We can use the git Python library to get the hash associated with the script Step16: We can now put all this information together for our history entry Step17: Putting it all together So far we've been experimenting in the IPython notebook to familiarise ourselves with netCDF4 and the other Python libraries that might be useful for calculating the surface current speed. We should now go ahead and write a script, so we can repeat the process with a single entry at the command line Step18: Introducing xray It took four separate functions in calc_current_speed.py to create the output file, because we had to copy the dimensions and most of the global and variable attributes from the original file to the new file. This is such a common problem that a Python library called xray has been developed, which conserves metadata whenever possible. When xray is used to read a netCDF file, the data are stored as an xray.DataArray (as opposed to a numpy.ndarray). 
These special data arrays carry their dimension information and variable attributes with them, which means you don't have to retrieve them manually. xray also comes with a bunch of convenience functions for doing typical weather/climate/ocean tasks (calculating climatologies, anomalies, etc), which can be a pain using numpy. Similar to Iris and Cartopy, the easiest way to install xray is with the conda package installer. Simply type the following at the command line Step19: The finished product We can now inspect the attributes in our new file
Python Code: from netCDF4 import Dataset acorn_URL = 'http://thredds.aodn.org.au/thredds/dodsC/IMOS/eMII/demos/ACORN/monthly_gridded_1h-avg-current-map_non-QC/TURQ/2012/IMOS_ACORN_V_20121001T000000Z_TURQ_FV00_monthly-1-hour-avg_END-20121029T180000Z_C-20121030T160000Z.nc.gz' acorn_DATA = Dataset(acorn_URL) Explanation: Data management in the ocean, weather and climate sciences If you use this lesson in a bootcamp or workshop, please let me know (irving.damien (at) gmail.com). I'd love to keep track of where it's being used so I can update and improve the content accordingly. Our previous lessons have shown us how to write programs that ingest a list of data files, perform some calculations on those data, and then print a final result to the screen. While this was a useful exercise in learning the principles of scripting and parsing the command line, in most cases the output of our programs will not be so simple. Instead, programs typically take data as input, manipulate that data, and then output yet more data. Over the course of a multi-year research project, most reseachers will write many different programs that produce many different output datasets. We want to: Develop a personal data management plan so as to avoid confusion/calamity Along the way, we will learn: how to create a Data Reference Syntax how to view the contents of binary files about data provenance and metadata about the Python libraries and command line utilities commonly used in the ocean, weather and climate sciences Additional Python libraries that need be installed: pip install gitpython What's in a name? In this lesson we are going to process some data collected by Australia's Integrated Marine Observing System (IMOS). First off, let's load our data: End of explanation print acorn_DATA Explanation: The first thing to notice is the distinctive Data Reference Syntax (DRS) associated with the file. 
The staff at IMOS have archived the data according to the following directory structure: http://thredds.aodn.org.au/thredds/dodsC/&lt;project&gt;/&lt;organisation&gt;/&lt;collection&gt;/&lt;facility&gt;/&lt;data-type&gt;/&lt;site-code&gt;/&lt;year&gt;/ From this we can deduce, without even inspecting the contents of the file, that we have data from the IMOS project that is run by the eMarine Information Infrastructure (eMII). It was collected in 2012 at the Turquoise Coast, Western Australia (TURQ) site of the Australian Coastal Ocean Radar Network (ACORN), which is a network of high frequency radars that measure the ocean surface current (see this page on the Research Data Australia website for a nice overview of the dataset). The data type has a sub-DRS of its own, which tells us that the data represents the 1-hourly average surface current for a single month (October 2012), and that it is archived on a regularly spaced spatial grid and has not been quality controlled. The file is located in the "demos" directory, as it has been generated for the purpose of providing an example for users in the very helpful Australian Ocean Data Network (AODN) user code library. Just in case the file gets separated from this informative directory stucture, much of the information is repeated in the file name itself, along with some more detailed information about the start and end time of the data, and the last time the file was modified. &lt;project&gt;_&lt;facility&gt;_V_&lt;time-start&gt;_&lt;site-code&gt;_FV00_&lt;data-type&gt;_&lt;time-end&gt;_&lt;modified&gt;.nc.gz In the first instance this level of detail seems like a bit of overkill, but consider the scope of the IMOS data archive. It is the final resting place for data collected by the entire national array of oceanographic observing equipment in Australia, which monitors the open oceans and coastal marine environment covering physical, chemical and biological variables. 
Since the data are so well labelled, locating all monthly timescale ACORN data from the Turquoise Coast and Rottnest Shelf sites (which represents hundreds of files) would be as simple as typing the following at the command line: ls */ACORN/monthly_*/{TURQ,ROT}/*/*.nc While it's unlikely that your research will ever involve cataloging data from such a large observational network, it's still a very good idea to develop your own personal DRS for the data you do have. This often involves investing some time at the beginning of a project to think carefully about the design of your directory and file name structures, as these can be very hard to change later on (a good example is the DRS used by the Climate Model Intercomparison Project). The combination of bash shell wildcards and a well planned DRS is one of the easiest ways to make your research more efficient and reliable. Challenge We haven't even looked inside our IMOS data file and already we have the beginnings of a detailed data management plan. The first step in any research project should be to develop such a plan, so for this challenge we are going to turn back time. If you could start your current research project all over again, what would your data management plan look like? Things to consider include: Data Reference Syntax How long it will take to obtain the data Storage and backup (here's a post with some backup ideas) Write down and discuss your plan with your partner. Binary file formats We can guess from the .nc extension that we are dealing with a Network Common Data Form (netCDF) file. It's also compressed, hence the .gz. Had we actually downloaded the file to our computer and uncompressed it, our initial impulse might have been to type, !cat IMOS_ACORN_V_20121001T000000Z_TURQ_FV00_monthly-1-hour-avg_END-20121029T180000Z_C-20121030T160000Z.nc but such a command would produce an incomprehensible mix symbols and letters. The reason is that up until now, we have been dealing with text files. 
These consist of a simple sequence of character data (represented using ASCII, Unicode, or some other standard) separated into lines, meaning that text files are human-readable when opened with a text editor or displayed using cat. All other file types (including netCDF) are known collectively as binary files. They tend to be smaller and faster for the computer to interpret than text files, but the payoff is that they aren't human-readable unless you have the right intpreter (e.g. .doc files aren't readable with your text editor and must instead be opened with Microsoft Word). To view the contents of a netCDF file we'd need to use a special command line utility called ncdump, which is included with the netCDF C software (i.e. if you're using any software that knows how to deal with netCDF - such as the Anaconda scientific Python distribution used in Software Carpentry workshops - then you probably have ncdump installed without even knowing it). For this example, however, we haven't actually downloaded the netCDF file to our machine. Instead, IMOS has made the data available via a THREDDS server, which means we can just pass a URL to the netCDF4.Dataset function in order to obtain the data. End of explanation print 'The file contains the following variables:' print acorn_DATA.variables.keys() Explanation: The great thing about netCDF files is that they contain metadata - that is, data about the data. There are global attributes that give information about the file as a whole (shown above - we will come back to these later), while each variable also has its own attributes. End of explanation print 'These are the attributes of the time axis:' print acorn_DATA.variables['TIME'] print 'These are some of the time values:' print acorn_DATA.variables['TIME'][0:10] Explanation: (The 'u' means each variable name is represented by a Unicode string.) 
End of explanation from netCDF4 import num2date units = acorn_DATA.variables['TIME'].units calendar = acorn_DATA.variables['TIME'].calendar times = num2date(acorn_DATA.variables['TIME'][:], units=units, calendar=calendar) print times[0:10] Explanation: The raw time values are fairly meaningless, but we can use the time attributes to convert them to a more meaningful format... End of explanation uData = acorn_DATA.variables['UCUR'][:,:,:] vData = acorn_DATA.variables['VCUR'][:,:,:] Explanation: Climate and Forecast (CF) metadata convention When performing simple data analysis tasks on netCDF files, command line tools like the Climate Data Operators (CDO) are often a better alternative to writing your own functions in Python. However, let's put ourselves in the shoes of the developers of CDO for a minute. In order to calculate the time mean of a dataset for a given start and end date (for example), CDO must first identify the units of the time axis. This isn't as easy as you'd think, since the creator of the netCDF file could easily have called the units attribute measure, or scale, or something else completely unpredictable. They could also have defined the units as weeks since 1-01-01 00:00:00 or milliseconds after 1979-12-31. Obviously what is needed is a standard method for defining netCDF attributes, and that’s where the Climate and Forecast (CF) metadata convention comes in. The CF metadata standard was first defined back in the early 2000s and has now been adopted by all the major institutions and projects in the weather/climate sciences. There's a nice blog post on the topic if you'd like more information, but for the most part you just need to be aware that if a tool like CDO isn't working, it might be because your netCDF file isn't CF compliant. 
Calculating the current speed For the sake of example, let's say that our data file contained the zonal (east/west; 'UCUR') and meridional (north/south; 'VCUR') surface current components, but not the total current speed. To calculate it, we first need to assign a variable to the zonal and meridional current data. End of explanation print type(uData) print uData.shape Explanation: Both uData and vData are a special type of numpy array (which we have met previously) known as a masked array, whereby some of the points in the time/latitude/longitude grid have missing (or masked) values. Just as with a normal numpy array, we can check the shape of our data (in fact, masked arrays can do everything normal numpy arrays can do and more). End of explanation spData = (uData**2 + vData**2)**0.5 Explanation: In other words, 493 time steps, 55 latitudes and 57 longitudes. We can now go ahead and calculate the current speed. End of explanation %matplotlib inline from matplotlib.pyplot import figure, pcolor, colorbar, xlabel, ylabel, title, draw, quiver, show, savefig LAT = acorn_DATA.variables['LATITUDE'] LON = acorn_DATA.variables['LONGITUDE'] TIME = acorn_DATA.variables['TIME'] # Only one time value is being plotted. 
modify timeIndex if desired (value between 0 and length(timeData)-1 ) timeIndex = 4 speedData = spData[timeIndex,:,:] latData = LAT[:] lonData = LON[:] # sea water U and V components uData = acorn_DATA.variables['UCUR'][timeIndex,:,:] vData = acorn_DATA.variables['VCUR'][timeIndex,:,:] units = acorn_DATA.variables['UCUR'].units figure1 = figure(figsize=(12, 9), dpi=80, facecolor='w', edgecolor='k') pcolor(lonData , latData, speedData) cbar = colorbar() cbar.ax.set_ylabel('Current speed in ' + units) title(acorn_DATA.title + '\n' + num2date(TIME[timeIndex], TIME.units, TIME.calendar).strftime('%d/%m/%Y')) xlabel(LON.long_name + ' in ' + LON.units) ylabel(LAT.long_name + ' in ' + LAT.units) # plot velocity field Q = quiver(lonData[:], latData[:], uData, vData, units='width') show() #savefig('surface_current.svg', bbox_inches='tight') Explanation: Viewing the result It's a good idea to regularly view your data throughout the code development process, just to ensure nothing too crazy has happened along the way. Below is a code excerpt from this example in the AODN user code library, which simply plots one of the 493 timesteps. End of explanation print acorn_DATA.history Explanation: Plotting options Quite a few lines of code were required to create our publication quality figure using matplotlib, and there would have been even more had we wanted to use the basemap library to plot coastlines or change the map projection. Recognising this burden, the team at the UK Met Office have developed Iris and Cartopy, which build on matplotlib and basemap to provide a more convenient interface (read: shorter, less complex code) for plotting in the weather, climate and ocean sciences. The easiest way to install Iris and Cartopy is to use the conda package installer that comes with Anaconda.
Simply enter the following at the command line: conda install -c scitools iris What if you want to view the contents of a netCDF file quickly, rather than go to the effort of producing something that is publication quality? There are numerous tools out there for doing this, including Panoply and UV-CDAT. Challenge Let's say (hypothetically) that the TURQ site radar has been found to be unreliable for surface current speeds greater than 0.9 m/s. To correct for this problem, the IMOS documentation suggests setting all values greater than 0.9 m/s, back to 0.9 m/s. The most obvious solution to this problem would be to loop through every element in spData and check its value: for t in range(0, len(TIME[:])): for y in range(0, len(LAT[:])): for x in range(0, len(LON[:])): if spData[t, y, x] &gt; 0.9: spData[t, y, x] = 0.9 The problem is that not only is this nested loop kind of ugly, it's also pretty slow. If our data array was even larger (e.g. like the huge data arrays that high resolution global climate models produce), then it would probably be prohibitively slow. The reason is that high level languages like Python and Matlab are built for usability (i.e. they make it easy to write concise, readable code), not speed. To get around this problem, people have written Python libraries (like numpy) in low level languages like C and Fortran, which are built for speed (but definitely not usability). Fortunately we don't ever need to see the C code under the hood of the numpy library, but we should use it to vectorise our array operations whenever possible. 
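To see the pattern on a toy (unmasked) array before tackling the challenge, here is a sketch of the loop versus the vectorised form; the masked-array case in the challenge works the same way with numpy.ma.where:

```python
import numpy as np

# Toy stand-in for the (time, lat, lon) speed array; values in m/s
speed = np.array([[0.2, 1.4], [0.95, 0.5]])

# Loop version (slow for large arrays)
capped_loop = speed.copy()
for i in range(speed.shape[0]):
    for j in range(speed.shape[1]):
        if capped_loop[i, j] > 0.9:
            capped_loop[i, j] = 0.9

# Vectorised version: one call into numpy's compiled C code
capped_vec = np.where(speed > 0.9, 0.9, speed)

print(np.array_equal(capped_loop, capped_vec))  # True
```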
With this in mind: Use the numpy.ma.where() function to correct all values in spData greater than 0.9 (hint: you'll need to import numpy.ma) Use the %timeit cell magic to compare the speed of your answer to the nested loop described above Data provenance Now that we've developed some code for reading in zonal and meridional surface current data and calculating the speed, the logical next step is to put that code in a script called calc_current_speed.py so that we can repeat the process quickly and easily. The output of that script will be a new netCDF file containing the current speed data, however there's one more thing to consider before we go ahead and start creating new files. Looking closely at the global attributes of IMOS_ACORN_V_20121001T000000Z_TURQ_FV00_monthly-1-hour-avg_END-20121029T180000Z_C-20121030T160000Z.nc you can see that the entire history of the file, all the way back to its initial download, has been recorded in the history attribute. End of explanation import datetime time_stamp = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S") print time_stamp Explanation: This practice of recording the history of the file ensures the provenance of the data. In other words, a complete record of everything that has been done to the data is stored with the data, which avoids any confusion in the event that the data is ever moved, passed around to different users, or viewed by its creator many months later. If we want to create our own entry for the history attribute, we'll need to be able to create a: Time stamp Record of what was entered at the command line in order to execute calc_current_speed.py Method of indicating which version of the script was run (i.e.
because the script is in our git repository) Time stamp A library called datetime can be used to find out the time and date right now: End of explanation import sys print sys.argv Explanation: The strftime function can be used to customise the appearance of a datetime object; in this case we've made it look just like the other time stamps in our data file. Command line record In the Software Carpentry lesson on command line programs we met sys.argv, which contains all the arguments entered by the user at the command line: End of explanation args = " ".join(sys.argv) print args Explanation: In launching this IPython notebook, you can see that a number of command line arguments were used. To join all these list elements up, we can use the join function that belongs to Python strings: End of explanation exe = sys.executable print exe Explanation: While this list of arguments is very useful, it doesn't tell us which Python installation was used to execute those arguments. The sys library can help us out here too: End of explanation from git import Repo import os git_hash = Repo(os.getcwd()+'/../').heads[0].commit print git_hash Explanation: Git hash In the Software Carpentry lessons on git we learned that each commit is associated with a unique 40-character identifier known as a hash. We can use the git Python library to get the hash associated with the script: End of explanation entry = "%s: %s %s (Git hash: %s)" %(time_stamp, exe, args, str(git_hash)[0:7]) print entry Explanation: We can now put all this information together for our history entry: End of explanation !cat code/calc_current_speed.py Explanation: Putting it all together So far we've been experimenting in the IPython notebook to familiarise ourselves with netCDF4 and the other Python libraries that might be useful for calculating the surface current speed.
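One more provenance detail the script needs: the new entry should be prepended to the file's existing global history attribute rather than overwriting it. A sketch of that string handling (how calc_current_speed.py manages the attribute is an assumption here):

```python
def update_history(new_entry, old_history=''):
    """Prepend a provenance entry to an existing netCDF history attribute."""
    if old_history:
        return new_entry + '\n' + old_history
    return new_entry

combined = update_history('2012-10-30T16:00:00: python calc_current_speed.py ...',
                          'Tue Oct 30 2012: original download')
print(combined)
```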
We should now go ahead and write a script, so we can repeat the process with a single entry at the command line: End of explanation !python code/calc_current_speed.py -h !python code/calc_current_speed.py http://thredds.aodn.org.au/thredds/dodsC/IMOS/eMII/demos/ACORN/monthly_gridded_1h-avg-current-map_non-QC/TURQ/2012/IMOS_ACORN_V_20121001T000000Z_TURQ_FV00_monthly-1-hour-avg_END-20121029T180000Z_C-20121030T160000Z.nc.gz UCUR VCUR IMOS_ACORN_SPCUR_20121001T000000Z_TURQ_monthly-1-hour-avg_END-20121029T180000Z.nc Explanation: Introducing xray It took four separate functions in calc_current_speed.py to create the output file, because we had to copy the dimensions and most of the global and variable attributes from the original file to the new file. This is such a common problem that a Python library called xray has been developed, which conserves metadata whenever possible. When xray is used to read a netCDF file, the data are stored as an xray.DataArray (as opposed to a numpy.ndarray). These special data arrays carry their dimension information and variable atributes with them, which means you don't have to retrieve them manually. xray also comes with a bunch of convenience functions for doing typical weather/climate/ocean tasks (calculating climatologies, anomalies, etc), which can be a pain using numpy. Similar to Iris and Cartopy, the easiest way to install xray is with the conda package installer. Simply type the following at the command line: conda install xray dask netCDF4 bottleneck Using the help information that the argparse library provides, we can now go ahead and run this script at the command line: End of explanation !ncdump -h IMOS_ACORN_SPCUR_20121001T000000Z_TURQ_monthly-1-hour-avg_END-20121029T180000Z.nc Explanation: The finished product We can now inspect the attributes in our new file: End of explanation
7,555
Given the following text description, write Python code to implement the functionality described. Description: Print any pair of integers whose sum of GCD and LCM equals N Function to print the required pair ; Print the pair ; Driver code
Python Code: def printPair(n): print("1", end=" ") print(n - 1) n = 14 printPair(n)
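The pair (1, n − 1) works because gcd(1, n − 1) = 1 and lcm(1, n − 1) = n − 1, so their sum is n for any n > 1. A quick check with the standard library (lcm computed from the identity lcm(a, b) = a·b // gcd(a, b)):

```python
from math import gcd

def check_pair(n):
    a, b = 1, n - 1
    lcm = a * b // gcd(a, b)
    return gcd(a, b) + lcm == n

print(all(check_pair(n) for n in range(2, 100)))  # True
```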
7,556
Given the following text description, write Python code to implement the functionality described below step by step Description: Storage of Network Data and Topology Step1: Spreadsheet Analogy The best analogy for explaining data storage in OpenPNM is the humble spreadsheet. According to this analogy, each pore (or throat) corresponds to a row and each property corresponds to a column. Consider the following network with 4 pores, 3 throats Step2: Let's use pandas to express the geometric properties as a 'spreadsheet' Step3: We can now view this 'spreadsheet' Step4: The properties are the 'column' names, such as 'pore.area', and the rows correspond to the pore index, so 'pore 0' has an area of 0.176763. One could also extract an entire column using Step5: And then access individual elements Step6: Numpy Arrays Stored in Dictionaries Although the spreadsheet analogy described above is very close to reality, OpenPNM does not actually use pandas DataFrames, or any other spreadsheet data structure. Instead, it uses the basic Python dictionary, filled with Numpy arrays to accomplish a nearly identical behavior, but with a bit more flexibility. Each OpenPNM object (e.g. networks, algorithms, etc) is actually a customized (a.k.a. subclassed) Python dictionary which allows data to be stored and accessed by name, with a syntax like network['pore.diameter']. This is analogous (actually indistinguishable) to extracting a column from a spreadsheet as outlined above. Once the data array is retrieved from the dictionary, it is then a simple matter of working with a Numpy array, which use familiar array indexing, so the diameter of pore 0 is found be accessing element 0 of the array stored under the 'pore.diameter' key. This is elaborated upon below. Quick Overview of Dictionaries The internet contains many tutorials on Python "dicts". 
To summarize, they are general purpose data contains where items can be stored by name, as follows Step7: Data can be accessed by name, which is called a "key" Step8: And all keys on a given dictionary can be retrieved as a list Step9: Following the spreadsheet analogy, the dictionary is the sheet and the keys are the column names. Quick Overview of Numpy Arrays OpenPNM uses dictionaries or dict's to store an assortment of Numpy arrays that each contain a specific type of pore or throat data. There are many tutorials on the internet explaining the various features and benefits of Numpy arrays. To summarize, they are familiar to numerical arrays in any other language. Let's extract a Numpy array from the geo object and play with it Step10: It's possible to extract several elements at once Step11: And easy to multiply all values in the array by a scalar or by another array of the same size, which defaults to element-wise multiplication Step12: Referring back to the spreadsheet analogy, the Numpy arrays represent the actual cells of the sheet, with the array index corresponding to the row number. Rules to Maintain Data Integrity Several rules have been implemented to control the integrity of the data Step13: This illustrates that the basic python list-type has been converted to a Numpy array when stored in the dictionary Any Scalars are Expanded to a Full-Length Vector For the sake of consistency only arrays of length Np or Nt are allowed in the dictionary. Assigning a scalar value to a dictionary results in the creation of a full length vector, either Np or Nt long, depending on the name of the array.. This effectively applies the scalar value to all locations in the network.** Step14: Note how the scalar value has been cast to an array of 4 elements long, one for each pore in the network. Dictionary Keys Must Start With 'pore' or 'throat' All array names must begin with either 'pore.' or 'throat.' which serves to identify the type of information they contain. 
Step15: Nesting Dictionary Names are Allowed It's possible to create nested properties by assigning a dictionary containing numpy arrays Step16: The above rule about expanding the scalar values to a numpy array have been applied. Requesting the top level of dictionary key returns both concentrations, but they can accessed directly too Step17: Unfortunately, the following does not work Step18: Boolean Arrays are Treated as Labels Any Boolean data will be treated as a label while all other numerical data is treated as a property. Step19: You can see that 'pore.label' shows up in this list automatically since it is of Boolean type. For more information on using labels see the tutorial on Using and Creating Labels. Dictionary Keys with a Leading Underscore are Hidden Following the Python convention, if a piece of data is not really meant to be seen or used by the user, it can be pre-pended with an underscore and it will no appear in any output. Step20: The 'pore._hidden' key does not show up in this list. Representing Topology Consider the following simple random network Step21: The basic premise of how OpenPNM stores topology can be stated in 1 sentence
Python Code: import openpnm as op %config InlineBackend.figure_formats = ['svg'] import numpy as np np.random.seed(0) Explanation: Storage of Network Data and Topology End of explanation pn = op.network.Cubic(shape=[4, 1, 1]) geo = op.geometry.SpheresAndCylinders(network=pn, pores=pn.Ps, throats=pn.Ts) Explanation: Spreadsheet Analogy The best analogy for explaining data storage in OpenPNM is the humble spreadsheet. According to this analogy, each pore (or throat) corresponds to a row and each property corresponds to a column. Consider the following network with 4 pores, 3 throats: End of explanation import pandas as pd # Let's pull out only the pore properties from the geometry pore_data_sheet = pd.DataFrame({i: geo[i] for i in geo.props(element='pore')}) Explanation: Let's use pandas to express the geometric properties as a 'spreadsheet': End of explanation pore_data_sheet Explanation: We can now view this 'spreadsheet': End of explanation pore_area_column = pore_data_sheet['pore.cross_sectional_area'] print(pore_area_column) Explanation: The properties are the 'column' names, such as 'pore.area', and the rows correspond to the pore index, so 'pore 0' has an area of 0.176763. One could also extract an entire column using: End of explanation print(pore_area_column[0]) Explanation: And then access individual elements: End of explanation a = {} # Create an empty dict a['item 1'] = 'a string' a['item 2'] = 4 # a number a['another item'] = {} # Even other dicts! print(a) Explanation: Numpy Arrays Stored in Dictionaries Although the spreadsheet analogy described above is very close to reality, OpenPNM does not actually use pandas DataFrames, or any other spreadsheet data structure. Instead, it uses the basic Python dictionary, filled with Numpy arrays to accomplish a nearly identical behavior, but with a bit more flexibility. Each OpenPNM object (e.g. networks, algorithms, etc) is actually a customized (a.k.a. 
subclassed) Python dictionary which allows data to be stored and accessed by name, with a syntax like network['pore.diameter']. This is analogous (actually indistinguishable) to extracting a column from a spreadsheet as outlined above. Once the data array is retrieved from the dictionary, it is then a simple matter of working with a Numpy array, which uses familiar array indexing, so the diameter of pore 0 is found by accessing element 0 of the array stored under the 'pore.diameter' key. This is elaborated upon below. Quick Overview of Dictionaries The internet contains many tutorials on Python "dicts". To summarize, they are general purpose data containers where items can be stored by name, as follows: End of explanation print(a['item 2']) Explanation: Data can be accessed by name, which is called a "key": End of explanation print(a.keys()) Explanation: And all keys on a given dictionary can be retrieved as a list: End of explanation a = geo['pore.diameter'] print(a) Explanation: Following the spreadsheet analogy, the dictionary is the sheet and the keys are the column names. Quick Overview of Numpy Arrays OpenPNM uses dictionaries or dicts to store an assortment of Numpy arrays that each contain a specific type of pore or throat data. There are many tutorials on the internet explaining the various features and benefits of Numpy arrays. To summarize, they are familiar to numerical arrays in any other language. 
Let's extract a Numpy array from the geo object and play with it: End of explanation a[1:3] Explanation: It's possible to extract several elements at once: End of explanation print(a*2) print(a*a) Explanation: And easy to multiply all values in the array by a scalar or by another array of the same size, which defaults to element-wise multiplication: End of explanation pn['throat.list'] = [1, 2, 3] print(type(pn['throat.list'])) Explanation: Referring back to the spreadsheet analogy, the Numpy arrays represent the actual cells of the sheet, with the array index corresponding to the row number. Rules to Maintain Data Integrity Several rules have been implemented to control the integrity of the data: All Values are Converted to Numpy Arrays Only Numpy arrays can be stored in an OpenPNM object, and any data that is written into one of the OpenPNM object dicionaries will be converted to a Numpy array. This is done to ensure that all mathematically operations throughout the code can be consistently done using vectorization. Note that any subclasses of Numpy arrays, such as Dask arrays or Unyt arrays are also acceptable. End of explanation pn['pore.test'] = 0 print(pn['pore.test']) Explanation: This illustrates that the basic python list-type has been converted to a Numpy array when stored in the dictionary Any Scalars are Expanded to a Full-Length Vector For the sake of consistency only arrays of length Np or Nt are allowed in the dictionary. Assigning a scalar value to a dictionary results in the creation of a full length vector, either Np or Nt long, depending on the name of the array.. This effectively applies the scalar value to all locations in the network.** End of explanation try: pn['foo.bar'] = 0 except: print('This will throw an exception since the dict name cannot start with foo') Explanation: Note how the scalar value has been cast to an array of 4 elements long, one for each pore in the network. 
Dictionary Keys Must Start With 'pore' or 'throat' All array names must begin with either 'pore.' or 'throat.' which serves to identify the type of information they contain. End of explanation pn['pore.concentration'] = {'species_A': 0, 'species_B': 1} print(pn['pore.concentration']) Explanation: Nesting Dictionary Names are Allowed It's possible to create nested properties by assigning a dictionary containing numpy arrays End of explanation print(pn['pore.concentration.species_A']) Explanation: The above rule about expanding the scalar values to a numpy array has been applied. Requesting the top level of dictionary key returns both concentrations, but they can be accessed directly too: End of explanation try: pn['pore.concentration']['species_A'] except: print('The request for pore.concentration returns a dictionary with both arrays, ' + 'but the entire property name is still used as the key. ') Explanation: Unfortunately, the following does not work: End of explanation pn['pore.label'] = False print(pn.labels(element='pore')) Explanation: Boolean Arrays are Treated as Labels Any Boolean data will be treated as a label while all other numerical data is treated as a property. End of explanation pn['pore._hidden'] = 1 print(pn.props()) Explanation: You can see that 'pore.label' shows up in this list automatically since it is of Boolean type. For more information on using labels see the tutorial on Using and Creating Labels. Dictionary Keys with a Leading Underscore are Hidden Following the Python convention, if a piece of data is not really meant to be seen or used by the user, it can be pre-pended with an underscore and it will not appear in any output. End of explanation np.random.seed(10) pts = np.random.rand(5, 2)*1.5 pn = op.network.Delaunay(points=pts, shape=[1, 1, 0]) op.topotools.plot_tutorial(pn) Explanation: The 'pore._hidden' key does not show up in this list. 
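The naming rule above can be pictured with a small standalone dict subclass (an illustrative sketch only, not OpenPNM's actual implementation):

```python
import numpy as np

class PoreThroatDict(dict):
    """Toy dict that enforces the 'pore.' / 'throat.' key prefix rule."""
    def __setitem__(self, key, value):
        if not key.startswith(('pore.', 'throat.')):
            raise KeyError("Array names must begin with 'pore.' or 'throat.'")
        # Mirror the other rule too: everything is stored as a numpy array
        super().__setitem__(key, np.array(value, ndmin=1))

d = PoreThroatDict()
d['pore.diameter'] = [0.1, 0.2]      # accepted, converted to a numpy array
try:
    d['foo.bar'] = 0                 # rejected
except KeyError as e:
    print(e)
```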
Representing Topology Consider the following simple random network: End of explanation print(pn['throat.conns']) Explanation: The basic premise of how OpenPNM stores topology can be stated in 1 sentence: The pores on either end of a throat are just another property to be stored, along with diameter, length, etc. In other words, referring to the above diagram, throat No. 5 has pores 3 and 4 on its ends. Using the spreadsheet analogy, this implies a new column that stores the pair of pores connected by each throat. OpenPNM calls this property 'throat.conns': End of explanation
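To see what a 'throat.conns'-style array encodes, here is a standalone sketch (plain Python/numpy, independent of OpenPNM) that expands such an array into a per-pore neighbour list:

```python
import numpy as np

# Each row is one throat: the indices of the two pores it connects
conns = np.array([[0, 1], [1, 2], [2, 3]])

def neighbours(conns, n_pores):
    adj = {p: [] for p in range(n_pores)}
    for p1, p2 in conns:
        adj[int(p1)].append(int(p2))
        adj[int(p2)].append(int(p1))
    return adj

print(neighbours(conns, 4))  # {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```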
7,557
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: I am trying to modify a DataFrame df to only contain rows for which the values in the column closing_price are not between 99 and 101 and trying to do this with the code below.
Problem: import pandas as pd import numpy as np np.random.seed(2) df = pd.DataFrame({'closing_price': np.random.randint(95, 105, 10)}) def g(df): return df.query('closing_price < 99 or closing_price > 101') result = g(df.copy())
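An equivalent boolean-mask form of the same filter uses Series.between; since between is inclusive on both ends, negating it matches the strict < 99 / > 101 query above:

```python
import pandas as pd
import numpy as np

np.random.seed(2)
df = pd.DataFrame({'closing_price': np.random.randint(95, 105, 10)})

mask_result = df[~df['closing_price'].between(99, 101)]
query_result = df.query('closing_price < 99 or closing_price > 101')

print(mask_result.equals(query_result))  # True
```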
7,558
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2020 The TensorFlow Authors. Step1: こんにちは、多くの世界 <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https Step2: TensorFlow Quantum をインストールします。 Step3: 次に、TensorFlow とモジュールの依存関係をインポートします。 Step4: 1. 基本 1.1 Cirq とパラメータ化された量子回路 TensorFlow Quantum (TFQ) について説明する前に、<a target="_blank" href="https Step5: 次のコードは、上記のパラメータを使用して 2 つの量子ビット回路を作成します。 Step6: 回路を評価するには、cirq.Simulatorインターフェースを使用します。回路内の自由パラメータを特定の数値に置き換えるには、cirq.ParamResolverオブジェクトを渡します。以下のコードは、パラメータ化された回路の生の状態ベクトル出力を計算します。 Step7: 状態ベクトルは、シミュレーションの外から直接アクセスすることはできません(上記の複素数出力に注意してください)。物理的に現実的にするには、状態ベクトルを古典的コンピュータが理解できる実数に変換する測定値を指定する必要があります。Cirq は、<a target="_blank" href="https Step8: 1.2 テンソルとしての量子回路 TensorFlow Quantum (TFQ) は、Cirq オブジェクトをテンソルに変換する関数であるtfq.convert_to_tensorを提供します。これにより、Cirq オブジェクトを<a target="_blank" href="https Step9: これは、Cirq オブジェクトをtf.stringテンソルとしてエンコードし、tfq演算は必要に応じてデコードします。 Step10: 1.3 バッチ回路シミュレーション TFQ は、期待値、サンプル、および状態ベクトルを計算するためのメソッドを提供します。まず、期待値から見ていきましょう。 期待値を計算するための最高レベルのインターフェースは、tf.keras.Layerであるtfq.layers.Expectationレイヤーです。最も単純な形式では、このレイヤーは、多くのcirq.ParamResolversでパラメータ化された回路をシミュレートすることと同等ですが、TFQ では TensorFlow セマンティクスに従ったバッチ処理が可能であり、回路は効率的な C++ コードを使用してシミュレートされます。 aとbパラメータの代わりに値のバッチを作成します。 Step11: Cirq のパラメータ値に対するバッチ回路の実行には、ループが必要です。 Step12: TFQ では同じ演算が簡略化されています。 Step13: 2. 
量子古典ハイブリッドの最適化 以上は基本の説明でした。次に、TensorFlow Quantum を使用して量子古典ハイブリッドニューラルネットを構築しましょう。古典的なニューラルネットをトレーニングして、1 つの量子ビットを制御します。コントロールは、0または1の状態の量子ビットを正しく準備するように最適化され、シミュレートされた系統的なキャリブレーションエラーを克服します。以下の図は、アーキテクチャを示しています。 <img src="https Step14: 2.2 コントローラ 次に、コントローラネットワークを定義します。 Step15: コントローラにコマンドのバッチを与えると、制御された回路の制御信号のバッチが出力されます。 コントローラはランダムに初期化されるため、これらの出力はまだ有用ではありません。 Step16: 2.3 コントローラを回路に接続する tfqを使用して、コントローラを 1 つのkeras.Modelとして制御回路に接続します。 このスタイルのモデル定義の詳細については、Keras Functional API ガイドをご覧ください。 まず、モデルへの入力を定義します。 Step17: 次に、これらの入力に演算を適用して、計算を定義します。 Step18: 次に、この計算をtf.keras.Modelとしてパッケージ化します。 Step19: ネットワークアーキテクチャは、以下のモデルのプロットで示されています。このモデルプロットをアーキテクチャ図と比較して、正確さを確認します。 注意 Step20: このモデルは、コントローラのコマンドと、コントローラが出力を修正しようとしている入力回路の 2 つの入力を受け取ります。 2.4 データセット モデルは、コマンドごとに $\hat{Z}$ の正しい測定値の出力を試行します。コマンドと正しい値の定義は以下のとおりです。 Step21: これは、このタスクのトレーニングデータセット全体ではありません。データセット内の各データポイントにも入力回路が必要です。 2.4 入力回路の定義 以下の入力回路は、モデルが修正することを学習するためのランダムな誤校正を定義します。 Step22: 回路には 2 つのコピーがあります(データポイントごとに 1 つずつ)。 Step23: 2.5 トレーニング 定義された入力を使用して、tfqモデルのテストランを実行します。 Step24: 次に、標準のトレーニングプロセスを実行して、これらの値をexpected_outputsに向けて調整します。 Step26: このプロットから、ニューラルネットワークが体系的なキャリブレーションエラーを訂正することを学習したことがわかります。 2.6 出力の確認 次に、トレーニング済みモデルを使用して、量子ビット・キャリブレーションエラーを修正します。Cirq を使用する場合は以下のとおりです。 Step27: トレーニング中の損失関数の値から、モデルの学習がどれほど進んでいるかが大まかに分かります。損失が小さいほど、上記のセルの期待値はDesired_valuesに近くなります。パラメータ値に関心がない場合は、tfqを使用して上記からの出力をいつでも確認できます。 Step28: 3 さまざまな演算子の固有状態の準備について学ぶ 1 と 0 に対応する $\pm \hat{Z}$ 固有状態の選択は任意でした。1 を $+ \hat{Z}$ 固有状態に対応させ、0 を $-\hat{X}$ 固有状態に対応させることも簡単にできます。そのためには、次の図に示すように、コマンドごとに異なる測定演算子を指定します。 <img src="https Step29: コントローラネットワークは次のとおりです。 Step30: tfqを使用して、回路とコントローラを 1 つのkeras.Modelに結合します。 Step31: 3.2 データセット model_circuitに提供する各データポイントに対して測定する演算子も含めます。 Step32: 3.3トレーニング 新しい入力と出力を使用し、keras でもう一度トレーニングします。 Step33: 損失関数はゼロに低下しました。 controllerはスタンドアロンモデルとして利用できます。コントローラを呼び出し、各コマンド信号に対する応答を確認します。多少手間がかかりますが、これらの出力をrandom_rotationsの内容と比較します。
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2020 The TensorFlow Authors. End of explanation !pip install tensorflow==2.7.0 Explanation: Hello, many worlds <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/quantum/tutorials/hello_many_worlds"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/quantum/tutorials/hello_many_worlds.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/quantum/tutorials/hello_many_worlds.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/quantum/tutorials/hello_many_worlds.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td> </table> This tutorial shows how a classical neural network can learn to correct qubit calibration errors. It introduces <a target="_blank" href="https://github.com/quantumlib/Cirq" class="external">Cirq</a>, a Python framework to create, edit, and invoke Noisy Intermediate Scale Quantum (NISQ) circuits, and demonstrates how Cirq interfaces with TensorFlow Quantum. Setup End of explanation !pip install tensorflow-quantum # Update package resources to account for version changes.
import importlib, pkg_resources importlib.reload(pkg_resources) Explanation: Install TensorFlow Quantum. End of explanation import tensorflow as tf import tensorflow_quantum as tfq import cirq import sympy import numpy as np # visualization tools %matplotlib inline import matplotlib.pyplot as plt from cirq.contrib.svg import SVGCircuit Explanation: Now import TensorFlow and the module dependencies. End of explanation a, b = sympy.symbols('a b') Explanation: 1. The basics 1.1 Cirq and parameterized quantum circuits Before exploring TensorFlow Quantum (TFQ), let's look at some <a target="_blank" href="https://github.com/quantumlib/Cirq" class="external">Cirq</a> basics. Cirq is a Python library for quantum computing from Google, used to define circuits, including static and parameterized gates. Cirq uses <a target="_blank" href="https://www.sympy.org" class="external">SymPy</a> symbols to represent free parameters. End of explanation # Create two qubits q0, q1 = cirq.GridQubit.rect(1, 2) # Create a circuit on these qubits using the parameters you created above. circuit = cirq.Circuit( cirq.rx(a).on(q0), cirq.ry(b).on(q1), cirq.CNOT(control=q0, target=q1)) SVGCircuit(circuit) Explanation: The following code creates a two-qubit circuit using the parameters created above. End of explanation # Calculate a state vector with a=0.5 and b=-0.5. 
resolver = cirq.ParamResolver({a: 0.5, b: -0.5}) output_state_vector = cirq.Simulator().simulate(circuit, resolver).final_state_vector output_state_vector Explanation: To evaluate circuits, use the cirq.Simulator interface. You replace free parameters in a circuit with specific numbers by passing in a cirq.ParamResolver object. The following code calculates the raw state vector output of the parameterized circuit. End of explanation z0 = cirq.Z(q0) qubit_map={q0: 0, q1: 1} z0.expectation_from_state_vector(output_state_vector, qubit_map).real z0x1 = 0.5 * z0 + cirq.X(q1) z0x1.expectation_from_state_vector(output_state_vector, qubit_map).real Explanation: State vectors are not directly accessible outside of simulation (notice the complex numbers in the output above). To be physically realistic, you must specify a measurement, which converts a state vector into a real number that classical computers can understand. Cirq specifies measurements using combinations of the <a target="_blank" href="https://en.wikipedia.org/wiki/Pauli_matrices" class="external">Pauli operators</a> $\hat{X}$, $\hat{Y}$ and $\hat{Z}$. As an example, the following code measures $\hat{Z}_0$ and $\frac{1}{2}\hat{Z}_0 + \hat{X}_1$ on the simulated state vector. End of explanation # Rank 1 tensor containing 1 circuit. circuit_tensor = tfq.convert_to_tensor([circuit]) print(circuit_tensor.shape) print(circuit_tensor.dtype) Explanation: 1.2 Quantum circuits as tensors TensorFlow Quantum (TFQ) provides tfq.convert_to_tensor, a function that converts Cirq objects into tensors. This allows you to send Cirq objects to <a target="_blank" href="https://www.tensorflow.org/quantum/api_docs/python/tfq/layers">quantum layers</a> and <a target="_blank" href="https://www.tensorflow.org/quantum/api_docs/python/tfq/get_expectation_op">quantum ops</a>. The function can be called on lists or arrays of Cirq Circuits and Cirq Paulis. End of explanation # Rank 1 tensor containing 2 Pauli operators. 
pauli_tensor = tfq.convert_to_tensor([z0, z0x1]) pauli_tensor.shape Explanation: This encodes the Cirq objects as tf.string tensors, which tfq operations decode as needed. End of explanation batch_vals = np.array(np.random.uniform(0, 2 * np.pi, (5, 2)), dtype=np.float32) Explanation: 1.3 Batching circuit simulation TFQ provides methods for computing expectation values, samples, and state vectors. For now, let's focus on expectation values. The highest-level interface for calculating expectation values is the tfq.layers.Expectation layer, which is a tf.keras.Layer. In its simplest form, this layer is equivalent to simulating a parameterized circuit over many cirq.ParamResolvers; however, TFQ allows batching following TensorFlow semantics, and circuits are simulated using efficient C++ code. Create a batch of values to substitute for the a and b parameters. End of explanation cirq_results = [] cirq_simulator = cirq.Simulator() for vals in batch_vals: resolver = cirq.ParamResolver({a: vals[0], b: vals[1]}) final_state_vector = cirq_simulator.simulate(circuit, resolver).final_state_vector cirq_results.append( [z0.expectation_from_state_vector(final_state_vector, { q0: 0, q1: 1 }).real]) print('cirq batch results: \n {}'.format(np.array(cirq_results))) Explanation: Batching circuit execution over parameter values in Cirq requires a loop. End of explanation tfq.layers.Expectation()(circuit, symbol_names=[a, b], symbol_values=batch_vals, operators=z0) Explanation: TFQ simplifies the same operation. End of explanation # Parameters that the classical NN will feed values into. control_params = sympy.symbols('theta_1 theta_2 theta_3') # Create the parameterized circuit. qubit = cirq.GridQubit(0, 0) model_circuit = cirq.Circuit( cirq.rz(control_params[0])(qubit), cirq.ry(control_params[1])(qubit), cirq.rx(control_params[2])(qubit)) SVGCircuit(model_circuit) Explanation: 2. 
Hybrid quantum-classical optimization Now that you've covered the basics, let's use TensorFlow Quantum to construct a hybrid quantum-classical neural net. You will train a classical neural net to control a single qubit. The control will be optimized to correctly prepare the qubit in the 0 or 1 state, overcoming a simulated systematic calibration error. The figure below shows the architecture: <img src="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/quantum/tutorials/images/nn_control1.png?raw=true"> Even without a neural network this is a straightforward problem to solve, but the theme is similar to real quantum control problems you might solve using TFQ. It demonstrates an end-to-end example of a quantum-classical computation using the tfq.layers.ControlledPQC (Parametrized Quantum Circuit) layer inside of a tf.keras.Model. For the implementation of this tutorial, the architecture is split into 3 parts: The input circuit or datapoint circuit: the first three $R$ gates. The controlled circuit: the other three $R$ gates. The controller: the classical neural network setting the parameters of the controlled circuit. 2.1 Define the controlled circuit Define a learnable single-qubit rotation, as indicated in the figure above. This corresponds to the controlled circuit. End of explanation # The classical neural network layers. controller = tf.keras.Sequential([ tf.keras.layers.Dense(10, activation='elu'), tf.keras.layers.Dense(3) ]) Explanation: 2.2 The controller Now define the controller network: End of explanation controller(tf.constant([[0.0],[1.0]])).numpy() Explanation: Given a batch of commands, the controller outputs a batch of control signals for the controlled circuit. The controller is randomly initialized, so these outputs are not useful yet. End of explanation # This input is the simulated miscalibration that the model will learn to correct. circuits_input = tf.keras.Input(shape=(), # The circuit-tensor has dtype `tf.string` dtype=tf.string, name='circuits_input') # Commands will be either `0` or `1`, specifying the state to set the qubit to. commands_input = tf.keras.Input(shape=(1,), dtype=tf.dtypes.float32, name='commands_input') Explanation: 2.3 Connect the controller to the circuit Use tfq to connect the controller to the controlled circuit as a single keras.Model. See the Keras Functional API guide for more about this style of model definition. First define the inputs to the model: End of explanation dense_2 = controller(commands_input) # TFQ layer for classically controlled circuits. 
expectation_layer = tfq.layers.ControlledPQC(model_circuit, # Observe Z operators = cirq.Z(qubit)) expectation = expectation_layer([circuits_input, dense_2]) Explanation: Next apply operations to those inputs, to define the computation. End of explanation # The full Keras model is built from our layers. model = tf.keras.Model(inputs=[circuits_input, commands_input], outputs=expectation) Explanation: Now package this computation as a tf.keras.Model. End of explanation tf.keras.utils.plot_model(model, show_shapes=True, dpi=70) Explanation: The network architecture is indicated by the plot of the model below. Compare this model plot to the architecture diagram to verify correctness. Note: this may require a system install of the graphviz package. End of explanation # The command input values to the classical NN. commands = np.array([[0], [1]], dtype=np.float32) # The desired Z expectation value at output of quantum circuit. expected_outputs = np.array([[1], [-1]], dtype=np.float32) Explanation: This model takes two inputs: the commands for the controller, and the input circuit whose output the controller is attempting to correct. 2.4 The dataset The model attempts to output the correct measurement value of $\hat{Z}$ for each command. The commands and correct values are defined below. End of explanation random_rotations = np.random.uniform(0, 2 * np.pi, 3) noisy_preparation = cirq.Circuit( cirq.rx(random_rotations[0])(qubit), cirq.ry(random_rotations[1])(qubit), cirq.rz(random_rotations[2])(qubit) ) datapoint_circuits = tfq.convert_to_tensor([ noisy_preparation ] * 2) # Make two copies of this circuit Explanation: This is not the entire training dataset for this task. Each datapoint in the dataset also needs an input circuit. 2.4 Define the input circuit The input circuit below defines the random miscalibration the model will learn to correct. End of explanation datapoint_circuits.shape Explanation: There are two copies of the circuit, one for each datapoint. End of explanation model([datapoint_circuits, commands]).numpy() Explanation: 2.5 Training Using the inputs you defined, do a test run of the tfq model. End of explanation optimizer = tf.keras.optimizers.Adam(learning_rate=0.05) loss = tf.keras.losses.MeanSquaredError() model.compile(optimizer=optimizer, loss=loss) history = model.fit(x=[datapoint_circuits, commands], y=expected_outputs, epochs=30, verbose=0) plt.plot(history.history['loss']) plt.title("Learning to Control a Qubit") 
plt.xlabel("Iterations") plt.ylabel("Error in Control") plt.show() Explanation: Now execute a standard training procedure, adjusting these values towards the expected_outputs. End of explanation def check_error(command_values, desired_values): """Based on the value in `command_value` see how well you could prepare the full circuit to have `desired_value` when taking expectation w.r.t. Z.""" params_to_prepare_output = controller(command_values).numpy() full_circuit = noisy_preparation + model_circuit # Test how well you can prepare a state to get the expectation # value in `desired_values` for index in [0, 1]: state = cirq_simulator.simulate( full_circuit, {s:v for (s,v) in zip(control_params, params_to_prepare_output[index])} ).final_state_vector expt = cirq.Z(qubit).expectation_from_state_vector(state, {qubit: 0}).real print(f'For a desired output (expectation) of {desired_values[index]} with' f' noisy preparation, the controller\nnetwork found the following ' f'values for theta: {params_to_prepare_output[index]}\nWhich gives an' f' actual expectation of: {expt}\n') check_error(commands, expected_outputs) Explanation: From this plot you can see that the neural network has learned to overcome the systematic miscalibration. 2.6 Verify outputs Now use the trained model to correct the qubit calibration errors. With Cirq: End of explanation model([datapoint_circuits, commands]) Explanation: The value of the loss function during training provides a rough idea of how well the model is learning. The lower the loss, the closer the expectation values in the above cell are to desired_values. If you aren't as concerned with the parameter values, you can always check the outputs from above using tfq: End of explanation # Define inputs. 
commands_input = tf.keras.layers.Input(shape=(1), dtype=tf.dtypes.float32, name='commands_input') circuits_input = tf.keras.Input(shape=(), # The circuit-tensor has dtype `tf.string` dtype=tf.dtypes.string, name='circuits_input') operators_input = tf.keras.Input(shape=(1,), dtype=tf.dtypes.string, name='operators_input') Explanation: 3 Learning to prepare eigenstates of different operators The choice of the $\pm \hat{Z}$ eigenstates corresponding to 1 and 0 was arbitrary. You could have just as easily wanted 1 to correspond to the $+\hat{Z}$ eigenstate and 0 to correspond to the $-\hat{X}$ eigenstate. One way to accomplish this is by specifying a different measurement operator for each command, as indicated in the figure below: <img src="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/quantum/tutorials/images/nn_control2.png?raw=true"> This requires use of <code>tfq.layers.Expectation</code>. Now your input has grown to include three objects: circuit, command, and operator. The output is still the expectation value. 3.1 New model definition Let's take a look at the model to accomplish this task: End of explanation # Define classical NN. controller = tf.keras.Sequential([ tf.keras.layers.Dense(10, activation='elu'), tf.keras.layers.Dense(3) ]) Explanation: Here is the controller network: End of explanation dense_2 = controller(commands_input) # Since you aren't using a PQC or ControlledPQC you must append # your model circuit onto the datapoint circuit tensor manually. full_circuit = tfq.layers.AddCircuit()(circuits_input, append=model_circuit) expectation_output = tfq.layers.Expectation()(full_circuit, symbol_names=control_params, symbol_values=dense_2, operators=operators_input) # Construct your Keras model. two_axis_control_model = tf.keras.Model( inputs=[circuits_input, commands_input, operators_input], outputs=[expectation_output]) Explanation: Combine the circuit and the controller into a single keras.Model using tfq: End of explanation # The operators to measure, for each command. operator_data = tfq.convert_to_tensor([[cirq.X(qubit)], [cirq.Z(qubit)]]) # The command input values to the classical NN. commands = np.array([[0], [1]], dtype=np.float32) # The desired expectation value at output of quantum circuit. 
expected_outputs = np.array([[1], [-1]], dtype=np.float32) Explanation: 3.2 The dataset Now you will also include the operators you wish to measure for each datapoint you supply to model_circuit. End of explanation optimizer = tf.keras.optimizers.Adam(learning_rate=0.05) loss = tf.keras.losses.MeanSquaredError() two_axis_control_model.compile(optimizer=optimizer, loss=loss) history = two_axis_control_model.fit( x=[datapoint_circuits, commands, operator_data], y=expected_outputs, epochs=30, verbose=1) plt.plot(history.history['loss']) plt.title("Learning to Control a Qubit") plt.xlabel("Iterations") plt.ylabel("Error in Control") plt.show() Explanation: 3.3 Training Now that you have your new inputs and outputs, you can train once again using keras. End of explanation controller.predict(np.array([0,1])) Explanation: The loss function has dropped to zero. The controller is available as a standalone model. Call the controller, and check its response to each command signal. It would take some work to correctly compare these outputs to the contents of random_rotations. End of explanation
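As a reference point for the measurements used throughout this tutorial, the Pauli-$\hat{Z}$ expectation value that expectation_from_state_vector computes can be reproduced with plain NumPy. This is a sketch for illustration only, assuming Cirq's big-endian qubit ordering, where the qubit mapped to index 0 is the most significant bit of the state-vector index:

```python
import numpy as np

def z_expectation(state, qubit_index, n_qubits):
    # <psi| Z_q |psi>: +1 for basis states where qubit q is |0>, -1 where it is |1>.
    probs = np.abs(state) ** 2
    signs = np.array([1.0 if ((i >> (n_qubits - 1 - qubit_index)) & 1) == 0 else -1.0
                      for i in range(len(state))])
    return float(np.dot(probs, signs))

# Basis state |10>: qubit 0 is |1>, qubit 1 is |0>.
state = np.array([0, 0, 1, 0], dtype=complex)
print(z_expectation(state, 0, 2), z_expectation(state, 1, 2))  # → -1.0 1.0
```

For superpositions the same probability-weighted sum applies, which is what z0.expectation_from_state_vector in section 1.1 returns as its real part.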
Given the following text description, write Python code to implement the functionality described below step by step Description: Using MySQLdb MySQLdb - the library has to be pip-installed before it can be imported Step1: Database Connection Properties and Connect to the database Step12: Let's create a sample table and insert some data into it. Step14: conn.cursor will return a cursor object, you can use this cursor to perform queries. Step15: Now you can print out the result set using pretty print Step16: Finally, we close the connection Step17: Using ext sql 'from
Python Code: import MySQLdb Explanation: Using MySQLdb MySQLdb - the library has to be pip-installed before it can be imported End of explanation # Enter the values for your database connection dsn_database = "verein" # e.g. "MySQLdbtest" dsn_hostname = "localhost" # e.g.: "mydbinstance.xyz.us-east-1.rds.amazonaws.com" dsn_port = 3306 # e.g. 3306 dsn_uid = "steinam" # e.g. "user1" dsn_pwd = "steinam" # e.g. "Password123" conn = MySQLdb.connect(host=dsn_hostname, port=dsn_port, user=dsn_uid, passwd=dsn_pwd, db=dsn_database) Explanation: Database Connection Properties and Connect to the database End of explanation conn.query("""DROP TABLE IF EXISTS Cars""") conn.query("""CREATE TABLE Cars(Id INTEGER PRIMARY KEY, Name VARCHAR(20), Price INT)""") conn.query("""INSERT INTO Cars VALUES(1,'Audi',52642)""") conn.query("""INSERT INTO Cars VALUES(2,'Mercedes',57127)""") conn.query("""INSERT INTO Cars VALUES(3,'Skoda',9000)""") conn.query("""INSERT INTO Cars VALUES(4,'Volvo',29000)""") conn.query("""INSERT INTO Cars VALUES(5,'Bentley',350000)""") conn.query("""INSERT INTO Cars VALUES(6,'Citroen',21000)""") conn.query("""INSERT INTO Cars VALUES(7,'Hummer',41400)""") conn.query("""INSERT INTO Cars VALUES(8,'Volkswagen',21600)""") Explanation: Let's create a sample table and insert some data into it. End of explanation cursor=conn.cursor() cursor.execute("""SELECT * FROM Cars""") cursor.fetchone() Explanation: conn.cursor will return a cursor object, you can use this cursor to perform queries. 
End of explanation print("\nShow me the records:\n") rows = cursor.fetchall() import pprint pprint.pprint(rows) Explanation: Now you can print out the result set using pretty print: You should see the following results: End of explanation conn.close() Explanation: Finally, we close the connection End of explanation %load_ext sql %sql mysql://steinam:steinam@localhost/verein %sql select * from spieler; result = _ print(result[3]) %sql describe strafen; result = %sql SELECT Betrag, spielernr from strafen %matplotlib inline result.bar() Explanation: Using the ext sql magic from https://github.com/catherinedevlin/ipython-sql - pip install ipython-sql has to be done first End of explanation
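MySQLdb follows the Python DB-API 2.0, so the cursor pattern above carries over to other drivers. Here is a self-contained sketch of parameterized queries, using the standard library's sqlite3 with an in-memory database so it runs without a MySQL server (note that sqlite3 uses `?` placeholders where MySQLdb uses `%s`):

```python
import sqlite3

# In-memory database: no server needed for this illustration.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Cars(Id INTEGER PRIMARY KEY, Name VARCHAR(20), Price INT)")
# executemany with placeholders lets the driver escape values for us.
cur.executemany("INSERT INTO Cars VALUES(?, ?, ?)",
                [(1, 'Audi', 52642), (2, 'Mercedes', 57127), (3, 'Skoda', 9000)])
cur.execute("SELECT Name, Price FROM Cars WHERE Price > ?", (20000,))
rows = cur.fetchall()
print(rows)  # the two cars priced above 20000
conn.close()
```

With MySQLdb the last query would read `cursor.execute("SELECT Name, Price FROM Cars WHERE Price > %s", (20000,))`.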
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Gc graph imports "article_neg/pos/neu1.gml" - saves "nodes_df_negative/positive/neutral.csv" - node labels, degrees, and centralities for entire network - saves "Gc_negative/positive/neutral.gml" imports "Gc_negative/positive/neutral.gml" - saves "Gc_df_negative/positive/neutral.csv" - node labels, degrees, and centralities for greatest component Step2: Step3: Step4: Calculate network statistics for greatest component. Step5: Step6:
Python Code: # 1_network_df import networkx as nx import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline import os from glob import glob plt.style.use('ggplot') pd.set_option('display.width', 5000) pd.set_option('display.max_columns', 60) #gml_files = glob('../output/network/article_neg1.gml') #gml_files = glob('../output/network/article_pos1.gml') gml_files = glob('../output/network/article_neu1.gml') def calculate_graph_inf(graph): graph.name = filename info = nx.info(graph) print info #plt.figure(figsize=(10,10)) #nx.draw_spring(graph, arrows=True, with_labels=True) def highest_centrality(cent_dict): """Returns a tuple (node,value) with the node with largest value from centrality dictionary.""" # create ordered tuple of centrality data cent_items = [(b,a) for (a,b) in cent_dict.iteritems()] # sort in descending order cent_items.sort() cent_items.reverse() return tuple(reversed(cent_items[0])) Explanation: Gc graph imports "article_neg/pos/neu1.gml" - saves "nodes_df_negative/positive/neutral.csv" - node labels, degrees, and centralities for entire network - saves "Gc_negative/positive/neutral.gml" imports "Gc_negative/positive/neutral.gml" - saves "Gc_df_negative/positive/neutral.csv" - node labels, degrees, and centralities for greatest component End of explanation # 2_node_df: list all nodes and centrality data_columns = ['name', 'sentiment' ] data = pd.DataFrame(columns = data_columns) combined_df = pd.DataFrame() # graph = directed, ugraph = undirected for graph_num, gml_graph in enumerate(gml_files): graph = nx.read_gml(gml_graph) ugraph = graph.to_undirected() # to undirected graph U = graph.to_undirected(reciprocal=True) e = U.edges() ugraph.add_edges_from(e) (filepath, filename) = os.path.split(gml_graph) print('-' * 10) print(gml_graph) calculate_graph_inf(graph) calculate_graph_inf(ugraph) ## calculate variables and save into list #sent = "negative" #sent = "positive" sent = "neutral" deg_cent = nx.degree_centrality(graph) 
bet_cent = nx.betweenness_centrality(graph) clo_cent = nx.closeness_centrality(graph) graph_values = {'name':filename, 'sentiment':sent } data = data.append(graph_values, ignore_index=True) degree = nx.degree(graph) deg_df = pd.DataFrame.from_dict(degree, orient = 'index') deg_df.columns = ['degree'] # degree centrality deg_cent = nx.degree_centrality(graph) dc_df = pd.DataFrame.from_dict(deg_cent, orient = 'index') dc_df.columns = ['deg cent'] # betweenness centrality bet_cent = nx.betweenness_centrality(graph) bc_df = pd.DataFrame.from_dict(bet_cent, orient = 'index') bc_df.columns = ['bet cent'] # closeness centrality clo_cent = nx.closeness_centrality(graph) cc_df = pd.DataFrame.from_dict(clo_cent, orient = 'index') cc_df.columns = ['clo cent'] # concat node frames into node_df frames = [deg_df, dc_df, bc_df, cc_df] node_df = pd.concat(frames, axis = 1) node_df.index.name = 'node' node_df = node_df.reset_index() values = pd.DataFrame(graph_values, columns = ('name', 'sentiment'), index = [0]) # df = merges graph_values with node_df for single graph and fill NaNs df = pd.concat([values, node_df], axis = 1) df = df.fillna(method='ffill') combined_df = combined_df.append(df) # what the network looks like without adding back edges e = U.edges() #for graph_num, gml_graph in enumerate(gml_files): # graph2 = nx.read_gml(gml_graph) # ugraph2 = graph.to_undirected() # to undirected graph # U2 = graph.to_undirected(reciprocal=True) # (filepath, filename) = os.path.split(gml_graph) # print('-' * 10) # print(gml_graph) # calculate_graph_inf(graph2) # calculate_graph_inf(ugraph2) # first print entire network combined_df #combined_df.to_csv('../output/df/nodes_df_negative.csv') Explanation: End of explanation # 7_graph_calculation def drawIt(graph, what = 'graph'): nsize = graph.number_of_nodes() print "Drawing %s of size %s:" % (what, nsize) if nsize > 20: plt.figure(figsize=(10, 10)) if nsize > 40: nx.draw_spring(graph, with_labels = True, node_size = 70, font_size = 12) 
else: nx.draw_spring(graph, with_labels = True) else: nx.draw_spring(graph, with_labels = True) plt.show() # only for undirected graphs def describeGraph(graph): components = sorted(nx.connected_components(graph), key = len, reverse = True) cc = [len(c) for c in components] subgraphs = list(nx.connected_component_subgraphs(graph)) params = (graph.number_of_edges(),graph.number_of_nodes(),len(cc)) print "Graph has %s nodes, %s edges, %s connected components\n" % params drawIt(graph) for sub in components: drawIt(graph.subgraph(sub), what = 'component') # for directed graphs def describeGraph_d(graph): components = sorted(nx.weakly_connected_components(graph), key = len, reverse = True) cc = [len(c) for c in components] subgraphs = list(nx.weakly_connected_component_subgraphs(graph)) params = (graph.number_of_edges(),graph.number_of_nodes(),len(cc)) print "Graph has %s nodes, %s edges, %s connected components\n" % params drawIt(graph) for sub in components: drawIt(graph.subgraph(sub), what = 'component') # undirected network graph describeGraph(ugraph) # directed network graph describeGraph_d(graph) # list of connected components by size (undirected graph) connected_components = [len(c) for c in sorted(nx.connected_components(ugraph), key=len, reverse=True)] print "connected component sizes = ", connected_components # generate connected components as subgraphs (undirected graph) subgraphs = list(nx.connected_component_subgraphs(ugraph)) # greatest component (undirected; MultiGraph) u_Gc = max(nx.connected_component_subgraphs(ugraph), key=len) print nx.info(u_Gc) # use directed graph d_Gc = max(nx.weakly_connected_component_subgraphs(graph), key=len) print nx.info(d_Gc) ## understand how direction changes degree ## print nx.info(graph) # original directed print nx.info(ugraph) # to undirected temp = ugraph.to_directed() # back to directed print nx.info(temp) print nx.info(u_Gc) print nx.info(d_Gc) # save each Gc for each sentiment #nx.write_gml(u_Gc, 
"../output/network/u_Gc_negative.gml") #nx.write_gml(d_Gc, "../output/network/d_Gc_negative.gml") #nx.write_gml(u_Gc, "../output/network/u_Gc_positive.gml") #nx.write_gml(d_Gc, "../output/network/d_Gc_positive.gml") nx.write_gml(u_Gc, "../output/network/u_Gc_neutral.gml") nx.write_gml(d_Gc, "../output/network/d_Gc_neutral.gml") Explanation: End of explanation #Gc_files = glob('../output/network/d_Gc_negative.gml') #Gc_files = glob('../output/network/d_Gc_positive.gml') #Gc_files = glob('../output/network/d_Gc_neutral.gml') network_data_columns = ['name', 'sentiment', '# nodes', '# edges', #'avg deg', 'density', 'deg assort coef', 'avg deg cent', 'avg bet cent', 'avg clo cent', 'high deg cent', 'high bet cent', 'high clo cent', 'avg node conn', '# conn comp', 'gc size' ] network_data = pd.DataFrame(columns = network_data_columns) # Gc_files for graph_num, gml_graph in enumerate(Gc_files): graph = nx.read_gml(gml_graph) (filepath, filename) = os.path.split(gml_graph) print('-' * 10) print(gml_graph) calculate_graph_inf(graph) # calculate variables sent = "negative" nodes = nx.number_of_nodes(graph) edges = nx.number_of_edges(graph) density = float("{0:.4f}".format(nx.density(graph))) avg_deg_cen = np.array(nx.degree_centrality(graph).values()).mean() avg_bet_cen = np.array(nx.betweenness_centrality(graph).values()).mean() avg_clo_cen = np.array(nx.closeness_centrality(graph).values()).mean() #avg_deg = float("{0:.4f}".format(in_deg + out_deg)) avg_node_con = float("{0:.4f}".format((nx.average_node_connectivity(graph)))) deg_assort_coeff = float("{0:.4f}".format((nx.degree_assortativity_coefficient(graph)))) conn_comp = nx.number_connected_components(graph) # ugraph deg_cen = nx.degree_centrality(graph) bet_cen = nx.betweenness_centrality(graph) clo_cen = nx.closeness_centrality(graph) highest_deg_cen = highest_centrality(deg_cen) highest_bet_cen = highest_centrality(bet_cen) highest_clo_cen = highest_centrality(clo_cen) Gc = 
len(max(nx.connected_component_subgraphs(graph), key=len)) # save variables into list graph_values = {'name':filename, 'sentiment':sent, '# nodes':nodes, '# edges':edges, #'avg deg':avg_deg, 'density':density, 'deg assort coef':deg_assort_coeff, 'avg deg cent':"%.4f" % avg_deg_cen, 'avg bet cent':"%.4f" % avg_bet_cen, 'avg clo cent':"%.4f" % avg_clo_cen, 'high deg cent':highest_deg_cen, 'high bet cent':highest_bet_cen, 'high clo cent':highest_clo_cen, 'avg node conn':avg_node_con, '# conn comp':conn_comp, 'gc size':Gc } network_data = network_data.append(graph_values, ignore_index=True) network_data #network_data.to_csv('../output/df/Gc_df_negative.csv', encoding = 'utf-8') #network_data.to_csv('../output/df/Gc_df_positive.csv', encoding = 'utf-8') #network_data.to_csv('../output/df/Gc_df_neutral.csv', encoding = 'utf-8') Explanation: Calculate network statistics for greatest component. End of explanation # returns all minimum k cutsets of an undirected graph # i.e., the set(s) of nodes of cardinality equal to the node connectivity of G # thus if removed, would break G into two or more connected components cutsets = list(nx.all_node_cuts(Gc)) print "Connected components =", connected_components print "Greatest component size =", len(Gc) print "# of cutsets =", len(cutsets) # returns a set of nodes or edges of minimum cardinality that disconnects G min_ncut = nx.minimum_node_cut(Gc) min_ecut = nx.minimum_edge_cut(Gc) print "Min node cut =", min_ncut print "Min edge cut =", min_ecut # min cuts with source and target print nx.minimum_node_cut(Gc, s='vaccines', t='autism') print nx.minimum_edge_cut(Gc, s='vaccines', t='autism') # read edge labels in min cut for Gc # change source and target a = nx.minimum_edge_cut(Gc, s='vaccines', t='autism') #a = nx.minimum_edge_cut(Gc) labels = nx.get_edge_attributes(Gc,'edge') edgelabels = {} for e in labels.keys(): e1 = e[0:2] edgelabels[e1]=labels[e] for e in a: if edgelabels.has_key(e): print e,edgelabels[e] else: rev_e = 
e[::-1] print rev_e, edgelabels[rev_e] Explanation: End of explanation # degree centrality dc = nx.degree_centrality(graph) dc_df = pd.DataFrame.from_dict(dc, orient = 'index') dc_df.columns = ['degree cent'] dc_df = dc_df.sort_values(by = ['degree cent']) #dc_df # betweenness centrality bc = nx.betweenness_centrality(graph) bc_df = pd.DataFrame.from_dict(bc, orient = 'index') bc_df.columns = ['betweenness cent'] bc_df = bc_df.sort_values(by = ['betweenness cent']) #bc_df # closeness centrality cc = nx.closeness_centrality(graph) cc_df = pd.DataFrame.from_dict(cc, orient = 'index') cc_df.columns = ['closeness cent'] cc_df = cc_df.sort_values(by = ['closeness cent']) #cc_df Explanation: End of explanation
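Note that the code above is Python 2 (print statements, dict.iteritems()). For reference, a Python 3 sketch of the highest_centrality helper, named highest_centrality_py3 here to make clear it is an illustration rather than the author's function (ties are broken by first occurrence here, not by the original's reverse sort):

```python
def highest_centrality_py3(cent_dict):
    # max over (node, value) pairs by value; dict.items() replaces iteritems().
    node, value = max(cent_dict.items(), key=lambda kv: kv[1])
    return (node, value)

print(highest_centrality_py3({'a': 0.1, 'b': 0.9, 'c': 0.4}))  # → ('b', 0.9)
```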
Given the following text description, write Python code to implement the functionality described below step by step Description: PyEarthScience Step1: Add cyclic data. Set minimum and maximum contour values as well as the interval. Step2: Open a workstation, here a PNG workstation. Step3: Set resources. Step4: Draw the plot. Step5: Show the plot in this notebook.
Python Code: import Ngl,Nio #-- define variables fname = "/Users/k204045/NCL/general/data/new_data/rectilinear_grid_2D.nc" #-- data file name #-- open file and read variables f = Nio.open_file(fname,"r") #-- open data file temp = f.variables["tsurf"][0,::-1,:] #-- first time step, reverse latitude lat = f.variables["lat"][::-1] #-- reverse latitudes lon = f.variables["lon"][:] #-- all longitudes Explanation: PyEarthScience: Python examples for Earth Scientists contour plots Using PyNGL Contour plot with - filled contour areas - without contour line labels - labelbar - title End of explanation tempac = Ngl.add_cyclic(temp[:,:]) minval = 250. #-- minimum contour level maxval = 315 #-- maximum contour level inc = 5. #-- contour level spacing ncn = (maxval-minval)/inc + 1 #-- number of contour levels. Explanation: Add cyclic data. Set minimum and maximum contour values as well as the interval. End of explanation wkres = Ngl.Resources() #-- generate a res object for workstation wkres.wkColorMap = "rainbow" #-- choose colormap wks_type = "png" #-- graphics output type wks = Ngl.open_wks(wks_type,"plot_contour_PyNGL",wkres) #-- open workstation Explanation: Open a workstation, here a PNG workstation. End of explanation res = Ngl.Resources() #-- generate a resource object for plot if hasattr(f.variables["tsurf"],"long_name"): res.tiMainString = f.variables["tsurf"].long_name #-- set main title res.vpXF = 0.1 #-- start x-position of viewport res.vpYF = 0.9 #-- start y-position of viewport res.vpWidthF = 0.7 #-- width of viewport res.vpHeightF = 0.7 #-- height of viewport res.cnFillOn = True #-- turn on contour fill. res.cnLinesOn = False #-- turn off contour lines res.cnLineLabelsOn = False #-- turn off line labels. res.cnInfoLabelOn = False #-- turn off info label. 
res.cnLevelSelectionMode = "ManualLevels" #-- select manual level selection mode res.cnMinLevelValF = minval #-- minimum contour value res.cnMaxLevelValF = maxval #-- maximum contour value res.cnLevelSpacingF = inc #-- contour increment res.mpGridSpacingF = 30. #-- map grid spacing res.sfXCStartV = float(min(lon)) #-- x-axis location of 1st element lon res.sfXCEndV = float(max(lon)) #-- x-axis location of last element lon res.sfYCStartV = float(min(lat)) #-- y-axis location of 1st element lat res.sfYCEndV = float(max(lat)) #-- y-axis location of last element lat res.pmLabelBarDisplayMode = "Always" #-- turn on the label bar. res.lbOrientation = "Horizontal" #-- labelbar orientation Explanation: Set resources. End of explanation map = Ngl.contour_map(wks,tempac,res) #-- draw contours over a map. #-- end Ngl.delete_wks(wks) #-- this needs to be done to close the graphics output file Ngl.end() Explanation: Draw the plot. End of explanation from IPython.display import Image Image(filename='plot_contour_PyNGL.png') Explanation: Show the plot in this notebook. End of explanation
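As a quick check of the contour-level arithmetic above (plain Python, no PyNGL required):

```python
minval = 250.0  # minimum contour level
maxval = 315.0  # maximum contour level
inc = 5.0       # contour level spacing

ncn = (maxval - minval) / inc + 1  # number of contour levels
print(int(ncn))  # → 14
```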
Given the following text description, write Python code to implement the functionality described below step by step Description: Training a convolutional neural network on the MNIST data. The original code in script form is here. Step1: Neural Network Settings Here are most of the settings that describe the neural network we use. Iterations Step2: Get the training data Step3: Setup the model Step4: Fit the model Step5: Visualize the inputs and predictions Step6: Save fitted model
Python Code: # Import some stuff from __future__ import print_function, absolute_import, division import numpy as np np.random.seed(1337) # for reproducibility from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense, Dropout, Activation, Flatten from keras.layers import Convolution2D, MaxPooling2D from keras.utils import np_utils Explanation: Training a convolutional neural network on the MNIST data. The original code in script form is here. End of explanation # Set some constants batch_size = 128 nb_classes = 10 nb_epoch = 12 # input image dimensions img_rows, img_cols = 28, 28 # number of convolutional filters to use nb_filters = 32 # size of pooling area for max pooling nb_pool = 2 # convolution kernel size nb_conv = 3 # Dropout parameters conv_dropout = 0.25 dens_dropout = 0.5 # Set hidden layer size nb_hidden = 128 Explanation: Neural Network Settings Here are most of the settings that describe the neural network we use. Iterations: These values set the number of iterations. batch_size: The number of samples per gradient update. Bigger values make the gradient update more accurate, but mean it takes longer to train the neural network nb_epoch: The number of times to go through all of the training data. Since batch_size is less than the full training set size, each "epoch" will be updating the gradient multiple times. So basically, the number of iterations is nb_epoch * sample_size / batch_size. nb_classes: The number of classes in the output. For this dataset, it is the numbers 0-9. Convolution Filter settings: Used in the Convolution2D layers. In this case, we have two identical convolution layers. nb_filters: The number of convolution filters. Each image is convolved with all of these filters. nb_conv: The number of rows and columns in the convolution filters. In this case, each filter is a 3x3 kernel. 
nb_pool: Factor by which to downscale the image (in this case the convolved images) before going to the "normal" part of the neural network. This speeds things up, since it reduces the number of features. Used in the MaxPooling2d layer. Dropout parameters: These are used in the Dropout layers. The layer randomly sets a fraction p of the input units to 0 at each gradient update. It helps to prevent overfitting. The parameters are: conv_dropout: Used in the convolution stage dense_dropout: Used in the "normal" stage nb_hidden: The number of hidden layers in the "normal" part of the neural network. Network Visualization: The neural network looks similar to this one, although this one has an extra pooling step and one more hidden layer in the "normal" part of the network: <img src="http://d3kbpzbmcynnmx.cloudfront.net/wp-content/uploads/2015/11/Screen-Shot-2015-11-07-at-7.26.20-AM.png"> The steps of this neural network are: The original image 1st convolution layer. The result is a bunch of images that are distorted versions of the original Second convolution layer. Convolves all of the convolved images. Pooling layer (downsamples the images). In the image above, there is a pooling layer after each convolution layer. In our network, there is only one pooling layer after both convolutions are done. Fully-connected layer. Each pixel of each image from the previous layer goes into a "normal" neural network node (I think that is how it goes from convolution stage to normal stage at least...) Fully-connected output layer. Each of the outputs from the previous layer is fed into ten nodes that produce probabilities for each class. 
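As a quick sanity check on the iteration count described above — nb_epoch * sample_size / batch_size — here is a small sketch. The 60,000-image training-set size is an assumption stated here (it is the standard MNIST training split) rather than a value taken from the code:

```python
import math

batch_size = 128
nb_epoch = 12
sample_size = 60000  # standard MNIST training set (assumed)

# One gradient update per batch; the last, smaller batch still counts as one.
updates_per_epoch = math.ceil(sample_size / batch_size)
total_updates = nb_epoch * updates_per_epoch
print(updates_per_epoch, total_updates)  # → 469 5628
```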
End of explanation # the data, shuffled and split between train and test sets (X_train, y_train), (X_test, y_test) = mnist.load_data() X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols) X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols) X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_train /= 255 X_test /= 255 print('X_train shape:', X_train.shape) print(X_train.shape[0], 'train samples') print(X_test.shape[0], 'test samples') # convert class vectors to binary class matrices Y_train = np_utils.to_categorical(y_train, nb_classes) Y_test = np_utils.to_categorical(y_test, nb_classes) Explanation: Get the training data End of explanation model = Sequential() model.add(Convolution2D(nb_filters, nb_conv, nb_conv, border_mode='valid', input_shape=(1, img_rows, img_cols))) model.add(Activation('relu')) model.add(Convolution2D(nb_filters, nb_conv, nb_conv)) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool))) model.add(Dropout(conv_dropout)) model.add(Flatten()) model.add(Dense(nb_hidden)) model.add(Activation('relu')) model.add(Dropout(dens_dropout)) model.add(Dense(nb_classes)) model.add(Activation('softmax')) model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['accuracy']) Explanation: Setup the model End of explanation import time t1 = time.time() model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1, validation_data=(X_test, Y_test)) t2 = time.time() print('Training Finished in {} minutes.'.format((t2-t1)/60)) # Evaluate score = model.evaluate(X_test, Y_test, verbose=0) print('Test score:', score[0]) print('Test accuracy:', score[1]) Explanation: Fit the model End of explanation import matplotlib.pyplot as plt import seaborn as sns sns.set_style('white') %matplotlib inline def show_predictions(i): X = X_test[i:i+1, :1] digit_probs = model.predict_proba(X)[0] idx = np.argmax(digit_probs) actual = np.argmax(Y_test[i]) # Plot fig, ax = 
plt.subplots(1, 1) ax.imshow(X[0, 0]) ax.set_title('Predicted Digit = {}\nActual digit = {}'.format(idx, actual)) # Run this lots of times to visualize the testing set show_predictions(np.random.randint(X_test.shape[0])) Explanation: Visualize the inputs and predictions End of explanation import output_model output_model.save_model(model, 'models/MNIST_cnn_model') Explanation: Save fitted model End of explanation
Given the following text description, write Python code to implement the functionality described below step by step Description: LAB04 Step1: Load taxifare dataset The Taxi Fare dataset for this lab is 106,545 rows and has been pre-processed and split for use in this lab. Note that the dataset is the same as used in the Big Query feature engineering labs. The fare_amount is the target, the continuous value we’ll train a model to predict. First, let's download the .csv data by copying the data from a cloud storage bucket. Step2: Let's check that the files were copied correctly and look like we expect them to. Step3: Create an input pipeline Typically, you will use a two step proces to build the pipeline. Step one is to define the columns of data; i.e., which column we're predicting for, and the default values. Step 2 is to define two functions - a function to define the features and label you want to use and a function to load the training data. Also, note that pickup_datetime is a string and we will need to handle this in our feature engineered model. Step4: Create a Baseline DNN Model in Keras Now let's build the Deep Neural Network (DNN) model in Keras using the functional API. Unlike the sequential API, we will need to specify the input and hidden layers. Note that we are creating a linear regression baseline model with no feature engineering. Recall that a baseline model is a solution to a problem without applying any machine learning techniques. Step5: We'll build our DNN model and inspect the model architecture. Step6: Train the model To train the model, simply call model.fit(). Note that we should really use many more NUM_TRAIN_EXAMPLES (i.e. a larger dataset). We shouldn't make assumptions about the quality of the model based on training/evaluating it on a small sample of the full data. We start by setting up the environment variables for training, creating the input pipeline datasets, and then train our baseline DNN model. 
Step7: Visualize the model loss curve Next, we will use matplotlib to draw the model's loss curves for training and validation. A line plot is also created showing the mean squared error loss over the training epochs for both the train (blue) and test (orange) sets.
Step8: Predict with the model locally To predict with Keras, you simply call model.predict() and pass in the cab ride you want to predict the fare amount for. Next we note the fare price at this geolocation and pickup_datetime.
Step9: Improve Model Performance Using Feature Engineering We now improve our model's performance by creating the following feature engineering types: Temporal, Categorical, and Geolocation.
Step10: Geolocation/Coordinate Feature Columns The pick-up/drop-off longitude and latitude data are crucial to predicting the fare amount as fare amounts in NYC taxis are largely determined by the distance traveled. As such, we need to teach the model the Euclidean distance between the pick-up and drop-off points. Recall that latitude and longitude allow us to specify any location on Earth using a set of coordinates. In our training data set, we restricted our data points to only pickups and drop offs within NYC. New York City has an approximate longitude range of -74.05 to -73.75 and a latitude range of 40.63 to 40.85. Computing Euclidean distance The dataset contains information regarding the pickup and drop off coordinates. However, there is no information regarding the distance between the pickup and drop off points. Therefore, we create a new feature that calculates the distance between each pair of pickup and drop off points. We can do this using the Euclidean Distance, which is the straight-line distance between any two coordinate points.
Step11: Scaling latitude and longitude It is very important for numerical variables to get scaled before they are "fed" into the neural network. Here we use min-max scaling (also called normalization) on the geolocation features.
Later in our model, you will see that these values are shifted and rescaled so that they end up ranging from 0 to 1. First, we create a function named 'scale_longitude', where we pass in all the longitudinal values and add 78 to each value. Note that our scaling longitude ranges from -78 to -70, so -78 is the minimum longitudinal value and the span between -78 and -70 is 8. We add 78 to each longitudinal value and then divide by 8 to return a scaled value.
Step12: Next, we create a function named 'scale_latitude', where we pass in all the latitudinal values and subtract 37 from each value. Note that our scaling latitude ranges from 37 to 45, so 37 is the minimal latitudinal value and the span between 37 and 45 is 8. We subtract 37 from each latitudinal value and then divide by 8 to return a scaled value.
Step13: Putting it all together We now create two new "geo" functions for our model. We create a function called "euclidean" to initialize our geolocation parameters. We then create a function called transform. The transform function passes our numerical and string column features as inputs to the model, scales geolocation features, then creates the Euclidean distance as a transformed variable with the geolocation features. Lastly, we bucketize the latitude and longitude features.
Step14: Next, we'll create our DNN model now with the engineered features. We'll set NBUCKETS = 10 to specify 10 buckets when bucketizing the latitude and longitude.
Step15: Let's see how our model architecture has changed now.
Step16: As before, let's visualize the DNN model layers.
Step17: Let's make a prediction with this new model with engineered features on the example we had above.
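The Euclidean-distance feature described in Step10 is left as a TODO in the solution code that follows, so here is a hedged sketch of just the distance math. The function and argument names (`euclidean`, `params`) are assumptions, not the lab's official solution, and plain `math.sqrt` stands in for the TensorFlow op; inside the model this body would be wrapped in a Keras `Lambda` layer over the four scaled coordinate tensors.

```python
from math import sqrt

def euclidean(params):
    # params: (lon1, lat1, lon2, lat2), already min-max scaled to [0, 1].
    # In the model this would run on tensors via layers.Lambda(euclidean),
    # with tf.sqrt in place of math.sqrt.
    lon1, lat1, lon2, lat2 = params
    londiff = lon2 - lon1
    latdiff = lat2 - lat1
    return sqrt(londiff * londiff + latdiff * latdiff)

print(euclidean((0.2, 0.3, 0.5, 0.7)))  # ~0.5 (a scaled 3-4-5 triangle)
```

Because both coordinates are scaled to the same [0, 1] range first, the resulting distance is itself a well-behaved bounded feature.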
Python Code: import datetime import logging import os import matplotlib.pyplot as plt import numpy as np import tensorflow as tf from tensorflow import feature_column as fc from tensorflow.keras import layers from tensorflow.keras import models # set TF error log verbosity logging.getLogger("tensorflow").setLevel(logging.ERROR) print(tf.version.VERSION) Explanation: LAB04: Advanced Feature Engineering in Keras Learning Objectives Process temporal feature columns in Keras Use Lambda layers to perform feature engineering on geolocation features Create bucketized and crossed feature columns Introduction In this notebook, we use Keras to build a taxifare price prediction model and utilize feature engineering to improve the fare amount prediction for NYC taxi cab rides. Each learning objective will correspond to a #TODO in the notebook where you will complete the notebook cell's code before running. Refer to the solution for reference. Set up environment variables and load necessary libraries We will start by importing the necessary libraries for this lab. End of explanation if not os.path.isdir("../data"): os.makedirs("../data") !gsutil cp gs://cloud-training-demos/feat_eng/data/*.csv ../data Explanation: Load taxifare dataset The Taxi Fare dataset for this lab is 106,545 rows and has been pre-processed and split for use in this lab. Note that the dataset is the same as used in the Big Query feature engineering labs. The fare_amount is the target, the continuous value we’ll train a model to predict. First, let's download the .csv data by copying the data from a cloud storage bucket. End of explanation !ls -l ../data/*.csv !head ../data/*.csv Explanation: Let's check that the files were copied correctly and look like we expect them to. 
End of explanation CSV_COLUMNS = [ 'fare_amount', 'pickup_datetime', 'pickup_longitude', 'pickup_latitude', 'dropoff_longitude', 'dropoff_latitude', 'passenger_count', 'key', ] LABEL_COLUMN = 'fare_amount' STRING_COLS = ['pickup_datetime'] NUMERIC_COLS = ['pickup_longitude', 'pickup_latitude', 'dropoff_longitude', 'dropoff_latitude', 'passenger_count'] DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']] DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat'] # A function to define features and labesl def features_and_labels(row_data): for unwanted_col in ['key']: row_data.pop(unwanted_col) label = row_data.pop(LABEL_COLUMN) return row_data, label # A utility method to create a tf.data dataset from a Pandas Dataframe def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL): dataset = tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS) dataset = dataset.map(features_and_labels) # features, label if mode == tf.estimator.ModeKeys.TRAIN: dataset = dataset.shuffle(1000).repeat() # take advantage of multi-threading; 1=AUTOTUNE dataset = dataset.prefetch(1) return dataset Explanation: Create an input pipeline Typically, you will use a two step proces to build the pipeline. Step one is to define the columns of data; i.e., which column we're predicting for, and the default values. Step 2 is to define two functions - a function to define the features and label you want to use and a function to load the training data. Also, note that pickup_datetime is a string and we will need to handle this in our feature engineered model. 
End of explanation # Build a simple Keras DNN using its Functional API def rmse(y_true, y_pred): # Root mean square error return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true))) def build_dnn_model(): # input layer inputs = { colname: layers.Input(name=colname, shape=(), dtype='float32') for colname in NUMERIC_COLS } # feature_columns feature_columns = { colname: fc.numeric_column(colname) for colname in NUMERIC_COLS } # Constructor for DenseFeatures takes a list of numeric columns dnn_inputs = layers.DenseFeatures(feature_columns.values())(inputs) # two hidden layers of [32, 8] just in like the BQML DNN h1 = layers.Dense(32, activation='relu', name='h1')(dnn_inputs) h2 = layers.Dense(8, activation='relu', name='h2')(h1) # final output is a linear activation because this is regression output = layers.Dense(1, activation='linear', name='fare')(h2) model = models.Model(inputs, output) # compile model model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse']) return model Explanation: Create a Baseline DNN Model in Keras Now let's build the Deep Neural Network (DNN) model in Keras using the functional API. Unlike the sequential API, we will need to specify the input and hidden layers. Note that we are creating a linear regression baseline model with no feature engineering. Recall that a baseline model is a solution to a problem without applying any machine learning techniques. End of explanation model = build_dnn_model() tf.keras.utils.plot_model(model, 'dnn_model.png', show_shapes=False, rankdir='LR') Explanation: We'll build our DNN model and inspect the model architecture. 
End of explanation TRAIN_BATCH_SIZE = 32 NUM_TRAIN_EXAMPLES = 59621 * 5 NUM_EVALS = 5 NUM_EVAL_EXAMPLES = 14906 trainds = load_dataset('../data/taxi-train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN) evalds = load_dataset('../data/taxi-valid*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000) steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS) history = model.fit(trainds, validation_data=evalds, epochs=NUM_EVALS, steps_per_epoch=steps_per_epoch) Explanation: Train the model To train the model, simply call model.fit(). Note that we should really use many more NUM_TRAIN_EXAMPLES (i.e. a larger dataset). We shouldn't make assumptions about the quality of the model based on training/evaluating it on a small sample of the full data. We start by setting up the environment variables for training, creating the input pipeline datasets, and then train our baseline DNN model. End of explanation def plot_curves(history, metrics): nrows = 1 ncols = 2 fig = plt.figure(figsize=(10, 5)) for idx, key in enumerate(metrics): ax = fig.add_subplot(nrows, ncols, idx+1) plt.plot(history.history[key]) plt.plot(history.history['val_{}'.format(key)]) plt.title('model {}'.format(key)) plt.ylabel(key) plt.xlabel('epoch') plt.legend(['train', 'validation'], loc='upper left'); plot_curves(history, ['loss', 'mse']) Explanation: Visualize the model loss curve Next, we will use matplotlib to draw the model's loss curves for training and validation. A line plot is also created showing the mean squared error loss over the training epochs for both the train (blue) and test (orange) sets. 
End of explanation model.predict({ 'pickup_longitude': tf.convert_to_tensor([-73.982683]), 'pickup_latitude': tf.convert_to_tensor([40.742104]), 'dropoff_longitude': tf.convert_to_tensor([-73.983766]), 'dropoff_latitude': tf.convert_to_tensor([40.755174]), 'passenger_count': tf.convert_to_tensor([3.0]), 'pickup_datetime': tf.convert_to_tensor(['2010-02-08 09:17:00 UTC'], dtype=tf.string), }, steps=1) Explanation: Predict with the model locally To predict with Keras, you simply call model.predict() and pass in the cab ride you want to predict the fare amount for. Next we note the fare price at this geolocation and pickup_datetime. End of explanation # TODO 1a # TODO 1b # TODO 1c Explanation: Improve Model Performance Using Feature Engineering We now improve our model's performance by creating the following feature engineering types: Temporal, Categorical, and Geolocation. Temporal Feature Columns We incorporate the temporal feature pickup_datetime. As noted earlier, pickup_datetime is a string and we will need to handle this within the model. First, you will include the pickup_datetime as a feature and then you will need to modify the model to handle our string feature. End of explanation # TODO 2 Explanation: Geolocation/Coordinate Feature Columns The pick-up/drop-off longitude and latitude data are crucial to predicting the fare amount as fare amounts in NYC taxis are largely determined by the distance traveled. As such, we need to teach the model the Euclidean distance between the pick-up and drop-off points. Recall that latitude and longitude allows us to specify any location on Earth using a set of coordinates. In our training data set, we restricted our data points to only pickups and drop offs within NYC. New York city has an approximate longitude range of -74.05 to -73.75 and a latitude range of 40.63 to 40.85. Computing Euclidean distance The dataset contains information regarding the pickup and drop off coordinates. 
However, there is no information regarding the distance between the pickup and drop off points. Therefore, we create a new feature that calculates the distance between each pair of pickup and drop off points. We can do this using the Euclidean Distance, which is the straight-line distance between any two coordinate points.
End of explanation
def scale_longitude(lon_column):
    return (lon_column + 78)/8.
Explanation: Scaling latitude and longitude It is very important for numerical variables to get scaled before they are "fed" into the neural network. Here we use min-max scaling (also called normalization) on the geolocation features.
End of explanation def transform(inputs, numeric_cols, string_cols, nbuckets): print("Inputs before features transformation: {}".format(inputs.keys())) # Pass-through columns transformed = inputs.copy() del transformed['pickup_datetime'] feature_columns = { colname: tf.feature_column.numeric_column(colname) for colname in numeric_cols } # Scaling longitude from range [-70, -78] to [0, 1] for lon_col in ['pickup_longitude', 'dropoff_longitude']: transformed[lon_col] = layers.Lambda( scale_longitude, name="scale_{}".format(lon_col))(inputs[lon_col]) # Scaling latitude from range [37, 45] to [0, 1] for lat_col in ['pickup_latitude', 'dropoff_latitude']: transformed[lat_col] = layers.Lambda( scale_latitude, name='scale_{}'.format(lat_col))(inputs[lat_col]) # TODO 2 - Your code here # add Euclidean distance # TODO 3 - Your code here # create bucketized features # TODO 3 # create crossed columns # create embedding columns feature_columns['pickup_and_dropoff'] = fc.embedding_column(pd_pair, 100) print("Transformed features: {}".format(transformed.keys())) print("Feature columns: {}".format(feature_columns.keys())) return transformed, feature_columns Explanation: Putting it all together We now create two new "geo" functions for our model. We create a function called "euclidean" to initialize our geolocation parameters. We then create a function called transform. The transform function passes our numerical and string column features as inputs to the model, scales geolocation features, then creates the Euclian distance as a transformed variable with the geolocation features. Lastly, we bucketize the latitude and longitude features. 
End of explanation NBUCKETS = 10 # DNN MODEL def rmse(y_true, y_pred): return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true))) def build_dnn_model(): # input layer is all float except for pickup_datetime which is a string inputs = { colname: layers.Input(name=colname, shape=(), dtype='float32') for colname in NUMERIC_COLS } inputs.update({ colname: tf.keras.layers.Input(name=colname, shape=(), dtype='string') for colname in STRING_COLS }) # transforms transformed, feature_columns = transform(inputs, numeric_cols=NUMERIC_COLS, string_cols=STRING_COLS, nbuckets=NBUCKETS) dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed) # two hidden layers of [32, 8] just in like the BQML DNN h1 = layers.Dense(32, activation='relu', name='h1')(dnn_inputs) h2 = layers.Dense(8, activation='relu', name='h2')(h1) # final output is a linear activation because this is regression output = layers.Dense(1, activation='linear', name='fare')(h2) model = models.Model(inputs, output) # Compile model model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse']) return model model = build_dnn_model() Explanation: Next, we'll create our DNN model now with the engineered features. We'll set NBUCKETS = 10 to specify 10 buckets when bucketizing the latitude and longitude. End of explanation tf.keras.utils.plot_model(model, 'dnn_model_engineered.png', show_shapes=False, rankdir='LR') trainds = load_dataset('../data/taxi-train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN) evalds = load_dataset('../data/taxi-valid*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000) steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS) history = model.fit(trainds, validation_data=evalds, epochs=NUM_EVALS+3, steps_per_epoch=steps_per_epoch) Explanation: Let's see how our model architecture has changed now. End of explanation plot_curves(history, ['loss', 'mse']) Explanation: As before, let's visualize the DNN model layers. 
End of explanation model.predict({ 'pickup_longitude': tf.convert_to_tensor([-73.982683]), 'pickup_latitude': tf.convert_to_tensor([40.742104]), 'dropoff_longitude': tf.convert_to_tensor([-73.983766]), 'dropoff_latitude': tf.convert_to_tensor([40.755174]), 'passenger_count': tf.convert_to_tensor([3.0]), 'pickup_datetime': tf.convert_to_tensor(['2010-02-08 09:17:00 UTC'], dtype=tf.string), }, steps=1) Explanation: Let's a prediction with this new model with engineered features on the example we had above. End of explanation
Given the following text description, write Python code to implement the functionality described below step by step Description: BigQuery Query To View Create a BigQuery view. License Copyright 2020 Google LLC, Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0
Step1: 2. Set Configuration This code is required to initialize the project. Fill in required fields and press play. If the recipe uses a Google Cloud Project: Set the configuration project value to the project identifier.
Step2: 3. Enter BigQuery Query To View Recipe Parameters Specify a single query and choose legacy or standard mode. For PLX use: SELECT * FROM [plx.google:FULL_TABLE_NAME.all] WHERE...
Step3: 4. Execute BigQuery Query To View This does NOT need to be modified unless you are changing the recipe, click play.
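As a rough illustration of what this recipe ultimately produces (this is not part of the StarThinker recipe itself; the helper name and the exact DDL wording are assumptions), the view creation can be pictured as a single standard-SQL DDL statement. `IF NOT EXISTS` mirrors the recipe's "if the view exists, it is unchanged" behavior, and the `legacy` flag is surfaced only as a reminder that BigQuery's DDL path does not accept legacy SQL.

```python
def make_view_ddl(dataset, view, query, legacy=False):
    # Sketch of the statement behind the recipe's dataset/view/query fields.
    # Legacy-SQL views cannot be created through DDL, so that mode would have
    # to go through the BigQuery tables API instead.
    if legacy:
        raise ValueError("legacy SQL views cannot be created via DDL")
    return "CREATE VIEW IF NOT EXISTS `{}.{}` AS {}".format(dataset, view, query)

print(make_view_ddl("my_dataset", "my_view", "SELECT 1 AS x"))
```

The recipe's task plumbing then amounts to filling those three fields from `FIELDS` and submitting the statement under the chosen credentials.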
Python Code: !pip install git+https://github.com/google/starthinker Explanation: BigQuery Query To View Create a BigQuery view. License Copyright 2020 Google LLC, Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Disclaimer This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team. This code generated (see starthinker/scripts for possible source): - Command: "python starthinker_ui/manage.py colab" - Command: "python starthinker/tools/colab.py [JSON RECIPE]" 1. Install Dependencies First install the libraries needed to execute recipes, this only needs to be done once, then click play. End of explanation from starthinker.util.configuration import Configuration CONFIG = Configuration( project="", client={}, service={}, user="/content/user.json", verbose=True ) Explanation: 2. Set Configuration This code is required to initialize the project. Fill in required fields and press play. If the recipe uses a Google Cloud Project: Set the configuration project value to the project identifier from these instructions. If the recipe has auth set to user: If you have user credentials: Set the configuration user value to your user credentials JSON. If you DO NOT have user credentials: Set the configuration client value to downloaded client credentials. 
If the recipe has auth set to service: Set the configuration service value to downloaded service credentials. End of explanation FIELDS = { 'auth_read':'user', # Credentials used for reading data. 'query':'', # SQL with newlines and all. 'dataset':'', # Existing BigQuery dataset. 'view':'', # View to create from this query. 'legacy':True, # Query type must match source tables. } print("Parameters Set To: %s" % FIELDS) Explanation: 3. Enter BigQuery Query To View Recipe Parameters Specify a single query and choose legacy or standard mode. For PLX use: SELECT * FROM [plx.google:FULL_TABLE_NAME.all] WHERE... If the view exists, it is unchanged, delete it manually to re-create. Modify the values below for your use case, can be done multiple times, then click play. End of explanation from starthinker.util.configuration import execute from starthinker.util.recipe import json_set_fields TASKS = [ { 'bigquery':{ 'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}}, 'from':{ 'query':{'field':{'name':'query','kind':'text','order':1,'default':'','description':'SQL with newlines and all.'}}, 'legacy':{'field':{'name':'legacy','kind':'boolean','order':4,'default':True,'description':'Query type must match source tables.'}} }, 'to':{ 'dataset':{'field':{'name':'dataset','kind':'string','order':2,'default':'','description':'Existing BigQuery dataset.'}}, 'view':{'field':{'name':'view','kind':'string','order':3,'default':'','description':'View to create from this query.'}} } } } ] json_set_fields(TASKS, FIELDS) execute(CONFIG, TASKS, force=True) Explanation: 4. Execute BigQuery Query To View This does NOT need to be modified unless you are changing the recipe, click play. End of explanation
7,565
Given the following text description, write Python code to implement the functionality described below step by step Description: Exploring the Twitter trends API In this notebook we'll take a quick look at the section of the Twitter REST API that deals with trending terms: GET trends/available, GET trends/place, and GET trends/closest.
Step1: Basic calls The first call trends/available returns a set of locations, each with its own set of parameters.
Step2: We see from this (and from their docs) that Twitter uses Yahoo WOEIDs to define place names. How they arrived at this list of 467 places is not clear. It's a mix of countries and cities, at least
Step3: Hmm, let's see exactly what classification levels they have in this set
Step4: 'Unknown'??
Step5: Ah, Soweto is a township, not a city, and Al Ahsa is a region. Moving along, we can dig into specific places one at a time
Step6: There's also a way to fetch the list without hashtags
Step7: Simplifying a bit Let's write a few functions so we don't have to write all that out each time.
Step8: Weird, there's an L-to-R issue rendering the output here
Step9: Just for fun let's compare Algiers with Algeria.
Step10: Very similar. How about Chicago vs Miami?
Step11: Looking nearby There are specific javascript calls for fetching a user's geolocation through a browser (see Using Geolocation via MDN). With a tool like that in place, you could fetch the user's lat and long and send it to the trends/closest Twitter API call
Step12: Hmm, that parentid attribute might be useful.
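The geolocation flow described in Step11 — coordinates in, nearby trends out — can be sketched as one helper built on the same tweepy calls the solution code uses (`trends_closest`, `trends_place`). The helper name `trends_near` and the ranking choice are assumptions for illustration.

```python
def trends_near(api, lat, lon, count=10):
    # api: an authenticated tweepy.API instance, as created in the setup cell.
    # Resolve the nearest trend location, then fetch and rank its trends by
    # tweet volume (None volumes sort last).
    closest = api.trends_closest(lat, lon)
    woeid = closest[0]['woeid']
    trends = api.trends_place(woeid)[0]['trends']
    return sorted(trends, key=lambda t: t['tweet_volume'] or 0, reverse=True)[:count]
```

Paired with a browser-side geolocation call, this is all the server needs to answer "what's trending around me?".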
Python Code: from collections import Counter import os from pprint import pprint import tweepy c_key = os.environ['CONSUMER_KEY'] c_secret = os.environ['CONSUMER_SECRET'] a_token = os.environ['ACCESS_TOKEN'] a_token_secret = os.environ['ACCESS_TOKEN_SECRET'] auth = tweepy.OAuthHandler(c_key, c_secret) auth.set_access_token(a_token, a_token_secret) api = tweepy.API(auth) Explanation: Exploring the Twitter trends API In this notebook we'll take a quick look at the section of the Twitter Rest API that deals with trending terms: GET trends/available GET trends/place GET trends/closest We'll work with the Tweepy library's support for these, explore the calls, and sketch out some possibilities. Setup First some library imports and user key setup. This assumes you've arranged for the necessary app keys at apps.twitter.com. End of explanation trends_available = api.trends_available() len(trends_available) trends_available[0:3] Explanation: Basic calls The first call trends/available returns a set of locations, each with its own set of parameters. End of explanation sorted([t['name'] for t in trends_available])[:25] Explanation: We see from this (and from their docs) that Twitter uses Yahoo WOEIDs to define place names. How they arrived at this list of 467 places is not clear. It's a mix of countries and cities, at least: End of explanation c = Counter([t['placeType']['name'] for t in trends_available]) c Explanation: Hmm, let's see exactly what classification levels they have in this set: End of explanation unk = [t for t in trends_available if t['placeType']['name'] == 'Unknown'] unk Explanation: 'Unknown'?? End of explanation ams = [t for t in trends_available if t['name'] == 'Amsterdam'][0] ams trends_ams = api.trends_place(ams['woeid']) trends_ams trends_ams[0]['trends'] [(t['name'], t['tweet_volume']) for t in trends_ams[0]['trends']] Explanation: Ah, Soweto is a township, not a city, and Al Ahsa is a region. 
Moving along, we can dig into specifc places one at a time: End of explanation trends_ams = api.trends_place(ams['woeid'], exclude='hashtags') [(t['name'], t['tweet_volume']) for t in trends_ams[0]['trends']] Explanation: There's also a way to fetch the list without hashtags: End of explanation def getloc(name): try: return [t for t in trends_available if t['name'].lower() == name.lower()][0] except: return None def trends(loc, exclude=False): return api.trends_place(loc['woeid'], 'hashtags' if exclude else None) def top(trends): return sorted([(t['name'], t['tweet_volume']) for t in trends[0]['trends']], key=lambda a: a[1] or 0, reverse=True) tok = getloc('tokyo') trends_tok = trends(tok, True) top(trends_tok) alg = getloc('Algeria') trends_alg = trends(alg) top(trends_alg) Explanation: Simplifying a bit Let's write a few functions so we don't have to write all that out each time. End of explanation top(trends_alg)[1][0] Explanation: Weird, there's an L-to-R issue rendering the output here: End of explanation algs = getloc('Algiers') trends_algs = trends(algs) top(trends_algs) Explanation: Just for fun let's compare Algiers with Algeria. End of explanation chi = getloc('Chicago') trends_chi = trends(chi) chi top(trends_chi)[:10] mia = getloc('Miami') mia trends_mia = trends(mia) top(trends_mia)[:10] world = getloc('Worldwide') trends_world = trends(world) top(trends_world)[:25] Explanation: Very similar. How about Chicago vs Miami? End of explanation closeby = api.trends_closest(38.8860307, -76.9931073) closeby[0] top(trends(closeby[0])) Explanation: Looking nearby There are specific javascript calls for fetching a user's geolocation through a browser (see Using Geolocation via MDN). 
With a tool like that in place, you could fetch the user's lat and long and send it to the trends/closest Twitter API call: End of explanation [t['name'] for t in trends_available if t['parentid'] == 23424977] [t['name'] for t in trends_available if t['parentid'] == world['woeid']] Explanation: Hmm, that parentid attribute might be useful. End of explanation
Given the following text description, write Python code to implement the functionality described below step by step Description: NEST implementation of the aeif models Hans Ekkehard Plesser and Tanguy Fardet, 2016-09-09 This notebook provides a reference solution for the Adaptive Exponential Integrate and Fire (AEIF) neuronal model and compares it with several numerical implementations using simpler solvers. In particular this justifies the change of implementation in September 2016 to make the simulation closer to the reference solution. Position of the problem Basics The equations governing the evolution of the AEIF model are $$\left\lbrace\begin{array}{rcl} C_m\dot{V} &=& -g_L(V-E_L) + g_L \Delta_T e^{\frac{V-V_T}{\Delta_T}} + I_e + I_s(t) - w \\ \tau_w\dot{w} &=& a(V-E_L) - w \end{array}\right.$$ when $V < V_{peak}$ (threshold/spike detection). Once a spike occurs, we apply the reset conditions: $$V = V_r \quad \text{and} \quad w = w + b$$
Step1: Scipy functions mimicking the NEST code Right hand side functions
Step2: Complete model
Step6: LSODAR reference solution Setting assimulo class
Step7: LSODAR reference model
Step8: Set the parameters and simulate the models Params (choose a dictionary)
Step9: Simulate the 3 implementations
Step10: Plot the results Zoom out
Step11: Zoom in
Step12: Compare properties at spike times
Step13: Size of minimal integration timestep
Step14: Convergence towards LSODAR reference with step size Zoom out
Step15: Zoom in
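The solver code below reads every neuronal constant from a parameter object `p` whose construction (Step8) falls outside this excerpt. A minimal stand-in could look like the sketch here — the attribute names match those used by the solvers, but the numeric values (roughly NEST-style aeif defaults, with an arbitrary drive current) are purely illustrative assumptions, not the notebook's actual parameter set.

```python
class Params:
    # Illustrative values only; the notebook's own Step8 builds the real set.
    Cm = 281.0       # membrane capacitance (pF)
    gL = 30.0        # leak conductance (nS)
    EL = -70.6       # leak reversal potential (mV)
    vT = -50.4       # effective threshold V_T (mV)
    DeltaT = 2.0     # slope factor (mV)
    tau_w = 144.0    # adaptation time constant (ms)
    a = 4.0          # subthreshold adaptation (nS)
    b = 80.5         # spike-triggered adaptation increment (pA)
    Ie = 700.0       # constant input current (pA, chosen to elicit spikes)
    Vpeak = 0.0      # spike detection threshold (mV)
    Vreset = -60.0   # post-spike reset potential (mV)

p = Params()
```

Any object exposing these attributes (a simple namespace or dataclass would do equally well) can be passed as `p` to the right-hand-side functions and solvers that follow.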
Python Code: # Install assimulo package in the current Jupyter kernel
import sys
!{sys.executable} -m pip install assimulo
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (15, 6)
Explanation: NEST implementation of the aeif models Hans Ekkehard Plesser and Tanguy Fardet, 2016-09-09 This notebook provides a reference solution for the Adaptive Exponential Integrate and Fire (AEIF) neuronal model and compares it with several numerical implementations using simpler solvers. In particular this justifies the change of implementation in September 2016 to make the simulation closer to the reference solution. Position of the problem Basics The equations governing the evolution of the AEIF model are $$\left\lbrace\begin{array}{rcl} C_m\dot{V} &=& -g_L(V-E_L) + g_L \Delta_T e^{\frac{V-V_T}{\Delta_T}} + I_e + I_s(t) - w \\ \tau_w\dot{w} &=& a(V-E_L) - w \end{array}\right.$$ when $V < V_{peak}$ (threshold/spike detection). Once a spike occurs, we apply the reset conditions: $$V = V_r \quad \text{and} \quad w = w + b$$ Divergence In the AEIF model, the spike is generated by the exponential divergence. In practice, this means that just before threshold crossing (threshpassing), the argument of the exponential can become very large. This can lead to numerical overflow or numerical instabilities in the solver, all the more if $V_{peak}$ is large, or if $\Delta_T$ is small. Tested solutions Old implementation (before September 2016) The original solution was to bind the exponential argument to be smaller than 10 (ad hoc value to be close to the original implementation in BRIAN). As will be shown in the notebook, this solution does not converge to the reference LSODAR solution. New implementation The new implementation does not bind the argument of the exponential, but the potential itself, since according to the theoretical model, $V$ should never get larger than $V_{peak}$.
We will show that this solution is not only closer to the reference solution in general, but also converges towards it as the timestep gets smaller. Reference solution The reference solution is implemented using the LSODAR solver which is described and compared in the following references: http://www.radford.edu/~thompson/RP/eventlocation.pdf (papers citing this one) http://www.sciencedirect.com/science/article/pii/S0377042712000684 http://www.radford.edu/~thompson/RP/rootfinding.pdf https://computation.llnl.gov/casc/nsde/pubs/u88007.pdf http://www.cs.ucsb.edu/~cse/Files/SCE000136.pdf http://www.sciencedirect.com/science/article/pii/0377042789903348 http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.455.2976&rep=rep1&type=pdf https://theses.lib.vt.edu/theses/available/etd-12092002-105032/unrestricted/etd.pdf Technical details and requirements Implementation of the functions The old and new implementations are reproduced using Scipy and are called by the scipy_aeif function The NEST implementations are not shown here, but keep in mind that for a given time resolution, they are closer to the reference result than the scipy implementation since the GSL implementation uses a RK45 adaptive solver. The reference solution using LSODAR, called reference_aeif, is implemented through the assimulo package. Requirements To run this notebook, you need: numpy and scipy assimulo matplotlib End of explanation def rhs_aeif_new(y, _, p): ''' New implementation bounding V < V_peak Parameters ---------- y : list Vector containing the state variables [V, w] _ : unused var p : Params instance Object containing the neuronal parameters. Returns ------- dv : double Derivative of V dw : double Derivative of w ''' v = min(y[0], p.Vpeak) w = y[1] Ispike = 0. 
if p.DeltaT != 0.: Ispike = p.gL * p.DeltaT * np.exp((v-p.vT)/p.DeltaT) dv = (-p.gL*(v-p.EL) + Ispike - w + p.Ie)/p.Cm dw = (p.a * (v-p.EL) - w) / p.tau_w return dv, dw def rhs_aeif_old(y, _, p): ''' Old implementation bounding the argument of the exponential function (e_arg < 10.). Parameters ---------- y : list Vector containing the state variables [V, w] _ : unused var p : Params instance Object containing the neuronal parameters. Returns ------- dv : double Derivative of V dw : double Derivative of w ''' v = y[0] w = y[1] Ispike = 0. if p.DeltaT != 0.: e_arg = min((v-p.vT)/p.DeltaT, 10.) Ispike = p.gL * p.DeltaT * np.exp(e_arg) dv = (-p.gL*(v-p.EL) + Ispike - w + p.Ie)/p.Cm dw = (p.a * (v-p.EL) - w) / p.tau_w return dv, dw Explanation: Scipy functions mimicking the NEST code Right hand side functions End of explanation def scipy_aeif(p, f, simtime, dt): ''' Complete aeif model using scipy `odeint` solver. Parameters ---------- p : Params instance Object containing the neuronal parameters. f : function Right-hand side function (either `rhs_aeif_old` or `rhs_aeif_new`) simtime : double Duration of the simulation (will run between 0 and tmax) dt : double Time increment. Returns ------- t : list Times at which the neuronal state was evaluated. y : list State values associated to the times in `t` s : list Spike times. vs : list Values of `V` just before the spike. ws : list Values of `w` just before the spike fos : list List of dictionaries containing additional output information from `odeint` ''' t = np.arange(0, simtime, dt) # time axis n = len(t) y = np.zeros((n, 2)) # V, w y[0, 0] = p.EL # Initial: (V_0, w_0) = (E_L, 5.) y[0, 1] = 5. # Initial: (V_0, w_0) = (E_L, 5.) 
s = [] # spike times vs = [] # membrane potential at spike before reset ws = [] # w at spike before step fos = [] # full output dict from odeint() # imitate NEST: update time-step by time-step for k in range(1, n): # solve ODE from t_k-1 to t_k d, fo = odeint(f, y[k-1, :], t[k-1:k+1], (p, ), full_output=True) y[k, :] = d[1, :] fos.append(fo) # check for threshold crossing if y[k, 0] >= p.Vpeak: s.append(t[k]) vs.append(y[k, 0]) ws.append(y[k, 1]) y[k, 0] = p.Vreset # reset y[k, 1] += p.b # step return t, y, s, vs, ws, fos Explanation: Complete model End of explanation from assimulo.solvers import LSODAR from assimulo.problem import Explicit_Problem class Extended_Problem(Explicit_Problem): # need variables here for access sw0 = [ False ] ts_spikes = [] ws_spikes = [] Vs_spikes = [] def __init__(self, p): self.p = p self.y0 = [self.p.EL, 5.] # V, w # reset variables self.ts_spikes = [] self.ws_spikes = [] self.Vs_spikes = [] #The right-hand-side function (rhs) def rhs(self, t, y, sw): This is the function we are trying to simulate (aeif model). V, w = y[0], y[1] Ispike = 0. if self.p.DeltaT != 0.: Ispike = self.p.gL * self.p.DeltaT * np.exp((V-self.p.vT)/self.p.DeltaT) dotV = ( -self.p.gL*(V-self.p.EL) + Ispike + self.p.Ie - w ) / self.p.Cm dotW = ( self.p.a*(V-self.p.EL) - w ) / self.p.tau_w return np.array([dotV, dotW]) # Sets a name to our function name = 'AEIF_nosyn' # The event function def state_events(self, t, y, sw): This is our function that keeps track of our events. When the sign of any of the events has changed, we have an event. event_0 = -5 if y[0] >= self.p.Vpeak else 5 # spike if event_0 < 0: if not self.ts_spikes: self.ts_spikes.append(t) self.Vs_spikes.append(y[0]) self.ws_spikes.append(y[1]) elif self.ts_spikes and not np.isclose(t, self.ts_spikes[-1], 0.01): self.ts_spikes.append(t) self.Vs_spikes.append(y[0]) self.ws_spikes.append(y[1]) return np.array([event_0]) #Responsible for handling the events. 
def handle_event(self, solver, event_info): Event handling. This functions is called when Assimulo finds an event as specified by the event functions. ev = event_info event_info = event_info[0] # only look at the state events information. if event_info[0] > 0: solver.sw[0] = True solver.y[0] = self.p.Vreset solver.y[1] += self.p.b else: solver.sw[0] = False def initialize(self, solver): solver.h_sol=[] solver.nq_sol=[] def handle_result(self, solver, t, y): Explicit_Problem.handle_result(self, solver, t, y) # Extra output for algorithm analysis if solver.report_continuously: h, nq = solver.get_algorithm_data() solver.h_sol.extend([h]) solver.nq_sol.extend([nq]) Explanation: LSODAR reference solution Setting assimulo class End of explanation def reference_aeif(p, simtime): ''' Reference aeif model using LSODAR. Parameters ---------- p : Params instance Object containing the neuronal parameters. f : function Right-hand side function (either `rhs_aeif_old` or `rhs_aeif_new`) simtime : double Duration of the simulation (will run between 0 and tmax) dt : double Time increment. Returns ------- t : list Times at which the neuronal state was evaluated. y : list State values associated to the times in `t` s : list Spike times. vs : list Values of `V` just before the spike. ws : list Values of `w` just before the spike h : list List of the minimal time increment at each step. 
''' #Create an instance of the problem exp_mod = Extended_Problem(p) #Create the problem exp_sim = LSODAR(exp_mod) #Create the solver exp_sim.atol=1.e-8 exp_sim.report_continuously = True exp_sim.store_event_points = True exp_sim.verbosity = 30 #Simulate t, y = exp_sim.simulate(simtime) #Simulate 10 seconds return t, y, exp_mod.ts_spikes, exp_mod.Vs_spikes, exp_mod.ws_spikes, exp_sim.h_sol Explanation: LSODAR reference model End of explanation # Regular spiking aeif_param = { 'V_reset': -58., 'V_peak': 0.0, 'V_th': -50., 'I_e': 420., 'g_L': 11., 'tau_w': 300., 'E_L': -70., 'Delta_T': 2., 'a': 3., 'b': 0., 'C_m': 200., 'V_m': -70., #! must be equal to E_L 'w': 5., #! must be equal to 5. 'tau_syn_ex': 0.2 } # Bursting aeif_param2 = { 'V_reset': -46., 'V_peak': 0.0, 'V_th': -50., 'I_e': 500.0, 'g_L': 10., 'tau_w': 120., 'E_L': -58., 'Delta_T': 2., 'a': 2., 'b': 100., 'C_m': 200., 'V_m': -58., #! must be equal to E_L 'w': 5., #! must be equal to 5. } # Close to chaos (use resolution < 0.005 and simtime = 200) aeif_param3 = { 'V_reset': -48., 'V_peak': 0.0, 'V_th': -50., 'I_e': 160., 'g_L': 12., 'tau_w': 130., 'E_L': -60., 'Delta_T': 2., 'a': -11., 'b': 30., 'C_m': 100., 'V_m': -60., #! must be equal to E_L 'w': 5., #! must be equal to 5. } class Params(object): ''' Class giving access to the neuronal parameters. ''' def __init__(self): self.params = aeif_param self.Vpeak = aeif_param["V_peak"] self.Vreset = aeif_param["V_reset"] self.gL = aeif_param["g_L"] self.Cm = aeif_param["C_m"] self.EL = aeif_param["E_L"] self.DeltaT = aeif_param["Delta_T"] self.tau_w = aeif_param["tau_w"] self.a = aeif_param["a"] self.b = aeif_param["b"] self.vT = aeif_param["V_th"] self.Ie = aeif_param["I_e"] p = Params() Explanation: Set the parameters and simulate the models Params (chose a dictionary) End of explanation # Parameters of the simulation simtime = 100. 
resolution = 0.01 t_old, y_old, s_old, vs_old, ws_old, fo_old = scipy_aeif(p, rhs_aeif_old, simtime, resolution) t_new, y_new, s_new, vs_new, ws_new, fo_new = scipy_aeif(p, rhs_aeif_new, simtime, resolution) t_ref, y_ref, s_ref, vs_ref, ws_ref, h_ref = reference_aeif(p, simtime) Explanation: Simulate the 3 implementations End of explanation fig, ax = plt.subplots() ax2 = ax.twinx() # Plot the potentials ax.plot(t_ref, y_ref[:,0], linestyle="-", label="V ref.") ax.plot(t_old, y_old[:,0], linestyle="-.", label="V old") ax.plot(t_new, y_new[:,0], linestyle="--", label="V new") # Plot the adaptation variables ax2.plot(t_ref, y_ref[:,1], linestyle="-", c="k", label="w ref.") ax2.plot(t_old, y_old[:,1], linestyle="-.", c="m", label="w old") ax2.plot(t_new, y_new[:,1], linestyle="--", c="y", label="w new") # Show ax.set_xlim([0., simtime]) ax.set_ylim([-65., 40.]) ax.set_xlabel("Time (ms)") ax.set_ylabel("V (mV)") ax2.set_ylim([-20., 20.]) ax2.set_ylabel("w (pA)") ax.legend(loc=6) ax2.legend(loc=2) plt.show() Explanation: Plot the results Zoom out End of explanation fig, ax = plt.subplots() ax2 = ax.twinx() # Plot the potentials ax.plot(t_ref, y_ref[:,0], linestyle="-", label="V ref.") ax.plot(t_old, y_old[:,0], linestyle="-.", label="V old") ax.plot(t_new, y_new[:,0], linestyle="--", label="V new") # Plot the adaptation variables ax2.plot(t_ref, y_ref[:,1], linestyle="-", c="k", label="w ref.") ax2.plot(t_old, y_old[:,1], linestyle="-.", c="y", label="w old") ax2.plot(t_new, y_new[:,1], linestyle="--", c="m", label="w new") ax.set_xlim([90., 92.]) ax.set_ylim([-65., 40.]) ax.set_xlabel("Time (ms)") ax.set_ylabel("V (mV)") ax2.set_ylim([17.5, 18.5]) ax2.set_ylabel("w (pA)") ax.legend(loc=5) ax2.legend(loc=2) plt.show() Explanation: Zoom in End of explanation print("spike times:\n-----------") print("ref", np.around(s_ref, 3)) # ref lsodar print("old", np.around(s_old, 3)) print("new", np.around(s_new, 3)) print("\nV at spike time:\n---------------") print("ref", 
np.around(vs_ref, 3)) # ref lsodar print("old", np.around(vs_old, 3)) print("new", np.around(vs_new, 3)) print("\nw at spike time:\n---------------") print("ref", np.around(ws_ref, 3)) # ref lsodar print("old", np.around(ws_old, 3)) print("new", np.around(ws_new, 3)) Explanation: Compare properties at spike times End of explanation plt.semilogy(t_ref, h_ref, label='Reference') plt.semilogy(t_old[1:], [d['hu'] for d in fo_old], linewidth=2, label='Old') plt.semilogy(t_new[1:], [d['hu'] for d in fo_new], label='New') plt.legend(loc=6) plt.show(); Explanation: Size of minimal integration timestep End of explanation plt.plot(t_ref, y_ref[:,0], label="V ref.") resolutions = (0.1, 0.01, 0.001) di_res = {} for resolution in resolutions: t_old, y_old, _, _, _, _ = scipy_aeif(p, rhs_aeif_old, simtime, resolution) t_new, y_new, _, _, _, _ = scipy_aeif(p, rhs_aeif_new, simtime, resolution) di_res[resolution] = (t_old, y_old, t_new, y_new) plt.plot(t_old, y_old[:,0], linestyle=":", label="V old, r={}".format(resolution)) plt.plot(t_new, y_new[:,0], linestyle="--", linewidth=1.5, label="V new, r={}".format(resolution)) plt.xlim(0., simtime) plt.xlabel("Time (ms)") plt.ylabel("V (mV)") plt.legend(loc=2) plt.show(); Explanation: Convergence towards LSODAR reference with step size Zoom out End of explanation plt.plot(t_ref, y_ref[:,0], label="V ref.") for resolution in resolutions: t_old, y_old = di_res[resolution][:2] t_new, y_new = di_res[resolution][2:] plt.plot(t_old, y_old[:,0], linestyle="--", label="V old, r={}".format(resolution)) plt.plot(t_new, y_new[:,0], linestyle="-.", linewidth=2., label="V new, r={}".format(resolution)) plt.xlim(90., 92.) plt.ylim([-62., 2.]) plt.xlabel("Time (ms)") plt.ylabel("V (mV)") plt.legend(loc=2) plt.show(); Explanation: Zoom in End of explanation
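The zoomed plots above only let us eyeball the agreement between the fixed-step solutions and the LSODAR reference. A small helper can quantify it by interpolating the reference trace onto the fixed-step time grid and taking the maximum absolute deviation. This is a sketch added here, not part of the original notebook: it assumes arrays shaped like `t_old`, `y_old`, `t_ref`, `y_ref` above, and near spike times the deviation is dominated by the reset, so the metric is most meaningful on a spike-free window.

```python
import numpy as np

def max_deviation(t, y, t_ref, y_ref):
    # Interpolate the reference solution onto the fixed-step grid t,
    # then return the largest pointwise absolute difference.
    # Assumes t and t_ref are sorted, as produced by the solvers above.
    y_ref_interp = np.interp(t, t_ref, y_ref)
    return np.max(np.abs(np.asarray(y) - y_ref_interp))

# Hypothetical usage with the membrane potential traces from above:
# err_old = max_deviation(t_old, y_old[:, 0], t_ref, np.asarray(y_ref)[:, 0])
# err_new = max_deviation(t_new, y_new[:, 0], t_ref, np.asarray(y_ref)[:, 0])
```

As the resolution decreases, `err_new` would be expected to shrink toward the reference faster than `err_old`, matching what the convergence plots show.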
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Correlation In this section we will develop a measure of how tightly clustered a scatter diagram is about a straight line. Formally, this is called measuring linear association. The correlation coefficient The correlation coefficient measures the strength of the linear relationship between two variables. Graphically, it measures how clustered the scatter diagram is around a straight line. The term correlation coefficient isn't easy to say, so it is usually shortened to correlation and denoted by $r$. Here are some mathematical facts about $r$ that we will just observe by simulation. The correlation coefficient $r$ is a number between $-1$ and 1. $r$ measures the extent to which the scatter plot clusters around a straight line. $r = 1$ if the scatter diagram is a perfect straight line sloping upwards, and $r = -1$ if the scatter diagram is a perfect straight line sloping downwards. The function r_scatter takes a value of $r$ as its argument and simulates a scatter plot with a correlation very close to $r$. Because of randomness in the simulation, the correlation is not expected to be exactly equal to $r$. Call r_scatter a few times, with different values of $r$ as the argument, and see how the scatter plot changes. When $r=1$ the scatter plot is perfectly linear and slopes upward. When $r=-1$, the scatter plot is perfectly linear and slopes downward. When $r=0$, the scatter plot is a formless cloud around the horizontal axis, and the variables are said to be uncorrelated. Step2: Calculating $r$ The formula for $r$ is not apparent from our observations so far. It has a mathematical basis that is outside the scope of this class. However, as you will see, the calculation is straightforward and helps us understand several of the properties of $r$. 
Formula for $r$ Step3: Based on the scatter diagram, we expect that $r$ will be positive but not equal to 1. Step4: Step 1. Convert each variable to standard units. Step5: Step 2. Multiply each pair of standard units. Step6: Step 3. $r$ is the average of the products computed in Step 2. Step7: As expected, $r$ is positive but not equal to 1. Properties of $r$ The calculation shows that Step8: The correlation function We are going to be calculating correlations repeatedly, so it will help to define a function that computes it by performing all the steps described above. Let's define a function correlation that takes a table and the labels of two columns in the table. The function returns $r$, the mean of the products of those column values in standard units. Step9: Let's call the function on the x and y columns of t. The function returns the same answer to the correlation between $x$ and $y$ as we got by direct application of the formula for $r$. Step10: As we noticed, the order in which the variables are specified doesn't matter. Step11: Calling correlation on columns of the table suv gives us the correlation between price and mileage as well as the correlation between price and acceleration.
Python Code: z = np.random.normal(0, 1, 500) def r_scatter(xs, r): Generate y-values for a scatter plot with correlation approximately r return r*xs + (np.sqrt(1-r**2))*z corr_opts = { 'aspect_ratio': 1, 'xlim': (-3.5, 3.5), 'ylim': (-3.5, 3.5), } nbi.scatter(np.random.normal(size=500), r_scatter, options=corr_opts, r=(-1, 1, 0.05)) Explanation: Correlation In this section we will develop a measure of how tightly clustered a scatter diagram is about a straight line. Formally, this is called measuring linear association. The correlation coefficient The correlation coefficient measures the strength of the linear relationship between two variables. Graphically, it measures how clustered the scatter diagram is around a straight line. The term correlation coefficient isn't easy to say, so it is usually shortened to correlation and denoted by $r$. Here are some mathematical facts about $r$ that we will just observe by simulation. The correlation coefficient $r$ is a number between $-1$ and 1. $r$ measures the extent to which the scatter plot clusters around a straight line. $r = 1$ if the scatter diagram is a perfect straight line sloping upwards, and $r = -1$ if the scatter diagram is a perfect straight line sloping downwards. The function r_scatter takes a value of $r$ as its argument and simulates a scatter plot with a correlation very close to $r$. Because of randomness in the simulation, the correlation is not expected to be exactly equal to $r$. Call r_scatter a few times, with different values of $r$ as the argument, and see how the scatter plot changes. When $r=1$ the scatter plot is perfectly linear and slopes upward. When $r=-1$, the scatter plot is perfectly linear and slopes downward. When $r=0$, the scatter plot is a formless cloud around the horizontal axis, and the variables are said to be uncorrelated. 
End of explanation x = np.arange(1, 7, 1) y = make_array(2, 3, 1, 5, 2, 7) t = Table().with_columns( 'x', x, 'y', y ) t Explanation: Calculating $r$ The formula for $r$ is not apparent from our observations so far. It has a mathematical basis that is outside the scope of this class. However, as you will see, the calculation is straightforward and helps us understand several of the properties of $r$. Formula for $r$: $r$ is the average of the products of the two variables, when both variables are measured in standard units. Here are the steps in the calculation. We will apply the steps to a simple table of values of $x$ and $y$. End of explanation nbi.scatter(t.column(0), t.column(1), options={'aspect_ratio': 1}) Explanation: Based on the scatter diagram, we expect that $r$ will be positive but not equal to 1. End of explanation def standard_units(nums): return (nums - np.mean(nums)) / np.std(nums) t_su = t.with_columns( 'x (standard units)', standard_units(x), 'y (standard units)', standard_units(y) ) t_su Explanation: Step 1. Convert each variable to standard units. End of explanation t_product = t_su.with_column('product of standard units', t_su.column(2) * t_su.column(3)) t_product Explanation: Step 2. Multiply each pair of standard units. End of explanation # r is the average of the products of standard units r = np.mean(t_product.column(4)) r Explanation: Step 3. $r$ is the average of the products computed in Step 2. End of explanation nbi.scatter(t.column(1), t.column(0), options={'aspect_ratio': 1}) Explanation: As expected, $r$ is positive but not equal to 1. Properties of $r$ The calculation shows that: $r$ is a pure number. It has no units. This is because $r$ is based on standard units. $r$ is unaffected by changing the units on either axis. This too is because $r$ is based on standard units. $r$ is unaffected by switching the axes. Algebraically, this is because the product of standard units does not depend on which variable is called $x$ and which $y$. 
Geometrically, switching axes reflects the scatter plot about the line $y=x$, but does not change the amount of clustering nor the sign of the association. End of explanation def correlation(t, x, y): return np.mean(standard_units(t.column(x))*standard_units(t.column(y))) interact(correlation, t=fixed(t), x=widgets.ToggleButtons(options=['x', 'y'], description='x-axis'), y=widgets.ToggleButtons(options=['x', 'y'], description='y-axis')) Explanation: The correlation function We are going to be calculating correlations repeatedly, so it will help to define a function that computes it by performing all the steps described above. Let's define a function correlation that takes a table and the labels of two columns in the table. The function returns $r$, the mean of the products of those column values in standard units. End of explanation correlation(t, 'x', 'y') Explanation: Let's call the function on the x and y columns of t. The function returns the same answer to the correlation between $x$ and $y$ as we got by direct application of the formula for $r$. End of explanation correlation(t, 'y', 'x') Explanation: As we noticed, the order in which the variables are specified doesn't matter. End of explanation suv = (Table.read_table('https://www.inferentialthinking.com/notebooks/hybrid.csv') .where('class', 'SUV')) interact(correlation, t=fixed(suv), x=widgets.ToggleButtons(options=['mpg', 'msrp', 'acceleration'], description='x-axis'), y=widgets.ToggleButtons(options=['mpg', 'msrp', 'acceleration'], description='y-axis')) correlation(suv, 'mpg', 'msrp') correlation(suv, 'acceleration', 'msrp') Explanation: Calling correlation on columns of the table suv gives us the correlation between price and mileage as well as the correlation between price and acceleration. End of explanation
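The properties of $r$ stated above (a pure number, invariant under changes of units, symmetric under swapping the axes) can also be verified numerically. The sketch below is an addition, not part of the original notebook, and uses plain NumPy arrays instead of a datascience Table; `corr` mirrors the `correlation` function defined above.

```python
import numpy as np

def standard_units(nums):
    return (nums - np.mean(nums)) / np.std(nums)

def corr(x, y):
    # r is the mean of the products of the two variables in standard units
    return np.mean(standard_units(x) * standard_units(y))

x = np.arange(1, 7)
y = np.array([2, 3, 1, 5, 2, 7])

r = corr(x, y)
r_rescaled = corr(2.54 * x + 10, y)  # e.g. inches -> centimeters plus a shift
r_swapped = corr(y, x)
```

A positive rescaling leaves standard units unchanged, so `r_rescaled` equals `r`, and the product inside `corr` does not depend on argument order, so `r_swapped` equals `r` as well.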
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction to Python and Natural Language Technologies Lecture 01, Introduction to Python September 6, 2017 About this part of the course Goal upper intermediate level Python will cover some advanced concepts focus on string manipulation Prerequisites intermediate level in at least one object oriented programming language must know Step1: The last command's output is displayed Step2: This can be a tuple of multiple values Step3: Markdown cell This is in bold This is in italics | This | is | | --- | --- | | a | table | and is a pretty LateX equation Step4: For a complete list of magic commands Step5: Course material - Jupyter slides Jupyter notebooks can be converted to slides and rendered with Reveal.js just like this course material. This slideshow is a single Jupyter notebook which means Step6: The input and output of code cells can be accessed Previous output Step7: Next-previous output Step8: Next-next previous output Step9: N-th output can also be accessed as a variable _output_count. This is only defined if the N-th cell had an output. Here is a way to list all defined outputs (you will understand the code in 3 week) Step10: Inputs can be accessed similarly Previous input Step11: N-th input Step12: The Python programming language History of Python Python started as a hobby project of Dutch programmer, Guido van Rossum in 1989. 
Python 1.0 in 1994 Python 2.0 in 2000 cycle-detecting garbage collector Unicode support Python 3.0 in 2008 backward incompatible Python2 End-of-Life (EOL) date was postponed from 2015 to 2020 # Benevolent Dictator for Life <img width="400" alt="portfolio_view" src="https Step13: Python neologisms the Python community has a number of made-up expressions Pythonic Step14: Dynamic typing type checking is performed at run-time as opposed to compile-time (C++) Step15: Assignment assignment differs from other imperative languages Step16: Simple statements if, elif, else Step17: Conditional expressions one-line if statements the order of operands is different from C's ? Step18: Lists lists are the most frequently used built-in containers basic operations Step19: for, range Iterating a list Step20: Iterating over a range of integers The same in C++ Step21: specifying the start of the range Step22: specifying the step. Note that in this case we need to specify all three positional arguments. Step23: while Step24: There is no do...while loop in Python. break and continue break Step25: Functions Defining functions Functions can be defined using the def keyword Step26: Function arguments positional named or keyword arguments keyword arguments must follow positional arguments Step27: Default arguments arguments can have default values default arguments must follow non-default arguments Step28: Default arguments need not be specified when calling the function Step29: If more than one value has default arguments, either can be skipped Step30: This mechanism allows having a very large number of arguments. Many libraries have functions with dozens of arguments. The popular data analysis library pandas has functions with dozens of arguments, for example Step31: Zen of Python
Python Code: print("Hello world") Explanation: Introduction to Python and Natural Language Technologies Lecture 01, Introduction to Python September 6, 2017 About this part of the course Goal upper intermediate level Python will cover some advanced concepts focus on string manipulation Prerequisites intermediate level in at least one object oriented programming language must know: class, instance, method, operator overloading, basic IO handling good to know: static method, property, mutability, garbage collection Course material Official Github repository will push the slideshow notebooks right before the lecture, so you can follow on your own notebook Homework one homework for this part released on Week 4 deadline by the end of Week 7 Jupyter Jupyter - formally known as IPython Notebook is a web application that allows you to create and share documents with live code, equations, visualizations etc. Jupyter notebooks are JSON files with the extension .ipynb can be converted to HTML, PDF, LateX etc. can render images, tables, graphs, LateX equations content is organized into cells Cell types code cell: Python/R/Lua/etc. 
code raw cell: raw text markdown cell: formatted text using Markdown Code cell End of explanation 2 + 3 3 + 4 Explanation: The last command's output is displayed End of explanation 2 + 3, 3 + 4, "hello " + "world" Explanation: This can be a tuple of multiple values End of explanation %%time for x in range(100000): pass %%timeit x = 2 %%writefile hello.py print("Hello world from BME") Explanation: Markdown cell This is in bold This is in italics | This | is | | --- | --- | | a | table | and is a pretty LateX equation: $$ \mathbf{E}\cdot\mathrm{d}\mathbf{S} = \frac{1}{\varepsilon_0} \iiint_\Omega \rho \,\mathrm{d}V $$ Using Jupyter Command mode and edit mode Jupyter has two modes: command mode and edit mode Command mode: perform non-edit operations on selected cells (can select more than one cell) selected cells are marked blue Edit mode: edit a single cell the cell being edited is marked green Switching between modes Esc: Edit mode -> Command mode Enter or double click: Command mode -> Edit mode Running cells Ctrl + Enter: run cell Shift + Enter: run cell and select next cell Alt + Enter: run cell and insert new cell below Cell magic Special commands can modify a single cell's behavior, for example End of explanation %lsmagic Explanation: For a complete list of magic commands: End of explanation print("this is run first") print("this is run afterwords. Note the execution count on the left.") Explanation: Course material - Jupyter slides Jupyter notebooks can be converted to slides and rendered with Reveal.js just like this course material. 
This slideshow is a single Jupyter notebook which means: - you can view it as a notebook on Github - you can run and modify it on your own computer - you can render it using Reveal.js ~~~ jupyter-nbconvert --to slides 01_Python_introduction.ipynb --reveal-prefix=reveal.js --post serve ~~~ More on Jupyter slides: 10 min video on Jupyter slides cells may be skipped during presentations some extra material is skipped, they will not be covered in the exam all notebooks should run without errors using Kernel -&gt; Restart &amp; Run All code samples that would raise an exception are commented this live presentation uses the RISE jupyter extension Under the hood each notebook is run by its own Kernel (Python interpreter) the kernel can interrupted or restarted through the Kernel menu always run Kernel -&gt; Restart &amp; Run All before submitting homework to make sure that your notebook behaves as expected all cells share a single namespace cells can be run in arbitrary order, execution count is helpful End of explanation 42 _ Explanation: The input and output of code cells can be accessed Previous output: End of explanation "first" "second" __ __ Explanation: Next-previous output: End of explanation ___ Explanation: Next-next previous output: End of explanation list(filter(lambda x: x.startswith('_') and x[1:].isdigit(), globals())) Explanation: N-th output can also be accessed as a variable _output_count. This is only defined if the N-th cell had an output. Here is a way to list all defined outputs (you will understand the code in 3 week): End of explanation _i Explanation: Inputs can be accessed similarly Previous input: End of explanation _i2 Explanation: N-th input: End of explanation import antigravity Explanation: The Python programming language History of Python Python started as a hobby project of Dutch programmer, Guido van Rossum in 1989. 
Python 1.0 in 1994 Python 2.0 in 2000 cycle-detecting garbage collector Unicode support Python 3.0 in 2008 backward incompatible Python2 End-of-Life (EOL) date was postponed from 2015 to 2020 # Benevolent Dictator for Life <img width="400" alt="portfolio_view" src="https://upload.wikimedia.org/wikipedia/commons/6/66/Guido_van_Rossum_OSCON_2006.jpg"> Guido van Rossum at OSCON 2006. by Doc Searls licensed under CC BY 2.0 Python community and development Python Software Foundation nonprofit organization based in Delaware, US managed through PEPs (Python Enhancement Proposal) strong community inclusion large standard library very large third-party module repository called PyPI (Python Package Index) pip installer End of explanation n = 12 if n % 2 == 0: print("n is even") else: print("n is odd") Explanation: Python neologisms the Python community has a number of made-up expressions Pythonic: following Python's conventions, Python-like Pythonist or Pythonista: good Python programmer General properties of Python Whitespaces whitespace indentation instead of curly braces no semicolons End of explanation n = 2 print(type(n)) n = 2.1 print(type(n)) n = "foo" print(type(n)) Explanation: Dynamic typing type checking is performed at run-time as opposed to compile-time (C++) End of explanation i = 2 print(id(i)) i = 3 print(id(i)) i = "foo" print(id(i)) s = i print(id(s) == id(i)) s += "bar" print(id(s) == id(i)) Explanation: Assignment assignment differs from other imperative languages: in C++ i = 2 translates to typed variable named i receives a copy of numeric value 2 in Python i = 2 translates to name i receives a reference to object of numeric type of value 2 the built-in function id returns the object's id End of explanation #n = int(input()) n = 12 if n < 0: print("N is negative") elif n > 0: print("N is positive") else: print("N is neither positive nor negative") Explanation: Simple statements if, elif, else End of explanation n = -2 abs_n = n if n >= 0 else -n abs_n 
Explanation: Conditional expressions one-line if statements the order of operands is different from C's ?: operator, the C version of abs would look like this ~~~C int x = -2; int abs_x = x ? x>=0 : -x; ~~~ - should only be used for very short statements &lt;expr1&gt; if &lt;condition&gt; else &lt;expr2&gt; End of explanation l = [] # empty list l.append(2) l.append(2) l.append("foo") len(l), l l[1] = "bar" l.extend([-1, True]) len(l), l Explanation: Lists lists are the most frequently used built-in containers basic operations: indexing, length, append, extend lists will be covered in detail next week End of explanation for e in ["foo", "bar"]: print(e) Explanation: for, range Iterating a list End of explanation for i in range(5): print(i) Explanation: Iterating over a range of integers The same in C++: ~~~C++ for (int i=0; i<5; i++) cout << i << endl; ~~~ By default range starts from 0. End of explanation for i in range(2, 5): print(i) Explanation: specifying the start of the range: End of explanation for i in range(0, 10, 2): print(i) Explanation: specifying the step. Note that in this case we need to specify all three positional arguments. End of explanation i = 0 while i < 5: print(i) i += 1 Explanation: while End of explanation for i in range(10): if i % 2 == 0: continue print(i) for i in range(10): if i > 4: break print(i) Explanation: There is no do...while loop in Python. 
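One common way to emulate do...while (a body that always runs at least once, with the condition checked afterwards) is a `while True` loop ended by `break`. This small sketch is an addition to the original slides:

```python
# Emulating C's  do { body } while (cond);  with while True + break
values = []
n = 3
while True:
    values.append(n)   # body: always executes at least once
    n -= 1
    if n <= 0:         # post-condition, checked after the body
        break
# values is now [3, 2, 1]

# even when the condition is false from the start, the body runs once
ran = []
m = 0
while True:
    ran.append(m)
    if m <= 0:
        break
# ran is now [0]
```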
break and continue break: allows early exit from a loop continue: allows early jump to next iteration End of explanation def foo(): print("this is a function") foo() Explanation: Functions Defining functions Functions can be defined using the def keyword: End of explanation def foo(arg1, arg2, arg3): print("arg1 ", arg1) print("arg2 ", arg2) print("arg3 ", arg3) foo(1, 2, 3) foo(1, arg3=2, arg2=29) Explanation: Function arguments positional named or keyword arguments keyword arguments must follow positional arguments End of explanation def foo(arg1, arg2, arg3=3): print("arg1 ", arg1) print("arg2 ", arg2) print("arg3 ", arg3) foo(1, 2) Explanation: Default arguments arguments can have default values default arguments must follow non-default arguments End of explanation foo(1, 2) foo(arg1=1, arg3=33, arg2=222) Explanation: Default arguments need not be specified when calling the function End of explanation def foo(arg1, arg2=2, arg3=3): print("arg1 ", arg1) print("arg2 ", arg2) print("arg3 ", arg3) foo(11, arg3=33) Explanation: If more than one value has default arguments, either can be skipped: End of explanation def foo(n): if n < 0: return "negative" if 0 <= n < 10: return "positive", n return print(foo(-2)) print(foo(3)) print(foo(12)) Explanation: This mechanism allows having a very large number of arguments. Many libraries have functions with dozens of arguments. 
The popular data analysis library pandas has functions with dozens of arguments, for example:
~~~python
pandas.read_csv(filepath_or_buffer, sep=', ', delimiter=None, header='infer', names=None,
                index_col=None, usecols=None, squeeze=False, prefix=None, mangle_dupe_cols=True,
                dtype=None, engine=None, converters=None, true_values=None, false_values=None,
                skipinitialspace=False, skiprows=None, nrows=None, na_values=None,
                keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True,
                parse_dates=False, infer_datetime_format=False, keep_date_col=False,
                date_parser=None, dayfirst=False, iterator=False, chunksize=None,
                compression='infer', thousands=None, decimal=b'.', lineterminator=None,
                quotechar='"', quoting=0, escapechar=None, comment=None, encoding=None,
                dialect=None, tupleize_cols=False, error_bad_lines=True, warn_bad_lines=True,
                skipfooter=0, skip_footer=0, doublequote=True, delim_whitespace=False,
                as_recarray=False, compact_ints=False, use_unsigned=False, low_memory=True,
                buffer_lines=None, memory_map=False, float_precision=None)
~~~
The return statement
- functions may return more than one value; a tuple of the values is returned
- without an explicit return statement, None is returned
- an empty return statement returns None
End of explanation
import this
Explanation: Zen of Python
End of explanation
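Tying together the conditional-expression and return-statement rules above, here is a short self-contained sketch; the function names are invented for illustration:

```python
def sign_label(n):
    # conditional expression: <expr1> if <condition> else <expr2>
    return "non-negative" if n >= 0 else "negative"

def divmod10(n):
    # returning two values really returns one tuple
    return n // 10, n % 10

def log_it(n):
    # no return statement: the function returns None
    print(n)

print(sign_label(-2))  # negative
print(divmod10(42))    # (4, 2)
print(log_it(1))       # prints 1, then None
```

Note that calling `log_it` inside `print` shows both behaviors at once: the function's own output, then the implicit `None` it returns.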
Description: 1) With "Lil Wayne" and "Lil Kim" there are a lot of "Lil" musicians. Do a search and print a list of 50 that are playable in the USA (or the country of your choice), along with their popularity score. <br /> Step1: 2) What genres are most represented in the search results? Edit your previous printout to also display a list of their genres in the format "GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed". <br /> Step2: Tip Step3: 4) Print a list of Lil's that are more popular than Lil' Kim. <br /> Step4: 5) Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks. <br /> Step5: Tip Step6: 7) Since we're talking about Lils, what about Biggies? How many total "Biggie" artists are there? How many total "Lil"s? If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies? <br /> Step7: 8) Out of the top 50 "Lil"s and the top 50 "Biggie"s, who is more popular on average? <br />
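Before the requests below, it is worth building the search URL programmatically instead of pasting query strings together. This is a small sketch using only the standard library; the endpoint and parameter names mirror the ones used in the code that follows:

```python
from urllib.parse import urlencode

BASE = "https://api.spotify.com/v1/search"

def search_url(query, market="us", limit=50, offset=0):
    # urlencode escapes special characters such as ':' safely
    params = {"query": query, "type": "artist",
              "market": market, "limit": limit, "offset": offset}
    return BASE + "?" + urlencode(params)

print(search_url("artist:lil"))
print(search_url("artist:lil", offset=50))  # second page of 50 results
```

The offset parameter is what the pagination loop in question 7 varies, 50 artists at a time.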
Python Code: import requests response = requests.get('https://api.spotify.com/v1/search?query=artist:lil&type=artist&market=us&limit=50') data = response.json() artists = data['artists']['items'] for artist in artists: print(artist['name'], artist['popularity']) Explanation: 1) With "Lil Wayne" and "Lil Kim" there are a lot of "Lil" musicians. Do a search and print a list of 50 that are playable in the USA (or the country of your choice), along with their popularity score. <br /> End of explanation for artist in artists: if not artist['genres']: print(artist['name'], artist['popularity'], 'No genres listed') else: print(artist['name'], artist['popularity'], ', '.join(artist['genres'])) genre_list = [] for artist in artists: for genre in artist['genres']: genre_list.append(genre) sorted_genre = sorted(genre_list) genre_list_number = range(len(sorted_genre)) genre_count = 0 for number in genre_list_number: if not sorted_genre[number] == sorted_genre[number - 1]: print((sorted_genre[number]), genre_list.count(sorted_genre[number])) if genre_count < genre_list.count(sorted_genre[number]): genre_count = genre_list.count(sorted_genre[number]) freq_genre = sorted_genre[number] print('') print('With', genre_count, 'artists,', freq_genre, 'is the most represented in search results.') numbers = [72, 3, 0, 72, 34, 72, 3] Explanation: 2) What genres are most represented in the search results? Edit your previous printout to also display a list of their genres in the format "GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed". 
<br /> End of explanation highest_pop = 0 for artist in artists: if artist['name'] != 'Lil Wayne' and highest_pop < artist['popularity']: highest_pop = artist['popularity'] highest_pop_artist = artist['name'] print(highest_pop_artist, 'is the second-most-popular artist with \"Lil\" in his/her name.') most_followers = 0 for artist in artists: if most_followers < artist['followers']['total']: most_followers = artist['followers']['total'] most_followers_artist = artist['name'] print(most_followers_artist, 'has', most_followers, 'followers.') if highest_pop_artist == most_followers_artist: print('The second-most-popular \'Lil\' artist is also the one with the most followers.') else: print('The second-most-popular \'Lil\' artist and the one with the most followers are different people.') Explanation: Tip: "how to join a list Python" might be a helpful search <br /> 3) Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating. Is it the same artist who has the largest number of followers? <br /> End of explanation for artist in artists: if artist['name'] == 'Lil\' Kim': more_popular = artist['popularity'] for artist in artists: if more_popular < artist ['popularity']: print(artist['name'], 'is more popular than Lil\' Kim with a popularity score of', artist['popularity']) Explanation: 4) Print a list of Lil's that are more popular than Lil' Kim. 
<br /> End of explanation wayne_id = '55Aa2cqylxrFIXC767Z865' wayne_response = requests.get('https://api.spotify.com/v1/artists/' + wayne_id + '/top-tracks?country=us') wayne_data = wayne_response.json() print('Lil Wayne\'s top tracks:') wayne_tracks = wayne_data['tracks'] for track in wayne_tracks: print(track['name']) print('') kim_id = '5tth2a3v0sWwV1C7bApBdX' kim_response = requests.get('https://api.spotify.com/v1/artists/' + kim_id + '/top-tracks?country=us') kim_data = kim_response.json() print('Lil\' Kim\'s top tracks:') kim_tracks = kim_data['tracks'] for track in kim_tracks: print(track['name']) Explanation: 5) Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks. <br /> End of explanation print('Lil Wayne\'s explicit top tracks:') ew_total_pop = 0 ew_total_tracks = 0 ew_playtime = 0 for track in wayne_tracks: if track['explicit']: ew_total_pop = ew_total_pop + track['popularity'] ew_total_tracks = ew_total_tracks + 1 ew_playtime = ew_playtime + track['duration_ms']/60000 if ew_total_tracks == 0: print('There are no explicit tracks.') else: print('The average popularity is', ew_total_pop / ew_total_tracks) print('He has', ew_playtime, 'minutes of explicit music in his top tracks.') print('') print('Lil Wayne\'s non-explicit top tracks:') nw_total_pop = 0 nw_total_tracks = 0 nw_playtime = 0 for track in wayne_tracks: if not track['explicit']: nw_total_pop = nw_total_pop + track ['popularity'] nw_total_tracks = nw_total_tracks + 1 nw_playtime = nw_playtime + track['duration_ms']/60000 if nw_total_tracks == 0: print('There are no non-explicit tracks.') else: print('The average popularity is', nw_total_pop / nw_total_tracks) print('He has', nw_playtime, 'minutes of non-explicit music in his top tracks.') print('') print('Lil\' Kim\'s explicit top tracks:') ek_total_pop = 0 ek_total_tracks = 0 ek_playtime = 0 for track in kim_tracks: if track['explicit']: ek_total_pop = ek_total_pop + track ['popularity'] 
        ek_total_tracks = ek_total_tracks + 1
        ek_playtime = ek_playtime + track['duration_ms']/60000
if ek_total_tracks == 0:
    print('There are no explicit tracks.')
else:
    print('The average popularity is', ek_total_pop / ek_total_tracks)
    print('She has', ek_playtime, 'minutes of explicit music in her top tracks.')
print('')
print('Lil\' Kim\'s non-explicit top tracks:')
nk_total_pop = 0
nk_total_tracks = 0
nk_playtime = 0
for track in kim_tracks:
    if not track['explicit']:
        nk_total_pop = nk_total_pop + track['popularity']
        nk_total_tracks = nk_total_tracks + 1
        nk_playtime = nk_playtime + track['duration_ms']/60000
if nk_total_tracks == 0:
    print('There are no non-explicit tracks.')
else:
    print('The average popularity is', nk_total_pop / nk_total_tracks)
    print('She has', nk_playtime, 'minutes of non-explicit music in her top tracks.')
Explanation: Tip: You're going to be making two separate requests, be sure you DO NOT save them into the same variable. <br /> 6) Will the world explode if a musician swears? Get an average popularity for their explicit songs vs. their non-explicit songs. How many minutes of explicit songs do they have? Non-explicit?
<br /> End of explanation biggie_response = requests.get('https://api.spotify.com/v1/search?query=artist:biggie&type=artist&market=us&limit=50') biggie_data = biggie_response.json() biggie_artists = biggie_data['artists']['items'] total_biggies = 0 for artist in biggie_artists: total_biggies = total_biggies + 1 print('There are', total_biggies, 'Biggies on Spotify.') print('It would take', total_biggies * 5, 'seconds to request all of the Biggies if you were requesting one every five seconds.') print('') pages = range(90) total_lils = 0 for page in pages: lil_response = requests.get('https://api.spotify.com/v1/search?query=artist:lil&type=artist&market=us&limit=50&offset=' + str(page * 50)) lil_data = lil_response.json() lil_artists = lil_data['artists']['items'] for artist in lil_artists: total_lils = total_lils + 1 print('There are', total_lils, 'Lils on Spotify.') print('It would take', round(total_lils / 12), 'minutes to request all of the Lils if you were requesting one every five seconds.') Explanation: 7) Since we're talking about Lils, what about Biggies? How many total "Biggie" artists are there? How many total "Lil"s? If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies? 
<br /> End of explanation biggie_total_pop = 0 for artist in biggie_artists: biggie_total_pop = biggie_total_pop + artist['popularity'] biggie_avg_pop = biggie_total_pop / 50 lil_response_pg1 = requests.get('https://api.spotify.com/v1/search?query=artist:lil&type=artist&market=us&limit=50') lil_data_pg1 = lil_response_pg1.json() lil_artists_pg1 = lil_data_pg1['artists']['items'] lil_total_pop = 0 for artist in lil_artists_pg1: lil_total_pop = lil_total_pop + artist['popularity'] lil_avg_pop = lil_total_pop / 50 if biggie_avg_pop > lil_avg_pop: print('The top 50 biggies are more popular.') elif biggie_avg_pop < lil_avg_pop: print('The top 50 lils are more popular.') else: print('They are equally popular.') Explanation: 8) Out of the top 50 "Lil"s and the top 50 "Biggie"s, who is more popular on average? <br /> End of explanation
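The time estimates in questions 7 and 8 boil down to one conversion: at 1 request every 5 seconds, 12 requests fit in a minute. A tiny helper makes the arithmetic explicit; the 4500 below is just an illustrative count, not the real number of Lils:

```python
def download_minutes(n_requests, seconds_per_request=5):
    # Total wall-clock time in minutes at a fixed request rate
    return n_requests * seconds_per_request / 60

print(download_minutes(50))    # 50 Biggies take about 4.2 minutes
print(download_minutes(4500))  # a hypothetical 4500 Lils take 375.0 minutes
```

This is the same computation as `total_lils / 12` in the code above, just with the units spelled out.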
Description: When I first ran this, my dataframes weren't "aligned". So it's very important to check your datasets after every load. The correspondence between dates and topics and numerical features is critical for training! Step1: Wow! DiscriminantAnalysis is VERY discriminating! Step2: But not in a good way. 10x more true favorites than predicted. Our unbalanced training set makes it easy for the judge to be tough. Let's mellow our judge a bit... Step3: High accuracy, but low MCC (correlation) Balance the training? Get rid of some negatives? Accentuate the positive? <-- give this a try yourself Step4: So let's add some more negative examples back in. 50x imbalance is definitely misleading. But 2-5x imbalance is probably OK. Step5: At least the confusion matrix looks balanced now Step6: Should have known, imbalance doesn't help...
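The confusion matrix and MCC discussed here come from the notebook's Confusion helper, whose implementation is not shown; the quantities themselves can be sketched in pure Python. The labels below are made up for illustration:

```python
import math

def confusion_counts(truth, predicted):
    # (true positives, true negatives, false positives, false negatives)
    tp = sum(t and p for t, p in zip(truth, predicted))
    tn = sum(not t and not p for t, p in zip(truth, predicted))
    fp = sum(not t and p for t, p in zip(truth, predicted))
    fn = sum(t and not p for t, p in zip(truth, predicted))
    return tp, tn, fp, fn

def mcc(truth, predicted):
    # Matthews correlation coefficient; 0 when the denominator vanishes
    tp, tn, fp, fn = confusion_counts(truth, predicted)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

truth     = [True, True, False, False, False, False]
predicted = [True, False, True, False, False, False]
print(confusion_counts(truth, predicted))  # (1, 3, 1, 1)
print(mcc(truth, predicted))               # 0.25
```

MCC stays low even when accuracy is high on an unbalanced set, which is exactly the effect the notebook runs into.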
Python Code: print(len(dates)) print(len(topics)) print(len(nums)) sum(nums.index == dates.index) == len(dates) sum(nums.index == topics.index) == len(dates) disc = LinearDiscriminantAnalysis() disc category = (np.ceil(nums.favorite_count ** .13)).astype(np.int8) disc = LinearDiscriminantAnalysis().fit(topics, category) predicted_favorites = disc.predict(topics) predicted_favorites[:100] np.sum(predicted_favorites > 0) Explanation: When I first ran this, my dataframes weren't "aligned". So it's very important to check your datasets after every load. The correspondence between dates and topics and numerical features is critical for training! End of explanation np.sum(nums.favorite_count >= 1) Explanation: Wow! DiscriminantAnalysis is VERY discriminating! End of explanation results = pd.DataFrame() results['predicted'] = predicted_favorites results['truth'] = pd.Series(nums.favorite_count >= 1) conf = Confusion(results) conf results.predicted.corr(results.truth) conf.stats_dict Explanation: But not in a good way. 10x more true favorites than predicted. Our unbalanced training set makes it easy for the judge to be tough. Let's mellow our judge a bit... End of explanation pos = np.array(nums.favorite_count >= 1) neg = ~pos portion_pos = float(sum(pos)) / len(nums) mask = ((np.random.binomial(1, portion_pos, size=len(nums)).astype(bool) & neg) | pos) disc = LinearDiscriminantAnalysis().fit(topics[mask], (nums.favorite_count[mask] >= 1)) print(sum(mask)) print(sum(pos) * 2) results = pd.DataFrame() results['predicted'] = disc.predict(topics.values) results['truth'] = nums.favorite_count.values >= 1 conf = Confusion(results) conf results.predicted.corr(results.truth) conf.stats_dict Explanation: High accuracy, but low MCC (correlation) Balance the training? Get rid of some negatives? Accentuate the positive? 
<-- give this a try yourself
End of explanation
portion_neg = 3 * portion_pos
mask = ((np.random.binomial(1, portion_neg, size=len(nums)).astype(bool) & neg) | pos)
disc = LinearDiscriminantAnalysis().fit(topics[mask], nums.favorite_count[mask] >= 1)
print(sum(mask))
print(sum(pos) * 2)

results = pd.DataFrame()
results['predicted'] = disc.predict(topics.values)
results['truth'] = nums.favorite_count.values > 0
conf = Confusion(results)
conf
Explanation: So let's add some more negative examples back in. 50x imbalance is definitely misleading. But 2-5x imbalance is probably OK.
End of explanation
results.predicted.corr(results.truth)
Explanation: At least the confusion matrix looks balanced now
End of explanation
portion_neg = 2 * portion_pos
mask = ((np.random.binomial(1, portion_neg, size=len(nums)).astype(bool) & neg) | pos)
disc = LinearDiscriminantAnalysis().fit(topics.values[mask], (nums.favorite_count.values > 0)[mask])
print(sum(mask))
print(sum(pos) * 2)

results = pd.DataFrame()
results['predicted'] = disc.predict(topics.values)
results['truth'] = nums.favorite_count.values > 0
conf = Confusion(results)
conf
results.predicted.corr(results.truth)
Explanation: Should have known, imbalance doesn't help...
End of explanation
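The undersampling mask built above with np.random.binomial can be sketched without NumPy: random.random() < p plays the role of one binomial draw, every positive is kept, and each negative survives with probability p. The labels below are synthetic:

```python
import random

def undersample_mask(labels, neg_keep_prob, rng=random):
    # Keep every positive example; keep each negative with probability neg_keep_prob
    return [lab or (rng.random() < neg_keep_prob) for lab in labels]

random.seed(0)
labels = [True] * 5 + [False] * 95
# 3 * portion_pos, mirroring the 3x-imbalance experiment above
mask = undersample_mask(labels, neg_keep_prob=3 * 5 / 100)
print(len(mask), sum(mask))  # 100 entries total; positives plus the sampled negatives
```

The `or` short-circuit is what guarantees no positive example is ever dropped.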
Description: Getting Started with Batfish This notebook uses pybatfish, a Python-based SDK for Batfish, to analyze a sample network. It shows how to submit your configurations and other network data for analysis and how to query its vendor-neutral network model. Other notebooks show how to use Batfish for different types of network validation tasks. Check out a video demo of an earlier version of this notebook here. Initializing a Network and Snapshot A network is a logical group of routers and links. It can be your entire network or a subset of it. A snapshot is a collection of information (configuration files, routing data, up/down status of nodes and links) that represents the network state. Snapshots can contain the actual state of the network or candidate states (e.g., those corresponding to a planned change) that you want to analyze. **Make sure that the Batfish service is running locally before running the cells below.** Step1: SNAPSHOT_PATH below can be updated to point to a custom snapshot directory; see the Batfish instructions for how to package data for analysis.<br> More example networks are available in the networks folder of the Batfish repository. Step2: If you used the example we provided, the network you initialized above is illustrated below. You can download/view devices' configuration files here. Querying the Network Model Batfish creates a comprehensive vendor-neutral device and network model which can be queried for information about devices, interfaces, VRFs, routes, etc. It offers a set of questions to query this model. Step3: Getting status of parsed files Batfish may ignore certain lines in the configuration. To retrieve the parsing status of snapshot files, use the fileParseStatus() question. Step4: answer() runs the question and returns the answer in a JSON format. frame() wraps the answer as a pandas dataframe.
Step5: Additional post-processing can be done on this data, like filtering for values in one or multiple columns, reducing the number of columns, etc. using pandas. We show a few examples of Pandas filtering below, some more filtering examples for Batfish answers are here, and a general tutorial is here. Step6: Extracting properties of network entities Entities in the network refer to things like nodes, interfaces, routing processes, and VRFs. Batfish makes it trivial to extract configured properties of such entities in a vendor neutral manner. Node properties The nodeProperties question extracts information on nodes in the snapshot. Step7: Interface properties To retrieve information about interfaces and the properties of them, use the interfaceProperties question Step8: Similar questions extract properties of other entities (e.g., bgpProcessConfiguration() extracts properties of BGP processes). Inspecting referential integrity of configuration structures Network configuratons define and reference named structures like route maps, access control lists (ACLs), prefix lists, etc. Two common indicators of buggy configurations include references to structures that are not defined anywhere (which can lead to disastrous consequences on some platforms) or defined structures that are not referenced anywhere. Batfish makes it easy to flag such instances because it understand the underlying semantics of configuration. Step9: The question for listing any unused structures is unusedStructures(). Inspecting topologies Nodes in a network form multiple types of topologies that are defined by edges at layer 3 (IP layer) or by routing protocols such as BGP or OSPF. Batfish has questions that return such edges. These questions take nodes and remoteNodes parameters that can limit the output to a subset of the nodes. Step10: Exploring Routing and Forwarding Batfish computes routing and forwarding tables (aka RIBs and FIBs) of the network from snapshot data itself. 
These tables can be examined to understand the routing and forwarding behavior of the network. One way to examine this behavior is using a virtual traceroute. Unlike the live-network traceroute, Batfish shows all possible flow paths in the network and identifies routing entries that cause each hop to be taken. Step11: Another way to understand the routing behavior in detail is to examine the routing tables directly. Step12: (For a large network, the first time you run a question that needs the dataplane, fetching the answer can take a few minutes. Subsequent questions are quick as the generated dataplane is saved by Batfish.) As used above, the routes() question can generate a lot of results. You may restrict the output using parameters to the question: to restrict the results to core routers, use nodes="/core/", and to restrict results to the prefix 90.90.90.0/24, use network="90.90.90.0/24".
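The per-prefix restriction described here is ordinary longest-prefix matching; in miniature, it can be illustrated with the standard library's ipaddress module. The toy table and next-hop names below are invented, not taken from the example network:

```python
import ipaddress

TABLE = [
    ("90.90.90.0/24", "core1"),
    ("90.90.0.0/16", "border1"),
    ("0.0.0.0/0", "isp"),
]

def lookup(dst):
    # Longest-prefix match: the most specific matching network wins
    addr = ipaddress.ip_address(dst)
    matches = [(ipaddress.ip_network(net), hop)
               for net, hop in TABLE
               if addr in ipaddress.ip_network(net)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("90.90.90.7"))  # core1
print(lookup("90.90.1.1"))   # border1
print(lookup("8.8.8.8"))     # isp
```

Batfish's routes() answer is essentially a large table of exactly this shape, one row per (node, VRF, prefix, next hop).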
Python Code: # Import packages %run startup.py bf = Session(host="localhost") Explanation: Getting Started with Batfish This notebook uses pybatfish, a Python-based SDK for Batfish, to analyze a sample network. It shows how to submit your configurations and other network data for analysis and how to query its vendor-neutral network model. Other notebooks show how to use Batfish for different types of network validation tasks. Check out a video demo of an earlier version of this notebook here. Initializing a Network and Snapshot A network is a logical group of routers and links. It can be your entire network or a subset of it. A snapshot is a collection of information (configuration files, routing data, up/down status of nodes and links) that represent the network state. Snapshots can contain the actual state of the network or candidate states (e.g, those corresponding to a planned change) that you want to analyze. **Make sure that the Batfish service is running locally before running the cells below.** End of explanation # Assign a friendly name to your network and snapshot NETWORK_NAME = "example_network" SNAPSHOT_NAME = "example_snapshot" SNAPSHOT_PATH = "networks/example" # Now create the network and initialize the snapshot bf.set_network(NETWORK_NAME) bf.init_snapshot(SNAPSHOT_PATH, name=SNAPSHOT_NAME, overwrite=True) Explanation: SNAPSHOT_PATH below can be updated to point to a custom snapshot directory, see the Batfish instructions for how to package data for analysis.<br> More example networks are available in the networks folder of the Batfish repository. End of explanation # You can also use tab-completion on the Batfish question module - bf.q. -> press TAB key, # uncomment and try on the following line # bf.q. # In IPython and Jupyter you can use the "?" 
shorthand to get help on a question ?bf.q.nodeProperties # help(bf.q.nodeProperties) # in standard Python console Explanation: If you used the example we provided, the network you initialized above is illustrated below. You can download/view devices' configuration files here. Querying the Network Model Batfish creates a comprehensive vendor-neutral device and network model which can be queried for information about devices, interfaces, VRFs, routes, etc. It offers a set of questions to query this model. End of explanation parse_status = bf.q.fileParseStatus().answer().frame() Explanation: Getting status of parsed files Batfish may ignore certain lines in the configuration. To retrieve the parsing status of snapshot files, use the fileParseStatus() question. End of explanation # View the parse status results parse_status Explanation: answer() runs the question and returns the answer in a JSON format. frame() wraps the answer as pandas dataframe. End of explanation # An example: use a filter on the returned dataframe to see which files failed to parse completely parse_status[parse_status['Status'] != 'PASSED'] # change '!=' to '==' to get the files which passed # View details if some of the files were not parsed completely bf.q.parseWarning().answer().frame() Explanation: Additional post-processing can be done on this data, like filtering for values in one or multiple columns, reducing the number of columns, etc. using pandas. We show a few examples of Pandas filtering below, some more filtering examples for Batfish answers are here, and a general tutorial is here. 
End of explanation # Extract the properties of all nodes whose names contain 'border' node_properties = bf.q.nodeProperties(nodes="/border/").answer().frame() # View what columns (properties) are present in the answer node_properties.columns # To extract only a subset of properties, use the properties parameter bf.q.nodeProperties(nodes="/border/", properties="Domain_Name,NTP_Servers,Interfaces").answer().frame() Explanation: Extracting properties of network entities Entities in the network refer to things like nodes, interfaces, routing processes, and VRFs. Batfish makes it trivial to extract configured properties of such entities in a vendor neutral manner. Node properties The nodeProperties question extracts information on nodes in the snapshot. End of explanation # Fetch specific properties of Loopback interfaces bf.q.interfaceProperties(interfaces="/loopback/", properties="Bandwidth,VRF,Primary_Address").answer().frame() Explanation: Interface properties To retrieve information about interfaces and the properties of them, use the interfaceProperties question End of explanation # List references to undefined structures bf.q.undefinedReferences().answer().frame() Explanation: Similar questions extract properties of other entities (e.g., bgpProcessConfiguration() extracts properties of BGP processes). Inspecting referential integrity of configuration structures Network configuratons define and reference named structures like route maps, access control lists (ACLs), prefix lists, etc. Two common indicators of buggy configurations include references to structures that are not defined anywhere (which can lead to disastrous consequences on some platforms) or defined structures that are not referenced anywhere. Batfish makes it easy to flag such instances because it understand the underlying semantics of configuration. 
End of explanation # Get layer 3 edges bf.q.layer3Edges(nodes="as1border1").answer().frame() # Get BGP edges bf.q.bgpEdges(nodes="as1border1").answer().frame() Explanation: The question for listing any unused structures is unusedStructures(). Inspecting topologies Nodes in a network form multiple types of topologies that are defined by edges at layer 3 (IP layer) or by routing protocols such as BGP or OSPF. Batfish has questions that return such edges. These questions take nodes and remoteNodes parameters that can limit the output to a subset of the nodes. End of explanation # Do a traceroute from host1 to 1.0.2.2 tr_frame = bf.q.traceroute(startLocation="host1", headers=HeaderConstraints(dstIps="1.0.2.2")).answer().frame() # Display results using customizations to handle large string values show(tr_frame) Explanation: Exploring Routing and Forwarding Batfish computes routing and forwarding tables (aka RIBs and FIBs) of the network from snapshot data itself. These tables can be examined to understand the routing and forwarding behavior of the network. One way to examine this behavior is using a virtual traceroute. Unlike the live-network traceroute, Batfish shows all possible flow paths in the network and identifies routing entries that cause each hop to be taken. End of explanation # Fetch the routing table of all VRFs on all nodes in the snapshot routes_all = bf.q.routes().answer().frame() Explanation: Another way to understand the routing behavior in detail is to examine the routing tables directly. End of explanation # Get all routes for the network 90.90.90.0/24 on core routers bf.q.routes(nodes="/core/", network="90.90.90.0/24").answer().frame() Explanation: (For a large network, the first time you run a question that needs the dataplane, fetching the answer can take a few minutes. Subsequent questions are quick as the generated dataplane is saved by Batfish.) As used above, the routes() question can generate a lot of results. 
You may restrict the output using parameters to the question---to restrict the results to core routers, use nodes = "/core/", and to restrict results to the prefix 90.90.90.0/24, use **network=90.90.90.0/24". End of explanation
Description: Transport a collection of particles Step1: Optimization - Step 1 Step2: Optimization - Step 2 Step3: Optimization - Step 3 Step4: Optimization - Step 4
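The particle grids below are built with np.linspace; a pure-Python sketch of the same evenly-spaced construction shows what it does:

```python
def linspace(a, b, n):
    # n evenly spaced values from a to b, endpoints included
    step = (b - a) / (n - 1)
    return [a + i * step for i in range(n)]

print(linspace(0.95, 1.05, 5))  # 5 x-coordinates across a patch of width 0.1
```

grid_of_particles pairs two such sequences with meshgrid to cover a square patch of particles.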
Python Code: def grid_of_particles(N, w): # Create a grid of N evenly spaced particles # covering a square patch of width and height w # centered on the region 0 < x < 2, 0 < y < 1 x = np.linspace(1.0-w/2, 1.0+w/2, int(np.sqrt(N))) y = np.linspace(0.5-w/2, 0.5+w/2, int(np.sqrt(N))) x, y = np.meshgrid(x, y) return np.array([x.flatten(), y.flatten()]) X = grid_of_particles(50, 0.1) # Make a plot to confirm that this works as expected fig = plt.figure(figsize = (12,6)) plt.scatter(X[0,:], X[1,:], lw = 0, marker = '.', s = 1) plt.xlim(0, 2) plt.ylim(0, 1) X Explanation: Transport a collection of particles End of explanation N = 10000 X0 = grid_of_particles(N, w = 0.1) # Array to hold all grid points after transport X1 = np.zeros((2, N)) # Transport parameters tmax = 5.0 dt = 0.5 # Loop over grid and update all positions # This is where parallelisation would happen, since # each position is independent of all the others tic = time() for i in range(N): # Keep only the last position, not the entire trajectory X1[:,i] = trajectory(X0[:,i], tmax, dt, rk4, f)[:,-1] toc = time() print('Transport took %.3f seconds' % (toc - tic)) # Make scatter plot to show all grid points fig = plt.figure(figsize = (12,6)) plt.scatter(X1[0,:], X1[1,:], lw = 0, marker = '.', s = 1) plt.xlim(0, 2) plt.ylim(0, 1) Explanation: Optimization - Step 1: Naïve loop implementation End of explanation # Implementation of Eq. 
(1) in the exam set def doublegyre(x, y, t, A, e, w): a = e * np.sin(w*t) b = 1 - 2*e*np.sin(w*t) f = a*x**2 + b*x return np.array([ -np.pi*A*np.sin(np.pi*f) * np.cos(np.pi*y), # x component of velocity np.pi*A*np.cos(np.pi*f) * np.sin(np.pi*y) * (2*a*x + b) # y component of velocity ]) # Wrapper function to pass to integrator # X0 is a two-component vector [x, y] def f(X, t): # Parameters of the velocity field A = 0.1 e = 0.25 # epsilon w = 1 # omega return doublegyre(X[0,:], X[1,:], t, A, e, w) # 4th order Runge-Kutta integrator # X0 is a two-component vector [x, y] def rk4(X, t, dt, f): k1 = f(X, t) k2 = f(X + k1*dt/2, t + dt/2) k3 = f(X + k2*dt/2, t + dt/2) k4 = f(X + k3*dt, t + dt) return X + dt*(k1 + 2*k2 + 2*k3 + k4) / 6 # Function to calculate a trajectory from an # initial position X0 at t = 0, moving forward # until t = tmax, using the given timestep and # integrator def trajectory(X0, tmax, dt, integrator, f): t = 0 # Number of timesteps Nt = int(tmax / dt) # Array to hold the entire trajectory PX = np.zeros((*X0.shape, Nt+1)) # Initial position PX[:,:,0] = X0 # Loop over all timesteps for i in range(1, Nt+1): PX[:,:,i] = integrator(PX[:,:,i-1], t, dt, f) t += dt # Return entire trajectory return PX N = 10000 X0 = grid_of_particles(N, w = 0.1) # Array to hold all grid points after transport X1 = np.zeros((2, N)) # Transport parameters tmax = 5.0 dt = 0.5 # Loop over grid and update all positions # This is where parallelisation would happen, since # each position is independent of all the others tic = time() # Keep only the last position, not the entire trajectory X1 = trajectory(X0, tmax, dt, rk4, f)[:,:,-1] toc = time() print('Transport took %.3f seconds' % (toc - tic)) # Make scatter plot to show all grid points fig = plt.figure(figsize = (12,6)) plt.scatter(X1[0,:], X1[1,:], lw = 0, marker = '.', s = 1) plt.xlim(0, 2) plt.ylim(0, 1) Explanation: Optimization - Step 2: NumPy array operations End of explanation # Implementation of Eq. 
(1) in the exam set
def doublegyre(x, y, t, A, e, w):
    a = e * np.sin(w*t)
    b = 1 - 2*e*np.sin(w*t)
    f = a*x**2 + b*x
    return np.array([
        -np.pi*A*np.sin(np.pi*f) * np.cos(np.pi*y),              # x component of velocity
        np.pi*A*np.cos(np.pi*f) * np.sin(np.pi*y) * (2*a*x + b)  # y component of velocity
    ])

# Wrapper function to pass to integrator
# X0 is a two-component vector [x, y]
def f(X, t):
    # Parameters of the velocity field
    A = 0.1
    e = 0.25 # epsilon
    w = 1    # omega
    return doublegyre(X[0,:], X[1,:], t, A, e, w)

# 4th order Runge-Kutta integrator
# X0 is a two-component vector [x, y]
def rk4(X, t, dt, f):
    k1 = f(X, t)
    k2 = f(X + k1*dt/2, t + dt/2)
    k3 = f(X + k2*dt/2, t + dt/2)
    k4 = f(X + k3*dt, t + dt)
    return X + dt*(k1 + 2*k2 + 2*k3 + k4) / 6

# Function to calculate a trajectory from an
# initial position X0 at t = 0, moving forward
# until t = tmax, using the given timestep and
# integrator
def trajectory(X, tmax, dt, integrator, f):
    t = 0
    # Number of timesteps
    Nt = int(tmax / dt)
    # Loop over all timesteps
    for i in range(1, Nt+1):
        X = integrator(X, t, dt, f)
        t += dt
    # Return only the final position, not the entire trajectory
    return X

N = 10000
X0 = grid_of_particles(N, w = 0.1)
# Array to hold all grid points after transport
X1 = np.zeros((2, N))
# Transport parameters
tmax = 5.0
dt = 0.5

# Loop over grid and update all positions
# This is where parallelisation would happen, since
# each position is independent of all the others
tic = time()
# Keep only the last position, not the entire trajectory
X1 = trajectory(X0, tmax, dt, rk4, f)
toc = time()
print('Transport took %.3f seconds' % (toc - tic))

# Make scatter plot to show all grid points
fig = plt.figure(figsize = (12,6))
plt.scatter(X1[0,:], X1[1,:], lw = 0, marker = '.', s = 1)
plt.xlim(0, 2)
plt.ylim(0, 1)
Explanation: Optimization - Step 3: Rewrite trajectory function to only return final location
End of explanation
# Implementation of Eq.
(1) in the exam set @jit(UniTuple(float64[:], 2)(float64[:], float64[:], float64, float64, float64, float64), nopython = True) def doublegyre(x, y, t, A, e, w): a = e * np.sin(w*t) b = 1 - 2*e*np.sin(w*t) f = a*x**2 + b*x v = np.zeros((2, x.size)) return -np.pi*A*np.sin(np.pi*f) * np.cos(np.pi*y), np.pi*A*np.cos(np.pi*f) * np.sin(np.pi*y) * (2*a*x + b) # Wrapper function to pass to integrator # X0 is a two-component vector [x, y] @jit(nopython = True) def f(X, t): # Parameters of the velocity field A = np.float64(0.1) e = np.float64(0.25) # epsilon w = np.float64(1.0) # omega v = np.zeros(X.shape) v[0,:], v[1,:] = doublegyre(X[0,:], X[1,:], t, A, e, w) return v # 4th order Runge-Kutta integrator # X0 is a two-component vector [x, y] @jit(nopython = True) def rk4(X, t, dt): k1 = f(X, t) k2 = f(X + k1*dt/2, t + dt/2) k3 = f(X + k2*dt/2, t + dt/2) k4 = f(X + k3*dt, t + dt) return X + dt*(k1 + 2*k2 + 2*k3 + k4) / 6 # Function to calculate a trajectory from an # initial position X0 at t = 0, moving forward # until t = tmax, using the given timestep and # integrator @jit(nopython = True) def trajectory(X, tmax, dt): t = 0 # Number of timesteps Nt = int(tmax / dt) # Loop over all timesteps for i in range(1, Nt+1): X = rk4(X, t, dt) t += dt # Return entire trajectory return X N = 10000 X0 = grid_of_particles(N, w = 0.1) # Array to hold all grid points after transport X1 = np.zeros((2, N)) # Transport parameters tmax = 5.0 dt = 0.5 # Loop over grid and update all positions # This is where parallelisation would happen, since # each position is independent of all the others tic = time() # Keep only the last position, not the entire trajectory X1 = endpoints(X0[:,:], tmax, dt) toc = time() print('Transport took %.3f seconds' % (toc - tic)) # Make scatter plot to show all grid points fig = plt.figure(figsize = (12,6)) plt.scatter(X1[0,:], X1[1,:], lw = 0, marker = '.', s = 1) plt.xlim(0, 2) plt.ylim(0, 1) Explanation: Optimization - Step 4: Just-in-time compilation with Numba 
End of explanation
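As a side note, the payoff of the Step 2 idea can be seen in a tiny standalone sketch (all names below are my own, not from the exam set): the double-gyre velocity is evaluated for many particles in one vectorised call, and the result matches an explicit per-particle loop.

```python
import numpy as np

# Double-gyre velocity field, written so it accepts whole arrays
def velocity(x, y, t, A=0.1, e=0.25, w=1.0):
    a = e * np.sin(w * t)
    b = 1 - 2 * e * np.sin(w * t)
    f = a * x**2 + b * x
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * (2 * a * x + b)
    return u, v

# Evaluate for 1000 particles in a single call -- no Python-level loop
x = np.linspace(0.0, 2.0, 1000)
y = np.linspace(0.0, 1.0, 1000)
u_vec, v_vec = velocity(x, y, t=1.3)

# The vectorised result agrees with an explicit per-particle loop
u_loop = np.array([velocity(xi, yi, 1.3)[0] for xi, yi in zip(x, y)])
print(np.allclose(u_vec, u_loop))  # True
```

The vectorised call does the same arithmetic as the loop but dispatches it to compiled NumPy kernels, which is where the speedup in Step 2 comes from.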
Given the following text description, write Python code to implement the functionality described below step by step Description: Names Step2: 1. Python Return Statements 1.1 Functions Without Returns Thus far we've not paid much attention to function return statements. As many of you have noticed, you don't HAVE to have a return statement at the end of a function. For example Step4: There are a few things to note here. First that the function did not print out z, even though in an ordinary notebook cell (without a function definition) typing the variable name alone usually prints the value stored in it. That's not the case with functions because functions will only return to output what you tell them to. If you want to print the value of something within a function, then you must use a print statement. For example Step5: This brings up another subtelty that it's worth reemphasizing here - optional arguments can be specified in a function call in any order but if you don't specify which variable is which in the function call, it will assume that you are specifying them in the order that you defined them in the function. For example, as I've written the function, you should be able to specify none, one or two arguments in any order. Make sure you understand what each of the following cells are doing before moving on Step7: OK back to the function dummy_func and the idea of return statements. Note that we did not include a return statement in the definition cell. A return statement is not necessary to delimit the end of a function, though it can be visually useful in that way. In fact, a function will consist of any code that is indented beneath it. If you remove the indentation, any code that you write is no longer considered part of the function. For example Step9: 1.2 Return Statements as Code Breaks Now let's try adding a return statment to dummy_func Step11: Note that nothing changed about the output when we added a return to dummy_func. 
Regardless of where it is embedded in a function, a return statement always tells Python that the function is complete and it should stop execution and return to a command prompt. In this way, it can sometimes be useful inside a function. For example, consider the following function (from Lab 3 solutions)
Step12: In the example above, a return statement was used mid-function in order to break out of it when a bad temperature system is specified. This returns the prompt to the user BEFORE reaching the print functions below, which are bound to fail. In this way, the return statement acts a bit like a "break" statement, but break is only usable inside of loops (for, while) and it does not exit the entire function, but only the loop. For example, consider the following function (also from Lab #4 solutions).
Step13: Like the return statement in the temperature conversion function, break is used here to handle unrecognized input, but here it functions only to end the for loop and not to exit the function. It will still reach the two print statements at the bottom and print any orders that were collected before the bad input. To verify this, execute the cell below. For Rowan's order, enter 2 for eggs and no for spam. For Tasha's order, enter 3 eggs and yuck for spam. At the end, the function will still print Rowan's order, and skip Tasha's.
Step14: Note though that break will break you out of the for loop entirely. If you enter an unrecognized answer for spam in Rowan's order, it will not then ask Tasha for hers. Verify this by reexecuting the cell above and entering a bad value for spam in Rowan's order. Note too that entering return in place of break in the function above would have stopped the code entirely at that point, and would not have printed the good order before exiting. Try swapping it out in the function definition to verify this.
1.3 Return Statements for Assigned Output
Return statements are not only useful as code breaks, however.
Their main purpose is to return calculated values, arrays, etc. to the user so that they can be referenced in future code cells. So far our code has mostly involved printing things, but sometimes we want to use and manupulate the output of a function and so it needs to be passed back to the user. For example Step16: Note in executing the cell above, you now have output (z in this case). Nice, but stil not that useful since it just tells you what it is. The function returns z as output, but does not store anything in a variable called z. So even though you defined z within the function and returned it, python will not recognize the variable. You can see this by executing the cell below, which will return an error. Step17: This is an important higher-order thing to note about functions - variables only have meaning within them, not outside them. If I want the function to return whatever I've told it to return (z) and store it in a variable, I have to tell it so via the following syntax Step19: This time, I've told python that it should take the output of dummy_func and store it in the variable z. Functions can return arbitrary numbers of things as well Step20: And when you want to assign them to variables, you use a similar syntax, though it seems a bit funny. Step21: Note that when you define a function with multiple return variables and you assign those returned variables into stored variables with = you must have an equal number of assigned variables as returned variables. For example, the following will return an error. Step22: 2. Basic Python Plotting You've already made some plots for this course, however we have not yet manipulated their appearance in any way. For example, in Homework #2, you wrote functions that returned double slit interference patterns that looked like this Step23: This gives you access to all of pyplot's functions in the usual way of calling modules (plt.functionname). 
For example Step24: Here are some especially useful pyplot functions, called with plt.functionname(input(s)) Step25: Note these functions can be set before or after the plot command as long as they're within the same cell. Step26: As you have already encountered in your labs and homeworks, it is often useful to overplot functions on top of one another, which we do with multiple plot commands. In this case, what you need to make a quality graphic is a legend to label which is which. For example, let's plot the cubic function on top of the quadratic. In doing so below, note that we don't actually have to specify a separate variable, but that the arguments to a plot command can be combinations of variables Step27: To add a legend to a plot, you use the pyplot function legend. Perhaps the simplest way to use this function is to assign labels to each line plot with the label keyword, as below, and then call legend with no input. Note that the label keyword requires string input and that you can use LaTeX syntax within those labels Step28: As you can see above, the default for a legend is to place it at the upper right of the plot, even when it obscures the underlying lines and to draw a solid line around it (bounding box). Generally speaking, bounding boxes are rather ugly, so you should nearly always (unless you really want to set the legend apart) use the optional Boolean (True or False) keyword "frameon" to turn this off. Legend also takes the optional keyword loc to set the location of the legend. Loc should be a string, and you can see the full list of options by accessing help for the legend function by hitting shift+tab+tab inside of the plt.legend parentheses below. 
Step29: <div class=hw> ### Exercise 2 --------------------- Make a plot that meets the following criteria Step30: <div class=hw> ### Exercise 3 ------------ ### Exercise 3a Use LaTeX syntax to write Planck's law (provided in class) in the cell below ***Insert Planck's Law here*** <div class=hw> #### Exercise 3b Now, code a function with one required input (temperature) that will return a numpy array called Planck with two columns and 100000 rows. The first column should contain wavelength values between 0 and 100000 nanometers, and the second column should contain the corresponding $B_\lambda$ values. The output flux column should be in units of $W/m^2/nm$. The cell below will import all of the constants that you need to accomplish this. Step31: <div class=hw> If you've done this correctly, the following should tell you that the dimensions of the output are (2,100000). Step32: <div class=hw> #### Exercise 3c Use the output of the function to make a plot with appropriate labels (including units) and legend that shows the difference between a 6000K (sun-like) star's blackbody curve and those of stars that are slightly warmer (7000K) and cooler (5000K). Manipulate the x and/or y range of the plot to zoom in on the curves as much as you can while showing all three. *Reminder/Hint Step33: <div class=hw> #### Exercise 3d Write a paragraph describing the differences between the areas under these three curves and what that means about their total energy output. Be as quantitative as possible by estimating the area underneath them. ***Insert your description here*** <div class=hw> #### Exercise 3e write a new function, planck_norm, that does the same thing as the Planck function, but ***normalizes*** the function by the peak value. To do this, divide the flux array by its maximum value. This serves to make all of the curves peak at 1 so that you can easily compare where they peak. 
Step34: <div class=hw> Make the same plot as you did above, but with normalized flux along the y axis this time. Step35: <div class=hw> #### Exercise 3f Write a paragraph describing what normalization does and why it is useful. What physical properties are corrupted/lost by displaying the normalized flux instead of the true flux, and which are preserved? Include specific descriptions between your plots. ***insert normalization paragraph*** <div class=hw> #### Exercise 3g Write a paragraph describing the differences in peak wavelength between these three stars. You should include a precise value for the peak wavelength of each star (*Hint Step36: insert description of peak wavelengths here
Python Code: from numpy import * Explanation: Names: [Insert Your Names Here] Lab 5 - Return Statements and Plotting Basics Lab 5 Contents Python Return Statements Functions Without Returns Return Statements as Code Breaks Return Statements for Assigned Output Basic Python Plotting End of explanation def dummy_func(x=0, y=0): This function takes two optional arguments (x and y) with default values of zero and adds them together to create a new variable z, then prints it. z=x+y z dummy_func() Explanation: 1. Python Return Statements 1.1 Functions Without Returns Thus far we've not paid much attention to function return statements. As many of you have noticed, you don't HAVE to have a return statement at the end of a function. For example: End of explanation def dummy_func(x=0, y=0): This function takes two optional arguments (x and y) with default values of zero and adds them together to create a new variable z, then prints it. print("x is", x) print("y is", y) z=x+y print("z is", z) dummy_func(2,3) Explanation: There are a few things to note here. First that the function did not print out z, even though in an ordinary notebook cell (without a function definition) typing the variable name alone usually prints the value stored in it. That's not the case with functions because functions will only return to output what you tell them to. If you want to print the value of something within a function, then you must use a print statement. For example: End of explanation dummy_func(1) dummy_func(y=1) dummy_func(y=3,x=2) dummy_func(3,2) Explanation: This brings up another subtelty that it's worth reemphasizing here - optional arguments can be specified in a function call in any order but if you don't specify which variable is which in the function call, it will assume that you are specifying them in the order that you defined them in the function. For example, as I've written the function, you should be able to specify none, one or two arguments in any order. 
Make sure you understand what each of the following cells are doing before moving on End of explanation def dummy_func(x=0, y=0): This function takes two optional arguments (x and y) with default values of zero and adds them together to create a new variable z, then prints it. print("x is", x) print("y is", y) z=x+y print("z is", z) print('This is not part of the function and will not be printed when I call it. It will print when I execute this cell') dummy_func() Explanation: OK back to the function dummy_func and the idea of return statements. Note that we did not include a return statement in the definition cell. A return statement is not necessary to delimit the end of a function, though it can be visually useful in that way. In fact, a function will consist of any code that is indented beneath it. If you remove the indentation, any code that you write is no longer considered part of the function. For example: End of explanation def dummy_func(x=0, y=0): This function takes two optional arguments (x and y) with default values of zero and adds them together to create a new variable z, then prints it. print("x is", x) print("y is", y) z=x+y print("z is", z) return dummy_func() Explanation: 1.2 Return Statements as Code Breaks Now let's try adding a return statment to dummy_func End of explanation def temp_convert(system="C"): Description: Asks user to input a temperature, and prints it in C, F and K. Required Inputs: none Optional Inputs: system - string variable that allows the user to specify the temperature system. Default is C, but F and K are also options. #ask the user to enter a temperature input_temp = input("enter a temperature (set system keyword if not Celsius):") # default input type is string. 
Convert it to a Float input_temp=float(input_temp) #Convert all input temperatures to Celsius if system == "C": input_temp = input_temp elif system == "F": input_temp = (input_temp-32)*5/9 elif system == "K": input_temp = input_temp - 273 else: #if system keyword is not C, F or K, exit without any output and print a warning print("unrecognized system - please specify C, F or K") return #Convert and print the temperatures print('Temperature in Celsius is ', str(input_temp)) temp_f = input_temp*9/5 + 32 print('Temperature in Farenheit is ', str(temp_f)) temp_k = input_temp + 273 print('Temperature in Kelvin is ', str(temp_k)) return temp_convert(system="B") Explanation: Note that nothing changed about the output when we added a return to dummy_func. Regardless of where it is embedded in a function, a return statement always tells Python that the function is complete and it should stop execution and return to a command prompt. In this way, it can sometimes be useful inside a function. For example, consider the following function (from Lab 3 solutions) End of explanation def order_please(names): orders='' foods='' names.sort(key=len) for a in names: negg = input(a +": How many eggs would you like?") spam = input(a +": Would you like Spam (yes or no)?") if spam == "yes": spam = "" spam_print = "SPAM!" elif spam == "no": spam_print = "NO SPAM!" else: print('unrecognized answer. Please specify yes or no') break order =a +" wants "+negg + " eggs, and " + spam + " Spam" food = a+": "+"egg "*int(negg)+"and "+spam_print orders = orders+'\n\n'+order foods = foods+'\n'+food print(orders) print(foods) Explanation: In the example above, a return statement was used mid-function in order to break out of it when a bad temperature system is specified. This returns the prompt to the user BEFORE reaching the print functions below, which are bound to fail. 
In this way, the return statement acts a bit like a "break" statement but break is only useable inside of loops (for, while) and it does not exit the entire function, but only the loop. For example, consider the following function (also from Lab #4 solutions). End of explanation order_please(['Rowan','Tasha']) Explanation: Like the return statement in the temperature conversion function, break is used here to handle unrecognized input, but here it functions only to end the for loop and not to exit the function. It will still reach the two print statements at the bottom and print any orders that were collected before the bad input. To verify this, exectute the cell below. For Rowan's order, enter 2 for eggs and no for spam. For Tasha's order, enter 3 eggs and yuck for spam. At the end, the function will still print Rowan's order, and skip Tasha's. End of explanation def dummy_func(x=0, y=0): This function takes two optional arguments (x and y) with default values of zero and adds them together to create a new variable z, then prints it. print("x is", x) print("y is", y) z=x+y print("z is", z) return z dummy_func() Explanation: Note though that break will break you out of the for loop entirely. If you enter an unrecognized answer for spam in Rowan's order, it will not then ask Tasha for hers. Verify this by reexecuting the cell above and entering a bad value for spam in Rowan's order. Note too that entering return in place of break in the function above would have stopped the code entirely at that point, and would not have printed the good order before exiting. Try swapping it out in the function definition to verify this. 1.3 Return Statements for Assigned Output Return statements are not only useful as code breaks, however. Their main purpose is to return calculated values, arrays, etc. to the user so that they can be referenced in future code cells. 
So far our code has mostly involved printing things, but sometimes we want to use and manupulate the output of a function and so it needs to be passed back to the user. For example: End of explanation z Explanation: Note in executing the cell above, you now have output (z in this case). Nice, but stil not that useful since it just tells you what it is. The function returns z as output, but does not store anything in a variable called z. So even though you defined z within the function and returned it, python will not recognize the variable. You can see this by executing the cell below, which will return an error. End of explanation z = dummy_func() Explanation: This is an important higher-order thing to note about functions - variables only have meaning within them, not outside them. If I want the function to return whatever I've told it to return (z) and store it in a variable, I have to tell it so via the following syntax: End of explanation def dummy_func(x=0, y=0): This function takes two optional arguments (x and y) with default values of zero and adds them together to create a new variable z, then prints it. print("x is", x) print("y is", y) z=x+y print("z is", z) return x, y, z dummy_func() Explanation: This time, I've told python that it should take the output of dummy_func and store it in the variable z. Functions can return arbitrary numbers of things as well End of explanation x, y, z = dummy_func() x y z Explanation: And when you want to assign them to variables, you use a similar syntax, though it seems a bit funny. End of explanation x, y = dummy_func() Explanation: Note that when you define a function with multiple return variables and you assign those returned variables into stored variables with = you must have an equal number of assigned variables as returned variables. For example, the following will return an error. End of explanation import matplotlib.pyplot as plt %matplotlib inline Explanation: 2. 
Basic Python Plotting You've already made some plots for this course, however we have not yet manipulated their appearance in any way. For example, in Homework #2, you wrote functions that returned double slit interference patterns that looked like this: As you can see, python (unlike many languages) generally does a lovely job with coloring, line thickness etc. with simple plot commands. It does not, however, add titles, axis labels, legends, etc. and these are very important things to include. From now on, any plots that you make in Labs or homeworks should always, at a minimum, include: axis labels (including units), a plot title and a legend in any case where there's more than one line on the same plot. There are many useful optional inputs to the plot command that allow you to tweak the appearance of the plot, including: linestyle, color, placement of the legend, etc. So let's learn the basics by plotting some things. So far we have been using the command %pylab inline to allow jupyter to insert inline plots in our notebooks. Now we are going to do this more properly by importing the matplotlib library's plotting module pyplot an then telling the notebook that you still want it to display any plots inline (inside the notebook) with the magic function %matplotlib inline with the following lines End of explanation x=arange(-10,10,0.01) y=x**2 plt.plot(x, y) Explanation: This gives you access to all of pyplot's functions in the usual way of calling modules (plt.functionname). For example: End of explanation plt.xlim(-5,5) plt.ylim(0,20) plt.xlabel("the independent variable (no units)") plt.ylabel("the dependent variable (no units)") plt.title("The Quadratic Function") plt.plot(x, y, color='red',linestyle='--', linewidth=2.5) Explanation: Here are some especially useful pyplot functions, called with plt.functionname(input(s)): xlim and ylim set the range of the x and y axes, respectively and they have two required inputs - a minimum and maximum for the range. 
By default, pylab will set axis ranges to encompass all of the values in the x and y arrays that you specified in the plot call, but there are many cases where you might want to "zoom in" on certain regions of the plot. xlabel and ylabel set the labels for the x and y axes, respectively and have a required string input. title sets the title of the plot and also requires a string input. Line properties are controlled with optional keywords to the plot function, namely the commands color, linestyle and linewidth. The first two have required string arguments (lists of the options are available here), and the third (linewidth) requires a numerical argument in multiples of the default linewidth (1). These can be specified either in the call to the function or separately before or after the plot call. See the cell below for an example of all of these at play End of explanation plt.plot(x, y, color='red',linestyle='--', linewidth=2.5) plt.xlim(-5,5) plt.ylim(0,20) plt.xlabel("the independent variable (no units)") plt.ylabel("the dependent variable (no units)") plt.title("The Quadratic Function") Explanation: Note these functions can be set before or after the plot command as long as they're within the same cell. End of explanation plt.plot(x,x**2) plt.plot(x,x**3) Explanation: As you have already encountered in your labs and homeworks, it is often useful to overplot functions on top of one another, which we do with multiple plot commands. In this case, what you need to make a quality graphic is a legend to label which is which. For example, let's plot the cubic function on top of the quadratic. In doing so below, note that we don't actually have to specify a separate variable, but that the arguments to a plot command can be combinations of variables End of explanation plt.plot(x,x**2, label='$x^2$') plt.plot(x,x**3, label='$x^3$') plt.legend() Explanation: To add a legend to a plot, you use the pyplot function legend. 
Perhaps the simplest way to use this function is to assign labels to each line plot with the label keyword, as below, and then call legend with no input. Note that the label keyword requires string input and that you can use LaTeX syntax within those labels End of explanation plt.plot(x,x**2, label='$x^2$') plt.plot(x,x**3, label='$x^3$') plt.legend(loc="lower right", frameon=False) Explanation: As you can see above, the default for a legend is to place it at the upper right of the plot, even when it obscures the underlying lines and to draw a solid line around it (bounding box). Generally speaking, bounding boxes are rather ugly, so you should nearly always (unless you really want to set the legend apart) use the optional Boolean (True or False) keyword "frameon" to turn this off. Legend also takes the optional keyword loc to set the location of the legend. Loc should be a string, and you can see the full list of options by accessing help for the legend function by hitting shift+tab+tab inside of the plt.legend parentheses below. 
End of explanation # plotting code here Explanation: <div class=hw> ### Exercise 2 --------------------- Make a plot that meets the following criteria: * plots $x^a$ for all integer values of a between 0 and 5 * labels the x and y axes appropriately * includes an appropriate descriptive title * zooms in on a region of the plot where you find the differences between the functions to be the most interesting * sets linestyles and colors to be distinct for each function * makes a legend without a bounding box at an appropriate location End of explanation import astropy.units as u from astropy.constants import G, h, c, k_B #the cell below should include your planck_func function definition # insert test statements here Explanation: <div class=hw> ### Exercise 3 ------------ ### Exercise 3a Use LaTeX syntax to write Planck's law (provided in class) in the cell below ***Insert Planck's Law here*** <div class=hw> #### Exercise 3b Now, code a function with one required input (temperature) that will return a numpy array called Planck with two columns and 100000 rows. The first column should contain wavelength values between 0 and 100000 nanometers, and the second column should contain the corresponding $B_\lambda$ values. The output flux column should be in units of $W/m^2/nm$. The cell below will import all of the constants that you need to accomplish this. End of explanation x = planck_func(1000) x.shape Explanation: <div class=hw> If you've done this correctly, the following should tell you that the dimensions of the output are (2,100000). End of explanation #define your variables to be plotted (outputs from the function you defined) here #insert your plot commands here Explanation: <div class=hw> #### Exercise 3c Use the output of the function to make a plot with appropriate labels (including units) and legend that shows the difference between a 6000K (sun-like) star's blackbody curve and those of stars that are slightly warmer (7000K) and cooler (5000K). 
Manipulate the x and/or y range of the plot to zoom in on the curves as much as you can while showing all three. *Reminder/Hint: Since the function returns a 2 x 10000 element matrix you will need to use array indices in your plot command. Remember that python array indices start from 0.* End of explanation ##new normalized Planck function definition Explanation: <div class=hw> #### Exercise 3d Write a paragraph describing the differences between the areas under these three curves and what that means about their total energy output. Be as quantitative as possible by estimating the area underneath them. ***Insert your description here*** <div class=hw> #### Exercise 3e write a new function, planck_norm, that does the same thing as the Planck function, but ***normalizes*** the function by the peak value. To do this, divide the flux array by its maximum value. This serves to make all of the curves peak at 1 so that you can easily compare where they peak. End of explanation #define your variables to be plotted (outputs from the function you defined) here #insert your plot commands here Explanation: <div class=hw> Make the same plot as you did above, but with normalized flux along the y axis this time. End of explanation #you should insert some code here to find the peak wavelength for each star Explanation: <div class=hw> #### Exercise 3f Write a paragraph describing what normalization does and why it is useful. What physical properties are corrupted/lost by displaying the normalized flux instead of the true flux, and which are preserved? Include specific descriptions between your plots. ***insert normalization paragraph*** <div class=hw> #### Exercise 3g Write a paragraph describing the differences in peak wavelength between these three stars. You should include a precise value for the peak wavelength of each star (*Hint: find the **index** of the maximum flux value and use that to get the corresponding wavelength*). 
End of explanation from IPython.core.display import HTML def css_styling(): styles = open("../custom.css", "r").read() return HTML(styles) css_styling() Explanation: insert description of peak wavelengths here End of explanation
Given the following text description, write Python code to implement the functionality described below step by step Description: Getting started with The Joker The Joker (pronounced Yo-kurr) is a highly specialized Monte Carlo (MC) sampler that is designed to generate converged posterior samplings for Keplerian orbital parameters, even when your data are sparse, non-uniform, or very noisy. This is not a general MC sampler, and this is not a Markov Chain MC sampler like emcee, or pymc3 Step1: Loading radial velocity data To start, we need some radial velocity data to play with. Our ultimate goal is to construct or read in a thejoker.RVData instance, which is the main data container object used in The Joker. For this tutorial, we will use a simulated RV curve that was generated using a separate script and saved to a CSV file, and we will create an RVData instance manually. Because we previously saved this data as an Astropy ECSV file, the units are provided with the column data and read in automatically using the astropy.table read/write interface Step2: The full simulated data table has many rows (256), so let's randomly grab 4 rows to work with Step3: It looks like the time column is given in Barycentric Julian Date (BJD), so in order to create an RVData instance, we will need to create an astropy.time.Time object from this column Step4: We now have an RVData object, so we could continue on with the tutorial. But as a quick aside, there is an alternate, more automatic (automagical?) way to create an RVData instance from tabular data Step5: One of the handy features of RVData is the .plot() method, which generates a quick view of the data Step6: The data are clearly variable! But what orbits are consistent with these data? I suspect many, given how sparse they are! Now that we have the data in hand, we need to set up the sampler by specifying prior distributions over the parameters in The Joker. 
Specifying the prior distributions for The Joker parameters The prior pdf (probability distribution function) for The Joker is controlled and managed through the thejoker.JokerPrior class. The prior for The Joker is fairly customizable and the initializer for JokerPrior is therefore pretty flexible; usually too flexible for typical use cases. We will therefore start by using an alternate initializer defined on the class, JokerPrior.default(), that provides a simpler interface for creating a JokerPrior instance that uses the default prior distributions assumed in The Joker. In the default prior Step7: Once we have the prior instance, we need to generate some prior samples that we will then use The Joker to rejection sample down to a set of posterior samples. To generate prior samples, use the JokerSamples.sample() method. Here, we'll generate a large number of samples to use Step8: This object behaves like a Python dictionary in that the parameter values can be accessed via their key names Step9: They can also be written to disk or re-loaded using this same class. For example, to save these prior samples to the current directory to the file "prior_samples.hdf5" Step10: We could then load the samples from this file using Step11: Running The Joker Now that we have a set of prior samples, we can create an instance of The Joker and use the rejection sampler Step12: This works by either passing in an instance of JokerSamples containing the prior samples, or by passing in a filename that contains JokerSamples written to disk. So, for example, this is equivalent Step13: The max_posterior_samples argument above specifies the maximum number of posterior samples to return. It is often helpful to set a threshold here in cases when your data are very uninformative to avoid generating huge numbers of samples (which can slow down the sampler considerably).
In either case above, the joker_samples object returned from rejection_sample() is also an instance of the JokerSamples class, but now contains posterior samples for all nonlinear and linear parameters in the model Step14: Plotting The Joker orbit samples over the input data With posterior samples in Keplerian orbital parameters in hand for our data set, we can now plot the posterior samples over the input data to get a sense for how constraining the data are. The Joker comes with a convenience plotting function, plot_rv_curves, for doing just this Step15: It has various options to allow customizing the style of the plot Step16: Another way to visualize the samples is to plot 2D projections of the sample values, for example, to plot period against eccentricity Step17: But is the true period value included in those distinct period modes returned by The Joker? When generating the simulated data, I also saved the true orbital parameters used to generate the data, so we can load and over-plot it
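The default period prior described above, p(P) ∝ 1/P on (P_min, P_max), is equivalent to a uniform distribution in log P. As a standalone illustration of that fact (stdlib only; this is not The Joker's own sampling code), drawing from it can be sketched as:

```python
import math
import random

random.seed(42)

def sample_log_uniform_period(p_min, p_max, size):
    """Draw periods with density p(P) proportional to 1/P on (p_min, p_max).

    A 1/P density is uniform in log P, so draw u ~ Uniform(log p_min, log p_max)
    and return exp(u).
    """
    lo, hi = math.log(p_min), math.log(p_max)
    return [math.exp(random.uniform(lo, hi)) for _ in range(size)]

# Same bounds as the tutorial's prior: 2 to 1000 days.
periods = sample_log_uniform_period(2.0, 1000.0, 10_000)
```

The median of such a draw sits near sqrt(P_min * P_max), far below the midpoint of the interval, which is exactly the short-period weighting a 1/P prior encodes.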
Python Code: import astropy.table as at from astropy.time import Time import astropy.units as u from astropy.visualization.units import quantity_support import matplotlib.pyplot as plt import numpy as np %matplotlib inline import thejoker as tj # set up a random generator to ensure reproducibility rnd = np.random.default_rng(seed=42) Explanation: Getting started with The Joker The Joker (pronounced Yo-kurr) is a highly specialized Monte Carlo (MC) sampler that is designed to generate converged posterior samplings for Keplerian orbital parameters, even when your data are sparse, non-uniform, or very noisy. This is not a general MC sampler, and this is not a Markov Chain MC sampler like emcee, or pymc3: This is fundamentally a rejection sampler with some tricks that help improve performance for the two-body problem. The Joker shines over more conventional MCMC sampling methods when your radial velocity data is imprecise, non-uniform, sparse, or has a short baseline: In these cases, your likelihood function will have many, approximately equal-height modes that are often spaced widely, all properties that make conventional MCMC bork when applied to this problem. In this tutorial, we will not go through the math behind the sampler (most of that is covered in the original paper). However, some terminology is important to know for the tutorial below or for reading the documentation. Most relevant, the parameters in the two-body problem (Kepler orbital parameters) split into two sets: nonlinear and linear parameters. The nonlinear parameters are always the same in each run of The Joker: period $P$, eccentricity $e$, argument of pericenter $\omega$, and a phase $M_0$. The default linear parameters are the velocity semi-amplitude $K$, and a systematic velocity $v_0$. However, there are ways to add additional linear parameters into the model (as described in other tutorials). 
For this tutorial, we will set up an inference problem that is common to binary star or exoplanet studies, show how to generate posterior orbit samples from the data, and then demonstrate how to visualize the samples. Other tutorials demonstrate more advanced or specialized functionality included in The Joker, like: - fully customizing the parameter prior distributions, - allowing for a long-term velocity trend in the data, - continuing sampling with standard MCMC methods when The Joker returns one or few samples, - simultaneously inferring constant offsets between data sources (i.e. when using data from multiple instruments that may have calibration offsets) But let's start here with the most basic functionality! First, imports we will need later: End of explanation data_tbl = at.QTable.read('data.ecsv') data_tbl[:2] Explanation: Loading radial velocity data To start, we need some radial velocity data to play with. Our ultimate goal is to construct or read in a thejoker.RVData instance, which is the main data container object used in The Joker. For this tutorial, we will use a simulated RV curve that was generated using a separate script and saved to a CSV file, and we will create an RVData instance manually. 
Because we previously saved this data as an Astropy ECSV file, the units are provided with the column data and read in automatically using the astropy.table read/write interface: End of explanation sub_tbl = data_tbl[rnd.choice(len(data_tbl), size=4, replace=False)] sub_tbl Explanation: The full simulated data table has many rows (256), so let's randomly grab 4 rows to work with: End of explanation t = Time(sub_tbl['bjd'], format='jd', scale='tcb') data = tj.RVData(t=t, rv=sub_tbl['rv'], rv_err=sub_tbl['rv_err']) Explanation: It looks like the time column is given in Barycentric Julian Date (BJD), so in order to create an RVData instance, we will need to create an astropy.time.Time object from this column: End of explanation data = tj.RVData.guess_from_table(sub_tbl) Explanation: We now have an RVData object, so we could continue on with the tutorial. But as a quick aside, there is an alternate, more automatic (automagical?) way to create an RVData instance from tabular data: RVData.guess_from_table. This classmethod attempts to guess the time format and radial velocity column names from the columns in the data table. It is very much an experimental feature, so if you think it can be improved, please open an issue in the GitHub repo for The Joker. In any case, here it successfully works: End of explanation _ = data.plot() Explanation: One of the handy features of RVData is the .plot() method, which generates a quick view of the data: End of explanation prior = tj.JokerPrior.default( P_min=2*u.day, P_max=1e3*u.day, sigma_K0=30*u.km/u.s, sigma_v=100*u.km/u.s) Explanation: The data are clearly variable! But what orbits are consistent with these data? I suspect many, given how sparse they are! Now that we have the data in hand, we need to set up the sampler by specifying prior distributions over the parameters in The Joker. 
Specifying the prior distributions for The Joker parameters The prior pdf (probability distribution function) for The Joker is controlled and managed through the thejoker.JokerPrior class. The prior for The Joker is fairly customizable and the initializer for JokerPrior is therefore pretty flexible; usually too flexible for typical use cases. We will therefore start by using an alternate initializer defined on the class, JokerPrior.default(), that provides a simpler interface for creating a JokerPrior instance that uses the default prior distributions assumed in The Joker. In the default prior: $$ \begin{align} &p(P) \propto \frac{1}{P} \quad ; \quad P \in (P_{\rm min}, P_{\rm max})\ &p(e) = B(a_e, b_e)\ &p(\omega) = \mathcal{U}(0, 2\pi)\ &p(M_0) = \mathcal{U}(0, 2\pi)\ &p(K) = \mathcal{N}(K \,|\, \mu_K, \sigma_K)\ &\sigma_K = \sigma_{K, 0} \, \left(\frac{P}{P_0}\right)^{-1/3} \, \left(1 - e^2\right)^{-1/2}\ &p(v_0) = \mathcal{N}(v_0 \,|\, \mu_{v_0}, \sigma_{v_0})\ \end{align} $$ where $B(.)$ is the beta distribution, $\mathcal{U}$ is the uniform distribution, and $\mathcal{N}$ is the normal distribution. Most parameters in the distributions above are set to reasonable values, but there are a few required parameters for the default case: the range of allowed period values (P_min and P_max), the scale of the K prior variance sigma_K0, and the standard deviation of the $v_0$ prior sigma_v. Let's set these to some arbitrary numbers. Here, I chose the value for sigma_K0 to be typical of a binary star system; if using The Joker for exoplanet science, you will want to adjust this correspondingly. End of explanation prior_samples = prior.sample(size=250_000, random_state=rnd) prior_samples Explanation: Once we have the prior instance, we need to generate some prior samples that we will then use The Joker to rejection sample down to a set of posterior samples. To generate prior samples, use the JokerSamples.sample() method. 
Here, we'll generate a large number of samples to use: End of explanation prior_samples['P'] prior_samples['e'] Explanation: This object behaves like a Python dictionary in that the parameter values can be accessed via their key names: End of explanation prior_samples.write("prior_samples.hdf5", overwrite=True) Explanation: They can also be written to disk or re-loaded using this same class. For example, to save these prior samples to the current directory to the file "prior_samples.hdf5": End of explanation tj.JokerSamples.read("prior_samples.hdf5") Explanation: We could then load the samples from this file using: End of explanation joker = tj.TheJoker(prior, random_state=rnd) joker_samples = joker.rejection_sample(data, prior_samples, max_posterior_samples=256) Explanation: Running The Joker Now that we have a set of prior samples, we can create an instance of The Joker and use the rejection sampler: End of explanation joker_samples = joker.rejection_sample(data, "prior_samples.hdf5", max_posterior_samples=256) Explanation: This works by either passing in an instance of JokerSamples containing the prior samples, or by passing in a filename that contains JokerSamples written to disk. So, for example, this is equivalent: End of explanation joker_samples Explanation: The max_posterior_samples argument above specifies the maximum number of posterior samples to return. It is often helpful to set a threshold here in cases when your data are very uninformative to avoid generating huge numbers of samples (which can slow down the sampler considerably). 
In either case above, the joker_samples object returned from rejection_sample() is also an instance of the JokerSamples class, but now contains posterior samples for all nonlinear and linear parameters in the model: End of explanation _ = tj.plot_rv_curves(joker_samples, data=data) Explanation: Plotting The Joker orbit samples over the input data With posterior samples in Keplerian orbital parameters in hand for our data set, we can now plot the posterior samples over the input data to get a sense for how constraining the data are. The Joker comes with a convenience plotting function, plot_rv_curves, for doing just this: End of explanation fig, ax = plt.subplots(1, 1, figsize=(8, 4)) _ = tj.plot_rv_curves(joker_samples, data=data, plot_kwargs=dict(color='tab:blue'), data_plot_kwargs=dict(color='tab:red'), relative_to_t_ref=True, ax=ax) ax.set_xlabel(f'BMJD$ - {data.t.tcb.mjd.min():.3f}$') Explanation: It has various options to allow customizing the style of the plot: End of explanation fig, ax = plt.subplots(1, 1, figsize=(8, 5)) with quantity_support(): ax.scatter(joker_samples['P'], joker_samples['e'], s=20, lw=0, alpha=0.5) ax.set_xscale('log') ax.set_xlim(prior.pars['P'].distribution.a, prior.pars['P'].distribution.b) ax.set_ylim(0, 1) ax.set_xlabel('$P$ [day]') ax.set_ylabel('$e$') Explanation: Another way to visualize the samples is to plot 2D projections of the sample values, for example, to plot period against eccentricity: End of explanation import pickle with open('true-orbit.pkl', 'rb') as f: truth = pickle.load(f) fig, ax = plt.subplots(1, 1, figsize=(8, 5)) with quantity_support(): ax.scatter(joker_samples['P'], joker_samples['e'], s=20, lw=0, alpha=0.5) ax.axvline(truth['P'], zorder=-1, color='tab:green') ax.axhline(truth['e'], zorder=-1, color='tab:green') ax.text(truth['P'], 0.95, 'truth', fontsize=20, va='top', ha='left', color='tab:green') ax.set_xscale('log') ax.set_xlim(prior.pars['P'].distribution.a, prior.pars['P'].distribution.b) 
ax.set_ylim(0, 1) ax.set_xlabel('$P$ [day]') ax.set_ylabel('$e$') Explanation: But is the true period value included in those distinct period modes returned by The Joker? When generating the simulated data, I also saved the true orbital parameters used to generate the data, so we can load and over-plot it: End of explanation
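The tutorial above describes The Joker as "fundamentally a rejection sampler with some tricks". A generic toy rejection sampler may help fix the idea; this sketch contains nothing Keplerian and is purely illustrative, not The Joker's actual algorithm:

```python
import random

random.seed(0)

def rejection_sample(target_pdf, proposal_draw, proposal_pdf, m, n_keep):
    """Generic rejection sampler.

    Draw x from the proposal and keep it with probability
    target_pdf(x) / (m * proposal_pdf(x)), where m bounds target/proposal.
    Kept draws are distributed according to target_pdf.
    """
    kept = []
    while len(kept) < n_keep:
        x = proposal_draw()
        if random.random() < target_pdf(x) / (m * proposal_pdf(x)):
            kept.append(x)
    return kept

# Toy target: triangular density p(x) = 2x on [0, 1]; proposal: Uniform(0, 1).
# Here m = 2 since max(2x / 1) = 2 on the interval.
samples = rejection_sample(lambda x: 2.0 * x, random.random, lambda x: 1.0,
                           m=2.0, n_keep=5000)
```

The kept samples concentrate toward 1 (the mean of p(x) = 2x is 2/3), which is the same mechanism The Joker uses to turn prior samples into posterior samples, with the likelihood playing the role of the acceptance weight.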
7,575
Given the following text description, write Python code to implement the functionality described below step by step Description: Multiple Hypothesis Testing! aka "Plenty of P-Values" by Zane Blanton #### Data Scientist in Marketplace at trivago Standard Hypothesis Testing We set an $\alpha$ (False Error Rate) of 0.05. Thus, only five percent of null hypotheses that we test are actually rejected. We hope that analyses delivered in this way are for the most part are valid. How we hope this works Step1: But what if we go wild? Step2: And where might we encounter the second situation? Let's make up parameters for a multiple testing situation and simulate some data Assume null p values distributed as Uniform(0, 1) Assume alternative p values distributed as Beta(1, 100) This assumption means that we have a lot of power, since we're nearly guaranteed to reject our alternative hypotheses at an $\alpha$ of 0.05 Step3: Let's sample some p values! But first, let's assume that 90% of the hypotheses we are testing are null. Step4: Let's use our classic method with $\alpha$ = 0.05 Step5: Bonferroni Correction Definition of family-wise error rate (FWER) If any null hypothesis is rejected under the null, then we consider it a false positive. The Bonferroni correction controls FWER by setting $\alpha = \frac{0.05}{k}$ where $k$ is the number of hypotheses we're testing. In our case, this is $0.05 / 1000 = 0.00005$ Step6: But somehow this is unimpressive If only there were another way... Controlling False Discovery Rate (FDR) We can set an $\alpha$ control on the expected proportion of null hypotheses in the set of hypotheses we reject, known as the False Discovery Rate. Step7: Benjamini-Hochberg Step-up Procedure Step8: Wrap-Up Step9: Wrap-Up Traditional hypothesis testing controls our false positive rate, the rate at which we reject null hypotheses. Also discussed a Family-Wise Error Rate (don't make even one mistake!) 
and looked at the Bonferroni correction We also discussed a False Discovery Rate, which is the proportion of null hypotheses in the pool of rejected hypotheses, and looked at the BH Step-Up Procedure to correct for this. If you had a representative set of labelled hypotheses, you could set up a loss function and optimize your cutoff based upon it Final note on Dependence of P Values FDR methodology can be extended to deal with dependence among test statistics. In the positively correlated case, our current controls are sufficient. For the negatively correlated case, we have to reduce our p values further. Bonferroni adjustment covers all cases. Of course, there are lots of other ways to deal with this problem. Feel free to try different methods until you get the results you want! (joke)
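The BH step-up rule summarized above (find the largest k with p_(k) <= (k/m) * alpha and reject everything up to that rank) can be written out in plain Python. The notebook itself relies on statsmodels' multipletests(method='fdr_bh') for this; the following is a hand-rolled illustration of the same decision rule, not the statsmodels implementation:

```python
def bh_reject(p_values, alpha=0.10):
    """Benjamini-Hochberg step-up: return a reject/keep flag per p-value."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k (1-indexed) with p_(k) <= k/m * alpha.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k_max = rank
    # Reject every hypothesis whose rank is <= k_max.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

# Note the "step-up" behavior: p = 0.02 fails its own threshold (0.05/3),
# but is still rejected because a later rank passes.
flags = bh_reject([0.02, 0.9, 0.025], alpha=0.05)
```

With alpha = 0.05 and m = 3, the thresholds are 0.0167, 0.0333, 0.05; p_(2) = 0.025 passes rank 2's threshold, so both 0.02 and 0.025 are rejected even though p_(1) = 0.02 exceeds 0.0167 on its own.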
Python Code: total_null = 500 total_alt = 500 rejected_null = total_null * 0.05 rejected_alt = total_alt * 0.95 hypothesis_df = pd.DataFrame({'null hypotheses': [total_null, total_null - rejected_null, 0], 'rejected nulls': [0, rejected_null, rejected_null], 'alt hypotheses': [total_alt, total_alt - rejected_alt, 0], 'rejected alts': [0, rejected_alt, rejected_alt]}, index=['population of hypotheses', 'rejected_hypotheses', 'only rejected hypotheses']) hypothesis_df = hypothesis_df[['null hypotheses', 'rejected nulls', 'alt hypotheses', 'rejected alts']] def plot_hyp_df(): hypothesis_df.plot(kind='bar', stacked=True, color=['lightsteelblue', 'darkblue', 'wheat', 'orange'], title="When we are halfway right: 95% of rejected hypotheses are alt" ) plt.show() plot_hyp_df() Explanation: Multiple Hypothesis Testing! aka "Plenty of P-Values" by Zane Blanton #### Data Scientist in Marketplace at trivago Standard Hypothesis Testing We set an $\alpha$ (False Error Rate) of 0.05. Thus, only five percent of null hypotheses that we test are actually rejected. We hope that analyses delivered in this way are for the most part are valid. How we hope this works End of explanation total_null = 950 total_alt = 50 rejected_null = total_null * 0.05 rejected_alt = total_alt * 0.95 hypothesis_df = pd.DataFrame({'null hypotheses': [total_null, total_null - rejected_null, 0], 'rejected nulls': [0, rejected_null, rejected_null], 'alt hypotheses': [total_alt, total_alt - rejected_alt, 0], 'rejected alts': [0, rejected_alt, rejected_alt]}, index=['population of hypotheses', 'rejected_hypotheses', 'only rejected hypotheses']) hypothesis_df = hypothesis_df[['null hypotheses', 'rejected nulls', 'alt hypotheses', 'rejected alts']] def plot_hyp_df_again(): hypothesis_df.plot(kind='bar', stacked=True, color=['lightsteelblue', 'darkblue', 'wheat', 'orange'], title="When we're mostly wrong: 50% of rejected hypothese are alt") plt.show() plot_hyp_df_again() Explanation: But what if we go wild? 
End of explanation a = 1 b = 100 x = np.arange(0, 1, 0.001) null_distr = np.ones(1000) alt_distr = beta(a=a, b=b).pdf(x) def plot_our_p_distrs(): plt.plot(x, null_distr) plt.plot(x, alt_distr) plt.legend(['null_distr', 'alt_distr']) plt.ylim((0, 15)) plt.title('Null Versus Alternative Hypothesis P Values') plt.show() plot_our_p_distrs() np.random.seed(118943579) Explanation: And where might we encounter the second situation? Let's make up parameters for a multiple testing situation and simulate some data Assume null p values distributed as Uniform(0, 1) Assume alternative p values distributed as Beta(1, 100) This assumption means that we have a lot of power, since we're nearly guaranteed to reject our alternative hypotheses at an $\alpha$ of 0.05 End of explanation total_null = 900 total_alt = 100 null_pulls = np.random.random(total_null) alt_pulls = np.random.beta(a=a, b=b, size=total_alt) def plot_sim(): plt.hist([null_pulls, alt_pulls], bins=20, stacked=True) plt.title("Simulated P values") plt.legend(['null_pulls', 'alt_pulls']) plt.show() plot_sim() def plot_sim_gray(): plt.hist(np.concatenate([null_pulls, alt_pulls]), bins=20, color='gray') plt.title("But if we don't know which are which?") plt.show() plot_sim_gray() Explanation: Let's sample some p values! But first, let's assume that 90% of the hypotheses we are testing are null. 
End of explanation null_pulls_rejected = null_pulls[null_pulls <= 0.05] alt_pulls_rejected = alt_pulls[alt_pulls <= 0.05] def classic_hyp_hist(): plt.hist([null_pulls_rejected, alt_pulls_rejected], stacked=True) plt.legend(['null_pulls: {}'.format(len(null_pulls_rejected)), 'alt_pulls: {}'.format(len(alt_pulls_rejected))]) percent_alt = int(alt_pulls_rejected.shape[0] / (alt_pulls_rejected.shape[0] + null_pulls_rejected.shape[0]) * 100) plt.title('We End up Getting {percent_alt}% Alt Hypotheses'.format(percent_alt=percent_alt)) plt.show() classic_hyp_hist() Explanation: Let's use our classic method with $\alpha$ = 0.05 End of explanation print(alt_pulls[alt_pulls <= 0.05 / 1000]) print(null_pulls[null_pulls <= 0.05 / 1000]) print(alt_pulls[alt_pulls <= 0.10 / 1000]) print(null_pulls[null_pulls <= 0.10 / 1000]) Explanation: Bonferroni Correction Definition of family-wise error rate (FWER) If any null hypothesis is rejected under the null, then we consider it a false positive. The Bonferroni correction controls FWER by setting $\alpha = \frac{0.05}{k}$ where $k$ is the number of hypotheses we're testing. In our case, this is $0.05 / 1000 = 0.00005$ End of explanation def plot_hyp_df_fdr(): hypothesis_df.plot(kind='bar', stacked=True, color=['lightsteelblue', 'darkblue', 'wheat', 'orange'], title="Let's control the proportion of orange on the right") plt.show() plot_hyp_df_fdr() Explanation: But somehow this is unimpressive If only there were another way... Controlling False Discovery Rate (FDR) We can set an $\alpha$ control on the expected proportion of null hypotheses in the set of hypotheses we reject, known as the False Discovery Rate. 
End of explanation alpha = 0.10 null_pulls.sort() alt_pulls.sort() p_values_df = pd.DataFrame({'p': np.concatenate([null_pulls, alt_pulls]), 'case': ['null'] * total_null + ['alt'] * total_alt}).sort_values('p', ascending=True) n = p_values_df.shape[0] p_values_df['adjusted_alpha'] = [alpha * (k + 1) / n for k in range(n)] p_values_df['k'] = range(1, n + 1) p_values_to_reject = multipletests(p_values_df.p.values, alpha=alpha, method='fdr_bh')[0] p_values_rejected = p_values_df.loc[p_values_to_reject, :].reset_index(drop=True) def plot_stepup(): ax = plt.subplot(111) p_values_df.plot(x='k', y='p', kind='scatter', marker='.', ax=ax) p_values_df.plot(x='k', y='adjusted_alpha', ax=ax, color='red') ax.set_yscale('log') plt.title('Step-Up Procedure: Find the last time a blue point is under red line') plt.xlabel('kth smallest p value') plt.ylabel('Log of p-value') plt.show() plot_stepup() print(p_values_rejected) max_p_fdr = max(p_values_df.loc[p_values_to_reject, 'p']) def plot_stepup_hist(): null_pulls_rejected = null_pulls[null_pulls <= max_p_fdr] alt_pulls_rejected = alt_pulls[alt_pulls <= max_p_fdr] plt.hist([null_pulls_rejected, alt_pulls_rejected], stacked=True) plt.legend(['null_pulls: {}'.format(len(null_pulls_rejected)), 'alt_pulls: {}'.format(len(alt_pulls_rejected))]) percent_alt = int(alt_pulls_rejected.shape[0] / (alt_pulls_rejected.shape[0] + null_pulls_rejected.shape[0]) * 100) plt.title('We End up Getting {percent_alt}% Alt Hypotheses'.format(percent_alt=percent_alt)) plt.show() plot_stepup_hist() def plot_all_criteria(xmax=0.10, bins=500): plt.hist([null_pulls, alt_pulls], bins=bins, stacked=True) plt.xlim(-0.001, xmax) plt.title("Lines are Bonferroni, Step-Up, and Classic Alpha") plt.legend(['null_pulls', 'alt_pulls']) plt.axvline(0.05, color='red') plt.axvline(max_p_fdr, color='pink') plt.axvline(0.05 / 1000, color='purple') plt.show() Explanation: Benjamini-Hochberg Step-up Procedure: Sort all null hypotheses. 
The smallest p value is $p_{(1)}$, the second smallest is $p_{(2)}$, etc. Set an FDR control $\alpha=0.10$ Then, find the largest $k$ such that $p_{(k)} \le \frac{k}{m}\alpha$. Finaly, we we reject all null hypotheses $p_{(1)}, \dots, p_{(k)}$ We will accept this procedure as magic, but for those of you who are curious, here's a link to the proof: https://statweb.stanford.edu/~candes/stats300c/Lectures/Lecture7.pdf Now, let's apply this to our data End of explanation plot_all_criteria(xmax=1.0, bins=40) plot_all_criteria(xmax=0.10) Explanation: Wrap-Up End of explanation help(multipletests) Explanation: Wrap-Up Traditional hypothesis testing controls our false positive rate, the rate at which we reject null hypotheses. Also discussed a Family-Wise Error Rate (don't make even one mistake!) and looked at the Bonferroni correction We also discussed a False Discovery Rate, which is the proportion of null hypotheses in the pool of rejected hypotheses, and looked at the BH Step-Up Procedure to correct for this. If you had a representative set of labelled hypotheses, you could set up a loss function and optimize your cutoff based upon it Final note on Dependence of P Values FDR methodology can be extended to deal with dependence among test statistics. In the positively correlated case, our current controls are sufficient. For the negatively correlated case, we have to reduce our p values further. Bonferroni adjustment covers all cases. Of course, there are lots of other ways to deal with this problem. Feel free to try different methods until you get the results you want! (joke) End of explanation
7,576
Given the following text description, write Python code to implement the functionality described below step by step Description: Conditional Probability Activity & Exercise Below is some code to create some fake data on how much stuff people purchase given their age range. It generates 100,000 random "people" and randomly assigns them as being in their 20's, 30's, 40's, 50's, 60's, or 70's. It then assigns a lower probability for young people to buy stuff. In the end, we have two Python dictionaries Step1: Let's play with conditional probability. First let's compute P(E|F), where E is "purchase" and F is "you're in your 30's". The probability of someone in their 30's buying something is just the percentage of how many 30-year-olds bought something Step2: P(F) is just the probability of being 30 in this data set Step3: And P(E) is the overall probability of buying something, regardless of your age Step4: If E and F were independent, then we would expect P(E | F) to be about the same as P(E). But they're not; PE is 0.45, and P(E|F) is 0.3. So, that tells us that E and F are dependent (which we know they are in this example.) What is P(E)P(F)? Step5: P(E,F) is different from P(E|F). P(E,F) would be the probability of both being in your 30's and buying something, out of the total population - not just the population of people in their 30's Step6: P(E,F) = P(E)P(F), and they are pretty close in this example. But because E and F are actually dependent on each other, and the randomness of the data we're working with, it's not quite the same. We can also check that P(E|F) = P(E,F)/P(F) and sure enough, it is
Python Code: from numpy import random random.seed(0) totals = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0} purchases = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0} totalPurchases = 0 for _ in range(100000): ageDecade = random.choice([20, 30, 40, 50, 60, 70]) purchaseProbability = float(ageDecade) / 100.0 totals[ageDecade] += 1 if (random.random() < purchaseProbability): totalPurchases += 1 purchases[ageDecade] += 1 totals purchases totalPurchases Explanation: Conditional Probability Activity & Exercise Below is some code to create some fake data on how much stuff people purchase given their age range. It generates 100,000 random "people" and randomly assigns them as being in their 20's, 30's, 40's, 50's, 60's, or 70's. It then assigns a lower probability for young people to buy stuff. In the end, we have two Python dictionaries: "totals" contains the total number of people in each age group. "purchases" contains the total number of things purchased by people in each age group. The grand total of purchases is in totalPurchases, and we know the total number of people is 100,000. Let's run it and have a look: End of explanation PEF = float(purchases[30]) / float(totals[30]) print('P(purchase | 30s): ' + str(PEF) Explanation: Let's play with conditional probability. First let's compute P(E|F), where E is "purchase" and F is "you're in your 30's". The probability of someone in their 30's buying something is just the percentage of how many 30-year-olds bought something: End of explanation PF = float(totals[30]) / 100000.0 print("P(30's): " + str(PF)) Explanation: P(F) is just the probability of being 30 in this data set: End of explanation PE = float(totalPurchases) / 100000.0 print("P(Purchase):" + str(PE)) Explanation: And P(E) is the overall probability of buying something, regardless of your age: End of explanation print("P(30's)P(Purchase)" + str(PE * PF)) Explanation: If E and F were independent, then we would expect P(E | F) to be about the same as P(E). 
But they're not; PE is 0.45, and P(E|F) is 0.3. So, that tells us that E and F are dependent (which we know they are in this example.) What is P(E)P(F)? End of explanation print("P(30's, Purchase)" + str(float(purchases[30]) / 100000.0)) Explanation: P(E,F) is different from P(E|F). P(E,F) would be the probability of both being in your 30's and buying something, out of the total population - not just the population of people in their 30's: End of explanation print((purchases[30] / 100000.0) / PF) Explanation: P(E,F) = P(E)P(F), and they are pretty close in this example. But because E and F are actually dependent on each other, and the randomness of the data we're working with, it's not quite the same. We can also check that P(E|F) = P(E,F)/P(F) and sure enough, it is: End of explanation
7,577
Given the following text description, write Python code to implement the functionality described below step by step Description: ABU量化系统使用文档 <center> <img src="./image/abu_logo.png" alt="" style="vertical-align Step1: 之前的章节无论讲解策略优化,还是针对回测进行滑点或是手续费都是针对一支股票进行择时操作。 本节将示例讲解多支股票进行择时策略的实现,依然使用AbuFactorBuyBreak做为买入策略,其它四个卖出策略同时生效的组合。 Step2: 1. 多支股票使用相同的因子进行择时 选择的股票如下所示: choice_symbols = ['usTSLA', 'usNOAH', 'usSFUN', 'usBIDU', 'usAAPL', 'usGOOG', 'usWUBA', 'usVIPS'] 备注:本节示例都基于美股市场,针对A股市场及港股市场,比特币,期货市场后在后面的章节讲解 Step3: 使用ABuPickTimeExecute.do_symbols_with_same_factors()函数对多支股票使用相同的买入因子,卖出因子 Step4: 运行完毕,使用了ipython的magic code %%time去统计代码块运行时间,显示运行了19.2 s,本节最后会使用多进程模式运行相同的回测,会和这个时间进行比较。 备注:具体实际运行时间根据cpu的性能确定 下面代码显示orders_pd中前10个交易数据: Step5: 通过buy_cnt列可以发现每次交易数量都不一样,由于内部有资金管理控制模块默认使用atr进行仓位控制 默认资金管理控制使用AbuAtrPosition,详情请阅读源代码,下面会有自定义仓位管理的示例。 下面代码显示action_pd中前10个行为数据: Step6: 注意deal列代表了交易是否成交,由于内部有资金管理控制模块,所以不是所有交易信号都可以最后成交。 下面我们使用abu量化系统度量模块对整体结果做个度量,如下图所示(之后章节会对度量方法及模块进行详细讲解,这里请先简单使用即可)。 Step11: 2. 自定义仓位管理策略的实现 上面使用AbuMetricsBase进行度量,我们计算出: 胜率 Step12: 自定义仓位管理代码如上AbuKellyPosition: 仓位管理类需要继承AbuPositionBase 仓位管理类主要需要实现函数fit_position,即根据买入价格,本金基数等融合买入策略对买入单位进行计算 仓位管理类主要需要实现函数_init_self,外部通过字典参数将胜率等参数进行关键子参数设置(详见后使用示例) 更多资金管理代码请阅读AbuPositionBase 下面编写buy_factors2,其42d突破使用position=AbuKellyPosition 参数胜率:metrics.win_rate(41.79%) 期望收益:metrics.gains_mean(12.01%) 期望亏损:metrics.losses_mean(-4.91%), 代码如下所示: Step13: 从输出生成的orders_pd中可以看到buy Pos列所有42d突破都使用了AbuKellyPosition,60d仍然使用AbuAtrPosition Step14: 3. 多支股票使用不同的因子进行择时 使用ABuPickTimeExecute.do_symbols_with_diff_factors()函数针对不同的股票使用不同的买入因子和不同的卖出因子, 具体实现请查阅源代码ABuPickTimeExecute,使用示例如下: Step15: 如下代码通过pandas的交叉表来分析输出的orders_pd, 来证明 Step16: 4. 使用并行来提升择时运行效率 当你选择的股票非常多的时候,比如很多时候是对全市场进行回测,那就需要多进程并行来提升运行效率,AbuPickTimeMaster.do_symbols_with_same_factors_process()函数通过定义n_process_kl(同时获取股票数据的进程数)和n_process_pick_time(同时进行择时的进程数)来完成操作. 具体实现代码请阅读AbuPickTimeMaster,使用示例如下所示:
Python Code: from __future__ import print_function from __future__ import division import warnings warnings.filterwarnings('ignore') warnings.simplefilter('ignore') import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline import os import sys # 使用insert 0即只使用github,避免交叉使用了pip安装的abupy,导致的版本不一致问题 sys.path.insert(0, os.path.abspath('../')) import abupy # 使用沙盒数据,目的是和书中一样的数据环境 abupy.env.enable_example_env_ipython() Explanation: ABU量化系统使用文档 <center> <img src="./image/abu_logo.png" alt="" style="vertical-align:middle;padding:10px 20px;"><font size="6" color="black"><b>第4节 多支股票择时回测与仓位管理</b></font> </center> 作者: 阿布 阿布量化版权所有 未经允许 禁止转载 abu量化系统github地址 (您的star是我的动力!) 本节ipython notebook 首先导入abupy中本节使用的模块: End of explanation from abupy import AbuFactorBuyBreak, AbuFactorSellBreak, AbuPositionBase from abupy import AbuFactorAtrNStop, AbuFactorPreAtrNStop, AbuFactorCloseAtrNStop from abupy import ABuPickTimeExecute, AbuBenchmark, AbuCapital # buy_factors 60日向上突破,42日向上突破两个因子 buy_factors = [{'xd': 60, 'class': AbuFactorBuyBreak}, {'xd': 42, 'class': AbuFactorBuyBreak}] # 四个卖出因子同时并行生效 sell_factors = [ { 'xd': 120, 'class': AbuFactorSellBreak }, { 'stop_loss_n': 0.5, 'stop_win_n': 3.0, 'class': AbuFactorAtrNStop }, { 'class': AbuFactorPreAtrNStop, 'pre_atr_n': 1.0 }, { 'class': AbuFactorCloseAtrNStop, 'close_atr_n': 1.5 }] benchmark = AbuBenchmark() capital = AbuCapital(1000000, benchmark) Explanation: 之前的章节无论讲解策略优化,还是针对回测进行滑点或是手续费都是针对一支股票进行择时操作。 本节将示例讲解多支股票进行择时策略的实现,依然使用AbuFactorBuyBreak做为买入策略,其它四个卖出策略同时生效的组合。 End of explanation # 我们假定choice_symbols是我们选股模块的结果, choice_symbols = ['usTSLA', 'usNOAH', 'usSFUN', 'usBIDU', 'usAAPL', 'usGOOG', 'usWUBA', 'usVIPS'] Explanation: 1. 
Timing multiple stocks with the same factors The selected stocks are: choice_symbols = ['usTSLA', 'usNOAH', 'usSFUN', 'usBIDU', 'usAAPL', 'usGOOG', 'usWUBA', 'usVIPS'] Note: the examples in this section are all based on the US stock market; the A-share and Hong Kong markets, Bitcoin, and futures markets are covered in later chapters End of explanation %%time capital = AbuCapital(1000000, benchmark) orders_pd, action_pd, all_fit_symbols_cnt = ABuPickTimeExecute.do_symbols_with_same_factors(choice_symbols, benchmark, buy_factors, sell_factors, capital, show=False) Explanation: Use the ABuPickTimeExecute.do_symbols_with_same_factors() function to apply the same buy factors and sell factors to multiple stocks End of explanation orders_pd[:10] Explanation: After the run finishes, the ipython magic %%time reports that the code block ran in 19.2 s; at the end of this section the same backtest is run in multi-process mode and compared against this time. Note: the actual run time depends on CPU performance. The code below shows the first 10 trades in orders_pd: End of explanation action_pd[:10] Explanation: The buy_cnt column shows that the quantity differs from trade to trade, because the built-in money-management module sizes positions with ATR by default. The default money management uses AbuAtrPosition; see the source code for details. A custom position-management example follows below. The code below shows the first 10 actions in action_pd: End of explanation from abupy import AbuMetricsBase metrics = AbuMetricsBase(orders_pd, action_pd, capital, benchmark) metrics.fit_metrics() metrics.plot_returns_cmp(only_show_returns=True) Explanation: Note that the deal column indicates whether a trade was actually filled; because of the built-in money-management module, not every trade signal ends up being executed. Next we use the abu metrics module to evaluate the overall result, as shown in the figure below (later chapters cover the metrics methods and module in detail; for now just use it as-is). End of explanation class AbuKellyPosition(AbuPositionBase): Example Kelly position-management class def fit_position(self, factor_object): fit_position returns how many units (shares, lots, tons, contracts) to buy. It needs the factor_object strategy factor, whose historical backtest statistics (win rate, expected gain, expected loss) are set when the current factor_object is constructed; the position is then computed with the Kelly formula. :param factor_object: instance of an ABuFactorBuyBases subclass :return: how many units (shares, lots, tons, contracts) to buy # loss rate loss_rate = 1 - self.win_rate # Kelly position fraction kelly_pos = self.win_rate - loss_rate / (self.gains_mean / self.losses_mean) # maximum position cap: still limited by the upper-level maximum position control, e.g. if Kelly computes a full position it is still reduced to 75%; change the max position value to modify this kelly_pos = self.pos_max if kelly_pos > self.pos_max else kelly_pos # the result is how many units (shares, lots, tons, contracts) to buy return self.read_cash * kelly_pos / self.bp * self.deposit_rate def _init_self(self, **kwargs): Kelly position-management initialization # default Kelly win rate 0.50 self.win_rate = kwargs.pop('win_rate', 0.50) # default expected average gain 0.10 self.gains_mean = kwargs.pop('gains_mean', 0.10) # 
default expected average loss 0.05 self.losses_mean = kwargs.pop('losses_mean', 0.05) With the default settings, Kelly computes 0.5 - 0.5 / (0.10 / 0.05), so the position will be 0.25, i.e. 25% Explanation: 2. Implementing a custom position-management strategy Measuring with AbuMetricsBase above, we computed: win rate: 41.79% expected average gain: 12.01% expected average loss: -4.91% With these three parameters we can use the Kelly formula for position control; AbuKellyPosition is implemented as follows: End of explanation from abupy import AbuKellyPosition # 42d uses the AbuKellyPosition we just wrote; 60d still uses the default position-management class, i.e. the AbuAtrPosition class built into abupy buy_factors2 = [{'xd': 60, 'class': AbuFactorBuyBreak}, {'xd': 42, 'position': {'class': AbuKellyPosition, 'win_rate': metrics.win_rate, 'gains_mean': metrics.gains_mean, 'losses_mean': -metrics.losses_mean}, 'class': AbuFactorBuyBreak}] capital = AbuCapital(1000000, benchmark) orders_pd, action_pd, all_fit_symbols_cnt = ABuPickTimeExecute.do_symbols_with_same_factors(choice_symbols, benchmark, buy_factors2, sell_factors, capital, show=False) Explanation: The custom position-management code is the AbuKellyPosition class above: a position-management class must inherit from AbuPositionBase; it mainly needs to implement fit_position, which combines the buy price, capital base, and the buy strategy to compute how many units to buy; it also needs to implement _init_self, so that the win rate and other key parameters can be set externally through a dict (see the usage example below). For more money-management code, read AbuPositionBase. Next we write buy_factors2, whose 42d breakout uses position=AbuKellyPosition with win rate metrics.win_rate (41.79%), expected gain metrics.gains_mean (12.01%), and expected loss metrics.losses_mean (-4.91%), as shown in the code below: End of explanation orders_pd[:10].filter(['symbol', 'buy_cnt', 'buy_factor', 'buy_pos']) Explanation: In the generated orders_pd, the buy Pos column shows that every 42d breakout used AbuKellyPosition, while 60d still uses AbuAtrPosition End of explanation # select noah and sfun target_symbols = ['usSFUN', 'usNOAH'] # for sfun use only the 42d upward breakout as the buy factor buy_factors_sfun = [{'xd': 42, 'class': AbuFactorBuyBreak}] # for sfun use only the 60d downward breakout as the sell factor sell_factors_sfun = [{'xd': 60, 'class': AbuFactorSellBreak}] # for noah use only the 21d upward breakout as the buy factor buy_factors_noah = [{'xd': 21, 'class': AbuFactorBuyBreak}] # for noah use only the 42d downward breakout as the sell factor sell_factors_noah = [{'xd': 42, 'class': AbuFactorSellBreak}] factor_dict = dict() # build SFUN's independent dict of buy_factors and sell_factors factor_dict['usSFUN'] = {'buy_factors': buy_factors_sfun, 'sell_factors': sell_factors_sfun} # build NOAH's independent dict of buy_factors and sell_factors 
factor_dict['usNOAH'] = {'buy_factors': buy_factors_noah, 'sell_factors': sell_factors_noah} # initialize capital capital = AbuCapital(1000000, benchmark) # run with do_symbols_with_diff_factors orders_pd, action_pd, all_fit_symbols = ABuPickTimeExecute.do_symbols_with_diff_factors(target_symbols, benchmark, factor_dict, capital) Explanation: 3. Timing multiple stocks with different factors Use the ABuPickTimeExecute.do_symbols_with_diff_factors() function to apply different buy factors and different sell factors to different stocks; see the ABuPickTimeExecute source for the implementation. Usage example: End of explanation pd.crosstab(orders_pd.buy_factor, orders_pd.symbol) Explanation: The code below analyzes the output orders_pd with a pandas crosstab to verify that: noah's buy factor is always the 21d upward breakout, and sfun's buy factor is always the 42d upward breakout: End of explanation %%time from abupy import AbuPickTimeMaster capital = AbuCapital(1000000, benchmark) orders_pd, action_pd, _ = AbuPickTimeMaster.do_symbols_with_same_factors_process( choice_symbols, benchmark, buy_factors, sell_factors, capital, n_process_kl=4, n_process_pick_time=4) Explanation: 4. Using parallelism to speed up timing When you select a very large number of stocks, for example when backtesting the whole market, multi-process parallelism is needed to improve run time. The AbuPickTimeMaster.do_symbols_with_same_factors_process() function does this through n_process_kl (the number of processes fetching stock data concurrently) and n_process_pick_time (the number of processes running timing concurrently). See the AbuPickTimeMaster source for the implementation code; usage example as follows: End of explanation
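The Kelly sizing inside AbuKellyPosition above boils down to a one-line formula; as a standalone sketch (the helper name kelly_fraction is ours, not part of abupy), the 25% default and the position cap can be checked directly:

```python
def kelly_fraction(win_rate, gains_mean, losses_mean, pos_max=0.75):
    """Position fraction from the Kelly formula, capped like AbuKellyPosition."""
    loss_rate = 1.0 - win_rate
    # Kelly fraction: win rate minus loss rate divided by the gain/loss ratio
    kelly_pos = win_rate - loss_rate / (gains_mean / losses_mean)
    # still limited by the upper-level maximum position cap
    return min(kelly_pos, pos_max)

# with the _init_self defaults: 0.5 - 0.5 / (0.10 / 0.05) = 0.25, i.e. 25%
print(kelly_fraction(0.50, 0.10, 0.05))  # → 0.25
```

Plugging in the measured values (win rate 0.4179, expected gain 0.1201, expected loss 0.0491) gives a smaller, more conservative fraction, which is exactly why the 42d breakout trades become smaller after switching to Kelly sizing.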
7,578
Given the following text description, write Python code to implement the functionality described below step by step Description: 1. Load train and test data Step1: 2. Tokenize name into (surname, title, first name and maiden name) Step2: 2.1 Extract features from Title variable Step3: It seems we can extract some info from title 1. Whether a woman is married Mme/Mrs vs Miss/Mlle vs Ms(Undetermined or single Step4: 3 Examine marriages / sibling relationships Step5: Initialize data structures for algorithm Step6: 1. Extract marriages in greedy fashion. Assume is_married has no fp ( might have actually
Python Code: train = pd.read_csv("data/train.csv") train["dataset"] = "train" train.head() test = pd.read_csv("data/test.csv") test["dataset"] = "test" test.head() #Combine both datasets to predict families train = train.append(test) train.set_index(train["PassengerId"],inplace=True) Explanation: 1. Load train and test data End of explanation name_tokenizer = re.compile(r"^(?P<surname>[^,]+), (?P<title>[A-Z a-z]+?)\. (?P<f_name>[A-Z a-z.]+)?(?P<maiden_name>\([A-Za-z .]+\))?") name_tokens = ["surname","title","f_name","maiden_name"] for name_tk in name_tokens: train[name_tk] = train.Name.apply(lambda x: name_tokenizer.match(x).group(name_tk)) test[name_tk] = test.Name.apply(lambda x: name_tokenizer.match(x).group(name_tk)) train.head(n=5) Explanation: 2. Tokenize name into (surname, title, first name and maiden name) End of explanation print train.groupby(["title","Sex"]).size() Explanation: 2.1 Extract features from Title variable End of explanation #Encode special title following this logic train["has_special_title"] = train.title.apply(lambda x: x not in ["Mr","Mrs","Miss","Mme","Mlle","Master"]) Explanation: It seems we can extract some info from title 1. Whether a woman is married Mme/Mrs vs Miss/Mlle vs Ms(Undetermined or single :/? ) 2. Master title apparently given to male kids 3. Nobility vs laypeople : (Dr, Col, Capt, ...) vs (Mr,Master,Mrs,Miss). Ambiguous cases (Mlle,Mme,Ms,Don/Dona?)
End of explanation def is_married(couple_rows): are_married=False if couple_rows.irow(0).Sex != couple_rows.irow(1).Sex: #Get who is the husband and who is the wife man = couple_rows.irow(0) if couple_rows.irow(0).Sex == "male" else couple_rows.irow(1) woman = couple_rows.irow(0) if couple_rows.irow(0).Sex == "female" else couple_rows.irow(1) #Marriage tests marriage_tests = {} marriage_tests["same_f_name"] = woman.f_name is not None and woman.f_name in man.f_name marriage_tests["consistent_title"] = woman.title not in ("Miss","Mlle") and man.title != "Master" marriage_tests["same_ticket"] = woman.Ticket == man.Ticket marriage_tests["same_pclass"] = woman.Pclass == man.Pclass marriage_tests["legal_age"] = (woman.title in ("Mme","Mrs") or woman.Age >= 10) and man.Age > 10 marriage_tests["consistent_SibSp"] = (woman.SibSp > 0 and man.SibSp > 0) or (woman.SibSp == man.SibSp) are_married = marriage_tests["same_f_name"] and marriage_tests["legal_age"] consistency_checks = ( marriage_tests["consistent_title"] and marriage_tests["legal_age"] and marriage_tests["same_pclass"] and marriage_tests["same_ticket"] and marriage_tests["consistent_SibSp"]) if are_married and not consistency_checks: failed_tests = ", ".join("{}:{}".format(x,marriage_tests[x]) for x in marriage_tests if not marriage_tests[x]) print "WARNING: Sketchy marriage: {}".format(failed_tests) print couple_rows print return are_married Explanation: 3 Examine marriages / sibling relationships End of explanation #Data structures - sets to keep track of which ones have already been assigned married_people = set() people_with_parents = set() links_to_assign = train[["SibSp","Parch"]] #Matches a couple with the Max amount of kids they can have #Which is the min(husband.Parch, wife.Parch) marriages_table = {} Explanation: Initialize data structures for algorithm End of explanation #Subset only people who have spouses/siblings on the boat train_sibsp = train.ix[ train.SibSp > 0] #People grouped by surname
surname_groups = train_sibsp.groupby("surname").groups for surname in surname_groups: surname_rows = surname_groups[surname] couples = itertools.combinations(surname_rows,2) for cpl in couples: cpl_rows = train_sibsp.ix[list(cpl)] if is_married(cpl_rows): #Make sure we're not marrying somebody twice :p assert cpl[0] not in married_people,"{} is already married :/".format(cpl[0]) assert cpl[1] not in married_people,"{} is already married :/".format(cpl[1]) #add couples to married set married_people.add(cpl[0]) married_people.add(cpl[1]) marriages_table[cpl] = min(links_to_assign.ix[cpl[0]]["Parch"], links_to_assign.ix[cpl[1]]["Parch"] ) #print # break marriages_table train.ix[list((26,1066))] train.ix[ (train.SibSp > 0) | (train.Parch > 0) ].shape train Explanation: 1. Extract marriages in greedy fashion. Assume is_married has no fp ( might have actually :/ ) End of explanation
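The greedy extraction above relies on the married_people set to guarantee nobody is paired twice; a minimal standalone sketch of that pattern (skipping already-matched people rather than asserting, with a toy compatibility rule standing in for is_married) looks like:

```python
import itertools

def greedy_pairs(people, compatible):
    """Greedy pairing: scan all pairs in order and accept a pair only if
    both members are still unmatched, mirroring the married_people set above."""
    matched = set()
    pairs = []
    for a, b in itertools.combinations(people, 2):
        if a in matched or b in matched:
            continue
        if compatible(a, b):
            pairs.append((a, b))
            matched.update((a, b))
    return pairs

# toy check: pair ids whose sum is odd
print(greedy_pairs([1, 2, 3, 4], lambda a, b: (a + b) % 2 == 1))  # → [(1, 2), (3, 4)]
```

Because the scan is greedy, the result depends on iteration order, which is exactly why the surname grouping above matters: restricting candidate pairs to a surname keeps the false-positive risk low before the consistency checks fire.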
7,579
Given the following text description, write Python code to implement the functionality described below step by step Description: Explore LarterBreakspear model. Run time Step1: Perform the simulation Step2: Plot pretty pictures of what we just did
Python Code: # Third party python libraries import numpy # Try and import from "The Virtual Brain" from tvb.simulator.lab import * from tvb.datatypes.time_series import TimeSeriesRegion import tvb.analyzers.fmri_balloon as bold from tvb.simulator.plot import timeseries_interactive as timeseries_interactive Explanation: Explore LarterBreakspear model. Run time: 20 min (workstation circa 2012 Intel Xeon W3520@2.67Ghz) Memory requirement: ~300 MB Storage requirement: ~150MB NOTE: stats were made for a simulation using the 998 region Hagmann connectivity matrix. End of explanation LOG.info("Configuring...") #Initialise a Model, Coupling, and Connectivity. lb = models.LarterBreakspear(QV_max=1.0, QZ_max=1.0, d_V=0.65, d_Z=0.65, aee=0.36, ani=0.4, ane=1.0, C=0.1) lb.variables_of_interest = ["V", "W", "Z"] white_matter = connectivity.Connectivity(load_default=True) white_matter.speed = numpy.array([7.0]) white_matter_coupling = coupling.HyperbolicTangent(a=0.5*lb.QV_max, midpoint=lb.VT, sigma=lb.d_V) #Initialise an Integrator heunint = integrators.HeunDeterministic(dt=0.2) #Initialise some Monitors with period in physical time mon_tavg = monitors.TemporalAverage(period=2.) mon_bold = monitors.Bold(period=2000.) #Bundle them what_to_watch = (mon_bold, mon_tavg) #Initialise a Simulator -- Model, Connectivity, Integrator, and Monitors. sim = simulator.Simulator(model = lb, connectivity = white_matter, coupling = white_matter_coupling, integrator = heunint, monitors = what_to_watch) sim.configure() LOG.info("Starting simulation...") #Perform the simulation bold_data, bold_time = [], [] tavg_data, tavg_time = [], [] for raw, tavg in sim(simulation_length=480000): if not raw is None: bold_time.append(raw[0]) bold_data.append(raw[1]) if not tavg is None: tavg_time.append(tavg[0]) tavg_data.append(tavg[1]) LOG.info("Finished simulation.") Explanation: Perform the simulation End of explanation #Make the lists numpy.arrays for easier use. 
LOG.info("Converting result to array...") TAVG_TIME = numpy.array(tavg_time) BOLD_TIME = numpy.array(bold_time) BOLD = numpy.array(bold_data) TAVG = numpy.array(tavg_data) #Create TimeSeries instance tsr = TimeSeriesRegion(data = TAVG, time = TAVG_TIME, sample_period = 2.) tsr.configure() #Create and run the monitor/analyser bold_model = bold.BalloonModel(time_series = tsr) bold_data = bold_model.evaluate() bold_tsr = TimeSeriesRegion(connectivity = white_matter, data = bold_data.data, time = bold_data.time) #Prutty puctures... tsi = timeseries_interactive.TimeSeriesInteractive(time_series = bold_tsr) tsi.configure() tsi.show() Explanation: Plot pretty pictures of what we just did End of explanation
7,580
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: I have the following dataframe:
Problem: import pandas as pd df = pd.DataFrame({'text': ['abc', 'def', 'ghi', 'jkl']}) def g(df): return pd.DataFrame({'text': [', '.join(df['text'].str.strip('"').tolist())]}) result = g(df.copy())
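The one-liner above first strips surrounding double quotes from each value and then joins everything with ', '; the same logic in plain Python, as a quick sketch for checking the expected output:

```python
def join_text(values):
    """Strip surrounding double quotes from each value, then join with ', ',
    matching the str.strip('"') + ', '.join used in the pandas solution."""
    return ', '.join(v.strip('"') for v in values)

print(join_text(['"abc"', 'def', '"ghi"', 'jkl']))  # → abc, def, ghi, jkl
```

For the sample frame, whose values carry no quotes, the strip is a no-op and the result is simply 'abc, def, ghi, jkl' in a one-row DataFrame.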
7,581
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2020 The TensorFlow IO Authors. Step1: Audio data preparation and augmentation <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https Step2: Usage Reading an audio file In TensorFlow IO, the class tfio.audio.AudioIOTensor reads an audio file into a lazily loaded IOTensor: Step3: In the example above, the Flac file brooklyn.flac comes from a publicly accessible audio clip on Google Cloud. The example uses the GCS address gs Step4: The audio can be played with: Step5: More conveniently, the tensor can be converted to floats and the audio clip displayed in a plot: Step6: Trimming the noise It is sometimes useful to trim the noise from the audio, which can be done through the API tfio.audio.trim. The API returns a pair of [start, stop] positions for the segment: Step7: Fade in and fade out A useful audio-engineering technique is fading, that is, gradually increasing or decreasing the audio signal. This can be done with tfio.audio.fade, which supports different fade shapes such as linear, logarithmic, or exponential: Step8: Spectrogram Advanced audio processing often works on frequency changes over time. In tensorflow-io, a waveform can be converted to a spectrogram through tfio.audio.spectrogram. Step9: It can also be converted to other different scales: Step10: SpecAugment Besides the data preparation and augmentation APIs above, the tensorflow-io package also provides advanced spectrogram augmentation, most notably the frequency and time masking discussed in SpecAugment Step11: Time masking In time masking, t consecutive time steps [t0, t0 + t) are masked, where t is chosen from a uniform distribution from 0 to the time mask parameter T, and t0 is chosen from [0, τ − t), where τ is the number of time steps.
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2020 The TensorFlow IO Authors. End of explanation !pip install tensorflow-io Explanation: Audio data preparation and augmentation <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://tensorflow.google.cn/io/tutorials/audio"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/io/tutorials/audio.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/io/tutorials/audio.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/io/tutorials/audio.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a></td> </table> Overview One of the biggest challenges in automatic speech recognition is the preparation and augmentation of audio data. Audio data analysis can involve the time or the frequency domain, which adds complexity compared with other data sources such as images. As part of the TensorFlow ecosystem, the tensorflow-io package provides quite a few audio-related APIs that are useful for simplifying the preparation and augmentation of audio data. Setup Install the required packages, then restart the runtime End of explanation import tensorflow as tf import tensorflow_io as tfio audio = tfio.audio.AudioIOTensor('gs://cloud-samples-tests/speech/brooklyn.flac') print(audio) Explanation: Usage Reading an audio file In TensorFlow IO, the class tfio.audio.AudioIOTensor reads an audio file into a lazily loaded IOTensor: End of 
explanation audio_slice = audio[100:] # remove last dimension audio_tensor = tf.squeeze(audio_slice, axis=[-1]) print(audio_tensor) Explanation: In the example above, the Flac file brooklyn.flac comes from a publicly accessible audio clip on Google Cloud. The example uses the GCS address gs://cloud-samples-tests/speech/brooklyn.flac directly, since TensorFlow supports the GCS file system. Besides the Flac format, AudioIOTensor also supports the WAV, Ogg, MP3, and MP4A formats thanks to automatic file-format detection. AudioIOTensor is a lazily loaded tensor, so at first only the shape, dtype, and sample rate are shown. The shape of an AudioIOTensor is represented as [samples, channels], which means the audio clip loaded here is mono-channel audio (28979 samples of type int16). The content of the audio clip is read only when needed. To read it, convert the AudioIOTensor to a Tensor with to_tensor(), or read it through slicing. Slicing is especially useful when only a small part of a large audio clip is needed: End of explanation from IPython.display import Audio Audio(audio_tensor.numpy(), rate=audio.rate.numpy()) Explanation: The audio can be played with: End of explanation import matplotlib.pyplot as plt tensor = tf.cast(audio_tensor, tf.float32) / 32768.0 plt.figure() plt.plot(tensor.numpy()) Explanation: More conveniently, convert the tensor to floats and display the audio clip in a plot: End of explanation position = tfio.audio.trim(tensor, axis=0, epsilon=0.1) print(position) start = position[0] stop = position[1] print(start, stop) processed = tensor[start:stop] plt.figure() plt.plot(processed.numpy()) Explanation: Trimming the noise It is sometimes useful to trim the noise from the audio, which can be done through the API tfio.audio.trim. The API returns a pair of [start, stop] positions for the segment: End of explanation fade = tfio.audio.fade( processed, fade_in=1000, fade_out=2000, mode="logarithmic") plt.figure() plt.plot(fade.numpy()) Explanation: Fade in and fade out A useful audio-engineering technique is fading, that is, gradually increasing or decreasing the audio signal. This can be done with tfio.audio.fade, which supports different fade shapes such as linear, logarithmic, or exponential: End of explanation # Convert to spectrogram spectrogram = tfio.audio.spectrogram( fade, nfft=512, window=512, stride=256) plt.figure() plt.imshow(tf.math.log(spectrogram).numpy()) Explanation: Spectrogram Advanced audio processing often works on frequency changes over time. In tensorflow-io, a waveform can be converted to a spectrogram through tfio.audio.spectrogram. End of explanation # Convert to mel-spectrogram mel_spectrogram = tfio.audio.melscale( spectrogram, rate=16000, mels=128, fmin=0, fmax=8000) plt.figure() plt.imshow(tf.math.log(mel_spectrogram).numpy()) # Convert to db scale 
mel-spectrogram dbscale_mel_spectrogram = tfio.audio.dbscale( mel_spectrogram, top_db=80) plt.figure() plt.imshow(dbscale_mel_spectrogram.numpy()) Explanation: It can also be converted to other different scales: End of explanation # Freq masking freq_mask = tfio.audio.freq_mask(dbscale_mel_spectrogram, param=10) plt.figure() plt.imshow(freq_mask.numpy()) Explanation: SpecAugment Besides the data preparation and augmentation APIs above, the tensorflow-io package also provides advanced spectrogram augmentation, most notably the frequency and time masking discussed in SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition (Park et al., 2019). Frequency masking In frequency masking, frequency channels [f0, f0 + f) are masked, where f is chosen from a uniform distribution from 0 to the frequency mask parameter F, and f0 is chosen from (0, ν − f), where ν is the number of frequency channels. End of explanation # Time masking time_mask = tfio.audio.time_mask(dbscale_mel_spectrogram, param=10) plt.figure() plt.imshow(time_mask.numpy()) Explanation: Time masking In time masking, t consecutive time steps [t0, t0 + t) are masked, where t is chosen from a uniform distribution from 0 to the time mask parameter T, and t0 is chosen from [0, τ − t), where τ is the number of time steps. End of explanation
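The frequency-masking definition quoted above can be sketched directly in numpy; note this follows the SpecAugment paper's description, while tfio.audio.freq_mask's internal sampling details may differ:

```python
import numpy as np

def freq_mask(spec, F=10, rng=np.random.default_rng(0)):
    """Zero out a random band of f consecutive frequency channels,
    with f ~ Uniform(0, F) and f0 ~ Uniform(0, nu - f); spec is (time, freq)."""
    nu = spec.shape[1]
    f = int(rng.integers(0, F + 1))
    f0 = int(rng.integers(0, nu - f)) if nu - f > 0 else 0
    out = spec.copy()
    out[:, f0:f0 + f] = 0.0
    return out

masked = freq_mask(np.ones((100, 64)))
print(masked.shape)  # → (100, 64)
```

Time masking is the transposed operation: the same band-zeroing applied along axis 0 with parameter T instead of F.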
7,582
Given the following text description, write Python code to implement the functionality described below step by step Description: Kampff lab - Polytrode Impedance Here is a description of the dataset Step1: create a DataIO (and remove it if it already exists) Step2: CatalogueConstructor Make the catalogue on the first 280 s. After this the signal is totally noisy. Step3: Noise measurement This is done with the MAD, a robust estimate of variance. mad = median(abs(x - median(x))) * 1.4826 Step4: Inspect waveform quality at catalogue level Step5: construct catalogue Step6: apply peeler This is the real spike sorting Step7: final inspection of cells
Python Code: # supposing the dataset is downloaded here # workdir = '/media/samuel/dataspikesorting/DataSpikeSortingHD2/kampff/polytrode Impedance/' workdir = '/home/samuel/Documents/projet/DataSpikeSorting/kampff/polytrode Impedance/' # Input file filename = workdir + 'amplifier2017-02-02T17_18_46/amplifier2017-02-02T17_18_46.bin' # dirname is where tridesclous will put everything dirname = workdir + 'tdc_amplifier2017-02-02T17_18_46' %matplotlib notebook import numpy as np import matplotlib.pyplot as plt import tridesclous as tdc from tridesclous import DataIO, CatalogueConstructor, Peeler import os, shutil Explanation: Kampff lab - Polytrode Impedance Here is a description of the dataset: http://www.kampff-lab.org/polytrode-impedance Here is the official publication of this open dataset: https://crcns.org/data-sets/methods/evi-1/about-evi-1 And the citation: Joana P Neto, Pedro Baião, Gonçalo Lopes, João Frazão, Joana Nogueira, Elvira Fortunato, Pedro Barquinha, Adam R Kampff (2018); Extracellular recordings using a dense electrode array allowing direct comparison of the same neural signal measured with low and high impedance electrodes CRCNS.org http://dx.doi.org/10.6080/K07M064M And a paper on the results in Frontiers https://doi.org/10.3389/fnins.2018.00715 Introduction For this study they used a polytrode arranged in a chessboard pattern so that 16 of the electrodes have a 1Mohm (at 1kHz) impedance and the other 16 have a 100kohm impedance. The goal of this notebook is to reproduce the spike sorting pipeline on the dataset. In the official Frontiers paper, Joana Neto and co-authors used kilosort for spike sorting, but there is no public report of the result on this dataset. Here is a replicable and reproducible pipeline with tridesclous. This is done only on one file, amplifier2017-02-02T17_18_46, but the same script can be applied easily to other files from the same dataset. Download The dataset must be downloaded locally and manually from crcns or from the google drive into the "workdir" path. 
The PRB file tridesclous need a PRB file that describe the geometry of probe. Create it by copy/paste or download it via github. Here I split the probe in 3 groups : * 0 is all channel * 1 is 1Mohm electrodes * 2 is 100kohm electrodes So we could test tridesclous on all channels or one or the other impedance. python channel_groups = { 0: { 'channels': [0, 31, 24, 7, 1, 21, 10, 30, 25, 6, 15, 20, 11, 16, 26, 5, 14, 19, 12, 17, 27, 4, 8, 18, 13, 23, 28, 3, 9, 29, 2, 22], 'graph' : [], 'geometry': { 0: [18.0, 0.0], 31: [18.0, 25.], 24: [0.0, 37.5], 7: [36.0, 37.5], 1: [18.0, 50.], 21: [0.0, 62.5], 10: [36., 62.5], 30: [18.0, 75.0], 25: [0.0, 87.5], 6: [36.0, 87.5], 15: [18.0, 100.0], 20: [0.0, 112.5], 11: [36.0, 112.5], 16: [18.0, 125.0], 26: [0.0, 137.5], 5: [36.0, 137.5], 14: [18.0, 150.0], 19: [0.0, 162.5], 12: [36.0, 162.5], 17: [18.0, 175.0], 27: [0.0, 187.5], 4: [36.0, 187.5], 8: [18.0, 200.0], 18: [0.0, 212.5], 13: [36.0, 212.5], 23: [18.0, 225.0], 28: [0.0, 237.5], 3: [36.0, 237.5], 9: [18.0, 250.0], 29: [0.0, 262.5], 2: [36.0, 262.5], 22: [18.0, 275.0], }, }, 1: { 'channels': [0, 24, 1, 10, 25, 15, 11, 26, 14, 12, 27, 8, 13, 28, 9, 2], 'graph' : [], 'geometry': { 0: [18.0, 0.0], 24: [0.0, 37.5], 1: [18.0, 50.], 10: [36., 62.5], 25: [0.0, 87.5], 15: [18.0, 100.0], 11: [36.0, 112.5], 26: [0.0, 137.5], 14: [18.0, 150.0], 12: [36.0, 162.5], 27: [0.0, 187.5], 8: [18.0, 200.0], 13: [36.0, 212.5], 28: [0.0, 237.5], 9: [18.0, 250.0], 2: [36.0, 262.5], } }, 2: { 'channels': [31, 7, 21, 30, 6, 20, 16, 5, 19, 17, 4, 18, 23, 3, 29, 22], 'graph' : [], 'geometry': { 31: [18.0, 25.0], 7: [36.0, 37.5], 21: [0.0, 62.5], 30: [18.0, 75.0], 6: [36.0, 87.5], 20: [0.0, 112.5], 16: [18.0, 125.0], 5: [36.0, 137.5], 19: [0.0, 162.5], 17: [18.0, 175.0], 4: [36.0, 187.5], 18: [0.0, 212.5], 23: [18.0, 225.0], 3: [36.0, 237.5], 29: [0.0, 262.5], 22: [18.0, 275.0], } } } End of explanation if os.path.exists(dirname): #remove is already exists to restart from stractch 
shutil.rmtree(dirname) dataio = DataIO(dirname=dirname) # feed DataIO with one file dataio.set_data_source(type='RawData', filenames=[filename], sample_rate=20000., dtype='uint16', total_channel=32, bit_to_microVolt=0.195) print(dataio) # set the probe file dataio.set_probe_file('kampff_polytrode_impedance_32.prb') Explanation: create a DataIO (and remove it if it already exists) End of explanation cc = CatalogueConstructor(dataio=dataio, chan_grp=0) cc.set_preprocessor_params(chunksize=1024, common_ref_removal=False, highpass_freq=250., lowpass_freq=9500., peak_sign='-', relative_threshold=5., peak_span=0.0001) cc.estimate_signals_noise(duration=30.) cc.run_signalprocessor(duration=280.) cc.extract_some_waveforms(n_left=-15, n_right=20, mode='rand', nb_max=20000) cc.clean_waveforms(alien_value_threshold=100.) cc.extract_some_features(method='peak_max') cc.find_clusters(method='sawchaincut', kde_bandwith=1.0) print(cc) Explanation: CatalogueConstructor Make the catalogue on the first 280 s. After this the signal is totally noisy. End of explanation dataio = DataIO(dirname=dirname) tdc.summary_noise(dataio=dataio, chan_grp=0) Explanation: Noise measurement This is done with the MAD, a robust estimate of variance. mad = median(abs(x - median(x))) * 1.4826 End of explanation
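The MAD estimate mentioned above is easy to verify standalone; for Gaussian noise the 1.4826 factor makes it converge to the true standard deviation, which is why it works as a robust noise level for spike thresholding:

```python
import numpy as np

def mad(x):
    """Robust noise estimate used above: median absolute deviation,
    scaled by 1.4826 so it matches the standard deviation for Gaussian noise."""
    x = np.asarray(x, dtype=float)
    return np.median(np.abs(x - np.median(x))) * 1.4826

# for large Gaussian samples this approaches the true standard deviation (2.0 here)
noise = np.random.default_rng(0).normal(0.0, 2.0, 100000)
print(mad(noise))
```

Unlike the plain standard deviation, this estimate barely moves when a few large spikes are present in the trace, which is exactly the point of using it on extracellular recordings.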
7,583
Given the following text description, write Python code to implement the functionality described below step by step Description: Naive Bayes and Bayes Classifiers author Step1: Simple Gaussian Example Step2: The data seems like it comes from two normal distributions, with the cyan class being more prevalent than the magenta class. A natural way to model this data would be to create a normal distribution for the cyan data, and another for the magenta distribution. Let's take a look at doing that. All we need to do is use the from_samples class method of the NormalDistribution class. Step3: It looks like some aspects of the data are captured well by doing things this way-- specifically the mean and variance of the normal distributions. This allows us to easily calculate $P(D|M)$ as the probability of a sample under either the cyan or magenta distributions using the normal (or Gaussian) probability density equation Step4: The prior $P(M)$ is a vector of probabilities over the classes that the model can predict, also known as components. In this case, if we draw a sample randomly from the data that we have, there is a ~83% chance that it will come from the cyan class and a ~17% chance that it will come from the magenta class. Let's multiply the probability densities we got before by this imbalance. Step5: This looks a lot more faithful to the original data, and actually corresponds to $P(M)P(D|M)$, the prior multiplied by the likelihood. However, these aren't actually probability distributions anymore, as they no longer integrate to 1. This is why the $P(M)P(D|M)$ term has to be normalized by the $P(D)$ term in Bayes' rule in order to get a probability distribution over the components. However, $P(D)$ is difficult to determine exactly-- what is the probability of the data? Well, we can sum over the classes to get that value, since $P(D) = \sum_{i=1}^{c} P(D|M)P(M)$ for a problem with c classes. 
This translates into $P(D) = P(M=Cyan)P(D|M=Cyan) + P(M=Magenta)P(D|M=Magenta)$ for this specific problem, and those values can just be pulled from the unnormalized plots above. This gives us the full Bayes' rule, with the posterior $P(M|D)$ being the proportion of density of the above plot coming from each of the two distributions at any point on the line. Let's take a look at the posterior probabilities of the two classes on the same line. Step6: The top plot shows the same densities as before, while the bottom plot shows the proportion of the density belonging to either class at that point. This proportion is known as the posterior $P(M|D)$, and can be interpreted as the probability of that point belonging to each class. This is one of the native benefits of probabilistic models: instead of providing a hard class label for each sample, they can provide a soft label in the form of the probability of belonging to each class. We can implement all of this simply in pomegranate using the NaiveBayes class. Step7: Looks like we're getting the same plots for the posteriors just through fitting the naive Bayes model directly to data. The predictions made will come directly from the posteriors in this plot, with cyan predictions happening whenever the cyan posterior is greater than the magenta posterior, and vice-versa. Naive Bayes In the univariate setting, naive Bayes is identical to a general Bayes classifier. The divergence occurs in the multivariate setting: the naive Bayes model assumes independence of all features, while a Bayes classifier is more general and can support more complicated interactions or covariances between features. Let's take a look at what this means in terms of Bayes' rule.
\begin{align} P(M|D) &= \frac{P(M)P(D|M)}{P(D)} \\ &= \frac{P(M)\prod_{i=1}^{d}P(D_{i}|M_{i})}{P(D)} \end{align} This looks fairly simple to compute, as we just need to pass each dimension into the appropriate distribution and then multiply the returned probabilities together. This simplicity is one of the reasons why naive Bayes is so widely used. Let's look closer at using this in pomegranate, starting off by generating two blobs of data that overlap a bit and inspecting them. Step8: Now, let's fit our naive Bayes model to this data using pomegranate. We can use the from_samples class method, pass in the distribution that we want to model each dimension, and then the data. We choose to use NormalDistribution in this particular case, but any supported distribution would work equally well, such as BernoulliDistribution or ExponentialDistribution. To ensure we get the correct decision boundary, let's also plot the boundary recovered by sklearn. Step9: Drawing the decision boundary helps to verify that we've produced a good result by cleanly splitting the two blobs from each other. Bayes' rule provides a great deal of flexibility in terms of what the actual likelihood functions are. For example, when considering a multivariate distribution, there is no need for each dimension to be modeled by the same distribution. In fact, each dimension can be modeled by a different distribution, as long as we can multiply the $P(D|M)$ terms together. Classifying Signal with Different Distributions on Different Features Let's consider the example of some noisy signals that have been segmented. We know that they come from two underlying phenomena, the cyan phenomenon and the magenta phenomenon, and want to classify future segments. To do this, we have three features-- the mean signal of the segment, the standard deviation, and the duration.
Step11: We get identical values for sklearn and for pomegranate, which is good. However, let's take a look at the data itself to see whether a Gaussian distribution is the appropriate distribution for the data. Step12: So, unsurprisingly (since you can see that I used non-Gaussian distributions to generate the data originally), it looks like only the mean follows a normal distribution, whereas the standard deviation seems to follow either a gamma or a log-normal distribution. We can take advantage of that by explicitly using these distributions instead of approximating them as normal distributions. pomegranate is flexible enough to allow for this, whereas sklearn currently is not. Step13: It looks like we're able to get a small improvement in accuracy just by using appropriate distributions for the features, without any type of data transformation or filtering. This certainly seems worthwhile if you can determine what the appropriate underlying distribution is. Next, there's obviously the issue of speed. Let's compare the speed of the pomegranate implementation and the sklearn implementation. Step14: Looks as if on this small dataset they're all taking approximately the same time. This is pretty much expected, as the fitting step is fairly simple and both implementations use C-level numerics for the calculations. We can give a more thorough treatment of the speed comparison on larger datasets. Let's look at the average time it takes to fit a model to data of increasing dimensionality across 25 runs. Step15: It appears as if the two implementations are basically the same speed. This is unsurprising given the simplicity of the calculations, and as mentioned before, the low-level implementation. Multivariate Gaussian Bayes Classifiers The natural generalization of the naive Bayes classifier is to allow any multivariate function to take the place of $P(D|M)$ instead of it being the product of several univariate probability distributions.
One immediate difference is that instead of creating a Gaussian model with effectively a diagonal covariance matrix, you can now create one with a full covariance matrix. Let's see an example of that at work. Step16: It looks like we are able to get a better boundary between the two blobs of data. The primary reason for this is that the data don't form spherical clusters, like you assume when you force a diagonal covariance matrix, but are tilted ellipsoids that can be better modeled by a full covariance matrix. We can quantify this quickly by looking at performance on the training data. Step17: Looks like there is a significant boost. Naturally you'd want to evaluate the performance of the model on separate validation data, but for the purposes of demonstrating the effect of a full covariance matrix this should be sufficient. Gaussian Mixture Model Bayes Classifier While using a full covariance matrix is certainly more complicated than using only the diagonal, there is no reason that the $P(D|M)$ has to even be a single simple distribution versus a full probabilistic model. After all, all probabilistic models, including general mixtures, hidden Markov models, and Bayesian networks, can calculate $P(D|M)$. Let's take a look at an example of using a mixture model instead of a single Gaussian distribution.
Python Code: %matplotlib inline import matplotlib.pyplot as plt import seaborn; seaborn.set_style('whitegrid') import numpy from pomegranate import * numpy.random.seed(0) numpy.set_printoptions(suppress=True) %load_ext watermark %watermark -m -n -p numpy,scipy,pomegranate Explanation: Naive Bayes and Bayes Classifiers author: Jacob Schreiber <br> contact: jmschreiber91@gmail.com Bayes classifiers are some of the simplest machine learning models that exist, due to their intuitive probabilistic interpretation and simple fitting step. Each class is modeled as a probability distribution, and the data is interpreted as samples drawn from these underlying distributions. Fitting the model to data is as simple as calculating maximum likelihood parameters for the data that falls under each class, and making predictions is as simple as using Bayes' rule to determine which class is most likely given the distributions. Bayes' Rule is the following: \begin{equation} P(M|D) = \frac{P(D|M)P(M)}{P(D)} \end{equation} where M stands for the model and D stands for the data. $P(M)$ is known as the <i>prior</i>, because it is the probability that a sample is of a certain class before you even know what the sample is. This is generally just the frequency of each class. Intuitively, it makes sense that you would want to model this, because if one class occurs 10x more than another class, it is more likely that a given sample will belong to that distribution. $P(D|M)$ is the likelihood, or the probability, of the data under a given model. Lastly, $P(M|D)$ is the posterior, which is the probability of each component of the model, or class, being the component which generated the data. It is called the posterior because the prior corresponds to probabilities before seeing data, and the posterior corresponds to probabilities after observing the data. In cases where the prior is uniform, the posterior is just equal to the normalized likelihoods. 
This equation forms the basis of most probabilistic modeling, with interesting priors allowing the user to inject sophisticated expert knowledge into the problem directly. pomegranate implements two distinct models of this format, with the simpler being the naive Bayes classifier. The naive Bayes classifier assumes that each feature is independent of every other feature, and so breaks down $P(D|M)$ to be $\prod\limits_{i=1}^{d} P(D_{i}|M_{i})$ where $i$ is a specific feature in a data set that has $d$ features in it. This typically means faster calculations because covariance across features doesn't need to be considered, but it is also a natural way to model each feature as a different distribution because it ignores the complexities of modeling the covariance between, say, an exponential distribution and a normal distribution. The more general model is the Bayes classifier, which does not assume that each feature is independent of the others. This allows you to plug in anything for the likelihood function, whether it be a multivariate Gaussian distribution or a whole other compositional model. For instance, one could create a classifier whose components are each large mixture models. This enables much more complex models to be learned within the simple framework of Bayes' rule.
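As a concrete check of the factored form above, a two-class, two-feature naive Bayes posterior can be computed by hand. This is a minimal stdlib sketch with made-up class parameters; the helper names are illustrative assumptions, not pomegranate's API:

```python
import math

def normal_pdf(x, mu, sigma):
    # univariate Gaussian density, one P(D_i | M_i) term
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

def naive_bayes_posterior(x, classes):
    # classes: list of (prior, [(mu, sigma) per feature]) pairs.
    # P(M)P(D|M) under the naive assumption: product of per-feature densities.
    joint = [prior * math.prod(normal_pdf(xi, mu, sigma)
                               for xi, (mu, sigma) in zip(x, params))
             for prior, params in classes]
    evidence = sum(joint)  # P(D) normalizes the posterior
    return [j / evidence for j in joint]

classes = [(0.5, [(0.0, 1.0), (0.0, 1.0)]),  # class A centered at (0, 0)
           (0.5, [(3.0, 1.0), (3.0, 1.0)])]  # class B centered at (3, 3)
post = naive_bayes_posterior([0.2, -0.1], classes)
```

A point near the first class's means receives nearly all of the posterior mass, and the two entries always sum to one.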
End of explanation X = numpy.concatenate((numpy.random.normal(3, 1, 200), numpy.random.normal(10, 2, 1000))) y = numpy.concatenate((numpy.zeros(200), numpy.ones(1000))) x1 = X[:200] x2 = X[200:] plt.figure(figsize=(16, 5)) plt.hist(x1, bins=25, color='m', edgecolor='m', label="Class A") plt.hist(x2, bins=25, color='c', edgecolor='c', label="Class B") plt.xlabel("Value", fontsize=14) plt.ylabel("Count", fontsize=14) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.show() Explanation: Simple Gaussian Example End of explanation d1 = NormalDistribution.from_samples(x1) d2 = NormalDistribution.from_samples(x2) idxs = numpy.arange(0, 15, 0.1) p1 = list(map(d1.probability, idxs)) p2 = list(map(d2.probability, idxs)) plt.figure(figsize=(16, 5)) plt.plot(idxs, p1, color='m'); plt.fill_between(idxs, 0, p1, facecolor='m', alpha=0.2) plt.plot(idxs, p2, color='c'); plt.fill_between(idxs, 0, p2, facecolor='c', alpha=0.2) plt.xlabel("Value", fontsize=14) plt.ylabel("Probability", fontsize=14) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.show() Explanation: The data seems like it comes from two normal distributions, with the cyan class being more prevalent than the magenta class. A natural way to model this data would be to create a normal distribution for the cyan data, and another for the magenta distribution. Let's take a look at doing that. All we need to do is use the from_samples class method of the NormalDistribution class. End of explanation magenta_prior = 1. * len(x1) / len(X) cyan_prior = 1. * len(x2) / len(X) plt.figure(figsize=(4, 6)) plt.title("Prior Probabilities P(M)", fontsize=14) plt.bar(0, magenta_prior, facecolor='m', edgecolor='m') plt.bar(1, cyan_prior, facecolor='c', edgecolor='c') plt.xticks([0, 1], ['P(Magenta)', 'P(Cyan)'], fontsize=14) plt.yticks(fontsize=14) plt.show() Explanation: It looks like some aspects of the data are captured well by doing things this way-- specifically the mean and variance of the normal distributions. 
This allows us to easily calculate $P(D|M)$ as the probability of a sample under either the cyan or magenta distributions using the normal (or Gaussian) probability density equation: \begin{align} P(D|M) &= P(x|\mu, \sigma) \\ &= \frac{1}{\sqrt{2\pi\sigma^{2}}} \exp \left(-\frac{(x-\mu)^{2}}{2\sigma^{2}} \right) \end{align} However, if we look at the original data, we see that the cyan distribution is both much wider than the magenta distribution and much taller, as there were more samples from that class in general. If we reduce that data down to these two distributions, we lose the class imbalance. We want our prior to model this class imbalance, with the reasoning being that if we randomly draw a sample from the samples observed thus far, it is far more likely to be a cyan than a magenta sample. Let's take a look at this class imbalance exactly.
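The density formula and the prior arithmetic can be verified numerically before plotting. The sketch below is a stdlib illustration that assumes the generating parameters (3, 1) and (10, 2) and the 200/1000 sample sizes, rather than the fitted distributions:

```python
import math

def normal_pdf(x, mu, sigma):
    # the P(x | mu, sigma) equation above
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

n_magenta, n_cyan = 200, 1000
magenta_prior = n_magenta / (n_magenta + n_cyan)  # 1/6, about 17%
cyan_prior = n_cyan / (n_magenta + n_cyan)        # 5/6, about 83%

# prior-weighted likelihoods P(M)P(D|M) at a single point
x = 5.0
weighted_magenta = magenta_prior * normal_pdf(x, 3, 1)
weighted_cyan = cyan_prior * normal_pdf(x, 10, 2)
```

Even with the 5x prior advantage for cyan, a point at x = 5 sits close enough to the magenta mean that its weighted likelihood still wins.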
End of explanation magenta_posterior = p_magenta / (p_magenta + p_cyan) cyan_posterior = p_cyan / (p_magenta + p_cyan) plt.figure(figsize=(16, 5)) plt.subplot(211) plt.plot(idxs, p_magenta, color='m'); plt.fill_between(idxs, 0, p_magenta, facecolor='m', alpha=0.2) plt.plot(idxs, p_cyan, color='c'); plt.fill_between(idxs, 0, p_cyan, facecolor='c', alpha=0.2) plt.xlabel("Value", fontsize=14) plt.ylabel("P(M)P(D|M)", fontsize=14) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.subplot(212) plt.plot(idxs, magenta_posterior, color='m') plt.plot(idxs, cyan_posterior, color='c') plt.xlabel("Value", fontsize=14) plt.ylabel("P(M|D)", fontsize=14) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.show() Explanation: This looks a lot more faithful to the original data, and actually corresponds to $P(M)P(D|M)$, the prior multiplied by the likelihood. However, these aren't actually probability distributions anymore, as they no longer integrate to 1. This is why the $P(M)P(D|M)$ term has to be normalized by the $P(D)$ term in Bayes' rule in order to get a probability distribution over the components. However, $P(D)$ is difficult to determine exactly-- what is the probability of the data? Well, we can sum over the classes to get that value, since $P(D) = \sum_{i=1}^{c} P(D|M)P(M)$ for a problem with c classes. This translates into $P(D) = P(M=Cyan)P(D|M=Cyan) + P(M=Magenta)P(D|M=Magenta)$ for this specific problem, and those values can just be pulled from the unnormalized plots above. This gives us the full Bayes' rule, with the posterior $P(M|D)$ being the proportion of density of the above plot coming from each of the two distributions at any point on the line. Let's take a look at the posterior probabilities of the two classes on the same line. 
End of explanation idxs = idxs.reshape(idxs.shape[0], 1) X = X.reshape(X.shape[0], 1) model = NaiveBayes.from_samples(NormalDistribution, X, y) posteriors = model.predict_proba(idxs) plt.figure(figsize=(14, 4)) plt.plot(idxs, posteriors[:,0], color='m') plt.plot(idxs, posteriors[:,1], color='c') plt.xlabel("Value", fontsize=14) plt.ylabel("P(M|D)", fontsize=14) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.show() Explanation: The top plot shows the same densities as before, while the bottom plot shows the proportion of the density belonging to either class at that point. This proportion is known as the posterior $P(M|D)$, and can be interpreted as the probability of that point belonging to each class. This is one of the native benefits of probabilistic models, that instead of providing a hard class label for each sample, they can provide a soft label in the form of the probability of belonging to each class. We can implement all of this simply in pomegranate using the NaiveBayes class. End of explanation X = numpy.concatenate([numpy.random.normal(3, 2, size=(150, 2)), numpy.random.normal(7, 1, size=(250, 2))]) y = numpy.concatenate([numpy.zeros(150), numpy.ones(250)]) plt.figure(figsize=(8, 8)) plt.scatter(X[y == 0, 0], X[y == 0, 1], color='c') plt.scatter(X[y == 1, 0], X[y == 1, 1], color='m') plt.xlim(-2, 10) plt.ylim(-4, 12) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.show() Explanation: Looks like we're getting the same plots for the posteriors just through fitting the naive Bayes model directly to data. The predictions made will come directly from the posteriors in this plot, with cyan predictions happening whenever the cyan posterior is greater than the magenta posterior, and vice-versa. Naive Bayes In the univariate setting, naive Bayes is identical to a general Bayes classifier. 
The divergence occurs in the multivariate setting, the naive Bayes model assumes independence of all features, while a Bayes classifier is more general and can support more complicated interactions or covariances between features. Let's take a look at what this means in terms of Bayes' rule. \begin{align} P(M|D) &= \frac{P(M)P(D|M)}{P(D)} \ &= \frac{P(M)\prod_{i=1}^{d}P(D_{i}|M_{i})}{P(D)} \end{align} This looks fairly simple to compute, as we just need to pass each dimension into the appropriate distribution and then multiply the returned probabilities together. This simplicity is one of the reasons why naive Bayes is so widely used. Let's look closer at using this in pomegranate, starting off by generating two blobs of data that overlap a bit and inspecting them. End of explanation from sklearn.naive_bayes import GaussianNB model = NaiveBayes.from_samples(NormalDistribution, X, y) clf = GaussianNB().fit(X, y) xx, yy = numpy.meshgrid(numpy.arange(-2, 10, 0.02), numpy.arange(-4, 12, 0.02)) Z1 = model.predict(numpy.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape) Z2 = clf.predict(numpy.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape) plt.figure(figsize=(16, 8)) plt.subplot(121) plt.title("pomegranate naive Bayes", fontsize=16) plt.scatter(X[y == 0, 0], X[y == 0, 1], color='c') plt.scatter(X[y == 1, 0], X[y == 1, 1], color='m') plt.contour(xx, yy, Z1, cmap='Blues') plt.xlim(-2, 10) plt.ylim(-4, 12) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.subplot(122) plt.title("sklearn naive Bayes", fontsize=16) plt.scatter(X[y == 0, 0], X[y == 0, 1], color='c') plt.scatter(X[y == 1, 0], X[y == 1, 1], color='m') plt.contour(xx, yy, Z2, cmap='Blues') plt.xlim(-2, 10) plt.ylim(-4, 12) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.show() Explanation: Now, let's fit our naive Bayes model to this data using pomegranate. We can use the from_samples class method, pass in the distribution that we want to model each dimension, and then the data. 
We choose to use NormalDistribution in this particular case, but any supported distribution would work equally well, such as BernoulliDistribution or ExponentialDistribution. To ensure we get the correct decision boundary, let's also plot the boundary recovered by sklearn. End of explanation def plot_signal(X, n): plt.figure(figsize=(16, 6)) t_current = 0 for i in range(n): mu, std, t = X[i] chunk = numpy.random.normal(mu, std, int(t)) plt.plot(numpy.arange(t_current, t_current+t), chunk, c='cm'[i % 2]) t_current += t plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.xlabel("Time (s)", fontsize=14) plt.ylabel("Signal", fontsize=14) plt.ylim(20, 40) plt.show() def create_signal(n): X, y = [], [] for i in range(n): mu = numpy.random.normal(30.0, 0.4) std = numpy.random.lognormal(-0.1, 0.4) t = int(numpy.random.exponential(50)) + 1 X.append([mu, std, int(t)]) y.append(0) mu = numpy.random.normal(30.5, 0.8) std = numpy.random.lognormal(-0.3, 0.6) t = int(numpy.random.exponential(200)) + 1 X.append([mu, std, int(t)]) y.append(1) return numpy.array(X), numpy.array(y) X_train, y_train = create_signal(1000) X_test, y_test = create_signal(250) plot_signal(X_train, 20) Explanation: Drawing the decision boundary helps to verify that we've produced a good result by cleanly splitting the two blobs from each other. Bayes' rule provides a great deal of flexibility in terms of what the actually likelihood functions are. For example, when considering a multivariate distribution, there is no need for each dimension to be modeled by the same distribution. In fact, each dimension can be modeled by a different distribution, as long as we can multiply the $P(D|M)$ terms together. Classifying Signal with Different Distributions on Different Features Let's consider the example of some noisy signals that have been segmented. We know that they come from two underlying phenomena, the cyan phenomena and the magenta phenomena, and want to classify future segments. 
To do this, we have three features-- the mean signal of the segment, the standard deviation, and the duration. End of explanation model = NaiveBayes.from_samples(NormalDistribution, X_train, y_train) print("Gaussian Naive Bayes: ", (model.predict(X_test) == y_test).mean()) clf = GaussianNB().fit(X_train, y_train) print("sklearn Gaussian Naive Bayes: ", (clf.predict(X_test) == y_test).mean()) Explanation: We can start by modeling each variable as Gaussians, like before, and see what accuracy we get. End of explanation plt.figure(figsize=(14, 4)) plt.subplot(131) plt.title("Mean") plt.hist(X_train[y_train == 0, 0], color='c', alpha=0.5, bins=25) plt.hist(X_train[y_train == 1, 0], color='m', alpha=0.5, bins=25) plt.subplot(132) plt.title("Standard Deviation") plt.hist(X_train[y_train == 0, 1], color='c', alpha=0.5, bins=25) plt.hist(X_train[y_train == 1, 1], color='m', alpha=0.5, bins=25) plt.subplot(133) plt.title("Duration") plt.hist(X_train[y_train == 0, 2], color='c', alpha=0.5, bins=25) plt.hist(X_train[y_train == 1, 2], color='m', alpha=0.5, bins=25) plt.show() Explanation: We get identical values for sklearn and for pomegranate, which is good. However, let's take a look at the data itself to see whether a Gaussian distribution is the appropriate distribution for the data. 
End of explanation model = NaiveBayes.from_samples(NormalDistribution, X_train, y_train) print("Gaussian Naive Bayes: ", (model.predict(X_test) == y_test).mean()) clf = GaussianNB().fit(X_train, y_train) print("sklearn Gaussian Naive Bayes: ", (clf.predict(X_test) == y_test).mean()) model = NaiveBayes.from_samples([NormalDistribution, LogNormalDistribution, ExponentialDistribution], X_train, y_train) print("Heterogeneous Naive Bayes: ", (model.predict(X_test) == y_test).mean()) Explanation: So, unsurprisingly (since you can see that I used non-Gaussian distributions to generate the data originally), it looks like only the mean follows a normal distribution, whereas the standard deviation seems to follow either a gamma or a log-normal distribution. We can take advantage of that by explicitly using these distributions instead of approximating them as normal distributions. pomegranate is flexible enough to allow for this, whereas sklearn currently is not. End of explanation %timeit GaussianNB().fit(X_train, y_train) %timeit NaiveBayes.from_samples(NormalDistribution, X_train, y_train) %timeit NaiveBayes.from_samples([NormalDistribution, LogNormalDistribution, ExponentialDistribution], X_train, y_train) Explanation: It looks like we're able to get a small improvement in accuracy just by using appropriate distributions for the features, without any type of data transformation or filtering. This certainly seems worthwhile if you can determine what the appropriate underlying distribution is. Next, there's obviously the issue of speed. Let's compare the speed of the pomegranate implementation and the sklearn implementation. 
End of explanation pom_time, skl_time = [], [] n1, n2 = 15000, 60000, for d in range(1, 101, 5): X = numpy.concatenate([numpy.random.normal(3, 2, size=(n1, d)), numpy.random.normal(7, 1, size=(n2, d))]) y = numpy.concatenate([numpy.zeros(n1), numpy.ones(n2)]) tic = time.time() for i in range(25): GaussianNB().fit(X, y) skl_time.append((time.time() - tic) / 25) tic = time.time() for i in range(25): NaiveBayes.from_samples(NormalDistribution, X, y) pom_time.append((time.time() - tic) / 25) plt.figure(figsize=(14, 6)) plt.plot(range(1, 101, 5), pom_time, color='c', label="pomegranate") plt.plot(range(1, 101, 5), skl_time, color='m', label="sklearn") plt.xticks(fontsize=14) plt.xlabel("Number of Dimensions", fontsize=14) plt.yticks(fontsize=14) plt.ylabel("Time (s)") plt.legend(fontsize=14) plt.show() Explanation: Looks as if on this small dataset they're all taking approximately the same time. This is pretty much expected, as the fitting step is fairly simple and both implementations use C-level numerics for the calculations. We can give a more thorough treatment of the speed comparison on larger datasets. Let's look at the average time it takes to fit a model to data of increasing dimensionality across 25 runs. 
End of explanation tilt_a = [[-2, 0.5], [5, 2]] tilt_b = [[-1, 1.5], [3, 3]] X = numpy.concatenate((numpy.random.normal(4, 1, size=(250, 2)).dot(tilt_a), numpy.random.normal(3, 1, size=(800, 2)).dot(tilt_b))) y = numpy.concatenate((numpy.zeros(250), numpy.ones(800))) model_a = NaiveBayes.from_samples(NormalDistribution, X, y) model_b = BayesClassifier.from_samples(MultivariateGaussianDistribution, X, y) xx, yy = numpy.meshgrid(numpy.arange(-5, 30, 0.02), numpy.arange(0, 25, 0.02)) Z1 = model_a.predict(numpy.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape) Z2 = model_b.predict(numpy.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape) plt.figure(figsize=(18, 8)) plt.subplot(121) plt.contour(xx, yy, Z1, cmap='Blues') plt.scatter(X[y == 0, 0], X[y == 0, 1], color='c', alpha=0.3) plt.scatter(X[y == 1, 0], X[y == 1, 1], color='m', alpha=0.3) plt.xlim(-5, 30) plt.ylim(0, 25) plt.subplot(122) plt.contour(xx, yy, Z2, cmap='Blues') plt.scatter(X[y == 0, 0], X[y == 0, 1], color='c', alpha=0.3) plt.scatter(X[y == 1, 0], X[y == 1, 1], color='m', alpha=0.3) plt.xlim(-5, 30) plt.ylim(0, 25) plt.show() Explanation: It appears as if the two implementations are basically the same speed. This is unsurprising given the simplicity of the calculations, and as mentioned before, the low level implementation. Multivariate Gaussian Bayes Classifiers The natural generalization of the naive Bayes classifier is to allow any multivariate function take the place of $P(D|M)$ instead of it being the product of several univariate probability distributions. One immediate difference is that now instead of creating a Gaussian model with effectively a diagonal covariance matrix, you can now create one with a full covariance matrix. Let's see an example of that at work. 
End of explanation print("naive training accuracy: {:4.4}".format((model_a.predict(X) == y).mean())) print("bayes classifier training accuracy: {:4.4}".format((model_b.predict(X) == y).mean())) Explanation: It looks like we are able to get a better boundary between the two blobs of data. The primary for this is because the data don't form spherical clusters, like you assume when you force a diagonal covariance matrix, but are tilted ellipsoids, that can be better modeled by a full covariance matrix. We can quantify this quickly by looking at performance on the training data. End of explanation X = numpy.empty(shape=(0, 2)) X = numpy.concatenate((X, numpy.random.normal(4, 1, size=(200, 2)).dot([[-2, 0.5], [2, 0.5]]))) X = numpy.concatenate((X, numpy.random.normal(3, 1, size=(350, 2)).dot([[-1, 2], [1, 0.8]]))) X = numpy.concatenate((X, numpy.random.normal(7, 1, size=(700, 2)).dot([[-0.75, 0.8], [0.9, 1.5]]))) X = numpy.concatenate((X, numpy.random.normal(6, 1, size=(120, 2)).dot([[-1.5, 1.2], [0.6, 1.2]]))) y = numpy.concatenate((numpy.zeros(550), numpy.ones(820))) model_a = BayesClassifier.from_samples(MultivariateGaussianDistribution, X, y) gmm_a = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X[y == 0]) gmm_b = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X[y == 1]) model_b = BayesClassifier([gmm_a, gmm_b], weights=numpy.array([1-y.mean(), y.mean()])) xx, yy = numpy.meshgrid(numpy.arange(-10, 10, 0.02), numpy.arange(0, 25, 0.02)) Z1 = model_a.predict(numpy.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape) Z2 = model_b.predict(numpy.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape) centroids1 = numpy.array([distribution.mu for distribution in model_a.distributions]) centroids2 = numpy.concatenate([[distribution.mu for distribution in component.distributions] for component in model_b.distributions]) plt.figure(figsize=(18, 8)) plt.subplot(121) plt.contour(xx, yy, Z1, cmap='Blues') plt.scatter(X[y == 0, 0], X[y == 0, 1], 
color='c', alpha=0.3) plt.scatter(X[y == 1, 0], X[y == 1, 1], color='m', alpha=0.3) plt.scatter(centroids1[:,0], centroids1[:,1], color='k', s=100) plt.subplot(122) plt.contour(xx, yy, Z2, cmap='Blues') plt.scatter(X[y == 0, 0], X[y == 0, 1], color='c', alpha=0.3) plt.scatter(X[y == 1, 0], X[y == 1, 1], color='m', alpha=0.3) plt.scatter(centroids2[:,0], centroids2[:,1], color='k', s=100) plt.show() Explanation: Looks like there is a significant boost. Naturally you'd want to evaluate the performance of the model on separate validation data, but for the purposes of demonstrating the effect of a full covariance matrix this should be sufficient. Gaussian Mixture Model Bayes Classifier While using a full covariance matrix is certainly more complicated than using only the diagonal, there is no reason that the $P(D|M)$ has to even be a single simple distribution versus a full probabilistic model. After all, all probabilistic models, including general mixtures, hidden Markov models, and Bayesian networks, can calculate $P(D|M)$. Let's take a look at an example of using a mixture model instead of a single gaussian distribution. End of explanation
7,584
Given the following text description, write Python code to implement the functionality described below step by step Description: Solution to a problem posted at https Step1: Roll six 6-sided dice Step2: Count how many times each outcome occurs and score accordingly Step3: Run many times and accumulate scores Step4: Print the percentages of each score Step5: Or even better, just enumerate the possibilities.
Python Code: from __future__ import print_function, division from numpy.random import choice from collections import Counter from collections import defaultdict Explanation: Solution to a problem posted at https://www.reddit.com/r/statistics/comments/4csjee/finding_pab_given_two_sets_of_data/ Copyright 2016 Allen Downey MIT License: http://opensource.org/licenses/MIT End of explanation def roll(die): return choice(die, 6) die = [1,2,3,4,5,6] roll(die) Explanation: Roll six 6-sided dice: End of explanation def compute_score(outcome): counts = Counter(outcome) dd = defaultdict(list) [dd[v].append(k) for k, v in counts.items()] return len(dd[max(dd)]) compute_score([1,1,1,1,1,1]) Explanation: Count how many times each outcome occurs and score accordingly: End of explanation n = 100000 scores = [compute_score(roll(die)) for _ in range(n)] Explanation: Run many times and accumulate scores: End of explanation for score, freq in sorted(Counter(scores).items()): print(score, 100*freq/n) Explanation: Print the percentages of each score: End of explanation from itertools import product die = [1,2,3,4,5,6] counts = Counter(compute_score(list(outcome)) for outcome in product(*[die]*6)) n = sum(counts.values()) for score, freq in sorted(counts.items()): print(score, 100*freq/n) Explanation: Or even better, just enumerate the possibilities. End of explanation
7,585
Given the following text description, write Python code to implement the functionality described below step by step Description: Energy Efficiency What can you tell me about the data? Step1: X2, X3, X4 might be candidates for normalization; X5, X6, X8 likely to be discrete values; Y1, Y2 within the same range Step2: observations Step3: Loading the Prepared Data Step4: What is a bunch?? We'll talk about that soon. In the meantime ask for help... Step5: Simple Regression with Ordinary Least Squares (OLS) Step6: Ridge Regression Step7: so we picked a bad alpha - let's pick a better one... Choosing Alpha Step8: Lasso Regression Step9: Pipelined Model Step10: now it's time to worry about overfit Visualization with MatPlotLib
Python Code: import matplotlib.pyplot as plt %matplotlib inline import pandas as pd df = pd.read_csv('../data/energy/energy.csv') df.shape df.describe() Explanation: Energy Efficiency What can you tell me about the data? End of explanation import matplotlib.pyplot as plt %matplotlib inline from pandas.tools.plotting import scatter_matrix scatter_matrix(df, alpha=0.2, figsize=(18,18), diagonal='kde') plt.show() Explanation: X2, X3, X4 might be candidates for normalization; X5, X6, X8 likely to be discrete values; Y1, Y2 within the same range End of explanation import matplotlib.pyplot as plt df.plot() plt.show() #Individual Elements in the DF df['X1'].plot() plt.show() # Use the 'kind' keyword for different variations # Other kinds: 'bar', 'hist', 'box', 'kde', # 'area', 'scatter', 'hexbin', 'pie' df.plot(x='X1', y='X2', kind='scatter') plt.show() plt.savefig('myfig.png') # Complex function in pandas.plotting that take DataFrame or Series as arg # Scatter Matrix, Andrews Curves, Parallel Coordinates, Lag Plot, # Autocorrelation Plot, Bootstrap Plot, RadViz Explanation: observations: maybe X5 is binary? X2 and X1 seem to have strong correlation Data Visualization with Pandas End of explanation from utils import * from sklearn.cross_validation import train_test_split as tts dataset = load_energy() print(dataset) type(dataset) Explanation: Loading the Prepared Data End of explanation help(dataset) dataset.data.shape dataset.target('Y1').shape #other ways to explore 'dataset' print(dataset.DESCR) #more ways to exlore 'dataset' dir(dataset) dir(dataset) splits = tts(dataset.data, dataset.target('Y1'), test_size=0.2) # what is splits? print(splits) X_train, X_test, y_train, y_test = splits X_train.shape y_train.shape Explanation: What is a bunch?? We'll talk about that soon. In the meantime ask for help... 
End of explanation from sklearn import linear_model from sklearn.metrics import mean_squared_error, r2_score regr = linear_model.LinearRegression() regr.fit(X_train,y_train) print regr.coef_ print regr.intercept_ print mean_squared_error(y_test, regr.predict(X_test)) regr.score(X_test,y_test) # same as doing r2_score(y_est, regr.predict(X_test)) Explanation: Simple Regression with Ordinary Least Squares (OLS) End of explanation clf = linear_model.Ridge(alpha=0.5) clf.fit(X_train, y_train) print mean_squared_error(y_test, clf.predict(X_test)) clf.score(X_test, y_test) Explanation: Ridge Regression End of explanation import numpy as np # try 200 different alphas between -10 and -2 n_alphas = 200 alphas = np.logspace(-10, -2, n_alphas) clf = linear_model.RidgeCV(alphas=alphas) clf.fit(X_train, y_train) #which alpha did it pick? print clf.alpha_ clf.score(X_test, y_test) # plot our alphas linear_model.Ridge(fit_intercept=False) errors = [] for alpha in alphas: splits = tts(dataset.data, dataset.target('Y1'), test_size=0.2) X_train, X_test, y_train, y_test = splits clf.set_params(alpha=alpha) clf.fit(X_train, y_train) error = mean_squared_error(y_test, clf.predict(X_test)) errors.append(error) axe = plt.gca() axe.plot(alphas, errors) plt.show() Explanation: so we picked a bad alpha - let's pick a better one... Choosing Alpha End of explanation clf = linear_model.Lasso(alpha=0.5) clf.fit(X_train, y_train) print mean_squared_error(y_test, clf.predict(X_test)) clf.score(X_test, y_test) Explanation: Lasso Regression End of explanation from sklearn.preprocessing import PolynomialFeatures from sklearn.pipeline import make_pipeline model = make_pipeline(PolynomialFeatures(2), linear_model. 
Ridge()) model.fit(X_train, y_train) mean_squared_error(y_test, model.predict(X_test)) model.score(X_test, y_test) Explanation: Pipelined Model End of explanation import numpy as np import matplotlib.pyplot as plt %matplotlib inline x = np.linspace(-15,15,100) # 100 evenly spaced nums between -15 and 15 y = np.sin(x)/x # compute values of sin(x) / x # compose plot plt.plot(x,y, label="f(x)") # sin(x)/x plt.plot(x,y, 'co', label="cyan dot f(x)") plt.plot(x,2*y,x,3*y, label="scaled f(x)") # add plot details! Or else Ben will be mad plt.plot(x,y, label="f(x)") plt.plot(x,y, 'co', label="cyan dot f(x)") plt.plot(x,2*y,x,3*y, label="scaled f(x)") plt.xlabel("x-axis") plt.ylabel("y-axis") plt.title("Graph of Functions") plt.legend() plt.show() Explanation: now it's time to worry about overfit Visualization with MatPlotLib End of explanation
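The alpha that RidgeCV searches over has a closed-form interpretation worth seeing once: it shrinks the coefficient vector. A small numpy sketch on synthetic data (not the energy dataset; ridge_fit is an illustrative helper, not part of sklearn):

```python
import numpy as np

def ridge_fit(X, y, alpha):
    # closed-form ridge solution: w = (X^T X + alpha * I)^-1 X^T y
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=100)

# larger alpha pulls the coefficient vector toward zero
norms = [np.linalg.norm(ridge_fit(X, y, a)) for a in (0.0, 1.0, 10.0, 100.0)]
```

The norm of the solution is nonincreasing in alpha, which is why sweeping alpha (as in the Choosing Alpha section) trades variance for bias.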
Given the following text description, write Python code to implement the functionality described below step by step Description: First, I made a mistake naming the data set! It's 2015 data, not 2014 data. But yes, still use 311-2014.csv. You can rename it. Importing and preparing your data Import your data, but only the first 200,000 rows. You'll also want to change the index to be a datetime based on the Created Date column - you'll want to check if it's already a datetime, and parse it if not. Step1: What was the most popular type of complaint, and how many times was it filed? Step2: Make a horizontal bar graph of the top 5 most frequent complaint types. Step3: Which borough has the most complaints per capita? Since it's only 5 boroughs, you can do the math manually. Step4: According to your selection of data, how many cases were filed in March? How about May? Step5: I'd like to see all of the 311 complaints called in on April 1st. Surprise! We couldn't do this in class, but it was just a limitation of our data set Step6: What was the most popular type of complaint on April 1st? Step7: What were the most popular three types of complaint on April 1st Step8: What month has the most reports filed? How many? Graph it. Step9: What week of the year has the most reports filed? How many? Graph the weekly complaints. Step10: Noise complaints are a big deal. Use .str.contains to select noise complaints, and make an chart of when they show up annually. Then make a chart about when they show up every day (cyclic). Step11: Which were the top five days of the year for filing complaints? How many on each of those days? Graph it. Step12: Interesting—it looks cyclical. Let's see what day of the week is most popular Step13: What hour of the day are the most complaints? Graph a day of complaints. Step14: One of the hours has an odd number of complaints. What are the most common complaints at that hour, and what are the most common complaints the hour before and after? 
Step15: So odd. What's the per-minute breakdown of complaints between 12am and 1am? You don't need to include 1am.
Step16: Looks like midnight is a little bit of an outlier. Why might that be? Take the 5 most common agencies and graph the times they file reports at (all day, not just midnight).
5 Top Agencies
Step17: Graph those same agencies on an annual basis - make it weekly. When do people like to complain? When does the NYPD have an odd number of complaints?
Step18: It looks like complaints are most popular in May, June, September—generally in the summer.
Step19: It looks like complaints are least popular around Christmas and New Year's. Maybe the NYPD deals with different issues at different times? Check the most popular complaints in July and August vs the month of May. Also check the most common complaints for the Housing Preservation Bureau (HPD) in winter vs. summer.
Most popular NYPD complaints in July and August
Step20: Most popular NYPD complaints in May
Step21: Most popular HPD complaints in June, July, and August
Step22: Most popular HPD complaints in December, January, and February
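Before the notebook code, a quick stdlib check of the date-parsing step the instructions call for. The sample timestamp below is made up, but the format string matches the 'Created Date' style used later in the code:

```python
from datetime import datetime

FORMAT = '%m/%d/%Y %I:%M:%S %p'  # month/day/year, 12-hour clock with AM/PM

def parse_created_date(date_str):
    return datetime.strptime(date_str, FORMAT)

# hypothetical sample value in the 311 'Created Date' style
ts = parse_created_date('04/01/2015 12:30:05 AM')
assert (ts.year, ts.month, ts.day) == (2015, 4, 1)
assert ts.hour == 0          # 12:30 AM is hour 0
print(ts.isoformat())        # → 2015-04-01T00:30:05
```

The %I/%p pair is the piece that trips people up: 12 AM parses to hour 0 and 12 PM to hour 12, which matters for the midnight analysis later.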
Python Code:
import pandas as pd
import datetime

# only the first 200,000 rows, per the instructions
df = pd.read_csv('311-2015.csv', dtype = str, nrows = 200000)
df.head()

def created_date_to_datetime(date_str):
    return datetime.datetime.strptime(date_str, '%m/%d/%Y %I:%M:%S %p')

df['created_datetime'] = df['Created Date'].apply(created_date_to_datetime)
df = df.set_index('created_datetime')
Explanation: First, I made a mistake naming the data set! It's 2015 data, not 2014 data. But yes, still use 311-2014.csv. You can rename it.
Importing and preparing your data
Import your data, but only the first 200,000 rows. You'll also want to change the index to be a datetime based on the Created Date column - you'll want to check if it's already a datetime, and parse it if not.
End of explanation
freq_complaints = df[['Unique Key', 'Complaint Type']].groupby('Complaint Type').count().sort_values('Unique Key', ascending=False).head()
freq_complaints
Explanation: What was the most popular type of complaint, and how many times was it filed?
End of explanation
ax = freq_complaints.plot(kind = 'barh', legend = False)
ax.set_title('5 Most Frequent 311 Complaints')
ax.set_xlabel('Number of Complaints in 2015')
Explanation: Make a horizontal bar graph of the top 5 most frequent complaint types.
End of explanation
df[['Unique Key', 'Borough']].groupby('Borough').count().sort_values('Unique Key', ascending = False)
Explanation: Which borough has the most complaints per capita? Since it's only 5 boroughs, you can do the math manually.
End of explanation
cases_in_mar = df[df.index.month == 3]['Unique Key'].count()
print('There were', cases_in_mar, 'cases filed in March.')
cases_in_may = df[df.index.month == 5]['Unique Key'].count()
print('There were', cases_in_may, 'cases filed in May.')
Explanation: According to your selection of data, how many cases were filed in March? How about May?
End of explanation
april_1_complaints = df[(df.index.month == 4) & (df.index.day == 1)][['Unique Key', 'Created Date', 'Complaint Type', 'Descriptor']]
april_1_complaints
Explanation: I'd like to see all of the 311 complaints called in on April 1st. Surprise! We couldn't do this in class, but it was just a limitation of our data set.
End of explanation
april_1_complaints[['Unique Key', 'Complaint Type']].groupby('Complaint Type').count().sort_values('Unique Key', ascending = False).head(1)
Explanation: What was the most popular type of complaint on April 1st?
End of explanation
april_1_complaints[['Unique Key', 'Complaint Type']].groupby('Complaint Type').count().sort_values('Unique Key', ascending = False).head(3)
Explanation: What were the most popular three types of complaint on April 1st?
End of explanation
complaints_by_month = df['Unique Key'].groupby(df.index.month).count()
complaints_by_month

ax = complaints_by_month.plot()
ax.set_xticks([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
ax.set_xticklabels(['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
ax.set_ylabel('Number of Complaints')
ax.set_title('311 Complaints by Month in 2015')

# shade the area under the line; use the month index for the x values
x_values = complaints_by_month.index
min_values = 0
max_values = complaints_by_month
ax.fill_between(x_values, min_values, max_values, alpha = 0.4)
Explanation: What month has the most reports filed? How many? Graph it.
End of explanation
complaints_by_week = df['Unique Key'].groupby(df.index.week).count()
complaints_by_week

ax = complaints_by_week.plot()
ax.set_xticks(range(1,53))
ax.set_xticklabels(['', '', '', '', '5', '', '', '', '', '10', '', '', '', '', '15', '', '', '', '', '20', '', '', '', '', '25', '', '', '', '', '30', '', '', '', '', '35', '', '', '', '', '40', '', '', '', '', '45', '', '', '', '', '50', '', ''])
ax.set_ylabel('Number of Complaints')
ax.set_xlabel('Week of the Year')
ax.set_title('311 Complaints by Week in 2015')
Explanation: What week of the year has the most reports filed? How many? Graph the weekly complaints.
End of explanation
noise_complaints = df[df['Complaint Type'].str.contains('Noise') == True]
noise_complaints_hour = noise_complaints['Unique Key'].groupby(noise_complaints.index.hour).count()

ax = noise_complaints_hour.plot()
ax.set_xticks(range(0,24))
ax.set_xticklabels(['Midnight', '', '', '', '', '', '6 am', '', '', '', '', '', 'Noon', '', '', '', '', '', '6 pm', '', '', '', '', ''])
ax.set_ylabel('Number of Complaints')
ax.set_xlabel('Time of Day')
ax.set_title('Noise Complaints by Time of Day in 2015')
Explanation: Noise complaints are a big deal. Use .str.contains to select noise complaints, and make a chart of when they show up annually. Then make a chart about when they show up every day (cyclic).
End of explanation
top_complaining_days = df['Unique Key'].resample('D').count().sort_values(ascending = False).head()
top_complaining_days

ax = top_complaining_days.plot(kind = 'barh')
ax.set_ylabel('Date')
ax.set_xlabel('Number of Complaints')
ax.set_title('The Top 5 Days for 311 Complaints in 2015')

complaining_days = df['Unique Key'].resample('D').count()
ax = complaining_days.plot()
ax.set_ylabel('Number of Complaints')
ax.set_xlabel('Day of Year')
ax.set_title('311 Complaints by Day in 2015')
Explanation: Which were the top five days of the year for filing complaints? How many on each of those days? Graph it.
End of explanation
df.index[1]

def get_day_of_wk(timestamp):
    return datetime.datetime.strftime(timestamp, '%a')

df['datetime'] = df.index
df['day_of_wk'] = df['datetime'].apply(get_day_of_wk)

complaining_day_of_wk = df[['Unique Key', 'day_of_wk']].groupby('day_of_wk').count()
complaining_day_of_wk['number_of_day'] = [6, 2, 7, 1, 5, 3, 4]
complaining_day_of_wk_sorted = complaining_day_of_wk.sort_values('number_of_day')
complaining_day_of_wk_sorted

ax = complaining_day_of_wk_sorted.plot(y = 'Unique Key', legend = False)
Explanation: Interesting—it looks cyclical. Let's see what day of the week is most popular:
End of explanation
hourly_complaints = df['Unique Key'].groupby(df.index.hour).count()

ax = hourly_complaints.plot()
ax.set_xticks(range(0,24))
ax.set_xticklabels(['Midnight', '', '', '', '', '', '6 am', '', '', '', '', '', 'Noon', '', '', '', '', '', '6 pm', '', '', '', '', ''])
ax.set_ylabel('Number of Complaints')
ax.set_xlabel('Time of Day')
ax.set_title('311 Complaints by Time of Day in 2015')
Explanation: What hour of the day are the most complaints? Graph a day of complaints.
End of explanation
# 11 pm
df[df.index.hour == 23][['Unique Key', 'Complaint Type', 'Descriptor']].groupby('Complaint Type').count().sort_values('Unique Key', ascending = False).head()

# 12 am
df[df.index.hour == 0][['Unique Key', 'Complaint Type', 'Descriptor']].groupby('Complaint Type').count().sort_values('Unique Key', ascending = False).head()

# 1 am
df[df.index.hour == 1][['Unique Key', 'Complaint Type', 'Descriptor']].groupby('Complaint Type').count().sort_values('Unique Key', ascending = False).head()
Explanation: One of the hours has an odd number of complaints. What are the most common complaints at that hour, and what are the most common complaints the hour before and after?
End of explanation
midnight_complaints = df[df.index.hour == 0][['Unique Key', 'Complaint Type']]

for minute in range(0, 60):
    top_complaint = midnight_complaints[midnight_complaints.index.minute == minute].groupby('Complaint Type').count().sort_values('Unique Key', ascending = False).head(1)
    if minute < 10:
        minute = '0' + str(minute)
    else:
        minute = str(minute)
    print('12:' + minute + '\'s top complaint was:', top_complaint)
    print('')

# hourly_complaints = df['Unique Key'].groupby(df.index.hour).count()
Explanation: So odd. What's the per-minute breakdown of complaints between 12am and 1am? You don't need to include 1am.
End of explanation
df[['Unique Key', 'Agency']].groupby('Agency').count().sort_values('Unique Key', ascending = False).head()

def agency_hourly_complaints(agency_name_str):
    agency_complaints = df[df['Agency'] == agency_name_str]
    return agency_complaints['Unique Key'].groupby(agency_complaints.index.hour).count()

ax = agency_hourly_complaints('HPD').plot(label = 'HPD', legend = True)
for x in ['NYPD', 'DOT', 'DEP', 'DSNY']:
    agency_hourly_complaints(x).plot(ax = ax, label = x, legend = True)
Explanation: Looks like midnight is a little bit of an outlier. Why might that be? Take the 5 most common agencies and graph the times they file reports at (all day, not just midnight).
5 Top Agencies:
End of explanation
def agency_weekly_complaints(agency_name_str):
    agency_complaints = df[df['Agency'] == agency_name_str]
    return agency_complaints['Unique Key'].groupby(agency_complaints.index.week).count()

ax = agency_weekly_complaints('NYPD').plot(label = 'NYPD', legend = True)
for x in ['DOT', 'HPD', 'DPR', 'DSNY']:
    agency_weekly_complaints(x).plot(ax = ax, label = x, legend = True)

NYPD_complaints = df[df['Agency'] == 'NYPD']
NYPD_weekly_complaints = NYPD_complaints['Unique Key'].groupby(NYPD_complaints.index.week).count()
NYPD_weekly_complaints[NYPD_weekly_complaints > 13000]

# # a way to use the function agency_weekly_complaints that's actually longer than not using it.
# week_number = 0
# for week in agency_weekly_complaints('NYPD'):
#     week_number += 1
#     if week > 1500:
#         print('In week', week_number)
#         print('there were', week, 'complaints.')
#         print('')
Explanation: Graph those same agencies on an annual basis - make it weekly. When do people like to complain? When does the NYPD have an odd number of complaints?
End of explanation
NYPD_weekly_complaints[NYPD_weekly_complaints < 6000]
Explanation: It looks like complaints are most popular in May, June, September—generally in the summer.
End of explanation
NYPD_complaints = df[df['Agency'] == 'NYPD']
NYPD_jul_aug_complaints = NYPD_complaints[(NYPD_complaints.index.month == 7) | (NYPD_complaints.index.month == 8)][['Unique Key', 'Complaint Type']]
NYPD_jul_aug_complaints.groupby('Complaint Type').count().sort_values('Unique Key', ascending = False).head()
Explanation: It looks like complaints are least popular around Christmas and New Year's. Maybe the NYPD deals with different issues at different times? Check the most popular complaints in July and August vs the month of May. Also check the most common complaints for the Housing Preservation Bureau (HPD) in winter vs. summer.
Most popular NYPD complaints in July and August:
End of explanation
NYPD_complaints = df[df['Agency'] == 'NYPD']
NYPD_may_complaints = NYPD_complaints[(NYPD_complaints.index.month == 5)][['Unique Key', 'Complaint Type']]
NYPD_may_complaints.groupby('Complaint Type').count().sort_values('Unique Key', ascending = False).head()
Explanation: Most popular NYPD complaints in May:
End of explanation
HPD_complaints = df[df['Agency'] == 'HPD']
HPD_jun_jul_aug_complaints = HPD_complaints[(HPD_complaints.index.month == 6) | (HPD_complaints.index.month == 7) | (HPD_complaints.index.month == 8)][['Unique Key', 'Complaint Type']]
HPD_jun_jul_aug_complaints.groupby('Complaint Type').count().sort_values('Unique Key', ascending = False).head()
Explanation: Most popular HPD complaints in June, July, and August:
End of explanation
HPD_complaints = df[df['Agency'] == 'HPD']
HPD_dec_jan_feb_complaints = HPD_complaints[(HPD_complaints.index.month == 12) | (HPD_complaints.index.month == 1) | (HPD_complaints.index.month == 2)][['Unique Key', 'Complaint Type']]
HPD_dec_jan_feb_complaints.groupby('Complaint Type').count().sort_values('Unique Key', ascending = False).head()
Explanation: Most popular HPD complaints in December, January, and February:
End of explanation
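A side note on the filtering pattern used in the seasonal cells above: chaining (month == 6) | (month == 7) | (month == 8) works, but pandas' Index.isin expresses the same mask more compactly. A toy check with a fabricated DataFrame (not the 311 data):

```python
import pandas as pd

idx = pd.to_datetime(['2015-01-15', '2015-06-10', '2015-07-04',
                      '2015-08-20', '2015-12-25'])
toy = pd.DataFrame({'Unique Key': ['a', 'b', 'c', 'd', 'e']}, index=idx)

# equivalent to (m == 6) | (m == 7) | (m == 8)
summer = toy[toy.index.month.isin([6, 7, 8])]
winter = toy[toy.index.month.isin([12, 1, 2])]

assert list(summer['Unique Key']) == ['b', 'c', 'd']
assert list(winter['Unique Key']) == ['a', 'e']
```

The same isin trick applies to .index.hour and .index.week, so all of the seasonal and hourly filters in this notebook reduce to one-liners.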
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simulating a Yo-Yo
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
Step1: Yo-yo
Suppose you are holding a yo-yo with a length of string wound around its axle, and you drop it while holding the end of the string stationary. As gravity accelerates the yo-yo downward, tension in the string exerts a force upward. Since this force acts on a point offset from the center of mass, it exerts a torque that causes the yo-yo to spin.
The following diagram shows the forces on the yo-yo and the resulting torque. The outer shaded area shows the body of the yo-yo. The inner shaded area shows the rolled up string, the radius of which changes as the yo-yo unrolls.
In this system, we can't figure out the linear and angular acceleration independently; we have to solve a system of equations.
Step2: The results are
$T = m g I / I^* $
$a = -m g r^2 / I^* $
$\alpha = m g r / I^* $
where $I^*$ is the augmented moment of inertia, $I + m r^2$.
You can also see the derivation of these equations in this video.
We can use these equations for $a$ and $\alpha$ to write a slope function and simulate this system.
Exercise: Simulate the descent of a yo-yo. How long does it take to reach the end of the string?
Step3: Rmin is the radius of the axle. Rmax is the radius of the axle plus rolled string. Rout is the radius of the yo-yo body. mass is the total mass of the yo-yo, ignoring the string. L is the length of the string. g is the acceleration of gravity.
Step4: Based on these parameters, we can compute the moment of inertia for the yo-yo, modeling it as a solid cylinder with uniform density (see here). In reality, the distribution of weight in a yo-yo is often designed to achieve desired effects. But we'll keep it simple.
Step5: And we can compute k, which is the constant that determines how the radius of the spooled string decreases as it unwinds.
Step6: The state variables we'll use are angle, theta, angular velocity, omega, the length of the spooled string, y, and the linear velocity of the yo-yo, v. Here is a State object with the initial conditions.
Step7: And here's a System object with init and t_end (chosen to be longer than I expect for the yo-yo to drop 1 m).
Step8: Write a slope function for this system, using these results from the book.
Step9: Test your slope function with the initial conditions. The results should be approximately 0, 180.5, 0, -2.9.
Step10: Notice that the initial acceleration is substantially smaller than g because the yo-yo has to start spinning before it can fall. Write an event function that will stop the simulation when y is 0.
Step11: Test your event function.
Step12: Then run the simulation.
Step13: Check the final state. If things have gone according to plan, the final value of y should be close to 0.
Step14: How long does it take for the yo-yo to fall 1 m? Does the answer seem reasonable? The following cells plot the results. theta should increase and accelerate.
Step15: y should decrease and accelerate down.
Step16: Plot velocity as a function of time; is the acceleration constant?
Step17: We can use gradient to estimate the derivative of v. How does the acceleration of the yo-yo compare to g?
Step18: And we can use the formula for r to plot the radius of the spooled thread over time.
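Before the notebook code, the closed-form results quoted in the description can be cross-checked numerically: write the three equations (a = -r*alpha, T - mg = ma, Tr = I*alpha) as a linear system in (T, a, alpha) and compare against T = mgI/I*, a = -mgr^2/I*, alpha = mgr/I*. The parameter values below are arbitrary test numbers, not the yo-yo's:

```python
import numpy as np

m, g, r, I = 0.05, 9.8, 0.01, 3e-5   # arbitrary test values

# Unknowns x = (T, a, alpha):
#   0*T + 1*a + r*alpha = 0        (a = -r*alpha)
#   1*T - m*a + 0*alpha = m*g      (T - m*g = m*a)
#   r*T + 0*a - I*alpha = 0        (T*r = I*alpha)
A = np.array([[0.0, 1.0,  r],
              [1.0,  -m, 0.0],
              [  r, 0.0,  -I]])
b = np.array([0.0, m * g, 0.0])
T, a, alpha = np.linalg.solve(A, b)

Istar = I + m * r**2                  # augmented moment of inertia
assert np.isclose(T,     m * g * I / Istar)
assert np.isclose(a,    -m * g * r**2 / Istar)
assert np.isclose(alpha, m * g * r / Istar)
```

The determinant of A works out to I + m r^2, which is never zero, so the system always has a unique solution; that is the same I* that shows up in the closed forms.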
Python Code:
# install Pint if necessary
try:
    import pint
except ImportError:
    !pip install pint

# download modsim.py if necessary
from os.path import exists

filename = 'modsim.py'
if not exists(filename):
    from urllib.request import urlretrieve
    url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
    local, _ = urlretrieve(url+filename, filename)
    print('Downloaded ' + local)

# import functions from modsim
from modsim import *
Explanation: Simulating a Yo-Yo
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
End of explanation
from sympy import symbols, Eq, solve

T, a, alpha, I, m, g, r = symbols('T a alpha I m g r')

eq1 = Eq(a, -r * alpha)
eq1

eq2 = Eq(T - m * g, m * a)
eq2

eq3 = Eq(T * r, I * alpha)
eq3

soln = solve([eq1, eq2, eq3], [T, a, alpha])

soln[T]
soln[a]
soln[alpha]
Explanation: Yo-yo
Suppose you are holding a yo-yo with a length of string wound around its axle, and you drop it while holding the end of the string stationary. As gravity accelerates the yo-yo downward, tension in the string exerts a force upward. Since this force acts on a point offset from the center of mass, it exerts a torque that causes the yo-yo to spin.
The following diagram shows the forces on the yo-yo and the resulting torque. The outer shaded area shows the body of the yo-yo. The inner shaded area shows the rolled up string, the radius of which changes as the yo-yo unrolls.
In this system, we can't figure out the linear and angular acceleration independently; we have to solve a system of equations:
$\sum F = m a $
$\sum \tau = I \alpha$
where the summations indicate that we are adding up forces and torques.
As in the previous examples, linear and angular velocity are related because of the way the string unrolls:
$\frac{dy}{dt} = -r \frac{d \theta}{dt} $
In this example, the linear and angular accelerations have opposite sign.
As the yo-yo rotates counter-clockwise, $\theta$ increases and $y$, which is the length of the rolled part of the string, decreases. Taking the derivative of both sides yields a similar relationship between linear and angular acceleration:
$\frac{d^2 y}{dt^2} = -r \frac{d^2 \theta}{dt^2} $
Which we can write more concisely:
$ a = -r \alpha $
This relationship is not a general law of nature; it is specific to scenarios like this where there is rolling without stretching or slipping.
Because of the way we've set up the problem, $y$ actually has two meanings: it represents the length of the rolled string and the height of the yo-yo, which decreases as the yo-yo falls. Similarly, $a$ represents acceleration in the length of the rolled string and the height of the yo-yo.
We can compute the acceleration of the yo-yo by adding up the linear forces:
$\sum F = T - mg = ma $
Where $T$ is positive because the tension force points up, and $mg$ is negative because gravity points down.
Because gravity acts on the center of mass, it creates no torque, so the only torque is due to tension:
$\sum \tau = T r = I \alpha $
Positive (upward) tension yields positive (counter-clockwise) angular acceleration.
Now we have three equations in three unknowns, $T$, $a$, and $\alpha$, with $I$, $m$, $g$, and $r$ as known parameters. We could solve these equations by hand, but we can also get SymPy to do it for us.
End of explanation
Rmin = 8e-3   # m
Rmax = 16e-3  # m
Rout = 35e-3  # m

mass = 50e-3  # kg
L = 1         # m
g = 9.8       # m / s**2
Explanation: The results are
$T = m g I / I^* $
$a = -m g r^2 / I^* $
$\alpha = m g r / I^* $
where $I^*$ is the augmented moment of inertia, $I + m r^2$.
You can also see the derivation of these equations in this video.
We can use these equations for $a$ and $\alpha$ to write a slope function and simulate this system.
Exercise: Simulate the descent of a yo-yo. How long does it take to reach the end of the string?
Here are the system parameters:
End of explanation
1 / (Rmax)
Explanation: Rmin is the radius of the axle. Rmax is the radius of the axle plus rolled string. Rout is the radius of the yo-yo body. mass is the total mass of the yo-yo, ignoring the string. L is the length of the string. g is the acceleration of gravity.
End of explanation
I = mass * Rout**2 / 2
I
Explanation: Based on these parameters, we can compute the moment of inertia for the yo-yo, modeling it as a solid cylinder with uniform density (see here). In reality, the distribution of weight in a yo-yo is often designed to achieve desired effects. But we'll keep it simple.
End of explanation
k = (Rmax**2 - Rmin**2) / 2 / L
k
Explanation: And we can compute k, which is the constant that determines how the radius of the spooled string decreases as it unwinds.
End of explanation
init = State(theta=0, omega=0, y=L, v=0)
Explanation: The state variables we'll use are angle, theta, angular velocity, omega, the length of the spooled string, y, and the linear velocity of the yo-yo, v. Here is a State object with the initial conditions.
End of explanation
system = System(init=init, t_end=2)
Explanation: And here's a System object with init and t_end (chosen to be longer than I expect for the yo-yo to drop 1 m).
End of explanation
# Solution goes here
Explanation: Write a slope function for this system, using these results from the book:
$ r = \sqrt{2 k y + R_{min}^2} $
$ T = m g I / I^* $
$ a = -m g r^2 / I^* $
$ \alpha = m g r / I^* $
where $I^*$ is the augmented moment of inertia, $I + m r^2$.
End of explanation
# Solution goes here
Explanation: Test your slope function with the initial conditions. The results should be approximately 0, 180.5, 0, -2.9
End of explanation
# Solution goes here
Explanation: Notice that the initial acceleration is substantially smaller than g because the yo-yo has to start spinning before it can fall. Write an event function that will stop the simulation when y is 0.
End of explanation
# Solution goes here
Explanation: Test your event function:
End of explanation
# Solution goes here
Explanation: Then run the simulation.
End of explanation
# Solution goes here
Explanation: Check the final state. If things have gone according to plan, the final value of y should be close to 0.
End of explanation
results.theta.plot(color='C0', label='theta')
decorate(xlabel='Time (s)', ylabel='Angle (rad)')
Explanation: How long does it take for the yo-yo to fall 1 m? Does the answer seem reasonable? The following cells plot the results. theta should increase and accelerate.
End of explanation
results.y.plot(color='C1', label='y')
decorate(xlabel='Time (s)', ylabel='Length (m)')
Explanation: y should decrease and accelerate down.
End of explanation
results.v.plot(label='velocity', color='C3')
decorate(xlabel='Time (s)', ylabel='Velocity (m/s)')
Explanation: Plot velocity as a function of time; is the acceleration constant?
End of explanation
a = gradient(results.v)
a.plot(label='acceleration', color='C4')
decorate(xlabel='Time (s)', ylabel='Acceleration (m/$s^2$)')
Explanation: We can use gradient to estimate the derivative of v. How does the acceleration of the yo-yo compare to g?
End of explanation
r = np.sqrt(2*k*results.y + Rmin**2)
r.plot(label='radius')
decorate(xlabel='Time (s)', ylabel='Radius of spooled thread (m)')
Explanation: And we can use the formula for r to plot the radius of the spooled thread over time.
End of explanation
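Since the slope and event cells above are intentionally left as "# Solution goes here", here is one independent sketch of what they compute, using plain Euler integration rather than the ModSim State/System API (which is not reproduced here). For these parameters the loop reports a fall time on the order of one second:

```python
import math

# system parameters, re-declared from the notebook
Rmin, Rmax, Rout = 8e-3, 16e-3, 35e-3   # m
mass, L, g = 50e-3, 1.0, 9.8            # kg, m, m/s**2
I = mass * Rout**2 / 2                  # solid-cylinder moment of inertia
k = (Rmax**2 - Rmin**2) / 2 / L

def slope(theta, omega, y, v):
    """Return d(theta, omega, y, v)/dt from the derived formulas."""
    r = math.sqrt(2 * k * y + Rmin**2)  # current radius of the spooled string
    Istar = I + mass * r**2             # augmented moment of inertia
    alpha = mass * g * r / Istar        # angular acceleration
    a = -mass * g * r**2 / Istar        # linear acceleration (downward)
    return omega, alpha, v, a

# initial derivatives should be approximately (0, 180.5, 0, -2.9)
d0 = slope(0.0, 0.0, L, 0.0)
assert abs(d0[1] - 180.5) < 1.0
assert abs(d0[3] + 2.9) < 0.1

# crude Euler integration; stopping when the string runs out (y == 0)
# plays the role of the event function
dt, t = 1e-4, 0.0
theta, omega, y, v = 0.0, 0.0, L, 0.0
while y > 0 and t < 5.0:
    dtheta, domega, dy, dv = slope(theta, omega, y, v)
    theta += dtheta * dt
    omega += domega * dt
    y += dy * dt
    v += dv * dt
    t += dt

print(f'fall time ~ {t:.2f} s')
```

Note that the magnitude of the linear acceleration actually shrinks as the string unwinds (r decreases faster in the numerator r^2 than in I + m r^2), which is why the fall takes noticeably longer than a free drop.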
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentinel-2 Cloud Masking with s2cloudless
Author: jdbcode
Step1: Assemble cloud mask components
This section builds an S2 SR collection and defines functions to add cloud and cloud shadow component layers to each image in the collection.
Define collection filter and cloud mask parameters
Define parameters that are used to filter the S2 image collection and determine cloud and cloud shadow identification.
|Parameter | Type | Description |
Step2: Build a Sentinel-2 collection
Sentinel-2 surface reflectance and Sentinel-2 cloud probability are two different image collections. Each collection must be filtered similarly (e.g., by date and bounds) and then the two filtered collections must be joined. Define a function to filter the SR and s2cloudless collections according to area of interest and date parameters, then join them on the system:index property.
Step3: Apply the get_s2_sr_cld_col function to build a collection according to the parameters defined above.
Step4: Define cloud mask component functions
Cloud components
Define a function to add the s2cloudless probability layer and derived cloud mask as bands to an S2 SR image input.
Step5: Cloud shadow components
Define a function to add dark pixels, cloud projection, and identified shadows as bands to an S2 SR image input. Note that the image input needs to be the result of the above add_cloud_bands function because it relies on knowing which pixels are considered cloudy ('clouds' band).
Step6: Final cloud-shadow mask
Define a function to assemble all of the cloud and cloud shadow components and produce the final mask.
Step7: Visualize and evaluate cloud mask components
This section provides functions for displaying the cloud and cloud shadow components. In most cases, adding all components to images and viewing them is unnecessary.
This section is included to illustrate how the cloud/cloud shadow mask is developed and demonstrate how to test and evaluate various parameters, which is helpful when defining masking variables for an unfamiliar region or time of year. In applications outside of this tutorial, if you prefer to include only the final cloud/cloud shadow mask along with the original image bands, replace return img_cloud_shadow.addBands(is_cld_shdw) with return img.addBands(is_cld_shdw) in the add_cld_shdw_mask function.
Step8: Define a function to display all of the cloud and cloud shadow components to an interactive Folium map. The input is an image collection where each image is the result of the add_cld_shdw_mask function defined previously.
Step9: Display mask component layers
Map the add_cld_shdw_mask function over the collection to add mask component bands to each image, then display the results. Give the system some time to render everything, it should take less than a minute.
Step10: Evaluate mask component layers
In the above map, use the layer control panel in the upper right corner to toggle layers on and off; layer names are the same as band names, for easy code referral.
Note that the layers have a minimum zoom level of 9 to avoid resource issues that can occur when visualizing layers that depend on the ee.Image.reproject function (used during cloud shadow project and mask dilation).
Try changing the above CLD_PRB_THRESH, NIR_DRK_THRESH, CLD_PRJ_DIST, and BUFFER input variables and rerunning the previous cell to see how the results change. Find a good set of values for a given overpass and then try the procedure with a new overpass with different cloud conditions (this S2 SR image browser app is handy for quickly identifying images and determining image collection filter criteria). Try to identify a set of parameter values that balances cloud/cloud shadow commission and omission error for a range of cloud types. In the next section, we'll use the values to actually apply the mask to generate a cloud-free composite for 2020.
Apply cloud and cloud shadow mask
In this section we'll generate a cloud-free composite for the same region as above that represents mean reflectance for July and August, 2020.
Define collection filter and cloud mask parameters
We'll redefine the parameters to be a little more aggressive, i.e. decrease the cloud probability threshold, increase the cloud projection distance, and increase the buffer. These changes will increase cloud commission error (mask out some clear pixels), but since we will be compositing images from three months, there should be plenty of observations to complete the mosaic.
Step11: Build a Sentinel-2 collection
Reassemble the S2-cloudless collection since the collection filter parameters have changed.
Step12: Define cloud mask application function
Define a function to apply the cloud mask to each image in the collection.
Step13: Process the collection
Add cloud and cloud shadow component bands to each image and then apply the mask to each image. Reduce the collection by median (in your application, you might consider using medoid reduction to build a composite from actual data values, instead of per-band statistics).
Step14: Display the cloud-free composite
Display the results. Be patient while the map renders, it may take a minute; ee.Image.reproject is forcing computations to happen at 100 and 20 m scales (i.e. it is not relying on appropriate pyramid level scales for analysis). The issues with ee.Image.reproject being resource-intensive in this case are mostly confined to interactive map viewing. Batch image exports and table reduction exports where the scale parameter is set to typical Sentinel-2 scales (10-60 m) are less affected.
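The apply-the-mask-then-median workflow described above can be illustrated without Earth Engine at all. The toy below is a pure-Python stand-in for updateMask followed by ImageCollection.median; the tiny 4-pixel "images" are invented:

```python
from statistics import median

# three tiny single-band "images" over the same 4 pixels, with a
# per-pixel cloud mask (True = keep, i.e. not cloudy)
images = [
    {'values': [0.10, 0.80, 0.30, 0.25], 'mask': [True, False, True, True]},
    {'values': [0.12, 0.20, 0.90, 0.27], 'mask': [True, True, False, True]},
    {'values': [0.11, 0.22, 0.31, 0.95], 'mask': [True, True, True, False]},
]

def masked_median_composite(imgs):
    """Per-pixel median over the unmasked observations only."""
    n_pix = len(imgs[0]['values'])
    out = []
    for p in range(n_pix):
        obs = [im['values'][p] for im in imgs if im['mask'][p]]
        out.append(median(obs) if obs else None)   # None = no clear view
    return out

composite = masked_median_composite(images)
expected = [0.11, 0.21, 0.305, 0.26]
assert all(abs(c - e) < 1e-9 for c, e in zip(composite, expected))
```

Because each cloudy observation (the 0.80, 0.90, 0.95 outliers) is masked before the reduction, the median is taken only over clear views, which is exactly why compositing over three months gives enough observations to fill every pixel.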
Python Code:
import ee

# Trigger the authentication flow.
ee.Authenticate()

# Initialize the library.
ee.Initialize()
Explanation: Sentinel-2 Cloud Masking with s2cloudless
Author: jdbcode
This tutorial is an introduction to masking clouds and cloud shadows in Sentinel-2 (S2) surface reflectance (SR) data using Earth Engine. Clouds are identified from the S2 cloud probability dataset (s2cloudless) and shadows are defined by cloud projection intersection with low-reflectance near-infrared (NIR) pixels. For a similar JavaScript API script, see this Code Editor example.
Run me first
Run the following cell to initialize the Earth Engine API. The output will contain instructions on how to grant this notebook access to Earth Engine using your account.
End of explanation
AOI = ee.Geometry.Point(-122.269, 45.701)
START_DATE = '2020-06-01'
END_DATE = '2020-06-02'
CLOUD_FILTER = 60
CLD_PRB_THRESH = 50
NIR_DRK_THRESH = 0.15
CLD_PRJ_DIST = 1
BUFFER = 50
Explanation: Assemble cloud mask components
This section builds an S2 SR collection and defines functions to add cloud and cloud shadow component layers to each image in the collection.
Define collection filter and cloud mask parameters
Define parameters that are used to filter the S2 image collection and determine cloud and cloud shadow identification.
|Parameter | Type | Description |
| :-- | :-- | :-- |
| AOI | ee.Geometry | Area of interest |
| START_DATE | string | Image collection start date (inclusive) |
| END_DATE | string | Image collection end date (exclusive) |
| CLOUD_FILTER | integer | Maximum image cloud cover percent allowed in image collection |
| CLD_PRB_THRESH | integer | Cloud probability (%); values greater than are considered cloud |
| NIR_DRK_THRESH | float | Near-infrared reflectance; values less than are considered potential cloud shadow |
| CLD_PRJ_DIST | float | Maximum distance (km) to search for cloud shadows from cloud edges |
| BUFFER | integer | Distance (m) to dilate the edge of cloud-identified objects |

The values currently set for AOI, START_DATE, END_DATE, and CLOUD_FILTER are intended to build a collection for a single S2 overpass of a region near Portland, Oregon, USA. When parameterizing and evaluating cloud masks for a new area, it is good practice to identify a single overpass date and limit the regional extent to minimize processing requirements. If you want to work with a different example, use this Earth Engine App to identify an image that includes some clouds, then replace the relevant parameter values below with those provided in the app.
End of explanation
def get_s2_sr_cld_col(aoi, start_date, end_date):
    # Import and filter S2 SR.
    s2_sr_col = (ee.ImageCollection('COPERNICUS/S2_SR')
        .filterBounds(aoi)
        .filterDate(start_date, end_date)
        .filter(ee.Filter.lte('CLOUDY_PIXEL_PERCENTAGE', CLOUD_FILTER)))

    # Import and filter s2cloudless.
    s2_cloudless_col = (ee.ImageCollection('COPERNICUS/S2_CLOUD_PROBABILITY')
        .filterBounds(aoi)
        .filterDate(start_date, end_date))

    # Join the filtered s2cloudless collection to the SR collection by the 'system:index' property.
    return ee.ImageCollection(ee.Join.saveFirst('s2cloudless').apply(**{
        'primary': s2_sr_col,
        'secondary': s2_cloudless_col,
        'condition': ee.Filter.equals(**{
            'leftField': 'system:index',
            'rightField': 'system:index'
        })
    }))
Explanation: Build a Sentinel-2 collection
Sentinel-2 surface reflectance and Sentinel-2 cloud probability are two different image collections. Each collection must be filtered similarly (e.g., by date and bounds) and then the two filtered collections must be joined. Define a function to filter the SR and s2cloudless collections according to area of interest and date parameters, then join them on the system:index property. The result is a copy of the SR collection where each image has a new 's2cloudless' property whose value is the corresponding s2cloudless image.
End of explanation
s2_sr_cld_col_eval = get_s2_sr_cld_col(AOI, START_DATE, END_DATE)
Explanation: Apply the get_s2_sr_cld_col function to build a collection according to the parameters defined above.
End of explanation
def add_cloud_bands(img):
    # Get s2cloudless image, subset the probability band.
    cld_prb = ee.Image(img.get('s2cloudless')).select('probability')

    # Condition s2cloudless by the probability threshold value.
    is_cloud = cld_prb.gt(CLD_PRB_THRESH).rename('clouds')

    # Add the cloud probability layer and cloud mask as image bands.
    return img.addBands(ee.Image([cld_prb, is_cloud]))
Explanation: Define cloud mask component functions
Cloud components
Define a function to add the s2cloudless probability layer and derived cloud mask as bands to an S2 SR image input.
End of explanation
def add_shadow_bands(img):
    # Identify water pixels from the SCL band.
    not_water = img.select('SCL').neq(6)

    # Identify dark NIR pixels that are not water (potential cloud shadow pixels).
    SR_BAND_SCALE = 1e4
    dark_pixels = img.select('B8').lt(NIR_DRK_THRESH*SR_BAND_SCALE).multiply(not_water).rename('dark_pixels')

    # Determine the direction to project cloud shadow from clouds (assumes UTM projection).
shadow_azimuth = ee.Number(90).subtract(ee.Number(img.get('MEAN_SOLAR_AZIMUTH_ANGLE'))); # Project shadows from clouds for the distance specified by the CLD_PRJ_DIST input. cld_proj = (img.select('clouds').directionalDistanceTransform(shadow_azimuth, CLD_PRJ_DIST*10) .reproject(**{'crs': img.select(0).projection(), 'scale': 100}) .select('distance') .mask() .rename('cloud_transform')) # Identify the intersection of dark pixels with cloud shadow projection. shadows = cld_proj.multiply(dark_pixels).rename('shadows') # Add dark pixels, cloud projection, and identified shadows as image bands. return img.addBands(ee.Image([dark_pixels, cld_proj, shadows])) Explanation: Cloud shadow components Define a function to add dark pixels, cloud projection, and identified shadows as bands to an S2 SR image input. Note that the image input needs to be the result of the above add_cloud_bands function because it relies on knowing which pixels are considered cloudy ('clouds' band). End of explanation def add_cld_shdw_mask(img): # Add cloud component bands. img_cloud = add_cloud_bands(img) # Add cloud shadow component bands. img_cloud_shadow = add_shadow_bands(img_cloud) # Combine cloud and shadow mask, set cloud and shadow as value 1, else 0. is_cld_shdw = img_cloud_shadow.select('clouds').add(img_cloud_shadow.select('shadows')).gt(0) # Remove small cloud-shadow patches and dilate remaining pixels by BUFFER input. # 20 m scale is for speed, and assumes clouds don't require 10 m precision. is_cld_shdw = (is_cld_shdw.focalMin(2).focalMax(BUFFER*2/20) .reproject(**{'crs': img.select([0]).projection(), 'scale': 20}) .rename('cloudmask')) # Add the final cloud-shadow mask to the image. return img_cloud_shadow.addBands(is_cld_shdw) Explanation: Final cloud-shadow mask Define a function to assemble all of the cloud and cloud shadow components and produce the final mask. End of explanation # Import the folium library. 
import folium # Define a method for displaying Earth Engine image tiles to a folium map. def add_ee_layer(self, ee_image_object, vis_params, name, show=True, opacity=1, min_zoom=0): map_id_dict = ee.Image(ee_image_object).getMapId(vis_params) folium.raster_layers.TileLayer( tiles=map_id_dict['tile_fetcher'].url_format, attr='Map Data &copy; <a href="https://earthengine.google.com/">Google Earth Engine</a>', name=name, show=show, opacity=opacity, min_zoom=min_zoom, overlay=True, control=True ).add_to(self) # Add the Earth Engine layer method to folium. folium.Map.add_ee_layer = add_ee_layer Explanation: Visualize and evaluate cloud mask components This section provides functions for displaying the cloud and cloud shadow components. In most cases, adding all components to images and viewing them is unnecessary. This section is included to illustrate how the cloud/cloud shadow mask is developed and demonstrate how to test and evaluate various parameters, which is helpful when defining masking variables for an unfamiliar region or time of year. In applications outside of this tutorial, if you prefer to include only the final cloud/cloud shadow mask along with the original image bands, replace: return img_cloud_shadow.addBands(is_cld_shdw) with return img.addBands(is_cld_shdw) in the above add_cld_shdw_mask function. Define functions to display image and mask component layers. Folium will be used to display map layers. Import folium and define a method to display Earth Engine image tiles. End of explanation def display_cloud_layers(col): # Mosaic the image collection. img = col.mosaic() # Subset layers and prepare them for display. clouds = img.select('clouds').selfMask() shadows = img.select('shadows').selfMask() dark_pixels = img.select('dark_pixels').selfMask() probability = img.select('probability') cloudmask = img.select('cloudmask').selfMask() cloud_transform = img.select('cloud_transform') # Create a folium map object. 
center = AOI.centroid(10).coordinates().reverse().getInfo() m = folium.Map(location=center, zoom_start=12) # Add layers to the folium map. m.add_ee_layer(img, {'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 2500, 'gamma': 1.1}, 'S2 image', True, 1, 9) m.add_ee_layer(probability, {'min': 0, 'max': 100}, 'probability (cloud)', False, 1, 9) m.add_ee_layer(clouds, {'palette': 'e056fd'}, 'clouds', False, 1, 9) m.add_ee_layer(cloud_transform, {'min': 0, 'max': 1, 'palette': ['white', 'black']}, 'cloud_transform', False, 1, 9) m.add_ee_layer(dark_pixels, {'palette': 'orange'}, 'dark_pixels', False, 1, 9) m.add_ee_layer(shadows, {'palette': 'yellow'}, 'shadows', False, 1, 9) m.add_ee_layer(cloudmask, {'palette': 'orange'}, 'cloudmask', True, 0.5, 9) # Add a layer control panel to the map. m.add_child(folium.LayerControl()) # Display the map. display(m) Explanation: Define a function to display all of the cloud and cloud shadow components to an interactive Folium map. The input is an image collection where each image is the result of the add_cld_shdw_mask function defined previously. End of explanation s2_sr_cld_col_eval_disp = s2_sr_cld_col_eval.map(add_cld_shdw_mask) display_cloud_layers(s2_sr_cld_col_eval_disp) Explanation: Display mask component layers Map the add_cld_shdw_mask function over the collection to add mask component bands to each image, then display the results. Give the system some time to render everything, it should take less than a minute. End of explanation AOI = ee.Geometry.Point(-122.269, 45.701) START_DATE = '2020-06-01' END_DATE = '2020-09-01' CLOUD_FILTER = 60 CLD_PRB_THRESH = 40 NIR_DRK_THRESH = 0.15 CLD_PRJ_DIST = 2 BUFFER = 100 Explanation: Evaluate mask component layers In the above map, use the layer control panel in the upper right corner to toggle layers on and off; layer names are the same as band names, for easy code referral. 
Note that the layers have a minimum zoom level of 9 to avoid resource issues that can occur when visualizing layers that depend on the ee.Image.reproject function (used during cloud shadow project and mask dilation). Try changing the above CLD_PRB_THRESH, NIR_DRK_THRESH, CLD_PRJ_DIST, and BUFFER input variables and rerunning the previous cell to see how the results change. Find a good set of values for a given overpass and then try the procedure with a new overpass with different cloud conditions (this S2 SR image browser app is handy for quickly identifying images and determining image collection filter criteria). Try to identify a set of parameter values that balances cloud/cloud shadow commission and omission error for a range of cloud types. In the next section, we'll use the values to actually apply the mask to generate a cloud-free composite for 2020. Apply cloud and cloud shadow mask In this section we'll generate a cloud-free composite for the same region as above that represents mean reflectance for July and August, 2020. Define collection filter and cloud mask parameters We'll redefine the parameters to be a little more aggressive, i.e. decrease the cloud probability threshold, increase the cloud projection distance, and increase the buffer. These changes will increase cloud commission error (mask out some clear pixels), but since we will be compositing images from three months, there should be plenty of observations to complete the mosaic. End of explanation s2_sr_cld_col = get_s2_sr_cld_col(AOI, START_DATE, END_DATE) Explanation: Build a Sentinel-2 collection Reassemble the S2-cloudless collection since the collection filter parameters have changed. End of explanation def apply_cld_shdw_mask(img): # Subset the cloudmask band and invert it so clouds/shadow are 0, else 1. not_cld_shdw = img.select('cloudmask').Not() # Subset reflectance bands and update their masks, return the result. 
return img.select('B.*').updateMask(not_cld_shdw) Explanation: Define cloud mask application function Define a function to apply the cloud mask to each image in the collection. End of explanation s2_sr_median = (s2_sr_cld_col.map(add_cld_shdw_mask) .map(apply_cld_shdw_mask) .median()) Explanation: Process the collection Add cloud and cloud shadow component bands to each image and then apply the mask to each image. Reduce the collection by median (in your application, you might consider using medoid reduction to build a composite from actual data values, instead of per-band statistics). End of explanation # Create a folium map object. center = AOI.centroid(10).coordinates().reverse().getInfo() m = folium.Map(location=center, zoom_start=12) # Add layers to the folium map. m.add_ee_layer(s2_sr_median, {'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 2500, 'gamma': 1.1}, 'S2 cloud-free mosaic', True, 1, 9) # Add a layer control panel to the map. m.add_child(folium.LayerControl()) # Display the map. display(m) Explanation: Display the cloud-free composite Display the results. Be patient while the map renders, it may take a minute; ee.Image.reproject is forcing computations to happen at 100 and 20 m scales (i.e. it is not relying on appropriate pyramid level scales for analysis). The issues with ee.Image.reproject being resource-intensive in this case are mostly confined to interactive map viewing. Batch image exports and table reduction exports where the scale parameter is set to typical Sentinel-2 scales (10-60 m) are less affected. End of explanation
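The server-side logic above — threshold the cloud probability, intersect dark pixels with the projected shadow direction, union the two masks, then dilate — can be sketched on toy NumPy arrays to make the Boolean algebra concrete. This is only a local illustration: the 3x3 grids and the `dilate` helper are invented stand-ins for the per-pixel `ee.Image` operations, and only the `CLD_PRB_THRESH` value comes from the tutorial.

```python
import numpy as np

# Toy stand-ins for the per-pixel layers Earth Engine computes server-side.
CLD_PRB_THRESH = 40  # cloud probability threshold (%), as in the tutorial
prob = np.array([[10, 55, 90],
                 [30, 70, 20],
                 [ 5, 15, 60]])
dark_pixels = np.array([[0, 0, 0],   # non-water pixels with low NIR reflectance
                        [1, 0, 0],
                        [0, 1, 0]])
cloud_proj = np.array([[0, 0, 0],    # pixels along the projected shadow direction
                       [1, 1, 0],
                       [0, 1, 1]])

clouds = prob > CLD_PRB_THRESH                      # same test as cld_prb.gt(...)
shadows = (dark_pixels & cloud_proj).astype(bool)   # dark AND in the shadow projection
mask = clouds | shadows                             # union, like .add(...).gt(0)

def dilate(m):
    """Grow True regions by one pixel -- a crude stand-in for focalMax."""
    out = m.copy()
    out[1:, :] |= m[:-1, :]
    out[:-1, :] |= m[1:, :]
    out[:, 1:] |= m[:, :-1]
    out[:, :-1] |= m[:, 1:]
    return out

cloudmask = dilate(mask)
print('cloud/shadow pixels before dilation:', int(mask.sum()))
```

On real imagery the same three steps run per pixel inside Earth Engine; the point here is only that the final mask is ordinary Boolean algebra over the component layers.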
Given the following text description, write Python code to implement the functionality described below step by step Description: Planet Analytics API Tutorial Summary Statistics Step1: 2. Post a stats job request a) Check API Connection Note Step2: b) Select your subscription The analytics stats API enables you to create summary stats reports for your analytics subscriptions. You will need the id of a subscription of interest in order to make a stats request. Step3: c) Post a stats report job request to the Analytic Feeds API Step4: d) Poll the stats endpoint for job completion Step5: 3. Get job report results a) Get report link from the completed stats job Step6: b) Get report as json Step7: c) Get job report results as a dataframe The summary stats report can be returned as a csv file. Below, we request the csv and create a pandas dataframe. Step8: 4. Visualize the time series Step9: 5. Customize area of interest and time range a) Pick out an AOI First take a look at the full subscription AOI Step10: You can request stats for the entire subscription geometry or for subregions of the subscription geometry. Below we construct a small box inside of the subscription boundary for this demo. First, convert the subscription boundary to a shapely shape Step11: Now get a bounding box around the subscription geometry Step12: Add the bounding box to the map. The bounding box should contain the entire aoi_shape. Step13: Construct a smaller box that will fit inside of the aoi_shape Step14: Get a custom AOI by taking the intersection of the subscription geometry and the smaller box Step15: Visualize the new custom_aoi on the map Step16: We used shapely to construct a custom_aoi. We now need to convert our custom area of interest into geojson again for the api request. Alternatively, you can go to geojson.io to draw a custom area of interest on a map and get the geojson representation.
Note Step17: b) Select a Custom Time Range Step18: c) Request the custom Report Step19: Poll for job completion Step20: Get the customized stats report as a dataframe
Python Code: !pip install --quiet hvplot
Explanation: Planet Analytics API Tutorial Summary Statistics: Buildings Overview Introduction Post a stats job request Get job report results Visualize the time series Customize area of interest and time range 1. Introduction This notebook demonstrates how to request building summary statistics for a subscription using the Analytics Feeds Stats API and visualize them as time series, enabling further analyses including patterns of life, development trends and anomaly detection. The workflow involves: - Posting a stats job request - Polling the job stats endpoint - Getting the job report results - Restructuring the results into a pandas dataframe - Visualizing the time series Add extra dependencies This notebook requires hvplot, which may not be available in the main notebook docker image.
End of explanation
import os
import requests
ANALYTICS_BASE_URL = 'https://api.planet.com/analytics/'
# ANALYTICS_BASE_URL = 'https://sif-next.prod.planet-labs.com/'
# change this line if your API key is not set as an env var
API_KEY = os.environ['PL_API_KEY']
# alternatively, you can just set your API key directly as a string variable:
# API_KEY = "YOUR_PLANET_API_KEY_HERE"
# set up a reusable session with required headers
session = requests.Session()
session.headers.update({'content-type':'application/json','Authorization': 'api-key ' + API_KEY})
# make a request to the analytics api
resp = session.get(ANALYTICS_BASE_URL)
if resp.ok:
    print("Yay, you are able to connect to the Planet Analytics API!")
else:
    print("Something is wrong:", resp.content)
Explanation: 2. Post a stats job request a) Check API Connection Note: If you do not have access to the Analytics Feeds API, you may not be able to run through these examples. Contact Sales to learn more.
End of explanation import pandas as pd # Make sure you have a subscription for this buildings feed FEED_ID = '4cdb1add-2b0a-49fd-9968-54f85bbb6172' resp = session.get(f"{ANALYTICS_BASE_URL}subscriptions?feedID={FEED_ID}") if not resp.ok: raise Exception('Bad response:', resp.content) subscriptions = resp.json()['data'] if len(subscriptions) == 0: raise Exception(f"You do not have any subscriptions for feed {FEED_ID}") df = pd.DataFrame.from_records(subscriptions) df[['id', 'title', 'description', 'startTime', 'endTime']] # you can use any of the above subscriptions subscription = subscriptions[0] Explanation: b) Select your subscription The analytics stats API enables you to create summary stats reports for your analytics subscriptions. You will need the id of a subscription of interest in order to make a stats request. End of explanation import json import pprint request_body = { "title": "Stats Demo", "subscriptionID": subscription['id'], "interval": "month", # most road and building feeds generate results on a monthly cadence # "collection": collection, # add a geojson feature collection if you want use a custom area of interest # "startTime": start_time, # add custom start time here if desired # "endTime": end_time # add custom end time here if desired } stats_post_url = ANALYTICS_BASE_URL + 'stats' job_post_resp = session.post( stats_post_url, data=json.dumps(request_body) ) pprint.pprint(job_post_resp.json()) Explanation: c) Post a stats report job request to the Analytic Feeds API End of explanation import time job_link = job_post_resp.json()['links'][0]['href'] status = "pending" while status != "completed": report_status_resp = session.get( job_link, ) status = report_status_resp.json()['status'] print(status) time.sleep(2) pprint.pprint(report_status_resp.json()) Explanation: d) Poll the stats endpoint for job completion End of explanation report_results_link = report_status_resp.json()['links'][-1]['href'] report_results_link Explanation: 3. 
Get job report results a) Get report link from the completed stats job End of explanation results_resp = session.get( report_results_link, ) print(results_resp.status_code) pprint.pprint(results_resp.json()) Explanation: b) Get report as json End of explanation report_csv_url = report_results_link + '?format=csv' print(report_csv_url) from io import StringIO csv_resp = session.get(report_csv_url) data = StringIO(csv_resp.text) df = pd.read_csv(data) df.head() Explanation: c) Get job report results as a dataframe The summary stats report can be returned as a csv file. Below, we request the csv and create a pandas dataframe. End of explanation import holoviews as hv import hvplot.pandas from bokeh.models.formatters import DatetimeTickFormatter hv.extension('bokeh') formatter = DatetimeTickFormatter(months='%b %Y') df.hvplot().options(xformatter=formatter, width=1000) df['Building Area Percentage'] = df['Feature Area'] / df['Total Area'] * 100 df['Building Area Percentage'].hvplot().options(xformatter=formatter, width=600) Explanation: 4. Visualize the time series End of explanation from ipyleaflet import Map, GeoJSON # center an ipyleaflet map around the subscription geom = subscription['geometry'] if geom['type'] == 'Polygon': lon, lat = geom['coordinates'][0][0] elif geom['type'] == 'MultiPolygon': lon, lat = geom['coordinates'][0][0][0] else: print('You may need to re-center the map') lon, lat = -122.41, 37.77 m = Map(center=(lat, lon), zoom=8) # add the subscription geometry to the map polygon = GeoJSON(data=geom) m.add_layer(polygon); m Explanation: 5. Customize area of interest and time range a) Pick out an AOI First take a look at the full subscription AOI: End of explanation import shapely.geometry aoi_shape = shapely.geometry.shape(subscription['geometry']) aoi_shape Explanation: You can request stats for the entire subscription geometry or for subregions of the subscription geometry. 
Below we construct a small box inside of the subscription boundary for this demo. First, convert the subscription boundary to a shapely shape
End of explanation
print(aoi_shape.bounds)
minx, miny, maxx, maxy = aoi_shape.bounds
bbox = shapely.geometry.box(minx, miny, maxx, maxy)
bbox
Explanation: Now get a bounding box around the subscription geometry:
End of explanation
bbox_polygon = GeoJSON(data=shapely.geometry.mapping(bbox),
                       style={'color': 'green', 'opacity': 1, 'fillOpacity': 0.1})
m.add_layer(bbox_polygon)
m
Explanation: Add the bounding box to the map. The bounding box should contain the entire aoi_shape.
End of explanation
x_diff = maxx - minx
minx2 = minx + x_diff / 5
maxx2 = maxx - x_diff / 5
y_diff = maxy - miny
miny2 = miny + y_diff / 5
maxy2 = maxy - y_diff / 5
smaller_box = shapely.geometry.box(minx2, miny2, maxx2, maxy2)
print(smaller_box.bounds)
smaller_box
Explanation: Construct a smaller box that will fit inside of the aoi_shape
End of explanation
custom_aoi = smaller_box.intersection(aoi_shape)
custom_aoi
Explanation: Get a custom AOI by taking the intersection of the subscription geometry and the smaller box
End of explanation
bbox_polygon = GeoJSON(data=shapely.geometry.mapping(custom_aoi),
                       style={'color': 'red', 'opacity': 1, 'fillOpacity': 0.1})
m.add_layer(bbox_polygon)
m
Explanation: Visualize the new custom_aoi on the map
End of explanation
import geojson
import pprint
feature = geojson.Feature(geometry=shapely.geometry.mapping(custom_aoi), id="my_custom_box")
collection = geojson.FeatureCollection(features=[feature])
pprint.pprint(collection)
Explanation: We used shapely to construct a custom_aoi. We now need to convert our custom area of interest into geojson again for the api request. Alternatively, you can go to geojson.io to draw a custom area of interest on a map and get the geojson representation. Note: If you don't provide a custom AOI in your stats request, the entire subscription geometry is used.
End of explanation import datetime, dateutil start_datetime = dateutil.parser.parse(subscription['startTime']) + datetime.timedelta(weeks=4) # isoformat returns a time with +00:00 and this api requires the Z suffix and no time offset start_time = start_datetime.isoformat()[:-6] + 'Z' end_time = subscription['endTime'] print(start_time) print(end_time) Explanation: b) Select a Custom Time Range End of explanation request_body = { "title": "Building Stats Demo - Custom AOI and TOI", "subscriptionID": subscription['id'], "interval": "month", # most road and building feeds generate results on a monthly cadence "collection": collection, # this is the custom_aoi as a geojson feature collection, "clipToSubscription": True, # use this option if you are ok with the custom AOI being clipped to the subscription boundary "startTime": start_time, # custom start time "endTime": end_time # custom end time } job_post_resp = session.post( stats_post_url, data=json.dumps(request_body) ) pprint.pprint(job_post_resp.json()) Explanation: c) Request the custom Report End of explanation job_link = job_post_resp.json()['links'][0]['href'] status = "pending" while status != "completed": report_status_resp = session.get( job_link, ) status = report_status_resp.json()['status'] print(status) time.sleep(2) Explanation: Poll for job completion End of explanation report_link = [l for l in report_status_resp.json()['links'] if l['rel'] == 'report'][0]['href'] report_csv_url = report_link + '?format=csv' csv_resp = session.get(report_csv_url) data = StringIO(csv_resp.text) df = pd.read_csv(data) df.head() Explanation: Get the customized stats report as a dataframe End of explanation
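The notebook polls the job endpoint with a fixed two-second sleep in two places. That pattern can be factored into a small reusable helper with a timeout and backoff. This is a sketch of our own, not part of the Planet client: `poll_until_complete` and the simulated `responses` iterator are hypothetical names, and in real use `fetch_status` would wrap `session.get(job_link)` and extract the `status` field from the JSON response.

```python
import time

def poll_until_complete(fetch_status, interval=2.0, timeout=600.0, backoff=1.5):
    """Call fetch_status() until it returns 'completed'.

    fetch_status: zero-argument callable returning the job status string.
    Raises RuntimeError on a 'failed' status, TimeoutError past the deadline.
    """
    deadline = time.monotonic() + timeout
    while True:
        status = fetch_status()
        if status == 'completed':
            return status
        if status == 'failed':
            raise RuntimeError('stats job failed')
        if time.monotonic() >= deadline:
            raise TimeoutError('stats job did not complete in time')
        time.sleep(interval)
        interval *= backoff  # poll long-running jobs less aggressively

# Simulate a job that needs three polls before completing.
responses = iter(['pending', 'running', 'completed'])
result = poll_until_complete(lambda: next(responses), interval=0.01)
print(result)
```

The backoff factor keeps short jobs responsive while avoiding hammering the API for long ones; with the real endpoint, `fetch_status` would be something like `lambda: session.get(job_link).json()['status']`.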
Given the following text description, write Python code to implement the functionality described below step by step Description: &larr; Back to Index Zero Crossing Rate The zero crossing rate indicates the number of times that a signal crosses the horizontal axis. Let's load a signal Step1: Listen to the signal Step2: Plot the signal Step3: Let's zoom in Step4: I count five zero crossings. Let's compute the zero crossings using librosa. Step5: That computed a binary mask where True indicates the presence of a zero crossing. To find the total number of zero crossings, use sum Step6: To find the zero-crossing rate over time, use zero_crossing_rate Step7: Plot the zero-crossing rate Step8: Note how the high zero-crossing rate corresponds to the presence of the snare drum. The reason for the high rate near the beginning is because the silence oscillates quietly around zero Step9: A simple hack around this is to add a small constant before computing the zero crossing rate Step10: Questions Try for other audio files. Does the zero-crossing rate still return something useful in polyphonic mixtures?
Python Code: x, sr = librosa.load('audio/simple_loop.wav') Explanation: &larr; Back to Index Zero Crossing Rate The zero crossing rate indicates the number of times that a signal crosses the horizontal axis. Let's load a signal: End of explanation ipd.Audio(x, rate=sr) Explanation: Listen to the signal: End of explanation plt.figure(figsize=(14, 5)) librosa.display.waveplot(x, sr=sr) Explanation: Plot the signal: End of explanation n0 = 6500 n1 = 7500 plt.figure(figsize=(14, 5)) plt.plot(x[n0:n1]) Explanation: Let's zoom in: End of explanation zero_crossings = librosa.zero_crossings(x[n0:n1], pad=False) zero_crossings.shape Explanation: I count five zero crossings. Let's compute the zero crossings using librosa. End of explanation print(sum(zero_crossings)) Explanation: That computed a binary mask where True indicates the presence of a zero crossing. To find the total number of zero crossings, use sum: End of explanation zcrs = librosa.feature.zero_crossing_rate(x) print(zcrs.shape) Explanation: To find the zero-crossing rate over time, use zero_crossing_rate: End of explanation plt.figure(figsize=(14, 5)) plt.plot(zcrs[0]) Explanation: Plot the zero-crossing rate: End of explanation plt.figure(figsize=(14, 5)) plt.plot(x[:1000]) plt.ylim(-0.0001, 0.0001) Explanation: Note how the high zero-crossing rate corresponds to the presence of the snare drum. The reason for the high rate near the beginning is because the silence oscillates quietly around zero: End of explanation zcrs = librosa.feature.zero_crossing_rate(x + 0.0001) plt.figure(figsize=(14, 5)) plt.plot(zcrs[0]) Explanation: A simple hack around this is to add a small constant before computing the zero crossing rate: End of explanation ls audio Explanation: Questions Try for other audio files. Does the zero-crossing rate still return something useful in polyphonic mixtures? End of explanation
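For intuition about what the library computes, the crossing detection can be rebuilt in a few lines of plain NumPy. This is a simplified sketch of the idea behind `librosa.zero_crossings` and `librosa.feature.zero_crossing_rate` (no padding, centering, or threshold options), not librosa's actual implementation; the frame and hop sizes are just common defaults.

```python
import numpy as np

def zero_crossings(x):
    """Boolean mask: True where the sign bit flips between consecutive samples."""
    x = np.asarray(x)
    return np.signbit(x[:-1]) != np.signbit(x[1:])

def zero_crossing_rate(x, frame_length=2048, hop_length=512):
    """Fraction of consecutive-sample pairs in each frame that cross zero."""
    rates = []
    for start in range(0, len(x) - frame_length + 1, hop_length):
        frame = x[start:start + frame_length]
        rates.append(zero_crossings(frame).mean())
    return np.array(rates)

# A 440 Hz sine sampled at 22050 Hz crosses zero about 2 * 440 times per second.
sr = 22050
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)
print('total crossings in 1 s:', int(zero_crossings(x).sum()))
```

The frame-wise version makes the connection to the earlier plot explicit: a noisy snare hit flips sign on a large fraction of sample pairs, while a tonal segment flips sign only about twice per fundamental period.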
Given the following text description, write Python code to implement the functionality described below step by step Description: Outline IPython and IPython Notebooks Numpy Pandas Python and IPython python is a programming language and also the name of the program that runs scripts written in that language. If you're running scripts from the command line you can use either ipython with something like ipython my_script.py or python with something like python my_script.py If you're using the command line interpreter interactively to load and explore data, try out a new package, etc. always use ipython over python. This is because ipython has a bunch of features like tab completion, inline help, and easy access to shell commands which are just plain great (more on these in a bit). IPython Notebook IPython notebook is an interactive front-end to ipython which lets you combine snippets of python code with explanations, images, videos, whatever. It's also really convenient for conveying experimental results. http Step1: Arrays and Matrices In addition to arrays which can have any number of dimensions, Numpy also has a matrix data type which always has exactly 2. DO NOT USE matrix. The original intention behind this data type was to make Numpy feel a bit more like Matlab, mainly by making the * operator perform matrix multiplication so you don't have to use np.dot. But matrix isn't as well developed by the Numpy people as array is. matrix is slower and using it will sometimes throw errors in other people's code because everyone expects you to use array. Step2: <span style="color Step3: Pandas Pandas is a python package which adds some useful data analysis features to numpy arrays. Most importantly, it contains a DataFrame data type like the r dataframe Step4: Pandas Joins - If you have time The real magic with a Pandas DataFrame comes from the merge method which can match up the rows and columns from two DataFrames and combine their data. 
Let's load another file which has shoesize for just a few players Step5: <span style="color
Python Code: # you don't have to rename numpy to np but it's customary to do so import numpy as np # you can create a 1-d array with a list of numbers a = np.array([1, 4, 6]) print 'a:' print a print 'a.shape:', a.shape print # you can create a 2-d array with a list of lists of numbers b = np.array([[6, 7], [3, 1], [4, 0]]) print 'b:' print b print 'b.shape:', b.shape print # you can create an array of ones print 'np.ones(3, 4):' print np.ones((3, 4)) print # you can create an array of zeros print 'np.zeros(2, 5):' print np.zeros((2, 5)) print # you can create an array which of a range of numbers and reshape it print 'np.arange(6):' print np.arange(6) print print 'np.arange(6).reshape(2, 3):' print np.arange(6).reshape(2, 3) print # you can take the transpose of a matrix with .transpose or .T print 'b and b.T:' print b print print b.T print # you can iterate over rows i = 0 for this_row in b: print 'row', i, ': ', this_row i += 1 print # you can access sections of an array with slices print 'first two rows of the first column of b:' print b[:2, 0] print # you can concatenate arrays in various ways: print 'np.hstack([b, b]):' print np.hstack([b, b]) print print 'np.vstack([b, b]):' print np.vstack([b, b]) print # note that you get an error if you pass in print 'np.hstack(b, b):' print np.hstack(b, b) print # you can perform matrix multiplication with np.dot() c = np.dot(a, b) print 'c = np.dot(a, b):' print c print # if a is already a numpy array, then you can also use this chained # matrix multiplication notation. use whichever looks cleaner in # context print 'a.dot(b):' print a.dot(b) print # you can perform element-wise multiplication with * d = b * b print 'd = b * b:' print d print a.dot(b) Explanation: Outline IPython and IPython Notebooks Numpy Pandas Python and IPython python is a programming language and also the name of the program that runs scripts written in that language. 
If you're running scripts from the command line you can use either ipython with something like ipython my_script.py or python with something like python my_script.py If you're using the command line interpreter interactively to load and explore data, try out a new package, etc. always use ipython over python. This is because ipython has a bunch of features like tab completion, inline help, and easy access to shell commands which are just plain great (more on these in a bit). IPython Notebook IPython notebook is an interactive front-end to ipython which lets you combine snippets of python code with explanations, images, videos, whatever. It's also really convenient for conveying experimental results. http://nbviewer.ipython.org Notebook Concepts Cells -- That grey box is called a cell. An IPython notebook is nothing but a series of cells. Selecting -- You can tell if you have a cell selected because it will have a thin, black box around it. Running a Cell -- Running a cell displays its output. You can run a cell by pressing shift + enter while it's selected (or click the play button toward the top of the screen). Modes -- There are two different ways of having a cell selected: Command Mode -- Lets you delete a cell and change its type (more on this in a second). Edit Mode -- Lets you change the contents of a cell. Aside: Keyboard Shortcuts That I Use A Lot (When describing keyboard shortcuts, + means 'press at the same time', , means 'press after' Enter -- Run this cell and make a new one after it Esc -- Stop editing this cell Option + Enter -- Run this cell and make a new cell after it (Note: this is OSX specific. 
Check help >> keyboard shortcuts to find your operating system's version) Shift + Enter -- Run this cell and don't make a new one after it Up Arrow and Down Arrow -- Navigate between cells (must be in command mode) Esc, m, Enter -- Convert the current cell to markdown and start editing it again Esc, y, Enter -- Convert the current cell to a code cell and start editing it again Esc, d, d -- Delete the current cell Esc, a -- Create a new cell above the current one Esc, b -- Create a new cell below the current one Command + / -- Toggle comments in Python code (OSX) Ctrl + / -- Toggle comments in Python code (Linux / Windows) <span style="color:red">Check more at here </span> Numpy Numpy is the main package that you'll use for doing scientific computing in Python. Numpy provides a multidimensional array datatype called ndarray which can do things like vector and matrix computations. Resources: Official Numpy Tutorial Numpy, R, Matlab Cheat Sheet Another Numpy, R, Matlab Cheat Sheet End of explanation # you can convert a 1-d array to a 2-d array with np.newaxis print 'a:' print a print 'a.shape:', a.shape print print 'a[np.newaxis] is a 2-d row vector:' print a[np.newaxis] print 'a[np.newaxis].shape:', a[np.newaxis].shape print print 'a[np.newaxis].T: is a 2-d column vector:' print a[np.newaxis].T print 'a[np.newaxis].T.shape:', a[np.newaxis].T.shape print # numpy provides a ton of other functions for working with matrices m = np.array([[1, 2],[3, 4]]) m_inverse = np.linalg.inv(m) print 'inverse of [[1, 2],[3, 4]]:' print m_inverse print print 'm.dot(m_inverse):' print m.dot(m_inverse) # and for doing all kinds of sciency type stuff. like generating random numbers: np.random.seed(5678) n = np.random.randn(3, 4) print 'a matrix with random entries drawn from a Normal(0, 1) distribution:' print n Explanation: Arrays and Matrices In addition to arrays which can have any number of dimensions, Numpy also has a matrix data type which always has exactly 2. DO NOT USE matrix. 
The original intention behind this data type was to make Numpy feel a bit more like Matlab, mainly by making the * operator perform matrix multiplication so you don't have to use np.dot. But matrix isn't as well developed by the Numpy people as array is. matrix is slower and using it will sometimes throw errors in other people's code because everyone expects you to use array. End of explanation a = np.ones(n_data)[np.newaxis].T a np.random.seed(3333) n_data = 10 # number of data points. i.e. N n_dim = 5 # number of dimensions of each datapoint. i.e. D betas = np.random.randn(n_dim + 1) X_no_constant = np.random.randn(n_data, n_dim) print 'X_no_constant:' print X_no_constant print # INSERT YOUR CODE HERE! X = np.hstack([np.ones(n_data)[np.newaxis].T, X_no_constant]) y = np.dot(X, betas) # Tests: y_expected = np.array([-0.41518357, -9.34696153, 5.08980544, -0.26983873, -1.47667864, 1.96580794, 6.87009791, -2.07784135, -0.7726816, -2.74954984]) np.testing.assert_allclose(y, y_expected) print '****** Tests passed! ******' Explanation: <span style="color:red">Self-Driven Numpy Exercise</span> In the cell below, add a column of ones to the matrix X_no_constant. This is a common task in linear regression and general linear modeling and something that you'll have to be able to do later today. Multiply your new matrix by the betas vector below to make a vector called y You'll know you've got it when the cell prints '****** Tests passed! ******' at the bottom. 
Specifically, given a matrix: \begin{equation} \qquad \mathbf{X_{NoConstant}} = \left( \begin{array}{cccc} x_{1,1} & x_{1,2} & \dots & x_{1,D} \\ x_{2,1} & x_{2,2} & \dots & x_{2,D} \\ \vdots & \vdots & \ddots & \vdots \\ x_{i,1} & x_{i,2} & \dots & x_{i,D} \\ \vdots & \vdots & \ddots & \vdots \\ x_{N,1} & x_{N,2} & \dots & x_{N,D} \end{array} \right) \qquad \end{equation} We want to convert it to: \begin{equation} \qquad \mathbf{X} = \left( \begin{array}{ccccc} 1 & x_{1,1} & x_{1,2} & \dots & x_{1,D} \\ 1 & x_{2,1} & x_{2,2} & \dots & x_{2,D} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_{i,1} & x_{i,2} & \dots & x_{i,D} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_{N,1} & x_{N,2} & \dots & x_{N,D} \end{array} \right) \qquad \end{equation} So that if we have a vector of regression coefficients like this: \begin{equation} \qquad \beta = \left( \begin{array}{c} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_j \\ \vdots \\ \beta_D \end{array} \right) \end{equation} We can do this: \begin{equation} \mathbf{y} \equiv \mathbf{X} \mathbf{\beta} \end{equation} End of explanation
# like with numpy, you don't have to rename pandas to pd, but it's customary to do so
import pandas as pd
b = np.array([[6, 7], [3, 1], [4, 0]])
df = pd.DataFrame(data=b, columns=['Weight', 'Height'])
print 'b:'
print b
print
print 'DataFrame version of b:'
print df
print
# Pandas can save and load CSV files.
# Python can do this too, but with Pandas, you get a DataFrame
# at the end which understands things like column headings
baseball = pd.read_csv('data/baseball.dat.txt')
# A DataFrame's .head() method shows its first 5 rows
baseball.head()
# you can see all the column names
print 'baseball.keys():'
print baseball.keys()
print
# print 'baseball.Salary:'
# print baseball.Salary
# print
# print "baseball['Salary']:"
# print baseball['Salary']
baseball.info()
baseball.describe()
# baseball
# You can perform queries on your data frame.
# This statement gives you a True/False vector telling you
# whether the player in each row has a salary over $1 Million
millionaire_indices = baseball['Salary'] > 1000
# print millionaire_indices
# you can use the query indices to look at a subset of your original dataframe
print 'baseball.shape:', baseball.shape
print "baseball[millionaire_indices].shape:", baseball[millionaire_indices].shape
# you can look at a subset of rows and columns at the same time
print "baseball[millionaire_indices][['Salary', 'AVG', 'Runs', 'Name']]:"
baseball[millionaire_indices][['Salary', 'AVG', 'Runs', 'Name']].head()
Explanation: Pandas Pandas is a Python package which adds some useful data analysis features to numpy arrays. Most importantly, it contains a DataFrame data type like the R dataframe: a set of named columns organized into something like a 2d array. Pandas is great. Resources: 10 Minutes to Pandas Pandas Data Structures Tutorial Merge, Join, Concatenate Tutorial Another Numpy/Pandas Tutorial End of explanation
# load shoe size data
shoe_size_df = pd.read_csv('data/baseball2.dat.txt')
shoe_size_df
merged = pd.merge(baseball, shoe_size_df, on=['Name'])
merged
merged_outer = pd.merge(baseball, shoe_size_df, on=['Name'], how='outer')
merged_outer.head()
Explanation: Pandas Joins - If you have time The real magic with a Pandas DataFrame comes from the merge method which can match up the rows and columns from two DataFrames and combine their data. Let's load another file which has shoe size for just a few players End of explanation
np.random.seed(3333)
n_data = 10 # number of data points. i.e. N
n_dim = 5 # number of dimensions of each datapoint. i.e. D
betas = np.random.randn(n_dim + 1)
X_df = pd.DataFrame(data=np.random.randn(n_data, n_dim))
# INSERT YOUR CODE HERE!
X_df.insert(0, 'const', np.ones(n_data)) # prepend the constant column so it comes first
y_new = np.dot(X_df, betas)
# Tests:
assert 'const' in X_df.keys(), 'The new column must be called "const"'
assert np.all(X_df.shape == (n_data, n_dim+1))
assert len(y_new) == n_data
print '****** Tests passed! ******'
X_df
Explanation: <span style="color:red">Self-Driven Pandas Exercise</span> Partner up with someone next to you. Then, on one of your computers: Prepend a column of ones to the dataframe X_df below. Name the new column 'const'. Again, matrix multiply X_df by the betas vector and assign the result to a new variable: y_new You'll know you've got it when the cell prints '****** Tests passed! ******' at the bottom. Hint: This stackoverflow post may be useful: http://stackoverflow.com/questions/13148429/how-to-change-the-order-of-dataframe-columns End of explanation
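The inner-vs-outer merge behavior shown above can be checked on a tiny pair of made-up frames (the names and columns here are invented for illustration, not taken from the baseball data):

```python
import pandas as pd

# two tiny made-up frames -- names and columns invented for illustration
left = pd.DataFrame({"Name": ["a", "b", "c"], "Salary": [100, 200, 300]})
right = pd.DataFrame({"Name": ["b", "c", "d"], "ShoeSize": [9, 10, 11]})

inner = pd.merge(left, right, on=["Name"])              # default how="inner": only matching names
outer = pd.merge(left, right, on=["Name"], how="outer") # keep unmatched rows, filled with NaN
```

With these frames, the inner merge keeps only the two names present in both, while the outer merge keeps all four names and fills the missing Salary/ShoeSize cells with NaN.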
Given the following text description, write Python code to implement the functionality described below step by step Description: Bokeh <a href="https Step1: Plot lines Source Step2: Scatter plot Source Step3: Squares Step4: Hex Tiles Bokeh can plot hexagonal tiles, which are often used for showing binned aggregations. The hex_tile() method takes a size parameter to define the size of the hex grid, and axial coordinates to specify which tiles are present. Step5: Palettes and color mappers Palettes See https Step6: Color Mappers See Step7: Export to PNG or SVG See Step8: Update Source Step9: Animation Source Step10: Interactive plots Source
Python Code: import bokeh from bokeh.plotting import figure, output_notebook, show Explanation: Bokeh <a href="https://colab.research.google.com/github/jdhp-docs/notebooks/blob/master/python_bokeh_en.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a> <a href="https://mybinder.org/v2/gh/jdhp-docs/notebooks/master?filepath=python_bokeh_en.ipynb"><img align="left" src="https://mybinder.org/badge.svg" alt="Open in Binder" title="Open and Execute in Binder"></a> End of explanation # prepare some data x = [1, 2, 3, 4, 5] y = [6, 7, 2, 4, 5] output_notebook() # create a new plot with a title and axis labels p = figure(title="simple line example", x_axis_label='x', y_axis_label='y') # add a line renderer with legend and line thickness p.line(x, y, legend="Temp.", line_width=2) # show the results show(p) Explanation: Plot lines Source: - https://bokeh.pydata.org/en/latest/docs/user_guide/quickstart.html#getting-started - https://bokeh.pydata.org/en/latest/docs/user_guide/notebook.html End of explanation output_notebook() p = figure(plot_width=400, plot_height=400) # add a circle renderer with a size, color, and alpha p.circle([1, 2, 3, 4, 5], [6, 7, 2, 4, 5], size=20, color="navy", alpha=0.5) # show the results show(p) Explanation: Scatter plot Source: - https://bokeh.pydata.org/en/latest/docs/user_guide/plotting.html#scatter-markers Circles End of explanation output_notebook() p = figure(plot_width=400, plot_height=400) # add a square renderer with a size, color, and alpha p.square([1, 2, 3, 4, 5], [6, 7, 2, 4, 5], size=20, color="olive", alpha=0.5) # show the results show(p) Explanation: Squares End of explanation import numpy as np from bokeh.io import output_notebook, show from bokeh.plotting import figure from bokeh.util.hex import axial_to_cartesian output_notebook() q = np.array([0, 0, 0, -1, -1, 1, 1]) r = np.array([0, -1, 1, 0, 1, -1, 0]) p = 
figure(plot_width=400, plot_height=400, toolbar_location=None) p.grid.visible = False p.hex_tile(q, r, size=1, fill_color=["firebrick"]*3 + ["olive"]*2 + ["navy"]*2, line_color="white", alpha=0.5) x, y = axial_to_cartesian(q, r, 1, "pointytop") p.text(x, y, text=["(%d, %d)" % (q,r) for (q, r) in zip(q, r)], text_baseline="middle", text_align="center") show(p) Explanation: Hex Tiles Bokeh can plot hexagonal tiles, which are often used for showing binned aggregations. The hex_tile() method takes a size parameter to define the size of the hex grid, and axial coordinates to specify which tiles are present. End of explanation bokeh.palettes.Category10_3 bokeh.palettes.Category10_4 bokeh.palettes.viridis(10) bokeh.palettes.Viridis256 Explanation: Palettes and color mappers Palettes See https://bokeh.pydata.org/en/latest/docs/reference/palettes.html End of explanation output_notebook() p = figure(plot_width=400, plot_height=400) x = np.array([1, 2, 3, 4, 5]) y = np.array([6, 7, 2, 4, 5]) color_mapper = bokeh.models.mappers.LinearColorMapper(palette=bokeh.palettes.Viridis256, low=y.min(), high=y.max()) # add a circle renderer with a size, color, and alpha p.circle(x, y, size=20, color="navy", fill_color=bokeh.transform.transform('y', color_mapper), alpha=0.5) # show the results show(p) Explanation: Color Mappers See: - https://bokeh.pydata.org/en/latest/docs/reference/models/mappers.html#bokeh.models.mappers.ColorMapper - https://stackoverflow.com/questions/47651752/bokeh-scatterplot-with-gradient-colors - https://stackoverflow.com/questions/49833824/interactive-scatter-plot-in-bokeh-with-hover-tool End of explanation from bokeh.plotting import figure, output_file, show, ColumnDataSource from bokeh.models import HoverTool from bokeh.io import output_notebook output_notebook() source = ColumnDataSource( data=dict( x=[1, 2, 3, 4, 5], y=[2, 5, 8, 2, 7], desc=['A', 'b', 'C', 'd', 'E'], ) ) hover = HoverTool( tooltips=[ ("index", "$index"), ("(x,y)", "($x, $y)"), ("desc", 
"@desc"), ] ) fig = figure(plot_width=300, plot_height=300, #tools=[hover], title="Mouse over the dots") fig.add_tools(hover) fig.circle('x', 'y', size=10, source=source) show(fig) Explanation: Export to PNG or SVG See: https://bokeh.pydata.org/en/latest/docs/user_guide/export.html Tooltips Source: https://gist.github.com/dela3499/e159b388258b5f1a7a3bac42fc0179fd End of explanation from bokeh.io import push_notebook, output_notebook, show from bokeh.plotting import figure output_notebook() p = figure(plot_width=400, plot_height=400) # add a square renderer with a size, color, and alpha s = p.square([1, 2, 3, 4, 5], [6, 7, 2, 4, 5], size=20, color="olive", alpha=0.5) # show the results h = show(p, notebook_handle=True) h s.glyph.fill_color = "navy" push_notebook(handle=h) Explanation: Update Source: - https://bokeh.pydata.org/en/latest/docs/user_guide/notebook.html#notebook-handles End of explanation from bokeh.io import push_notebook, output_notebook, show from bokeh.plotting import figure import time import numpy as np # prepare some data x = np.linspace(-2. * np.pi, 2. * np.pi, 1000) y = np.sin(x) output_notebook() # create a new plot with a title and axis labels p = figure(title="ipywidgets test", x_axis_label='x', y_axis_label='y') # add a line renderer with legend and line thickness l = p.line(x, y, legend="Sin(x)", line_width=2) # show the results h = show(p, notebook_handle=True) # <-- !!! for t in range(200): l.data_source.data['y'] = np.sin(x - 0.1 * t) push_notebook(handle=h) time.sleep(0.01) Explanation: Animation Source: - https://bokeh.pydata.org/en/latest/docs/user_guide/notebook.html#notebook-handles End of explanation from bokeh.io import push_notebook, output_notebook, show from bokeh.plotting import figure import ipywidgets from ipywidgets import interact import numpy as np # prepare some data x = np.linspace(-2. * np.pi, 2. 
* np.pi, 1000) y = np.sin(x) output_notebook() # create a new plot with a title and axis labels p = figure(title="ipywidgets test", x_axis_label='x', y_axis_label='y') # add a line renderer with legend and line thickness l = p.line(x, y, legend="Sin(x)", line_width=2) # show the results h = show(p, notebook_handle=True) @interact(t=(0, 100, 1)) def update_plot(t): l.data_source.data['y'] = np.sin(x - 0.1 * t) push_notebook(handle=h) Explanation: Interactive plots Source: - https://bokeh.pydata.org/en/latest/docs/user_guide/notebook.html#jupyter-interactors End of explanation
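As a rough sketch of what axial_to_cartesian does for a "pointytop" grid, here is the standard axial-to-Cartesian formula in plain numpy. Note this is an assumption about the convention — bokeh's own helper may flip the sign of y for screen coordinates — so treat it as illustrative only:

```python
import numpy as np

def axial_to_cartesian_pointytop(q, r, size=1.0):
    # textbook axial -> cartesian conversion for pointy-top hexagons;
    # bokeh's helper may differ in the sign of y (screen coordinates)
    x = size * np.sqrt(3.0) * (q + r / 2.0)
    y = size * 1.5 * r
    return x, y

q = np.array([0, 1, 0])
r = np.array([0, 0, 1])
x, y = axial_to_cartesian_pointytop(q, r)
```

The size parameter scales the whole grid, which is why hex_tile takes the same size to define the grid spacing.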
Given the following text description, write Python code to implement the functionality described below step by step Description: PolyFill Step1: Methods Geometry methods Step2: Methods to add vertices and edges to table Step3: Main Script Set initial vertices Step4: Force-directed graph
Python Code: from IPython.core.display import HTML from string import Template import pandas as pd import random, math HTML('<script src="lib/d3/d3.min.js"></script>') Explanation: PolyFill: Example of D3 in Jupyter This example shows the combined use of Python and D3 for a randomized 2D space-filling algorithm and visualization. It uses D3's force-directed graph methods. For description of the example's motivations and space-filling algorithm see this blog post. Libraries End of explanation def dotproduct(v1, v2): return sum((a*b) for a, b in zip(v1, v2)) def vectorLength(v): return math.sqrt(dotproduct(v, v)) def angleBtwnVectors(v1, v2): dot = dotproduct(v1, v2) det = v1[0]*v2[1] - v1[1]*v2[0] r = math.atan2(det, dot) return r * 180.0 / math.pi def angleViolation(vectorList,maxAllowed): violation = False angleList = [] for i in range(0,len(vectorList)): angleList.append( angleBtwnVectors([1,0],vectorList[i]) ) angleList.sort() angleList.append(angleList[0] + 360.0) for i in range(1,len(angleList)): if abs(angleList[i] - angleList[i-1]) > maxAllowed: violation = True return violation Explanation: Methods Geometry methods End of explanation def addVertex(vertices): r = len(vertices['x']) vertices.ix[r,'x'] = 0 vertices.ix[r,'y'] = 0 return r def locateVertex(p,r,vertices): vertices.ix[r,'x'] = p[0] vertices.ix[r,'y'] = p[1] def addEdge(r1,r2,vertices): for c in ['a1','a2','a3','a4','a5']: if not vertices.ix[r1,c] > -1: vertices.ix[r1,c] = r2 break for c in ['a1','a2','a3','a4','a5']: if not vertices.ix[r2,c] > -1: vertices.ix[r2,c] = r1 break Explanation: Methods to add vertices and edges to table End of explanation config = { 'random_seed' : 17, 'xExt': [-0.1,1.1] , 'yExt': [-0.1,1.1] , 'densityX' : 40 , 'densityY' : 20 , 'prob_0' : 0.1 , 'prob_3' : 0.075 , 'num_mod_steps' : 10 , 'mod_step_ratio' : 0.1 } random.seed(config['random_seed']) vertices = pd.DataFrame({'x':[],'y':[], 'a1':[],'a2':[],'a3':[],'a4':[],'a5':[],'a6':[] }) y = 0 densityX = config['densityX'] 
densityY = config['densityY'] nextLine = range(densityX) for i in range(len(nextLine)): r = addVertex(vertices) for line in range(densityY): currentLine = nextLine nextLine = [] numPointsInLine = len(currentLine) previousNone = False for i in range(numPointsInLine): p = [i/float(numPointsInLine-1),y] locateVertex(p,currentLine[i],vertices) if i > 0: addEdge(currentLine[i-1],currentLine[i],vertices) if line < densityY-1: # push either 0, 1 or 3 new vertices rnd = random.uniform(0,1) valid = (not previousNone) and line > 0 and i > 0 and i < (numPointsInLine - 1) if rnd < config['prob_0'] and valid: # 0 vertices previousNone = True elif rnd < (config['prob_3'] + config['prob_0']) and line < densityY-2: # 3 vertices nv = [] for j in range(3): if j == 0 and previousNone: nv.append(len(vertices['x']) - 1) else: nv.append(addVertex(vertices)) nextLine.append(nv[j]) addEdge(currentLine[i],nv[0],vertices) addEdge(currentLine[i],nv[2],vertices) previousNone = False else: # 1 vertex if previousNone: nv = len(vertices['x']) - 1 else: nv = addVertex(vertices) nextLine.append(nv) addEdge(currentLine[i],nv,vertices) previousNone = False y += 1.0 / float(densityY-1) vertices.head(10) Explanation: Main Script Set initial vertices End of explanation graph_config = pd.DataFrame(vertices).copy() adjacencies = [] for i in range(len(graph_config['x'])): ve = [] for j in range(1,7): if graph_config.ix[i,'a'+str(j)] > -1: ve.append( int(graph_config.ix[i,'a'+str(j)]) ) adjacencies.append(ve) graph_config['adjacencies'] = adjacencies graph_config['vertex'] = graph_config.index graph_config = graph_config.drop(['a1','a2','a3','a4','a5','a6'],axis=1) graph_config.head() graph_template = Template(''' <style> .vertex { fill: #777; } .edge { stroke: #111; stroke-opacity: 1; stroke-width: 0.5; } .link { stroke: #000; stroke-width: 0.5px; } .node { cursor: move; fill: #ccc; stroke: #000; stroke-width: 0.25px; } .node.fixed { fill: #f00; } </style> <button id="restart" type="button">re-start 
animation</button> <div> <svg width="100%" height="352px" id="graph"></svg> </div> <script> var width = 750; var height = 350; var svg = d3.select("#graph").append("g") .attr("transform", "translate(" + 1 + "," + 1 + ")"); var draw_graph = function() { svg.selectAll(".link").remove(); svg.selectAll(".node").remove(); var force = d3.layout.force() .size([width, height]) .linkStrength(0.9) .friction(0.9) .linkDistance(1) .charge(-1) .gravity(0.007) .theta(0.8) .alpha(0.1) .on("tick", tick); var drag = force.drag() .on("dragstart", dragstart); var link = svg.selectAll(".link"), node = svg.selectAll(".node"); var vertices = $vertices ; graph = {'nodes': [], 'links': []} vertices.forEach(function(v) { var f = false; if ( (v.x <= 0) || (v.x >= 1) || (v.y <= 0) || (v.y >= 0.999999999999999) ) { f = true; } graph.nodes.push({'x': v.x * width, 'y': v.y * height, 'fixed': f }) var e = v.adjacencies; for (var i=0; i<e.length; i++){ graph.links.push({'source': v.vertex, 'target': e[i] }) }; }); force .nodes(graph.nodes) .links(graph.links) .start(); link = link.data(graph.links) .enter().append("line") .attr("class", "link"); node = node.data(graph.nodes) .enter().append("circle") .attr("class", "node") .attr("r", 1.5) .on("dblclick", dblclick) .call(drag); function tick() { link.attr("x1", function(d) { return d.source.x; }) .attr("y1", function(d) { return d.source.y; }) .attr("x2", function(d) { return d.target.x; }) .attr("y2", function(d) { return d.target.y; }); node.attr("cx", function(d) { return d.x; }) .attr("cy", function(d) { return d.y; }); } function dblclick(d) { d3.select(this).classed("fixed", d.fixed = false); } function dragstart(d) { d3.select(this).classed("fixed", d.fixed = true); } } $( "#restart" ).on('click touchstart', function() { draw_graph(); }); draw_graph(); </script> ''') HTML(graph_template.safe_substitute({'vertices': graph_config.to_dict(orient='records')})) Explanation: Force-directed graph End of explanation
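The angleBtwnVectors helper in the geometry section above uses atan2(det, dot) to get a signed angle; a standalone sketch of the same formulation makes the sign convention easy to check:

```python
import math

def signed_angle_deg(v1, v2):
    # same atan2(det, dot) formulation as angleBtwnVectors above
    dot = v1[0] * v2[0] + v1[1] * v2[1]   # dot product
    det = v1[0] * v2[1] - v1[1] * v2[0]   # 2-d cross product ("det")
    return math.degrees(math.atan2(det, dot))

quarter_ccw = signed_angle_deg([1, 0], [0, 1])   # counter-clockwise quarter turn -> +90
quarter_cw = signed_angle_deg([0, 1], [1, 0])    # clockwise quarter turn -> -90
half_turn = signed_angle_deg([1, 0], [-1, 0])    # opposite vectors -> 180
```

The signed result is what lets angleViolation sort edge directions around a vertex and look for gaps larger than the allowed maximum.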
Given the following text description, write Python code to implement the functionality described below step by step Description: Matplotlib Below is some code from the source of Data Science from Scratch Step1: Bar Chart Step2: Histogram A bar chart can also be a good choice for plotting histograms of bucketed numeric values, in order to visually explore how the values are distributed. Step3: Line Charts Step4: Scatterplots A scatterplot is the right choice for visualizing the relationship between two paired sets of data. Step5: Wrong plot Added plt.axis("equal") to fix the scale
Python Code: from matplotlib import pyplot as plt
from collections import Counter
%matplotlib inline
years = [1950, 1960, 1970, 1980, 1990, 2000, 2010]
gdp = [300.2, 543.3, 1075.9, 2862.5, 5979.6, 10289.7, 14958.3]
# create a line chart, years on x-axis, gdp on y-axis
plt.plot(years, gdp, color='green', marker='o', linestyle='solid')
# add a title
plt.title("Nominal GDP")
# add a label to the y-axis
plt.ylabel("Billions of $")
plt.show()
Explanation: Matplotlib Below is some code from the source of Data Science from Scratch: https://github.com/joelgrus/data-science-from-scratch End of explanation
movies = ["Annie Hall", "Ben-Hur", "Casablanca", "Gandhi", "West Side Story"]
num_oscars = [5, 11, 3, 8, 10]
# bars are by default width 0.8, so we'll add 0.1 to the left coordinates
# so that each bar is centered
xs = [i + 0.1 for i, _ in enumerate(movies)]
print "xs", xs
# plot bars with left x-coordinates [xs], heights [num_oscars]
plt.bar(xs, num_oscars)
plt.ylabel("# of Academy Awards")
plt.title("My Favorite Movies")
# label x-axis with movie names at bar centers
plt.xticks([i + 0.5 for i, _ in enumerate(movies)], movies)
plt.show()
Explanation: Bar Chart End of explanation
grades = [83,95,91,87,70,0,85,82,100,67,73,77,0]
decile = lambda grade: grade // 10 * 10
histogram = Counter(decile(grade) for grade in grades)
print "Histogram values", histogram
plt.bar([x - 4 for x in histogram.keys()],  # shift each bar to the left by 4
        histogram.values(),                 # give each bar its correct height
        8)                                  # give each bar a width of 8
plt.axis([-5, 105, 0, 5])                   # x-axis from -5 to 105,
                                            # y-axis from 0 to 5
plt.xticks([10 * i for i in range(11)])     # x-axis labels at 0, 10, ..., 100
plt.xlabel("Decile")
plt.ylabel("# of Students")
plt.title("Distribution of Exam 1 Grades")
plt.show()
mentions = [500, 505]
years = [2013, 2014]
plt.bar([2012.6, 2013.6], mentions, 0.8)
plt.xticks(years)
plt.ylabel("# of times I heard someone say 'data science'")
# if you don't do
this, matplotlib will label the x-axis 0, 1 # and then add a +2.013e3 off in the corner (bad matplotlib!) plt.ticklabel_format(useOffset=False) # misleading y-axis only shows the part above 500 plt.axis([2012.5,2014.5,499,506]) plt.title("Look at the 'Huge' Increase!") plt.show() plt.bar([2012.6, 2013.6], mentions, 0.8) plt.xticks(years) plt.ylabel("# of times I heard someone say 'data science'") # if you don't do this, matplotlib will label the x-axis 0, 1 # and then add a +2.013e3 off in the corner (bad matplotlib!) plt.ticklabel_format(useOffset=False) plt.axis([2012.5,2014.5,0,550]) plt.title("Not So Huge Anymore") plt.show() Explanation: Histogram A bar chart can also be a good choice for plotting histograms of bucketed numeric values, in order to visually explore how the values are distributed. End of explanation variance = [1, 2, 4, 8, 16, 32, 64, 128, 256] bias_squared = [256, 128, 64, 32, 16, 8, 4, 2, 1] total_error = [x + y for x, y in zip(variance, bias_squared)] xs = [i for i, _ in enumerate(variance)] # we can make multiple calls to plt.plot # to show multiple series on the same chart plt.plot(xs, variance, 'g-', label='variance') # green solid line plt.plot(xs, bias_squared, 'r-.', label='bias^2') # red dot-dashed line plt.plot(xs, total_error, 'b:', label='total error') # blue dotted line # because we've assigned labels to each series # we can get a legend for free # loc=9 means "top center" plt.legend(loc=9) plt.xlabel("model complexity") plt.title("The Bias-Variance Tradeoff") plt.show() Explanation: Line Charts End of explanation friends = [ 70, 65, 72, 63, 71, 64, 60, 64, 67] minutes = [175, 170, 205, 120, 220, 130, 105, 145, 190] labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i'] plt.scatter(friends, minutes) # label each point for label, friend_count, minute_count in zip(labels, friends, minutes): plt.annotate(label, xy=(friend_count, minute_count), # put the label with its point xytext=(5, -5), # but slightly offset textcoords='offset 
points')
plt.title("Daily Minutes vs. Number of Friends")
plt.xlabel("# of friends")
plt.ylabel("daily minutes spent on the site")
plt.show()
Explanation: Scatterplots A scatterplot is the right choice for visualizing the relationship between two paired sets of data. End of explanation
test_1_grades = [ 99, 90, 85, 97, 80]
test_2_grades = [100, 85, 60, 90, 70]
plt.scatter(test_1_grades, test_2_grades)
plt.axis("equal")  # to fix the scale
plt.title("Axes Aren't Comparable")
plt.xlabel("test 1 grade")
plt.ylabel("test 2 grade")
plt.show()
Explanation: Wrong plot Added plt.axis("equal") to fix the scale End of explanation
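The bucketing step in the histogram example above can be isolated from the plotting and checked directly (same grades list as in that cell):

```python
from collections import Counter

grades = [83, 95, 91, 87, 70, 0, 85, 82, 100, 67, 73, 77, 0]
decile = lambda grade: grade // 10 * 10   # 83 -> 80, 67 -> 60, ...
histogram = Counter(decile(g) for g in grades)
```

Each grade maps to the lower edge of its decile, so the Counter holds the bar height for each bucket before any plotting happens.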
Given the following text description, write Python code to implement the functionality described below step by step Description: Background In this notebook, we show how to feed the embeddings from the language model into the MLP classifier. Then, we take the github repo, kubernetes/kubernetes, as an example. We do transfer learning and show the results. Data combined_sig_df.pkl https Step1: Split data Split the data into two sets according to the column, part. There are 28 labels in total because 28 sig labels have at least 30 issues, which are preprocessed in the notebook, EvaluateEmbeddings. Step2: Sklearn MLP Feed the embeddings from the language model to the MLP classifier. Step3: Precision & Recall Calculate precision & recall to let us make an appropriate threshold. Step4: Grid search To tune the MLP. Step5: Precision & Recall for grid search Step6: Save model Step7: Load model Load model from pickle file and retest it. Step8: Write it as a class Step9: Find best probability threshold Step10: Grid search for MLP class
Python Code: import pandas as pd
combined_sig_df = pd.read_pickle('combined_sig_df.pkl')
feat_df = pd.read_csv('feat_df.csv')
# github issue contents
combined_sig_df.head(3)
# embeddings of github issues [mean, max]
feat_df.head(3)
# count the labels in the holdout set
from collections import Counter
c = Counter()
for row in combined_sig_df[combined_sig_df.part == 6].labels:
    c.update(row)
Explanation: Background In this notebook, we show how to feed the embeddings from the language model into the MLP classifier. Then, we take the github repo, kubernetes/kubernetes, as an example. We do transfer learning and show the results. Data combined_sig_df.pkl https://storage.googleapis.com/issue_label_bot/notebook_files/combined_sig_df.pkl This file includes the github issue contents including titles, bodies, and labels. feat_df.csv https://storage.googleapis.com/issue_label_bot/notebook_files/feat_df.csv This file includes 1600-dimensional embeddings of 14390 issues from kubernetes/kubernetes. End of explanation
train_mask = combined_sig_df.part != 6
holdout_mask = ~train_mask
X = feat_df[train_mask].values
label_columns = [x for x in combined_sig_df.columns if 'sig/' in x]
y = combined_sig_df[label_columns][train_mask].values
print(X.shape)
print(y.shape)
X_holdout = feat_df[holdout_mask].values
y_holdout = combined_sig_df[label_columns][holdout_mask].values
print(X_holdout.shape)
print(y_holdout.shape)
from sklearn.metrics import roc_auc_score
def calculate_auc(predictions):
    auc_scores = []
    counts = []
    for i, l in enumerate(label_columns):
        y_hat = predictions[:, i]
        y = y_holdout[:, i]
        auc = roc_auc_score(y_true=y, y_score=y_hat)
        auc_scores.append(auc)
        counts.append(c[l])
    df = pd.DataFrame({'label': label_columns, 'auc': auc_scores, 'count': counts})
    display(df)
    weightedavg_auc = df.apply(lambda x: x.auc * x['count'], axis=1).sum() / df['count'].sum()
    print(f'Weighted Average AUC: {weightedavg_auc}')
    return df, weightedavg_auc
Explanation: Split data Split the data into
two sets according to the column, part. There are 28 labels in total because 28 sig labels have at least 30 issues, which are preprocessed in the notebook, EvaluateEmbeddings. End of explanation from sklearn.neural_network import MLPClassifier from sklearn.model_selection import GridSearchCV mlp = MLPClassifier(early_stopping=True, n_iter_no_change=5, max_iter=500, solver='adam', random_state=1234) mlp.fit(X, y) mlp_predictions = mlp.predict_proba(X_holdout) mlp_df, mlp_auc = calculate_auc(mlp_predictions) Explanation: Sklearn MLP Feed the embeddings from the language model to the MLP classifier. End of explanation import numpy as np def calculate_max_range_count(x): max_range_count = [0] * 11 # [0,0.1), [0.1,0.2), ... , [0.9,1), [1,1] for i in x: max_range_count[int(max(i) // 0.1)] += 1 thresholds_lower = [0.1 * i for i in range(11)] thresholds_upper = [0.1 * (i+1) for i in range(10)] + [1] df = pd.DataFrame({'l': thresholds_lower, 'u': thresholds_upper, 'count': max_range_count}) display(df) return df, max_range_count _, _ = calculate_max_range_count(mlp_predictions) def calculate_result(y_true, y_pred, threshold=0.0): total_true = np.array([0] * len(y_pred[0])) total_pred_true = np.array([0] * len(y_pred[0])) pred_correct = np.array([0] * len(y_pred[0])) for i in range(len(y_pred)): y_true_label = np.where(y_true[i] == 1)[0] total_true[y_true_label] += 1 y_pred_true = np.where(y_pred[i] >= threshold)[0] total_pred_true[y_pred_true] += 1 for j in y_true_label: if j in y_pred_true: pred_correct[j] += 1 df = pd.DataFrame({'label': label_columns, 'precision': (pred_correct / total_pred_true), 'recall': (pred_correct / total_true)}) print(f'Threshold: {threshold}') display(df) return df, (pred_correct / total_pred_true), (pred_correct / total_true) _, _, _ = calculate_result(y_holdout, mlp_predictions, threshold=0.0) _, _, _ = calculate_result(y_holdout, mlp_predictions, threshold=0.3) _, _, _ = calculate_result(y_holdout, mlp_predictions, threshold=0.5) _, _, _ = 
calculate_result(y_holdout, mlp_predictions, threshold=0.7)
Explanation: Precision & Recall Calculate precision & recall to let us make an appropriate threshold. End of explanation
# params = {'hidden_layer_sizes': [(100,), (200,), (400, ), (50, 50), (100, 100), (200, 200)],
#           'alpha': [.001, .01, .1, 1, 10],
#           'learning_rate': ['constant', 'adaptive'],
#           'learning_rate_init': [.001, .01, .1]}
params = {'hidden_layer_sizes': [(100,), (200,), (400, ), (50, 50), (100, 100), (200, 200)],
          'alpha': [.001],
          'learning_rate': ['adaptive'],
          'learning_rate_init': [.001]}
mlp_clf = MLPClassifier(early_stopping=True, validation_fraction=.2, n_iter_no_change=4, max_iter=500)
gscvmlp = GridSearchCV(mlp_clf, params, cv=5, n_jobs=-1)
gscvmlp.fit(X, y)
print(f'The best model from grid search is:\n=====================================\n{gscvmlp.best_estimator_}')
mlp_tuned_predictions = gscvmlp.predict_proba(X_holdout)
mlp_tuned_df, mlp_tuned_auc = calculate_auc(mlp_tuned_predictions)
Explanation: Grid search To tune the MLP. End of explanation
_, _ = calculate_max_range_count(mlp_tuned_predictions)
_, _, _ = calculate_result(y_holdout, mlp_tuned_predictions, threshold=0.0)
_, _, _ = calculate_result(y_holdout, mlp_tuned_predictions, threshold=0.7)
Explanation: Precision & Recall for grid search End of explanation
import dill as dpickle
with open('mlp_k8s.dpkl', 'wb') as f:
    dpickle.dump(gscvmlp, f)
Explanation: Save model End of explanation
import dill as dpickle
with open('mlp_k8s.dpkl', 'rb') as f:
    gscvmlp = dpickle.load(f)
mlp_tuned_predictions = gscvmlp.predict_proba(X_holdout)
mlp_tuned_df, mlp_tuned_auc = calculate_auc(mlp_tuned_predictions)
Explanation: Load model Load model from pickle file and retest it.
End of explanation from sklearn.neural_network import MLPClassifier from sklearn.model_selection import GridSearchCV from sklearn.metrics import roc_auc_score from collections import Counter import dill as dpickle import numpy as np import pandas as pd class MLP: def __init__(self, counter, # for calculate auc label_columns, activation='relu', alpha=0.0001, early_stopping=True, epsilon=1e-08, hidden_layer_sizes=(100,), learning_rate='constant', learning_rate_init=0.001, max_iter=500, model_file="model.dpkl", momentum=0.9, n_iter_no_change=5, precision_thre=0.7, prob_thre=0.0, random_state=1234, recall_thre=0.5, solver='adam', validation_fraction=0.1): self.clf = MLPClassifier(activation=activation, alpha=alpha, early_stopping=early_stopping, epsilon=epsilon, hidden_layer_sizes=hidden_layer_sizes, learning_rate=learning_rate, learning_rate_init=learning_rate_init, max_iter=max_iter, momentum=momentum, n_iter_no_change=n_iter_no_change, random_state=random_state, solver=solver, validation_fraction=validation_fraction) self.model_file = model_file self.precision_thre = precision_thre self.prob_thre = prob_thre self.recall_thre = recall_thre self.counter = counter self.label_columns = label_columns self.precision = None self.recall = None self.exclusion_list = None def fit(self, X, y): self.clf.fit(X, y) def predict_proba(self, X): return self.clf.predict_proba(X) def calculate_auc(self, y_holdout, predictions): auc_scores = [] counts = [] for i, l in enumerate(self.label_columns): y_hat = predictions[:, i] y = y_holdout[:, i] auc = roc_auc_score(y_true=y, y_score=y_hat) auc_scores.append(auc) counts.append(self.counter[l]) df = pd.DataFrame({'label': self.label_columns, 'auc': auc_scores, 'count': counts}) display(df) weightedavg_auc = df.apply(lambda x: x.auc * x['count'], axis=1).sum() / df['count'].sum() print(f'Weighted Average AUC: {weightedavg_auc}') return df, weightedavg_auc def calculate_max_range_count(self, prob): thresholds_lower = [0.1 * i for i in 
range(11)] thresholds_upper = [0.1 * (i+1) for i in range(10)] + [1] max_range_count = [0] * 11 # [0,0.1), [0.1,0.2), ... , [0.9,1), [1,1] for i in prob: max_range_count[int(max(i) // 0.1)] += 1 df = pd.DataFrame({'l': thresholds_lower, 'u': thresholds_upper, 'count': max_range_count}) display(df) return df, max_range_count def calculate_result(self, y_true, y_pred, display_table=True, prob_thre=0.0): if prob_thre: self.prob_thre = prob_thre total_true = np.array([0] * len(y_pred[0])) total_pred_true = np.array([0] * len(y_pred[0])) pred_correct = np.array([0] * len(y_pred[0])) for i in range(len(y_pred)): y_true_label = np.where(y_true[i] == 1)[0] total_true[y_true_label] += 1 y_pred_true = np.where(y_pred[i] >= prob_thre)[0] total_pred_true[y_pred_true] += 1 for j in y_true_label: if j in y_pred_true: pred_correct[j] += 1 self.precision = pred_correct / total_pred_true self.recall = pred_correct / total_true df = pd.DataFrame({'label': self.label_columns, 'precision': self.precision, 'recall': self.recall}) if display_table: print(f'Threshold: {self.prob_thre}') display(df) return df, self.precision, self.recall def find_best_prob_thre(self, y_true, y_pred): best_prob_thre = 0 prec_count = 0 reca_count = 0 print (f'Precision threshold: {self.precision_thre}\nRecall threshold:{self.recall_thre}') thre = 0.0 while thre < 1: _, prec, reca = self.calculate_result(y_true, y_pred, display_table=False, prob_thre=thre) pc = 0 for p in prec: if p >= self.precision_thre: pc += 1 rc = 0 for r in reca: if r >= self.recall_thre: rc += 1 if pc > prec_count or pc == prec_count and rc >= reca_count: best_prob_thre = thre prec_count = pc reca_count = rc thre += 0.1 self.best_prob_thre = best_prob_thre print(f'Best probability threshold: {best_prob_thre},\n{min(prec_count, reca_count)} labels meet both of the precision threshold and the recall threshold') def get_exclusion_list(self): assert len(self.precision) == len(self.recall) self.exclusion_list = [] for p, r, label in 
zip(self.precision, self.recall, self.label_columns): if p < self.precision_thre or r < self.recall_thre: self.exclusion_list.append(label) return self.exclusion_list def grid_search(self, params, cv=5, n_jobs=-1): self.clf = GridSearchCV(self.clf, params, cv=cv, n_jobs=n_jobs) def save_model(self): with open(self.model_file, 'wb') as f: dpickle.dump(self.clf, f) def load_model(self): with open(self.model_file, 'rb') as f: self.clf = dpickle.load(f) c = Counter() for row in combined_sig_df[combined_sig_df.part == 6].labels: c.update(row) clf = MLP(c, label_columns, early_stopping=True, n_iter_no_change=5, max_iter=500, solver='adam', random_state=1234, precision_thre=0.7, recall_thre=0.3) clf.fit(X, y) mlp_predictions = clf.predict_proba(X_holdout) mlp_df, mlp_auc = clf.calculate_auc(y_holdout, mlp_predictions) _, _ = clf.calculate_max_range_count(mlp_predictions) _, _, _ = clf.calculate_result(y_holdout, mlp_predictions) Explanation: Write it as a class End of explanation clf.find_best_prob_thre(y_holdout, mlp_predictions) _, _, _ = clf.calculate_result(y_holdout, mlp_predictions, prob_thre=0.7) clf.get_exclusion_list() Explanation: Find best probability threshold End of explanation params = {'hidden_layer_sizes': [(100,), (200,), (400, ), (50, 50), (100, 100), (200, 200)], 'alpha': [.001], 'learning_rate': ['adaptive'], 'learning_rate_init': [.001]} clf.grid_search(params, cv=5, n_jobs=-1) clf.fit(X, y) mlp_predictions = clf.predict_proba(X_holdout) mlp_df, mlp_auc = clf.calculate_auc(y_holdout, mlp_predictions) clf.save_model() clf.load_model() mlp_predictions = clf.predict_proba(X_holdout) mlp_df, mlp_auc = clf.calculate_auc(y_holdout, mlp_predictions) new_clf = MLP(c, label_columns) new_clf.load_model() mlp_predictions = new_clf.predict_proba(X_holdout) mlp_df, mlp_auc = new_clf.calculate_auc(y_holdout, mlp_predictions) _, _ = new_clf.calculate_max_range_count(mlp_predictions) _, _, _ = new_clf.calculate_result(y_holdout, mlp_predictions) Explanation: Grid 
search for MLP class End of explanation
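The per-label precision/recall bookkeeping that `calculate_result` does with an explicit Python loop over samples can also be written vectorized. The helper below is hypothetical (name and interface ours, not part of the class above), assuming `y_true` is a 0/1 array of shape (samples, labels) and `y_prob` holds the predicted probabilities:

```python
import numpy as np

def per_label_precision_recall(y_true, y_prob, threshold):
    # Binarize at the threshold, then count true positives per label
    # down the sample axis -- the same counts calculate_result accumulates.
    y_pred = (y_prob >= threshold).astype(int)
    tp = (y_pred & y_true).sum(axis=0)
    precision = tp / np.maximum(y_pred.sum(axis=0), 1)  # guard against 0 predictions
    recall = tp / np.maximum(y_true.sum(axis=0), 1)     # guard against 0 positives
    return precision, recall

y_true = np.array([[1, 0], [1, 1], [0, 1]])
y_prob = np.array([[0.9, 0.2], [0.4, 0.7], [0.6, 0.6]])
prec, rec = per_label_precision_recall(y_true, y_prob, 0.5)
print(prec, rec)  # precision [0.5, 1.0] and recall [0.5, 1.0] per label
```

Sweeping thresholds over `np.linspace(0, 0.9, 10)` instead of the repeated `thre += 0.1` in `find_best_prob_thre` would also avoid floating-point drift in the loop variable.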
Given the following text description, write Python code to implement the functionality described below step by step Description: We will try to learn from data using a very simple example of tossing a coin. We will first generate some data (30% heads and 70% tails) and will try to learn the CPD of the coin using Maximum Likelihood Estimator and Bayesian Estimator with Dirichlet prior. Step1: We can see that we get the results as expected. In the maximum likelihood case we got the probability just based on the data where as in the bayesian case we had a prior of $ P(H) = 0.5 $ and $ P(T) = 0.5 $, therefore with 30% heads and 70% tails in the data we got a posterior of $ P(H) = 0.4 $ and $ P(T) = 0.6 $. Similarly we can learn in case of more complex model. Let's take an example of the student model and compare the results in case of Maximum Likelihood estimator and Bayesian Estimator. TODO Step2: As the data was randomly generated with equal probabilities for each state we can see here that all the probability values are close to 0.5 which we expected. Now coming to the Bayesian Estimator
Python Code: # Generate data import numpy as np import pandas as pd raw_data = np.array([0] * 30 + [1] * 70) # Representing heads by 0 and tails by 1 data = pd.DataFrame(raw_data, columns=['coin']) print(data) # Defining the Bayesian Model from pgmpy.models import BayesianModel from pgmpy.estimators import MaximumLikelihoodEstimator, BayesianEstimator model = BayesianModel() model.add_node('coin') # Fitting the data to the model using Maximum Likelihood Estimator model.fit(data, estimator_type=MaximumLikelihoodEstimator) print(model.get_cpds('coin')) # Fitting the data to the model using Bayesian Estimator with a Dirichlet prior and equal pseudo counts. model.fit(data, estimator_type=BayesianEstimator, prior_type='dirichlet', pseudo_counts={'coin': [50, 50]}) print(model.get_cpds('coin')) Explanation: We will try to learn from data using a very simple example of tossing a coin. We will first generate some data (30% heads and 70% tails) and will try to learn the CPD of the coin using Maximum Likelihood Estimator and Bayesian Estimator with Dirichlet prior. End of explanation # Generating random data with each variable having 2 states and equal probabilities for each state import numpy as np import pandas as pd raw_data = np.random.randint(low=0, high=2, size=(1000, 5)) data = pd.DataFrame(raw_data, columns=['D', 'I', 'G', 'L', 'S']) print(data) # Defining the model from pgmpy.models import BayesianModel from pgmpy.estimators import MaximumLikelihoodEstimator, BayesianEstimator model = BayesianModel([('D', 'G'), ('I', 'G'), ('I', 'S'), ('G', 'L')]) # Learning CPDs using Maximum Likelihood Estimators model.fit(data, estimator_type=MaximumLikelihoodEstimator) for cpd in model.get_cpds(): print("CPD of {variable}:".format(variable=cpd.variable)) print(cpd) Explanation: We can see that we get the results as expected.
In the maximum likelihood case we got the probability just based on the data, whereas in the Bayesian case we had a prior of $ P(H) = 0.5 $ and $ P(T) = 0.5 $, therefore with 30% heads and 70% tails in the data we got a posterior of $ P(H) = 0.4 $ and $ P(T) = 0.6 $. Similarly, we can learn in the case of a more complex model. Let's take an example of the student model and compare the results in the case of the Maximum Likelihood Estimator and the Bayesian Estimator. TODO: Add fig for Student example End of explanation # Learning with Bayesian Estimator using a Dirichlet prior for each variable. pseudo_counts = {'D': [300, 700], 'I': [500, 500], 'G': [800, 200], 'L': [500, 500], 'S': [400, 600]} model.fit(data, estimator_type=BayesianEstimator, prior_type='dirichlet', pseudo_counts=pseudo_counts) for cpd in model.get_cpds(): print("CPD of {variable}:".format(variable=cpd.variable)) print(cpd) Explanation: As the data was randomly generated with equal probabilities for each state, we can see here that all the probability values are close to 0.5, which we expected. Now coming to the Bayesian Estimator: End of explanation
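The posterior quoted above, $ P(H) = 0.4 $ and $ P(T) = 0.6 $, can be checked by hand: the posterior mean of a Dirichlet-categorical model is just the observed counts plus the pseudo counts, normalized. A minimal sketch (the helper is ours, not part of pgmpy):

```python
import numpy as np

def dirichlet_posterior_mean(counts, pseudo_counts):
    # (observed count + pseudo count) / (total observed + total pseudo)
    counts = np.asarray(counts, dtype=float)
    pseudo = np.asarray(pseudo_counts, dtype=float)
    return (counts + pseudo) / (counts.sum() + pseudo.sum())

# Coin data from above: 30 heads, 70 tails; Dirichlet pseudo counts [50, 50]
posterior = dirichlet_posterior_mean([30, 70], [50, 50])
print(posterior)  # -> [0.4 0.6]
```

With pseudo counts of [50, 50] the prior acts like 100 extra tosses split evenly, pulling the 30/70 data toward 0.5, which matches the 0.4/0.6 posterior reported above.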
Given the following text description, write Python code to implement the functionality described below step by step Description: In this notebook a simple Q learner will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value). One initial attempt was made to train the Q-learner with multiple processes, but it was unsuccessful. Step1: Let's show the symbols data, to see how good the recommender has to be. Step2: Let's run the trained agent, with the test set First a non-learning test Step3: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few). Step4: What are the metrics for "holding the position"? Step5: Conclusion
Python Code: # Basic imports import os import pandas as pd import matplotlib.pyplot as plt import numpy as np import datetime as dt import scipy.optimize as spo import sys from time import time from sklearn.metrics import r2_score, median_absolute_error from multiprocessing import Pool %matplotlib inline %pylab inline pylab.rcParams['figure.figsize'] = (20.0, 10.0) %load_ext autoreload %autoreload 2 sys.path.append('../../') import recommender.simulator as sim from utils.analysis import value_eval from recommender.agent import Agent from functools import partial NUM_THREADS = 1 LOOKBACK = 252*2 + 28 STARTING_DAYS_AHEAD = 20 POSSIBLE_FRACTIONS = [0.0, 1.0] # Get the data SYMBOL = 'SPY' total_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature') data_train_df = total_data_train_df[SYMBOL].unstack() total_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature') data_test_df = total_data_test_df[SYMBOL].unstack() if LOOKBACK == -1: total_data_in_df = total_data_train_df data_in_df = data_train_df else: data_in_df = data_train_df.iloc[-LOOKBACK:] total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:] # Create many agents index = np.arange(NUM_THREADS).tolist() env, num_states, num_actions = sim.initialize_env(total_data_in_df, SYMBOL, starting_days_ahead=STARTING_DAYS_AHEAD, possible_fractions=POSSIBLE_FRACTIONS) agents = [Agent(num_states=num_states, num_actions=num_actions, random_actions_rate=0.98, random_actions_decrease=0.999, dyna_iterations=0, name='Agent_{}'.format(i)) for i in index] def show_results(results_list, data_in_df, graph=False): for values in results_list: total_value = values.sum(axis=1) print('Sharpe ratio: {}\nCum. 
Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value)))) print('-'*100) initial_date = total_value.index[0] compare_results = data_in_df.loc[initial_date:, 'Close'].copy() compare_results.name = SYMBOL compare_results_df = pd.DataFrame(compare_results) compare_results_df['portfolio'] = total_value std_comp_df = compare_results_df / compare_results_df.iloc[0] if graph: plt.figure() std_comp_df.plot() Explanation: In this notebook a simple Q learner will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value). One initial attempt was made to train the Q-learner with multiple processes, but it was unsuccessful. End of explanation print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:])))) # Simulate (with new envs, each time) n_epochs = 4 for i in range(n_epochs): tic = time() env.reset(STARTING_DAYS_AHEAD) results_list = sim.simulate_period(total_data_in_df, SYMBOL, agents[0], starting_days_ahead=STARTING_DAYS_AHEAD, possible_fractions=POSSIBLE_FRACTIONS, verbose=False, other_env=env) toc = time() print('Epoch: {}'.format(i)) print('Elapsed time: {} seconds.'.format((toc-tic))) print('Random Actions Rate: {}'.format(agents[0].random_actions_rate)) show_results([results_list], data_in_df) env.reset(STARTING_DAYS_AHEAD) results_list = sim.simulate_period(total_data_in_df, SYMBOL, agents[0], learn=False, starting_days_ahead=STARTING_DAYS_AHEAD, possible_fractions=POSSIBLE_FRACTIONS, other_env=env) show_results([results_list], data_in_df, graph=True) Explanation: Let's show the symbols data, to see how good the recommender has to be. 
End of explanation TEST_DAYS_AHEAD = 20 env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD) tic = time() results_list = sim.simulate_period(total_data_test_df, SYMBOL, agents[0], learn=False, starting_days_ahead=TEST_DAYS_AHEAD, possible_fractions=POSSIBLE_FRACTIONS, verbose=False, other_env=env) toc = time() print('Epoch: {}'.format(i)) print('Elapsed time: {} seconds.'.format((toc-tic))) print('Random Actions Rate: {}'.format(agents[0].random_actions_rate)) show_results([results_list], data_test_df, graph=True) Explanation: Let's run the trained agent, with the test set First a non-learning test: this scenario would be worse than what is possible (in fact, the q-learner can learn from past samples in the test set without compromising the causality). End of explanation env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD) tic = time() results_list = sim.simulate_period(total_data_test_df, SYMBOL, agents[0], learn=True, starting_days_ahead=TEST_DAYS_AHEAD, possible_fractions=POSSIBLE_FRACTIONS, verbose=False, other_env=env) toc = time() print('Epoch: {}'.format(i)) print('Elapsed time: {} seconds.'.format((toc-tic))) print('Random Actions Rate: {}'.format(agents[0].random_actions_rate)) show_results([results_list], data_test_df, graph=True) Explanation: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few). End of explanation print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_test_df['Close'].iloc[STARTING_DAYS_AHEAD:])))) Explanation: What are the metrics for "holding the position"? End of explanation import pickle with open('../../data/simple_q_learner_fast_learner.pkl', 'wb') as best_agent: pickle.dump(agents[0], best_agent) Explanation: Conclusion: Sharpe ratio is clearly better than the benchmark. 
Cumulative return is similar (a bit better with the non-learner, a bit worse with the learner). End of explanation
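`value_eval` from `utils.analysis` is used throughout but never shown; the metrics it prints (Sharpe ratio, cumulative return, average and standard deviation of daily returns, final value) can be approximated as below. The formulas are assumed, not taken from the project, so the real implementation may differ (e.g. in the annualization factor or risk-free rate):

```python
import numpy as np
import pandas as pd

def value_eval_sketch(values, samples_per_year=252):
    # Daily returns from a series of portfolio values
    values = pd.Series(values, dtype=float)
    daily_ret = values.pct_change().dropna()
    avg_dret = daily_ret.mean()
    std_dret = daily_ret.std()
    sharpe = np.sqrt(samples_per_year) * avg_dret / std_dret  # risk-free rate ~ 0
    cum_ret = values.iloc[-1] / values.iloc[0] - 1.0
    return sharpe, cum_ret, avg_dret, std_dret, values.iloc[-1]

sharpe, cum_ret, avg_dret, std_dret, final = value_eval_sketch(
    [100.0, 101.0, 102.0, 101.5, 103.0])
print(round(cum_ret, 4), final)  # -> 0.03 103.0
```

On this toy price path the portfolio gains 3% overall, so holding it against a benchmark series is just a matter of calling the same function on both.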
Given the following text description, write Python code to implement the functionality described below step by step Description: DAT-ATX-1 Capstone Project Nikolaos Vergos, February 2016 &#110;&#118;&#101;&#114;&#103;&#111;&#115;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109; 1. Data Preparation & Exploratory Analysis Disclaimer: Many ideas for the Exploratory Data Analysis part, particularly the discussion of the health inspection scores histogram, the correlation between two different inspections and the plot_restaurant_score_by_type Python function and the discussion of the resulting plots have been taken from the analysis of Dr. Tal Yarkoni 0. Import libraries & packages Step1: 1. Familiarize ourselves with our data Step2: Almost all column names have spaces between the words. Even though it looks nice on the dataframe, it can actually be quite problematic further down the way. Therefore, let us implement an easy fix that will save us lots of pain later on Step3: The "Inspection_Date" column looks like a humanly readable series of dates, however it is encoded as (useless) strings. Fortunately we can easily tell pandas to see this column as a datetime format for easier and better reference Step4: Let us also create an auxiliary DataFrame where each establishment shows up once (even though there are 5-6 inspection instances per establishment) - we need to know how many individual establishments are inspected. Side note Step5: We have nearly 23,000 inspections from over 4,700 distinct establishments. For each inspection, we have (among other things) a health inspection score, an establishment name, an inspection date, and a description of the process (i.e., whether the inspection was routine, or reflected a follow-up visit triggered by an establishment failing an earlier inspection). Let us start our exploratory data analysis by a simple frequency table of the health inspection scores Step6: The first striking result is that, given the health inspectors are doing their job meticulously, Austin is a quite sanitary city for diners. Recall that a failing score is below 70, and according to the ad hoc letter grade scheme we are going to implement, the vast majority of inspections leads to a letter grade "A", which we would say symbolizes a "pristine" establishment.
We are going to perform some thorough research in the Machine Learning part of this Project, however this dataset can yield some quite interesting visualizations Step7: We see that the distribution of our data is heavily skewed to the left (i.e. bunched up toward the right with a "tail" stretching toward the left). There are zero health inspection scores of 98 or 99--presumably because the smallest possible violation results in a loss of at least 3 points. A similar explanation probably accounts for the dip around 95, i.e., it's relatively unusual to get docked exactly 5 points for a single infraction. More curious, though, is what happens at the low end of the distribution--specifically around a score of 70, where there appears to be a relatively sharp discontinuity. To see it better, we can zoom in Step8: There is a sharp break at 70. Recall that 70 is the minimum score required to pass inspection. If a restaurant gets a score below that, it fails inspection, which presumably triggers all kinds of unpleasant things - e.g., more paperwork for the inspector, bad PR for the restaurant, and a follow-up (non-routine) inspection one or two weeks later. So one possibility here is that there's some pressure on inspectors - whether explicit or implicit - to avoid giving restaurants failing scores. This could possibly explain the fact that 99% of the entries in our data set have passing scores, as we are going to see later. Unfortunately this great imbalance between passing and failing restaurants will significantly hinder our attempts to build predictive models with confidence. Dr. Tal Yarkoni offers some more interesting explanations Step9: Let me now filter out everything that doesn't follow my loose definition of a restaurant Step10: We got rid of 4583 DataFrame rows corresponding to inspections of 840 establishments. Not bad at all! 3. Data Exploration & Visualizations An important question is whether scores are stable over time within individual restaurants.
Since the vast majority of restaurants have been inspected several times (mostly twice a year) over the last three years, we can directly answer this question by computing the test-retest Pearson correlation metric across multiple inspections. A quick, though somewhat crude, way to do this is to randomly select two inspections for every restaurant with multiple inspections and compute the correlation. The resulting scatter plot looks like this Step12: The test-retest correlation of 0.47 indicates that there is indeed a fair amount of consistency to the scores. That's reassuring, in that a very low correlation might lead us to worry that the health inspection process itself is unreliable, since it's hard to imagine that there aren't real differences in how mindful different proprietors are of health code - not to mention the fact that some kinds of establishments are likely to be at much higher risk of code violations than others in virtue of the kind of food they serve. In simpler words, if a restaurant does well, it will probably keep doing well. If it barely passes inspection one time, it probably won't improve dramatically for the second time. However, there's a quite interesting phenomenon at the "outliers" of the scatterplot above Step13: The values in parentheses on the y-axis labels denote the number of unique restaurants and total inspections in each group, respectively. The error bars denote the 95% confidence interval. The plot largely corroborates what we probably already knew, with perhaps a couple of minor surprises. It's not surprising that establishments primarily serving coffee, ice cream, or pastries tend to do very well on health inspections. The same applies for most burger and pizza places, and I believe this has to do with strict health protocols universally implemented by national chains that seem to dominate those categories. 
As we will see with the next plot, restaurants belonging to major chains tend to do exceptionally well in health inspections with very little variance. At the other end of the spectrum, the 6 restaurants with the word "buffet" in their name do quite... abysmally. Their average score of 79 is pretty close to the magical failure number of 70. Across 34 different inspections, no "buffet"-containing restaurant in Austin has managed to obtain a score higher than 90 in the past 3 years. Of course, this conclusion only applies to buffet restaurants that have the word "buffet" in their name, but I believe those comprise the vast majority of all-you-can-eat type eateries, even though I hope I am wrong, because to be honest sometimes buffets can be quite yummy... Also not happy winners in this analysis Step14: Good news! It's pretty much impossible to get sick at Jamba Juice or Starbucks. The large pizza chains all do very well. Also, if you're looking for health code-friendly Asian food, Panda Express is your new best friend. If your primary basis for deciding between drive-thrus is tiny differences in health inspection score, you may want to opt for local hero Whataburger, Wendy's or Burger King over McDonald's. Otherwise, there's nothing terribly interesting here, other than the surprisingly low (for me) scores of Pei Wei, which I have to admit I used to hold in higher esteem. An average score of 88 of course isn't something to be very disappointed about, but it seems that they, being a national chain, are slightly more lax when it comes to their food preparation protocols compared to their more "fast-food" counterpart, Panda Express. The reader is strongly encouraged to visit Tal Yarkoni's notebook for further exploratory data analysis on the City of Austin Restaurant Health Inspection dataset. 4. Enriching our dataset by adding numerics As we have seen, our only quantitative variable in our data set is each restaurant's score.
Even though the given data set can lead us to a fairly thorough exploration and raise some quite interesting questions, as we have seen so far, my opinion is that our output would be much richer if we had more information. The steps I took to address this are Step15: Those 56 Zip Codes cover the entire area of Travis County where the Austin/Travis County Health and Human Services Department conducts the inspections. After spending some time studying a detailed map, I decided to only keep a subset of those 56 Zip Codes from now on, even though this means I am trimming down my data set. I am interested in Austin proper Step16: 36 Zip Codes of a more or less contiguous area is what we will be focusing on from now on. Step17: Let us now introduce the U.S. 2014 Census numerics data Step18: 78712 is the special Zip Code for the University of Texas Campus and 78719 corresponds to Austin - Bergstrom International Airport. Let us now merge the two DataFrames, ACL and df_numerics, into a common one Step19: 5. DataFrame TLC and a very interesting visualization Our "main" DataFrame needs some TLC. We also have to exploit the "Address" column further by creating neat columns with street address, city information and coordinates for each row - even though we are focusing on the contiguous Austin area, we still have the areas of Rollingwood, West Lake Hills, Sunset Valley and Bee Cave to take into account (and get some interesting associations between Restaurant Health Inspection Score and Income...) Step20: Let's reformat the first two columns for geocoding Step21: Each element of column [2] of the merged_location DataFrame is a parenthesis with a pair of coordinates inside it, representing latitude and longitude. Those seem like an interesting addition to our dataframe, therefore we'll do a little bit more work to add them as individual columns Step22: Following Chris Albon's excellent code snippet Step23: Finally, the interesting visualization I promised.
Even though this will be studied in more detail in the next "chapter" of this project, supervised learning, I was wondering whether there is any stark correlation between a restaurant's health inspection score (response) and each one of the numerics columns I have added to our data Step24: At first sight the data looks scattered all over the place for all three plots, and there's possibly no correlation between any of the pairs. However there are some interesting remarks one could make
Python Code: import warnings warnings.filterwarnings('ignore') import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline from scipy import stats # For correlation coefficient calculation Explanation: DAT-ATX-1 Capstone Project Nikolaos Vergos, February 2016 &#110;&#118;&#101;&#114;&#103;&#111;&#115;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109; 1. Data Preparation & Exploratory Analysis Disclaimer: Many ideas for the Exploratory Data Analysis part, particularly the discussion of the health inspection scores histogram, the correlation between two different inspections and the plot_restaurant_score_by_type Python function and the discussion of the resulting plots have been taken from the analysis of Dr. Tal Yarkoni 0. Import libraries & packages End of explanation #Reading the dataset in a dataframe using Pandas df = pd.read_csv('../data/Restaurant_Inspection_Scores.csv') #Print first observations df.head() len(df) # Drop duplicate entries df = df.drop_duplicates() print "There are {0} entries (rows) in this dataframe".format(len(df)) # Initial assessment Explanation: 1. Familiarize ourselves with our data End of explanation df = df.rename(columns={'Restaurant Name': 'Restaurant_Name', 'Zip Code' : 'Zip_Code', 'Inspection Date' : 'Inspection_Date', 'Facility ID' : 'Facility_ID', 'Process Description' : 'Process_Description'}) Explanation: Almost all column names have spaces between the words. Even though it looks nice on the dataframe, it can actually be quite problematic further down the way. Therefore, let us implement an easy fix that will save us lots of pain later on: End of explanation from datetime import datetime df['Inspection_Date'] = pd.to_datetime(df['Inspection_Date']) Explanation: The "Inspection_Date" column looks like a humanly readable series of dates, however it is encoded as (useless) strings. 
Fortunately we can easily tell pandas to see this column as a datetime format for easier and better reference: End of explanation establishments = df.groupby('Facility_ID') # Print some stuff print "Number of inspections:", len(df) print "Number of individual establishments inspected:", len(establishments) print "\nColumns:\n", df.dtypes Explanation: Let us also create an auxiliary DataFrame where each establishment shows up once (even though there are 5-6 inspection instances per establishment) - we need to know how many individual establishments are inspected. Side note: I am using the word "establishment" instead of "restaurant", because the Austin/Travis County Health and Human Services Department conducts the permitting and inspection of more than 4,000 food establishments in Austin, several local municipalities and rural Travis County. This includes any place serving food: restaurants, school cafeterias, grocery and convenience stores etc. We will deal with this later, because we have chosen to focus our study on restaurants (loosely defined as places that a potential patron chooses to go dine or carry out) End of explanation print df['Score'].value_counts().sort_index() # Frequency Table Explanation: We have nearly 23,000 inspections from over 4,700 distinct establishments. For each inspection, we have (among other things) a health inspection score, an establishment name, an inspection date, and a description of the process (i.e., whether the inspection was routine, or reflected a follow-up visit triggered by an establishment failing an earlier inspection). 
Let us start our exploratory data analysis by a simple frequency table of the health inspection scores: End of explanation # Function to plot a histogram based on the frequency table above: def plot_histogram(): df['Score'].hist(bins=50) plt.xlabel('Health Inspection Score') plt.ylabel('Count') plt.savefig('histogram.png') plot_histogram() # A somewhat prettier histogram with seaborn; Relative frequencies on y-axis sns.distplot(df['Score']); Explanation: The first striking result is that, given the health inspectors are doing their job meticulously, Austin is a quite sanitary city for diners. Recall that a failing score is below 70, and according to the ad hoc letter grade scheme we are going to implement, the vast majority of inspections leeds to a letter grade "A", which we would say symbolizes a "pristine" establishment. We are going to perform some thorough research in the Machine Learning part of this Project, however this dataset can yield some quite interesting visualizations: End of explanation plot_histogram() plt.xlim([50,80]); plt.ylim([0,300]); Explanation: We see that the distribution of our data is heavily skewed to the left (i.e. bunched up toward the right with a "tail" stretching toward the left). There are zero health inspection scores of 98 or 99--presumably because the smallest possible violation results in a loss of at least 3 points. A similar explanation probably accounts for the dip around 95 i.e., it's relatively unusual to getting docked exactly 5 points for a single infraction. More curious, though, is what happens at the low end of the distribution--specifically around a score of 70, where there appears to be a relatively sharp discontinuity. To see it better, we can zoom in: End of explanation # Unfortunately the words "Market", "Grocery", "Church" correspond to places # like convenience stores and grocery stores. # I want to make sure that some restaurants with those words in their name will go through my filter. 
df.loc[(df['Restaurant_Name'].str.contains('Whole')) , 'Restaurant_Name'] = "Whole Foods" df.loc[(df['Restaurant_Name'].str.contains('Central Market')) , 'Restaurant_Name'] = "Central Mkt" df.loc[(df['Restaurant_Name'].str.contains('Boston Market')) , 'Restaurant_Name'] = "Boston Mkt" df.loc[(df['Restaurant_Name'].str.contains('Mandola')) , 'Restaurant_Name'] = "Mandola's" df.loc[(df['Restaurant_Name'].str.contains('Royal Blue')) , 'Restaurant_Name'] = "Royal Blue" df.loc[(df['Restaurant_Name'].str.contains('Rudy')) , 'Restaurant_Name'] = "Rudy's" df.loc[(df['Restaurant_Name'].str.contains('Fit Foods')) , 'Restaurant_Name'] = "My Ft Foods" df.loc[(df['Restaurant_Name'].str.contains("Church's Chicken")) , 'Restaurant_Name'] = "Chrch Chicken" df.loc[(df['Restaurant_Name'].str.contains("Schlotzsky's")) , 'Restaurant_Name'] = "Schlotzsky's" len(df) Explanation: There is a sharp break at 70. Recall that 70 is the minimum score required to pass inspection. If a restaurant gets a score below that, it fails inspection, which presumably triggers all kinds of unpleasant things - e.g., more paperwork for the inspector, bad PR for the restaurant, and a follow-up (non-routine) inspection one or two weeks later. So one possibility here is that there's some pressure on inspectors - whether explicit or implicit - to avoid giving restaurants failing scores. This could possibly explain the fact that 99% of the entries in our data set have passing scores, as we are going to see later. Unfortunately this great imbalance between passing and failing restaurants will significantly hinder our attempts to build predictive models with confidence. Dr. Tal Yarkoni offers some more interesting explanations: For example, it's possible that the health inspection guidelines explicitly allow inspectors to let restaurants off with a warning. 
Or perhaps the scoring algorithm isn't just a linear sum of all the violations observed, and there has to be some particularly egregious health code violation in order for a restaurant to receive a score below 70. Or, inspectors may be pragmatically working around weaknesses in the code - e.g., a restaurant may theoretically be able to fail inspection because of a large number of minor infractions, even if no single infraction presents any meaningful health risk to customers. Still, absent an explanation, there's at least some potential cause for concern here, and it's certainly consistent with the data that health inspectors might be systematically grading restaurants more leniently than they should be. Are the inspectors doing their job meticulously? Is Austin dining really so sanitary overall? This is a recurring theme of this study, as we are going to see. 2. Data Preparation On to the immensely boring, yet quintessential part: data cleaning. I will start by renaming some of our restaurants so they'll avoid the axe when I remove everything which isn't a restaurant - as I have mentioned, unfortunately the data set includes every establishment that needs a health inspection, and I'm only interested in places where people actually choose to dine in, not schools, churches, jails (!) etc... 
End of explanation
# Keywords that flag establishments outside my loose definition of a restaurant
non_restaurant_keywords = ['School', 'Elementary', 'Care', 'Middle', 'Cafeteria', 'Jail',
                           'ISD', 'Academy', 'Mart', 'Gas', 'Convenience', '7-Eleven',
                           'HEB', 'Station', 'Randall', 'Target', 'Flea', 'Gym',
                           'Fitness', 'Fit', 'Church', 'Dollar', 'Store', 'Texaco']
for keyword in non_restaurant_keywords:
    df = df[df['Restaurant_Name'].str.contains(keyword) == False]
print len(df)

restaurants = df.groupby('Facility_ID')  # We can switch from "establishments" to "restaurants" after the purge
len(restaurants)
Explanation: Let me now filter out everything that doesn't follow my loose definition of a restaurant: End of explanation
# Cite: Analysis by Dr. Tal Yarkoni
# Filter for restaurants with > 1 inspection
two_or_more = restaurants.filter(lambda x: x.shape[0] > 1)
print "Number of restaurants with two or more inspections:", two_or_more['Facility_ID'].nunique()

# Shuffle order and select a random pair for each restaurant
two_or_more = two_or_more.reindex(np.random.permutation(two_or_more.index))
random_pairs = two_or_more.groupby('Facility_ID', as_index=False).head(2).sort('Facility_ID')
random_pairs['number'] = np.tile(np.array([1, 2]), len(random_pairs)/2)
pairs = random_pairs.pivot(index='Facility_ID', columns='number', values='Score')
r, p = stats.pearsonr(*pairs.values.T)

# Plot the relationship
f, ax = plt.subplots(figsize=(6, 6))
sns.regplot(pairs[1], pairs[2], x_jitter=2, y_jitter=2, color="#334477",
            scatter_kws={"alpha": .05, "s": 100})
ax.text(62, 72, "r = %.2f" % r, fontsize=14)
ax.set(xlim=(60, 105), ylim=(60, 105),
       xlabel='Score for Inspection 1', ylabel='Score for Inspection 2');
Explanation: We got rid of 4583 DataFrame rows corresponding to inspections of 840 establishments. Not bad at all! 3. Data Exploration & Visualizations An important question is whether scores are stable over time within individual restaurants. Since the vast majority of restaurants have been inspected several times (mostly twice a year) over the last three years, we can directly answer this question by computing the test-retest Pearson correlation metric across multiple inspections. A quick, though somewhat crude, way to do this is to randomly select two inspections for every restaurant with multiple inspections and compute the correlation.
The resulting scatter plot looks like this: End of explanation
# Create unique ID column that's human-readable--concatenate name and address
df['string_id'] = [x.lower() for x in (df['Restaurant_Name'] + '_' + df['Facility_ID'].astype(str))]

def plot_restaurant_score_by_type(types):
    """Takes a list of strings, each defining a group of restaurants that
    contain that particular string within their title."""
    means, sems, labels = [], [], []
    n_types = len(types)
    # Generate means, CI/SEM, and labels
    for c in types:
        stores = df[df['string_id'].str.contains(c)]
        unique_stores = stores.groupby('string_id')['Score'].mean()
        n_stores = len(unique_stores)
        n_inspections = len(stores)
        means.append(unique_stores.mean())
        sems.append(stats.sem(unique_stores))  # sem: standard error of the mean
        labels.append('"' + c + '" (%d, %d)' % (n_stores, n_inspections))
    # Order by score, lowest first
    plot_data = pd.DataFrame({'mean': means, 'sem': sems}, index=labels)
    plot_data = plot_data.sort('mean', ascending=True)
    # Plot
    pal = sns.color_palette("husl", len(plot_data))
    f, ax = plt.subplots(figsize=(4, 8))
    for y, (label, (mean, sem)) in enumerate(plot_data.iterrows()):
        ax.errorbar(mean, y, xerr=sem, fmt='o', color=pal[y])
    ax.set_ylim([-1, n_types])
    ax.set_yticks(np.arange(len(plot_data)))
    ax.set_yticklabels(plot_data.index, rotation=0, horizontalalignment='right')
    ax.set_xlabel('Health inspection score', fontsize=14)
    ax.set_ylabel('Restaurant name contains...', fontsize=14)

types = ['chin', 'mexic', 'indian', 'thai', 'vietnam|pho', 'italia', 'taco|taqu', 'sushi|jap',
         'pizz', 'korea', 'burger', 'donut|doughnut', 'coffee', 'bakery', 'ice cream', 'chicken',
         'buffet', 'grill', 'bbq|barbe', 'steak', 'greek', 'beer']
plot_restaurant_score_by_type(types)
Explanation: The test-retest correlation of 0.47 indicates that there is indeed a fair amount of consistency to the scores.
That's reassuring, in that a very low correlation might lead us to worry that the health inspection process itself is unreliable, since it's hard to imagine that there aren't real differences in how mindful different proprietors are of health code - not to mention the fact that some kinds of establishments are likely to be at much higher risk of code violations than others in virtue of the kind of food they serve. In simpler words, if a restaurant does well, it will probably keep doing well. If it barely passes inspection one time, it probably won't improve dramatically for the second time. However, there's quite an interesting phenomenon among the "outliers" of the scatterplot above: there seem to be some restaurants that have obtained "pristine" scores during inspection 1 and then dropped by 20 points or more during inspection 2, and vice versa. The vast majority of establishments show a fair degree of consistency, though. The following part could have been much easier and much more exciting, had the City of Austin provided us with a richer dataset including some extra columns with information such as type of cuisine or some classification of the infestations that have caused point deductions. A significantly richer analysis would have been possible in that case. We can still implement an ad hoc workaround to break out restaurants by the kind of food they serve, provided their names are indicative of the cuisine - in other words, we're going to categorize restaurant type much more crudely. We'll take advantage of the fact that many restaurants use their name to announce the kind of food they serve - witness, for example, "Carino's Italian", "Asia Chinese Restaurant", and "Mi Casa Mexican Restaurant". Dr. Yarkoni did an amazing job: by grouping together restaurants with the same stem of a word in their names, we can generate the following plot: End of explanation
chains = ['starbucks', 'mcdonald', 'subway', 'popeye', 'whataburger', 'domino', 'jamba',
          'schlotzsky', 'taco shack', 'burger king', 'wendy', 'panda', 'chick-fil',
          'pizza hut', 'papa john', 'chipotle', 'pei wei', 'torchy', 'tacodeli']
plot_restaurant_score_by_type(chains)
Explanation: The values in parentheses on the y-axis labels denote the number of unique restaurants and total inspections in each group, respectively. The error bars denote the 95% confidence interval. The plot largely corroborates what we probably already knew, with perhaps a couple of minor surprises. It's not surprising that establishments primarily serving coffee, ice cream, or pastries tend to do very well on health inspections. The same applies for most burger and pizza places, and I believe this has to do with strict health protocols universally implemented by the national chains that seem to dominate those categories. As we will see with the next plot, restaurants belonging to major chains tend to do exceptionally well in health inspections, with very little variance. At the other end of the spectrum, the 6 restaurants with the word "buffet" in their name do quite... abysmally. Their average score of 79 is pretty close to the magical failure number of 70. Across 34 different inspections, no "buffet"-containing restaurant in Austin has managed to obtain a score higher than 90 in the past 3 years. Of course, this conclusion only applies to buffet restaurants that have the word "buffet" in their name, but I believe those comprise the vast majority of all-you-can-eat type eateries, even though I hope I am wrong, because to be honest sometimes buffets can be quite yummy...
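The name-based grouping used for these plots boils down to a case-insensitive substring/regex test against each restaurant's ID string. A stdlib-only sketch of that matching logic (the example names are illustrative):

```python
import re

def matches_type(name, pattern):
    """True if a restaurant name matches a type pattern like 'vietnam|pho'."""
    return re.search(pattern, name.lower()) is not None

print(matches_type("Pho Thai Son", "vietnam|pho"))   # True
print(matches_type("Torchy's Tacos", "taco|taqu"))   # True
print(matches_type("Joe's Diner", "buffet"))         # False
```

This is the same alternation syntax (`'vietnam|pho'`, `'taco|taqu'`) that `str.contains` interprets as a regex in the plotting function above.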
Also not happy winners in this analysis: restaurants serving ethnic food - with the possible exception of Indian restaurants (though the variance for Indian restaurants is high). Asian restaurants do particularly poorly; for example, "thai"-containing restaurants obtain a score of 83, on average. It seems that every restaurant following the "bbq" category could be classified as "ethnic", and this leads to an interesting question, which we cannot really answer given the data we have: are ethnic restaurants bound to be doing worse than their "american" counterparts, or is there some kind of inspector bias against "different" cuisines? Of course diners can form their individual opinions freely and I do not wish to nudge the discussion toward either side; however, this trend is really striking to me. On a positive note, though, we can see that the 74 "burger"-containing establishments in this list - you know, the ones where cooks spend much of their day wading around in buckets of raw meat - tend to have very good scores (perhaps because of the perception that health inspectors are gunning for them, I don't know). So, given a choice between Thai food and a burger, health code-wise, you're arguably better off with the burger. Of course, in the grand scheme of things, these are not terribly large differences, and the vast majority of the time, you're going to be just fine after eating pretty much anywhere (including even those restaurants that fail inspection). Various discussions with colleagues have led me to the conclusion that this is most probably happening because national chains can't and won't tolerate PR scandals associated with the sanitary quality of their food; therefore, their food preparation protocols are quite strict, as I have mentioned above. Let's eat clean tonight! - Pizza Hut, right??? Since it's easy to bin restaurants by title, something else we can do is look at the performance of major local and national food chains.
Here's what that looks like, for selected chains with major representation in Austin: End of explanation
# Let's see how many Zip Codes we start with:
print len(pd.unique(df.Zip_Code.ravel()))
Explanation: Good news! It's pretty much impossible to get sick at Jamba Juice or Starbucks. The large pizza chains all do very well. Also, if you're looking for health code-friendly Asian food, Panda Express is your new best friend. If your primary basis for deciding between drive-thrus is tiny differences in health inspection score, you may want to opt for local hero Whataburger, Wendy's or Burger King over McDonald's. Otherwise, there's nothing terribly interesting here, other than the surprisingly low (for me) scores of Pei Wei, which I have to admit I used to hold in higher esteem. An average score of 88 of course isn't something to be very disappointed about, but it seems that they, being a national chain, are slightly more lax when it comes to their food preparation protocols compared to their more "fast-food" counterpart, Panda Express. The reader is strongly encouraged to visit Tal Yarkoni's notebook for further exploratory data analysis on the City of Austin Restaurant Health Inspection dataset. 4. Enriching our dataset by adding numerics As we have seen, our only quantitative variable in our data set is each restaurant's score. Even though the given data set can lead us to a fairly thorough exploration and raise some quite interesting questions, as we have seen so far, my opinion is that our output would be much richer if we had more information. The steps I took to address this are: I unilaterally decided to trim down the area of interest. Health Inspections are conducted by Travis County, which is rather spacious and includes the City of Austin, neighboring communities like Pflugerville, as well as some far-flung suburbs like Lago Vista or Manor.
I spent some time over a map and ended up deciding to restrict my analysis, from now on, to my loose definition of the Austin City Limits. Roughly, this translates to eliminating all rows from my DataFrames that belong to the suburban Zip Codes. I did some more research on the remaining 36 Zip Codes after the purge above, and, following U.S. Census Data from 2014, I incorporated three more columns into my DataFrame: Population, Median Income and Home Ownership percentage for each one of the Austin City Limits Zip Codes. End of explanation
# Focus on the main part of the city
# The geographical division is mine
SE_zip = (78744, 78747, 78719, 78741)
Central_zip = (78701, 78702, 78703, 78705, 78721, 78723, 78712, 78751, 78756)
NE_zip = (78752, 78753, 78754)
NW_zip = (78757, 78758, 78727, 78722, 78729, 78717, 78750, 78759, 78726, 78730, 78731, 78732)
SW_zip = (78704, 78745, 78748, 78739, 78749, 78735, 78733, 78746)
ACL_zip = SE_zip + Central_zip + NE_zip + NW_zip + SW_zip
len(ACL_zip)
Explanation: Those 56 Zip Codes cover the entire area of Travis County where the Austin/Travis County Health and Human Services Department conducts the inspections. After spending some time studying a detailed map, I decided to keep only a subset of those 56 Zip Codes from now on, even though this means I am trimming down my data set. I am interested in Austin proper: End of explanation
# Create a new DataFrame only including Austin City Limits restaurants:
ACL = df[df.Zip_Code.isin(ACL_zip)]
ACL.describe()
Explanation: 36 Zip Codes of a more or less contiguous area is what we will be focusing on from now on.
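The regional split above also makes it easy to label any inspection with its part of town. A small stdlib-only sketch, reusing a subset of the tuples defined in the text (the `region_of` helper itself is mine, not part of the original analysis):

```python
SE_zip = (78744, 78747, 78719, 78741)
Central_zip = (78701, 78702, 78703, 78705, 78721, 78723, 78712, 78751, 78756)
NE_zip = (78752, 78753, 78754)

REGIONS = {'SE': SE_zip, 'Central': Central_zip, 'NE': NE_zip}

def region_of(zip_code):
    """Return the region label for a Zip Code, or None if it's outside the map."""
    for name, zips in REGIONS.items():
        if zip_code in zips:
            return name
    return None

print(region_of(78701))  # Central
print(region_of(78744))  # SE
```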
End of explanation
raw_data = {
    'Zip_Code': [78701, 78702, 78703, 78705, 78721, 78723, 78712, 78751, 78756,
                 78744, 78747, 78719, 78741, 78752, 78753, 78754, 78704, 78745,
                 78748, 78739, 78749, 78735, 78733, 78746, 78757, 78758, 78727,
                 78722, 78729, 78717, 78750, 78759, 78726, 78730, 78731, 78732],
    'Med_Income': [35757, 23348, 54591, 14740, 26646, 34242, np.nan, 29779, 36978,
                   38256, 60861, np.nan, 25369, 30207, 38206, 51810, 35733, 43458,
                   57710, 102707, 68244, 75204, 102239, 100571, 45090, 42398, 62648,
                   35794, 59497, 87290, 78428, 61284, 89891, 128524, 62404, 103951],
    'Population': [3780, 22498, 19522, 26972, 10192, 30196, np.nan, 13829, 7253,
                   34028, 4908, np.nan, 40678, 17978, 43788, 5517, 43343, 53136,
                   25138, 8708, 28420, 9563, 9144, 25768, 21434, 42977, 22332,
                   6264, 24539, 8209, 23563, 40327, 6547, 4848, 24068, 3804],
    'Home_Ownership': [0.377, 0.467, 0.515, 0.11, 0.576, 0.441, np.nan, 0.259, 0.395,
                       0.536, 0.89, np.nan, 0.146, 0.266, 0.407, 0.11, 0.302, 0.09,
                       0.739, 0.962, 0.36, 0.564, 0.901, 0.71, 0.566, 0.309, 0.573,
                       0.468, 0.516, 0.793, 0.717, 0.451, 0.7, 0.74, 0.593, 0.97]}
df_numerics = pd.DataFrame(raw_data, columns=['Zip_Code', 'Med_Income', 'Population', 'Home_Ownership'])
df_numerics
Explanation: Let us now introduce the U.S. 2014 Census numerics data: End of explanation
merged = pd.merge(ACL, df_numerics, on='Zip_Code')
merged.head()
Explanation: 78712 is the special Zip Code for the University of Texas Campus and 78719 corresponds to Austin - Bergstrom International Airport. Let us now merge the two DataFrames, ACL and df_numerics, into a common one: End of explanation
# Create a DataFrame off of the constituent components of the 'Address' column:
merged_location = merged['Address'].apply(lambda x: pd.Series(x.split('\n')))
merged_location.head()
Explanation: 5. DataFrame TLC and a very interesting visualization Our "main" DataFrame needs some TLC.
We also have to exploit the "Address" column further by creating neat columns with street address, city information and coordinates for each row - even though we are focusing on the contiguous Austin area, we still have the areas of Rollingwood, West Lake Hills, Sunset Valley and Bee Cave to take into account (and get some interesting associations between Restaurant Health Inspection Score and Income...) End of explanation
geocode = merged_location[0] + ', ' + merged_location[1]
geocode.head()

# What about the coordinates column?
merged_location[2].head(2)
Explanation: Let's reformat the first two columns for geocoding: End of explanation
# Get rid of the parentheses:
coords = pd.Series(merged_location[2])
stripped_coords = []
for coord in coords:
    coord = coord.strip('()')
    stripped_coords.append(coord)
stripped_coords[1]

merged['Coordinates'] = pd.Series(stripped_coords)
merged['Latitude'] = 0
merged['Longitude'] = 0
Explanation: Each element of column [2] of the merged_location DataFrame is a parenthesized pair of coordinates, representing latitude and longitude.
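Stripped of pandas, the per-string transformation these coordinate cells perform is simple enough to sketch with the stdlib alone (the example coordinate string is made up):

```python
def parse_coordinates(text):
    """Turn a string like '(30.2672, -97.7431)' into a (lat, lon) float pair."""
    lat_s, lon_s = text.strip('()').split(',')
    return float(lat_s), float(lon_s)

print(parse_coordinates('(30.2672, -97.7431)'))  # (30.2672, -97.7431)
```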
Those seem like an interesting addition to our dataframe, therefore we'll do a little bit more work to add them as individual columns: End of explanation
# Create two lists for the loop results to be placed in
lat = []
lon = []

# For each row in the variable,
for row in merged['Coordinates']:
    # Try to,
    try:
        # Split the row by comma, convert to float, and append
        # everything before the comma to lat
        lat.append(float(row.split(',')[0]))
        # Split the row by comma, convert to float, and append
        # everything after the comma to lon
        lon.append(float(row.split(',')[1]))
    # But if you get an error
    except:
        # append a missing value to lat
        lat.append(np.NaN)
        # append a missing value to lon
        lon.append(np.NaN)

# Create two new columns from lat and lon
merged['Latitude'] = lat
merged['Longitude'] = lon
merged.head()

# Create a DataFrame off of column [1] of merged_location: City and State-Zip Code:
cities = merged_location[1].apply(lambda x: pd.Series(x.split(',')))
cities.head()

# How many unique cities are in our DataFrame?
pd.unique(cities[0].ravel())

# All our restaurants are in Texas, and we have already dealt with Zip Codes. Ergo we don't need column [1]:
del cities[1]
cities.head()

# Let us add this neatly divided information into our main DataFrame:
merged['Street'] = merged_location[0]
merged['City'] = cities[0]
merged['Geocode'] = geocode
merged.head(10)

# How many kinds of inspections are there?
pd.unique(merged['Process_Description'].ravel())

# Delete all columns we don't need anymore:
del merged['Address']
del merged['Coordinates']
del merged['string_id']

# Rearrange the remaining columns in a nice way:
merged = merged[['Facility_ID', 'Restaurant_Name', 'Inspection_Date',
                 'Process_Description', 'Geocode', 'Street', 'City', 'Zip_Code',
                 'Score', 'Med_Income', 'Population', 'Home_Ownership', 'Latitude', 'Longitude']]
merged.head(3)

# We have 16113 rows with neat addresses, which we will use for mapping
len(merged['Geocode'])
Explanation: Following Chris Albon's excellent code snippet: End of explanation
# Visualize the relationship between the features and the response using scatterplots
fig, axs = plt.subplots(1, 3, sharey=True)
merged.plot(kind='scatter', x='Med_Income', y='Score', ax=axs[0], figsize=(16, 8));
merged.plot(kind='scatter', x='Population', y='Score', ax=axs[1]);
merged.plot(kind='scatter', x='Home_Ownership', y='Score', ax=axs[2]);
Explanation: Finally, the interesting visualization I promised. Even though this will be studied in more detail in the next "chapter" of this project, supervised learning, I was wondering whether there is any stark correlation between a restaurant's health inspection score (response) and each one of the numeric columns I have added to our data: Median Income, Population and Home Ownership percentage (features). End of explanation
merged.to_csv('../data/master.csv', index=False)
Explanation: At first sight the data looks scattered all over the place for all three plots, and there's possibly no correlation between any of the pairs. However, there are some interesting remarks one could make: As we have already seen, the vast majority of restaurants score over 70 (passing grade). In areas (Zip Codes) where the median income is relatively lower, it is more probable that there will be restaurants doing worse than average.
Notice that when the median income is higher than \$120,000, there is not a single restaurant failing the inspection. However, this might just be a random fluctuation of our data, since there are a few failing restaurants in areas where the median income lies between \$80,000 and \$120,000. Population really doesn't seem to play any role; there is a more or less even distribution of health inspection grades irrespective of the Zip Code's population: both the sparsely and the densely populated areas of town have their share of passing and failing restaurants. The scatterplot of restaurant health inspection scores vs. percentage of home ownership looks very similar to the one of scores vs. median income. We can easily check whether the two quantities are correlated, which we intuitively expect. Next: Supervised Learning - Regression Analysis Let us save our working DataFrame as a csv file so we can import it into the next Notebook. End of explanation
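The correlation check suggested above would normally be done with `stats.pearsonr`, as earlier in the notebook; for reference, here is what that computation amounts to, as a stdlib-only sketch. The sample pairs below are a small subset of the Census values entered earlier, used purely for illustration:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient; matches the r returned by scipy.stats.pearsonr."""
    n = len(xs)
    mx = sum(xs) / float(n)
    my = sum(ys) / float(n)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A handful of median-income / home-ownership pairs from the Census table above
income = [35757, 23348, 54591, 102707, 68244]
ownership = [0.377, 0.467, 0.515, 0.962, 0.36]
print(pearson_r(income, ownership))
```

On the full DataFrame the equivalent call would be `stats.pearsonr(merged['Med_Income'], merged['Home_Ownership'])`, after dropping the rows with missing values.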
7,599
Given the following text description, write Python code to implement the functionality described below step by step Description: Mock community quality control This notebook maps observed mock community sequences, which are technically from unknown organisms, to "trueish" taxonomies, i.e., the most likely taxonomic match given a list of expected sequences derived from the input strains. This serves two purposes Step1: Define paths to tax-credit repository directory, mockrobiota repository directory, and reference database directory. Step2: Identify location of your reference databases. Step3: Now generate reference sequence/taxonomy dictionaries. Step4: Establish expected sequences and taxonomies Step5: Map sequences that match to taxonomies
Python Code: from tax_credit import mock_quality from os.path import expandvars, join Explanation: Mock community quality control This notebook maps observed mock community sequences, which are technically from unknown organisms, to "trueish" taxonomies, i.e., the most likely taxonomic match given a list of expected sequences derived from the input strains. This serves two purposes: 1. We can then use trueish taxonomies to calculate per-sequence precision/recall scores 2. Mismatch profiles give us a quantitative assessment of the overall "quality" of a mock community (or at least the quality control methods used to process it). End of explanation data_dir = expandvars('$HOME/Desktop/projects/tax-credit/') mockrobiota_dir = expandvars('$HOME/Desktop/projects/mockrobiota/') ref_dir = expandvars("$HOME/Desktop/ref_dbs/") Explanation: Define paths to tax-credit repository directory, mockrobiota repository directory, and reference database directory. End of explanation ref_dbs = [('greengenes', join(ref_dir, 'gg_13_8_otus', 'rep_set', '99_otus.fasta'), join(ref_dir, 'gg_13_8_otus', 'taxonomy', '99_otu_taxonomy.txt')), ('unite', join(ref_dir, 'sh_qiime_release_20.11.2016', 'developer', 'sh_refs_qiime_ver7_99_20.11.2016_dev.fasta'), join(ref_dir, 'sh_qiime_release_20.11.2016', 'developer', 'sh_taxonomy_qiime_ver7_99_20.11.2016_dev.txt'))] Explanation: Identify location of your reference databases. End of explanation refs, taxs = mock_quality.ref_db_to_dict(ref_dbs) Explanation: Now generate reference sequence/taxonomy dictionaries. End of explanation mock_quality.match_expected_seqs_to_taxonomy(data_dir, mockrobiota_dir, refs, taxs) Explanation: Establish expected sequences and taxonomies End of explanation mock_quality.generate_trueish_taxonomies(data_dir) Explanation: Map sequences that match to taxonomies End of explanation