Given the following text description, write Python code to implement the functionality described below step by step.
Description:
Step1: Assignment 3b
Step2: Encoding issues with txt files. For Windows users, the file “AnnaKarenina.txt” gets the encoding cp1252. In order to open the file, you have to add encoding='utf-8', i.e.,
    a_path = 'some path on your computer.txt'
    with open(a_path, mode='r', encoding='utf-8')
Step3: 2.b) Store the function in the Python module utils.py. Import it in analyze.py. Edit analyze.py so that
Step4: Please note that Exercises 3 and 4, respectively, are designed to be difficult. You will have to combine what you have learnt so far to complete these exercises. Exercise 3: Let's compare the books based on the statistics. Create a dictionary stats2book_with_highest_value in analyze.py with four keys
Python Code:
# Downloading data - you get this for free :-)
import requests
import os

def download_book(url):
    """Download book given a url to a book in .txt format and return it as a string."""
    text_request = requests.get(url)
    text = text_request.text
    return text

book_urls = dict()
book_urls['HuckFinn'] = 'http://www.gutenberg.org/cache/epub/7100/pg7100.txt'
book_urls['Macbeth'] = 'http://www.gutenberg.org/cache/epub/1533/pg1533.txt'
book_urls['AnnaKarenina'] = 'http://www.gutenberg.org/files/1399/1399-0.txt'

if not os.path.isdir('../Data/books/'):
    os.mkdir('../Data/books/')

for name, url in book_urls.items():
    text = download_book(url)
    with open('../Data/books/' + name + '.txt', 'w', encoding='utf-8') as outfile:
        outfile.write(text)
Explanation:
Assignment 3b: Writing your own nlp program. Due: Friday the 1st of October 2021, 14:30. Please submit your assignment (notebooks of parts 3a and 3b + additional files) as a single .zip file using Canvas (Assignments --> Assignment 3). Please name your zip file with the following naming convention: ASSIGNMENT_3_FIRSTNAME_LASTNAME.zip
IMPORTANT NOTE:
* The students who follow the Bachelor version of this course, i.e., the course Introduction to Python for Humanities and Social Sciences (L_AABAALG075) as part of the minor Digital Humanities, do not have to do Exercises 3 and 4 of Assignment 3b
* The other students, i.e., who follow the Master version of the course, which is Programming in Python for Text Analysis (L_AAMPLIN021), are required to do Exercises 3 and 4 of Assignment 3b
If you have questions about this topic, please contact us (cltl.python.course@gmail.com). Questions and answers will be collected on Piazza, so please check if your question has already been answered first.
In this part of the assignment, we will carry out our own little text analysis project. The goal is to gain some insights into longer texts without having to read them all in detail. This part of the assignment builds on some notions that have been revised in part A of the assignment. Please feel free to go back to part A and reuse your code whenever possible. The goals of this part are:
* divide a problem into smaller sub-problems and test code using small examples
* doing text analysis and writing results to a file
* combining small functions into bigger functions
Tip: The assignment is split into four steps, which are divided into smaller steps. Instead of doing everything step by step, we highly recommend you read all sub-steps of a step first and then start coding. In many cases, the sub-steps are there to help you split the problem into manageable sub-problems, but it is still good to keep the overall goal in mind.
Preparation: Data collection. In the directory ../Data/books/, you should find three .txt files. If not, you can use the following cell to download them. Also, feel free to look at the code to learn how to download .txt files from the web. We defined a function called download_book which downloads a book in .txt format. Then we define a dictionary with names and urls. We loop through the dictionary, download each book and write it to a file stored in the ../Data/books/ directory. You don't need to do anything - just run the cell and the files will be downloaded to your computer.
End of explanation
from nltk.tokenize import sent_tokenize, word_tokenize

text = 'Python is a programming language. It was created by Guido van Rossum.'
for sent in sent_tokenize(text):
    print('SENTENCE', sent)
    tokens = word_tokenize(sent)
    print('TOKENS', tokens)
Explanation:
Encoding issues with txt files. For Windows users, the file “AnnaKarenina.txt” gets the encoding cp1252. In order to open the file, you have to add encoding='utf-8', i.e.,
    a_path = 'some path on your computer.txt'
    with open(a_path, mode='r', encoding='utf-8'):
        # process file
Exercise 1: Was the download successful? Let's start writing code! Please create the following two Python modules:
* Python module analyze.py - this module you will call from the command line
* Python module utils.py - this module will contain your helper functions
More precisely, please create two files, analyze.py and utils.py, which are both placed in the same directory as this notebook. The two files are empty at this stage of the assignment.
1.a) Write a function called get_paths and store it in the Python module utils.py. The function get_paths:
* takes one positional parameter called input_folder
* stores all paths to .txt files in the input_folder in a list
* returns a list of strings, i.e., each string is a file path
Once you've created the function and stored it in utils.py:
* Import the function into analyze.py, using from utils import get_paths
* Call the function inside analyze.py (input_folder="../Data/books")
* Assign the output of the function to a variable and print this variable.
* Call analyze.py from the command line to test it.
Exercise 2: 2.a) Let's get a little bit of an overview of what we can find in each text. Write a function called get_basic_stats. The function get_basic_stats:
* has one positional parameter called txt_path which is the path to a txt file
* reads the content of the txt file into a string
* computes the following statistics:
    * the number of sentences
    * the number of tokens
    * the size of the vocabulary used (i.e. unique tokens)
    * the number of chapters/acts:
        * count occurrences of 'CHAPTER' in HuckFinn.txt
        * count occurrences of 'Chapter ' (with the space) in AnnaKarenina.txt
        * count occurrences of 'ACT' in Macbeth.txt
* returns a dictionary with four key:value pairs, one for each statistic described above: num_sents, num_tokens, vocab_size, num_chapters_or_acts
In order to compute the statistics, you need to perform sentence splitting and tokenization. Here is an example snippet.
End of explanation
import os

basename = os.path.basename('../Data/books/HuckFinn.txt')
book = os.path.splitext(basename)[0]  # splitext removes the extension; str.strip('.txt') would strip characters, not a suffix
print(book)
Explanation:
2.b) Store the function in the Python module utils.py. Import it in analyze.py. Edit analyze.py so that:
* you first call the function get_paths
* you create an empty dictionary called book2stats, i.e., book2stats = {}
* you loop over the list of txt files (the output from get_paths) and call the function get_basic_stats on each file
* you print the output of calling the function get_basic_stats on each file
* you update the dictionary book2stats with each iteration of the for loop
Tip: book2stats is a dictionary mapping a book name (the key), e.g., 'AnnaKarenina', to a dictionary (the value) (the output from get_basic_stats).
Tip: please use the following code snippet to obtain the basename of a file path:
End of explanation
import operator

token2freq = {'a': 1000, 'the': 100, 'cow': 5}
for token, freq in sorted(token2freq.items(), key=operator.itemgetter(1), reverse=True):
    print(token, freq)
Explanation:
Please note that Exercises 3 and 4, respectively, are designed to be difficult.
You will have to combine what you have learnt so far to complete these exercises.
Exercise 3: Let's compare the books based on the statistics. Create a dictionary stats2book_with_highest_value in analyze.py with four keys:
* num_sents
* num_tokens
* vocab_size
* num_chapters_or_acts
The values are not the frequencies, but the book that has the highest value for the statistic. Make use of the book2stats dictionary to accomplish this.
Exercise 4: 4.a) The statistics above already provide some insights, but we want to know a bit more about what the books are about. To do this, we want to get the 30 most frequent tokens of each book. Edit the function get_basic_stats to add one more key:value pair:
* the key is top_30_tokens
* the value is a list of the 30 most frequent words in the text
4.b) Write the top 30 tokens (one on each line) for each file to disk using the naming top_30_[FILENAME]:
* top_30_AnnaKarenina.txt
* top_30_HuckFinn.txt
* top_30_Macbeth.txt
Example of file ('the' and 'and' may not be the most frequent tokens, these are just examples):
    the
    and
    ..
The preceding code snippet can help you with obtaining the top 30 occurring tokens. The goal is to call the function you updated in Exercise 4a, i.e., get_basic_stats, in the file analyze.py. This also makes it possible to write the top 30 tokens to files.
End of explanation
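The assignment leaves the implementations to the reader. As one possible sketch of the two utils.py helpers described above (our illustration, not the course's reference solution; it assumes NLTK is installed, and the marker mapping per book is our own reading of the assignment text):
```python
import glob
import os
from collections import Counter
from nltk.tokenize import sent_tokenize, word_tokenize

def get_paths(input_folder):
    """Return a list of paths to all .txt files in input_folder."""
    return glob.glob(os.path.join(input_folder, '*.txt'))

def get_basic_stats(txt_path):
    """Return the statistics described in Exercises 2 and 4 for one book."""
    with open(txt_path, mode='r', encoding='utf-8') as infile:
        text = infile.read()
    # chapter/act markers per book, following the assignment text
    markers = {'HuckFinn': 'CHAPTER', 'AnnaKarenina': 'Chapter ', 'Macbeth': 'ACT'}
    book = os.path.splitext(os.path.basename(txt_path))[0]
    tokens = []
    num_sents = 0
    for sent in sent_tokenize(text):
        num_sents += 1
        tokens.extend(word_tokenize(sent))
    return {'num_sents': num_sents,
            'num_tokens': len(tokens),
            'vocab_size': len(set(tokens)),
            'num_chapters_or_acts': text.count(markers.get(book, 'CHAPTER')),
            'top_30_tokens': [t for t, _ in Counter(tokens).most_common(30)]}
```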
Given the following text description, write Python code to implement the functionality described below step by step.
Description:
Data generation
Step1: Preparing data set sweep. First, we're going to define the data sets that we'll sweep over. The following cell does not need to be modified unless you wish to change the datasets or reference databases used in the sweep.
Step2: Preparing the method/parameter combinations and generating commands. Now we set the methods and method-specific parameters that we want to sweep. Modify to sweep other methods. Note how method_parameters_combinations feeds method/parameter combinations to parameter_sweep() in the cell below.
Step3: Now enter the template of the command to sweep, and generate a list of commands with parameter_sweep(). Fields must adhere to the following format
Step4: As a sanity check, we can look at the first command that was generated and the number of commands generated.
Step5: Finally, we run our commands.
Step6: Generate per-method biom tables. Modify the taxonomy_glob below to point to the taxonomy assignments that were generated above. This may be necessary if filepaths were altered in the preceding cells.
Step7: Move result files to repository. Add results to the tax-credit directory (e.g., to push these results to the repository or compare with other precomputed results in downstream analysis steps). The precomputed_results_dir path and methods_dirs glob below should not need to be changed unless substantial changes were made to filepaths in the preceding cells. Uncomment and run when (and if) you want to move your new results to the tax-credit directory. Note that results needn't be in tax-credit to compare using the evaluation notebooks.
Python Code:
from os.path import join, expandvars
from joblib import Parallel, delayed
from glob import glob
from os import system
from tax_credit.framework_functions import (parameter_sweep,
                                            generate_per_method_biom_tables,
                                            move_results_to_repository)

project_dir = expandvars("$HOME/Desktop/projects/tax-credit")
analysis_name = "mock-community"
data_dir = join(project_dir, "data", analysis_name)

reference_database_dir = expandvars("$HOME/Desktop/projects/tax-credit/data/ref_dbs/")
results_dir = expandvars("$HOME/Desktop/projects/mock-community/")
Explanation:
Data generation: using python to sweep over methods and parameters. This notebook serves as a template for using python to generate and run a list of commands. To use, follow these instructions:
1) Select File -> Make a Copy... from the toolbar above to copy this notebook and provide a new name describing the method(s) that you are testing.
2) Modify file paths in cell 2 of Environment preparation to match the directory structure on your system.
3) Select the datasets you wish to test under Preparing data set sweep; choose from the list of datasets included in tax-credit, or add your own.
4) Prepare methods and command template. Enter your method / parameter combinations as a dictionary to method_parameters_combinations in cell 1, then provide a command_template in cell 2. This notebook example assumes that the method commands are passed to the command line, but the command list generated by parameter_sweep() can also be directed to the python interpreter, as shown in this example. Check the command list in cell 3, and set the number of jobs and joblib parameters in cell 4.
5) Run all cells and hold onto your hat.
For an example of how to test classification methods in this notebook, see taxonomy assignment with Qiime 1.
Environment preparation
End of explanation
dataset_reference_combinations = [
    ('mock-1', 'gg_13_8_otus'),   # formerly S16S-1
    ('mock-2', 'gg_13_8_otus'),   # formerly S16S-2
    ('mock-3', 'gg_13_8_otus'),   # formerly Broad-1
    ('mock-4', 'gg_13_8_otus'),   # formerly Broad-2
    ('mock-5', 'gg_13_8_otus'),   # formerly Broad-3
    ('mock-6', 'gg_13_8_otus'),   # formerly Turnbaugh-1
    ('mock-7', 'gg_13_8_otus'),   # formerly Turnbaugh-2
    ('mock-8', 'gg_13_8_otus'),   # formerly Turnbaugh-3
    ('mock-9', 'unite_20.11.2016'),   # formerly ITS1
    ('mock-10', 'unite_20.11.2016'),  # formerly ITS2-SAG
    ('mock-12', 'gg_13_8_otus'),          # Extreme
    ('mock-13', 'gg_13_8_otus_full16S'),  # kozich-1
    ('mock-14', 'gg_13_8_otus_full16S'),  # kozich-2
    ('mock-15', 'gg_13_8_otus_full16S'),  # kozich-3
    ('mock-16', 'gg_13_8_otus'),          # schirmer-1
]

reference_dbs = {'gg_13_8_otus': (join(reference_database_dir, 'gg_13_8_otus/99_otus_clean_515f-806r_trim250.fasta'),
                                  join(reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.tsv')),
                 'gg_13_8_otus_full16S': (join(reference_database_dir, 'gg_13_8_otus/99_otus_clean.fasta'),
                                          join(reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.tsv')),
                 'unite_20.11.2016': (join(reference_database_dir, 'unite_20.11.2016/sh_refs_qiime_ver7_99_20.11.2016_dev_clean_ITS1Ff-ITS2r_trim250.fasta'),
                                      join(reference_database_dir, 'unite_20.11.2016/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.tsv'))}
Explanation:
Preparing data set sweep. First, we're going to define the data sets that we'll sweep over. The following cell does not need to be modified unless you wish to change the datasets or reference databases used in the sweep.
End of explanation
method_parameters_combinations = {
    'awesome-method-number-1': {'confidence': [0.0, 0.1, 0.2, 0.3, 0.4, 0.5,
                                               0.6, 0.7, 0.8, 0.9, 1.0]},
}
Explanation:
Preparing the method/parameter combinations and generating commands. Now we set the methods and method-specific parameters that we want to sweep. Modify to sweep other methods. Note how method_parameters_combinations feeds method/parameter combinations to parameter_sweep() in the cell below.
End of explanation
command_template = "command_line_assignment -i {1} -o {0} -r {2} -t {3} -m {4} {5} --rdp_max_memory 16000"

commands = parameter_sweep(data_dir, results_dir, reference_dbs,
                           dataset_reference_combinations,
                           method_parameters_combinations, command_template,
                           infile='rep_seqs.fna',
                           output_name='rep_seqs_tax_assignments.txt')
Explanation:
Now enter the template of the command to sweep, and generate a list of commands with parameter_sweep(). Fields must adhere to the following format:
    {0} = output directory
    {1} = input data
    {2} = reference sequences
    {3} = reference taxonomy
    {4} = method name
    {5} = other parameters
End of explanation
print(len(commands))
commands[0]
Explanation:
As a sanity check, we can look at the first command that was generated and the number of commands generated.
End of explanation
Parallel(n_jobs=4)(delayed(system)(command) for command in commands)
Explanation:
Finally, we run our commands.
End of explanation
taxonomy_glob = join(results_dir, '*', '*', '*', '*', 'rep_seqs_tax_assignments.txt')
generate_per_method_biom_tables(taxonomy_glob, data_dir)
Explanation:
Generate per-method biom tables. Modify the taxonomy_glob below to point to the taxonomy assignments that were generated above. This may be necessary if filepaths were altered in the preceding cells.
End of explanation
# precomputed_results_dir = join(project_dir, "data", "precomputed-results", analysis_name)
# method_dirs = glob(join(results_dir, '*', '*', '*', '*'))
# move_results_to_repository(method_dirs, precomputed_results_dir)
Explanation:
Move result files to repository. Add results to the tax-credit directory (e.g., to push these results to the repository or compare with other precomputed results in downstream analysis steps). The precomputed_results_dir path and methods_dirs glob below should not need to be changed unless substantial changes were made to filepaths in the preceding cells. Uncomment and run when (and if) you want to move your new results to the tax-credit directory. Note that results needn't be in tax-credit to compare using the evaluation notebooks.
End of explanation
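parameter_sweep() comes from tax_credit.framework_functions, and its internals are not shown in this notebook. As a rough, hypothetical illustration of what a command-template expansion like this does (our sketch, not the tax-credit implementation):
```python
from itertools import product

def sketch_parameter_sweep(method_parameters_combinations, command_template, fixed_fields):
    """Toy stand-in: expand every method/parameter combination into one shell
    command. fixed_fields supplies the output dir, input data, reference
    sequences and taxonomy for template fields {0}-{3}."""
    commands = []
    for method, params in method_parameters_combinations.items():
        names, values = zip(*params.items())
        for combo in product(*values):
            extra = " ".join("--{} {}".format(n, v) for n, v in zip(names, combo))
            commands.append(command_template.format(*fixed_fields, method, extra))
    return commands

# e.g. one method with 11 confidence values yields 11 commands per dataset/reference pair
```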
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: train-test split evaluation random forest on the housing dataset
Python Code:
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv'
dataframe = read_csv(url, header=None)
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
# split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
# fit the model on the training set and evaluate on the held-out test set
model = RandomForestRegressor(random_state=1)
model.fit(X_train, y_train)
yhat = model.predict(X_test)
mae = mean_absolute_error(y_test, yhat)
print(mae)
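A single train/test split can be noisy. As an optional, hedged extension (not part of the original problem statement), the same evaluation can be repeated over several random splits to gauge the variance of the MAE:
```python
import numpy as np

maes = []
for seed in range(5):  # 5 repeats is an arbitrary choice for illustration
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.33, random_state=seed)
    m = RandomForestRegressor(random_state=1).fit(Xtr, ytr)
    maes.append(mean_absolute_error(yte, m.predict(Xte)))
print('MAE: %.3f +/- %.3f' % (np.mean(maes), np.std(maes)))
```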
Given the following text description, write Python code to implement the functionality described below step by step.
Description:
Welcome to BASS! Version
Step1: Instructions. For help, check out the wiki
Step2: OPTIONAL GRAPHS AND ANALYSIS. The following blocks are optional calls to other figures and analysis. Display Event Detection Tables. Display Settings used for analysis
Step3: Display Summary Results for Peaks
Step4: Display Summary Results for Bursts
Step5: Interactive Graphs. Line Graphs. One panel, detected events. Plot one time series by calling its name
Step6: Two panel. Create line plots of the raw data as well as the data analysis. Plots are saved by clicking the save button in the pop-up window with your graph. key = 'Mean1', start = 100, end = 101. Results Line Plot
Step7: Autocorrelation. Display the Autocorrelation plot of your transformed data. Choose the start and end time in seconds. To capture the whole time series, use end = -1. May be slow. key = 'Mean1', start = 0, end = 10. Autocorrelation Plot
Step8: Raster Plot. Shows the temporal relationship of peaks in each column. Auto scales. Display only. Intended for more than one column of data
Step9: Frequency Plot. Use this block to plot changes of any measurement over time. Does not support 'all'. Example
Step10: Analyze Events by Measurement. Generates a line plot with error bars for a given event measurement. X axis is the names of each time series. Display only. Intended for more than one column of data. This is not a box and whiskers plot. event_type = 'peaks', meas = 'Peaks Amplitude'. Analyze Events by Measurement
Step11: Poincare Plots. Create a Poincare Plot of your favorite variable. Choose an event type (Peaks or Bursts) and a measurement type. Calling meas = 'All' is supported. Plots and tables are saved automatically. Example
Step12: Quick Poincare Plot. Quickly call one poincare plot for display. Plot and Table are not saved automatically. Choose an event type (Peaks or Bursts), measurement type, and key. Calling meas = 'All' is not supported. Quick Poincare
Step13: Power Spectral Density. The following blocks allow you to assess the power of event measurements in the frequency domain. While you can call this block on any event measurement, it is intended to be used on interval data (or at least data with units in seconds). Recommended
Step14: Time Series. Use the settings code block to set your frequency bands to calculate area under the curve. This block is not required. Band output is always in raw power, even if the graph scale is dB/Hz. Power Spectral Density
Step15: Use the block below to generate the PSD graph and power in bands results (if selected). scale toggles which units to use for the graph
Step16: Spectrogram. Use the block below to get the spectrogram of the signal. The frequency (y-axis) scales automatically to only show 'active' frequencies. This can take some time to run. version = 'original', key = 'Mean1'. After transformation is run, you can call version = 'trans'. This graph is not automatically saved. Spectrogram
Step17: Descriptive Statistics. Moving/Sliding Averages, Standard Deviation, and Count. Generates the moving mean, standard deviation, and count for a given measurement across all columns of the Data in the form of a DataFrame (displayed as a table). Saves out the dataframes of these three results automatically with the window size in the name as a .csv. If meas == 'All', then the function will loop and produce these tables for all measurements.
event_type = 'Peaks', meas = 'all', window = 30. Moving Stats
Step18: Entropy. Histogram Entropy. Calculates the histogram entropy of a measurement for each column of data. Also saves the histogram of each. If meas is set to 'all', then all available measurements from the event_type chosen will be calculated iteratively. If all of the samples fall in one bin regardless of the bin size, it means we have the most predictable situation and the entropy is 0. If we have a uniformly distributed function, the max entropy will be 1. event_type = 'Bursts', meas = 'all'. Histogram Entropy
Step19: Approximate entropy. This only runs if you have pyeeg.py in the same folder as this notebook and bass.py. WARNING
Step20: Time Series
Step21: Sample Entropy. This only runs if you have pyeeg.py in the same folder as this notebook and bass.py. WARNING
Step22: Time Series
Step23: Helpful Stuff. While not completely up to date with some of the new changes, the Wiki can be useful if you have questions about some of the settings
Python Code:
from BASS import *
Explanation:
Welcome to BASS! Version: Beta 2.0. Created by Abigail Dobyns and Ryan Thorpe. BASS: Biomedical Analysis Software Suite for event detection and signal processing. Copyright (C) 2015 Abigail Dobyns. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.
Initialize: Run the following code block to initialize the program. Run this block one time.
End of explanation
#Import and Initialize
#Class BASS_Dataset(inputDir, fileName, outputDir, fileType='plain', timeScale='seconds')
data1 = BASS_Dataset('C:\\Users\\Ryan\\Desktop\\sample_data\\', '2016-06-22_C1a_P3_Base.txt', 'C:\\Users\\Ryan\\Desktop\\bass_output\\pleth')
data2 = BASS_Dataset('C:\\Users\\Ryan\\Desktop\\sample_data\\', '2016-06-26_C1a_P7_Base.txt', 'C:\\Users\\Ryan\\Desktop\\bass_output\\pleth')

#transformation Settings
data1.Settings['Absolute Value'] = False #Must be True if Savitzky-Golay is being used
data1.Settings['Bandpass Highcut'] = 12 #in Hz
data1.Settings['Bandpass Lowcut'] = 1 #in Hz
data1.Settings['Bandpass Polynomial'] = 1 #integer
data1.Settings['Linear Fit'] = False #between 0 and 1 on the whole time series
data1.Settings['Linear Fit-Rolling R'] = 0.5 #between 0 and 1
data1.Settings['Linear Fit-Rolling Window'] = 1000 #window for rolling mean for fit, unit is index not time
data1.Settings['Relative Baseline'] = 0 #default 0, unless data is normalized, then 1.0. Can be any float
data1.Settings['Savitzky-Golay Polynomial'] = 'none' #integer
data1.Settings['Savitzky-Golay Window Size'] = 'none' #must be odd. units are index not time

#Baseline Settings
data1.Settings['Baseline Type'] = r'rolling' #'linear', 'rolling', or 'static'
#For Linear
data1.Settings['Baseline Start'] = None #start time in seconds
data1.Settings['Baseline Stop'] = None #end time in seconds
#For Rolling
data1.Settings['Rolling Baseline Window'] = 5 #in seconds. leave as 'none' if linear or static

#Peaks
data1.Settings['Delta'] = 0.05
data1.Settings['Peak Minimum'] = -0.50 #amplitude value
data1.Settings['Peak Maximum'] = 0.50 #amplitude value

#Bursts
data1.Settings['Apnea Factor'] = 2 #factor to define apneas as a function of expiration
data1.Settings['Burst Area'] = True #calculate burst area
data1.Settings['Exclude Edges'] = True #False to keep edges, True to discard them
data1.Settings['Inter-event interval minimum (time-scale units)'] = 0.0001 #only for bursts, not for peaks
data1.Settings['Maximum Burst Duration (time-scale units)'] = 6
data1.Settings['Minimum Burst Duration (time-scale units)'] = 0.0001
data1.Settings['Minimum Peak Number'] = 1 #minimum number of peaks/burst, integer
data1.Settings['Threshold'] = 0.0001 #linear: proportion of baseline.
                                     #static: literal value.
                                     #rolling: amount greater than the rolling baseline at each time point.
#Outputs
data1.Settings['Generate Graphs'] = False #create and save the fancy graph outputs

#Settings that you should not change unless you are a super advanced user:
#These are settings that are still in development
data1.Settings['Graph LCpro events'] = False

############################################################################################
data1.run_analysis('pleth', batch=False)
Explanation:
Instructions. For help, check out the wiki: Protocol. Or the video tutorial: Coming Soon!
1) Load Data File(s). Use the following block to create a BASS_Dataset object and initialize your settings. All settings are attributes of the dataset instance. Manual initialization of settings in this block is optional and is required only once for a given batch. All BASS_Dataset objects that are initialized are automatically added to the batch.
class BASS_Dataset(inputDir, fileName, outputDir, fileType='plain', timeScale='seconds')
Attributes:
    Batch: static list. Contains all instances of the BASS_Dataset object, to be referenced by the global runBatch function.
    Data: library instance data
    Settings: library instance settings
    Results: library instance results
Methods:
    run_analysis(settings = self.Settings, analysis_module): BASS_Dataset method. Highest level of the object-oriented analysis pipeline. First syncs the settings of all BASS_Dataset objects (stored in Batch), then runs the specified analysis module on each one. Analysis must be called after the object is initialized and Settings added, if the Settings are to be added manually (not via the interactive check and load settings function). Analysis runs according to the batch-oriented protocol and is specific to the analysis module determined by the "analysis_module" parameter.
2) Run Analysis. Run BASS_Dataset.run_analysis(analysis_mod, settings, batch). Runs in either single (batch=False) or batch mode. Assuming batch mode, this function first syncs the settings of each dataset within BASS_Dataset.Batch to the entered parameter "settings", then runs analysis on each instance within Batch. Be sure to select the correct module given your desired type of analysis. The current options (as of 9/21/16) are "ekg" and "pleth".
Parameters are as follows:
    analysis_mod: string. The name of the BASS_Dataset module which will be used to analyze the batch of datasets.
    settings: string or dictionary. Can be entered as the location of a settings file or the actual settings dictionary (default = self.Settings).
    batch: boolean. Determines if the analysis is performed on only the self-instance or as a batch on all object instances (default=True).
More Info on Settings: For more information about other settings, go to: Transforming Data, Baseline Settings, Peak Detection Settings, Burst Detection Settings.
End of explanation
display_settings(Settings)
Explanation:
OPTIONAL GRAPHS AND ANALYSIS. The following blocks are optional calls to other figures and analysis. Display Event Detection Tables. Display Settings used for analysis.
End of explanation
#grouped summary for peaks
Results['Peaks-Master'].groupby(level=0).describe()
Explanation:
Display Summary Results for Peaks
End of explanation
#grouped summary for bursts
Results['Bursts-Master'].groupby(level=0).describe()
Explanation:
Display Summary Results for Bursts
End of explanation
#Interactive, single time series by Key
key = Settings['Label']
graph_ts(Data, Settings, Results, key)
Explanation:
Interactive Graphs. Line Graphs. One panel, detected events. Plot one time series by calling its name.
End of explanation
key = Settings['Label']
start = 550  #start time in seconds
end = 560  #end time in seconds
results_timeseries_plot(key, start, end, Data, Settings, Results)
Explanation:
Two panel. Create line plots of the raw data as well as the data analysis. Plots are saved by clicking the save button in the pop-up window with your graph. key = 'Mean1', start = 100, end = 101. Results Line Plot.
End of explanation
#autocorrelation
key = Settings['Label']
start = 0  #seconds, where you want the slice to begin
end = 1  #seconds, where you want the slice to end.
autocorrelation_plot(Data['trans'][key][start:end])
plt.show()
Explanation:
Autocorrelation. Display the Autocorrelation plot of your transformed data. Choose the start and end time in seconds. To capture the whole time series, use end = -1. May be slow. key = 'Mean1', start = 0, end = 10. Autocorrelation Plot.
End of explanation
#raster
raster(Data, Results)
Explanation:
Raster Plot. Shows the temporal relationship of peaks in each column. Auto scales. Display only. Intended for more than one column of data.
End of explanation
event_type = 'Peaks'
meas = 'Intervals'
key = Settings['Label']
frequency_plot(event_type, meas, key, Data, Settings, Results)
Explanation:
Frequency Plot. Use this block to plot changes of any measurement over time. Does not support 'all'. Example: event_type = 'Peaks', meas = 'Intervals', key = 'Mean1'. Frequency Plot.
End of explanation
#Get average plots, display only
event_type = 'Peaks'
meas = 'Intervals'
average_measurement_plot(event_type, meas, Results)
Explanation:
Analyze Events by Measurement. Generates a line plot with error bars for a given event measurement. X axis is the names of each time series. Display only. Intended for more than one column of data. This is not a box and whiskers plot. event_type = 'peaks', meas = 'Peaks Amplitude'. Analyze Events by Measurement.
End of explanation
#Batch
event_type = 'Bursts'
meas = 'Total Cycle Time'
Results = poincare_batch(event_type, meas, Data, Settings, Results)
pd.concat({'SD1': Results['Poincare SD1'], 'SD2': Results['Poincare SD2']})
Explanation:
Poincare Plots. Create a Poincare Plot of your favorite variable. Choose an event type (Peaks or Bursts) and a measurement type.
Calling meas = 'All' is supported. Plots and tables are saved automatically. Example: event_type = 'Bursts', meas = 'Burst Duration'. More on Poincare Plots. Batch Poincare.
End of explanation
#quick
event_type = 'Bursts'
meas = 'Attack'
key = Settings['Label']
poincare_plot(Results[event_type][key][meas])
Explanation:
Quick Poincare Plot. Quickly call one poincare plot for display. Plot and Table are not saved automatically. Choose an event type (Peaks or Bursts), measurement type, and key. Calling meas = 'All' is not supported. Quick Poincare.
End of explanation
Settings['PSD-Event'] = Series(index = ['Hz','ULF', 'VLF', 'LF','HF','dx'])
#Set PSD ranges for power in band
Settings['PSD-Event']['hz'] = 100 #frequency that the interpolation and PSD are performed with.
Settings['PSD-Event']['ULF'] = 1 #max of the range of the ultra low freq band. range is 0:ulf
Settings['PSD-Event']['VLF'] = 2 #max of the range of the very low freq band. range is ulf:vlf
Settings['PSD-Event']['LF'] = 5 #max of the range of the low freq band. range is vlf:lf
Settings['PSD-Event']['HF'] = 50 #max of the range of the high freq band. range is lf:hf. hf can be no more than (hz/2)
Settings['PSD-Event']['dx'] = 10 #segmentation for the area under the curve.

event_type = 'Peaks'
meas = 'Intervals'
key = Settings['Label']
scale = 'raw'
Results = psd_event(event_type, meas, key, scale, Data, Settings, Results)
Results['PSD-Event'][key]
Explanation:
Power Spectral Density. The following blocks allow you to assess the power of event measurements in the frequency domain. While you can call this block on any event measurement, it is intended to be used on interval data (or at least data with units in seconds). Recommended: event_type = 'Bursts', meas = 'Total Cycle Time', key = 'Mean1', scale = 'raw'; or event_type = 'Peaks', meas = 'Intervals', key = 'Mean1', scale = 'raw'. Because these event measurements are not evenly spaced in time, we must interpolate them onto a regular grid in order to perform an FFT. Does not support 'all'. Power Spectral Density: Events. Use the code block below to specify your settings for event measurement PSD.
End of explanation
#optional
Settings['PSD-Signal'] = Series(index = ['ULF', 'VLF', 'LF','HF','dx'])
#Set PSD ranges for power in band
Settings['PSD-Signal']['ULF'] = 25 #max of the range of the ultra low freq band. range is 0:ulf
Settings['PSD-Signal']['VLF'] = 75 #max of the range of the very low freq band. range is ulf:vlf
Settings['PSD-Signal']['LF'] = 150 #max of the range of the low freq band. range is vlf:lf
Settings['PSD-Signal']['HF'] = 300 #max of the range of the high freq band. range is lf:hf. hf can be no more than (hz/2) where hz is the sampling frequency
Settings['PSD-Signal']['dx'] = 2 #segmentation for integration of the area under the curve.
Explanation:
Time Series. Use the settings code block to set your frequency bands to calculate area under the curve. This block is not required. Band output is always in raw power, even if the graph scale is dB/Hz. Power Spectral Density: Signal.
End of explanation
scale = 'raw' #raw or db
Results = psd_signal(version = 'original', key = 'Mean1', scale = scale,
                     Data = Data, Settings = Settings, Results = Results)
Results['PSD-Signal']
Explanation:
Use the block below to generate the PSD graph and power in bands results (if selected). scale toggles which units to use for the graph: raw = s^2/Hz; db = dB/Hz = 10*log10(s^2/Hz). Graph and table are automatically saved in the PSD-Signal subfolder.
End of explanation
version = 'original'
key = Settings['Label']
spectogram(version, key, Data, Settings, Results)
Explanation:
Spectrogram. Use the block below to get the spectrogram of the signal. The frequency (y-axis) scales automatically to only show 'active' frequencies. This can take some time to run. version = 'original', key = 'Mean1'. After transformation is run, you can call version = 'trans'. This graph is not automatically saved.
End of explanation
#Moving Stats
event_type = 'Bursts'
meas = 'Total Cycle Time'
window = 30 #seconds
Results = moving_statistics(event_type, meas, window, Data, Settings, Results)
Explanation:
Descriptive Statistics. Moving/Sliding Averages, Standard Deviation, and Count. Generates the moving mean, standard deviation, and count for a given measurement across all columns of the Data in the form of a DataFrame (displayed as a table). Saves out the dataframes of these three results automatically with the window size in the name as a .csv. If meas == 'All', then the function will loop and produce these tables for all measurements.
WARNING: THIS FUNCTION RUNS SLOWLY run the below code to get the sample entropy of any measurement. Returns the entropy of the entire results array (no windowing). I am using the following M and R values: M = 2 R = 0.2*std(measurement) these values can be modified in the source code. alternatively, you can call samp_entropy directly. Supports 'all' Sample Entropy in BASS Sample Entropy Source Events End of explanation #on raw signal #takes a VERY long time version = 'original' #original, trans, shift, or rolling key = Settings['Label'] start = 0 #seconds, where you want the slice to begin end = 1 #seconds, where you want the slice to end. The absolute end is -1 samp_entropy(Data[version][key][start:end].tolist(), 2, (0.2*np.std(Data[version][key][start:end]))) Explanation: Time Series End of explanation help(moving_statistics) moving_statistics?? Explanation: Helpful Stuff While not completely up to date with some of the new changes, the Wiki can be useful if you have questions about some of the settings: https://github.com/drcgw/SWAN/wiki/Tutorial More Help? Stuck on a particular step or function? Try typing the function name followed by two ??. This will pop up the docstring and source code. You can also call help() to have the notebook print the doc string. Example: analyze?? help(analyze) End of explanation
5,104
Given the following text description, write Python code to implement the functionality described below step by step Description: GPU and CPU settings If GPU is not available, comment out the bottom block. Step1: Plot training and test accuracy
Python Code: # If GPU is not available: # GPU_USE = '/cpu:0' # config = tf.ConfigProto(device_count = {"GPU": 0}) # If GPU is available: config = tf.ConfigProto() config.log_device_placement = True config.allow_soft_placement = True config.gpu_options.allocator_type = 'BFC' # Limit the maximum memory used config.gpu_options.per_process_gpu_memory_fraction = 0.1 # set session config tf.keras.backend.set_session(tf.Session(config=config)) ########## HYPER PARAMETERS batch_size = 128 epochs = 10 optimizer = tf.keras.optimizers.RMSprop() ########## HYPER PARAMETERS ########## MODEL ARCHITECTURE model = tf.keras.Sequential() model.add(tf.keras.layers.Dense(5, activation='relu', input_shape=(784,))) model.add(tf.keras.layers.Dense(num_classes, activation='softmax')) ########## MODEL ARCHITECTURE # Print summary model.summary() # compile model for training model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy']) history = model.fit(x_train, y_train_one_hot, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test_one_hot)) Explanation: GPU and CPU settings If GPU is not available, comment out the bottom block. End of explanation # use model for inference to get test accuracy y_test_pred = model.predict(x_test) y_test_pred = np.argmax(y_test_pred, axis=1) print ('\n Summary of the precision, recall, F1 score for each class:') print (sklearn.metrics.classification_report(y_test, y_test_pred)) print ('\n Confusion matrix: ') print (sklearn.metrics.confusion_matrix(y_test, y_test_pred)) import matplotlib.pyplot as plt plt.plot(history.history['val_acc'], label="Test Accuracy") plt.plot(history.history['acc'], label="Training Accuracy") plt.legend() # save model model.save("myModel.h5") Explanation: Plot training and test accuracy End of explanation
5,105
Given the following text description, write Python code to implement the functionality described below step by step Description: Hello World! Python Workshops @ Think Coffee 3-5pm, 7/30/17 Day 3, Alice NLP generator @python script author (original content) Step1: Setting params for model setup and build. Step2: Loading and reading Alice.txt corpus, saving characters (unique alphabet and punctuation characters in corpus) in array, and making dictionary associating each character with it's position in the character array (making two dictionaries where the key and position are either the key or value) Step3: Cutting the document into semi-redundant sentences, where each element in the sentences list contain 40 sentences that overlap with the previous element's sentences (also doing a step size of 3 through each line in the text). Also, storing character in each next_chars array's elements, where the current element is the 40th character after the previous character. Step4: Making X boolean (false) array with a shape of the length of the sentences by the step (40) by the length of the unique characters/punctuation in the document. Making y boolean (false) array with a shape of the length of the sentences by the length of the unique characters/punctuation in the document. Then, going through each sentence and character in the sentence, storing a 1 (converting false to true) in the respective sentence and characters in X and y. Step5: Defining helper functions. Step6: Building text embedding matrix and RNN model. This is what differentiates this tutorial from tutorial 03. Input layer Embedding layer - with embedding matrix as weights RNN Layer - LSTM instance with 256 nodes Dense layer (2 hidden layers) Activation (softmax) layer for converting to output probability full table given below Step7: Making batchloss class for more efficient epoch training and writing. Step8: Model training. Use one epoch instead of ten.
Python Code: from __future__ import print_function from keras.models import Model from keras.layers import Dense, Activation, Embedding from keras.layers import LSTM, Input from keras.layers.merge import concatenate from keras.optimizers import RMSprop, Adam from keras.utils.data_utils import get_file from keras.layers.normalization import BatchNormalization from keras.callbacks import Callback, ModelCheckpoint from sklearn.decomposition import PCA from keras.utils import plot_model import numpy as np import random import sys import csv import os import h5py import time Explanation: Hello World! Python Workshops @ Think Coffee 3-5pm, 7/30/17 Day 3, Alice NLP generator @python script author (original content): Rahul @jupyter notebook converted tutorial author: Nick Giangreco Ntbk of python script in same directory. Building an RNN based on Lewis Carrol's Alice in Wonderland text. Importing modules End of explanation embeddings_path = "./glove.840B.300d-char.txt" # http://nlp.stanford.edu/data/glove.840B.300d.zip embedding_dim = 300 batch_size = 32 use_pca = False lr = 0.001 lr_decay = 1e-4 maxlen = 300 consume_less = 2 # 0 for cpu, 2 for gpu Explanation: Setting params for model setup and build. End of explanation text = open('./Alice.txt').read() print('corpus length:', len(text)) chars = sorted(list(set(text))) print('total chars:', len(chars)) char_indices = dict((c, i) for i, c in enumerate(chars)) indices_char = dict((i, c) for i, c in enumerate(chars)) Explanation: Loading and reading Alice.txt corpus, saving characters (unique alphabet and punctuation characters in corpus) in array, and making dictionary associating each character with it's position in the character array (making two dictionaries where the key and position are either the key or value) End of explanation # cut the text in semi-redundant sequences of maxlen characters step = 3 sentences = [] next_chars = [] for i in range(0, len(text) - maxlen, step): sentences.append(text[i: i + maxlen]) next_chars.append(text[i + maxlen]) print('nb sequences:', len(sentences)) Explanation: Cutting the document into semi-redundant sentences, where each element in the sentences list contain 40 sentences that overlap with the previous element's sentences (also doing a step size of 3 through each line in the text). Also, storing character in each next_chars array's elements, where the current element is the 40th character after the previous character. End of explanation print('Vectorization...') X = np.zeros((len(sentences), maxlen), dtype=np.int) y = np.zeros((len(sentences), len(chars)), dtype=np.bool) for i, sentence in enumerate(sentences): for t, char in enumerate(sentence): X[i, t] = char_indices[char] y[i, char_indices[next_chars[i]]] = 1 Explanation: Making X boolean (false) array with a shape of the length of the sentences by the step (40) by the length of the unique characters/punctuation in the document. Making y boolean (false) array with a shape of the length of the sentences by the length of the unique characters/punctuation in the document. Then, going through each sentence and character in the sentence, storing a 1 (converting false to true) in the respective sentence and characters in X and y. 
End of explanation # test code to sample on 10% for functional model testing def random_subset(X, y, p=0.1): idx = np.random.randint(X.shape[0], size=int(X.shape[0] * p)) X = X[idx, :] y = y[idx] return (X, y) # https://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html def generate_embedding_matrix(embeddings_path): print('Processing pretrained character embeds...') embedding_vectors = {} with open(embeddings_path, 'r') as f: for line in f: line_split = line.strip().split(" ") vec = np.array(line_split[1:], dtype=float) char = line_split[0] embedding_vectors[char] = vec embedding_matrix = np.zeros((len(chars), 300)) #embedding_matrix = np.random.uniform(-1, 1, (len(chars), 300)) for char, i in char_indices.items(): #print ("{}, {}".format(char, i)) embedding_vector = embedding_vectors.get(char) if embedding_vector is not None: embedding_matrix[i] = embedding_vector # Use PCA from sklearn to reduce 300D -> 50D if use_pca: pca = PCA(n_components=embedding_dim) pca.fit(embedding_matrix) embedding_matrix_pca = np.array(pca.transform(embedding_matrix)) embedding_matrix_result = embedding_matrix_pca print (embedding_matrix_pca) print (embedding_matrix_pca.shape) else: embedding_matrix_result = embedding_matrix return embedding_matrix_result def sample(preds, temperature=1.0): # helper function to sample an index from a probability array preds = np.asarray(preds).astype('float64') preds = np.log(preds + 1e-6) / temperature exp_preds = np.exp(preds) preds = exp_preds / np.sum(exp_preds) probas = np.random.multinomial(1, preds, 1) return np.argmax(probas) Explanation: Defining helper functions. End of explanation print('Build model...') main_input = Input(shape=(maxlen,)) embedding_matrix = generate_embedding_matrix(embeddings_path) embedding_layer = Embedding( len(chars), embedding_dim, input_length=maxlen, weights=[embedding_matrix]) # embedding_layer = Embedding( # len(chars), embedding_dim, input_length=maxlen) embedded = embedding_layer(main_input) # RNN Layer rnn = LSTM(256, implementation=consume_less)(embedded) aux_output = Dense(len(chars))(rnn) aux_output = Activation('softmax', name='aux_out')(aux_output) # Hidden Layers hidden_1 = Dense(512, use_bias=False)(rnn) hidden_1 = BatchNormalization()(hidden_1) hidden_1 = Activation('relu')(hidden_1) hidden_2 = Dense(256, use_bias=False)(hidden_1) hidden_2 = BatchNormalization()(hidden_2) hidden_2 = Activation('relu')(hidden_2) main_output = Dense(len(chars))(hidden_2) main_output = Activation('softmax', name='main_out')(main_output) model = Model(inputs=main_input, outputs=[main_output, aux_output]) optimizer = Adam(lr=lr, decay=lr_decay) model.compile(loss='categorical_crossentropy', optimizer=optimizer, loss_weights=[1., 0.2]) model.summary() #plot_model(model, to_file='model.png', show_shapes=True) if not os.path.exists('./output'): os.makedirs('./output') f = open('./log.csv', 'w') log_writer = csv.writer(f) log_writer.writerow(['iteration', 'batch', 'batch_loss', 'epoch_loss', 'elapsed_time']) checkpointer = ModelCheckpoint( "./output/model.hdf5", monitor='main_out_loss', save_best_only=True) Explanation: Building text embedding matrix and RNN model. This is what differentiates this tutorial from tutorial 03. 
Input layer Embedding layer - with embedding matrix as weights RNN Layer - LSTM instance with 256 nodes Dense layer (2 hidden layers) Activation (softmax) layer for converting to output probability full table given below End of explanation class BatchLossLogger(Callback): def on_epoch_begin(self, epoch, logs={}): self.losses = [] def on_batch_end(self, batch, logs={}): self.losses.append(logs.get('main_out_loss')) if batch % 50 == 0: log_writer.writerow([iteration, batch, logs.get('main_out_loss'), np.mean(self.losses), round(time.time() - start_time, 2)]) Explanation: Making batchloss class for more efficient epoch training and writing. End of explanation ep = 1 start_time = time.time() for iteration in range(1, 20): print() print('-' * 50) print('Iteration', iteration) logger = BatchLossLogger() # X_train, y_train = random_subset(X, y) # history = model.fit(X_train, [y_train, y_train], batch_size=batch_size, # epochs=1, callbacks=[logger, checkpointer]) history = model.fit(X, [y, y], batch_size=batch_size, epochs=ep, callbacks=[logger, checkpointer]) loss = str(history.history['main_out_loss'][-1]).replace(".", "_") f2 = open('./output/iter-{:02}-{:.6}.txt'.format(iteration, loss), 'w') start_index = random.randint(0, len(text) - maxlen - 1) for diversity in [0.2, 0.5, 1.0, 1.2]: print() print('----- diversity:', diversity) f2.write('----- diversity:' + ' ' + str(diversity) + '\n') generated = '' sentence = text[start_index: start_index + maxlen] generated += sentence print('----- Generating with seed: "' + sentence + '"') f2.write('----- Generating with seed: "' + sentence + '"' + '\n---\n') sys.stdout.write(generated) for i in range(1200): x = np.zeros((1, maxlen), dtype=np.int) for t, char in enumerate(sentence): x[0, t] = char_indices[char] preds = model.predict(x, verbose=0)[0][0] next_index = sample(preds, diversity) next_char = indices_char[next_index] generated += next_char sentence = sentence[1:] + next_char sys.stdout.write(next_char) sys.stdout.flush() f2.write(generated + '\n') print() f2.close() # Write embeddings for current characters to file # The second layer has the embeddings. embedding_weights = model.layers[1].get_weights()[0] f3 = open('./output/char-embeddings.txt', 'w') for char in char_indices: if ord(char) < 128: embed_vector = embedding_weights[char_indices[char], :] f3.write(char + " " + " ".join(str(x) for x in embed_vector) + "\n") f3.close() f.close() Explanation: Model training. Use one epoch instead of ten. End of explanation
5,106
Given the following text description, write Python code to implement the functionality described below step by step Description: Tutorial 4 Step1: Plotting the results
Python Code: import logging import sys import espressomd import espressomd.accumulators import espressomd.lb import espressomd.observables logging.basicConfig(level=logging.INFO, stream=sys.stdout) # Constants LOOPS = 40000 STEPS = 10 # System setup system = espressomd.System(box_l=[16] * 3) system.time_step = 0.01 system.cell_system.skin = 0.4 lbf = espressomd.lb.LBFluidGPU(kT=1, seed=132, agrid=1, dens=1, visc=5, tau=0.01) system.actors.add(lbf) system.part.add(pos=[0, 0, 0]) # Setup observable pos = espressomd.observables.ParticlePositions(ids=(0,)) # Run for different friction constants lb_gammas = [1.0, 2.0, 4.0, 10.0] msd_results = [] for gamma in lb_gammas: system.auto_update_accumulators.clear() system.thermostat.turn_off() system.thermostat.set_lb(LB_fluid=lbf, seed=123, gamma=gamma) # Equilibrate logging.info("Equilibrating the system.") system.integrator.run(1000) logging.info("Equilibration finished.") # Setup observable correlator correlator = espressomd.accumulators.Correlator(obs1=pos, tau_lin=16, tau_max=LOOPS * STEPS, delta_N=1, corr_operation="square_distance_componentwise", compress1="discard1") system.auto_update_accumulators.add(correlator) logging.info("Sampling started for gamma = {}.".format(gamma)) for i in range(LOOPS): system.integrator.run(STEPS) correlator.finalize() msd_results.append(correlator.result()) logging.info("Sampling finished.") Explanation: Tutorial 4 : The lattice-Boltzmann method in ESPResSo - Part 2 5 Polymer Diffusion In these exercises we want to use the LBM-MD-Hybrid to reproduce a classic result of polymer physics: the dependence of the diffusion coefficient of a polymer on its chain length. If no hydrodynamic interactions are present, one expects a scaling law $D \propto N ^{- 1}$ and if they are present, a scaling law $D \propto N^{- \nu}$ is expected. Here $\nu$ is the Flory exponent that plays a very prominent role in polymer physics. It has a value of $\sim 3/5$ in good solvent conditions in 3D. Discussions on these scaling laws can be found in polymer physics textbooks like [4–6]. The reason for the different scaling law is the following: when being transported, every monomer creates a flow field that follows the direction of its motion. This flow field makes it easier for other monomers to follow its motion. This makes a polymer (given it is sufficiently long) diffuse more like a compact object including the fluid inside it, although it does not have clear boundaries. It can be shown that its motion can be described by its hydrodynamic radius. It is defined as: \begin{equation} \left\langle \frac{1}{R_h} \right\rangle = \left\langle \frac{1}{N^2}\sum_{i\neq j} \frac{1}{\left| r_i - r_j \right|} \right\rangle \end{equation} This hydrodynamic radius exhibits the scaling law $R_h \propto N^{\nu}$ and the diffusion coefficient of a long polymer is proportional to its inverse $R_h$. For shorter polymers there is a transition region. It can be described by the Kirkwood-Zimm model: \begin{equation} D=\frac{D_0}{N} + \frac{k_B T}{6 \pi \eta } \left\langle \frac{1}{R_h} \right\rangle \end{equation} Here $D_0$ is the monomer diffusion coefficient and $\eta$ the viscosity of the fluid. For a finite system size the second part of the diffusion is subject of a $1/L$ finite size effect, because hydrodynamic interactions are proportional to the inverse distance and thus long ranged. 
It can be taken into account by a correction: \begin{equation} D=\frac{D_0}{N} + \frac{k_B T}{6 \pi \eta } \left\langle \frac{1}{R_h} \right\rangle \left( 1- \left\langle\frac{R_h}{L} \right\rangle \right) \end{equation} It is quite difficult to prove this formula computationally with good accuracy. It will need quite some computational effort and a careful analysis. So please don't be too disappointed if you don't manage to do so. We want to determine the long-time self diffusion coefficient from the mean square displacement of the center-of-mass a single polymer. For large $t$ the mean square displacement is proportional to the time and the diffusion coefficient occurs as a prefactor: \begin{equation} D = \lim_{t\to\infty}\left[ \frac{1}{6t} \left\langle \left(\vec{r}(t) - \vec{r}(0)\right)^2 \right\rangle \right]. \end{equation} This equation can be found in virtually any simulation textbook, like [7]. We will therefore set up a polymer in an LB fluid, simulate for an appropriate amount of time, calculate the mean square displacement as a function of time and obtain the diffusion coefficient from a linear fit. However we will have a couple of steps inbetween and divide the full problem into subproblems that allow to (hopefully) fully understand the process. 5.1 Step 1: Diffusion of a single particle Our first step is to investigate the diffusion of a single particle that is coupled to an LB fluid by the point coupling method: End of explanation %matplotlib notebook import matplotlib.pyplot as plt plt.rcParams.update({'font.size': 22}) plt.figure(figsize=(10,10)) plt.xlabel('tau [$\Delta t$]') plt.ylabel('msd [$\sigma^2$]') for index, r in enumerate(msd_results): # adding up componentwise MSDs # We skip the first entry since it's zero by definition and cannot be displayed # in a loglog plot. Furthermore, we only look at the first 100 entries due to # lack of good statistics for larger times. msd = r[1:100, 2] + r[1:100, 3] + r[1:100, 4] plt.loglog(r[1:100, 0], msd, label=r"$\gamma=${}".format(str(lb_gammas[index]))) plt.legend() plt.show() Explanation: Plotting the results End of explanation
5,107
Given the following text description, write Python code to implement the functionality described below step by step Description: Pagination and encapsulation Step1: Path parameters OAS3 allows to specify path parameters Step2: Exercise Step3: Exercise Step4: Now run the spec in a terminal using cd /code/notebooks/oas3/ connexion run /code/notebooks/oas3/ex-08-pagination-ok.yaml
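Before working through the cells below, it may help to see the shape of the handler these exercises build towards. This is only a sketch under assumptions: the operationId get_timezones and the defaults (5 entries by default, at most 10) come from the exercise text, while everything else is illustrative:

import pytz

DEFAULT_LIMIT, MAX_LIMIT = 5, 10  # limits stated in the exercise

def get_timezones(limit=DEFAULT_LIMIT, offset=0):
    # Clamp the requested page size so a client cannot exhaust the server.
    limit = min(limit, MAX_LIMIT)
    entries = pytz.all_timezones[offset:offset + limit]
    # Responses are enveloped in an extensible json object, never a bare array.
    return {"entries": list(entries), "count": len(entries), "offset": offset}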
Python Code: # Use this cell to test the output !curl http://localhost:5000/datetime/v1/timezones -vk Explanation: Pagination and encapsulation: In our standardization policy, we defined a common set of pagination parameters. Moreover we stated that responses should always be enclosed in json objects, eg: always return something that is extensible like ``` GET /timezones { "entries": [ "you", "can", "always", "add", "new", "keys" ] } ``` don't return string, number or array, because ``` GET /dont-do-this [ "you can't", "extend them"] ``` Support pagination in get /timezones We want to provide a /timezones path listing all the timezones supported by get /echo Our goal is the following: when invoking /datetime/v1/timezones the API will return the supported timezones in pytz.all_timezones; to limit resource consumption the server will return: by default 5 entries at most 10 entries the response is enveloped in the following example json object: { "entries": [ "Europe/Rome", "UTC", .. ], "count": 5, "offset": 10 } the status code for a successful response is 200 Remember: pagination should be implemented using a common set of parameters to use. Our choice is: limit: max number of entries to retrieve offset: the number of entries to skip in a paginated request cursor: an identifier (cursor) of the first entry to be returned Slack APIs provide an example of cursor-based pagination Exercise: write /timezone specs Edit the ex-08-pagination-ok.yaml and write the timezones specifications: 1- define the new Timezones schema to be used in the response; 2- define the new /timezones path supporting the get method: always write proper summary and description fields 3- get /timezones possible responses are: 200 returning a Timezones in json format, with a complete example for mocks 429 and 503 returning a problem+json 4- this operation is not authenticated 5- don't forget operationId: get_timezones ! Hint: feel free to reuse as much yaml code as possible. Exercise: test /timezones mocks Run your spec in the terminal and check that it properly returns the mock objects. Use connexion run --mock all /code/notebooks/oas3/ex-08-pagination-ok.yaml End of explanation # Use this cell to test your api Explanation: Path parameters OAS3 allows to specify path parameters: in the path, with braces eg. {continent} in parameters with the remaining details. REMEMBER: path parameters are always required so you must define a new path and a new operationId: get_timezones_by_continent paths: /timezones/{continent}: get: ... parameters: - name: continent in: path required: true schema: type: string /timezones: ... definition without path-parameters ... Exercise: adding path parameter to /timezones Let's add a continent path parameter to /timezones: create a #/components/parameters/continent_path parameter definition; add the continent_path query parameter to get /timezones/{continent} path checking the official OAS 3.0.2 documentation add a 404 Not Found response in case the continent is not present Finally, check that you can run the spec. 
connexion run --mock all /code/notebooks/oas3/ex-08-pagination-ok.yaml End of explanation # Check that default works !curl http://localhost:5000/datetime/v1/timezones -kv Explanation: Exercise: implement get_timezones operations Let's implement the get_timezones operation in api.py such that: is throttled takes the limit and offset parameters; returns a Timezones object containing all the timezones in pytz.all_timezones between offset and offset+limit End of explanation ### Exercise solution !grep '^def get_timezones' oas3/api-solution.py -A20 -B1 Explanation: Exercise: implement get_timezones_by_continent operations Let's implement the get_timezones_by_continent operation in api.py such that: extends get_timezones behavior; returns a Timezones object containing all the timezones in the given continent, eg: Europe -> [ "Europe/London", "Europe/Rome", ... ] End of explanation render_markdown(f''' Play a bit with the [Swagger UI]({api_server_url('ui')}) and try making a request! ''') # Use out-of-bound offset !curl http://localhost:5000/datetime/v1/timezones?offset=800 -kv # Pick in the middle !curl 'http://localhost:5000/datetime/v1/timezones?offset=450&limit=2' -kv # Pick in the middle !curl 'http://localhost:5000/datetime/v1/timezones/Europe?limit=2' -kv Explanation: Now run the spec in a terminal using cd /code/notebooks/oas3/ connexion run /code/notebooks/oas3/ex-08-pagination-ok.yaml End of explanation
5,108
Given the following text description, write Python code to implement the functionality described below step by step Description: Step5: Project on frame buckling Define a new class Frame_Buckling as a child of the class LinearFrame provided in frame.py. Add in this class methods to extract the normal stress $\bar N_0$ in each bar and to assemble the geometric stiffness $G$. Solve the buckling problem for a simply supported straight beam using the Frame_Buckling class and compare with the analytical solution (see e.g. cours/TD 3A103) when using different number of element. Solve the buckling problem for the frame of the Exercise 2 in the notebook 02-LinearFrame. Compare with the experimental findings on the first buckling load and mode. Represent the first 2 buckling modes. Propose an improved geometry of the frame of the Exercise 2 in the notebook 02-LinearFrame to increase the buckling load. Support your proposal with numerical results. (For classes in python you can look https Step6: Essai sur exercice 2 Step7: Essai sur d'autres structures Structure avec des piliers au milieu Step8: Structure avec deux forces et sans étage renforcé Step9: Structure croisée
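As a reference point for the second task of this project, the analytical first buckling load of a simply supported column is the Euler load $P_{cr} = \pi^2 EI / L^2$ (see the 3A103 notes cited above). A short hedged sketch of the comparison the task asks for, with variable names mirroring the cells below:

import numpy as np

def euler_load(E, I, L):
    # First buckling load of a simply supported (pinned-pinned) column.
    return np.pi**2 * E * I / L**2

# Hypothetical usage once the generalized eigenproblem has been solved and
# val[0] holds the smallest numerical buckling load:
# rel_err = abs(val[0] - euler_load(E, I, L)) / euler_load(E, I, L)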
Python Code: %matplotlib inline from sympy.interactive import printing printing.init_printing() from frame import * import sympy as sp import numpy as np import scipy.sparse as sparse import scipy.sparse.linalg as linalg class Frame_Buckling(LinearFrame): def N_local_stress(self,element): Returns the normal forces of an element. Ke= self.K_local() Ue= self.U_e_local_coord(element) F=Ke*Ue N_local = F[3] return N_local def N_local_stress_tot(self): Returns the normal force of all elements. Ns=[self.N_local_stress(e)for e in range (self.nelements)] return Ns def G_local(self): Returns the global geometric stiffness matrix L = sp.Symbol('L') s = sp.Symbol('s') S=self.S() Ge=sp.Matrix([[sp.integrate(S[1,i_local].diff(s)*S[1,j_local].diff(s),(s,0,L) )for i_local in range(6)] for j_local in range(6)]) return Ge def G_local_rotated(self): Gives the analytical expression the local geometric stiffness matrix in the global coordinate system as a function of the orientation angle alpha alpha = sp.Symbol("alpha") R = self.rotation_matrix(alpha) Ge = R.transpose()*self.G_local()*R return Ge def assemble_G(self): Returns the global stiffness matrix Ge = self.G_local_rotated() G = np.zeros([self.ndof,self.ndof]) N0=self.N_local_stress_tot() for e in range(self.nelements): Gen = -N0[e].subs({'EI': self.EI[e], 'ES': self.ES[e], 'L': self.Ls[e], 'alpha': self.angles[e]})*Ge.subs({'EI': self.EI[e], 'ES': self.ES[e], 'L': self.Ls[e], 'alpha': self.angles[e]}) for i_local in range(6): for j_local in range(6): G[self.dof_map(e, i_local),self.dof_map(e, j_local)] += Gen[i_local,j_local] return G def bc_apply_G(self,G,blocked_dof): for (dof) in enumerate(blocked_dof): Gbc = G Gbc[dof, :] = 0 Gbc[:, dof] = 0 Gbc[dof, dof] = 1 return Gbc def full_power_method(A, niterations_max=50, tol=1e-15): xn = np.zeros((len(A), niterations_max+1)) xn[:, 0] = np.ones((len(A),)) + 1e-7*np.random.rand(len(A)) rn = np.ones((niterations_max+1,)) for k in range(niterations_max): xn[:,k] = xn[:,k] / np.linalg.norm(xn[:,k]) xn[:,k+1] = np.dot(A, xn[:,k]) rn[k+1] = np.sum(xn[:,k+1])/np.sum(xn[:,k]) if (abs(rn[k+1]-rn[k]) < tol): break if k < niterations_max: rn[k+2:] = rn[k+1] # This ensures the later values are set to something sensible. return (rn[k+1], rn, xn[:,k+1]/ np.linalg.norm(xn[:,k+1])) def inverse_power_method(A, niterations_max=50, tol=1e-15): xn = np.zeros((len(A), niterations_max+1)) xn[:, 0] = np.ones((len(A),)) + 1e-7*np.random.rand(len(A)) rn = np.ones((niterations_max+1,)) for k in range(niterations_max): xn[:,k] = xn[:,k] / np.linalg.norm(xn[:,k]) xn[:,k+1] = np.linalg.solve(A, xn[:,k]) rn[k+1] = np.sum(xn[:,k+1])/np.sum(xn[:,k]) if (abs(rn[k+1]-rn[k]) < tol): break if k < niterations_max: rn[k+2:] = rn[k+1] # This ensures the later values are set to something sensible. return (1.0/rn[k+1], 1.0/rn, xn[:,k+1]/ np.linalg.norm(xn[:,k+1])) E=1.3 #en MPa h=7.5 #en mm b=20. #en mm Lx=55. #en mm Lyh=60. #en mm Lyb=45. 
#en mm I=b*(h**3)/12 #en mm^4 S=b*h #en mm^2 eps=10**(-3) g=9.81 #en m.s^(-2) m=1 #en kg n_elements = 10 xnodes = np.linspace(0,1000,n_elements + 1) ynodes = np.linspace(0,0,n_elements + 1) nodes = np.array([xnodes,ynodes]).transpose() n_nodes = xnodes.size elements=np.array([[0,1],[1,2],[2,3],[3,4],[4,5],[5,6],[6,7],[7,8],[8,9],[9,10]]) frame= Frame_Buckling(nodes,elements) frame.plot_with_label() ne = frame.nelements ndof = frame.ndof EI = np.ones(ne)*E*I ES = np.ones(ne)*E*S f_x = 0*np.ones(ne) f_y = 0*np.ones(ne) frame.set_distributed_loads(f_x, f_y) frame.set_stiffness(EI, ES) blocked_dof = np.array([0, 1, ndof-2]) bc_values = np.array([0, 0, 0]) K = frame.assemble_K() F=frame.assemble_F() #F[12]=F[12]-.5*EI[0]*np.pi**2 F[ndof-3]=F[ndof-3]-1. Kbc, Fbc = frame.bc_apply(K, F, blocked_dof, bc_values) Usol = np.linalg.solve(Kbc,Fbc) Usol frame.set_displacement(Usol) frame.plot_with_label() frame.plot_displaced() Gbc=frame.assemble_G() G=frame.bc_apply_G(Gbc,blocked_dof) Ks = sparse.csr_matrix(K) Gs = sparse.csr_matrix(G) val, vect = linalg.eigsh(Ks, 5, Gs, which = 'LA', sigma =4.) print(val) frame.set_displacement(10*vect[:,0]) frame.plot_with_label() frame.plot_displaced() E*I*np.pi**2/1000**2 Explanation: Project on frame buckling Define a new class Frame_Buckling as a child of the class LinearFrame provided in frame.py. Add in this class methods to extract the normal stress $\bar N_0$ in each bar and to assemble the geometric stiffness $G$. Solve the buckling problem for a simply supported straight beam using the Frame_Buckling class and compare with the analytical solution (see e.g. cours/TD 3A103) when using different number of element. Solve the buckling problem for the frame of the Exercise 2 in the notebook 02-LinearFrame. Compare with the experimental findings on the first buckling load and mode. Represent the first 2 buckling modes. Propose an improved geometry of the frame of the Exercise 2 in the notebook 02-LinearFrame to increase the buckling load. Support your proposal with numerical results. (For classes in python you can look https://en.wikibooks.org/wiki/A_Beginner%27s_Python_Tutorial/Classes, for example) End of explanation E=1.3 #en MPa h=7.5 #en mm b=20. #en mm Lx=55. #en mm Lyh=60. #en mm Lyb=45. #en mm I=b*(h**3)/12 #en mm^4 S=b*h #en mm^2 eps=10**(-3) g=9.81 #en m.s^(-2) m=0.05 #en kg nodes= np.array([[0.,0.],[0.,Lyb],[0.,Lyh+Lyb],[Lx/2,Lyh+Lyb],[Lx,Lyh+Lyb],[Lx,Lyb],[Lx,0.]]) elements=np.array([[0,1],[1,5],[1,2],[2,3],[3,4],[4,5],[5,6]]) frame= Frame_Buckling(nodes,elements) frame.plot_with_label() ne = frame.nelements ndof = frame.ndof EI = np.ones(ne)*E*I ES = np.ones(ne)*E*S EI[1]=100*E*I;EI[3]=100*E*I;EI[4]=100*E*I ES[1]=100*E*S;ES[3]=100*E*S;ES[4]=100*E*S f_x = 0*np.ones(7) f_y = 0*np.ones(7) frame.set_distributed_loads(f_x, f_y) frame.set_stiffness(EI, ES) blocked_dof = np.array([0, 1, 2, ndof-3, ndof-2, ndof-1]) bc_values = np.array([0, 0, 0, 0, 0, 0]) K = frame.assemble_K() F=frame.assemble_F() #F[10]=F[10]-.5*EI[0]*np.pi**2/(Lyb+Lyh)**2 F[10]=F[10]-1. Kbc, Fbc = frame.bc_apply(K, F, blocked_dof, bc_values) Usol = np.linalg.solve(Kbc,Fbc) Usol frame.set_displacement(Usol) Ge=frame.N_local_stress_tot() Gbc=frame.assemble_G() G=frame.bc_apply_G(Gbc,blocked_dof) Ks = sparse.csr_matrix(K) Gs = sparse.csr_matrix(G) val, vect = linalg.eigsh(Ks, 3, Gs, which = 'LA', sigma = 3.) 
print(val) print(vect[:,0]) frame.set_displacement(1*vect[:,0]) frame.plot_with_label() frame.plot_displaced() Explanation: Essai sur exercice 2 End of explanation nodes= np.array([[0.,0.],[0.,Lyb],[0.,Lyh+Lyb],[Lx/2,Lyh+Lyb],[Lx,Lyh+Lyb],[Lx,Lyb],[Lx/2,Lyh/2+Lyb],[Lx,0.]]) elements=np.array([[0,1],[1,5],[1,2],[2,3],[3,4],[4,5],[1,6],[2,6],[4,6],[5,6],[5,7]]) frame= Frame_Buckling(nodes,elements) frame.plot_with_label() ne = frame.nelements ndof = frame.ndof EI = np.ones(ne)*E*I ES = np.ones(ne)*E*S EI[1]=100*E*I;EI[3]=100*E*I;EI[4]=100*E*I ES[1]=100*E*S;ES[3]=100*E*S;ES[4]=100*E*S f_x = 0*np.ones(ne) f_y = 0*np.ones(ne) frame.set_distributed_loads(f_x, f_y) frame.set_stiffness(EI, ES) blocked_dof = np.array([0, 1, 2, ndof-3, ndof-2, ndof-1]) bc_values = np.array([0, 0, 0, 0, 0, 0]) K = frame.assemble_K() F=frame.assemble_F() #F[10]=F[10]-.5*EI[0]*np.pi**2/(Lyb+Lyh)**2 F[10]=F[10]-1. Kbc, Fbc = frame.bc_apply(K, F, blocked_dof, bc_values) Usol = np.linalg.solve(Kbc,Fbc) frame.set_displacement(Usol) Gbc=frame.assemble_G() G=frame.bc_apply_G(Gbc,blocked_dof) Ks = sparse.csr_matrix(K) Gs = sparse.csr_matrix(G) val, vect = linalg.eigsh(Ks, 3, Gs, which = 'LA', sigma =4.) print(val) print(vect[:,1]) frame.set_displacement(1*vect[:,0]) frame.plot_with_label() frame.plot_displaced() Explanation: Essai sur d'autres structures Structure avec des piliers au milieu : End of explanation nodes= np.array([[0.,0.],[0.,Lyb],[0.,Lyh+Lyb],[Lx/2,Lyh+Lyb],[Lx,Lyh+Lyb],[Lx,Lyb],[Lx,0.]]) elements=np.array([[0,1],[1,5],[1,2],[2,3],[3,4],[4,5],[5,6]]) frame= Frame_Buckling(nodes,elements) frame.plot_with_label() ne = frame.nelements ndof = frame.ndof EI = np.ones(ne)*E*I ES = np.ones(ne)*E*S f_x = 0*np.ones(7) f_y = 0*np.ones(7) frame.set_distributed_loads(f_x, f_y) frame.set_stiffness(EI, ES) blocked_dof = np.array([0, 1, 2, ndof-3, ndof-2, ndof-1]) bc_values = np.array([0, 0, 0, 0, 0, 0]) K = frame.assemble_K() F=frame.assemble_F() #F[7]=F[7]-.5*EI[0]*np.pi**2/(Lyb+Lyh)**2 #F[13]=F[13]-.5*EI[0]*np.pi**2/(Lyb+Lyh)**2 F[7]=F[7]-1. F[13]=F[13]-1. Kbc, Fbc = frame.bc_apply(K, F, blocked_dof, bc_values) Usol = np.linalg.solve(Kbc,Fbc) Usol frame.set_displacement(Usol) Gbc=frame.assemble_G() G=frame.bc_apply_G(Gbc,blocked_dof) Ks = sparse.csr_matrix(K) Gs = sparse.csr_matrix(G) val, vect = linalg.eigsh(Ks, 6, Gs, which = 'LA', sigma =1.2) print(val) print(vect[:,0]) frame.set_displacement(1*vect[:,0]) frame.plot_with_label() frame.plot_displaced() Explanation: Structure avec deux forces et sans étage renforcé : End of explanation nodes= np.array([[0.,0.],[0.,Lyb],[0.,Lyh+Lyb],[Lx/2,Lyh+Lyb],[Lx,Lyh+Lyb],[Lx,Lyb],[Lx,0.]]) elements=np.array([[0,1],[1,5],[1,2],[2,3],[3,4],[4,5],[2,5],[0,5],[5,6]]) frame= Frame_Buckling(nodes,elements) frame.plot_with_label() ne = frame.nelements ndof = frame.ndof EI = np.ones(ne)*E*I ES = np.ones(ne)*E*S f_x = 0*np.ones(ne) f_y = 0*np.ones(ne) frame.set_distributed_loads(f_x, f_y) frame.set_stiffness(EI, ES) blocked_dof = np.array([0, 1, 2, ndof-3, ndof-2, ndof-1]) bc_values = np.array([0, 0, 0, 0, 0, 0]) K = frame.assemble_K() F=frame.assemble_F() #F[7]=F[7]-.5*EI[0]*np.pi**2/(Lyb+Lyh)**2 #F[13]=F[13]-.5*EI[0]*np.pi**2/(Lyb+Lyh)**2 F[10]=F[10]-1. Kbc, Fbc = frame.bc_apply(K, F, blocked_dof, bc_values) Usol = np.linalg.solve(Kbc,Fbc) Usol frame.set_displacement(Usol) Gbc=frame.assemble_G() G=frame.bc_apply_G(Gbc,blocked_dof) Ks = sparse.csr_matrix(K) Gs = sparse.csr_matrix(G) val, vect = linalg.eigsh(Ks, 3, Gs, which = 'LA', sigma =9.) 
print(val) frame.set_displacement(1*vect[:,0]) frame.plot_with_label() frame.plot_displaced() Explanation: Structure croisée : End of explanation
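A small addendum for the last project task (improving the geometry): since every variant above repeats the same assembly steps, a helper that scores candidate element layouts by their first buckling load keeps the comparison systematic. This is only a sketch reusing the Frame_Buckling class defined above, with the candidate geometry left to the reader:

def first_buckling_load(nodes, elements, EI, ES, blocked_dof, loaded_dof, sigma=1.0):
    # Assemble the stiffness matrices for one candidate geometry and
    # return the smallest buckling load factor (unit load applied).
    frame = Frame_Buckling(nodes, elements)
    ne, ndof = frame.nelements, frame.ndof
    frame.set_distributed_loads(np.zeros(ne), np.zeros(ne))
    frame.set_stiffness(EI, ES)
    K = frame.assemble_K()
    F = frame.assemble_F()
    F[loaded_dof] -= 1.0
    Kbc, Fbc = frame.bc_apply(K, F, blocked_dof, np.zeros(len(blocked_dof)))
    frame.set_displacement(np.linalg.solve(Kbc, Fbc))
    G = frame.bc_apply_G(frame.assemble_G(), blocked_dof)
    # Using the boundary-condition-applied matrices mirrors the cells above.
    val, _ = linalg.eigsh(sparse.csr_matrix(Kbc), 1, sparse.csr_matrix(G),
                          which='LA', sigma=sigma)
    return val[0]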
5,109
Given the following text description, write Python code to implement the functionality described below step by step Description: Teorema de Norton Jupyter Notebook desenvolvido por Gustavo S.S. O teorema de Norton afirma que um circuito linear de dois terminais pode ser substituído por um circuito equivalente formado por uma fonte de corrente IN em paralelo com um resistor RN, em que IN é a corrente de curto- -circuito através dos terminais e RN é a resistência de entrada ou equivalente nos terminais quando as fontes independentes forem desligadas. \begin{align} {\Large R_{Th} = R_N} \end{align} Para descobrir a corrente IN de Norton, determinamos a corrente de curto--circuito que flui entre os terminais a e b em ambos os circuitos da Figura 4.37. Os circuitos equivalentes de Thévenin e de Norton estão relacionados por uma transformação de fontes. \begin{align} {\Large I_N = \frac{V_{Th}}{R_{Th}}} \end{align} Exemplo 4.11 Determine o circuito equivalente de Norton do circuito da Figura 4.39 nos terminais a-b Step1: Problema Prático 4.11 Determine o equivalente de Norton para o circuito da Figura 4.42 nos terminais a-b. Step2: Exemplo 4.12 Usando o teorema de Norton, determine RN e IN do circuito da Figura 4.43 nos terminais a-b. Step3: Problema Prático 4.12 Determine o circuito equivalente de Norton do circuito da Figura 4.45 nos terminais a-b. Step4: Máxima Transferência de Potência Para um dado circuito, VTh e RTh são fixas. Variando a resistência de carga RL, a potência liberada à carga varia conforme descrito na Figura 4.49. Percebemos, dessa figura, que a potência é pequena para valores pequenos ou grandes de RL, mas máxima para o mesmo valor de RL entre 0 e ∞. \begin{align} {\Large P = i^2R_L = (\frac{V_{Th}}{R_{Th} + R_L})^2R_L} \end{align} A potência máxima é transferida a uma carga quando a resistência de carga for igual à resistência de Thévenin quando vista da carga (RL = RTh). A fonte e a carga são ditas casadas quando RL = RTh. Assim, a potência máxima para elementos casados é Step5: Problema Prático 4.13 Determine o valor de RL que irá drenar a potência máxima do restante do circuito na Figura 4.52. Calcule a potência máxima.
Python Code: print("Exemplo 4.11") #Superposicao #Analise Fonte de Tensao #Req1 = 4 + 8 + 8 = 20 #i1 = 12/20 = 3/5 A #Analise Fonte de Corrente #i2 = 2*4/(4 + 8 + 8) = 8/20 = 2/5 A #in = i1 + i2 = 1A In = 1 #Req2 = paralelo entre Req 1 e 5 #20*5/(20 + 5) = 100/25 = 4 Rn = 4 print("Corrente In:",In,"A") print("Resistência Rn:",Rn) Explanation: Teorema de Norton Jupyter Notebook desenvolvido por Gustavo S.S. O teorema de Norton afirma que um circuito linear de dois terminais pode ser substituído por um circuito equivalente formado por uma fonte de corrente IN em paralelo com um resistor RN, em que IN é a corrente de curto- -circuito através dos terminais e RN é a resistência de entrada ou equivalente nos terminais quando as fontes independentes forem desligadas. \begin{align} {\Large R_{Th} = R_N} \end{align} Para descobrir a corrente IN de Norton, determinamos a corrente de curto--circuito que flui entre os terminais a e b em ambos os circuitos da Figura 4.37. Os circuitos equivalentes de Thévenin e de Norton estão relacionados por uma transformação de fontes. \begin{align} {\Large I_N = \frac{V_{Th}}{R_{Th}}} \end{align} Exemplo 4.11 Determine o circuito equivalente de Norton do circuito da Figura 4.39 nos terminais a-b End of explanation print("Problema Prático 4.11") #Analise Vs #i1 = 15/(3 + 3) = 15/6 A #Analise Cs #i2 = 4*3/(3 + 3) = 2 A #in = i1 + i2 = 15/6 + 2 = 27/6 = 4.5 In = 4.5 #Rn = 6*6/(6 + 6) = 3 Rn = 3 print("Corrente In:",In,"A") print("Resistência Rn:",Rn) Explanation: Problema Prático 4.11 Determine o equivalente de Norton para o circuito da Figura 4.42 nos terminais a-b. End of explanation print("Exemplo 4.12") #Aplica-se tensao Vo = 1V entre os terminais a-b #Assim Rth = Rn = 5 Rn = 5 #Analise Nodal #ix = 10/4 = 2.5 A #i1 = 10/5 = 2 A #in = 2ix + i1 = 5 + 2 = 7 A In = 7 print("Corrente In:",In,"A") print("Resistência Rn:",Rn) Explanation: Exemplo 4.12 Usando o teorema de Norton, determine RN e IN do circuito da Figura 4.43 nos terminais a-b. End of explanation print("Problema Prático 4.12") #Aplica-se Vo = 2V entre os terminais a-b #Assim Vx = 2V #Analise Nodal #V1 = tensao sobre resistor 6 = 3Vx = 6V #i1 = 1 A #i2 = 2/2 = 1 A #io = i1 + i2 = 2 A #Rn = Vo/io = 2/2 = 1 Rn = 1 #In = 10 = corrente de curto circuito In = 10 print("Resistência Rn:",Rn) print("Corrente In:",In,"A") Explanation: Problema Prático 4.12 Determine o circuito equivalente de Norton do circuito da Figura 4.45 nos terminais a-b. End of explanation print("Exemplo 4.13") #Req1 = 6*12/(6 + 12) = 4 #Rn = 4 + 3 + 2 = 9 Rn = 9 #Superposicao #Fonte de Corrente #i1 = 2*7/(7 + 2) = 14/9 #Fonte de Tensao #Req2 = 12*5/(12 + 5) = 60/17 #Req3 = 6 + 60/17 = 162/17 #it = 12/(162/17) = 12*17/162 #i2 = it*12/(12 + 5) = 8/9 #in = i1 + i2 = 14/9 + 8/9 = 22/9 In = 22/9 P = (Rn/4)*In**2 print("Corrente In:",In,"A") print("Potência Máxima Transferida:",P,"W") Explanation: Máxima Transferência de Potência Para um dado circuito, VTh e RTh são fixas. Variando a resistência de carga RL, a potência liberada à carga varia conforme descrito na Figura 4.49. Percebemos, dessa figura, que a potência é pequena para valores pequenos ou grandes de RL, mas máxima para o mesmo valor de RL entre 0 e ∞. \begin{align} {\Large P = i^2R_L = (\frac{V_{Th}}{R_{Th} + R_L})^2R_L} \end{align} A potência máxima é transferida a uma carga quando a resistência de carga for igual à resistência de Thévenin quando vista da carga (RL = RTh). A fonte e a carga são ditas casadas quando RL = RTh. 
Thus, the maximum power for matched elements is:

\begin{align} {\Large P_{max} = \frac{V_{Th}^2}{4R_{Th}}} \end{align}

Example 4.13
Find the value of RL for maximum power transfer in the circuit of Figure 4.50, and determine the maximum power.
End of explanation
print("Practice Problem 4.13")
import numpy as np

# Analysis for In
# vx = 2i1
# vx + (i1 - i2) + 3vx = 9
# i1 - i2 + 4vx = 9
# 9i1 - i2 = 9
# (i2 - i1) + 4i2 = 3vx
# -i1 + 5i2 = 6i1
# -7i1 + 5i2 = 0
coef = np.matrix("9 -1;-7 5")
res = np.matrix("9;0")
I = np.linalg.inv(coef)*res
In = -I[1]
# Analysis for Rn
# io = 1 A
# vx = 2i1
# vx + (i1 + io) + 3vx = 0
# i1 + 4vx = -1
# i1 + 8i1 = -1
# i1 = -1/9
# vx = -2/9
# Vab = 4io + (io + i1) + 3vx
# Vab = 4 + 1 - 1/9 - 6/9 = 38/9
# Rn = Vab/io = 38/9
Rn = 38/9
P = (Rn/4) * In**2
print("Resistance Rl for maximum power:", Rn)
print("Maximum power:", float(P), "W")
Explanation: Practice Problem 4.13
Find the value of RL that will draw the maximum power from the rest of the circuit in Figure 4.52. Calculate the maximum power.
End of explanation
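As a quick cross-check of the Norton/Thevenin relationship $I_N = V_{Th}/R_{Th}$ and of the matched-load formula used above, here is a short hedged sketch with the numbers from Example 4.13 (In = 22/9 A, Rn = 9):

In, Rn = 22/9, 9           # Norton equivalent from Example 4.13
Vth = In * Rn              # source transformation: Vth = In * Rn
P_max = Vth**2 / (4 * Rn)  # maximum power transferred to a matched load RL = Rth
print(P_max)               # equals (Rn/4)*In**2 computed above, about 13.44 W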
5,110
Given the following text description, write Python code to implement the functionality described below step by step Description: Lyzenga Method I want to apply the Lyzenga 2006 method for comparison. Step1: Preprocessing That happened here. Step2: Depth Limit Lyzenga et al methods for determining shallow water don't work for me based on the high reflectance of the water column and extremely low reflectance of Ecklonia for the blue bands. So I'm just going to limit the depths under consideration using the multibeam data. Step3: Equalize Masks I need to make sure I'm dealing with the same pixels in depth and image data. Step4: Dark Pixel Subtraction I need to calculate a modified version of $X_i = ln(L_i - L_{si})$. In order to do that I'll first load the deep water means and standard deviations I calculated here. Step5: I applied the same modification as Armstrong (1993), 2 standard deviations from $L_{si}$, to avoid getting too many negative values because those can't be log transformed. Step6: I'll need to equalize the masks again. I'll call the depths h in reference to Lyzenga et al. 2006 (e.g. equation 14). Step7: Dataframe Put my $X_i$ and my $h$ values into a dataframe so I can regress them easily. Step8: Data Split I need to split my data into training and test sets. Step9: Find the Best Band Combo That's the one that returns the largest $R^2$ value. Step10: Build the model Step11: Check the Results Step12: Effect of Depth Limit on Model Accuracy Given a fixed number of training points (n=1500), what is the effect of limiting the depth of the model. Step13: Limited Training Data I want to see how the accuracy of this method is affected by the reduction of training data. Step14: Full Prediction Perform a prediction on all the data and find the errors. Save the outputs for comparison with KNN.
Python Code: %pylab inline import geopandas as gpd import pandas as pd from OpticalRS import * from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error from sklearn.cross_validation import train_test_split import itertools import statsmodels.formula.api as smf from collections import OrderedDict style.use('ggplot') cd ../data Explanation: Lyzenga Method I want to apply the Lyzenga 2006 method for comparison. End of explanation imrds = RasterDS('glint_corrected.tif') imarr = imrds.band_array deprds = RasterDS('Leigh_Depth_atAcq_Resampled.tif') darr = -1 * deprds.band_array.squeeze() Explanation: Preprocessing That happened here. End of explanation darr = np.ma.masked_greater( darr, 20.0 ) Explanation: Depth Limit Lyzenga et al methods for determining shallow water don't work for me based on the high reflectance of the water column and extremely low reflectance of Ecklonia for the blue bands. So I'm just going to limit the depths under consideration using the multibeam data. End of explanation imarr = ArrayUtils.mask3D_with_2D( imarr, darr.mask ) darr = np.ma.masked_where( imarr[...,0].mask, darr ) Explanation: Equalize Masks I need to make sure I'm dealing with the same pixels in depth and image data. End of explanation dwmeans = np.load('darkmeans.pkl') dwstds = np.load('darkstds.pkl') Explanation: Dark Pixel Subtraction I need to calculate a modified version of $X_i = ln(L_i - L_{si})$. In order to do that I'll first load the deep water means and standard deviations I calculated here. End of explanation dpsub = ArrayUtils.equalize_band_masks( \ np.ma.masked_less( imarr - (dwmeans - 2 * dwstds), 0.0 ) ) print "After that I still retain %.1f%% of my pixels." % ( 100 * dpsub.count() / float( imarr.count() ) ) X = np.log( dpsub ) # imrds.new_image_from_array(X.astype('float32'),'LyzengaX.tif') Explanation: I applied the same modification as Armstrong (1993), 2 standard deviations from $L_{si}$, to avoid getting too many negative values because those can't be log transformed. End of explanation h = np.ma.masked_where( X[...,0].mask, darr ) imshow( X[...,1] ) Explanation: I'll need to equalize the masks again. I'll call the depths h in reference to Lyzenga et al. 2006 (e.g. equation 14). End of explanation df = ArrayUtils.band_df( X ) df['depth'] = h.compressed() Explanation: Dataframe Put my $X_i$ and my $h$ values into a dataframe so I can regress them easily. End of explanation x_train, x_test, y_train, y_test = train_test_split( \ df[imrds.band_names],df.depth,train_size=300000,random_state=5) traindf = ArrayUtils.band_df( x_train ) traindf['depth'] = y_train.ravel() testdf = ArrayUtils.band_df( x_test ) testdf['depth'] = y_test.ravel() Explanation: Data Split I need to split my data into training and test sets. End of explanation def get_fit( ind, x_train, y_train ): skols = LinearRegression() skolsfit = skols.fit(x_train[...,ind],y_train) return skolsfit def get_selfscore( ind, x_train, y_train ): fit = get_fit( ind, x_train, y_train ) return fit.score( x_train[...,ind], y_train ) od = OrderedDict() for comb in itertools.combinations( range(8), 2 ): od[ get_selfscore(comb,x_train,y_train) ] = [ c+1 for c in comb ] od_sort = sorted( od.items(), key=lambda t: t[0], reverse=True ) od_sort best_ind = np.array( od_sort[0][1] ) - 1 best_ind Explanation: Find the Best Band Combo That's the one that returns the largest $R^2$ value. 
End of explanation skols = LinearRegression() skolsfit = skols.fit(x_train[...,best_ind],y_train) print "h0 = %.2f, h2 = %.2f, h3 = %.2f" % \ (skolsfit.intercept_,skolsfit.coef_[0],skolsfit.coef_[1]) Explanation: Build the model End of explanation print "R^2 = %.6f" % skolsfit.score(x_test[...,best_ind],y_test) pred = skolsfit.predict(x_test[...,best_ind]) fig,ax = plt.subplots(1,1,figsize=(8,6)) mapa = ax.hexbin(pred,y_test,mincnt=1,bins='log',gridsize=500,cmap=plt.cm.hot) # ax.scatter(pred,y_test,alpha=0.008,edgecolor='none') ax.set_ylabel('MB Depth') ax.set_xlabel('Predicted Depth') rmse = np.sqrt( mean_squared_error( y_test, pred ) ) n = x_train.shape[0] tit = "RMSE: %.4f, n=%i" % (rmse,n) ax.set_title(tit) ax.set_aspect('equal') ax.axis([-5,25,-5,25]) ax.plot([-5,25],[-5,25],c='white') cb = plt.colorbar(mapa) cb.set_label("Log10(N)") LyzPredVsMB = pd.DataFrame({'prediction':pred,'mb_depth':y_test}) LyzPredVsMB.to_pickle('LyzPredVsMB.pkl') Explanation: Check the Results End of explanation fullim = imrds.band_array fulldep = -1 * deprds.band_array.squeeze() fullim = ArrayUtils.mask3D_with_2D( fullim, fulldep.mask ) fulldep = np.ma.masked_where( fullim[...,0].mask, fulldep ) dlims = arange(5,31,2.5) drmses,meanerrs,stderrs = [],[],[] for dl in dlims: dlarr = np.ma.masked_greater( fulldep, dl ) iml = ArrayUtils.mask3D_with_2D( fullim, dlarr.mask ) imldsub = ArrayUtils.equalize_band_masks( \ np.ma.masked_less( iml - (dwmeans - 2 * dwstds), 0.0 ) ) imlX = np.log( imldsub ) dlarr = np.ma.masked_where( imlX[...,0].mask, dlarr ) xl_train, xl_test, yl_train, yl_test = train_test_split( \ imlX.compressed().reshape(-1,8),dlarr.compressed(),train_size=1500,random_state=5) linr = LinearRegression() predl = linr.fit(xl_train[...,best_ind],yl_train).predict( xl_test[...,best_ind] ) drmses.append( sqrt( mean_squared_error(yl_test,predl) ) ) meanerrs.append( (yl_test - predl).mean() ) stderrs.append( (yl_test - predl).std() ) fig,(ax1,ax2) = subplots(1,2,figsize=(12,6)) ax1.plot(dlims,np.array(drmses),marker='o',c='b') ax1.set_xlabel("Data Depth Limit (m)") ax1.set_ylabel("Model RMSE (m)") em,es = np.array(meanerrs), np.array(stderrs) ax2.plot(dlims,em,marker='o',c='b') ax2.plot(dlims,em+es,linestyle='--',c='k') ax2.plot(dlims,em-es,linestyle='--',c='k') ax2.set_xlabel("Data Depth Limit (m)") ax2.set_ylabel("Model Mean Error (m)") deplimdf = pd.DataFrame({'depth_lim':dlims,'rmse':drmses,\ 'mean_error':meanerrs,'standard_error':stderrs}) deplimdf.to_pickle('LyzengaDepthLimitDF.pkl') Explanation: Effect of Depth Limit on Model Accuracy Given a fixed number of training points (n=1500), what is the effect of limiting the depth of the model. 
End of explanation # ns = np.logspace(log10(0.00003*df.depth.count()),log10(0.80*df.depth.count()),15) int(ns.min()),int(ns.max()) ns = np.logspace(1,log10(0.80*df.depth.count()),15) ltdf = pd.DataFrame({'train_size':ns}) for rs in range(10): nrmses = [] for n in ns: xn_train,xn_test,yn_train,yn_test = train_test_split( \ df[imrds.band_names],df.depth,train_size=int(n),random_state=rs+100) thisols = LinearRegression() npred = thisols.fit(xn_train[...,best_ind],yn_train).predict(xn_test[...,best_ind]) nrmses.append( sqrt( mean_squared_error(yn_test,npred ) ) ) dflabel = 'rand_state_%i' % rs ltdf[dflabel] = nrmses print "min points: %i, max points: %i" % (int(ns.min()),int(ns.max())) fig,ax = subplots(1,1,figsize=(10,6)) for rs in range(10): dflabel = 'rand_state_%i' % rs ax.plot(ltdf['train_size'],ltdf[dflabel]) ax.set_xlabel("Number of Training Points") ax.set_ylabel("Model RMSE (m)") # ax.set_xlim(0,5000) ax.set_xscale('log') ax.set_title("Rapidly Increasing Accuracy With More Training Data") ltdf.to_pickle('LyzengaAccuracyDF.pkl') Explanation: Limited Training Data I want to see how the accuracy of this method is affected by the reduction of training data. End of explanation full_pred = skolsfit.predict(X[...,best_ind]) full_pred = np.ma.masked_where( h.mask, full_pred ) full_errs = full_pred - h blah = hist( full_errs.compressed(), 100 ) figure(figsize=(12,11)) vmin,vmax = np.percentile(full_errs.compressed(),0.1),np.percentile(full_errs.compressed(),99.9) imshow( full_errs, vmin=vmin, vmax=vmax ) ax = gca() ax.set_axis_off() ax.set_title("Depth Errors (m)") colorbar() full_pred.dump('LyzDepthPred.pkl') full_errs.dump('LyzDepthPredErrs.pkl') Explanation: Full Prediction Perform a prediction on all the data and find the errors. Save the outputs for comparison with KNN. End of explanation
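Since the predictions and errors are pickled above for the later KNN comparison, a minimal hedged sketch of reloading them in another session (file names as saved above, assuming the same ../data working directory) is:

import numpy as np
import pandas as pd

# ndarray.dump writes pickle files, so np.load needs allow_pickle=True.
full_pred = np.load('LyzDepthPred.pkl', allow_pickle=True)
full_errs = np.load('LyzDepthPredErrs.pkl', allow_pickle=True)
pred_vs_mb = pd.read_pickle('LyzPredVsMB.pkl')
# Recompute the test-set RMSE from the saved dataframe as a sanity check.
print(float(np.sqrt((pred_vs_mb.prediction - pred_vs_mb.mb_depth).pow(2).mean())))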
5,111
Given the following text description, write Python code to implement the functionality described below step by step Description: Exploring netCDF Datasets Using the xarray Package This notebook provides discussion, examples, and best practices for working with netCDF datasets in Python using the xarray package. Topics include Step1: Note that we alias numpy to np and xarray to xr so that we don't have to type as much. xarray provides a open_dataset function that allows us to load a netCDF dataset into a Python data structure by simply passing in a file path/name, or an ERDDAP server URL and dataset ID. Let's explore the Salish Sea NEMO model bathymetry data Step2: See the Exploring netCDF Datasets from ERDDAP notebook for more information about ERDDAP dataset URLs. We could have opened the same dataset from a local file with Step3: Printing the string respresentation of the ds data structure that open_dataset() returns gives us lots of information about the dataset and its metadata Step4: open_dataset() returns an xarray.Dataset object that is xarray’s multi-dimensional equivalent of a pandas.DataFrame. It is a dict-like container of labeled arrays (DataArray objects) with aligned dimensions. It is designed as an in-memory representation of the data model from the netCDF file format. Dataset objects have four key properties Step5: So, we have a dataset that has 2 dimensions called gridX and gridY of size 398 and 898, respectively, 3 variables called longitude, latitude, and bathymetry, and 2 coordinates with the same names as the dimensions, gridX and gridY. The xarray docs have a good explanation and a diagram about the distinction between coordinates and data variables. If you are already familiar with working with netCDF datasets via the netCDF4-python package, you will note that the dims and data_vars attributes provide similar information to that produced by functions in the SalishSeaTools.nc_tools module. xarray provides a higher level Python interface to datasets. We'll see how the dimensions and variables are related, and how to work with the data in the variables in a moment, but first, let's look at the dataset attributes Step6: Dataset attributes are metadata. They tell us about the dataset as a whole Step7: This tells us a whole lot of useful information about the longitude data values in our bathymetry dataset, for instance Step8: Dataset variables are xarray.DataArray objects. In addition to their attributes, they carry a bunch of other useful properties and methods that you can read about in the xarray docs. Perhaps most importantly the data associated with the variables are stored as NumPy arrays. So, we can use NumPy indexing and slicing to access the data values. For instance, to get the latitudes and longitudes of the 4 corners of the domain Step9: You can also access the entire variable data array, or subsets of it using slicing.
Python Code: import numpy as np import xarray as xr Explanation: Exploring netCDF Datasets Using the xarray Package This notebook provides discussion, examples, and best practices for working with netCDF datasets in Python using the xarray package. Topics include: The xarray package Reading netCDF datasets into Python data structures Exploring netCDF dataset dimensions, variables, and attributes Working with netCDF variable data as NumPy arrays This notebook is a companion to the Exploring netCDF Files and Exploring netCDF Datasets from ERDDAP notebooks. Those notebooks focus on using the netcdf4-python package to read netCDF datasets from local files and ERDDAP servers on the Internet, respectively. This notebook is about using the xarray package to work with netCDF datasets. xarray uses the netcdf4-python package behind the scenes, so datasets can be read from either local files or from ERDDAP servers. xarray is a Python package that applies the concepts and tools for working with labeled data structures from the pandas package to the physical sciences. Whereas pandas excels at manipulating tablular data, xarray brings similar power to working with N-dimensional arrays. If you are already familiar with working with netCDF datasets via the netCDF4-python package, you can think of xarray as a higher level Python tools for working with those dataset. Creating netCDF files and working with their attribute metadata is documented elsewhere: http://salishsea-meopar-docs.readthedocs.org/en/latest/code-notes/salishsea-nemo/nemo-forcing/netcdf4.html. This notebook assumes that you are working in Python 3. If you don't have a Python 3 environment set up, please see our Anaconda Python Distribution docs for instructions on how to set one up. xarray and some of the packages that it depends on are not included in the default Anaconda collection of packages, so you may need to installed them explicitly: $ conda install xarray netCDF4 bottleneck bottleneck is a package that speeds up NaN-skipping and rolling window aggregations. If you are using a version of Python earlier than 3.5 (check with the command python --version), you should also install cyordereddict to speed internal operations with xarray data structures. It is not required for Python ≥3.5 because collections.OrderedDict has been rewritten in C, making it even faster than cyordereddict. Let's start with some imports. It's good Python form to keep all of our imports at the top of the file. End of explanation ds = xr.open_dataset('https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSnBathymetry2V1') Explanation: Note that we alias numpy to np and xarray to xr so that we don't have to type as much. xarray provides a open_dataset function that allows us to load a netCDF dataset into a Python data structure by simply passing in a file path/name, or an ERDDAP server URL and dataset ID. Let's explore the Salish Sea NEMO model bathymetry data: End of explanation lds = xr.open_dataset('../../NEMO-forcing/grid/bathy_meter_SalishSea2.nc') Explanation: See the Exploring netCDF Datasets from ERDDAP notebook for more information about ERDDAP dataset URLs. 
We could have opened the same dataset from a local file with: ds = xr.open_dataset('../../NEMO-forcing/grid/bathy_meter_SalishSea2.nc') End of explanation print(ds) Explanation: Printing the string respresentation of the ds data structure that open_dataset() returns gives us lots of information about the dataset and its metadata: End of explanation lds ds.dims ds.data_vars ds.coords Explanation: open_dataset() returns an xarray.Dataset object that is xarray’s multi-dimensional equivalent of a pandas.DataFrame. It is a dict-like container of labeled arrays (DataArray objects) with aligned dimensions. It is designed as an in-memory representation of the data model from the netCDF file format. Dataset objects have four key properties: dims: a dictionary mapping from dimension names to the fixed length of each dimension (e.g., {'x': 6, 'y': 6, 'time': 8}) data_vars: a dict-like container of DataArrays corresponding to variables coords: another dict-like container of DataArrays intended to label points used in data_vars (e.g., arrays of numbers, datetime objects or strings) attrs: an OrderedDict to hold arbitrary metadata Let's look at them one at a time: End of explanation ds.attrs Explanation: So, we have a dataset that has 2 dimensions called gridX and gridY of size 398 and 898, respectively, 3 variables called longitude, latitude, and bathymetry, and 2 coordinates with the same names as the dimensions, gridX and gridY. The xarray docs have a good explanation and a diagram about the distinction between coordinates and data variables. If you are already familiar with working with netCDF datasets via the netCDF4-python package, you will note that the dims and data_vars attributes provide similar information to that produced by functions in the SalishSeaTools.nc_tools module. xarray provides a higher level Python interface to datasets. We'll see how the dimensions and variables are related, and how to work with the data in the variables in a moment, but first, let's look at the dataset attributes: End of explanation ds.longitude Explanation: Dataset attributes are metadata. They tell us about the dataset as a whole: how, when, and by whom it was created, how it has been modified, etc. The meanings of the various attributes and the conventions for them that we use in the Salish Sea MEOPAR project are documented elsewhere. Variables also have attributes : End of explanation ds.bathymetry.units, ds.bathymetry.long_name Explanation: This tells us a whole lot of useful information about the longitude data values in our bathymetry dataset, for instance: They are 32-bit floating point values They are associated with the gridY and gridX dimensions, in that order The units are degrees measured eastward (from the Greenwich meridian) etc. We can access the attributes of the dataset variables using dotted notation: End of explanation ds.latitude.shape print('Latitudes and longitudes of domain corners:') pt = (0, 0) print(' 0, 0: ', ds.latitude.values[pt], ds.longitude.values[pt]) pt = (0, ds.latitude.shape[1] - 1) print(' 0, x-max: ', ds.latitude.values[pt], ds.longitude.values[pt]) pt = (ds.latitude.shape[0] - 1, 0) print(' y-max, 0: ', ds.latitude.values[pt], ds.longitude.values[pt]) pt = (ds.latitude.shape[0] - 1, ds.longitude.shape[1] - 1) print(' y-max, x-max:', ds.latitude.values[pt], ds.longitude.values[pt]) Explanation: Dataset variables are xarray.DataArray objects. 
In addition to their attributes, they carry a bunch of other useful properties and methods that you can read about in the xarray docs. Perhaps most importantly, the data associated with the variables are stored as NumPy arrays, so we can use NumPy indexing and slicing to access the data values. For instance, to get the latitudes and longitudes of the 4 corners of the domain:
End of explanation
ds.latitude.values
ds.longitude.values[42:45, 128:135]
ds.longitude.values[:2, :2], ds.latitude.values[-2:, -2:]
Explanation: You can also access the entire variable data array, or subsets of it, using slicing.
End of explanation
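Beyond plain NumPy slicing, an xarray Dataset also supports position- and label-based selection on its named dimensions. A small sketch using the same ds as above (the index values are chosen arbitrarily for illustration):

# Position-based selection with isel; the dimension names come from ds.dims.
subset = ds.bathymetry.isel(gridX=slice(100, 120), gridY=slice(400, 420))
print(subset.shape)

# Coordinate-based selection with sel works the same way on the coordinates.
print(ds.bathymetry.sel(gridX=200, gridY=500).values)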
5,112
Given the following text description, write Python code to implement the functionality described below step by step Description: The Sequential model is a linear stack of layers. You can create a Sequential model by passing a list of layer instances to the constructor Step1: You can also simply add layers via the .add() method Step2: Specifying the input shape The model needs to know what input shape it should expect. For this reason, the first layer in a Sequential model (and only the first, because following layers can do automatic shape inference) needs to receive information about its input shape. There are several possible ways to do this Step3: Compilation Before training a model, you need to configure the learning process, which is done via the compile method. It receives three arguments Step4: Training Keras models are trained on Numpy arrays of input data and labels. For training a model, you will typically use the fit function. Read its documentation here. Step5: Examples Here are a few examples to get you started! Multilayer Perceptron (MLP) for multi-class softmax classification Step6: MLP for binary classification
Python Code: from keras.models import Sequential from keras.layers import Dense, Activation model = Sequential([ Dense(32, input_shape=(784,)), Activation('relu'), Dense(10), Activation('softmax'), ]) Explanation: The Sequential model is a linear stack of layers. You can create a Sequential model by passing a list of layer instances to the constructor: End of explanation model = Sequential() model.add(Dense(32, input_dim=784)) model.add(Activation('relu')) Explanation: You can also simply add layers via the .add() method: End of explanation model = Sequential() model.add(Dense(32, input_shape=(784,))) model = Sequential() model.add(Dense(32, input_dim=784)) Explanation: Specifying the input shape The model needs to know what input shape it should expect. For this reason, the first layer in a Sequential model (and only the first, because following layers can do automatic shape inference) needs to receive information about its input shape. There are several possible ways to do this: Pass an input_shape argument to the first layer. This is a shape tuple (a tuple of integers or None entries, where None indicates that any positive integer may be expected). In input_shape, the batch dimension is not included. Some 2D layers, such as Dense, support the specification of their input shape via the argument input_dim, and some 3D temporal layers support the arguments input_dim and input_length. If you ever need to specify a fixed batch size for your inputs (this is useful for stateful recurrent networks), you can pass a batch_size argument to a layer. If you pass both batch_size=32 and input_shape=(6, 8) to a layer, it will then expect every batch of inputs to have the batch shape (32, 6, 8). As such, the following snippets are strictly equivalent: End of explanation # For a multi-class classification problem model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) # For a binary classification problem model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy']) # For a mean squared error regression problem model.compile(optimizer='rmsprop', loss='mse') # For custom metrics import keras.backend as K def mean_pred(y_true, y_pred): return K.mean(y_pred) model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy', mean_pred]) Explanation: Compilation Before training a model, you need to configure the learning process, which is done via the compile method. It receives three arguments: An optimizer. This could be the string identifier of an existing optimizer (such as rmsprop or adagrad), or an instance of the Optimizer class. See: optimizers. A loss function. This is the objective that the model will try to minimize. It can be the string identifier of an existing loss function (such as categorical_crossentropy or mse), or it can be an objective function. See: losses. A list of metrics. For any classification problem you will want to set this to metrics=['accuracy']. A metric could be the string identifier of an existing metric or a custom metric function. 
End of explanation # For a single-input model with 2 classes (binary classification): model = Sequential() model.add(Dense(32, activation='relu', input_dim=100)) model.add(Dense(1, activation='sigmoid')) model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy']) # Generate dummy data import numpy as np data = np.random.random((1000, 100)) labels = np.random.randint(2, size=(1000, 1)) # Train the model, iterating on the data in batches of 32 samples model.fit(data, labels, epochs=10, batch_size=32) # For a single-input model with 10 classes (categorical classification): model = Sequential() model.add(Dense(32, activation='relu', input_dim=100)) model.add(Dense(10, activation='softmax')) model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) # Generate dummy data import numpy as np data = np.random.random((1000, 100)) labels = np.random.randint(10, size=(1000, 1)) # Convert labels to categorical one-hot encoding one_hot_labels = keras.utils.to_categorical(labels, num_classes=10) # Train the model, iterating on the data in batches of 32 samples model.fit(data, one_hot_labels, epochs=10, batch_size=32) Explanation: Training Keras models are trained on Numpy arrays of input data and labels. For training a model, you will typically use the fit function. Read its documentation here. End of explanation import keras from keras.models import Sequential from keras.layers import Dense, Dropout, Activation from keras.optimizers import SGD # Generate dummy data import numpy as np x_train = np.random.random((1000, 20)) y_train = keras.utils.to_categorical(np.random.randint(10, size=(1000, 1)), num_classes=10) x_test = np.random.random((100, 20)) y_test = keras.utils.to_categorical(np.random.randint(10, size=(100, 1)), num_classes=10) model = Sequential() # Dense(64) is a fully-connected layer with 64 hidden units. # in the first layer, you must specify the expected input data shape: # here, 20-dimensional vectors. model.add(Dense(64, activation='relu', input_dim=20)) model.add(Dropout(0.5)) model.add(Dense(64, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(10, activation='softmax')) sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True) model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy']) model.fit(x_train, y_train, epochs=20, batch_size=128) score = model.evaluate(x_test, y_test, batch_size=128) Explanation: Examples Here are a few examples to get you started! Multilayer Perceptron (MLP) for multi-class softmax classification: End of explanation import numpy as np from keras.models import Sequential from keras.layers import Dense, Dropout # Generate dummy data x_train = np.random.random((1000, 20)) y_train = np.random.randint(2, size=(1000, 1)) x_test = np.random.random((100, 20)) y_test = np.random.randint(2, size=(100, 1)) model = Sequential() model.add(Dense(64, input_dim=20, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(64, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy']) model.fit(x_train, y_train, epochs=20, batch_size=128) score = model.evaluate(x_test, y_test, batch_size=128) Explanation: MLP for binary classification: End of explanation
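After compiling and fitting, the same Sequential models can be used for inference. A hedged sketch following the binary-classification example above:

# Probabilities for unseen samples; threshold at 0.5 for hard labels.
probs = model.predict(x_test, batch_size=128)
labels = (probs > 0.5).astype('int32')
print(labels[:5].ravel())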
5,113
Given the following text description, write Python code to implement the functionality described below step by step Description: Getting started with mne.Report This tutorial covers making interactive HTML summaries with Step1: Before getting started with Step2: This report yields a textual summary of the Step3: The sample dataset also contains SSP projectors stored as individual files. To add them to a report, we also have to provide the path to a file containing an ~mne.Info dictionary, from which the channel locations can be read. Step4: This time we'll pass a specific subject and subjects_dir (even though there's only one subject in the sample dataset) and remove our render_bem=False parameter so we can see the MRI slices, with BEM contours overlaid on top if available. Since this is computationally expensive, we'll also pass the mri_decim parameter for the benefit of our documentation servers, and skip processing the Step5: Now let's look at how Step6: You have probably noticed that the EEG recordings look particularly odd. This is because by default, ~mne.Report does not apply baseline correction before rendering evoked data. So if the dataset you wish to add to the report has not been baseline-corrected already, you can request baseline correction here. The MNE sample dataset we're using in this example has not been baseline-corrected; so let's do this now for the report! To request baseline correction, pass a baseline argument to ~mne.Report, which should be a tuple with the starting and ending time of the baseline period. For more details, see the documentation on ~mne.Evoked.apply_baseline. Here, we will apply baseline correction for a baseline period from the beginning of the time interval to time point zero. Step7: To render whitened Step8: If you want to actually view the noise covariance in the report, make sure it is captured by the pattern passed to Step9: Adding custom plots to a report The python interface has greater flexibility compared to the command line interface &lt;mne report&gt;. For example, custom plots can be added via the Step10: Managing report sections The MNE report command internally manages the sections so that plots belonging to the same section are rendered consecutively. Within a section, the plots are ordered in the same order that they were added using the Step11: This allows the possibility of multiple scripts adding figures to the same report. To make this even easier,
Python Code: import os import mne Explanation: Getting started with mne.Report This tutorial covers making interactive HTML summaries with :class:mne.Report. :depth: 2 As usual we'll start by importing the modules we need and loading some example data &lt;sample-dataset&gt;: End of explanation path = mne.datasets.sample.data_path(verbose=False) report = mne.Report(verbose=True) report.parse_folder(path, pattern='*raw.fif', render_bem=False) report.save('report_basic.html', overwrite=True) Explanation: Before getting started with :class:mne.Report, make sure the files you want to render follow the filename conventions defined by MNE: .. cssclass:: table-bordered .. rst-class:: midvalign ============== ============================================================== Data object Filename convention (ends with) ============== ============================================================== raw -raw.fif(.gz), -raw_sss.fif(.gz), -raw_tsss.fif(.gz), _meg.fif events -eve.fif(.gz) epochs -epo.fif(.gz) evoked -ave.fif(.gz) covariance -cov.fif(.gz) SSP projectors -proj.fif(.gz) trans -trans.fif(.gz) forward -fwd.fif(.gz) inverse -inv.fif(.gz) ============== ============================================================== Alternatively, the dash - in the filename may be replaced with an underscore _. Basic reports The basic process for creating an HTML report is to instantiate the :class:~mne.Report class, then use the :meth:~mne.Report.parse_folder method to select particular files to include in the report. Which files are included depends on both the pattern parameter passed to :meth:~mne.Report.parse_folder and also the subject and subjects_dir parameters provided to the :class:~mne.Report constructor. .. sidebar: Viewing the report On successful creation of the report, the :meth:~mne.Report.save method will open the HTML in a new tab in the browser. To disable this, use the open_browser=False parameter of :meth:~mne.Report.save. For our first example, we'll generate a barebones report for all the :file:.fif files containing raw data in the sample dataset, by passing the pattern *raw.fif to :meth:~mne.Report.parse_folder. We'll omit the subject and subjects_dir parameters from the :class:~mne.Report constructor, but we'll also pass render_bem=False to the :meth:~mne.Report.parse_folder method — otherwise we would get a warning about not being able to render MRI and trans files without knowing the subject. End of explanation pattern = 'sample_audvis_filt-0-40_raw.fif' report = mne.Report(raw_psd=True, projs=True, verbose=True) report.parse_folder(path, pattern=pattern, render_bem=False) report.save('report_raw_psd.html', overwrite=True) Explanation: This report yields a textual summary of the :class:~mne.io.Raw files selected by the pattern. For a slightly more useful report, we'll ask for the power spectral density of the :class:~mne.io.Raw files, by passing raw_psd=True to the :class:~mne.Report constructor. We'll also visualize the SSP projectors stored in the raw data's ~mne.Info dictionary by setting projs=True. 
Lastly, let's also refine our pattern to select only the filtered raw recording (omitting the unfiltered data and the empty-room noise recordings): End of explanation info_fname = os.path.join(path, 'MEG', 'sample', 'sample_audvis_filt-0-40_raw.fif') pattern = 'sample_audvis_*proj.fif' report = mne.Report(info_fname=info_fname, verbose=True) report.parse_folder(path, pattern=pattern, render_bem=False) report.save('report_proj.html', overwrite=True) Explanation: The sample dataset also contains SSP projectors stored as individual files. To add them to a report, we also have to provide the path to a file containing an ~mne.Info dictionary, from which the channel locations can be read. End of explanation subjects_dir = os.path.join(path, 'subjects') report = mne.Report(subject='sample', subjects_dir=subjects_dir, verbose=True) report.parse_folder(path, pattern='', mri_decim=25) report.save('report_mri_bem.html', overwrite=True) Explanation: This time we'll pass a specific subject and subjects_dir (even though there's only one subject in the sample dataset) and remove our render_bem=False parameter so we can see the MRI slices, with BEM contours overlaid on top if available. Since this is computationally expensive, we'll also pass the mri_decim parameter for the benefit of our documentation servers, and skip processing the :file:.fif files: End of explanation pattern = 'sample_audvis-no-filter-ave.fif' report = mne.Report(verbose=True) report.parse_folder(path, pattern=pattern, render_bem=False) report.save('report_evoked.html', overwrite=True) Explanation: Now let's look at how :class:~mne.Report handles :class:~mne.Evoked data (we'll skip the MRIs to save computation time). The following code will produce butterfly plots, topomaps, and comparisons of the global field power (GFP) for different experimental conditions. End of explanation baseline = (None, 0) pattern = 'sample_audvis-no-filter-ave.fif' report = mne.Report(baseline=baseline, verbose=True) report.parse_folder(path, pattern=pattern, render_bem=False) report.save('report_evoked_baseline.html', overwrite=True) Explanation: You have probably noticed that the EEG recordings look particularly odd. This is because by default, ~mne.Report does not apply baseline correction before rendering evoked data. So if the dataset you wish to add to the report has not been baseline-corrected already, you can request baseline correction here. The MNE sample dataset we're using in this example has not been baseline-corrected; so let's do this now for the report! To request baseline correction, pass a baseline argument to ~mne.Report, which should be a tuple with the starting and ending time of the baseline period. For more details, see the documentation on ~mne.Evoked.apply_baseline. Here, we will apply baseline correction for a baseline period from the beginning of the time interval to time point zero. End of explanation cov_fname = os.path.join(path, 'MEG', 'sample', 'sample_audvis-cov.fif') baseline = (None, 0) report = mne.Report(cov_fname=cov_fname, baseline=baseline, verbose=True) report.parse_folder(path, pattern=pattern, render_bem=False) report.save('report_evoked_whitened.html', overwrite=True) Explanation: To render whitened :class:~mne.Evoked files with baseline correction, pass the baseline argument we just used, and add the noise covariance file. This will display ERP/ERF plots for both the original and whitened :class:~mne.Evoked objects, but scalp topomaps only for the original. 
End of explanation pattern = 'sample_audvis-cov.fif' info_fname = os.path.join(path, 'MEG', 'sample', 'sample_audvis-ave.fif') report = mne.Report(info_fname=info_fname, verbose=True) report.parse_folder(path, pattern=pattern, render_bem=False) report.save('report_cov.html', overwrite=True) Explanation: If you want to actually view the noise covariance in the report, make sure it is captured by the pattern passed to :meth:~mne.Report.parse_folder, and also include a source for an :class:~mne.Info object (any of the :class:~mne.io.Raw, :class:~mne.Epochs or :class:~mne.Evoked :file:.fif files that contain subject data also contain the measurement information and should work): End of explanation # generate a custom plot: fname_evoked = os.path.join(path, 'MEG', 'sample', 'sample_audvis-ave.fif') evoked = mne.read_evokeds(fname_evoked, condition='Left Auditory', baseline=(None, 0), verbose=True) fig = evoked.plot(show=False) # add the custom plot to the report: report.add_figs_to_section(fig, captions='Left Auditory', section='evoked') report.save('report_custom.html', overwrite=True) Explanation: Adding custom plots to a report The python interface has greater flexibility compared to the command line interface &lt;mne report&gt;. For example, custom plots can be added via the :meth:~mne.Report.add_figs_to_section method: End of explanation report.save('report.h5', overwrite=True) report_from_disk = mne.open_report('report.h5') print(report_from_disk) Explanation: Managing report sections The MNE report command internally manages the sections so that plots belonging to the same section are rendered consecutively. Within a section, the plots are ordered in the same order that they were added using the :meth:~mne.Report.add_figs_to_section command. Each section is identified by a toggle button in the top navigation bar of the report which can be used to show or hide the contents of the section. To toggle the show/hide state of all sections in the HTML report, press :kbd:t. <div class="alert alert-info"><h4>Note</h4><p>Although we've been generating separate reports in each example, you could easily create a single report for all :file:`.fif` files (raw, evoked, covariance, etc) by passing ``pattern='*.fif'``.</p></div> Editing a saved report Saving to HTML is a write-only operation, meaning that we cannot read an .html file back as a :class:~mne.Report object. In order to be able to edit a report once it's no longer in-memory in an active Python session, save it as an HDF5 file instead of HTML: End of explanation with mne.open_report('report.h5') as report: report.add_figs_to_section(fig, captions='Left Auditory', section='evoked', replace=True) report.save('report_final.html', overwrite=True) Explanation: This allows the possibility of multiple scripts adding figures to the same report. To make this even easier, :class:mne.Report can be used as a context manager: End of explanation
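As a closing sketch (an addition, not part of the original tutorial), the pieces above combine into a two-stage workflow: build and persist an editable HDF5 report in one session, then reopen it later to append a custom figure and export HTML. It only uses calls already demonstrated above; the output filenames are placeholders.
import os
import mne

path = mne.datasets.sample.data_path(verbose=False)

# Stage 1: parse the folder and persist an editable HDF5 report
report = mne.Report(verbose=True)
report.parse_folder(path, pattern='*raw.fif', render_bem=False)
report.save('report_workflow.h5', overwrite=True)

# Stage 2 (possibly a later session): reopen, append a figure, export HTML
fname_evoked = os.path.join(path, 'MEG', 'sample', 'sample_audvis-ave.fif')
evoked = mne.read_evokeds(fname_evoked, condition='Left Auditory',
                          baseline=(None, 0))
with mne.open_report('report_workflow.h5') as report:
    fig = evoked.plot(show=False)
    report.add_figs_to_section(fig, captions='Left Auditory (appended)',
                               section='evoked', replace=True)
    report.save('report_workflow.html', overwrite=True, open_browser=False)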
5,114
Given the following text description, write Python code to implement the functionality described below step by step Description: LAB 01 Step1: The source dataset Our dataset is hosted in BigQuery. The taxi fare data is a publically available dataset, meaning anyone with a GCP account has access. Click here to acess the dataset. The Taxi Fare dataset is relatively large at 55 million training rows, but simple to understand, with only six features. The fare_amount is the target, the continuous value we’ll train a model to predict. Create a BigQuery Dataset A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called feat_eng if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too. Step2: Create the training data table Since there is already a publicly available dataset, we can simply create the training data table using this raw input data. Note the WHERE clause in the below query Step3: Verify table creation Verify that you created the dataset. Step4: Baseline Model Step5: Note, the query takes several minutes to complete. After the first iteration is complete, your model (baseline_model) appears in the navigation panel of the BigQuery web UI. Because the query uses a CREATE MODEL statement to create a model, you do not see query results. You can observe the model as it's being trained by viewing the Model stats tab in the BigQuery web UI. As soon as the first iteration completes, the tab is updated. The stats continue to update as each iteration completes. Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook. Evaluate the baseline model Note that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. After creating your model, you evaluate the performance of the regressor using the ML.EVALUATE function. The ML.EVALUATE function evaluates the predicted values against the actual data. NOTE Step6: NOTE Step7: Model 1 Step8: Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook. Next, two distinct SQL statements show the TRAINING and EVALUATION metrics of model_1. Step9: Here we run a SQL query to take the SQRT() of the mean squared error as your loss metric for evaluation for the benchmark_model. Step10: Model 2 Step11: Model 3
Python Code: %%bash export PROJECT=$(gcloud config list project --format "value(core.project)") echo "Your current GCP Project Name is: "$PROJECT Explanation: LAB 01: Basic Feature Engineering in BQML Learning Objectives Create SQL statements to evaluate the model Extract temporal features Perform a feature cross on temporal features Introduction In this lab, we utilize feature engineering to improve the prediction of the fare amount for a taxi ride in New York City. We will use BigQuery ML to build a taxifare prediction model, using feature engineering to improve and create a final model. In this Notebook we set up the environment, create the project dataset, create a feature engineering table, create and evaluate a baseline model, extract temporal features, perform a feature cross on temporal features, and evaluate model performance throughout the process. Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook. Set up environment variables and load necessary libraries End of explanation %%bash # Create a BigQuery dataset for feat_eng if it doesn't exist datasetexists=$(bq ls -d | grep -w feat_eng) if [ -n "$datasetexists" ]; then echo -e "BigQuery dataset already exists, let's not recreate it." else echo "Creating BigQuery dataset titled: feat_eng" bq --location=US mk --dataset \ --description 'Taxi Fare' \ $PROJECT:feat_eng echo "\nHere are your current datasets:" bq ls fi Explanation: The source dataset Our dataset is hosted in BigQuery. The taxi fare data is a publically available dataset, meaning anyone with a GCP account has access. Click here to acess the dataset. The Taxi Fare dataset is relatively large at 55 million training rows, but simple to understand, with only six features. The fare_amount is the target, the continuous value we’ll train a model to predict. Create a BigQuery Dataset A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called feat_eng if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too. End of explanation %%bigquery CREATE OR REPLACE TABLE feat_eng.feateng_training_data AS SELECT (tolls_amount + fare_amount) AS fare_amount, passenger_count*1.0 AS passengers, pickup_datetime, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat FROM `nyc-tlc.yellow.trips` WHERE MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), 10000) = 1 AND fare_amount >= 2.5 AND passenger_count > 0 AND pickup_longitude > -78 AND pickup_longitude < -70 AND dropoff_longitude > -78 AND dropoff_longitude < -70 AND pickup_latitude > 37 AND pickup_latitude < 45 AND dropoff_latitude > 37 AND dropoff_latitude < 45 Explanation: Create the training data table Since there is already a publicly available dataset, we can simply create the training data table using this raw input data. Note the WHERE clause in the below query: This clause allows us to TRAIN a portion of the data (e.g. one hundred thousand rows versus one million rows), which keeps your query costs down. If you need a refresher on using MOD() for repeatable splits see this post. Note: The dataset in the create table code below is the one created previously, e.g. "feat_eng". The table name is "feateng_training_data". Run the query to create the table. End of explanation %%bigquery # LIMIT 0 is a free query; this allows us to check that the table exists. 
SELECT * FROM feat_eng.feateng_training_data LIMIT 0 Explanation: Verify table creation Verify that you created the dataset. End of explanation %%bigquery CREATE OR REPLACE MODEL feat_eng.baseline_model OPTIONS (model_type='linear_reg', input_label_cols=['fare_amount']) AS SELECT fare_amount, passengers, pickup_datetime, pickuplon, pickuplat, dropofflon, dropofflat FROM feat_eng.feateng_training_data Explanation: Baseline Model: Create the baseline model Next, you create a linear regression baseline model with no feature engineering. Recall that a model in BigQuery ML represents what an ML system has learned from the training data. A baseline model is a solution to a problem without applying any machine learning techniques. When creating a BQML model, you must specify the model type (in our case linear regression) and the input label (fare_amount). Note also that we are using the training data table as the data source. Now we create the SQL statement to create the baseline model. End of explanation %%bigquery # Eval statistics on the held out data. SELECT *, SQRT(loss) AS rmse FROM ML.TRAINING_INFO(MODEL feat_eng.baseline_model) %%bigquery SELECT * FROM ML.EVALUATE(MODEL feat_eng.baseline_model) Explanation: Note, the query takes several minutes to complete. After the first iteration is complete, your model (baseline_model) appears in the navigation panel of the BigQuery web UI. Because the query uses a CREATE MODEL statement to create a model, you do not see query results. You can observe the model as it's being trained by viewing the Model stats tab in the BigQuery web UI. As soon as the first iteration completes, the tab is updated. The stats continue to update as each iteration completes. Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook. Evaluate the baseline model Note that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. After creating your model, you evaluate the performance of the regressor using the ML.EVALUATE function. The ML.EVALUATE function evaluates the predicted values against the actual data. NOTE: The results are also displayed in the BigQuery Cloud Console under the Evaluation tab. Review the learning and eval statistics for the baseline_model. End of explanation %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL feat_eng.baseline_model) Explanation: NOTE: Because you performed a linear regression, the results include the following columns: mean_absolute_error mean_squared_error mean_squared_log_error median_absolute_error r2_score explained_variance Resource for an explanation of the Regression Metrics. Mean squared error (MSE) - Measures the difference between the values our model predicted using the test set and the actual values. You can also think of it as the distance between your regression (best fit) line and the predicted values. Root mean squared error (RMSE) - The primary evaluation metric for this ML problem is the root mean-squared error. RMSE measures the difference between the predictions of a model, and the observed values. A large RMSE is equivalent to a large average error, so smaller values of RMSE are better. One nice property of RMSE is that the error is given in the units being measured, so you can tell very directly how incorrect the model might be on unseen data. R2: An important metric in the evaluation results is the R2 score. 
The R2 score is a statistical measure that determines if the linear regression predictions approximate the actual data. Zero (0) indicates that the model explains none of the variability of the response data around the mean. One (1) indicates that the model explains all the variability of the response data around the mean. Next, we write a SQL query to take the SQRT() of the mean squared error as your loss metric for evaluation for the benchmark_model. End of explanation %%bigquery CREATE OR REPLACE MODEL feat_eng.model_1 OPTIONS (model_type='linear_reg', input_label_cols=['fare_amount']) AS SELECT fare_amount, passengers, pickup_datetime, EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek, pickuplon, pickuplat, dropofflon, dropofflat FROM feat_eng.feateng_training_data Explanation: Model 1: EXTRACT dayofweek from the pickup_datetime feature. As you recall, dayofweek is an enum representing the 7 days of the week. This factory allows the enum to be obtained from the int value. The int value follows the ISO-8601 standard, from 1 (Monday) to 7 (Sunday). If you were to extract the dayofweek from pickup_datetime using BigQuery SQL, the dataype returned would be integer. Next, we create a model titled "model_1" from the benchmark model and extract out the DayofWeek. End of explanation %%bigquery SELECT *, SQRT(loss) AS rmse FROM ML.TRAINING_INFO(MODEL feat_eng.model_1) %%bigquery SELECT * FROM ML.EVALUATE(MODEL feat_eng.model_1) Explanation: Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook. Next, two distinct SQL statements show the TRAINING and EVALUATION metrics of model_1. End of explanation %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL feat_eng.model_1) Explanation: Here we run a SQL query to take the SQRT() of the mean squared error as your loss metric for evaluation for the benchmark_model. End of explanation %%bigquery CREATE OR REPLACE MODEL feat_eng.model_2 OPTIONS (model_type='linear_reg', input_label_cols=['fare_amount']) AS SELECT fare_amount, passengers, #pickup_datetime, EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek, EXTRACT(HOUR FROM pickup_datetime) AS hourofday, pickuplon, pickuplat, dropofflon, dropofflat FROM `feat_eng.feateng_training_data` %%bigquery SELECT * FROM ML.EVALUATE(MODEL feat_eng.model_2) %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL feat_eng.model_2) Explanation: Model 2: EXTRACT hourofday from the pickup_datetime feature As you recall, pickup_datetime is stored as a TIMESTAMP, where the Timestamp format is retrieved in the standard output format – year-month-day hour:minute:second (e.g. 2016-01-01 23:59:59). Hourofday returns the integer number representing the hour number of the given date. Hourofday is best thought of as a discrete ordinal variable (and not a categorical feature), as the hours can be ranked (e.g. there is a natural ordering of the values). Hourofday has an added characteristic of being cyclic, since 12am follows 11pm and precedes 1am. Next, we create a model titled "model_2" and EXTRACT the hourofday from the pickup_datetime feature to improve our model's rmse. 
End of explanation %%bigquery CREATE OR REPLACE MODEL feat_eng.model_3 OPTIONS (model_type='linear_reg', input_label_cols=['fare_amount']) AS SELECT fare_amount, passengers, #pickup_datetime, #EXTRACT(DAYOFWEEK FROM pickup_datetime) AS dayofweek, #EXTRACT(HOUR FROM pickup_datetime) AS hourofday, CONCAT(CAST(EXTRACT(DAYOFWEEK FROM pickup_datetime) AS STRING), CAST(EXTRACT(HOUR FROM pickup_datetime) AS STRING)) AS hourofday, pickuplon, pickuplat, dropofflon, dropofflat FROM `feat_eng.feateng_training_data` %%bigquery SELECT * FROM ML.EVALUATE(MODEL feat_eng.model_3) %%bigquery SELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL feat_eng.model_3) Explanation: Model 3: Feature cross dayofweek and hourofday using CONCAT First, let’s allow the model to learn traffic patterns by creating a new feature that combines the time of day and day of week (this is called a feature cross. Note: BQML by default assumes that numbers are numeric features, and strings are categorical features. We need to convert both the dayofweek and hourofday features to strings because the model (Neural Network) will automatically treat any integer as a numerical value rather than a categorical value. Thus, if not cast as a string, the dayofweek feature will be interpreted as numeric values (e.g. 1,2,3,4,5,6,7) and hour ofday will also be interpreted as numeric values (e.g. the day begins at midnight, 00:00, and the last minute of the day begins at 23:59 and ends at 24:00). As such, there is no way to distinguish the "feature cross" of hourofday and dayofweek "numerically". Casting the dayofweek and hourofday as strings ensures that each element will be treated like a label and will get its own coefficient associated with it. Create the SQL statement to feature cross the dayofweek and hourofday using the CONCAT function. Name the model "model_3" End of explanation
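A natural follow-up (an added sketch, not part of the original lab): once model_3 is trained, BigQuery ML's ML.PREDICT can score new rows. The single ride below is invented purely for illustration; the input column names and types must match the aliases used at training time, and the prediction comes back as predicted_fare_amount.
%%bigquery
SELECT
  predicted_fare_amount
FROM
  ML.PREDICT(MODEL feat_eng.model_3,
    (SELECT
       1.0 AS passengers,
       -- feature cross of dayofweek=1 (Sunday) and hourofday=8, as a string
       CONCAT(CAST(1 AS STRING), CAST(8 AS STRING)) AS hourofday,
       -73.98 AS pickuplon,
       40.75 AS pickuplat,
       -73.97 AS dropofflon,
       40.76 AS dropofflat))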
5,115
Given the following text description, write Python code to implement the functionality described below step by step Description: Please find torch implementation of this notebook here Step2: Implementation Utility functions. Step6: Main function. Step7: Example The shape of the multi-head attention output is (batch_size, num_queries, num_hiddens).
Python Code: import jax import jax.numpy as jnp # JAX NumPy from jax import lax import math from IPython import display try: from flax import linen as nn # The Linen API except ModuleNotFoundError: %pip install -qq flax from flax import linen as nn # The Linen API from flax.training import train_state # Useful dataclass to keep train state import numpy as np # Ordinary NumPy try: import optax # Optimizers except ModuleNotFoundError: %pip install -qq optax import optax # Optimizers rng = jax.random.PRNGKey(0) Explanation: Please find torch implementation of this notebook here: https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/book1/15/multi_head_attention_torch.ipynb <a href="https://colab.research.google.com/github/codeboy5/probml-notebooks/blob/add-multi_head_attention_jax/notebooks-d2l/multi_head_attention_jax.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Multi-head attention. We show how to multi-head attention. Based on sec 10.5 of http://d2l.ai/chapter_attention-mechanisms/multihead-attention.html. End of explanation def transpose_qkv(X, num_heads): # Shape of input `X`: # (`batch_size`, no. of queries or key-value pairs, `num_hiddens`). # Shape of output `X`: # (`batch_size`, no. of queries or key-value pairs, `num_heads`, # `num_hiddens` / `num_heads`) X = X.reshape((X.shape[0], X.shape[1], num_heads, -1)) # Shape of output `X`: # (`batch_size`, `num_heads`, no. of queries or key-value pairs, # `num_hiddens` / `num_heads`) X = jnp.transpose(X, (0, 2, 1, 3)) # Shape of `output`: # (`batch_size` * `num_heads`, no. of queries or key-value pairs, # `num_hiddens` / `num_heads`) return X.reshape((-1, X.shape[2], X.shape[3])) def transpose_output(X, num_heads): Reverse the operation of `transpose_qkv` X = X.reshape((-1, num_heads, X.shape[1], X.shape[2])) X = jnp.transpose(X, (0, 2, 1, 3)) return X.reshape((X.shape[0], X.shape[1], -1)) Explanation: Implementation Utility functions. End of explanation def sequence_mask(X, valid_len, value=0): Mask irrelevant entries in sequences. maxlen = X.shape[1] mask = jnp.arange((maxlen), dtype=jnp.float32)[None, :] < valid_len[:, None] X = jnp.where(~mask, value, X) return X def masked_softmax(X, valid_lens): Perform softmax operation by masking elements on the last axis. # `X`: 3D tensor, `valid_lens`: 1D or 2D tensor if valid_lens is None: return nn.softmax(X, axis=-1) else: shape = X.shape if valid_lens.ndim == 1: valid_lens = jnp.repeat(valid_lens, shape[1]) else: valid_lens = valid_lens.reshape(-1) # On the last axis, replace masked elements with a very large negative # value, whose exponentiation outputs 0 X = sequence_mask(X.reshape(-1, shape[-1]), valid_lens, value=-1e6) return nn.softmax(X.reshape(shape), axis=-1) class DotProductAttention(nn.Module): Scaled dot product attention. dropout: float # Shape of `queries`: (`batch_size`, no. of queries, `d`) # Shape of `keys`: (`batch_size`, no. of key-value pairs, `d`) # Shape of `values`: (`batch_size`, no. of key-value pairs, value # dimension) # Shape of `valid_lens`: (`batch_size`,) or (`batch_size`, no. 
of queries) @nn.compact def __call__(self, queries, keys, values, valid_lens=None, deterministic=True): d = queries.shape[-1] scores = queries @ (keys.swapaxes(1, 2)) / math.sqrt(d) attention_weights = masked_softmax(scores, valid_lens) dropout_layer = nn.Dropout(self.dropout, deterministic=deterministic) return dropout_layer(attention_weights) @ values class MultiHeadAttention(nn.Module): num_hiddens: int num_heads: int dropout: float bias: bool = False @nn.compact def __call__(self, queries, keys, values, valid_lens): # Shape of `queries`, `keys`, or `values`: # (`batch_size`, no. of queries or key-value pairs, `num_hiddens`) # Shape of `valid_lens`: # (`batch_size`,) or (`batch_size`, no. of queries) # After transposing, shape of output `queries`, `keys`, or `values`: # (`batch_size` * `num_heads`, no. of queries or key-value pairs, # `num_hiddens` / `num_heads`) queries = transpose_qkv(nn.Dense(self.num_hiddens, use_bias=self.bias)(queries), self.num_heads) keys = transpose_qkv(nn.Dense(self.num_hiddens, use_bias=self.bias)(keys), self.num_heads) values = transpose_qkv(nn.Dense(self.num_hiddens, use_bias=self.bias)(values), self.num_heads) if valid_lens is not None: # On axis 0, copy the first item (scalar or vector) for # `num_heads` times, then copy the next item, and so on valid_lens = jnp.repeat(valid_lens, self.num_heads, axis=0) # Shape of `output`: (`batch_size` * `num_heads`, no. of queries, # `num_hiddens` / `num_heads`) output = DotProductAttention(self.dropout)(queries, keys, values, valid_lens) # Shape of `output_concat`: # (`batch_size`, no. of queries, `num_hiddens`) output_concat = transpose_output(output, self.num_heads) return nn.Dense(self.num_hiddens, use_bias=self.bias)(output_concat) Explanation: Main function. End of explanation num_hiddens, num_heads = 100, 5 attention = MultiHeadAttention(num_hiddens, num_heads, 0.5) batch_size, num_queries, num_kvpairs, valid_lens = 2, 4, 6, jnp.array([3, 2]) X = jnp.ones((batch_size, num_queries, num_hiddens)) Y = jnp.ones((batch_size, num_kvpairs, num_hiddens)) variables = attention.init(jax.random.PRNGKey(0), X, Y, Y, valid_lens) output = attention.apply(variables, X, Y, Y, valid_lens) output.shape Explanation: Example The shape of the multi-head attention output is (batch_size, num_queries, num_hiddens). End of explanation
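One extra check worth sketching (an addition, not in the original notebook): masked_softmax above also supports a 2-D valid_lens, so each query can attend over a different number of key-value pairs. Reusing attention, variables, X, and Y from the example:
# Per-query valid lengths: shape (batch_size, num_queries), values <= num_kvpairs
valid_lens_2d = jnp.array([[1, 2, 3, 4],
                           [2, 2, 2, 2]])
output_2d = attention.apply(variables, X, Y, Y, valid_lens_2d)
print(output_2d.shape)  # expected: (2, 4, 100)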
5,116
Given the following text description, write Python code to implement the functionality described. Description: Count pairs of nodes having minimum distance between them equal to the difference of their distances from root Stores the count of pairs ; Store the adjacency list of the connecting vertex ; Function to perform DFS traversal of the given tree ; Traverse the adjacency list of the current node u ; If the current node is the parent node ; Add the number of ancestors, which is the same as the depth of the node ; Function for DFS traversal of the given tree ; Print result ; Function to find the count of pairs such that the minimum distance between them is equal to the difference between the distances of the nodes from the root node ; Add edges to adj[] ; Driver Code
Python Code: ans = 0
adj = [[] for i in range(10 ** 5 + 1)]

def dfsUtil(u, par, depth):
    global adj, ans
    # Traverse the adjacency list of the current node u
    for it in adj[u]:
        # Skip the edge back to the parent node
        if it != par:
            dfsUtil(it, u, depth + 1)
    # Each node contributes its depth (its ancestors, plus itself)
    ans += depth

def dfs(u, par, depth):
    global ans
    dfsUtil(u, par, depth)
    # Print result
    print(ans)

def countPairs(edges):
    global adj
    # Add edges to adj[]
    for i in range(len(edges)):
        u = edges[i][0]
        v = edges[i][1]
        adj[u].append(v)
        adj[v].append(u)
    dfs(1, 1, 1)

# Driver Code
if __name__ == '__main__':
    edges = [[1, 2], [1, 3], [2, 4]]
    countPairs(edges)
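A quick way to sanity-check the result (an added sketch, not from the original source) is a brute-force count over all unordered node pairs, including each node paired with itself, since dfs starts the root at depth 1. For the sample tree both approaches should report 8.
from collections import deque

def bfs_dist(src, g):
    # Breadth-first distances from src in an undirected tree
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in g[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def brute_force(edges):
    nodes = sorted({u for e in edges for u in e})
    g = {u: [] for u in nodes}
    for u, v in edges:
        g[u].append(v)
        g[v].append(u)
    d_root = bfs_dist(1, g)
    count = 0
    for i, u in enumerate(nodes):
        d_u = bfs_dist(u, g)
        for v in nodes[i:]:  # unordered pairs, including u == v
            if d_u[v] == abs(d_root[u] - d_root[v]):
                count += 1
    return count

print(brute_force([[1, 2], [1, 3], [2, 4]]))  # expected: 8, matching countPairs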
5,117
Given the following text description, write Python code to implement the functionality described below step by step Description: A Simple Autoencoder We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data. In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset. Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits. Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input. Exercise Step3: Training Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss. Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed). Step5: Checking out the results Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
Python Code: %matplotlib inline import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data', validation_size=0) Explanation: A Simple Autoencoder We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data. In this notebook, we'll be build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset. End of explanation img = mnist.train.images[200] plt.imshow(img.reshape((28, 28)), cmap='Greys_r') Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits. End of explanation len(mnist.train.images) type(mnist.train.images[0]) inputs_ = [[image.flatten()] for image in mnist.train.images] inputs_[1] # Size of the encoding layer (the hidden layer) encoding_dim = 32 # feel free to change this value image_size = 784 inputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs') targets_ = tf.placeholder(tf.float32, (None, image_size), name='targets') # Output of hidden layer encoded = tf.contrib.layers.fully_connected(inputs_, encoding_dim, activation_fn = tf.nn.relu) # Output layer logits logits = tf.contrib.layers.fully_connected(encoded, image_size, activation_fn = None) # Sigmoid output from logits decoded = tf.nn.sigmoid(logits) # Sigmoid cross-entropy loss loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_,logits = logits) # Mean of the loss cost = tf.reduce_mean(loss) # Adam optimizer opt = tf.train.AdamOptimizer().minimize(cost) Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input. Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function. 
End of explanation # Create the session sess = tf.Session() Explanation: Training End of explanation epochs = 20 batch_size = 200 sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) feed = {inputs_: batch[0], targets_: batch[0]} batch_cost, _ = sess.run([cost, opt], feed_dict=feed) print("Epoch: {}/{}...".format(e+1, epochs), "Training loss: {:.4f}".format(batch_cost)) Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss. Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightfoward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed). End of explanation fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4)) in_imgs = mnist.test.images[:10] reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs}) for images, row in zip([in_imgs, reconstructed], axes): for img, ax in zip(images, row): ax.imshow(img.reshape((28, 28)), cmap='Greys_r') ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) fig.tight_layout(pad=0.1) sess.close() Explanation: Checking out the results Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts. End of explanation
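A small optional extension (not part of the original exercise): before sess.close() runs, the same cost tensor can report the reconstruction loss on the held-out test images.
# Run this before sess.close(): average test-set reconstruction loss
test_feed = {inputs_: mnist.test.images, targets_: mnist.test.images}
test_cost = sess.run(cost, feed_dict=test_feed)
print("Test reconstruction loss: {:.4f}".format(test_cost))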
5,118
Given the following text description, write Python code to implement the functionality described below step by step Description: django-postgres-copy speed tests By Ben Welsh This notebook tests the effect of dropping database constraints and indexes prior to loading a large data file. The official PostgreSQL documentation suggests this can lead to significant gains. We will test this claim by dropping constraints and indexes prior to loading data from the California Civic Data Coalition via the django-postgres-copy wrapper on the database's COPY command. Connect California Civic Data Coalition Django project Import Python tools Step1: Add the Django settings module to the environment. Step2: Verify we have the correct version of django-postgres-copy Step3: Boot the Django project Step4: Prep for speed tests Download raw data we will test Step5: Import database models Step6: Analyze the results Step7: What we loaded Step8: How long it took in seconds Step9: How long it took in minutes Step10: The change Step11: Chart the changes by table size
Python Code: import os import sys import csv import calculate import scipy as sp import pandas as pd import scipy.optimize from pprint import pprint import matplotlib as mpl from matplotlib import rcParams rcParams['font.family'] = 'monospace' import matplotlib.pyplot as plt import matplotlib.ticker as mtick import matplotlib.patches as patches import warnings warnings.simplefilter("ignore") import logging logger = logging.getLogger('postgres_copy') logger.addHandler(logging.NullHandler()) logger.propagate = False mpl.style.use('seaborn-paper') %matplotlib inline Explanation: django-postgre-copy speed tests By Ben Welsh This notebook tests the effect of dropping database constraints and indexes prior to loading a large data file. The official PostgreSQL documentation suggests it lead to significant gains. We will test this claim by dropping constraints and indexes prior to loading data from the California Civic Data Coalition via the django-postgres-copy wrapper on the database's COPY command. Connect California Civic Data Coalition Django project Import Python tools End of explanation sys.path.insert(0, '/home/palewire/.virtualenvs/django-calaccess-raw-data/src/') sys.path.insert(0, '/home/palewire/.virtualenvs/django-calaccess-raw-data/lib/python2.7/') sys.path.insert(0, '/home/palewire/.virtualenvs/django-calaccess-raw-data/lib/python2.7/site-packages/') sys.path.insert(0, '/home/palewire/Code/django-calaccess-raw-data/') sys.path.insert(0, '/home/palewire/Code/django-calaccess-raw-data/example/') sys.path.insert(0, '/home/palewire/Code/django-postgres-copy/') os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings") Explanation: Add the Django settings module to the environment. End of explanation import postgres_copy postgres_copy.__version__ Explanation: Verify we have the correct version of django-postgres-copy End of explanation %%capture import django django.setup() Explanation: Boot the Django project End of explanation !curl -O https://calaccess.download/latest/raw.zip !rm *.csv !unzip raw.zip Explanation: Prep for speed tests Download raw data we will test End of explanation import calaccess_raw from calaccess_raw import models def truncate(model): from django.db import connection cursor = connection.cursor() cursor.execute('TRUNCATE TABLE "{}";'.format(model._meta.db_table)) # Set a blank list to store the results result_list = [] # Loop through the models and run the tests for model in calaccess_raw.get_model_list(): print "Testing {}".format(model.__name__) # Find the CSV for the model csv_name = model().get_csv_name() if not os.path.exists(csv_name): print("No csv. Skipping.") continue # Count its rows raw_rows = !wc -l $csv_name rows = int(raw_rows[0].split()[0]) if not rows > 0: print("No data. 
Skipping.") continue # Map the CSV to the model mapping = dict( (f.name, f.db_column) for f in model._meta.fields if f.db_column ) # Set the test function f = model.objects.from_csv # Test it with dropped indexes drop_kwargs = dict(drop_constraints=True, drop_indexes=True) drop = %%timeit -r 3 -oq truncate(model); f(csv_name, mapping, **drop_kwargs) # Test it without dropped indexes dont_kwargs = dict(drop_constraints=False, drop_indexes=False) dont_drop = %%timeit -r 3 -oq truncate(model); f(csv_name, mapping, **dont_kwargs) # Return the results result = { 'model_name': model.__name__, 'rows': rows, 'indexed_fields': len(model.objects.indexed_fields), 'constrained_fields': len(model.objects.constrained_fields), 'dont_drop': dont_drop.best, 'drop': drop.best } pprint(result) result_list.append(result) outfile = csv.DictWriter( open("speed-test.csv", 'w'), result_list[0].keys() ) outfile.writeheader() outfile.writerows(result_list) Explanation: Import database models End of explanation df = pd.read_csv("speed-test.csv") Explanation: Analyze the results End of explanation print "Tables: {:,}".format(len(df)) print "Rows: {:,}".format(df.rows.sum()) Explanation: What we loaded End of explanation print "Without drops: {:,.0f} seconds".format(df.dont_drop.sum()) print "With drops: {:,.0f} seconds".format(df['drop'].sum()) Explanation: How long it took in seconds End of explanation print "Without drops: {:.0f} minutes, {:.0f} seconds".format(*divmod(df.dont_drop.sum(), 60)) print "With drops: {:.0f} minutes, {:.0f} seconds".format(*divmod(df['drop'].sum(), 60)) Explanation: How long it took in minutes End of explanation print "Absolute change: {:.0f} minutes, {:,.0f} seconds".format(*divmod(df['dont_drop'].sum() - df['drop'].sum(), 60)) print "Percent change: {:,.0f}%".format(((df['drop'].sum() - df.dont_drop.sum())/df['dont_drop'].sum())*100) Explanation: The change End of explanation df['change'] = df['dont_drop'] - df['drop'] df['percent_change'] = (df['drop'] - df['dont_drop']) / df['dont_drop'] fig = plt.figure(figsize=(15, 10)) ax = fig.add_subplot(1, 1, 1) ax.set_xlabel("Row count", fontsize=14, color="#333333") ax.set_ylabel("Change in load time (seconds)", fontsize=14, color="#333333") ax.scatter(df.rows, df.change, color="#ba0a35", s=65) plt.xlim(-500000, plt.xlim()[1]-1000000) plt.tick_params( axis='both', which='both', bottom='off', top='off', left="off", right="off" ) ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) ax.get_xaxis().get_major_formatter().set_scientific(False) fmt = '{x:,.0f}' tick = mtick.StrMethodFormatter(fmt) ax.xaxis.set_major_formatter(tick) plt.show() fig = plt.figure(figsize=(15, 10)) ax = fig.add_subplot(1, 1, 1) ax.set_xlabel("Row count", fontsize=14, color="#333333") ax.set_ylabel("Percent change in load time", fontsize=14, color="#333333") ax.scatter(df.rows, df.percent_change, color="#ba0a35", s=65) plt.xlim(-500000, plt.xlim()[1]-1000000) plt.axhline(0, lw=0.2, color="#ba0a35") plt.annotate("FASTER", (8000000, -0.1), fontsize=14, color="#333333") plt.annotate("SLOWER", (8000000, 0.1), fontsize=14, color="#333333") ax.add_patch( patches.Rectangle( (plt.xlim()[0], 0), plt.xlim()[1] + 500000, plt.ylim()[0], color="#ba0a35", alpha=0.1 ) ) plt.tick_params( axis='both', which='both', bottom='off', top='off', left="off", right="off" ) ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) ax.get_xaxis().get_major_formatter().set_scientific(False) fmt = '{x:,.0f}' tick = mtick.StrMethodFormatter(fmt) 
ax.xaxis.set_major_formatter(tick) plt.show() Explanation: Chart the changes by table size End of explanation
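To put a rough number on the trend in the scatter plots (an added sketch, not in the original notebook), a first-degree polynomial fit of seconds saved against row count will do; df and np are already in scope.
# Ordinary least-squares line through (rows, change)
slope, intercept = np.polyfit(df.rows, df.change, 1)
print "Estimated seconds saved per additional million rows: %.1f" % (slope * 1e6)
print "Estimated fixed overhead (intercept): %.2f seconds" % intercept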
5,119
Given the following text description, write Python code to implement the functionality described below step by step Description: San Diego Burrito Analytics Step1: Load data Step2: Brief metadata Step3: What types of burritos have been rated? Step4: Progress in number of burritos rated Step5: Burrito dimension distributions Step6: Fraction of burritos recommended
Python Code: %config InlineBackend.figure_format = 'retina' %matplotlib inline import numpy as np import scipy as sp import matplotlib.pyplot as plt import pandas as pd import seaborn as sns sns.set_style("white") Explanation: San Diego Burrito Analytics: Data characterization Scott Cole 21 May 2016 This notebook characterizes the collection of reviewers of San Diego burritos including: Metadata How many of each kind of burrito have been reviewed? For each of burrito dimension, what is the distribution of its scores across all samples? Default imports End of explanation import util df = util.load_burritos() N = df.shape[0] Explanation: Load data End of explanation print 'Number of burritos:', df.shape[0] print 'Number of restaurants:', len(df.Location.unique()) print 'Number of reviewers:', len(df.Reviewer.unique()) print 'Number of reviews by Scott:', df.Reviewer.value_counts()['Scott'] print 'Number of reviews by Emily:', df.Reviewer.value_counts()['Emily'] uniqlocidx = df.Location.drop_duplicates().index print 'Percentage of taco shops with free chips:', np.round(100 - 100*df.Chips[uniqlocidx].isnull().sum()/np.float(len(df.Location.unique())),1) Explanation: Brief metadata End of explanation # Number of each type of burrito def burritotypes(x, types = {'California':'cali', 'Carnitas':'carnita', 'Carne asada':'carne asada', 'Chicken':'chicken', 'Surf & Turf':'surf.*turf', 'Adobada':'adobad'}): import re T = len(types) Nmatches = {} for b in x: matched = False for t in types.keys(): re4str = re.compile('.*'+types[t]+'.*', re.IGNORECASE) if np.logical_and(re4str.match(b) is not None, matched is False): try: Nmatches[t] +=1 except KeyError: Nmatches[t] = 1 matched = True if matched is False: try: Nmatches['other'] +=1 except KeyError: Nmatches['other'] = 1 return Nmatches typecounts = burritotypes(df.Burrito) plt.figure(figsize=(6,6)) ax = plt.axes([0.1, 0.1, 0.65, 0.65]) # The slices will be ordered and plotted counter-clockwise. labels = typecounts.keys() fracs = typecounts.values() explode=[.1]*len(typecounts) patches, texts, autotexts = plt.pie(fracs, explode=explode, labels=labels, autopct=lambda(p): '{:.0f}'.format(p * np.sum(fracs) / 100), shadow=False, startangle=0) # The default startangle is 0, which would start # the Frogs slice on the x-axis. With startangle=90, # everything is rotated counter-clockwise by 90 degrees, # so the plotting starts on the positive y-axis. plt.title('Types of burritos',size=30) for t in texts: t.set_size(20) for t in autotexts: t.set_size(20) autotexts[0].set_color('w') autotexts[6].set_color('w') figname = 'burritotypes' plt.savefig('C:/gh/fig/burrito/'+figname + '.png') Explanation: What types of burritos have been rated? 
End of explanation # Time series of ratings import math def dates2ts(dates): from datetime import datetime D = len(dates) start = datetime.strptime('1/1/2016','%m/%d/%Y') ts = np.zeros(D,dtype=int) for d in range(D): burrdate = datetime.strptime(df.Date[d],'%m/%d/%Y') diff = burrdate - start ts[d] = diff.days return ts def cumburritos(days): from statsmodels.distributions.empirical_distribution import ECDF ecdf = ECDF(days) t = np.arange(days[-1]+1) return t, ecdf(t)*len(days) def datelabels(startdate = '1/1/2016', M = 9): from datetime import datetime start = datetime.strptime(startdate,'%m/%d/%Y') datestrs = [] ts = np.zeros(M) for m in range(M): datestrs.append(str(m+1) + '/1') burrdate = datetime.strptime(datestrs[m]+'/2016','%m/%d/%Y') diff = burrdate - start ts[m] = diff.days return datestrs, ts burrdays = dates2ts(df.Date) t, burrcdf = cumburritos(burrdays) datestrs, datets = datelabels() plt.figure(figsize=(4,4)) plt.plot(t,burrcdf,'k-') plt.xlabel('Date',size=20) plt.ylabel('# burritos rated',size=20) plt.xticks(datets,datestrs,size=15) plt.yticks((0,int(math.ceil(len(burrdays) / 10.0)) * 10),size=15) plt.tight_layout() figname = 'burritoprogress' plt.savefig('C:/Users/Scott/Google Drive/qwm/burritos/figs/'+figname + '.png') Explanation: Progress in number of burritos rated End of explanation # Distribution of hunger level plt.figure(figsize=(5,5)) n, _, _ = plt.hist(df.Hunger,np.arange(-.25,5.5,.5),color='k') plt.xlabel('Hunger level',size=20) plt.xticks(np.arange(0,5.5,.5),size=15) plt.xlim((-.25,5.25)) plt.ylabel('Count',size=20) plt.yticks((0,int(math.ceil(np.max(n) / 5.)) * 5),size=15) plt.tight_layout() figname = 'hungerleveldist' plt.savefig('C:/gh/fig/burrito/'+figname + '.png') # Average burrito cost plt.figure(figsize=(5,5)) n, _, _ = plt.hist(df.Cost,np.arange(4,10.25,.5),color='k') plt.xlabel('Cost ($)',size=20) plt.xticks(np.arange(4,11,1),size=15) plt.xlim((4,10)) plt.ylabel('Count',size=20) plt.yticks((0,int(math.ceil(np.max(n) / 5.)) * 5),size=15) plt.tight_layout() figname = 'costdist' plt.savefig('C:/gh/fig/burrito/'+figname + '.png') print np.mean(df.Cost) # Volume dist plt.figure(figsize=(5,5)) n, _, _ = plt.hist(df.Volume.dropna(),np.arange(0.5,1.3,.05),color='k') plt.xlabel('Volume (L)',size=20) plt.xticks(np.arange(0.5,1.3,.1),size=15) plt.xlim((0.5,1.2)) plt.ylabel('Count',size=20) plt.yticks((0,int(math.ceil(np.max(n) / 5.)) * 5),size=15) plt.tight_layout() figname = 'volumedist' plt.savefig('C:/gh/fig/burrito/'+figname + '.png') print np.mean(df.Volume) def metrichist(metricname): plt.figure(figsize=(5,5)) n, _, _ = plt.hist(df[metricname].dropna(),np.arange(-.25,5.5,.5),color='k') plt.xlabel(metricname + ' rating',size=20) plt.xticks(np.arange(0,5.5,.5),size=15) plt.xlim((-.25,5.25)) plt.ylabel('Count',size=20) plt.yticks((0,int(math.ceil(np.max(n) / 5.)) * 5),size=15) plt.tight_layout() if metricname == 'Meat:filling': metricname = 'meattofilling' figname = metricname + 'dist' plt.savefig('C:/gh/fig/burrito/'+figname + '.png') m_Hist = ['Tortilla','Temp','Meat','Fillings','Meat:filling','Uniformity','Salsa','Synergy','Wrap','overall'] for m in m_Hist: metrichist(m) Explanation: Burrito dimension distributions End of explanation # Overall recommendations plt.figure(figsize=(6,6)) ax = plt.axes([0.1, 0.1, 0.8, 0.8]) # The slices will be ordered and plotted counter-clockwise. 
labels = ['Yes','No'] fracs = np.array([np.sum(df.Rec==labels[0]),np.sum(df.Rec==labels[1])]) explode=[.01]*len(labels) patches, texts, autotexts = plt.pie(fracs, explode=explode, labels=labels, autopct=lambda(p): '{:.0f}'.format(p * np.sum(fracs) / 100), shadow=False, startangle=90) # The default startangle is 0, which would start # the Frogs slice on the x-axis. With startangle=90, # everything is rotated counter-clockwise by 90 degrees, # so the plotting starts on the positive y-axis. plt.title('Would you recommend this burrito?',size=30) for t in texts: t.set_size(20) for t in autotexts: t.set_size(30) autotexts[0].set_color('w') autotexts[1].set_color('w') figname = 'recspie' plt.savefig('C:/gh/fig/burrito/'+figname + '.png') Explanation: Fraction of burritos recommended End of explanation
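One natural follow-up (an addition, not from the original notebook) is to relate the recommendation to the overall score, e.g. the mean overall rating within each answer group:
# Mean overall rating and sample size, split by the yes/no recommendation
print df.groupby('Rec')['overall'].agg(['mean', 'count'])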
5,120
Given the following text description, write Python code to implement the functionality described below step by step Description: PYT-DS Step1: This is not quite how we want to see the data. We need to flip it, or swap axes. Transpose will do. Then let's change the column names. Finally, we'll sort.
Python Code: import pandas as pd import numpy as np Created on Thu Jun 29 10:17:02 2017 Rewritten for get_data package on Oct 25, 2017 @author: Kirby Urner Decorated generator used IN PLACE OF: class Url: def __init__(self, the_url): self.url = the_url def __enter__(self): self.rq = urlopen(self.url) return self.rq def __exit__(self, *oops): if oops[0]: print("Failed to connect") return False self.rq.close() return True from urllib.request import urlopen import json from contextlib import contextmanager PREFIX = "http://thekirbster.pythonanywhere.com/" @contextmanager def url(target): try: yield urlopen(target) except: print("Failed to connect") raise def get_chems(): Get the element data from the web using API Typical record: [1, "H", "Hydrogen", 1.008, "diatomic nonmetal", 1498013115, "KTU"] global chems with url(PREFIX + "api/elements?elem=all") as httpreq: data = json.loads(httpreq.read()) # getting JSON data chems = pd.DataFrame(data) get_chems() chems.head() Explanation: PYT-DS: Harvesting Data We're used to reading JSON and CSV files over the internet, using Pandas. However, if you have control of a server, there's no reason you can't make scripts such as the one below fetch everything for you under the hood, using URL requests. By the time the data surfaces in the Notebook, it's already an up-to-date Dataframe, sorted and massaged. I'm exposing the pipeline here, but it's easy to imagine the Notebook actually starting around the last cell, having already done the job behind the scenes, of harvesting data. As a data scientist, your role may be as much about storing data for convenient access, in a usable form, as it is about end user analysis of said data. Your role may be part DBA (database administrator) at the end of the day. That's not a bad thing. End of explanation chems = chems.T chems.columns=["Protons", "Symbol", "Name", "Mass", "Category", "Changed", "Initials"] chems.sort_values("Protons") chems.head() chems = chems.sort_index() # lets get this in index order chems.head() Explanation: This is not quite how we want to see the data. We need to flip it, or swap axes. Transpose will do. Then lets change the column names. Finally, we'll sort. End of explanation
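As an illustrative extension (not part of the original lesson), the cleaned table can now be filtered with ordinary pandas operations. The exact category string below is an assumption about how this particular API labels elements:
# Filter rows whose Category mentions 'noble gas' (label text assumed)
noble = chems[chems.Category.str.contains('noble gas', na=False)]
noble.head()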
5,121
Given the following text description, write Python code to implement the functionality described below step by step Description: <H1>Solving 1st order ODEs</H1> The logistic equation is a first-order non-linear differential equation that describes the evolution of a population as a function of the population size at a given generation. It can be written as Step1: <H2>Numerical solution to the differential equation</H2> It requires the initial condition and the independent variable Step2: <H2>Introducing optional arguments to the differential equation</H2>
Python Code: def diff(p, generation): Returns the as size of the population as a function of the generation defined in the following differential equation: dp/dg = p*(k-p)/tau, where p is the population size, g is the generation index, k is the maximal population size (fixed to 1000) and tau a constant that describes the number of individuals per generation (fixed to 1e4) p -- (int) population size, dependent variable generation -- (int) generation index, independent variable tau = 1e4 # rate of individuals per generation pMax = 1000 # max population size return (p * (pMax-p) ) / tau # define the independent variable (i.e., generations) g = np.arange(200) # 200 generations Explanation: <H1>Solving 1st order ODEs</H1> The logistic equation is a first-order non-linear differential equation that describes the evolution of a population as a function of the population size at a give generation. It can be written as: ${\displaystyle \tau {dp(t) \over dt} = p(t)(k-p(t))},$ where $p(t)$ y the population size at generation $t$, $k$ is the maximal size of the population, and $\tau$ a proportionality factor <H2>Define the equation</H2> End of explanation # solve the differential equation p = odeint(diff, 2, g) # initial conditions is 2 individuals # plot the solution plt.plot(g,p); plt.ylabel('Population size'); plt.xlabel('Generation'); Explanation: <H2>Numerical solution to the differential equation</H2> It requieres the initial condition and the independent variable End of explanation def diff2(p, generation, tau, pMax): Returns the as size of the population as a function of the generation defined in the following differential equation: dp/dg = p*(pMax-p/tau, where p is the population size, g is the generation index, pMax is the maximal population size and tau a constant that describes the number of individuals per generation p -- (int) population size generation -- (int) generation index pMax -- (int) maximal number of individuals in a population tau -- (float) rate of individuals per generation return (p*(pMax-p))/tau y = odeint(diff2, 2, g, args=(1e4, 1000)) # initial conditions is 2 individuals, tau = 1e4, pMax = 1000 # plot the solution plt.plot(g,y); plt.ylabel('Population size'); plt.xlabel('Generation'); Explanation: <H2>Introducing optional arguments to the differential equation</H2> End of explanation
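As a sanity check on odeint (an added sketch, not in the original notebook), the logistic equation has the closed-form solution p(g) = pMax / (1 + ((pMax - p0)/p0) * exp(-pMax*g/tau)), which can be plotted against the numerical result; g and p are reused from above.
import numpy as np
import matplotlib.pyplot as plt

p0, pMax, tau = 2.0, 1000.0, 1e4
analytic = pMax / (1.0 + ((pMax - p0) / p0) * np.exp(-pMax * g / tau))

plt.plot(g, p, label='odeint')
plt.plot(g, analytic, '--', label='closed form')
plt.xlabel('Generation')
plt.ylabel('Population size')
plt.legend();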
5,122
Given the following text description, write Python code to implement the functionality described below step by step Description: Feature Engineering |Session | Session | |-----------|---------| |Feature Engineering I | Feature Transformation and Dimension Reduction (PCA)| |Feature Engineering II | Nonlinear Dimension Reduction (Autoencoder)| |Feature Engineering III | Random Projections | Goals of this Lesson Feature Transformations Standard Normal Transform Domain-Specific Transform Mystery Transform Dimensionality Reduction PCA Step1: I've created a function that we'll use later to create visualizations. It is a bit messy and not essential to the material so don't worry about understanding it. I'll be happy to explain it to anyone interested during a break or after the session. Step2: We also need functions for shuffling the data and calculating classification errrors. Step3: 1. Warm-up Let's start with a warm-up exercise. In the data directory you'll find a dataset of recent movies and their ratings according to several popular websites. Let's load it with Pandas... Step4: Logistic Regression Review Data We observe pairs $(\mathbf{x}{i},y{i})$ where \begin{eqnarray} y_{i} \in { 0, 1} & Step5: 2. Feature Transformations Good features are crucial for training well-performing classifiers Step6: Standard Normal scaling is a common and usually default first step, especially when you know the data in measured in different units. 2.2 Domain-Specific Transformations Sometimes the data calls for a specific transformation. We'll demonstrate this on the NBA dataset used in our first workshop. Let's load it... Step7: And let's run logistic regression on the data just as we did before... Step8: Now let's transform the Cartesian coordinates into polar coordinates Step9: <span style="color Step10: 3. Dimensionality Reduction Sometimes the data calls for more aggressive transformations. High-dimensional data is usually hard to model because classifiers are likely to overfit. Regularization is one way to combat high dimensionality, but often it can not be enough. This section will cover dimensionality reduction--a technique for reducing the number of features while still preserving curcial information. This is a form of unsupervised learning since we use no class information. 3.1 Image Dataset Step11: and then visualize two of the images... Step12: 3.2 Principal Component Analysis As we've seen, the dataset has many, many, many more features (columns) than examples (rows). Simple Lasso or Ridge regularization probably won't be enough to prevent overfitting so we have to do something more drastic. In this section, we'll cover Principal Component Analysis, a popular technique for reducing the dimensionality of data. Unsupervised Learning PCA does not take into consideration labels, only the input features. We can think of PCA as performing unsupervised 'inverse' prediction. Our goal is Step13: Let's visualize two of the paintings... Step14: We can also visualize the transformation matrix $\mathbf{W}^{T}$. It's rows act as 'filters' or 'feature detectors'... Step15: 3.3 PCA for Visualization PCA can also be done for visualization purposes. Let's perform PCA on the movie ratings dataset and see if any semblence of the class structure can be seen. Step16: <span style="color Step17: This dataset contains 400 64x64 pixel images of 40 people each exhibiting 10 facial expressions. The images are in gray-scale, not color, and therefore flattened vectors contain 4096 dimensions. 
<span style="color Step18: <span style="color Step19: Your output should look something like what's below (although could be a different face) Step20: Your output should look something like what's below (although could have differently ranked components)
Python Code: from IPython.display import Image import matplotlib.pyplot as plt import numpy as np import pandas as pd import time %matplotlib inline Explanation: Feature Engineering |Session | Session | |-----------|---------| |Feature Engineering I | Feature Transformation and Dimension Reduction (PCA)| |Feature Engineering II | Nonlinear Dimension Reduction (Autoencoder)| |Feature Engineering III | Random Projections | Goals of this Lesson Feature Transformations Standard Normal Transform Domain-Specific Transform Mystery Transform Dimensionality Reduction PCA: Model and Learning PCA for Images PCA for Visualization References Chapter 14 of Elements of Statistical Learning by Hastie, Tibshirani, Friedman A Few Useful Things to Know about Machine Learning SciKit-Learn's documentation on data preprocessing SciKit-Learn's documentation on dimensionality reduction 0. Preliminaries First we need to import Numpy, Pandas, MatPlotLib... End of explanation from matplotlib.colors import ListedColormap # Another messy looking function to make pretty plots of basketball courts def visualize_court(log_reg_model, coord_type='cart', court_image = '../data/nba/nba_court.jpg'): two_class_cmap = ListedColormap(['#FFAAAA', '#AAFFAA']) # light red for miss, light green for make x_min, x_max = 0, 50 #width (feet) of NBA court y_min, y_max = 0, 47 #length (feet) of NBA half-court grid_step_size = 0.2 grid_x, grid_y = np.meshgrid(np.arange(x_min, x_max, grid_step_size), np.arange(y_min, y_max, grid_step_size)) features = np.c_[grid_x.ravel(), grid_y.ravel()] # change coordinate system if coord_type == 'polar': features = np.c_[grid_x.ravel(), grid_y.ravel()] hoop_location = np.array([25., 0.]) features -= hoop_location dists = np.sqrt(np.sum(features**2, axis=1)) angles = np.arctan2(features[:,1], features[:,0]) features = np.hstack([dists[np.newaxis].T, angles[np.newaxis].T]) grid_predictions = log_reg_model.predict(features) grid_predictions = grid_predictions.reshape(grid_x.shape) fig, ax = plt.subplots() court_image = plt.imread(court_image) ax.imshow(court_image, interpolation='bilinear', origin='lower',extent=[x_min,x_max,y_min,y_max]) ax.imshow(grid_predictions, cmap=two_class_cmap, interpolation = 'nearest', alpha = 0.60, origin='lower',extent=[x_min,x_max,y_min,y_max]) plt.xlim(x_min, x_max) plt.ylim(y_min, y_max) plt.title( "Make / Miss Prediction Boundaries" ) plt.show() Explanation: I've created a function that we'll use later to create visualizations. It is a bit messy and not essential to the material so don't worry about understanding it. I'll be happy to explain it to anyone interested during a break or after the session. End of explanation ### function for shuffling the data and labels def shuffle_in_unison(features, labels): rng_state = np.random.get_state() np.random.shuffle(features) np.random.set_state(rng_state) np.random.shuffle(labels) ### calculate classification errors # return a percentage: (number misclassified)/(total number of datapoints) def calc_classification_error(predictions, class_labels): n = predictions.size num_of_errors = 0. for idx in xrange(n): if (predictions[idx] >= 0.5 and class_labels[idx]==0) or (predictions[idx] < 0.5 and class_labels[idx]==1): num_of_errors += 1 return num_of_errors/n Explanation: We also need functions for shuffling the data and calculating classification errrors. 
# load a dataset of recent movies and their ratings across several websites
movie_data = pd.read_csv('../data/movie_ratings.csv')
# reduce it to just the ratings categories
movie_data = movie_data[['FILM','RottenTomatoes','RottenTomatoes_User','Metacritic','Metacritic_User','Fandango_Ratingvalue', 'IMDB']]
movie_data.head()
movie_data.describe()
Explanation: 1. Warm-up Let's start with a warm-up exercise. In the data directory you'll find a dataset of recent movies and their ratings according to several popular websites. Let's load it with Pandas...
End of explanation
from sklearn.linear_model import LogisticRegression
# set the random number generator for reproducibility
np.random.seed(123)
# let's try to predict the IMDB rating from the others
features = movie_data[['RottenTomatoes','RottenTomatoes_User','Metacritic','Metacritic_User','Fandango_Ratingvalue']].as_matrix()
# create classes: more or less than a 7/10 rating
labels = (movie_data['IMDB'] >= 7.).astype('int').tolist()
shuffle_in_unison(features, labels)
### Your Code Goes Here ###
# initialize and train a logistic regression model
# compute error on training data
model_LogReg = LogisticRegression()
model_LogReg.fit(features, labels)
predicted_labels = model_LogReg.predict(features)
train_error_rate = calc_classification_error(predicted_labels, labels)
###########################
print "Classification error on training set: %.2f%%" %(train_error_rate*100)
# compute the baseline error since the classes are imbalanced
print "Baseline Error: %.2f%%" %((sum(labels)*100.)/len(labels))
Explanation: Logistic Regression Review Data We observe pairs $(\mathbf{x}_{i}, y_{i})$ where \begin{eqnarray} y_{i} \in \{0, 1\} &:& \mbox{class label} \\ \mathbf{x}_{i} = (x_{i,1}, \dots, x_{i,D}) &:& \mbox{set of $D$ explanatory variables (aka features)} \end{eqnarray} Parameters \begin{eqnarray} \mathbf{\beta}^{T} = (\beta_{0}, \dots, \beta_{D}) : \mbox{values encoding the relationship between the features and label} \end{eqnarray} Transformation Function \begin{equation} f(z_{i} = \mathbf{x}_{i} \mathbf{\beta}) = (1 + e^{-\mathbf{x}_{i} \mathbf{\beta}})^{-1} \end{equation} Error Function \begin{eqnarray} \mathcal{L} = \sum_{i=1}^{N} -y_{i} \log f(\mathbf{x}_{i} \mathbf{\beta}) - (1 - y_{i}) \log (1 - f(\mathbf{x}_{i} \mathbf{\beta})) \end{eqnarray} Learning $\beta$ - Randomly initialize $\beta$ - Until $\alpha || \nabla \mathcal{L} || < tol$: - $\mathbf{\beta}_{t+1} = \mathbf{\beta}_{t} - \alpha \nabla_{\mathbf{\beta}} \mathcal{L}$ <span style="color:red">STUDENT ACTIVITY (10 MINS)</span> Let's run a logistic regression classifier on this data via SciKit-Learn. If you need a refresher, check out the notebook from the first course and the SciKit-Learn documentation on logistic regression. The goal is to predict if the IMDB rating is under or over 7/10, using the other ratings as features. I've started the code. You just need to fill-in the lines for training and computing the error. Note there is no test set yet.
End of explanation
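To make the update rule above concrete, here is a minimal NumPy sketch of that gradient-descent loop. It is only an illustration of the math -- scikit-learn's LogisticRegression uses its own, more robust solvers, and the learning rate, tolerance, and iteration cap below are arbitrary choices:
def sigmoid(z):
    return 1. / (1. + np.exp(-z))

def fit_logreg_gd(X, y, alpha=0.1, tol=1e-6, max_iters=10000):
    X = np.hstack([np.ones((X.shape[0], 1)), X])   # column of 1s for the intercept beta_0
    y = np.asarray(y, dtype=float)
    beta = 0.01 * np.random.randn(X.shape[1])      # random initialization
    for _ in range(max_iters):
        grad = X.T.dot(sigmoid(X.dot(beta)) - y)   # gradient of the log loss above
        beta = beta - (alpha / X.shape[0]) * grad  # beta_{t+1} = beta_t - alpha * grad
        if alpha * np.linalg.norm(grad) < tol:     # the stopping rule from the review
            break
    return beta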
# perform z-score scaling
features_mu = np.mean(features, axis=0)
features_sigma = np.std(features, axis=0)
std_features = (features - features_mu)/features_sigma
# re-train model
lm = LogisticRegression()
lm.fit(std_features, labels)
### compute error on training data
predictions = lm.predict(std_features)
print "Classification error on training set: %.3f%%" %(calc_classification_error(predictions, labels)*100)
# compute the baseline error since the classes are imbalanced
print "Baseline Error: %.2f%%" %((sum(labels)*100.)/len(labels))
Explanation: 2. Feature Transformations Good features are crucial for training well-performing classifiers: 'garbage in, garbage out.' In this section we introduce several transformations that are commonly applied to data as a preprocessing step before training a classifier. 2.1 Normal Standardization Recall the formula of the standard linear model: $$\hat Y = f(\beta^{T} \mathbf{X}) $$ where $\hat Y$ is the vector of predictions, $f(\cdot)$ is the transformation function, $\beta$ is the weights (parameters), and $X$ is the $N \times D$ matrix of features. For simplicity, assume there are just two features: $$ \beta^{T} \mathbf{x}_{i} = \beta_{1}x_{i,1} + \beta_{2}x_{i,2}.$$ Usually $x_{i,1}$ and $x_{i,2}$ will be measured in different units. For instance, in the movie ratings data, the Rotten Tomatoes dimension is on a $0-100$ scale and the Fandango ratings are on $0-5$. The difference in scale causes one dimension to dominate the inner product. Linear models can learn to cope with this imbalance by changing the scales of the weights accordingly, but this makes optimization harder because gradient steps are unequal across dimensions. One way to get rid of heterogeneous scales is to standardize the data so that the values in each dimension are distributed according to the standard Normal distribution. In math, this means we'll transform the data like so: $$\mathbf{X}_{std} = \frac{\mathbf{X} - \boldsymbol{\mu}_{X}}{\boldsymbol{\sigma}_{X}}. $$ This is also called 'z-score scaling.' Let's examine the effect of this transformation on training error.
End of explanation
nba_shot_data = pd.read_csv('../data/nba/NBA_xy_features.csv')
nba_shot_data.head()
Explanation: Standard Normal scaling is a common and usually default first step, especially when you know the data is measured in different units. 2.2 Domain-Specific Transformations Sometimes the data calls for a specific transformation. We'll demonstrate this on the NBA dataset used in our first workshop. Let's load it...
End of explanation
# split data into train and test
train_set_size = int(.80*len(nba_shot_data))
train_features = nba_shot_data.ix[:train_set_size,['x_Coordinate','y_Coordinate']].as_matrix()
test_features = nba_shot_data.ix[train_set_size:,['x_Coordinate','y_Coordinate']].as_matrix()
train_class_labels = nba_shot_data.ix[:train_set_size,['shot_outcome']].as_matrix()
test_class_labels = nba_shot_data.ix[train_set_size:,['shot_outcome']].as_matrix()
#Train logistic regression model
start_time = time.time()
lm.fit(train_features, np.ravel(train_class_labels))
end_time = time.time()
print "Training ended after %.2f seconds." %(end_time-start_time)
# compute the classification error on test data
predictions = lm.predict(test_features)
print "Classification Error on the Test Set: %.2f%%" %(calc_classification_error(predictions, np.array(test_class_labels)) * 100)
# compute the baseline error since the classes are imbalanced
print "Baseline Error: %.2f%%" %(np.sum(test_class_labels)/len(test_class_labels)*100)
# visualize the boundary on the basketball court
visualize_court(lm)
Explanation: And let's run logistic regression on the data just as we did before...
End of explanation
### Transform coordinate system
# radius coordinate: calculate distance from point to hoop
hoop_location = np.array([25.5, 0.])
train_features -= hoop_location
test_features -= hoop_location
train_dists = np.sqrt(np.sum(train_features**2, axis=1))
test_dists = np.sqrt(np.sum(test_features**2, axis=1))
# angle coordinate: use arctan2 function
train_angles = np.arctan2(train_features[:,1], train_features[:,0])
test_angles = np.arctan2(test_features[:,1], test_features[:,0])
# combine vectors into polar coordinates
polar_train_features = np.hstack([train_dists[np.newaxis].T, train_angles[np.newaxis].T])
polar_test_features = np.hstack([test_dists[np.newaxis].T, test_angles[np.newaxis].T])
pd.DataFrame(polar_train_features, columns=["Radius","Angle"]).head()
#Train model
start_time = time.time()
lm.fit(polar_train_features, np.ravel(train_class_labels))
end_time = time.time()
print "Training ended after %.2f seconds." %(end_time-start_time)
# compute the classification error on test data
predictions = lm.predict(polar_test_features)
print "Classification Error on the Test Set: %.2f%%" %(calc_classification_error(predictions, np.array(test_class_labels)) * 100)
# compute the baseline error since the classes are imbalanced
print "Baseline Error: %.2f%%" %(np.sum(test_class_labels)/len(test_class_labels)*100)
# visualize the boundary on the basketball court
visualize_court(lm, coord_type='polar')
Explanation: Now let's transform the Cartesian coordinates into polar coordinates: (x,y) --> (radius, angle)...
End of explanation
from sklearn.linear_model import LinearRegression
# load (x,y) where y is the mystery data
x = np.arange(0, 30, .2)[np.newaxis].T
y = np.load(open('../data/mystery_data.npy','rb'))
### transformation goes here ###
x = np.cos(x)
################################
# initialize regression model
lm = LinearRegression()
lm.fit(x,y)
y_hat = lm.predict(x)
squared_error = np.sum((y - y_hat)**2)
if not np.isclose(squared_error,0):
    print "The squared error should be zero! Yours is %.8f." %(squared_error)
else:
    print "You found the secret transformation! Your squared error is %.8f." %(squared_error)
Explanation: <span style="color:red">STUDENT ACTIVITY (10 mins)</span> 2.3 Mystery Data The data folder contains some mysterious data that can't be modeled well with a linear function. Running the code below, we see the squared error is over 70. However, the error can be driven to zero using one of two transformations. See if you can find one or both. The transformations are common ones you surely know.
End of explanation
# un-zip the paintings file
import zipfile
zipper = zipfile.ZipFile('../data/bob_ross/bob_ross_paintings.npy.zip')
zipper.extractall('../data/bob_ross/')
# load the 403 x 360,000 matrix
br_paintings = np.load(open('../data/bob_ross/bob_ross_paintings.npy','rb'))
print "Dataset size: %d x %d"%(br_paintings.shape)
Explanation: 3. Dimensionality Reduction Sometimes the data calls for more aggressive transformations. High-dimensional data is usually hard to model because classifiers are likely to overfit. Regularization is one way to combat high dimensionality, but often it is not enough. This section will cover dimensionality reduction--a technique for reducing the number of features while still preserving crucial information. This is a form of unsupervised learning since we use no class information. 3.1 Image Dataset: Bob Ross Paintings In this section and throughout the next session, we'll use a dataset of Bob Ross' paintings. Images are a type of data that notoriously have redundant features and whose dimensionality can be reduced significantly, without much loss of information. We'll explore this phenomenon via 403 $400 \times 300$ full-color images of natural landscape paintings. Before we load the data, let's take a minute to review how image data is stored on a computer. Of course, all the computer sees are numbers ranging from 0 to 255. Each pixel takes on one of these values. Furthermore, there are three layers to color images--one for red, blue, and green values. Therefore, the paintings we'll examine are represented as $300 \times 400 \times 3$-dimensional tensors (multi-dimensional arrays). This layering is depicted below. While images need to be represented with three dimensions to be visualized, the learning algorithms we'll consider don't need any notion of color values so I've already flattened the images into vector form, i.e. to create a matrix of size $403 \times 360000$. Let's load the dataset...
End of explanation
# subplot containing first image
ax1 = plt.subplot(1,2,1)
br_painting = br_paintings[70,:]
ax1.imshow(np.reshape(br_painting, (300, 400, 3)))
# subplot containing second image
ax2 = plt.subplot(1,2,2)
br_painting = br_paintings[33,:]
ax2.imshow(np.reshape(br_painting, (300, 400, 3)))
plt.show()
Explanation: and then visualize two of the images...
End of explanation
from sklearn.decomposition import PCA
pca = PCA(n_components=400)
start_time = time.time()
reduced_paintings = pca.fit_transform(br_paintings)
end_time = time.time()
print "Training took a total of %.2f seconds." %(end_time-start_time)
print "Preserved percentage of original variance: %.2f%%" %(pca.explained_variance_ratio_.sum() * 100)
print "Dataset is now of size: %d x %d"%(reduced_paintings.shape)
Explanation: 3.2 Principal Component Analysis As we've seen, the dataset has many, many, many more features (columns) than examples (rows). Simple Lasso or Ridge regularization probably won't be enough to prevent overfitting so we have to do something more drastic. In this section, we'll cover Principal Component Analysis, a popular technique for reducing the dimensionality of data. Unsupervised Learning PCA does not take into consideration labels, only the input features. We can think of PCA as performing unsupervised 'inverse' prediction. Our goal is: for a datapoint $\mathbf{x}_{i}$, find a lower-dimensional representation $\mathbf{h}_{i}$ such that $\mathbf{x}_{i}$ can be 'predicted' from $\mathbf{h}_{i}$ using a linear transformation. In math, this statement can be written as $$\mathbf{\tilde x}_{i} = \mathbf{h}_{i} \mathbf{W}^{T} \text{ where } \mathbf{h}_{i} = \mathbf{x}_{i} \mathbf{W}. $$ $\mathbf{W}$ is a $D \times K$ matrix of parameters that need to be learned--much like the $\beta$ vector in regression models. $D$ is the dimensionality of the original data, and $K$ is the dimensionality of the compressed representation $\mathbf{h}_{i}$. The graphic below reiterates the above described PCA pipeline: Optimization Having defined the PCA model, we look to write learning as an optimization process. Recall that we wish to make a reconstruction of the data, denoted $\mathbf{\tilde x}_{i}$, as close as possible to the original input: $$\mathcal{L} = \sum_{i=1}^{N} (\mathbf{x}_{i} - \mathbf{\tilde x}_{i})^{2}.$$ We can make a substitution for $\mathbf{\tilde x}_{i}$ from the equation above: $$ = \sum_{i=1}^{N} (\mathbf{x}_{i} - \mathbf{h}_{i}\mathbf{W}^{T})^{2}.$$ And we can make another substitution for $\mathbf{h}_{i}$, bringing us to the final form of the loss function: $$ = \sum_{i=1}^{N} (\mathbf{x}_{i} - \mathbf{x}_{i}\mathbf{W}\mathbf{W}^{T})^{2}.$$ We could perform gradient descent on $\mathcal{L}$, just like we do for logistic regression models, but there exists a deterministic solution. We won't show the derivation here, but you can find it here. $\mathbf{W}$ is optimal when it contains the eigenvectors of the data's covariance matrix, and thus we can use a standard eigen decomposition to learn the transform: $$ \boldsymbol{\Sigma}_{\mathbf{X}} = \mathbf{W} \boldsymbol{\Lambda} \mathbf{W}^{T} $$ where $\boldsymbol{\Sigma}_{\mathbf{X}}$ is the data's empirical covariance matrix and $\boldsymbol{\Lambda}$ is a diagonal matrix of eigenvalues. Eigen decompositions can be performed efficiently by any scientific computing library, including numpy. Intuition The connection to the data's (co-)variance becomes a little more clear when the intuitions behind PCA are examined. The PCA transformation projects the data onto linear subspaces oriented in the directions of highest variance. To elaborate, assume the data resides in two dimensions according to the following scatter plot. The columns of $\mathbf{W}$--the $K=2$ principal components--would be the green lines below: 'PCA 1st Dimension' is the direction of greatest variance, and if the data is projected down to one dimension, the new representations would be produced by collapsing the data onto that line. Principal Component Analysis (PCA) Overview Data We observe $\mathbf{x}_{i}$ where \begin{eqnarray} \mathbf{x}_{i} = (x_{i,1}, \dots, x_{i,D}) &:& \mbox{set of $D$ explanatory variables (aka features). No labels.} \end{eqnarray} Parameters $\mathbf{W}$: Matrix with dimensionality $D \times K$, where $D$ is the dimensionality of the original data and $K$ the dimensionality of the new features. The matrix encodes the transformation between the original and new feature spaces. Error Function \begin{eqnarray} \mathcal{L} = \sum_{i=1}^{N} ( \mathbf{x}_{i} - \mathbf{x}_{i} \mathbf{W} \mathbf{W}^{T})^{2} \end{eqnarray} PCA on Bob Ross dataset Now let's run PCA on the Bob Ross paintings dataset... <span style="color:red">Caution: Running PCA on this dataset can take from 30 seconds to several minutes, depending on your computer's processing power.</span>
End of explanation
img_idx = 70
reconstructed_img = pca.inverse_transform(reduced_paintings[img_idx,:])
original_img = br_paintings[70,:]
# subplot for original image
ax1 = plt.subplot(1,2,1)
ax1.imshow(np.reshape(original_img, (300, 400, 3)))
ax1.set_title("Original Painting")
# subplot for reconstruction
ax2 = plt.subplot(1,2,2)
ax2.imshow(np.reshape(reconstructed_img, (300, 400, 3)))
ax2.set_title("Reconstruction")
plt.show()
Explanation: Let's compare one of the original paintings with its reconstruction...
End of explanation
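Since the deterministic solution above is just an eigendecomposition, it can be written out in a few lines of NumPy. The sketch below assumes a generic 2-D array data (a placeholder, not a variable from this notebook); for a matrix as wide as the paintings, forming the full covariance would be impractical, which is why we rely on scikit-learn's PCA in practice:
def pca_by_eig(data, K):
    data_mean = np.mean(data, axis=0)
    centered = data - data_mean
    cov = np.cov(centered, rowvar=0)          # empirical covariance Sigma_X
    eig_vals, eig_vecs = np.linalg.eigh(cov)  # eigh handles symmetric matrices
    order = np.argsort(eig_vals)[::-1]        # sort by decreasing eigenvalue
    W = eig_vecs[:, order[:K]]                # the D x K transform
    h = centered.dot(W)                       # new representations h = x W
    x_tilde = h.dot(W.T) + data_mean          # reconstruction x_tilde = h W^T
    return W, h, x_tilde
With scikit-learn's default (unwhitened) settings, pca.inverse_transform performs the same h W^T plus-mean step used for x_tilde here.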
# get the transformation matrix
transformation_mat = pca.components_ # This is the W^T matrix
# two components to show
comp1 = 13
comp2 = 350
# subplot
ax1 = plt.subplot(1,2,1)
filter1 = transformation_mat[comp1-1,:]
ax1.imshow(np.reshape(filter1, (300, 400, 3)))
ax1.set_title("%dth Principal Component"%(comp1))
# subplot
ax2 = plt.subplot(1,2,2)
filter2 = transformation_mat[comp2-1,:]
ax2.imshow(np.reshape(filter2, (300, 400, 3)))
ax2.set_title("%dth Principal Component"%(comp2))
plt.show()
Explanation: We can also visualize the transformation matrix $\mathbf{W}^{T}$. Its rows act as 'filters' or 'feature detectors'...
End of explanation
# get the movie features
movie_features = movie_data[['RottenTomatoes','RottenTomatoes_User','Metacritic','Metacritic_User','Fandango_Ratingvalue']].as_matrix()
# perform standard scaling again but via SciKit-Learn
from sklearn.preprocessing import StandardScaler
z_scaler = StandardScaler()
movie_features = z_scaler.fit_transform(movie_features)
pca = PCA(n_components=2)
start_time = time.time()
movie_2d_proj = pca.fit_transform(movie_features)
end_time = time.time()
print "Training took a total of %.4f seconds." %(end_time-start_time)
print "Preserved percentage of original variance: %.2f%%" %(pca.explained_variance_ratio_.sum() * 100)
print "Dataset is now of size: %d x %d"%(movie_2d_proj.shape)
labels = movie_data['FILM'].tolist()
classes = movie_data['IMDB'].tolist()
# color the points by IMDB ranking
labels_to_show = []
colors = []
for idx, c in enumerate(classes):
    if c > 7.25:
        colors.append('g')
        if c > 8.:
            labels_to_show.append(labels[idx])
    else:
        colors.append('r')
        if c < 4.75:
            labels_to_show.append(labels[idx])
# plot data
plt.scatter(movie_2d_proj[:, 0], movie_2d_proj[:, 1], marker = 'o', c = colors, s = 150, alpha = .6)
# add movie title annotations
for label, x, y in zip(labels, movie_2d_proj[:, 0].tolist(), movie_2d_proj[:, 1].tolist()):
    if label not in labels_to_show:
        continue
    if x < 0:
        text_x = -20
    else:
        text_x = 150
    plt.annotate(label.decode('utf-8'),xy = (x, y), xytext = (text_x, 40), textcoords = 'offset points', ha = 'right', va = 'bottom', arrowprops = dict(arrowstyle = '->', connectionstyle = 'arc3, rad=0'), bbox = dict(boxstyle = 'round,pad=0.5', fc = 'b', alpha = 0.2))
plt.title('PCA Projection of Movies')
plt.show()
Explanation: 3.3 PCA for Visualization PCA can also be done for visualization purposes. Let's perform PCA on the movie ratings dataset and see if any semblance of the class structure can be seen.
End of explanation
from sklearn.datasets import fetch_olivetti_faces
faces_dataset = fetch_olivetti_faces(shuffle=True)
faces = faces_dataset.data # 400 flattened 64x64 images
person_ids = faces_dataset.target # denotes the identity of person (40 total)
print "Dataset size: %d x %d" %(faces.shape)
print "And the images look like this..."
plt.imshow(np.reshape(faces[200,:], (64, 64)), cmap='Greys_r')
plt.show()
Explanation: <span style="color:red">STUDENT ACTIVITY (until end of session)</span> Your task is to reproduce the above PCA examples on a new dataset of images. Let's load it...
End of explanation
?PCA
### Your code goes here ###
# train PCA model on 'faces'
from sklearn.decomposition import PCA
pca = PCA(n_components=100)
start_time = time.time()
faces_reduced = pca.fit_transform(faces)
end_time = time.time()
###########################
print "Training took a total of %.2f seconds." %(end_time-start_time)
print "Preserved percentage of original variance: %.2f%%" %(pca.explained_variance_ratio_.sum() * 100)
print "Dataset is now of size: %d x %d"%(faces_reduced.shape)
Explanation: This dataset contains 400 64x64 pixel images of 40 people each exhibiting 10 facial expressions. The images are in gray-scale, not color, and therefore flattened vectors contain 4096 dimensions. <span style="color:red">Subtask 1: Run PCA</span>
End of explanation
### Your code goes here ###
# Use learned transformation matrix to project back to the original 4096-dimensional space
# Remember you need to use np.reshape()
###########################
img_idx = 70
reconstructed_img = pca.inverse_transform(faces_reduced[img_idx,:])
original_img = faces[70,:]
# subplot for original image
ax1 = plt.subplot(1,2,1)
ax1.imshow(np.reshape(original_img, (64, 64)), cmap='Greys_r')
ax1.set_title("Original Image")
# subplot for reconstruction
ax2 = plt.subplot(1,2,2)
ax2.imshow(np.reshape(reconstructed_img, (64, 64)), cmap='Greys_r')
ax2.set_title("Reconstruction")
plt.show()
Explanation: <span style="color:red">Subtask 2: Reconstruct an image</span>
End of explanation
### Your code goes here ###
# Now visualize one of the principal components
# Again, remember you need to use np.reshape()
###########################
transformation_mat = pca.components_
# two components to show
comp1 = 5
comp2 = 90
# subplot
ax1 = plt.subplot(1,2,1)
filter1 = transformation_mat[comp1,:]
ax1.imshow(np.reshape(filter1, (64, 64)), cmap='Greys_r')
ax1.set_title("%dth Principal Component"%(comp1))
# subplot
ax2 = plt.subplot(1,2,2)
filter2 = transformation_mat[comp2,:]
ax2.imshow(np.reshape(filter2, (64, 64)), cmap='Greys_r')
ax2.set_title("%dth Principal Component"%(comp2))
plt.show()
Explanation: Your output should look something like what's below (although could be a different face): <span style="color:red">Subtask 3: Visualize one or more components of the transformation matrix (W)</span>
End of explanation
### Your code goes here ###
# Train another PCA model to project the data into two dimensions
# Bonus: color the scatter plot according to the person_ids to see if any structure can be seen
# Run PCA for 2 components
# Generate plot
###########################
pca = PCA(n_components=2)
start_time = time.time()
faces_2d_proj = pca.fit_transform(faces)
end_time = time.time()
print "Training took a total of %.2f seconds." %(end_time-start_time)
print "Preserved percentage of original variance: %.2f%%" %(pca.explained_variance_ratio_.sum() * 100)
print "Dataset is now of size: %d x %d"%(faces_2d_proj.shape)
# Generate plot
# color the points by the person ids
colors = [plt.cm.Set1((c+1)/40.) for c in person_ids]
# plot data
plt.scatter(faces_2d_proj[:, 0], faces_2d_proj[:, 1], marker = 'o', c = colors, s = 175, alpha = .6)
plt.title('2D Projection of Faces Dataset')
plt.show()
Explanation: Your output should look something like what's below (although could have differently ranked components): <span style="color:red">Subtask 4: Generate a 2D scatter plot</span>
End of explanation
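As an added aside, the preserved-variance printout suggests a less arbitrary way to pick the number of components than the n_components=100 used in Subtask 1: keep the smallest K that retains some target fraction of the variance (the 95% threshold below is an arbitrary choice):
# fit with all components, then find the smallest K covering 95% of the variance
pca_full = PCA()
pca_full.fit(faces)
cum_var = np.cumsum(pca_full.explained_variance_ratio_)
K_95 = int(np.searchsorted(cum_var, 0.95)) + 1
print("Components needed for 95%% of the variance: %d" % K_95)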
5,123
Given the following text description, write Python code to implement the functionality described below step by step Description: Cahn-Hilliard Example This example demonstrates how to use PyMKS to solve the Cahn-Hilliard equation. The first section provides some background information about the Cahn-Hilliard equation as well as details about calibrating and validating the MKS model. The example demonstrates how to generate sample data, calibrate the influence coefficients and then pick an appropriate number of local states when state space is continuous. The MKS model and a spectral solution of the Cahn-Hilliard equation are compared on a larger test microstructure over multiple time steps. Cahn-Hilliard Equation The Cahn-Hilliard equation is used to simulate microstructure evolution during spinodal decomposition and has the following form, $$ \dot{\phi} = \nabla^2 \left( \phi^3 - \phi \right) - \gamma \nabla^4 \phi $$ where $\phi$ is a conserved order parameter and $\sqrt{\gamma}$ represents the width of the interface. In this example, the Cahn-Hilliard equation is solved using a semi-implicit spectral scheme with periodic boundary conditions, see Chang and Rutenberg for more details. Step1: Modeling with MKS In this example the MKS equation will be used to predict microstructure at the next time step using $$p[s, 1] = \sum_{r=0}^{S-1} \sum_{l=0}^{L-1} \alpha[l, r, 1] m[l, s - r, 0] + ...$$ where $p[s, n + 1]$ is the concentration field at location $s$ and at time $n + 1$, $r$ is the convolution dummy variable and $l$ indicates the local state variable. $\alpha[l, r, n]$ are the influence coefficients and $m[l, r, 0]$ the microstructure function given to the model. $S$ is the total discretized volume and $L$ is the total number of local states n_states chosen to use. The model will march forward in time by recursively discretizing $p[s, n]$ and substituting it back in for $m[l, s - r, n]$. Calibration Datasets Unlike the elastostatic examples, the microstructure (concentration field) for this simulation doesn't have discrete phases. The microstructure is a continuous field that can have a range of values which can change over time, therefore the first order influence coefficients cannot be calibrated with delta microstructures. Instead a large number of simulations with random initial conditions are used to calibrate the first order influence coefficients using linear regression. The function make_cahn_hilliard from pymks.datasets provides an interface to generate calibration datasets for the influence coefficients. To use make_cahn_hilliard, we need to set the number of samples we want to use to calibrate the influence coefficients using n_samples, the size of the simulation domain using size and the time step using dt. Step2: The function make_cahn_hilliard generates n_samples number of random microstructures, X, and the associated updated microstructures, y, after one time step. The following cell plots one of these microstructures along with its update. Step3: Calibrate Influence Coefficients As mentioned above, the microstructures (concentration fields) do not have discrete phases. This leaves the number of local states in local state space as a free hyperparameter. In previous work it has been shown that as you increase the number of local states, the accuracy of the MKS model increases (see Fast et al.), but as the number of local states increases, the difference in accuracy decreases.
Some work needs to be done in order to find the practical number of local states that we will use. Optimizing the Number of Local States Let's split the calibration dataset into testing and training datasets. The function train_test_split from the machine learning python module sklearn provides a convenient interface to do this. Half of the dataset will be used for training and the remaining half will be used for testing by setting test_size equal to 0.5. The state of the random number generator used to make the split can be set using random_state. Step4: We are now going to calibrate the influence coefficients while varying the number of local states from 2 up to 10. Each of these models will then predict the evolution of the concentration fields. Mean square error will be used to compare the results with the testing dataset to evaluate how the MKS model's performance changes as we change the number of local states. First we need to import the class MKSLocalizationModel from pymks. Step5: Next we will calibrate the influence coefficients while varying the number of local states and compute the mean squared error. The following demonstrates how to use Scikit-learn's GridSearchCV to optimize n_states as a hyperparameter. Of course, the best fit is always with a larger value of n_states. Increasing this parameter does not overfit the data. Step6: As expected the accuracy of the MKS model monotonically increases as we increase n_states, but accuracy doesn't improve significantly once n_states reaches single digits. In order to save on computation costs let's calibrate the influence coefficients with n_states equal to 6, but realize that if we need slightly more accuracy the value can be increased. Step7: Here are the first 4 influence coefficients. Step8: Predict Microstructure Evolution With the calibrated influence coefficients, we are ready to predict the evolution of a concentration field. In order to do this, we need to have the Cahn-Hilliard simulation and the MKS model start with the same initial concentration phi0 and evolve in time. In order to do the Cahn-Hilliard simulation we need an instance of the class CahnHilliardSimulation. Step9: In order to move forward in time, we need to feed the concentration back into the Cahn-Hilliard simulation and the MKS model. Step10: Let's take a look at the concentration fields. Step11: The MKS model was able to capture the microstructure evolution with 6 local states. Resizing the Coefficients to use on Larger Systems Now let's try to predict a larger simulation by resizing the coefficients and providing a larger initial concentration field. Step12: Once again we are going to march forward in time by feeding the concentration fields back into the Cahn-Hilliard simulation and the MKS model. Step13: Let's take a look at the results.
Python Code: %matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
Explanation: Cahn-Hilliard Example This example demonstrates how to use PyMKS to solve the Cahn-Hilliard equation. The first section provides some background information about the Cahn-Hilliard equation as well as details about calibrating and validating the MKS model. The example demonstrates how to generate sample data, calibrate the influence coefficients and then pick an appropriate number of local states when state space is continuous. The MKS model and a spectral solution of the Cahn-Hilliard equation are compared on a larger test microstructure over multiple time steps. Cahn-Hilliard Equation The Cahn-Hilliard equation is used to simulate microstructure evolution during spinodal decomposition and has the following form, $$ \dot{\phi} = \nabla^2 \left( \phi^3 - \phi \right) - \gamma \nabla^4 \phi $$ where $\phi$ is a conserved order parameter and $\sqrt{\gamma}$ represents the width of the interface. In this example, the Cahn-Hilliard equation is solved using a semi-implicit spectral scheme with periodic boundary conditions, see Chang and Rutenberg for more details.
End of explanation
import pymks
from pymks.datasets import make_cahn_hilliard
n = 41
n_samples = 400
dt = 1e-2
np.random.seed(99)
X, y = make_cahn_hilliard(n_samples=n_samples, size=(n, n), dt=dt)
Explanation: Modeling with MKS In this example the MKS equation will be used to predict microstructure at the next time step using $$p[s, 1] = \sum_{r=0}^{S-1} \sum_{l=0}^{L-1} \alpha[l, r, 1] m[l, s - r, 0] + ...$$ where $p[s, n + 1]$ is the concentration field at location $s$ and at time $n + 1$, $r$ is the convolution dummy variable and $l$ indicates the local state variable. $\alpha[l, r, n]$ are the influence coefficients and $m[l, r, 0]$ the microstructure function given to the model. $S$ is the total discretized volume and $L$ is the total number of local states n_states chosen to use. The model will march forward in time by recursively discretizing $p[s, n]$ and substituting it back in for $m[l, s - r, n]$. Calibration Datasets Unlike the elastostatic examples, the microstructure (concentration field) for this simulation doesn't have discrete phases. The microstructure is a continuous field that can have a range of values which can change over time, therefore the first order influence coefficients cannot be calibrated with delta microstructures. Instead a large number of simulations with random initial conditions are used to calibrate the first order influence coefficients using linear regression. The function make_cahn_hilliard from pymks.datasets provides an interface to generate calibration datasets for the influence coefficients. To use make_cahn_hilliard, we need to set the number of samples we want to use to calibrate the influence coefficients using n_samples, the size of the simulation domain using size and the time step using dt.
End of explanation
from pymks.tools import draw_concentrations
draw_concentrations((X[0], y[0]), labels=('Input Concentration', 'Output Concentration'))
Explanation: The function make_cahn_hilliard generates n_samples number of random microstructures, X, and the associated updated microstructures, y, after one time step. The following cell plots one of these microstructures along with its update.
End of explanation
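For readers curious what the semi-implicit spectral scheme cited above looks like, here is a rough single-step sketch. The grid spacing, the gamma value, and the square 2-D layout are assumptions made for illustration -- CahnHilliardSimulation in pymks is the actual implementation used in this example:
def cahn_hilliard_step(phi, dt=1e-2, gamma=1.):
    # phi: a square 2-D periodic concentration field
    k = 2 * np.pi * np.fft.fftfreq(phi.shape[0])
    k_x, k_y = np.meshgrid(k, k)
    ksq = k_x**2 + k_y**2
    F_phi = np.fft.fft2(phi)            # linear 4th-order term handled implicitly
    F_nl = np.fft.fft2(phi**3 - phi)    # nonlinear term handled explicitly
    F_new = (F_phi - dt * ksq * F_nl) / (1. + dt * gamma * ksq**2)
    return np.fft.ifft2(F_new).real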
import sklearn
from sklearn.cross_validation import train_test_split
split_shape = (X.shape[0],) + (np.product(X.shape[1:]),)
X_train, X_test, y_train, y_test = train_test_split(X.reshape(split_shape), y.reshape(split_shape), test_size=0.5, random_state=3)
Explanation: Calibrate Influence Coefficients As mentioned above, the microstructures (concentration fields) do not have discrete phases. This leaves the number of local states in local state space as a free hyperparameter. In previous work it has been shown that as you increase the number of local states, the accuracy of the MKS model increases (see Fast et al.), but as the number of local states increases, the difference in accuracy decreases. Some work needs to be done in order to find the practical number of local states that we will use. Optimizing the Number of Local States Let's split the calibration dataset into testing and training datasets. The function train_test_split from the machine learning python module sklearn provides a convenient interface to do this. Half of the dataset will be used for training and the remaining half will be used for testing by setting test_size equal to 0.5. The state of the random number generator used to make the split can be set using random_state.
End of explanation
from pymks import MKSLocalizationModel
from pymks.bases import PrimitiveBasis
Explanation: We are now going to calibrate the influence coefficients while varying the number of local states from 2 up to 10. Each of these models will then predict the evolution of the concentration fields. Mean square error will be used to compare the results with the testing dataset to evaluate how the MKS model's performance changes as we change the number of local states. First we need to import the class MKSLocalizationModel from pymks.
End of explanation
from sklearn.grid_search import GridSearchCV
parameters_to_tune = {'n_states': np.arange(2, 11)}
prim_basis = PrimitiveBasis(2, [-1, 1])
model = MKSLocalizationModel(prim_basis)
gs = GridSearchCV(model, parameters_to_tune, cv=5, fit_params={'size': (n, n)})
gs.fit(X_train, y_train)
print(gs.best_estimator_)
print(gs.score(X_test, y_test))
from pymks.tools import draw_gridscores
draw_gridscores(gs.grid_scores_, 'n_states', score_label='R-squared', param_label='L-Number of Local States')
Explanation: Next we will calibrate the influence coefficients while varying the number of local states and compute the mean squared error. The following demonstrates how to use Scikit-learn's GridSearchCV to optimize n_states as a hyperparameter. Of course, the best fit is always with a larger value of n_states. Increasing this parameter does not overfit the data.
End of explanation
model = MKSLocalizationModel(basis=PrimitiveBasis(6, [-1, 1]))
model.fit(X, y)
Explanation: As expected the accuracy of the MKS model monotonically increases as we increase n_states, but accuracy doesn't improve significantly once n_states reaches single digits. In order to save on computation costs let's calibrate the influence coefficients with n_states equal to 6, but realize that if we need slightly more accuracy the value can be increased.
End of explanation
from pymks.tools import draw_coeff
draw_coeff(model.coeff[...,:4])
Explanation: Here are the first 4 influence coefficients.
End of explanation
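A note on what n_states means physically: the primitive basis discretizes the continuous concentration into n_states 'hat' functions on the domain [-1, 1], so each location is represented by interpolation weights over the local states. The sketch below is a rough stand-in for the idea -- PrimitiveBasis in pymks is the authoritative implementation:
def primitive_weights(field, n_states=6, domain=(-1., 1.)):
    states = np.linspace(domain[0], domain[1], n_states)
    delta = states[1] - states[0]
    # m[..., l] is the weight of local state l at each location; the weights
    # at a point sum to 1 for field values inside the domain
    return np.maximum(1 - np.abs(field[..., None] - states) / delta, 0)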
from pymks.datasets.cahn_hilliard_simulation import CahnHilliardSimulation
np.random.seed(191)
phi0 = np.random.normal(0, 1e-9, (1, n, n))
ch_sim = CahnHilliardSimulation(dt=dt)
phi_sim = phi0.copy()
phi_pred = phi0.copy()
Explanation: Predict Microstructure Evolution With the calibrated influence coefficients, we are ready to predict the evolution of a concentration field. In order to do this, we need to have the Cahn-Hilliard simulation and the MKS model start with the same initial concentration phi0 and evolve in time. In order to do the Cahn-Hilliard simulation we need an instance of the class CahnHilliardSimulation.
End of explanation
time_steps = 10
for ii in range(time_steps):
    ch_sim.run(phi_sim)
    phi_sim = ch_sim.response
    phi_pred = model.predict(phi_pred)
Explanation: In order to move forward in time, we need to feed the concentration back into the Cahn-Hilliard simulation and the MKS model.
End of explanation
from pymks.tools import draw_concentrations_compare
draw_concentrations_compare((phi_sim[0], phi_pred[0]), labels=('Simulation', 'MKS'))
Explanation: Let's take a look at the concentration fields.
End of explanation
m = 3 * n
model.resize_coeff((m, m))
phi0 = np.random.normal(0, 1e-9, (1, m, m))
phi_sim = phi0.copy()
phi_pred = phi0.copy()
Explanation: The MKS model was able to capture the microstructure evolution with 6 local states. Resizing the Coefficients to use on Larger Systems Now let's try to predict a larger simulation by resizing the coefficients and providing a larger initial concentration field.
End of explanation
5,124
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Land MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Conservation Properties 3. Key Properties --&gt; Timestepping Framework 4. Key Properties --&gt; Software Properties 5. Grid 6. Grid --&gt; Horizontal 7. Grid --&gt; Vertical 8. Soil 9. Soil --&gt; Soil Map 10. Soil --&gt; Snow Free Albedo 11. Soil --&gt; Hydrology 12. Soil --&gt; Hydrology --&gt; Freezing 13. Soil --&gt; Hydrology --&gt; Drainage 14. Soil --&gt; Heat Treatment 15. Snow 16. Snow --&gt; Snow Albedo 17. Vegetation 18. Energy Balance 19. Carbon Cycle 20. Carbon Cycle --&gt; Vegetation 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality 26. Carbon Cycle --&gt; Litter 27. Carbon Cycle --&gt; Soil 28. Carbon Cycle --&gt; Permafrost Carbon 29. Nitrogen Cycle 30. River Routing 31. River Routing --&gt; Oceanic Discharge 32. Lakes 33. Lakes --&gt; Method 34. Lakes --&gt; Wetlands 1. Key Properties Land surface key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Description Is Required Step7: 1.4. Land Atmosphere Flux Exchanges Is Required Step8: 1.5. Atmospheric Coupling Treatment Is Required Step9: 1.6. Land Cover Is Required Step10: 1.7. Land Cover Change Is Required Step11: 1.8. Tiling Is Required Step12: 2. Key Properties --&gt; Conservation Properties TODO 2.1. Energy Is Required Step13: 2.2. Water Is Required Step14: 2.3. Carbon Is Required Step15: 3. Key Properties --&gt; Timestepping Framework TODO 3.1. Timestep Dependent On Atmosphere Is Required Step16: 3.2. Time Step Is Required Step17: 3.3. Timestepping Method Is Required Step18: 4. Key Properties --&gt; Software Properties Software properties of land surface code 4.1. Repository Is Required Step19: 4.2. Code Version Is Required Step20: 4.3. Code Languages Is Required Step21: 5. Grid Land surface grid 5.1. Overview Is Required Step22: 6. Grid --&gt; Horizontal The horizontal grid in the land surface 6.1. Description Is Required Step23: 6.2. Matches Atmosphere Grid Is Required Step24: 7. Grid --&gt; Vertical The vertical grid in the soil 7.1. Description Is Required Step25: 7.2. Total Depth Is Required Step26: 8. Soil Land surface soil 8.1. Overview Is Required Step27: 8.2. Heat Water Coupling Is Required Step28: 8.3. Number Of Soil layers Is Required Step29: 8.4. Prognostic Variables Is Required Step30: 9. Soil --&gt; Soil Map Key properties of the land surface soil map 9.1. Description Is Required Step31: 9.2. Structure Is Required Step32: 9.3. Texture Is Required Step33: 9.4. Organic Matter Is Required Step34: 9.5. Albedo Is Required Step35: 9.6. Water Table Is Required Step36: 9.7. Continuously Varying Soil Depth Is Required Step37: 9.8. Soil Depth Is Required Step38: 10. Soil --&gt; Snow Free Albedo TODO 10.1. Prognostic Is Required Step39: 10.2. Functions Is Required Step40: 10.3. Direct Diffuse Is Required Step41: 10.4. Number Of Wavelength Bands Is Required Step42: 11. 
Soil --&gt; Hydrology Key properties of the land surface soil hydrology 11.1. Description Is Required Step43: 11.2. Time Step Is Required Step44: 11.3. Tiling Is Required Step45: 11.4. Vertical Discretisation Is Required Step46: 11.5. Number Of Ground Water Layers Is Required Step47: 11.6. Lateral Connectivity Is Required Step48: 11.7. Method Is Required Step49: 12. Soil --&gt; Hydrology --&gt; Freezing TODO 12.1. Number Of Ground Ice Layers Is Required Step50: 12.2. Ice Storage Method Is Required Step51: 12.3. Permafrost Is Required Step52: 13. Soil --&gt; Hydrology --&gt; Drainage TODO 13.1. Description Is Required Step53: 13.2. Types Is Required Step54: 14. Soil --&gt; Heat Treatment TODO 14.1. Description Is Required Step55: 14.2. Time Step Is Required Step56: 14.3. Tiling Is Required Step57: 14.4. Vertical Discretisation Is Required Step58: 14.5. Heat Storage Is Required Step59: 14.6. Processes Is Required Step60: 15. Snow Land surface snow 15.1. Overview Is Required Step61: 15.2. Tiling Is Required Step62: 15.3. Number Of Snow Layers Is Required Step63: 15.4. Density Is Required Step64: 15.5. Water Equivalent Is Required Step65: 15.6. Heat Content Is Required Step66: 15.7. Temperature Is Required Step67: 15.8. Liquid Water Content Is Required Step68: 15.9. Snow Cover Fractions Is Required Step69: 15.10. Processes Is Required Step70: 15.11. Prognostic Variables Is Required Step71: 16. Snow --&gt; Snow Albedo TODO 16.1. Type Is Required Step72: 16.2. Functions Is Required Step73: 17. Vegetation Land surface vegetation 17.1. Overview Is Required Step74: 17.2. Time Step Is Required Step75: 17.3. Dynamic Vegetation Is Required Step76: 17.4. Tiling Is Required Step77: 17.5. Vegetation Representation Is Required Step78: 17.6. Vegetation Types Is Required Step79: 17.7. Biome Types Is Required Step80: 17.8. Vegetation Time Variation Is Required Step81: 17.9. Vegetation Map Is Required Step82: 17.10. Interception Is Required Step83: 17.11. Phenology Is Required Step84: 17.12. Phenology Description Is Required Step85: 17.13. Leaf Area Index Is Required Step86: 17.14. Leaf Area Index Description Is Required Step87: 17.15. Biomass Is Required Step88: 17.16. Biomass Description Is Required Step89: 17.17. Biogeography Is Required Step90: 17.18. Biogeography Description Is Required Step91: 17.19. Stomatal Resistance Is Required Step92: 17.20. Stomatal Resistance Description Is Required Step93: 17.21. Prognostic Variables Is Required Step94: 18. Energy Balance Land surface energy balance 18.1. Overview Is Required Step95: 18.2. Tiling Is Required Step96: 18.3. Number Of Surface Temperatures Is Required Step97: 18.4. Evaporation Is Required Step98: 18.5. Processes Is Required Step99: 19. Carbon Cycle Land surface carbon cycle 19.1. Overview Is Required Step100: 19.2. Tiling Is Required Step101: 19.3. Time Step Is Required Step102: 19.4. Anthropogenic Carbon Is Required Step103: 19.5. Prognostic Variables Is Required Step104: 20. Carbon Cycle --&gt; Vegetation TODO 20.1. Number Of Carbon Pools Is Required Step105: 20.2. Carbon Pools Is Required Step106: 20.3. Forest Stand Dynamics Is Required Step107: 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis TODO 21.1. Method Is Required Step108: 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration TODO 22.1. Maintainance Respiration Is Required Step109: 22.2. Growth Respiration Is Required Step110: 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation TODO 23.1. Method Is Required Step111: 23.2. Allocation Bins Is Required Step112: 23.3. 
Allocation Fractions Is Required Step113: 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology TODO 24.1. Method Is Required Step114: 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality TODO 25.1. Method Is Required Step115: 26. Carbon Cycle --&gt; Litter TODO 26.1. Number Of Carbon Pools Is Required Step116: 26.2. Carbon Pools Is Required Step117: 26.3. Decomposition Is Required Step118: 26.4. Method Is Required Step119: 27. Carbon Cycle --&gt; Soil TODO 27.1. Number Of Carbon Pools Is Required Step120: 27.2. Carbon Pools Is Required Step121: 27.3. Decomposition Is Required Step122: 27.4. Method Is Required Step123: 28. Carbon Cycle --&gt; Permafrost Carbon TODO 28.1. Is Permafrost Included Is Required Step124: 28.2. Emitted Greenhouse Gases Is Required Step125: 28.3. Decomposition Is Required Step126: 28.4. Impact On Soil Properties Is Required Step127: 29. Nitrogen Cycle Land surface nitrogen cycle 29.1. Overview Is Required Step128: 29.2. Tiling Is Required Step129: 29.3. Time Step Is Required Step130: 29.4. Prognostic Variables Is Required Step131: 30. River Routing Land surface river routing 30.1. Overview Is Required Step132: 30.2. Tiling Is Required Step133: 30.3. Time Step Is Required Step134: 30.4. Grid Inherited From Land Surface Is Required Step135: 30.5. Grid Description Is Required Step136: 30.6. Number Of Reservoirs Is Required Step137: 30.7. Water Re Evaporation Is Required Step138: 30.8. Coupled To Atmosphere Is Required Step139: 30.9. Coupled To Land Is Required Step140: 30.10. Quantities Exchanged With Atmosphere Is Required Step141: 30.11. Basin Flow Direction Map Is Required Step142: 30.12. Flooding Is Required Step143: 30.13. Prognostic Variables Is Required Step144: 31. River Routing --&gt; Oceanic Discharge TODO 31.1. Discharge Type Is Required Step145: 31.2. Quantities Transported Is Required Step146: 32. Lakes Land surface lakes 32.1. Overview Is Required Step147: 32.2. Coupling With Rivers Is Required Step148: 32.3. Time Step Is Required Step149: 32.4. Quantities Exchanged With Rivers Is Required Step150: 32.5. Vertical Grid Is Required Step151: 32.6. Prognostic Variables Is Required Step152: 33. Lakes --&gt; Method TODO 33.1. Ice Treatment Is Required Step153: 33.2. Albedo Is Required Step154: 33.3. Dynamics Is Required Step155: 33.4. Dynamic Lake Extent Is Required Step156: 33.5. Endorheic Basins Is Required Step157: 34. Lakes --&gt; Wetlands TODO 34.1. Description Is Required
Python Code: # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'sandbox-2', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land MIP Era: CMIP6 Institute: MIROC Source ID: SANDBOX-2 Topic: Land Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. Properties: 154 (96 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-20 15:02:41 Document Setup IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Conservation Properties 3. Key Properties --&gt; Timestepping Framework 4. Key Properties --&gt; Software Properties 5. Grid 6. Grid --&gt; Horizontal 7. Grid --&gt; Vertical 8. Soil 9. Soil --&gt; Soil Map 10. Soil --&gt; Snow Free Albedo 11. Soil --&gt; Hydrology 12. Soil --&gt; Hydrology --&gt; Freezing 13. Soil --&gt; Hydrology --&gt; Drainage 14. Soil --&gt; Heat Treatment 15. Snow 16. Snow --&gt; Snow Albedo 17. Vegetation 18. Energy Balance 19. Carbon Cycle 20. Carbon Cycle --&gt; Vegetation 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality 26. Carbon Cycle --&gt; Litter 27. Carbon Cycle --&gt; Soil 28. Carbon Cycle --&gt; Permafrost Carbon 29. Nitrogen Cycle 30. River Routing 31. River Routing --&gt; Oceanic Discharge 32. Lakes 33. Lakes --&gt; Method 34. Lakes --&gt; Wetlands 1. Key Properties Land surface key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Fluxes exchanged with the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --&gt; Conservation Properties TODO 2.1. Energy Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. 
Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Timestepping Framework TODO 3.1. Timestep Dependent On Atmosphere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a time step dependent on the frequency of atmosphere coupling? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overall timestep of land surface model (i.e. time between calls) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Timestepping Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of time stepping method and associated time step(s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Software Properties Software properties of land surface code 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Grid Land surface grid 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the grid in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Grid --&gt; Horizontal The horizontal grid in the land surface 6.1. 
Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the horizontal grid (not including any tiling) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the horizontal grid match the atmosphere? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Grid --&gt; Vertical The vertical grid in the soil 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the vertical grid in the soil (not including any tiling) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.total_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.2. Total Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The total depth of the soil (in metres) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Soil Land surface soil 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of soil in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_water_coupling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Heat Water Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the coupling between heat and water in the soil End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.number_of_soil layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 8.3. Number Of Soil layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the soil scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Soil --&gt; Soil Map Key properties of the land surface soil map 9.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of soil map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.2. 
Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil structure map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.texture') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.3. Texture Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil texture map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.organic_matter') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.4. Organic Matter Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil organic matter map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.5. Albedo Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil albedo map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.water_table') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.6. Water Table Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil water table map, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 9.7. Continuously Varying Soil Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Do the soil properties vary continuously with depth? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.8. Soil Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil depth map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 10. Soil --&gt; Snow Free Albedo TODO 10.1. Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is snow free albedo prognostic? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "soil humidity" # "vegetation state" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If prognostic, describe the dependencies on snow free albedo calculations End of explanation # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "distinction between direct and diffuse albedo" # "no distinction between direct and diffuse albedo" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.3. Direct Diffuse Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, describe the distinction between direct and diffuse albedo End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 10.4. Number Of Wavelength Bands Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, enter the number of wavelength bands used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11. Soil --&gt; Hydrology Key properties of the land surface soil hydrology 11.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the soil hydrological model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river soil hydrology in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil hydrology tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.5. Number Of Ground Water Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers that may contain water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "perfect connectivity" # "Darcian flow" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.6. Lateral Connectivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe the lateral connectivity between tiles End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.hydrology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Bucket" # "Force-restore" # "Choisnel" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.7. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The hydrological dynamics scheme in the land surface model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 12. Soil --&gt; Hydrology --&gt; Freezing TODO 12.1. Number Of Ground Ice Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How many soil layers may contain ground ice End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Ice Storage Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of ice storage End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.3. Permafrost Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of permafrost, if any, within the land surface scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13. Soil --&gt; Hydrology --&gt; Drainage TODO 13.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of how drainage is included in the land surface scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Gravity drainage" # "Horton mechanism" # "topmodel-based" # "Dunne mechanism" # "Lateral subsurface flow" # "Baseflow from groundwater" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Different types of runoff represented by the land surface model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14. Soil --&gt; Heat Treatment TODO 14.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of how heat treatment properties are defined End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of soil heat scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil heat treatment tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Force-restore" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.5. Heat Storage Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the method of heat storage End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "soil moisture freeze-thaw" # "coupling with snow temperature" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.6. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe processes included in the treatment of soil heat End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Snow Land surface snow 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of snow in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.number_of_snow_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.3. Number Of Snow Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of snow levels used in the land surface scheme/model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.density') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.4. Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow density End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.water_equivalent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.5. 
Water Equivalent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the snow water equivalent End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.heat_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.6. Heat Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the heat content of snow End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.temperature') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.7. Temperature Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow temperature End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.liquid_water_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.8. Liquid Water Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow liquid water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_cover_fractions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "ground snow fraction" # "vegetation snow fraction" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.9. Snow Cover Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify cover fractions used in the surface snow scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "snow interception" # "snow melting" # "snow freezing" # "blowing snow" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.10. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Snow related processes in the land surface scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.11. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the snow scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_albedo.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "prescribed" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16. Snow --&gt; Snow Albedo TODO 16.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of snow-covered land albedo End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.snow.snow_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "snow age" # "snow density" # "snow grain type" # "aerosol deposition" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If prognostic, describe the dependencies on snow albedo calculations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17. Vegetation Land surface vegetation 17.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vegetation in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 17.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of vegetation scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.dynamic_vegetation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 17.3. Dynamic Vegetation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there dynamic evolution of vegetation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.4. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vegetation tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation types" # "biome types" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.5. Vegetation Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Vegetation classification used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "broadleaf tree" # "needleleaf tree" # "C3 grass" # "C4 grass" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.6. Vegetation Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of vegetation types in the classification, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biome_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "evergreen needleleaf forest" # "evergreen broadleaf forest" # "deciduous needleleaf forest" # "deciduous broadleaf forest" # "mixed forest" # "woodland" # "wooded grassland" # "closed shrubland" # "opne shrubland" # "grassland" # "cropland" # "wetlands" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.7.
Biome Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of biome types in the classification, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_time_variation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed (not varying)" # "prescribed (varying from files)" # "dynamical (varying from simulation)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.8. Vegetation Time Variation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How the vegetation fractions in each tile vary with time End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.9. Vegetation Map Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.interception') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 17.10. Interception Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is vegetation interception of rainwater represented? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic (vegetation map)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.11. Phenology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation phenology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.12. Phenology Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation phenology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prescribed" # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.13. Leaf Area Index Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation leaf area index End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.14. Leaf Area Index Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of leaf area index End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biomass') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.15.
Biomass Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation biomass End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biomass_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.16. Biomass Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biomass End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.17. Biogeography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation biogeography End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.18. Biogeography Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biogeography End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "light" # "temperature" # "water availability" # "CO2" # "O3" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.19. Stomatal Resistance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify what the vegetation stomatal resistance depends on End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.20. Stomatal Resistance Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation stomatal resistance End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.21. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the vegetation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18. Energy Balance Land surface energy balance 18.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of energy balance in land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the energy balance tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 18.3. Number Of Surface Temperatures Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "alpha" # "beta" # "combined" # "Monteith potential evaporation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.4. Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify the formulation method for land surface evaporation, from soil and vegetation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "transpiration" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe which processes are included in the energy balance scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19. Carbon Cycle Land surface carbon cycle 19.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of carbon cycle in land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the carbon cycle tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of carbon cycle in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "grand slam protocol" # "residence time" # "decay time" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19.4. Anthropogenic Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Describe the treatment of the anthropogenic carbon pool End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the carbon scheme End of explanation # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 20. Carbon Cycle --&gt; Vegetation TODO 20.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.3. Forest Stand Dynamics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of forest stand dynamics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis TODO 21.1. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration TODO 22.1. Maintenance Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for maintenance respiration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.2. Growth Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for growth respiration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation TODO 23.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the allocation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "leaves + stems + roots" # "leaves + stems + roots (leafy + woody)" # "leaves + fine roots + coarse roots + stems" # "whole plant (no distinction)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.2.
Allocation Bins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify distinct carbon bins used in allocation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "function of vegetation type" # "function of plant allometry" # "explicitly calculated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.3. Allocation Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the fractions of allocation are calculated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology TODO 24.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the phenology scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality TODO 25.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the mortality scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26. Carbon Cycle --&gt; Litter TODO 26.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 27. Carbon Cycle --&gt; Soil TODO 27.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 28. Carbon Cycle --&gt; Permafrost Carbon TODO 28.1. Is Permafrost Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is permafrost included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.2. Emitted Greenhouse Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the GHGs emitted End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.4. Impact On Soil Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the impact of permafrost on soil properties End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29. Nitrogen Cycle Land surface nitrogen cycle 29.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the nitrogen cycle in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the nitrogen cycle tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 29.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of nitrogen cycle in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the nitrogen scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30. River Routing Land surface river routing 30.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of river routing in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the river routing tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river routing scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 30.4. Grid Inherited From Land Surface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the grid inherited from land surface? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.5. Grid Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of grid, if not inherited from land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.number_of_reservoirs') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.6. Number Of Reservoirs Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of reservoirs End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.water_re_evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "flood plains" # "irrigation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.7. Water Re Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N TODO End of explanation # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 30.8. Coupled To Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is river routing coupled to the atmosphere model component? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.coupled_to_land') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.9. Coupled To Land Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the coupling between land and rivers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.10. Quantities Exchanged With Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "adapted for other periods" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.11. Basin Flow Direction Map Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of basin flow direction map is being used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.flooding') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.12. Flooding Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the representation of flooding, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.13. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the river routing End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "direct (large rivers)" # "diffuse" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31. River Routing --&gt; Oceanic Discharge TODO 31.1. Discharge Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify how rivers are discharged to the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.2.
Quantities Transported Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Quantities that are exchanged from river-routing to the ocean model component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32. Lakes Land surface lakes 32.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lakes in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.coupling_with_rivers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.2. Coupling With Rivers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are lakes coupled to the river routing model component? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 32.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of lake scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.4. Quantities Exchanged With Rivers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If coupling with rivers, which quantities are exchanged between the lakes and rivers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.vertical_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.5. Vertical Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vertical grid of lakes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the lake scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.ice_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 33. Lakes --&gt; Method TODO 33.1. Ice Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is lake ice included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33.2. Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of lake albedo End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.lakes.method.dynamics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "No lake dynamics" # "vertical" # "horizontal" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33.3. Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which dynamics of lakes are treated? horizontal, vertical, etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 33.4. Dynamic Lake Extent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a dynamic lake extent scheme included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.endorheic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 33.5. Endorheic Basins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basins not flowing to ocean included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.wetlands.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34. Lakes --&gt; Wetlands TODO 34.1. Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of wetlands, if any End of explanation
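For reference, a completed cell follows the same pattern for every property documented above. The sketch below fills in the lake dynamics property with purely illustrative selections (not a description of any real model), assuming DOC.set_value may be called once per selection for multi-valued (0.N/1.N) properties:
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Each call records one selection from the valid choices listed above.
DOC.set_value("vertical")
DOC.set_value("horizontal")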
5,125
Given the following text description, write Python code to implement the functionality described below step by step Description: Display Exercise 1 Imports Put any imports needed to display rich output in the following cell Step1: Basic rich display Find a Physics-related image on the internet and display it in this notebook using the Image object. Load it using the url argument to Image (don't upload the image to this server). Make sure to set the embed flag so the image is embedded in the notebook data. Set the width and height to 600px. Step2: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate.
Python Code:
from IPython.display import HTML
from IPython.display import Image
from IPython.display import IFrame

assert True # leave this to grade the import statements
Explanation: Display Exercise 1
Imports
Put any imports needed to display rich output in the following cell:
End of explanation
# embed=True pulls the image bytes into the notebook file, as the exercise
# asks, instead of linking to the remote URL at display time.
Image(url="http://thumbs.dreamstime.com/z/biology-chemistry-physics-26975213.jpg",
      embed=True, width=600, height=600)

assert True # leave this to grade the image display
Explanation: Basic rich display
Find a Physics related image on the internet and display it in this notebook using the Image object. Load it using the url argument to Image (don't upload the image to this server). Make sure to set the embed flag so the image is embedded in the notebook data. Set the width and height to 600px.
End of explanation
%%html
<table>
  <tr>
    <th>Name</th>
    <th>Symbol</th>
    <th>Antiparticle</th>
    <th>Charge (e)</th>
    <th>Mass (MeV/$c^2$)</th>
  </tr>
  <tr><td> up </td><td> u </td><td> $\bar{u}$ </td><td> +$\frac{2}{3}$ </td><td> 1.7-3.3 </td></tr>
  <tr><td> down </td><td> d </td><td> $\bar{d}$ </td><td> -$\frac{1}{3}$ </td><td> 3.5-6.0 </td></tr>
  <tr><td> charm </td><td> c </td><td> $\bar{c}$ </td><td> +$\frac{2}{3}$ </td><td> 1,160-1,340 </td></tr>
  <tr><td> strange </td><td> s </td><td> $\bar{s}$ </td><td> -$\frac{1}{3}$ </td><td> 70-130 </td></tr>
  <tr><td> top </td><td> t </td><td> $\bar{t}$ </td><td> +$\frac{2}{3}$ </td><td> 169,100-173,300 </td></tr>
  <tr><td> bottom </td><td> b </td><td> $\bar{b}$ </td><td> -$\frac{1}{3}$ </td><td> 4,130-4,370 </td></tr>
</table>

assert True # leave this here to grade the quark table
Explanation: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn how to create HTML tables and then pass that markup to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate.
End of explanation
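As a side note, the same table can be built programmatically instead of hand-written, which makes it easier to extend or regenerate; a small sketch, where the row data simply mirrors the table above:

# Sketch: build the quark table from row tuples rather than hand-written HTML.
rows = [
    ("up", "u", r"$\bar{u}$", r"+$\frac{2}{3}$", "1.7-3.3"),
    ("down", "d", r"$\bar{d}$", r"-$\frac{1}{3}$", "3.5-6.0"),
    ("charm", "c", r"$\bar{c}$", r"+$\frac{2}{3}$", "1,160-1,340"),
    ("strange", "s", r"$\bar{s}$", r"-$\frac{1}{3}$", "70-130"),
    ("top", "t", r"$\bar{t}$", r"+$\frac{2}{3}$", "169,100-173,300"),
    ("bottom", "b", r"$\bar{b}$", r"-$\frac{1}{3}$", "4,130-4,370"),
]
header = ["Name", "Symbol", "Antiparticle", "Charge (e)", r"Mass (MeV/$c^2$)"]
html = "<table><tr>" + "".join("<th>%s</th>" % h for h in header) + "</tr>"
for row in rows:
    html += "<tr>" + "".join("<td>%s</td>" % cell for cell in row) + "</tr>"
html += "</table>"
HTML(html)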
5,126
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1 align = 'center'> Neural Networks Demystified </h1> <h2 align = 'center'> Part 7 Step1: Last time, we trained our Neural Network, and it made suspiciously good predictions of your test score based on how many hours you slept, and how many hours you studied the night before. Before we celebrate and begin changing our sleep and study habits, we need some way to ensure that our model reflects the real world. To do this, let’s first spend some time thinking about data. Like a lot of data, our input and output values come from real world observations. The assumption here is that there is some underlying process, and our observations give us insight into the process - BUT our observations are not the same thing as the process, they are just a sample. Our observation says that when we sleep for 3 hours and study for 5 hours, the grade we earned was a 75. But does this mean that every time you sleep for 3 hours and study for 5 hours you will earn a 75? Of course not, because there are other variables that matter here, such as the difficulty of test, or whether you’ve been paying attention in lectures – we could quantify these variables to build a better model, but even if we did, there would still an element of uncertainty that we could never explicitly model – for example, maybe the test was multiple choice, and you guessed on a few problems. One way to think about this problem is that observations are composed of signal and noise. Nate Silver, the guy who correctly predicted the US election results for 50 out of 50 US states in 2012, wrote a great book on exactly this. The idea is that we’re interested in an underlying process, the signal, but in real data, our signal will always be obscured by some level of noise. An interesting example of this shows up when comparing the SAT scores of students who take the SAT both Junior and Senior year. Right on the college board’s website it says Step2: So it appears our model is overfitting, but how do we know for sure? A widely accepted method is to split our data into 2 portions Step3: So now that we know overfitting is a problem, but how do we fix it? One way is to throw more data at the problem. A simple rule of thumb as presented by Yaser Abu-Mostaf is his excellent machine learning course available from Caltech, is that you should have at least 10 times as many examples as the degrees for freedom in your model. For us, since we have 9 weights that can change, we would need 90 observations, which we certainly don’t have. Link to course Step4: If we train our model now, we see that the fit is still good, but our model is no longer interested in “exactly” fitting our data. Further, our training and testing errors are much closer, and we’ve successfully reduced overfitting on this dataset. To further reduce overfitting, we could increase lambda.
Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo('S4ZUwgesjS8')
Explanation: <h1 align = 'center'> Neural Networks Demystified </h1>
<h2 align = 'center'> Part 7: Overfitting, Testing, and Regularization </h2>
<h4 align = 'center'> @stephencwelch </h4>
End of explanation
%pylab inline
from partSix import *

NN = Neural_Network()

# X = (hours sleeping, hours studying), y = Score on test
X = np.array(([3,5], [5,1], [10,2], [6,1.5]), dtype=float)
y = np.array(([75], [82], [93], [70]), dtype=float)

#Plot projections of our new data:
fig = figure(0,(8,3))

subplot(1,2,1)
scatter(X[:,0], y)
grid(1)
xlabel('Hours Sleeping')
ylabel('Test Score')

subplot(1,2,2)
scatter(X[:,1], y)
grid(1)
xlabel('Hours Studying')
ylabel('Test Score')

#Normalize
X = X/np.amax(X, axis=0)
y = y/100 #Max test score is 100

#Train network with new data:
T = trainer(NN)
T.train(X,y)

#Plot cost during training:
plot(T.J)
grid(1)
xlabel('Iterations')
ylabel('Cost')

#Test network for various combinations of sleep/study:
hoursSleep = linspace(0, 10, 100)
hoursStudy = linspace(0, 5, 100)

#Normalize data (same way the training data was normalized)
hoursSleepNorm = hoursSleep/10.
hoursStudyNorm = hoursStudy/5.

#Create 2-d versions of input for plotting
a, b = meshgrid(hoursSleepNorm, hoursStudyNorm)

#Join into a single input matrix:
allInputs = np.zeros((a.size, 2))
allInputs[:, 0] = a.ravel()
allInputs[:, 1] = b.ravel()

allOutputs = NN.forward(allInputs)

#Contour Plot:
yy = np.dot(hoursStudy.reshape(100,1), np.ones((1,100)))
xx = np.dot(hoursSleep.reshape(100,1), np.ones((1,100))).T

CS = contour(xx,yy,100*allOutputs.reshape(100, 100))
clabel(CS, inline=1, fontsize=10)
xlabel('Hours Sleep')
ylabel('Hours Study')

#3D plot:
#Uncomment to plot out-of-notebook (you'll be able to rotate)
#%matplotlib qt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.gca(projection='3d')

#Scatter training examples:
ax.scatter(10*X[:,0], 5*X[:,1], 100*y, c='k', alpha = 1, s=30)

surf = ax.plot_surface(xx, yy, 100*allOutputs.reshape(100, 100), \
                       cmap=cm.jet, alpha = 0.5)

ax.set_xlabel('Hours Sleep')
ax.set_ylabel('Hours Study')
ax.set_zlabel('Test Score')
Explanation: Last time, we trained our Neural Network, and it made suspiciously good predictions of your test score based on how many hours you slept and how many hours you studied the night before. Before we celebrate and begin changing our sleep and study habits, we need some way to ensure that our model reflects the real world. To do this, let's first spend some time thinking about data.
Like a lot of data, our input and output values come from real world observations. The assumption here is that there is some underlying process, and our observations give us insight into that process - BUT our observations are not the same thing as the process, they are just a sample. Our observation says that when we sleep for 3 hours and study for 5 hours, the grade we earned was a 75. But does this mean that every time you sleep for 3 hours and study for 5 hours you will earn a 75? Of course not, because there are other variables that matter here, such as the difficulty of the test, or whether you've been paying attention in lectures - we could quantify these variables to build a better model, but even if we did, there would still be an element of uncertainty that we could never explicitly model - for example, maybe the test was multiple choice, and you guessed on a few problems.
One way to think about this problem is that observations are composed of signal and noise. 
Nate Silver, the guy who correctly predicted the US election results for 50 out of 50 US states in 2012, wrote a great book on exactly this. The idea is that we're interested in an underlying process, the signal, but in real data, our signal will always be obscured by some level of noise.
An interesting example of this shows up when comparing the SAT scores of students who take the SAT both Junior and Senior year. Right on the College Board's website it says: "The higher a student's scores as a junior, the more likely that student's subsequent scores will drop". Why would this be? It seems like students who did well junior year would also do well senior year. We can make sense of this by considering that SAT scores are composed of a signal and a noise component - the signal being the underlying aptitude of the student, and the noise being other factors that affect test scores, basically whether the student had a good day or not. Of the students who did well the first time, we expect a disproportionate number to have had a good day - and since having a good day is random, when these students have a regular or bad test day on their next test, their scores will go down.
So if we can convince our model to fit the signal and not the noise, we should be able to avoid overfitting. First, we'll work on diagnosing overfitting, then we'll work on fixing it.
Last time we showed our model predictions across the input space for various combinations of hours sleeping and hours studying. We'll add a couple more data points to make overfitting a bit more obvious and retrain our model on the new dataset. If we re-examine our predictions across our sample space, we begin to see some strange behavior. Neural networks are really powerful learning models, and we see here that all that power has been used to fit our data really closely - which creates a problem: our model is no longer reflective of the real world. According to our model, in some cases studying more will actually push our score down. This seems unlikely - hopefully studying more will not decrease your score. 
End of explanation
#Training Data:
trainX = np.array(([3,5], [5,1], [10,2], [6,1.5]), dtype=float)
trainY = np.array(([75], [82], [93], [70]), dtype=float)

#Testing Data:
testX = np.array(([4, 5.5], [4.5,1], [9,2.5], [6, 2]), dtype=float)
testY = np.array(([70], [89], [85], [75]), dtype=float)

#Normalize by the max of the training data. Note: the max must be captured
#before trainX is overwritten, otherwise the test set would be scaled by 1.
trainMax = np.amax(trainX, axis=0)
trainX = trainX/trainMax
trainY = trainY/100 #Max test score is 100

testX = testX/trainMax
testY = testY/100 #Max test score is 100

##Need to modify trainer class a bit to check testing error during training:
class trainer(object):
    def __init__(self, N):
        #Make local reference to network:
        self.N = N

    def callbackF(self, params):
        self.N.setParams(params)
        self.J.append(self.N.costFunction(self.X, self.y))
        self.testJ.append(self.N.costFunction(self.testX, self.testY))

    def costFunctionWrapper(self, params, X, y):
        self.N.setParams(params)
        cost = self.N.costFunction(X, y)
        grad = self.N.computeGradients(X,y)
        return cost, grad

    def train(self, trainX, trainY, testX, testY):
        #Make internal variables for the callback function:
        self.X = trainX
        self.y = trainY
        self.testX = testX
        self.testY = testY

        #Make empty lists to store training and testing costs:
        self.J = []
        self.testJ = []

        params0 = self.N.getParams()

        options = {'maxiter': 200, 'disp': True}
        _res = optimize.minimize(self.costFunctionWrapper, params0, jac=True, method='BFGS', \
                                 args=(trainX, trainY), options=options, callback=self.callbackF)

        self.N.setParams(_res.x)
        self.optimizationResults = _res

#Train network with new data:
NN = Neural_Network()
T = trainer(NN)
T.train(trainX, trainY, testX, testY)

#Plot cost during training:
plot(T.J)
plot(T.testJ)
grid(1)
xlabel('Iterations')
ylabel('Cost')
Explanation: So it appears our model is overfitting, but how do we know for sure? A widely accepted method is to split our data into 2 portions: training and testing. We won't touch our testing data while training the model, and only use it to see how we're doing - our testing data is a simulation of the real world. We can plot the error on our training and testing sets as we train our model and identify the exact point at which overfitting begins. We can also plot testing and training error as a function of model complexity and see similar behavior.
End of explanation
#Regularization Parameter:
Lambda = 0.0001

#Need to make changes to costFunction and costFunctionPrime:
def costFunction(self, X, y):
    #Compute cost for given X,y, use weights already stored in class.
    self.yHat = self.forward(X)
    #We don't want cost to increase with the number of examples, so normalize
    #by dividing the error term by the number of examples (X.shape[0])
    J = 0.5*sum((y-self.yHat)**2)/X.shape[0] + (self.Lambda/2)*(sum(self.W1**2)+sum(self.W2**2))
    return J

def costFunctionPrime(self, X, y):
    #Compute derivative with respect to W1 and W2 for a given X and y:
    self.yHat = self.forward(X)

    delta3 = np.multiply(-(y-self.yHat), self.sigmoidPrime(self.z3))
    #Add gradient of regularization term:
    dJdW2 = np.dot(self.a2.T, delta3)/X.shape[0] + self.Lambda*self.W2

    delta2 = np.dot(delta3, self.W2.T)*self.sigmoidPrime(self.z2)
    #Add gradient of regularization term:
    dJdW1 = np.dot(X.T, delta2)/X.shape[0] + self.Lambda*self.W1

    return dJdW1, dJdW2

#New complete class, with changes:
class Neural_Network(object):
    def __init__(self, Lambda=0):
        #Define Hyperparameters
        self.inputLayerSize = 2
        self.outputLayerSize = 1
        self.hiddenLayerSize = 3

        #Weights (parameters)
        self.W1 = np.random.randn(self.inputLayerSize,self.hiddenLayerSize)
        self.W2 = np.random.randn(self.hiddenLayerSize,self.outputLayerSize)

        #Regularization Parameter:
        self.Lambda = Lambda

    def forward(self, X):
        #Propagate inputs though network
        self.z2 = np.dot(X, self.W1)
        self.a2 = self.sigmoid(self.z2)
        self.z3 = np.dot(self.a2, self.W2)
        yHat = self.sigmoid(self.z3)
        return yHat

    def sigmoid(self, z):
        #Apply sigmoid activation function to scalar, vector, or matrix
        return 1/(1+np.exp(-z))

    def sigmoidPrime(self,z):
        #Gradient of sigmoid
        return np.exp(-z)/((1+np.exp(-z))**2)

    def costFunction(self, X, y):
        #Compute cost for given X,y, use weights already stored in class.
        self.yHat = self.forward(X)
        J = 0.5*sum((y-self.yHat)**2)/X.shape[0] + (self.Lambda/2)*(np.sum(self.W1**2)+np.sum(self.W2**2))
        return J

    def costFunctionPrime(self, X, y):
        #Compute derivative with respect to W1 and W2 for a given X and y:
        self.yHat = self.forward(X)

        delta3 = np.multiply(-(y-self.yHat), self.sigmoidPrime(self.z3))
        #Add gradient of regularization term:
        dJdW2 = np.dot(self.a2.T, delta3)/X.shape[0] + self.Lambda*self.W2

        delta2 = np.dot(delta3, self.W2.T)*self.sigmoidPrime(self.z2)
        #Add gradient of regularization term:
        dJdW1 = np.dot(X.T, delta2)/X.shape[0] + self.Lambda*self.W1

        return dJdW1, dJdW2

    #Helper functions for interacting with other methods/classes
    def getParams(self):
        #Get W1 and W2 rolled into a vector:
        params = np.concatenate((self.W1.ravel(), self.W2.ravel()))
        return params

    def setParams(self, params):
        #Set W1 and W2 using a single parameter vector:
        W1_start = 0
        W1_end = self.hiddenLayerSize*self.inputLayerSize
        self.W1 = np.reshape(params[W1_start:W1_end], \
                             (self.inputLayerSize, self.hiddenLayerSize))
        W2_end = W1_end + self.hiddenLayerSize*self.outputLayerSize
        self.W2 = np.reshape(params[W1_end:W2_end], \
                             (self.hiddenLayerSize, self.outputLayerSize))

    def computeGradients(self, X, y):
        dJdW1, dJdW2 = self.costFunctionPrime(X, y)
        return np.concatenate((dJdW1.ravel(), dJdW2.ravel()))
Explanation: So now that we know overfitting is a problem, how do we fix it? One way is to throw more data at the problem. A simple rule of thumb, as presented by Yaser Abu-Mostafa in his excellent machine learning course available from Caltech, is that you should have at least 10 times as many examples as the degrees of freedom in your model. For us, since we have 9 weights that can change, we would need 90 observations, which we certainly don't have. 
Link to course: https://work.caltech.edu/telecourse.html
Another popular and effective way to mitigate overfitting is to use a technique called regularization. One way to implement regularization is to add a term to our cost function that penalizes overly complex models. A simple but effective way to do this is to add together the squares of our weights in our cost function; this way, models with larger magnitudes of weights cost more. We'll need to normalize the other part of our cost function to ensure that the ratio of the two error terms does not change with respect to the number of examples. We'll introduce a regularization hyperparameter, lambda, that will allow us to tune the relative cost - higher values of lambda will impose bigger penalties for high model complexity.
End of explanation
NN = Neural_Network(Lambda=0.0001)

#Make sure our gradients are correct after making changes:
numgrad = computeNumericalGradient(NN, X, y)
grad = NN.computeGradients(X,y)

#Should be less than 1e-8:
norm(grad-numgrad)/norm(grad+numgrad)

T = trainer(NN)
T.train(X,y,testX,testY)

plot(T.J)
plot(T.testJ)
grid(1)
xlabel('Iterations')
ylabel('Cost')

allOutputs = NN.forward(allInputs)

#Contour Plot:
yy = np.dot(hoursStudy.reshape(100,1), np.ones((1,100)))
xx = np.dot(hoursSleep.reshape(100,1), np.ones((1,100))).T

CS = contour(xx,yy,100*allOutputs.reshape(100, 100))
clabel(CS, inline=1, fontsize=10)
xlabel('Hours Sleep')
ylabel('Hours Study')

#3D plot:
##Uncomment to plot out-of-notebook (you'll be able to rotate)
#%matplotlib qt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.gca(projection='3d')

ax.scatter(10*X[:,0], 5*X[:,1], 100*y, c='k', alpha = 1, s=30)

surf = ax.plot_surface(xx, yy, 100*allOutputs.reshape(100, 100), \
                       cmap=cm.jet, alpha = 0.5)

ax.set_xlabel('Hours Sleep')
ax.set_ylabel('Hours Study')
ax.set_zlabel('Test Score')
Explanation: If we train our model now, we see that the fit is still good, but our model is no longer interested in "exactly" fitting our data. Further, our training and testing errors are much closer, and we've successfully reduced overfitting on this dataset. To further reduce overfitting, we could increase lambda.
End of explanation
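A third remedy worth mentioning is early stopping. Since the modified trainer already records both cost curves, the stopping point can be read off them directly; a minimal sketch, assuming the T.J and T.testJ lists produced by the training run above (the helper function below is ours, not part of the original notebook):

import numpy as np

def best_stopping_iteration(train_costs, test_costs):
    #The iteration where the testing cost bottoms out approximates the
    #point beyond which further training only fits noise.
    best = int(np.argmin(test_costs))
    return best, test_costs[best]

#Example usage with the trainer above:
#it, cost = best_stopping_iteration(T.J, T.testJ)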
5,127
Given the following text description, write Python code to implement the functionality described below step by step Description: I ran the algorithm on random tests, starting from size 2, processing 10 tests of each size. I want to try to analyze the data I get from these runs. Specifically, for each test the output is the running time, the weight of the cycle, and the cycle itself. Step1: Pull the data out of the file and convert it into a convenient format. Step2: Let's plot the data. We see that there are three very bad tests, which show up as spikes on this plot. Let's remember that they exist and throw them out of the experimental data. Step3: The plot of the maxima looks like exponential growth. Step4: Well then, let's try dropping two maxima from consideration, although the plot already looks much better. Step5: That's much better now. Out of curiosity, let's also look at the beginning of the plot. Step6: The plot is not pretty, so let's run 100 tests for each size and use the values in the middle. Step7: Let's try to see whether there is any visible difference between the tests on which the algorithm performs poorly and those on which it performs well. Step8: Let's draw them and have a look. Step9: I doubt a pattern can be found here. That is understandable - in this algorithm a lot depends on the order in which the vertices are given, since that determines how quickly we find a genuinely good path that lets us examine fewer vertices. Up to this point I tried to find an exact, complete solution of the problem, and sometimes that could be done in little time, but you can see that, for example, on test 2980 the algorithm took almost the same time as the $2^n * n^2$ dynamic programming. Now let's build several plots of the algorithm's running time as a function of the accuracy we require. Step10: There is a clear relationship between the approximation quality and the running time of the program. It will be interesting to look, for example, at how the running time on one and the same test depends on the quality of the solution. Let's take, for example, problem size 26, so we don't wait two hours for everything to be computed exactly.
Python Code:
class Node:
    def __init__(self, number, cost, time, answer):
        self.number = int(number)
        self.cost = float(cost)
        self.time = float(time) / 10**9
        self.size = self.number / 100
        self.answer = answer

    def write(self):
        print("n = ", self.number, " \n")
        print("cost = ", self.cost, " \n")
        print("time = ", self.time, " \n")
        print("size = ", self.size, "\n")
        print("answer = ", self.answer, "\n")

    def getTime(self):
        return self.time

    def getSize(self):
        return self.size

    def getNumber(self):
        return self.number

    def getAnswer(self):
        return self.answer

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

def constructNode(a):
    #Each record holds the test number, the cycle, the cycle cost, and the time (in ns).
    c = a.split('\n')
    number = c[0]
    answerStr = c[1].split("to")
    answer = []
    for i in range(len(answerStr)):
        answer.append(int(answerStr[i]))
    cost = (c[2].split())[1]
    time = (c[3].split())[1]
    return Node(number, cost, time, answer)
Explanation: I ran the algorithm on random tests, starting from size 2, processing 10 tests of each size. I want to try to analyze the data I get from these runs. Specifically, for each test the output is the running time, the weight of the cycle, and the cycle itself.
End of explanation
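As a quick sanity check, constructNode can be exercised on a hand-made record in the same format the parser expects; the sample string below is made up for illustration:

#Hypothetical record: test number, cycle, cost line, time line (time in ns).
sample = "240\n1 to 2 to 3 to 1\ncost 12.5\ntime 3500000000"
node = constructNode(sample)
node.write()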
End of explanation def findMinTime(l, a, b): minEl = l[a].getTime() deleteEl = 0 for i in range(a, b): if(l[i].getTime() <= minEl): minEl = l[i].getTime() value = l[i] deleteEl = i l.pop(deleteEl) return value fin = open('16100.txt', 'r') a = fin.read() nodesToSplitSmall = a.split("i ="); smallNodes = [] for i in range(len(nodesToSplitSmall) -1): smallNodes.append(constructNode(nodesToSplitSmall[i+1])) smallMaxTime = [] for i in range(len(smallNodes) // 100 - 1, -1, -1): smallMaxTime.append(findMaxTime(smallNodes, i*100, (i + 1) * 100)) for j in range(35): for i in range(len(smallNodes)// (99 - j * 2 + 1) - 1, -1, -1): findMaxTime(smallNodes, i * (99 - j * 2 + 1), (i + 1) * (99 - j * 2 + 1)) findMinTime(smallNodes, i * (99 - j * 2), (i + 1) * (99 - j * 2)) plotPoints(smallNodes, len(smallNodes), True) Explanation: Не красивый график, давайте запустим по 100 тестов для каждого размера. По значениям находящимя в середине. End of explanation smallMinTime = [] for i in range(len(smallNodes) // 100 - 1, -1, -1): smallMinTime.append(findMinTime(smallNodes, i*100, (i + 1) * 100)) Explanation: Давайте попробуем понять, есть ли какая-то видимая разница, между тестами на которых алгоритм работает плохо и тех на которых он работает хорошо. End of explanation import numpy as np from bokeh.plotting import * def show(node): fin = open(str(node.getNumber()) + ".txt", 'r') a = fin.read() lines = a.split('\n') lines.pop(0) lines.pop(0) lines.pop(len(lines) - 1) lines.pop(len(lines) - 1) points = [] X = [] Y = [] for i in range(len(lines)): c = lines[i].split(' ') points.append(Point(float(c[1]), float(c[2]))) for i in range(len(node.getAnswer())): c = lines[node.getAnswer()[i] - 1].split(' ') X.append(float(c[1])) Y.append(float(c[2])) plot(X, Y) for i in range(0, 12, 4): subplot(221 + i // 4) p1 = show(smallMinTime[i]) #синий тест с минимальным временем работы p2 = show(smallMaxTime[i]) #зеленый тест с максимальным временем работы for i in range(12, 15): subplot(221 + i % 4) p1 = show(smallMinTime[i]) #синий p2 = show(smallMaxTime[i]) #зеленый Explanation: Нарисуем и посмотрим. End of explanation nodes0 = nodes nodes010 = readNodes("out10.txt") for i in range(len(nodes010) // 10 - 1, -1, -1): findMaxTime(nodes010, i*10, (i + 1) * 10) for i in range(len(nodes010) // 9 - 1, -1, -1): findMaxTime(nodes010, i*9, (i + 1) * 9) nodes025 = readNodes("out25.txt") for i in range(len(nodes025) // 10 - 1, -1, -1): findMaxTime(nodes025, i*10, (i + 1) * 10) for i in range(len(nodes025) // 9 - 1, -1, -1): findMaxTime(nodes025, i*9, (i + 1) * 9) plotPoints(nodes010, len(nodes010), False) # синий plotPoints(nodes0, len(nodes0), False) #зеленый plotPoints(nodes025, len(nodes025), False) #красный plotPoints(nodes010[0:120], 120, False) # синий ошибка до 10% plotPoints(nodes0[0:120], 120, False) #зеленый без ошибки plotPoints(nodes025[0:120], 120, False) #красный ошибка до 25% plotPoints(nodes010[110:150], 40, False) # синий plotPoints(nodes0[110:150], 40, False) #зеленый plotPoints(nodes025[110:150], 40, False) #красный Explanation: Сомневаюсь, что здесь можно найти закономерность. Это и понятно в этом алгоритме многое зависит от того в каком порядке заданы вершины, от этого зависит, то на сколько быстро мы найдем действительно хороший путь, который позволит перебирать нам меньшее количество вершин. До этого момента, старался найти честное полное решение задачи, и иногда это получалось сделать за малое время, но видно что например на тесте 2980 алгоритм работал почти то же время, что динамика за $2^n * n^2$. 
Давайте теперь построим несколько графиков, времени работы алгоритма, в зависимости от точности, которая нам требуется. End of explanation nodesOne.append(readNodes("26out05.txt")[0]) nodesOne.append(readNodes("26out1.txt")[0]) nodesOne.append(readNodes("26out15.txt")[0]) nodesOne.append(readNodes("26out20.txt")[0]) nodesOne.append(readNodes("26out25.txt")[0]) nodesOne.append(readNodes("26out30.txt")[0]) Y = [nodesOne[i].getTime() for i in range(len(nodesOne))] X = [0.05 * i for i in range(len(nodesOne))] pylab.plot (X, Y) show() Explanation: Видна зависимость между качеством апроксимации и временем работы программы. Будет интересно посмотреть, например на зависимость времени работы программы на одном и том же тесте от качества решения задачи. Возьмем например размер задачи 26, чтобы не ждать два часа пока все посчитается точно. End of explanation
5,128
Given the following text description, write Python code to implement the functionality described below step by step Description: <h2>Assignment 1 – Problem 2 Step1: <h3>Michell Structure Geometry</h3> I approach generating the Michell structure by repeating a series of steps <b><i>up</i></b> and <b><i>across</i></b>. Because the structure is symetric about the X-axis, these steps are only performed to solve for points in the Y+ coordinate region. The first step is different from all of the subsequent ones since they all originate from the same support node. The lengths of the links are found using trigonometric relationships including the law of cosines and the law of sines. Once the length of a given member is determined, its endpoint can be appended to an array by taking into account the angle of the truss element with respect to the x-axis. I decided to parameterize the structure based on the number of baseline points (points that touch the baseline). For example, a structure with 1 baseline-point (blp) has 2 beam elements, 2 blp --> 8 beam elements, 3 blp --> 18 beam elements, etc... Once all of the points are solved for, they need to be ordered and connected to correctly represent the beams between the points. These ordered points are represented in two arrays Step2: Now that this is all working nicely, we'd like to make the geometry generation parametric and reusable. Here, I copy that work and wrap it in a function. <ul> <h4>The function takes arguments Step3: To plot the truss, we make a function that takes the points and the ordered lists of start and end nodes and returns the plotted result. Step4: <h3>Searching for a structure of a given length</h3> Since the structure is naturally parameterized by gamma and H (but not L), we need to search for a gamma that gives us the right L. To do this, I created a function <b>residual</b> which takes gamma, blp, h, and a desired length. It generates a structure given those parameters and then returns the square of difference between the desired length and the actual length. We can then use optimization techniques to minimize this residual by varying gamma. The function will reach a minimum when the actual length is equal to the desired length. In this case, I use a nelder-mead simplex algorithm implemented through SciPY's minimize function. This is a relatively simple (but effective algorithm) that works well for this problem since it doesn't require the evaluation of a deriviate of the function. Step5: Using this function, we can now generate the geometry of a structure in terms of H, L, and the number of baseline points. Step6: <h3>Loading the Structure</h3> To apply loads to the structure, I will use a structural analysis tool called Frame3DD for which there are python bindings Step7: We can then import our geometry into a format that frame3dd expects Step8: Now that we can solve for displacements, forces, and reactions in the structure, let's wrap that capability in a function Step9: Then we can create a function to run the simulation and evaluate the performance of the truss by multiplying the axial force in each member by its length. This gives a measure of performance that is proportional to its volume. Step10: To visualize the forces in the struts, we can map color to the axial force values and increase the stroke width where there is more force. Step11: These plotting tools are helpful for visualizing how the forces are distributed in the structures. 
By plotting three structures with varying values of H, it's clear that the forces move through the structures in different ways. When H=16ft, the most loaded members are on the extremeties of the structure, meaning that more length is being effectively utilized for its load bearing capability. On the other hand, when H=4ft, the most loaded members are the ones that are left-most and inner-most. These members are also the shortest members of the structure and so the structure is not utilizing the longer members well. Step12: Using interactive widgets, we can interactively generate the structure and visualize the deformation... Step13: Now we can do a parametric sweep over a range of heights and lengths and determine the relative performance of the structures. (Lower values are better for the performance metric) Step14: From the plot, it is clear that optimal performance is achieved with more members in the structure (larger number of baseline points) and with a larger distance between support nodes. The plots above, which color code the beam elements with their amount of axial loading, demonstrate that by spreading the support nodes apart, longer beam elements are better utilized. <h3>Stretching and Scaling the Structures</h3> We can analyze how well these structures perform if they're stretched and scaled from their original dimensions Step15: Let's analyze the performance for an 10x10 array of X-scale and Y-scale values Step16: This plot shows that there is a clear loss in performance for structures scaled in the X-axis (which is to say, stretched to be thinner). From this plot, though, it's unclear whether squishing the structure (making it shorter and fatter) causes a loss in performance. To get a better sense of the effect scaling has on the performance of these structures, it's helpful to compare them to a true Michell truss generated from stretched/squished boundary conditions. These are shown in gray in the figure of 4 subplots above. If, instead of plotting the absolute performance metric, we plot the relative difference between the deformed trusses and the ideal ones, then we get a plot like this Step17: From this, it's clear that when X-scale = Y-scale the performance of the structure is nominal. This makes sense since it is just uniformly scaling the structure to be larger or smaller. However, when the structure is made thinner or thicker, it's performance becomes sub-optimal. Furthermore, the plot shows that it is more sub-optimal to stretch the structure lengthwise (making it thinner) than it is to squish the structure (making it shorter and wider). We can get a better sense of this performance loss by looking at a cross section of the surface along the plane orthogonal to x_scale = y_scale
Python Code: import numpy as np from scipy import integrate from matplotlib import pyplot as plt from mpl_toolkits.mplot3d import Axes3D from matplotlib.colors import LinearSegmentedColormap from scipy.optimize import fsolve import scipy.spatial.distance as dist import math from frame3dd import Frame, NodeData, ReactionData, ElementData, Options, \ StaticLoadCase from IPython.html.widgets import interact, interactive, fixed from IPython.html import widgets # Magic function to make matplotlib inline; other style specs must come AFTER #%matplotlib notebook %matplotlib inline %config InlineBackend.figure_formats = {'png', 'retina'} # %config InlineBackend.figure_formats = {'svg',} plt.ioff() # this stops the graphs from overwriting each other font = {'family' : 'Bitstream Vera Sans', 'weight' : 'normal', 'size' : 14} plt.rc('font', **font) Explanation: <h2>Assignment 1 – Problem 2: Michell Structures</h2> <br> Will Langford<br> Computation Structural Design and Optimization<br> 10/02/16 <h3>Handle imports and setup plotting...</h3> End of explanation n = 25 h = 16 # the spacing between the supports gamma = (np.pi/2.)*0.85 # the parameteric angle p = [] # a list of the points p.append([0,h/2.]) # the first support point l = np.zeros([n,n]) # number of baseline points # blp n (# of beam elements) # 1 2 # 2 8 # 3 18 # 4 32 # 5 50 blp = 5 # First step (across) l[0,0] = h/2./np.tan(gamma/2.) l[0,1] = np.sqrt((h/2.)**2+(l[0,0])**2) p.append([l[0,0],0]) # Second step (up) psi = np.pi/2.-gamma/2. for n in range(blp-1): l[1,n] = np.tan(np.pi/2.-gamma)*l[0,n+1] l[0,n+2] = np.sqrt((l[1,n])**2+(l[0,n+1])**2) dx = np.cos(psi)*l[1,n] dy = np.sin(psi)*l[1,n] p.append([p[-1][0]+dx,p[-1][1]+dy]) psi = np.pi-(np.pi/2.-psi)-gamma baseline = l[0,0] index = 0 while (blp-(2+index)) >= 0: k = 2*index # STEP ACROSS # print("across "+str(k+2)) l[2+k,0] = l[1+k,0]/np.cos(np.pi/2.-gamma/2.) baseline = baseline + l[2+k,0] p.append([baseline,0]) # STEPS UP phi = np.pi-gamma #angle between l[2,1] and l[1,1] psi = np.pi/2.-gamma/2. l[2+k,1] = np.sqrt(l[2+k,0]**2 + l[1+k,0]**2 - 2*l[2+k,0]*l[1+k,0]*np.cos(psi)) for n in range(blp-(2+int(k/2))): # STEP UP # print(" up step " +str(n)) # after the first step have a quadrilateral (rather than a triangle) # and so we need to divide into two triangles and use law of cosines # lspan is the distance from the end of l[0,3] to the end of l[2,1] lspan = np.sqrt(l[2+k,n+1]**2 + l[1+k,n+1]**2 -2*l[2+k,n+1]*l[1,n+1]*np.cos(phi)) phi1 = np.arcsin(l[1+k,n+1]/lspan*np.sin(phi)) #angle between l[3,0] and l[2,1] phi2 = np.pi-phi-phi1 #angle between l[1,1] and l[2,2] alpha1 = np.pi/2. - phi1 #angle between lspan and l[2,2] alpha2 = np.pi/2. 
- phi2 #angle between lspan and l[3,0] l[3+k,n] = np.sin(alpha2)/np.sin(gamma)*lspan l[2+k,n+2] = np.sin(alpha1)/np.sin(gamma)*lspan dx = np.cos(psi)*l[3+k,n] dy = np.sin(psi)*l[3+k,n] p.append([p[-1][0]+dx,p[-1][1]+dy]) psi = np.pi-(np.pi/2.-psi)-gamma index = index+1 print(np.shape(p)) print() print("Michell Truss Joint Locations:") print(np.array(p)) # ORDER AND CONNECT THE POINTS N1 = np.zeros(80,dtype=np.int8) # start nodes N2 = np.zeros(80,dtype=np.int8) # end nodes index = 0 for i in range(blp): N1[i] = 0 N2[i] = i+1 index = index+1 for j in reversed(range(1,blp)): start_index = index for i in range(j): N1[index] = N1[index-1]+1 N2[index] = N1[index-1]+2 index = index+1 for i in range(j): N1[index] = N1[start_index]+1*(i+1) N2[index] = N2[index-1]+1 index = index+1 # trim trailing zeros N2 = np.trim_zeros(N2,'b') N1 = N1[0:len(N2)] print() print("Point Connectivity:") print(N1) print(N2) # PLOT TRUSS STRUCTURE plt.close('all') p = np.array(p) fig = plt.figure() ax = fig.add_subplot(111) # plot topside ax.plot(p[:,0], p[:,1],'bo') # joint locations ax.plot([p[N1,0],p[N2,0]],[p[N1,1],p[N2,1]],'k') # beam elements # plot bottomside ax.plot(p[:,0], -p[:,1],'bo') # joint locations ax.plot([p[N1,0],p[N2,0]],[-p[N1,1],-p[N2,1]],'k') # beam elements plt.axes().set_aspect('equal', 'datalim') fig Explanation: <h3>Michell Structure Geometry</h3> I approach generating the Michell structure by repeating a series of steps <b><i>up</i></b> and <b><i>across</i></b>. Because the structure is symetric about the X-axis, these steps are only performed to solve for points in the Y+ coordinate region. The first step is different from all of the subsequent ones since they all originate from the same support node. The lengths of the links are found using trigonometric relationships including the law of cosines and the law of sines. Once the length of a given member is determined, its endpoint can be appended to an array by taking into account the angle of the truss element with respect to the x-axis. I decided to parameterize the structure based on the number of baseline points (points that touch the baseline). For example, a structure with 1 baseline-point (blp) has 2 beam elements, 2 blp --> 8 beam elements, 3 blp --> 18 beam elements, etc... Once all of the points are solved for, they need to be ordered and connected to correctly represent the beams between the points. These ordered points are represented in two arrays: one for the nodes at the start of each beam element and one for the nodes at the end of each beam element. End of explanation def generate_truss_points(blp,h,alpha): # blp is the number of baseline points # where h is the distance between supports # and alpha is a value from 0 to 1 and is the ratio of gamma to 90-degrees n = 25 gamma = (np.pi/2.)*alpha # the parameteric angle p = [] # a list of the points p.append([0,h/2.]) # the first support point l = np.zeros([n,n]) # number of baseline points # blp n (# of beam elements) # 1 2 # 2 8 # 3 18 # 4 32 # 5 50 # First step (across) l[0,0] = h/2./np.tan(gamma/2.) l[0,1] = np.sqrt((h/2.)**2+(l[0,0])**2) p.append([l[0,0],0]) # Second step (up) psi = np.pi/2.-gamma/2. for n in range(blp-1): l[1,n] = np.tan(np.pi/2.-gamma)*l[0,n+1] l[0,n+2] = np.sqrt((l[1,n])**2+(l[0,n+1])**2) dx = np.cos(psi)*l[1,n] dy = np.sin(psi)*l[1,n] p.append([p[-1][0]+dx,p[-1][1]+dy]) psi = np.pi-(np.pi/2.-psi)-gamma baseline = l[0,0] index = 0 while (blp-(2+index)) >= 0: k = 2*index # STEP ACROSS # print("across "+str(k+2)) l[2+k,0] = l[1+k,0]/np.cos(np.pi/2.-gamma/2.) 
baseline = baseline + l[2+k,0] p.append([baseline,0]) # STEPS UP phi = np.pi-gamma #angle between l[2,1] and l[1,1] psi = np.pi/2.-gamma/2. l[2+k,1] = np.sqrt(l[2+k,0]**2 + l[1+k,0]**2 - 2*l[2+k,0]*l[1+k,0]*np.cos(psi)) for n in range(blp-(2+int(k/2))): # STEP UP # print(" up step " +str(n)) # after the first step have a quadrilateral (rather than a triangle) # and so we need to divide into two triangles and use law of cosines # lspan is the distance from the end of l[0,3] to the end of l[2,1] lspan = np.sqrt(l[2+k,n+1]**2 + l[1+k,n+1]**2 -2*l[2+k,n+1]*l[1,n+1]*np.cos(phi)) phi1 = np.arcsin(l[1+k,n+1]/lspan*np.sin(phi)) #angle between l[3,0] and l[2,1] phi2 = np.pi-phi-phi1 #angle between l[1,1] and l[2,2] alpha1 = np.pi/2. - phi1 #angle between lspan and l[2,2] alpha2 = np.pi/2. - phi2 #angle between lspan and l[3,0] l[3+k,n] = np.sin(alpha2)/np.sin(gamma)*lspan l[2+k,n+2] = np.sin(alpha1)/np.sin(gamma)*lspan dx = np.cos(psi)*l[3+k,n] dy = np.sin(psi)*l[3+k,n] p.append([p[-1][0]+dx,p[-1][1]+dy]) psi = np.pi-(np.pi/2.-psi)-gamma index = index+1 # ORDER AND CONNECT THE POINTS N1 = np.zeros(80,dtype=np.int8) # start nodes N2 = np.zeros(80,dtype=np.int8) # end nodes index = 0 for i in range(blp): N1[i] = 0 N2[i] = i+1 index = index+1 for j in reversed(range(1,blp)): start_index = index for i in range(j): N1[index] = N1[index-1]+1 N2[index] = N1[index-1]+2 index = index+1 for i in range(j): N1[index] = N1[start_index]+1*(i+1) N2[index] = N2[index-1]+1 index = index+1 # trim trailing zeros N2 = np.trim_zeros(N2,'b') N1 = N1[0:len(N2)] return [np.array(p), N1, N2, gamma] Explanation: Now that this is all working nicely, we'd like to make the geometry generation parametric and reusable. Here, I copy that work and wrap it in a function. <ul> <h4>The function takes arguments:</h4> <ul> <li><b>blp</b>, the number of baseline points <li><b>h</b>, the vertical spacing of the support nodes <li>and <b>alpha</b>, a normalized representation of gamma between between 0 and 1 </ul> <h4>and returns:</h4> <ul> <li><b>p</b>, an array of all of the points in the Y+ region <li><b>N1</b>, the ordered list of the start nodes of the beam elements <li><b>N2</b>, the ordered list of the end nodes of the beam elements <li>and <b>gamma</b>, the angle used to determine the structure. </ul> <ul> End of explanation def plot_truss(p,N1,N2,ax,label=''): # PLOT TRUSS STRUCTURE # plot topside ax.plot(p[:,0], p[:,1],'bo') # joint locations ax.plot([p[N1,0],p[N2,0]],[p[N1,1],p[N2,1]],'k') # beam elements # plot bottomside ax.plot(p[:,0], -p[:,1],'bo') # joint locations ax.plot([p[N1,0],p[N2,0]],[-p[N1,1],-p[N2,1]],'k') # beam elements ax.set_aspect('equal', 'datalim') ax.set_xlim([-10,50]) ax.set_ylabel(label) ax.set_yticks([]) return fig def clear_axis(ax): ax.yaxis.set_visible(False) ax.xaxis.set_visible(False) ax.set_frame_on(False) [points, start_points, end_points, gam] = generate_truss_points(3,16,0.8) fig = plt.figure() ax = fig.add_subplot(111) plot_truss(points,start_points,end_points,ax) Explanation: To plot the truss, we make a function that takes the points and the ordered lists of start and end nodes and returns the plotted result. 
End of explanation from scipy.optimize import minimize def residual(x,blp,h,length): [points, start_points, end_points, gam] = generate_truss_points(blp,h,x) return (length-points[-1][0])**2 def find_truss_points(blp,h,length): res = minimize(residual, 0.8, args=(blp,h,length), method='nelder-mead', options={'xtol': 1e-8, 'disp': False}) [points, start_points, end_points, gam] = generate_truss_points(blp,h,res.x) return [points, start_points, end_points, gam] def find_truss(blp,h,length,figure,label): res = minimize(residual, 0.8, args=(blp,h,length), method='nelder-mead', options={'xtol': 1e-8, 'disp': False}) [points, start_points, end_points, gam] = generate_truss_points(blp,h,res.x) return plot_truss(points,start_points,end_points, figure,label+", $\gamma=$"+str(np.round(gam[0]*180/np.pi,1))) Explanation: <h3>Searching for a structure of a given length</h3> Since the structure is naturally parameterized by gamma and H (but not L), we need to search for a gamma that gives us the right L. To do this, I created a function <b>residual</b> which takes gamma, blp, h, and a desired length. It generates a structure given those parameters and then returns the square of difference between the desired length and the actual length. We can then use optimization techniques to minimize this residual by varying gamma. The function will reach a minimum when the actual length is equal to the desired length. In this case, I use a nelder-mead simplex algorithm implemented through SciPY's minimize function. This is a relatively simple (but effective algorithm) that works well for this problem since it doesn't require the evaluation of a deriviate of the function. End of explanation h = 16 plt.close('all') # Six axes, returned as a 2-d array f, axarr = plt.subplots(3, 2) find_truss(1,h,40,axarr[0, 0],'n=2') find_truss(2,h,40,axarr[0, 1],'n=8') find_truss(3,h,40,axarr[1, 0],'n=18') find_truss(4,h,40,axarr[1, 1],'n=32') find_truss(5,h,40,axarr[2, 0],'n=50') clear_axis(axarr[2,1]) plt.suptitle('H = ' + str(h) + 'ft') f h = 8 plt.close('all') # Six axes, returned as a 2-d array f, axarr = plt.subplots(3, 2) find_truss(1,h,40,axarr[0, 0],'n=2') find_truss(2,h,40,axarr[0, 1],'n=8') find_truss(3,h,40,axarr[1, 0],'n=18') find_truss(4,h,40,axarr[1, 1],'n=32') find_truss(5,h,40,axarr[2, 0],'n=50') clear_axis(axarr[2,1]) plt.suptitle('H = ' + str(h) + 'ft') f h = 4 plt.close('all') # Six axes, returned as a 2-d array f, axarr = plt.subplots(3, 2) find_truss(1,h,40,axarr[0, 0],'n=2') find_truss(2,h,40,axarr[0, 1],'n=8') find_truss(3,h,40,axarr[1, 0],'n=18') find_truss(4,h,40,axarr[1, 1],'n=32') find_truss(5,h,40,axarr[2, 0],'n=50') clear_axis(axarr[2,1]) plt.suptitle('H = ' + str(h) + 'ft') plt.show() Explanation: Using this function, we can now generate the geometry of a structure in terms of H, L, and the number of baseline points. 
End of explanation [p, N1, N2, gam] = find_truss_points(5,16,40) tophalf = p bottomhalf = np.array([p[:,0],-p[:,1]]).T bottomhalf2 = bottomhalf[np.nonzero(bottomhalf[:,1])] p = np.vstack((tophalf,bottomhalf2)) def reformat_connectivity(N,zeros,inc): output = np.copy(N) for i in range(len(N)): if (len(np.argwhere(zeros == N[i]))): output[i] = np.copy(N[i]) output[i+1:] = output[i+1:] - 1 else: output[i] = output[i]+inc return output zeros = np.ravel(np.argwhere(bottomhalf[:,1]==0)) bottom_N1 = reformat_connectivity(N1,zeros,N2[-1]+1) bottom_N2 = reformat_connectivity(N2,zeros,N2[-1]+1) N1 = np.hstack((N1, bottom_N1)) N2 = np.hstack((N2, bottom_N2)) print(N1) print(N2) Explanation: <h3>Loading the Structure</h3> To apply loads to the structure, I will use a structural analysis tool called Frame3DD for which there are python bindings: <a href="https://github.com/WISDEM/pyFrame3DD">pyFrame3DD</a>. The first step in simulating loads in this structure is to reformat the connectivity of the nodes to ensure there are no duplicates. This is necessary because I used symmetry above to only solve for the upper half of the structure and created the lower half by reflecting the upper half about the X-axis. This has the side-effect of doubling up the baseline points and would result in a structure which is not well connected. To fix this, I need to iterate through the list of start and end nodes in the lower half of the structure and strip out the ones where the Y coordinate is 0. End of explanation node = np.arange(1,len(p[:,0])+1) x = p[:,0]*100 y = np.zeros(len(p[:,0])) z = p[:,1]*100 r = np.zeros(len(p[:,0])) nodes = NodeData(node, x, y, z, r) # ------ reaction data ------------ node = np.array([1, 1+bottom_N1[0]]) Rx = np.ones(2) Ry = np.ones(2) Rz = np.ones(2) Rxx = np.ones(2) Ryy = np.ones(2) Rzz = np.ones(2) reactions = ReactionData(node, Rx, Ry, Rz, Rxx, Ryy, Rzz, rigid=1) # ------ frame element data ------------ element = np.arange(1,len(N1)+1) N1a = N1+1 N2a = N2+1 Ax = 36.0*np.ones(len(element)) Asy = 20.0*np.ones(len(element)) Asz = 20.0*np.ones(len(element)) Jx = 1000.0*np.ones(len(element)) Iy = 492.0*np.ones(len(element)) Iz = 492.0*np.ones(len(element)) E = 200000.0*np.ones(len(element)) G = 79300.0*np.ones(len(element)) roll = np.zeros(len(element)) density = 7.85e-9*np.ones(len(element)) elements = ElementData(element, N1a, N2a, Ax, Asy, Asz, Jx, Iy, Iz, E, G, roll, density) # ------ other data ------------ shear = 1 # 1: include shear deformation geom = 1 # 1: include geometric stiffness dx = 10.0 # x-axis increment for internal forces other = Options(shear, geom, dx) # initialize frame3dd object frame = Frame(nodes, reactions, elements, other) # gravity in the X, Y, Z, directions (global) gx = 0.0 gy = 0.0 gz = 0.0#-9806.33 load = StaticLoadCase(gx, gy, gz) # point load nF = np.array([N2[-1]+1]) Fx = np.array([0.0]) Fy = np.array([0.0]) Fz = np.array([-1.0]) Mxx = np.array([0.0]) Myy = np.array([0.0]) Mzz = np.array([0.0]) load.changePointLoads(nF, Fx, Fy, Fz, Mxx, Myy, Mzz) frame.addLoadCase(load) # run the analysis displacements, forces, reactions, internalForces, mass, modal = frame.run() displacements.dyrot Explanation: We can then import our geometry into a format that frame3dd expects: nodes, reaction nodes, and elements. We then apply a point load to the last node in the negative Z direction*. *Frame3DD seems to expect loading to be applied primarily in the Z-axis so the structure had to be rotated from the XY plane to the XZ plane. 
End of explanation def reformat_connectivity(N,zeros,inc): output = np.copy(N) for i in range(len(N)): if (len(np.argwhere(zeros == N[i]))): output[i] = np.copy(N[i]) output[i+1:] = output[i+1:] - 1 else: output[i] = output[i]+inc return output def setup_frame3dd_simulation(blp,height,leng,xscale=1.0,yscale=1.0): # format point array and node ordering [p, N1, N2, gam] = find_truss_points(blp,height,leng) p[:,0] = xscale*p[:,0] p[:,1] = yscale*p[:,1] tophalf = p bottomhalf = np.array([p[:,0],-p[:,1]]).T bottomhalf2 = bottomhalf[np.nonzero(bottomhalf[:,1])] p = np.vstack((tophalf,bottomhalf2)) zeros = np.ravel(np.argwhere(bottomhalf[:,1]==0)) bottom_N1 = reformat_connectivity(N1,zeros,N2[-1]+1) bottom_N2 = reformat_connectivity(N2,zeros,N2[-1]+1) N1 = np.hstack((N1, bottom_N1)) N2 = np.hstack((N2, bottom_N2)) # ------ Node data ----------- node = np.arange(1,len(p[:,0])+1) x = p[:,0]*100 y = np.zeros(len(p[:,0])) z = p[:,1]*100 r = np.zeros(len(p[:,0])) nodes = NodeData(node, x, y, z, r) # ------ reaction data ------------ node = np.array([1, 1+bottom_N1[0]]) Rx = np.ones(2) Ry = np.ones(2) Rz = np.ones(2) Rxx = np.ones(2) Ryy = np.ones(2) Rzz = np.ones(2) reactions = ReactionData(node, Rx, Ry, Rz, Rxx, Ryy, Rzz, rigid=1) # ------ frame element data ------------ element = np.arange(1,len(N1)+1) N1a = N1+1 N2a = N2+1 Ax = 36.0*np.ones(len(element)) Asy = 20.0*np.ones(len(element)) Asz = 20.0*np.ones(len(element)) Jx = 1000.0*np.ones(len(element)) Iy = 492.0*np.ones(len(element)) Iz = 492.0*np.ones(len(element)) E = 200000.0*np.ones(len(element)) G = 79300.0*np.ones(len(element)) roll = np.zeros(len(element)) density = 7.85e-9*np.ones(len(element)) elements = ElementData(element, N1a, N2a, Ax, Asy, Asz, Jx, Iy, Iz, E, G, roll, density) # ------ other data ------------ shear = 1 # 1: include shear deformation geom = 1 # 1: include geometric stiffness dx = 10.0 # x-axis increment for internal forces other = Options(shear, geom, dx) # initialize frame3dd object frame = Frame(nodes, reactions, elements, other) # gravity in the X, Y, Z, directions (global) gx = 0.0 gy = 0.0 gz = 0.0#-9806.33 load = StaticLoadCase(gx, gy, gz) # point load nF = np.array([N2[-1]+1]) Fx = np.array([0.0]) Fy = np.array([0.0]) Fz = np.array([-1.0]) Mxx = np.array([0.0]) Myy = np.array([0.0]) Mzz = np.array([0.0]) load.changePointLoads(nF, Fx, Fy, Fz, Mxx, Myy, Mzz) frame.addLoadCase(load) return [frame, p, N1, N2] Explanation: Now that we can solve for displacements, forces, and reactions in the structure, let's wrap that capability in a function: End of explanation def simulate_truss(frame): # run the analysis displacements, forces, reactions, internalForces, mass, modal = frame.run() return forces.Nx.T def simulate_truss_disp(frame): # run the analysis displacements, forces, reactions, internalForces, mass, modal = frame.run() return [forces.Nx.T, displacements] def calculate_metric(forces, p, N1, N2): axial_force = np.abs(forces[np.arange(0,len(forces),2)]) l = np.zeros(len(N1)) for i in range(len(N1)): l[i] = dist.euclidean(p[N1[i],:],p[N2[i],:]) return [np.sum(l*axial_force.T), axial_force] Explanation: Then we can create a function to run the simulation and evaluate the performance of the truss by multiplying the axial force in each member by its length. This gives a measure of performance that is proportional to its volume. 
End of explanation def norm(arr): # return (arr - np.min(arr)) / float(np.max(arr) - np.min(arr)) return (arr) / float(np.max(arr)) def plot_truss_forces(p,N1,N2,ax,forces): # PLOT TRUSS STRUCTURE WITH FORCES # assumes unified p array (both tophalf and bottomhalf) ax.plot(p[:,0], p[:,1],'bo',MarkerSize=4) # joint location colors = plt.cm.rainbow(norm(forces)) for i in range(len(N1)): ax.plot([p[N1[i],0],p[N2[i],0]],[p[N1[i],1],p[N2[i],1]], c=np.ravel(colors[i]),lw=norm(forces)[i]*4+0.5) # beam elements ax.set_aspect('equal', 'datalim') # ax.set_xlim([-10,90]) ax.set_xlim([-10,50]) ax.set_yticks([]) return fig def plot_truss_forces_alpha(p,N1,N2,ax,forces): # PLOT TRUSS STRUCTURE WITH FORCES # ax.plot(p[:,0], p[:,1],'ko',alpha=0.25) # joint location colors = plt.cm.bone(1-norm(forces)) for i in range(len(N1)): ax.plot([p[N1[i],0],p[N2[i],0]],[p[N1[i],1],p[N2[i],1]], c=np.ravel(colors[i]),lw=norm(forces)[i]*4+0.5, alpha=0.25) # beam elements ax.set_aspect('equal', 'datalim') # ax.set_xlim([-10,90]) ax.set_xlim([-10,50]) ax.set_yticks([]) return fig Explanation: To visualize the forces in the struts, we can map color to the axial force values and increase the stroke width where there is more force. End of explanation plt.close('all') f, axarr = plt.subplots(3, 1) [f, p, N1, N2] = setup_frame3dd_simulation(5,4,40,1.0) forces = simulate_truss(f) [pl,ax_f] = calculate_metric(forces, p, N1, N2) plot_truss_forces(p,N1,N2,axarr[0],ax_f) axarr[0].set_ylabel('H=4ft') [f, p, N1, N2] = setup_frame3dd_simulation(5,8,40,1.0) forces = simulate_truss(f) [pl,ax_f] = calculate_metric(forces, p, N1, N2) plot_truss_forces(p,N1,N2,axarr[1],ax_f) axarr[1].set_ylabel('H=8ft') [f, p, N1, N2] = setup_frame3dd_simulation(5,16,40,1.0) forces = simulate_truss(f) [pl,ax_f] = calculate_metric(forces, p, N1, N2) plot_truss_forces(p,N1,N2,axarr[2],ax_f) axarr[2].set_ylabel('H=16ft') plt.suptitle('N = ' + str(50)) plt.show() Explanation: These plotting tools are helpful for visualizing how the forces are distributed in the structures. By plotting three structures with varying values of H, it's clear that the forces move through the structures in different ways. When H=16ft, the most loaded members are on the extremeties of the structure, meaning that more length is being effectively utilized for its load bearing capability. On the other hand, when H=4ft, the most loaded members are the ones that are left-most and inner-most. These members are also the shortest members of the structure and so the structure is not utilizing the longer members well. 
End of explanation def plot_truss_displacements(p,N1,N2,ax,forces,d,scale=1.0): # PLOT TRUSS STRUCTURE WITH FORCES print(scale) p_disp = np.copy(p) p_disp[:,0] = p[:,0] + d.dx*np.exp(scale) p_disp[:,1] = p[:,1] + d.dz*np.exp(scale) # assumes unified p array (both tophalf and bottomhalf) ax.plot(p_disp[:,0], p_disp[:,1],'bo',MarkerSize=4) # joint location colors = plt.cm.gray(1-norm(forces)-0.25) for i in range(len(N1)): ax.plot([p[N1[i],0],p[N2[i],0]],[p[N1[i],1],p[N2[i],1]], c=np.ravel(colors[i]),lw=norm(forces)[i]*4+0.5, alpha=0.5) # beam elements colors = plt.cm.rainbow(norm(forces)) for i in range(len(N1)): ax.plot([p_disp[N1[i],0],p_disp[N2[i],0]],[p_disp[N1[i],1],p_disp[N2[i],1]], c=np.ravel(colors[i]),lw=norm(forces)[i]*6+0.5) # beam elements ax.set_aspect('equal', 'datalim') ax.set_xlim([-10,50]) ax.set_yticks([]) return fig def interactive_plot(scale,h,n): ax = plt.subplot(111) [f, pp, n1, n2] = setup_frame3dd_simulation(n,h,40,1.0) [forces,d] = simulate_truss_disp(f) [pl,axf] = calculate_metric(forces, pp, n1, n2) plot_truss_displacements(pp,n1,n2,ax,axf,d,scale) plt.show() plt.close('all') interact(interactive_plot,scale=(0.1,10),h=(4,16,1),n=(1,8,1)) Explanation: Using interactive widgets, we can interactively generate the structure and visualize the deformation... End of explanation pls = np.zeros([3,5]) h = 4 print("H = " + str(h) +"ft") for i in range(5): [f, p, N1, N2] = setup_frame3dd_simulation(i+1,h,40) forces = simulate_truss(f) [pl, ax_f] = calculate_metric(forces, p, N1, N2) pls[0,i] = pl print("blp = " + str(i+1) + " sum(p*L) = " + str(pl)) h = 8 print("H = " + str(h) +"ft") for i in range(5): [f, p, N1, N2] = setup_frame3dd_simulation(i+1,h,40) forces = simulate_truss(f) [pl, ax_f] = calculate_metric(forces, p, N1, N2) pls[1,i] = pl print("blp = " + str(i+1) + " sum(p*L) = " + str(pl)) h = 16 print("H = " + str(h) +"ft") for i in range(5): [f, p, N1, N2] = setup_frame3dd_simulation(i+1,h,40) forces = simulate_truss(f) [pl, ax_f] = calculate_metric(forces, p, N1, N2) pls[2,i] = pl print("blp = " + str(i+1) + " sum(p*L) = " + str(pl)) plt.close('all') fig = plt.figure() ax = fig.add_subplot(111, projection='3d') X = [4,8,16]*np.ones([5,3]) Y = [1,2,3,4,5]*np.ones([3,5]) ax.scatter(X.T,Y,pls,s=norm(pls)*100) cset = ax.contourf(X.T, Y, pls, zdir='z', offset=100, cmap=plt.cm.coolwarm,extend3d=True) # ax.plot_surface(X.T,Y,pls,rstride=1, cstride=1,cmap=plt.cm.coolwarm) ax.set_xlabel('Height (ft)') ax.set_ylabel('# of Baseline Points') ax.set_zlabel('Performance Metric') plt.show() Explanation: Now we can do a parametric sweep over a range of heights and lengths and determine the relative performance of the structures. 
(Lower values are better for the performance metric) End of explanation plt.close('all') f, axarr = plt.subplots(2, 2) [f, p, N1, N2] = setup_frame3dd_simulation(5,16,40,0.5) forces = simulate_truss(f) [pl,ax_f] = calculate_metric(forces, p, N1, N2) plot_truss_forces(p,N1,N2,axarr[0,0],ax_f) axarr[0,0].set_xlim([-10,30]) axarr[0,0].set_ylabel('x:0.5, y:1.0') [f, p, N1, N2] = setup_frame3dd_simulation(5,16,20,1.0) forces = simulate_truss(f) [pl,ax_f] = calculate_metric(forces, p, N1, N2) plot_truss_forces_alpha(p,N1,N2,axarr[0,0],ax_f) axarr[0,0].set_xlim([-10,30]) axarr[0,0].set_ylabel('x:0.5, y:1.0') [f, p, N1, N2] = setup_frame3dd_simulation(5,16,40,2.0) forces = simulate_truss(f) [pl,ax_f] = calculate_metric(forces, p, N1, N2) plot_truss_forces(p,N1,N2,axarr[1,0],ax_f) axarr[1,0].set_xlim([-10,90]) axarr[1,0].set_ylabel('x:2.0, y:1.0') [f, p, N1, N2] = setup_frame3dd_simulation(5,16,80,1.0) forces = simulate_truss(f) [pl,ax_f] = calculate_metric(forces, p, N1, N2) plot_truss_forces_alpha(p,N1,N2,axarr[1,0],ax_f) axarr[1,0].set_xlim([-10,90]) axarr[1,0].set_ylabel('x:2.0, y:1.0') [f, p, N1, N2] = setup_frame3dd_simulation(5,16,40,1.0,0.5) forces = simulate_truss(f) [pl,ax_f] = calculate_metric(forces, p, N1, N2) plot_truss_forces(p,N1,N2,axarr[0,1],ax_f) axarr[0,1].set_xlim([-10,50]) axarr[0,1].set_ylabel('x:1.0, y:0.5') [f, p, N1, N2] = setup_frame3dd_simulation(5,8,40,1.0,1.0) forces = simulate_truss(f) [pl,ax_f] = calculate_metric(forces, p, N1, N2) plot_truss_forces_alpha(p,N1,N2,axarr[0,1],ax_f) axarr[0,1].set_xlim([-10,50]) axarr[0,1].set_ylabel('x:1.0, y:0.5') [f, p, N1, N2] = setup_frame3dd_simulation(5,16,40,1.0,2.0) forces = simulate_truss(f) [pl,ax_f] = calculate_metric(forces, p, N1, N2) plot_truss_forces(p,N1,N2,axarr[1,1],ax_f) axarr[1,1].set_xlim([-10,50]) axarr[1,1].set_ylabel('x:1.0, y:2.0') [f, p, N1, N2] = setup_frame3dd_simulation(5,32,40,1.0,1.0) forces = simulate_truss(f) [pl,ax_f] = calculate_metric(forces, p, N1, N2) plot_truss_forces_alpha(p,N1,N2,axarr[1,1],ax_f) axarr[1,1].set_xlim([-10,50]) axarr[1,1].set_ylabel('x:1.0, y:2.0') plt.suptitle('N = ' + str(50)) plt.show() Explanation: From the plot, it is clear that optimal performance is achieved with more members in the structure (larger number of baseline points) and with a larger distance between support nodes. The plots above, which color code the beam elements with their amount of axial loading, demonstrate that by spreading the support nodes apart, longer beam elements are better utilized. 
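The metric behind these comparisons is the sum over members of |axial force| times member length, which the sweep above prints as sum(p*L); since the material needed in a fully stressed member scales with |F|*L, lower totals roughly mean less material. A minimal standalone version, assuming the same p/N1/N2/forces layout; this is a plausible stand-in for calculate_metric, not its actual source:

import numpy as np

def sum_force_length(p, N1, N2, forces):
    total = 0.0
    for i in range(len(N1)):
        L = np.hypot(p[N1[i], 0] - p[N2[i], 0], p[N1[i], 1] - p[N2[i], 1])
        total += abs(forces[i]) * L  # lower total means less material needed
    return total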
<h3>Stretching and Scaling the Structures</h3> We can analyze how well these structures perform if they're stretched and scaled from their original dimensions: End of explanation # Generate the structures n_xscale = 10 n_yscale = 10 xscales = np.linspace(0.5,2.0,n_xscale) yscales = np.linspace(0.5,2.0,n_xscale) pl3d = np.zeros([n_xscale,n_yscale]) pl3d2 = np.zeros([n_xscale,n_yscale]) for i in range(n_xscale): for j in range(n_yscale): # Stretching the michell structure [f, p, N1, N2] = setup_frame3dd_simulation(5,16,40,xscales[i],yscales[j]) forces = simulate_truss(f) [pl,ax_f] = calculate_metric(forces, p, N1, N2) pl3d[i,j] = pl # VS # Generating a michell structure based on stretched boundary conditions [f, p, N1, N2] = setup_frame3dd_simulation(5,16*yscales[j],40*xscales[i],1.0,1.0) forces = simulate_truss(f) [pl,ax_f] = calculate_metric(forces, p, N1, N2) pl3d2[i,j] = pl # Plot the results plt.close('all') fig = plt.figure() ax = fig.add_subplot(111, projection='3d') xscale3d = xscales*np.ones([n_xscale,n_yscale]) yscale3d = yscales*np.ones([n_yscale,n_xscale]) ax.plot_surface(xscale3d.T,yscale3d,pl3d,rstride=1, cstride=1, cmap=plt.cm.coolwarm, alpha=1.0) ax.set_xlabel('X-Scale') ax.set_ylabel('Y-Scale') ax.set_zlabel('Performance Metric') plt.show() Explanation: Let's analyze the performance for an 10x10 array of X-scale and Y-scale values: End of explanation # Plot the results plt.close('all') fig = plt.figure() ax = fig.add_subplot(111, projection='3d') xscale3d = xscales*np.ones([n_xscale,n_yscale]) yscale3d = yscales*np.ones([n_yscale,n_xscale]) ax.plot_surface(xscale3d.T,yscale3d,(pl3d-pl3d2)/pl3d2*100,rstride=1, cstride=1,cmap=plt.cm.coolwarm) ax.set_xlabel('X-Scale') ax.set_ylabel('Y-Scale') ax.set_zlabel('% Change in Performance') plt.show() Explanation: This plot shows that there is a clear loss in performance for structures scaled in the X-axis (which is to say, stretched to be thinner). From this plot, though, it's unclear whether squishing the structure (making it shorter and fatter) causes a loss in performance. To get a better sense of the effect scaling has on the performance of these structures, it's helpful to compare them to a true Michell truss generated from stretched/squished boundary conditions. These are shown in gray in the figure of 4 subplots above. 
If, instead of plotting the absolute performance metric, we plot the relative difference between the deformed trusses and the ideal ones, then we get a plot like this: End of explanation # Generate the structures n_xscale = 20 s = np.linspace(0.5,2,n_xscale) pm = np.zeros([n_xscale]) pm2 = np.zeros([n_xscale]) for i in range(len(s)): y_scale = 1/s[i] x_scale = s[i] # Stretching the michell structure [f, p, N1, N2] = setup_frame3dd_simulation(5,16,40,x_scale,y_scale) forces = simulate_truss(f) [pl,ax_f] = calculate_metric(forces, p, N1, N2) pm[i] = pl # VS # Generating a michell structure based on stretched boundary conditions [f, p, N1, N2] = setup_frame3dd_simulation(5,16*y_scale,40*x_scale,1.0,1.0) forces = simulate_truss(f) [pl,ax_f] = calculate_metric(forces, p, N1, N2) pm2[i] = pl # Plot the results plt.close('all') f,axarr = plt.subplots(2,1) axarr[0].plot(s,pm) axarr[0].plot(s,pm2) axarr[0].legend(['Deformed Truss','Michell Truss with deformed boundaries'], fontsize=12,loc=2) # pm = np.zeros([n_xscale]) # pm2 = np.zeros([n_xscale]) # for i in range(len(s)): # y_scale = 1#1/s[i] # x_scale = s[i]#s[i] # # Stretching the michell structure # [f, p, N1, N2] = setup_frame3dd_simulation(5,16,40,x_scale,y_scale) # forces = simulate_truss(f) # [pl,ax_f] = calculate_metric(forces, p, N1, N2) # pm[i] = pl # # VS # # Generating a michell structure based on stretched boundary conditions # [f, p, N1, N2] = setup_frame3dd_simulation(5,16*y_scale,40*x_scale,1.0,1.0) # forces = simulate_truss(f) # [pl,ax_f] = calculate_metric(forces, p, N1, N2) # pm2[i] = pl axarr[1].plot(s,(((pm-pm2)/pm2)*100)) axarr[1].set_xlabel('X-Scale') axarr[1].set_ylabel('% change in performance') axarr[0].set_ylabel('Performance Metric') axarr[0].set_xlim([0.5,2.0]) axarr[1].set_xlim([0.5,2.0]) plt.show() Explanation: From this, it's clear that when X-scale = Y-scale the performance of the structure is nominal. This makes sense since it is just uniformly scaling the structure to be larger or smaller. However, when the structure is made thinner or thicker, its performance becomes sub-optimal. Furthermore, the plot shows that it is more sub-optimal to stretch the structure lengthwise (making it thinner) than it is to squish the structure (making it shorter and wider). We can get a better sense of this performance loss by looking at a cross section of the surface along the plane orthogonal to x_scale = y_scale: End of explanation
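One way to attach a number to that asymmetry is to interpolate the penalty curve for the scale at which the loss crosses a chosen threshold. A sketch reusing the s, pm and pm2 arrays from the sweep above; the 5% threshold is arbitrary and this assumes the penalty rises monotonically away from s = 1:

penalty = (pm - pm2) / pm2 * 100.0  # % loss vs. the regenerated Michell truss
thresh = 5.0
stretch = s >= 1.0
squish = s <= 1.0
s_stretch = np.interp(thresh, penalty[stretch], s[stretch])
s_squish = np.interp(thresh, penalty[squish][::-1], s[squish][::-1])
print('%.0f%% penalty reached near x-scale %.2f when stretching and %.2f when squishing'
      % (thresh, s_stretch, s_squish))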
5,129
Given the following text description, write Python code to implement the functionality described below step by step Description: Sebastian Raschka, 2015 Python Machine Learning Essentials Chapter 11 - Working with Unlabeled Data – Clustering Analysis Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s). Step1: <br> <br> Sections Grouping objects by similarity using k-means Using the elbow method to find the optimal number of clusters Quantifying the quality of clustering via silhouette plots Organizing clusters as a hierarchical tree Performing hierarchical clustering on a distance matrix Attaching dendrograms to a heat map Applying agglomerative clustering via scikit-learn Locating regions of high density via DBSCAN <br> <br> Grouping objects by similarity using k-means [back to top] Step2: <br> Using the elbow method to find the optimal number of clusters [back to top] Step3: <br> Quantifying the quality of clustering via silhouette plots [back to top] Step4: Comparison to "bad" clustering Step5: <br> <br> Organizing clusters as a hierarchical tree [back to top] Step6: <br> Performing hierarchical clustering on a distance matrix [back to top] Step7: We can either pass a condensed distance matrix (upper triangular) from the pdist function, or we can pass the "original" data array and define the 'euclidean' metric as function argument n linkage. However, we should nott pass the squareform distance matrix, which would yield different distance values although the overall clustering could be the same. Step8: <br> Attaching dendrograms to a heat map [back to top] Step9: <br> Applying agglomerative clustering via scikit-learn [back to top] Step10: <br> <br> Locating regions of high density via DBSCAN [back to top] Step11: K-means and hierarchical clustering Step12: Density-based clustering
Python Code: %load_ext watermark %watermark -a 'Sebastian Raschka' -u -d -v -p numpy,pandas,matplotlib,scipy,scikit-learn # to install watermark just uncomment the following line: #%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py Explanation: Sebastian Raschka, 2015 Python Machine Learning Essentials Chapter 11 - Working with Unlabeled Data – Clustering Analysis Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s). End of explanation from sklearn.datasets import make_blobs X, y = make_blobs(n_samples=150, n_features=2, centers=3, cluster_std=0.5, shuffle=True, random_state=0) import matplotlib.pyplot as plt %matplotlib inline plt.scatter(X[:,0], X[:,1], c='white', marker='o', s=50) plt.grid() plt.tight_layout() #plt.savefig('./figures/spheres.png', dpi=300) plt.show() from sklearn.cluster import KMeans km = KMeans(n_clusters=3, init='random', n_init=10, max_iter=300, tol=1e-04, random_state=0) y_km = km.fit_predict(X) plt.scatter(X[y_km==0,0], X[y_km==0,1], s=50, c='lightgreen', marker='s', label='cluster 1') plt.scatter(X[y_km==1,0], X[y_km==1,1], s=50, c='orange', marker='o', label='cluster 2') plt.scatter(X[y_km==2,0], X[y_km==2,1], s=50, c='lightblue', marker='v', label='cluster 3') plt.scatter(km.cluster_centers_[:,0], km.cluster_centers_[:,1], s=250, marker='*', c='red', label='centroids') plt.legend() plt.grid() plt.tight_layout() #plt.savefig('./figures/centroids.png', dpi=300) plt.show() Explanation: <br> <br> Sections Grouping objects by similarity using k-means Using the elbow method to find the optimal number of clusters Quantifying the quality of clustering via silhouette plots Organizing clusters as a hierarchical tree Performing hierarchical clustering on a distance matrix Attaching dendrograms to a heat map Applying agglomerative clustering via scikit-learn Locating regions of high density via DBSCAN <br> <br> Grouping objects by similarity using k-means [back to top] End of explanation print('Distortion: %.2f' % km.inertia_) distortions = [] for i in range(1, 11): km = KMeans(n_clusters=i, init='k-means++', n_init=10, max_iter=300, random_state=0) km.fit(X) distortions .append(km.inertia_) plt.plot(range(1,11), distortions , marker='o') plt.xlabel('Number of clusters') plt.ylabel('Distortion') plt.tight_layout() #plt.savefig('./figures/elbow.png', dpi=300) plt.show() Explanation: <br> Using the elbow method to find the optimal number of clusters [back to top] End of explanation import numpy as np from matplotlib import cm from sklearn.metrics import silhouette_samples km = KMeans(n_clusters=3, init='k-means++', n_init=10, max_iter=300, tol=1e-04, random_state=0) y_km = km.fit_predict(X) cluster_labels = np.unique(y_km) n_clusters = cluster_labels.shape[0] silhouette_vals = silhouette_samples(X, y_km, metric='euclidean') y_ax_lower, y_ax_upper = 0, 0 yticks = [] for i, c in enumerate(cluster_labels): c_silhouette_vals = silhouette_vals[y_km == c] c_silhouette_vals.sort() y_ax_upper += len(c_silhouette_vals) color = cm.jet(i / n_clusters) plt.barh(range(y_ax_lower, y_ax_upper), c_silhouette_vals, height=1.0, edgecolor='none', color=color) yticks.append((y_ax_lower + y_ax_upper) / 2) y_ax_lower += len(c_silhouette_vals) silhouette_avg = np.mean(silhouette_vals) plt.axvline(silhouette_avg, color="red", linestyle="--") plt.yticks(yticks, cluster_labels + 1) plt.ylabel('Cluster') plt.xlabel('Silhouette coefficient') plt.tight_layout() 
# plt.savefig('./figures/silhouette.png', dpi=300) plt.show() Explanation: <br> Quantifying the quality of clustering via silhouette plots [back to top] End of explanation km = KMeans(n_clusters=2, init='k-means++', n_init=10, max_iter=300, tol=1e-04, random_state=0) y_km = km.fit_predict(X) plt.scatter(X[y_km==0,0], X[y_km==0,1], s=50, c='lightgreen', marker='s', label='cluster 1') plt.scatter(X[y_km==1,0], X[y_km==1,1], s=50, c='orange', marker='o', label='cluster 2') plt.scatter(km.cluster_centers_[:,0], km.cluster_centers_[:,1], s=250, marker='*', c='red', label='centroids') plt.legend() plt.grid() plt.tight_layout() #plt.savefig('./figures/centroids_bad.png', dpi=300) plt.show() cluster_labels = np.unique(y_km) n_clusters = cluster_labels.shape[0] silhouette_vals = silhouette_samples(X, y_km, metric='euclidean') y_ax_lower, y_ax_upper = 0, 0 yticks = [] for i, c in enumerate(cluster_labels): c_silhouette_vals = silhouette_vals[y_km == c] c_silhouette_vals.sort() y_ax_upper += len(c_silhouette_vals) color = cm.jet(i / n_clusters) plt.barh(range(y_ax_lower, y_ax_upper), c_silhouette_vals, height=1.0, edgecolor='none', color=color) yticks.append((y_ax_lower + y_ax_upper) / 2) y_ax_lower += len(c_silhouette_vals) silhouette_avg = np.mean(silhouette_vals) plt.axvline(silhouette_avg, color="red", linestyle="--") plt.yticks(yticks, cluster_labels + 1) plt.ylabel('Cluster') plt.xlabel('Silhouette coefficient') plt.tight_layout() # plt.savefig('./figures/silhouette_bad.png', dpi=300) plt.show() Explanation: Comparison to "bad" clustering: End of explanation import pandas as pd import numpy as np np.random.seed(123) variables = ['X', 'Y', 'Z'] labels = ['ID_0','ID_1','ID_2','ID_3','ID_4'] X = np.random.random_sample([5,3])*10 df = pd.DataFrame(X, columns=variables, index=labels) df Explanation: <br> <br> Organizing clusters as a hierarchical tree [back to top] End of explanation from scipy.spatial.distance import pdist,squareform row_dist = pd.DataFrame(squareform(pdist(df, metric='euclidean')), columns=labels, index=labels) row_dist Explanation: <br> Performing hierarchical clustering on a distance matrix [back to top] End of explanation # 1. incorrect approach: Squareform distance matrix from scipy.cluster.hierarchy import linkage row_clusters = linkage(row_dist, method='complete', metric='euclidean') pd.DataFrame(row_clusters, columns=['row label 1', 'row label 2', 'distance', 'no. of items in clust.'], index=['cluster %d' %(i+1) for i in range(row_clusters.shape[0])]) # 2. correct approach: Condensed distance matrix row_clusters = linkage(pdist(df, metric='euclidean'), method='complete') pd.DataFrame(row_clusters, columns=['row label 1', 'row label 2', 'distance', 'no. of items in clust.'], index=['cluster %d' %(i+1) for i in range(row_clusters.shape[0])]) # 3. correct approach: Input sample matrix row_clusters = linkage(df.values, method='complete', metric='euclidean') pd.DataFrame(row_clusters, columns=['row label 1', 'row label 2', 'distance', 'no. 
of items in clust.'], index=['cluster %d' %(i+1) for i in range(row_clusters.shape[0])]) from scipy.cluster.hierarchy import dendrogram # make dendrogram black (part 1/2) # from scipy.cluster.hierarchy import set_link_color_palette # set_link_color_palette(['black']) row_dendr = dendrogram(row_clusters, labels=labels, # make dendrogram black (part 2/2) # color_threshold=np.inf ) plt.tight_layout() plt.ylabel('Euclidean distance') #plt.savefig('./figures/dendrogram.png', dpi=300, # bbox_inches='tight') plt.show() Explanation: We can either pass a condensed distance matrix (upper triangular) from the pdist function, or we can pass the "original" data array and define the 'euclidean' metric as function argument n linkage. However, we should nott pass the squareform distance matrix, which would yield different distance values although the overall clustering could be the same. End of explanation # plot row dendrogram fig = plt.figure(figsize=(8,8)) axd = fig.add_axes([0.09,0.1,0.2,0.6]) row_dendr = dendrogram(row_clusters, orientation='right') # reorder data with respect to clustering df_rowclust = df.ix[row_dendr['leaves'][::-1]] axd.set_xticks([]) axd.set_yticks([]) # remove axes spines from dendrogram for i in axd.spines.values(): i.set_visible(False) # plot heatmap axm = fig.add_axes([0.23,0.1,0.6,0.6]) # x-pos, y-pos, width, height cax = axm.matshow(df_rowclust, interpolation='nearest', cmap='hot_r') fig.colorbar(cax) axm.set_xticklabels([''] + list(df_rowclust.columns)) axm.set_yticklabels([''] + list(df_rowclust.index)) # plt.savefig('./figures/heatmap.png', dpi=300) plt.show() Explanation: <br> Attaching dendrograms to a heat map [back to top] End of explanation from sklearn.cluster import AgglomerativeClustering ac = AgglomerativeClustering(n_clusters=2, affinity='euclidean', linkage='complete') labels = ac.fit_predict(X) print('Cluster labels: %s' % labels) Explanation: <br> Applying agglomerative clustering via scikit-learn [back to top] End of explanation from sklearn.datasets import make_moons X, y = make_moons(n_samples=200, noise=0.05, random_state=0) plt.scatter(X[:,0], X[:,1]) plt.tight_layout() #plt.savefig('./figures/moons.png', dpi=300) plt.show() Explanation: <br> <br> Locating regions of high density via DBSCAN [back to top] End of explanation f, (ax1, ax2) = plt.subplots(1, 2, figsize=(8,3)) km = KMeans(n_clusters=2, random_state=0) y_km = km.fit_predict(X) ax1.scatter(X[y_km==0,0], X[y_km==0,1], c='lightblue', marker='o', s=40, label='cluster 1') ax1.scatter(X[y_km==1,0], X[y_km==1,1], c='red', marker='s', s=40, label='cluster 2') ax1.set_title('K-means clustering') ac = AgglomerativeClustering(n_clusters=2, affinity='euclidean', linkage='complete') y_ac = ac.fit_predict(X) ax2.scatter(X[y_ac==0,0], X[y_ac==0,1], c='lightblue', marker='o', s=40, label='cluster 1') ax2.scatter(X[y_ac==1,0], X[y_ac==1,1], c='red', marker='s', s=40, label='cluster 2') ax2.set_title('Agglomerative clustering') plt.legend() plt.tight_layout() #plt.savefig('./figures/kmeans_and_ac.png', dpi=300) plt.show() Explanation: K-means and hierarchical clustering: End of explanation from sklearn.cluster import DBSCAN db = DBSCAN(eps=0.2, min_samples=5, metric='euclidean') y_db = db.fit_predict(X) plt.scatter(X[y_db==0,0], X[y_db==0,1], c='lightblue', marker='o', s=40, label='cluster 1') plt.scatter(X[y_db==1,0], X[y_db==1,1], c='red', marker='s', s=40, label='cluster 2') plt.legend() plt.tight_layout() #plt.savefig('./figures/moons_dbscan.png', dpi=300) plt.show() Explanation: Density-based clustering: 
End of explanation
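As a practical aside on the DBSCAN step: a common way to pick eps is the k-distance heuristic, where every point's distance to its min_samples-th nearest neighbor is sorted and the "knee" of the resulting curve suggests a good eps. A minimal sketch, assuming the two-moons X array from the cells above:

import numpy as np
from sklearn.neighbors import NearestNeighbors

k = 5  # same as min_samples used for DBSCAN above
nbrs = NearestNeighbors(n_neighbors=k).fit(X)
distances, _ = nbrs.kneighbors(X)  # column 0 is each point's distance to itself
kth_dist = np.sort(distances[:, -1])
plt.plot(kth_dist)
plt.xlabel('points sorted by distance')
plt.ylabel('distance to %d-th neighbor' % k)
plt.tight_layout()
plt.show()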
5,130
Given the following text description, write Python code to implement the functionality described below step by step Description: Author Step1: First let's check if there are new or deleted files (only matching by file names). Step2: So we have the same set of files in both versions Step3: Let's make sure the structure hasn't changed Step4: All files have the same columns as before Step5: There are some minor changes in many files, but based on my knowledge of ROME, none from the main files. The most interesting ones are in referentiel_appellation, item, and liens_rome_referentiels, so let's see more precisely. Step6: Alright, so the only change seems to be 4 new jobs added. Let's take a look (only showing interesting fields) Step7: These seems to be refinements of existing jobs, but that's fine. OK, let's check at the changes in items Step8: As anticipated it is a very minor change (hard to see it visually) Step9: The new ones seem legit to me. Let's check the obsolete one Step10: Hmm, it seems to be simple renaming, but they preferred to create a new one and retire the old one. The changes in liens_rome_referentiels include changes for those items, so let's only check the changes not related to those. Step11: So in addition to the added and removed items, there are few fixes. Let's have a look at them
Python Code: import collections import glob import os from os import path import matplotlib_venn import pandas as pd rome_path = path.join(os.getenv('DATA_FOLDER'), 'rome/csv') OLD_VERSION = '337' NEW_VERSION = '338' old_version_files = frozenset(glob.glob(rome_path + '/*{}*'.format(OLD_VERSION))) new_version_files = frozenset(glob.glob(rome_path + '/*{}*'.format(NEW_VERSION))) Explanation: Author: Pascal, pascal@bayesimpact.org Date: 2019-03-22 ROME update from v337 to v338 In March 2019 a new version of the ROME was released. I want to investigate what changed and whether we need to do anything about it. You might not be able to reproduce this notebook, mostly because it requires to have the two versions of the ROME in your data/rome/csv folder which happens only just before we switch to v338. You will have to trust me on the results ;-) Skip the run test because it requires older versions of the ROME. End of explanation new_files = new_version_files - frozenset(f.replace(OLD_VERSION, NEW_VERSION) for f in old_version_files) deleted_files = old_version_files - frozenset(f.replace(NEW_VERSION, OLD_VERSION) for f in new_version_files) print('{:d} new files'.format(len(new_files))) print('{:d} deleted files'.format(len(deleted_files))) Explanation: First let's check if there are new or deleted files (only matching by file names). End of explanation # Load all ROME datasets for the two versions we compare. VersionedDataset = collections.namedtuple('VersionedDataset', ['basename', 'old', 'new']) rome_data = [VersionedDataset( basename=path.basename(f), old=pd.read_csv(f.replace(NEW_VERSION, OLD_VERSION)), new=pd.read_csv(f)) for f in sorted(new_version_files)] def find_rome_dataset_by_name(data, partial_name): for dataset in data: if 'unix_{}_v{}_utf8.csv'.format(partial_name, NEW_VERSION) == dataset.basename: return dataset raise ValueError('No dataset named {}, the list is\n{}'.format(partial_name, [d.basename for d in data])) Explanation: So we have the same set of files in both versions: good start. Now let's set up a dataset that, for each table, links both the old and the new file together. End of explanation for dataset in rome_data: if set(dataset.old.columns) != set(dataset.new.columns): print('Columns of {} have changed.'.format(dataset.basename)) Explanation: Let's make sure the structure hasn't changed: End of explanation same_row_count_files = 0 for dataset in rome_data: diff = len(dataset.new.index) - len(dataset.old.index) if diff > 0: print('{:d}/{:d} values added in {}'.format( diff, len(dataset.new.index), dataset.basename)) elif diff < 0: print('{:d}/{:d} values removed in {}'.format( -diff, len(dataset.old.index), dataset.basename)) else: same_row_count_files += 1 print('{:d}/{:d} files with the same number of rows'.format( same_row_count_files, len(rome_data))) Explanation: All files have the same columns as before: still good. Now let's see for each file if there are more or less rows. End of explanation jobs = find_rome_dataset_by_name(rome_data, 'referentiel_appellation') new_jobs = set(jobs.new.code_ogr) - set(jobs.old.code_ogr) obsolete_jobs = set(jobs.old.code_ogr) - set(jobs.new.code_ogr) stable_jobs = set(jobs.new.code_ogr) & set(jobs.old.code_ogr) matplotlib_venn.venn2((len(obsolete_jobs), len(new_jobs), len(stable_jobs)), (OLD_VERSION, NEW_VERSION)); Explanation: There are some minor changes in many files, but based on my knowledge of ROME, none from the main files. 
The most interesting ones are in referentiel_appellation, item, and liens_rome_referentiels, so let's see more precisely. End of explanation pd.options.display.max_colwidth = 2000 jobs.new[jobs.new.code_ogr.isin(new_jobs)][['code_ogr', 'libelle_appellation_long', 'code_rome']] Explanation: Alright, so the only change seems to be 4 new jobs added. Let's take a look (only showing interesting fields): End of explanation items = find_rome_dataset_by_name(rome_data, 'item') new_items = set(items.new.code_ogr) - set(items.old.code_ogr) obsolete_items = set(items.old.code_ogr) - set(items.new.code_ogr) stable_items = set(items.new.code_ogr) & set(items.old.code_ogr) matplotlib_venn.venn2((len(obsolete_items), len(new_items), len(stable_items)), (OLD_VERSION, NEW_VERSION)); Explanation: These seems to be refinements of existing jobs, but that's fine. OK, let's check at the changes in items: End of explanation items.new[items.new.code_ogr.isin(new_items)].head() Explanation: As anticipated it is a very minor change (hard to see it visually): there is one obsolete item and 2 new ones have been created. Let's have a look at them. End of explanation items.old[items.old.code_ogr.isin(obsolete_items)].head() Explanation: The new ones seem legit to me. Let's check the obsolete one: End of explanation links = find_rome_dataset_by_name(rome_data, 'liens_rome_referentiels') old_links_on_stable_items = links.old[links.old.code_ogr.isin(stable_items)] new_links_on_stable_items = links.new[links.new.code_ogr.isin(stable_items)] old = old_links_on_stable_items[['code_rome', 'code_ogr']] new = new_links_on_stable_items[['code_rome', 'code_ogr']] links_merged = old.merge(new, how='outer', indicator=True) links_merged['_diff'] = links_merged._merge.map({'left_only': 'removed', 'right_only': 'added'}) links_merged._diff.value_counts() Explanation: Hmm, it seems to be simple renaming, but they preferred to create a new one and retire the old one. The changes in liens_rome_referentiels include changes for those items, so let's only check the changes not related to those. End of explanation job_group_names = find_rome_dataset_by_name(rome_data, 'referentiel_code_rome').new.set_index('code_rome').libelle_rome item_names = items.new.set_index('code_ogr').libelle.drop_duplicates() links_merged['job_group_name'] = links_merged.code_rome.map(job_group_names) links_merged['item_name'] = links_merged.code_ogr.map(item_names) display(links_merged[links_merged._diff == 'removed'].dropna().head(5)) links_merged[links_merged._diff == 'added'].dropna().head(5) Explanation: So in addition to the added and removed items, there are few fixes. Let's have a look at them: End of explanation
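If the diff needs to circulate outside the notebook, the frames built above can be written out for review. A small sketch reusing links_merged, jobs and new_jobs from the cells above; the file names are arbitrary:

changes = links_merged[links_merged._diff.notnull()]
changes.to_csv('rome_v337_to_v338_link_changes.csv', index=False)
jobs.new[jobs.new.code_ogr.isin(new_jobs)].to_csv('rome_v338_new_jobs.csv', index=False)
print('saved {} link changes and {} new jobs'.format(len(changes), len(new_jobs)))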
5,131
Given the following text description, write Python code to implement the functionality described below step by step Description: <a href="https Step1: Chapter 10 - Dictionaries This notebook uses code snippets and explanation from this course The last type of container we will introduce in this topic is dictionaries. Programming is mostly about solving real-world problems as efficiently as possible, but it is also important to write and organize code in a human-readable fashion. A dictionary offers a kind of abstraction that comes in handy often Step2: However, you're not happy about the solution. Every time you request a grade, we need to first determine the position of the student in the list and then use that index + 1 to obtain the grade. That's pretty inefficient. The take-home message here is that lists are not really good if we want two pieces of information together. Dictionaries for the rescue! Step3: 2. How to create a dictionary Let's take another look at the student_grades dictionary. Step4: a dictionary is surrounded by curly brackets, and the key/value pairs are separated by commas. A dictionary consists of one or more key Step5: This does not (list as keys) Step6: Please note that the values in a dictionary can by any python object This works (integers as values) Step7: But this as well (lists as values) Step8: Please note that a dictionary can be empty (use dict()) Step9: 3. How to add items to a dictionary There is one very simple way in order to add a key Step10: Please note that dictionary keys should be unique identifiers for the values in the dictionary. Key Step11: 4. How to access data in a dictionary The most basic operation on a dictionary is a look-up. Simply enter the key and the dictionary returns the value. Step12: If the key is not in the dictionary, it will return a KeyError. Step13: In order to avoid getting an error, you can use an if-statement Step14: the keys method returns the keys in a dictionary Step15: the values method returns the values in a dictionary Step16: We can use the built-in functions to inspect the keys and values. For example Step17: However, what if we want to know which students got a 8 or higher? The items method is very useful for this scenario. Please carefully look at the following code snippet. Step18: The items method returns a list of tuples. We can combine what we have learnt about looping and tuples to access the keys (the students' names) and values (their grades) Step19: This also makes it possible to detect which students obtained a grade of 8 or higher. Step20: 5. Counting with a dictionary Dictionaries are very useful to derive statistics. For example, we can easily determine the frequency of each letter in a word. Step21: You can do this as well with lists Step22: Python actually has a module, which is very useful for counting. It's called collections. Step23: Feel free to start using this module after the assignment of this block. 6. Nested dictionaries Since dictionaries consists of key Step24: Please note that the value is in fact a dictionary Step25: In order to access the nested value, we must do a look up for each key on each nested level Step26: Practice questions Step27: Exercise 2 Step28: Exercise 3
Python Code: %%capture !wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Data.zip !wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/images.zip !wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Extra_Material.zip !unzip Data.zip -d ../ !unzip images.zip -d ./ !unzip Extra_Material.zip -d ../ !rm Data.zip !rm Extra_Material.zip !rm images.zip Explanation: <a href="https://colab.research.google.com/github/cltl/python-for-text-analysis/blob/colab/Chapters-colab/Chapter_10_Dictionaries.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> End of explanation student_grades = ['Frank', 8, 'Susan', 7, 'Guido', 10] student = 'Frank' index_of_student = student_grades.index(student) # we use the index method (list.index) print('grade of', student, 'is', student_grades[index_of_student + 1]) Explanation: Chapter 10 - Dictionaries This notebook uses code snippets and explanation from this course The last type of container we will introduce in this topic is dictionaries. Programming is mostly about solving real-world problems as efficiently as possible, but it is also important to write and organize code in a human-readable fashion. A dictionary offers a kind of abstraction that comes in handy often: it is a type of "associative memory" or key:value storage. It allows you to describe two pieces of data and their relationship. At the end of this chapter, you will: * understand the relevance of dictionaries * know how to create a dictionary * know how to add items to a dictionary * know how to inspect/extract items from a dictionary * know how to count with a dictionary * know how to create nested dictionaries If you want to learn more about these topics, you might find the following links useful: * Python documentation If you have questions about this chapter, please contact us (cltl.python.course@gmail.com). 1. Dictionaries Imagine that you are a teacher, and you've graded exams (everyone got high grades, of course). You would like to store this information so that you can ask the program for the grade of a particular student. After some thought, you first try to accomplish this using a list. End of explanation student_grades = {'Frank': 8, 'Susan': 7, 'Guido': 10} student_grades['Frank'] Explanation: However, you're not happy about the solution. Every time you request a grade, we need to first determine the position of the student in the list and then use that index + 1 to obtain the grade. That's pretty inefficient. The take-home message here is that lists are not really good if we want two pieces of information together. Dictionaries for the rescue! End of explanation student_grades = {'Frank': 8, 'Susan': 7, 'Guido': 10} Explanation: 2. How to create a dictionary Let's take another look at the student_grades dictionary. End of explanation student_grades = {'Frank': 8, 'Susan': 7, 'Guido': 10} Explanation: a dictionary is surrounded by curly brackets, and the key/value pairs are separated by commas. A dictionary consists of one or more key:value pairs. The key is the 'identifier' or "name" that is used to describe the value. the keys in a dictionary are unique the syntax for a key/value pair is: KEY : VALUE the keys (e.g. 'Frank') in a dictionary have to be immutable the values (e.g., 8) in a dictionary can by any python object a dictionary can be empty Please note that keys in a dictionary have to immutable. 
This works (strings as keys) End of explanation a_dict = {['a', 'list']: 8} Explanation: This does not (list as keys) End of explanation a_dict = {'Frank': 8, 'Susan': 7} Explanation: Please note that the values in a dictionary can by any python object This works (integers as values) End of explanation another_dict = {'Frank' : [8], 'Susan' : [7]} Explanation: But this as well (lists as values) End of explanation an_empty_dict = dict() another_empty_dict = {} # This works too, but it is less readable and confusing (looks similar to sets) print(type(another_empty_dict), type(an_empty_dict)) Explanation: Please note that a dictionary can be empty (use dict()): End of explanation a_dict = dict() print(a_dict) a_dict['Frank'] = 8 print(a_dict) Explanation: 3. How to add items to a dictionary There is one very simple way in order to add a key:value pair to a dictionary. Please look at the following code snippet: End of explanation a_dict = dict() a_dict['Frank'] = 8 print(a_dict) a_dict['Frank'] = 7 print(a_dict) a_dict['Frank'] = 9 print(a_dict) Explanation: Please note that dictionary keys should be unique identifiers for the values in the dictionary. Key:value pairs get overwritten if you assign a different value to an existing key. End of explanation student_grades = {'Frank': 8, 'Susan': 7, 'Guido': 10} print(student_grades['Frank']) Explanation: 4. How to access data in a dictionary The most basic operation on a dictionary is a look-up. Simply enter the key and the dictionary returns the value. End of explanation student_grades['Piet'] Explanation: If the key is not in the dictionary, it will return a KeyError. End of explanation key = 'Piet' if key in student_grades: print(student_grades[key]) else: print(key, 'not in dictionary') key = 'Frank' if key in student_grades: print(student_grades[key]) else: print(key, 'not in dictionary') Explanation: In order to avoid getting an error, you can use an if-statement End of explanation student_grades = {'Frank': 8, 'Susan': 7, 'Guido': 10} the_keys = student_grades.keys() print(the_keys) Explanation: the keys method returns the keys in a dictionary End of explanation the_values = student_grades.values() print(the_values) Explanation: the values method returns the values in a dictionary End of explanation the_values = student_grades.values() print(len(the_values)) # number of values in a dict print(max(the_values)) # highest value of values in a dict print(min(the_values)) # lowest value of values in a dict print(sum(the_values)) # sum of all values of values in a dict Explanation: We can use the built-in functions to inspect the keys and values. For example: End of explanation student_grades = {'Frank': 8, 'Susan': 7, 'Guido': 10} print(student_grades.items()) Explanation: However, what if we want to know which students got a 8 or higher? The items method is very useful for this scenario. Please carefully look at the following code snippet. End of explanation for key, value in student_grades.items(): # please note the tuple unpacking print(key, value) Explanation: The items method returns a list of tuples. We can combine what we have learnt about looping and tuples to access the keys (the students' names) and values (their grades): End of explanation for student, grade in student_grades.items(): if grade > 7: print(student, grade) Explanation: This also makes it possible to detect which students obtained a grade of 8 or higher. 
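For reference, one possible set of solutions to the three exercises (the quantities are of course arbitrary):

# Exercise 1: a shopping dictionary with counts per item
shopping = {'tomatoes': 3, 'bread': 1, 'chocolate bars': 4, 'pineapples': 2}

# Exercise 2: one line, without printing the whole dictionary
print(shopping['pineapples'])

# Exercise 3: loop with unpacking
for item, number in shopping.items():
    print('Item: {}, number: {}'.format(item, number))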
End of explanation letter2freq = dict() word = 'hippo' for letter in word: if letter in letter2freq: # add 1 to the dictionary if the keys exists letter2freq[letter] += 1 # note: x +=1 does the same as x = x + 1 else: letter2freq[letter] = 1 # set default value to 1 if key does not exists print(letter, letter2freq) print() print(letter2freq) Explanation: 5. Counting with a dictionary Dictionaries are very useful to derive statistics. For example, we can easily determine the frequency of each letter in a word. End of explanation a_sentence = ['Obama', 'was', 'the', 'president', 'of', 'the', 'USA'] word2freq = dict() for word in a_sentence: if word in word2freq: # add 1 to the dictionary if the keys exists word2freq[word] += 1 else: word2freq[word] = 1 # set default value to 1 if key does not exists print(word, word2freq) print() print(word2freq) Explanation: You can do this as well with lists End of explanation from collections import Counter word_freq = Counter(['Obama', 'was', 'the', 'president', 'of', 'the', 'USA']) print(word_freq) Explanation: Python actually has a module, which is very useful for counting. It's called collections. End of explanation a_nested_dictionary = {'a_key': {'nested_key1': 1, 'nested_key2': 2, 'nested_key3': 3} } print(a_nested_dictionary) Explanation: Feel free to start using this module after the assignment of this block. 6. Nested dictionaries Since dictionaries consists of key:value pairs, we can actually make another dictionary the value of a dictionary. End of explanation print(a_nested_dictionary['a_key']) Explanation: Please note that the value is in fact a dictionary: End of explanation the_nested_value = a_nested_dictionary['a_key']['nested_key1'] print(the_nested_value) Explanation: In order to access the nested value, we must do a look up for each key on each nested level End of explanation # your code here Explanation: Practice questions: What do sets and dictionaries have in common? What do lists and tuples have in common? Can you add things to a list? Can you add things to a tuples? An overview: | property | set | list | tuple | dict keys | dict values | |------------------------------- |-------------------|-----------------|-------------|-----------|-------------| | mutable (can you add add/remove?) | yes | yes | no | yes | yes | | can contain duplicates | no | yes | yes | no | yes | | ordered | no | yes | yes | yes, but do not rely on it | depends on type of value | | finding element(s) | quick | slow | slow | quick | depends on type of value | | can contain | immutables | all | all | immutables | all | Exercises Exercise 1: You are tying to keep track of your groceries using a python dictionary. Please add 'tomatoes', 'bread', 'chocolate bars' and 'pineapples' to your shopping dictionary and assign values according to how many items of each you would like to buy. End of explanation # your code here Explanation: Exercise 2: Print the number of pineapples you would like to buy by using only one line of code and without printing the entire dictionary. End of explanation # you code here Explanation: Exercise 3: Use a loop and unpacking to print the items and numbers on your shopping list in the following format: Item: [Item], number: [number] e.g. Item: tomatoes, number: 3 End of explanation
5,132
Given the following text description, write Python code to implement the functionality described below step by step Description: First, we need to set up our test data. We'll use two relaxation modes that are themselves log-normally distributed. Step1: Now, let's construct the moduli. We'll have both a true version and a noisy version with some random noise added to simulate experimental variance. Step2: Now, we can build the model with PyMC3. I'll make 2 Step3: Now we can sample the models to get our parameter distributions Step4: Load trace
Python Code: def H(tau): g1 = 1; tau1 = 0.03; sd1 = 0.5; g2 = 7; tau2 = 10; sd2 = 0.5; term1 = g1/np.sqrt(2*sd1**2*np.pi) * np.exp(-(np.log10(tau/tau1)**2)/(2*sd1**2)) term2 = g2/np.sqrt(2*sd2**2*np.pi) * np.exp(-(np.log10(tau/tau2)**2)/(2*sd2**2)) return term1 + term2 Nfreq = 50 Nmodes = 30 w = np.logspace(-4,4,Nfreq).reshape((1,Nfreq)) tau = np.logspace(-np.log10(w.max()),-np.log10(w.min()),Nmodes).reshape((Nmodes,1)) # get equivalent discrete spectrum delta_log_tau = np.log10(tau[1]/tau[0]) g_true = (H(tau) * delta_log_tau).reshape((1,Nmodes)) plt.loglog(tau,H(tau), label='Continuous spectrum') plt.plot(tau.ravel(),g_true.ravel(), 'or', label='Equivalent discrete spectrum') plt.legend(loc=4) plt.xlabel(r'$\tau$') plt.ylabel(r'$H(\tau)$ or $g$') Explanation: First, we need to set up our test data. We'll use two relaxation modes that are themselves log-normally distributed. End of explanation wt = tau*w Kp = wt**2/(1+wt**2) Kpp = wt/(1+wt**2) noise_level = 0.02 Gp_true = np.dot(g_true,Kp) Gp_noise = Gp_true + Gp_true*noise_level*np.random.randn(Nfreq) Gpp_true = np.dot(g_true,Kpp) Gpp_noise = Gpp_true + Gpp_true*noise_level*np.random.randn(Nfreq) plt.loglog(w.ravel(),Gp_true.ravel(),label="True G'") plt.plot(w.ravel(),Gpp_true.ravel(), label='True G"') plt.plot(w.ravel(),Gp_noise.ravel(),'xr',label="Noisy G'") plt.plot(w.ravel(),Gpp_noise.ravel(),'+r',label='Noisy G"') plt.xlabel(r'$\omega$') plt.ylabel("Moduli") plt.legend(loc=4) Explanation: Now, let's construct the moduli. We'll have both a true version and a noisy version with some random noise added to simulate experimental variance. End of explanation noisyModel = pm.Model() with noisyModel: g = pm.Uniform('g', lower=Gp_noise.min()/1e4, upper=Gp_noise.max(), shape=g_true.shape) sd1 = pm.HalfNormal('sd1',tau=1) sd2 = pm.HalfNormal('sd2',tau=1) # we'll log-weight the moduli as in other fitting methods logGp = pm.Normal('logGp',mu=np.log(tt.dot(g,Kp)), sd=sd1, observed=np.log(Gp_noise)) logGpp = pm.Normal('logGpp',mu=np.log(tt.dot(g,Kpp)), sd=sd2, observed=np.log(Gpp_noise)) trueModel = pm.Model() with trueModel: g = pm.Uniform('g', lower=Gp_true.min()/1e4, upper=Gp_true.max(), shape=g_true.shape) sd1 = pm.HalfNormal('sd1',tau=1) sd2 = pm.HalfNormal('sd2',tau=1) # we'll log-weight the moduli as in other fitting methods logGp = pm.Normal('logGp',mu=np.log(tt.dot(g,Kp)), sd=sd1, observed=np.log(Gp_true)) logGpp = pm.Normal('logGpp',mu=np.log(tt.dot(g,Kpp)), sd=sd2, observed=np.log(Gpp_true)) Explanation: Now, we can build the model with PyMC3. I'll make 2: one with noise, and one without. 
End of explanation Nsamples = 5000 trueMapEstimate = pm.find_MAP(model=trueModel) with trueModel: trueTrace = pm.sample(Nsamples, start=trueMapEstimate) pm.backends.text.dump('./Double_Maxwell_true', trueTrace) noisyMapEstimate = pm.find_MAP(model=noisyModel) with noisyModel: noisyTrace = pm.sample(Nsamples, start=noisyMapEstimate) pm.backends.text.dump('./Double_Maxwell_noisy', noisyTrace) Explanation: Now we can sample the models to get our parameter distributions: End of explanation noisyTrace = pm.backends.text.load('./Double_Maxwell_noisy',model=noisyModel) trueTrace = pm.backends.text.load('./Double_Maxwell_true', model=trueModel) burn = 500 trueQ = pm.quantiles(trueTrace[burn:]) noisyQ = pm.quantiles(noisyTrace[burn:]) def plot_quantiles(Q,ax): ax.fill_between(tau.ravel(), y1=Q['g'][2.5], y2=Q['g'][97.5], color='c', alpha=0.25) ax.fill_between(tau.ravel(), y1=Q['g'][25], y2=Q['g'][75], color='c', alpha=0.5) ax.plot(tau.ravel(), Q['g'][50], 'b-') # sampling localization lines: ax.axvline(x=np.exp(np.pi/2)/w.max(), color='k', linestyle='--') ax.axvline(x=(np.exp(np.pi/2)*w.min())**-1, color='k', linestyle='--') fig,ax = plt.subplots(nrows=2, sharex=True, subplot_kw={'xscale':'log','yscale':'log', 'ylabel':'$g_i$'}) plot_quantiles(trueQ,ax[0]) plot_quantiles(noisyQ,ax[1]) # true spectrum trueSpectrumline0 = ax[0].plot(tau.ravel(), g_true.ravel(),'xr', label='True Spectrum') trueSpectrumline1 = ax[1].plot(tau.ravel(), g_true.ravel(),'xr', label='True Spectrum') ax[0].legend(loc=4) ax[0].set_title('Using True Moduli') ax[1].set_xlabel(r'$\tau$') ax[1].legend(loc=4) ax[1].set_title('Using Noisy Moduli') fig.set_size_inches(5,8) fig.savefig('True,Noisy_moduli_uniform_prior.png',dpi=500) Explanation: Load trace: End of explanation
5,133
Given the following text description, write Python code to implement the functionality described below step by step Description: Notebook Title <a class="tocSkip"> Make some notes here for the thought process, steps taken, TODOs. Imports Import deps Map all dependencies under categories, for easier tracking / readability. Also check Step2: Inserts for Jupyter Any kind of IPython/Jupyter related stuff Step3: Import data Make sure to check if the data is present in the targeted scope, and that the size is readable with current RAM. Feel free to have subheadings for multiple datasets, data descriptions etc. Step4: Main Pick a more flat structure (h1 headings with #) or nested structure appropriately - not too nested, not too cluttered. Use the bash, Luke Step5: Make use of IPython stuff F-strings (>= Python 3.6) can be combined with IPython's <kbd>display</kbd> module for fun and profit. Step6: For more nested json's or dictionaries, it's best to use something interactive like RenderJSON. Step8: Use slides, and decouple declaration from run (if you're not using "Hide codecell inputs"). Step9: Use tqdm for every run that takes more than a couple of seconds and can be tracked by some iterator.
Python Code: # BASE ------------------------------------ from datetime import datetime as dt nb_start = dt.now() # Be mindful when you have this activated. # import warnings # warnings.filterwarnings('ignore') import json from pathlib import Path from time import sleep # Display libs from IPython.display import display, HTML from tqdm import tqdm, tqdm_notebook tqdm.pandas() SEED = 24 %%time # ETL ------------------------------------ import numpy as np import pandas as pd # VIZ ------------------------------------ import matplotlib.cm as cm import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns import plotly.graph_objs as go import plotly.figure_factory as ff from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot init_notebook_mode(connected=True) import plotly.express as px import plotly.io as pio from plotly.tools import mpl_to_plotly # NETWORK ANALYSIS ------------------------------------ import networkx as nx import community as community_louvain Explanation: Notebook Title <a class="tocSkip"> Make some notes here for the thought process, steps taken, TODOs. Imports Import deps Map all dependencies under categories, for easier tracking / readability. Also check: https://github.com/xR86/ml-stuff/tree/master/scripts End of explanation # https://stackoverflow.com/a/37124230 import uuid from IPython.display import display_javascript, display_html, display import json class RenderJSON(object): def __init__(self, json_data): if isinstance(json_data, dict): self.json_str = json.dumps(json_data) else: self.json_str = json_data self.uuid = str(uuid.uuid4()) def _ipython_display_(self): display_html('<div id="{}" style="height: 100%; width:100%;"></div>'.format(self.uuid), raw=True) display_javascript( require(["https://rawgit.com/caldwell/renderjson/master/renderjson.js"], function() { document.getElementById('%s').appendChild(renderjson(%s)) }); % (self.uuid, self.json_str), raw=True) %%javascript /*Increase timeout to load properly*/ var rto = 120; console.log('[Custom]: Increase require() timeout to', rto, 'seconds.'); window.requirejs.config({waitSeconds: rto}); %%html <style> /* font for TODO */ @import url('https://fonts.googleapis.com/css?family=Oswald&display=swap'); .hl { padding: 0.25rem 0.3rem; border-radius: 5px; } /* used: https://www.color-hex.com/color-palette/87453 */ .hl.hl-yellow { background-color: rgba(204,246,43,0.5); /*#fdef41;*/ } .hl.hl-orange { background-color: rgba(255,150,42,0.5); } .hl.hl-magenta { background-color: rgba(244,73,211,0.5); } .hl.hl-blue { background-color: rgba(80,127,255,0.5); } .hl.hl-violet { background-color: rgba(149,47,255,0.5); } .todo { font-family: 'Oswald', sans-serif; font-size: 2rem; } input.checkmark { height: 1.5rem; margin-right: 0.5rem; } kbd.cr { padding: 2px 3px; background-color: red; color: #FFF; border-radius: 5px; } kbd.xmltag { background-color: #ff8c8c; color: #FFF; } kbd.xmltag.xmltag--subnode { background-color: #9f8cff; color: #FFF; } kbd.xmltag.xmltag--subsubnode { background-color: #de8cff; color: #FFF; } </style> <!-- ========================================== --> <h3 style="margin-top:1rem; margin-bottom:2rem"> Examples: </h3> <div>Highlighted text in: <span class="hl hl-yellow">yellow</span>, <span class="hl hl-orange">orange</span>, <span class="hl hl-magenta">magenta</span>, <span class="hl hl-blue">blue</span>, <span class="hl hl-violet">violet</span>, </div> <br/> <div class="todo">TODO</div> <input class="checkmark" type="checkbox" checked="checked" disabled>Finished TODO 
text. <input class="checkmark" type="checkbox" disabled>TODO text. <br/><br/> Tags: <kbd class="cr">CR</kbd> (CR for Camera-Ready, graphs/sections that are important) Explanation: Inserts for Jupyter Any kind of IPython/Jupyter related stuff: - classes that leverage the <kbd>display</kbd> module, - javascript inserts, - HTML/CSS inserts to be reused in multiple displays for highlighting, - table of contents marking, etc. End of explanation %%bash ls -l tests/ %%time # df = pd.read_csv() # df.info() # df.head() Explanation: Import data Make sure to check if the data is present in the targeted scope, and that the size is readable with current RAM. Feel free to have subheadings for multiple datasets, data descriptions etc. End of explanation %%bash ls -l tests/ %%bash ls data/raw | wc -l | xargs printf '%s files' du -h data/raw | cut -f1 | xargs printf ', total of %s' ls data/raw/ | head -n 4 | xargs printf '\n\t%s' ls data/raw/ | tail -n 4 | xargs printf '\n\t%s' Explanation: Main Pick a more flat structure (h1 headings with #) or nested structure appropriately - not too nested, not too cluttered. Use the bash, Luke End of explanation tm = (dt.now() - nb_start).total_seconds() display(HTML(f'Started notebook <span class="hl hl-yellow">{tm:.0f}s</span> ago.')) # If you use type specifiers, don't put space after the specifier # display(HTML(f'{ tm:.0f}')) # works # display(HTML(f'{ tm:.0f }')) # breaks Explanation: Make use of IPython stuff F-strings (>= Python 3.6) can be combined with IPython's <kbd>display</kbd> module for fun and profit. End of explanation RenderJSON({ 'a': { 'c': 0 }, 'b': 1 }) Explanation: For more nested json's or dictionaries, it's best to use something interactive like RenderJSON. End of explanation slide_1 = HTML( <h3>Lex Fridman<br/><br/> Deep Learning Basics: Introduction and Overview<br/>&nbsp;</h3> <iframe width="560" height="315" src="https://www.youtube.com/embed/O5xeyoRL95U" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> ) slide_1 Explanation: Use slides, and decouple declaration from run (if you're not using "Hide codecell inputs"). End of explanation # For most simple stuff, use tqdm files = list(range(10)) for file in tqdm_notebook(files): sleep(0.1) # When you need more control over the progress bar, # use decoupled tqdm with tqdm_notebook(total=len(files)) as pbar: for file in files: sleep(0.1) pbar.update(1) nb_end = dt.now() print('Time elapsed: %s' % (nb_end - nb_start)) 'Time elapsed: %.2f minutes' % ( (nb_end - nb_start).total_seconds() / 60 ) Explanation: Use tqdm for every run that takes more than a couple of seconds and can be tracked by some iterator. End of explanation
5,134
Given the following text description, write Python code to implement the functionality described below step by step Description: Plot RBM maps to start, just plot the RBM maps, shows what is going on here Step1: Calculate On Counts and Acceptance This is done by reading in the on counts and integral acceptance maps and using them to determin the total counts and acceptance within each of the test regions. Since we are dealing with circular regions we can simply use the standard caclulation for separation from astropy Step2: Just because I can, I am going to see what the integrated counts are within an ellipse to see if the stats are there to say anything Step3: Calculate Background Counts and Acceptance The background counts are harder, we are trying to use an ellipse in camera coordinates (that is a flat coord scheme) whereas the data is saved in a spherical coord scheme, to overcome this we cheat a bit by defining the eclipses and then saving them as fits files with a header copied from the data, this means that the projections etc are correct. We can now check this ellipse is okay by drawing it (as a contour) over the skymap and checking that the correct regions are included/excluded. The background will be integrated between the two ellipses (less the bit cut out for nuAndromadae) Step4: Sum Background counts and acceptance Note Step5: Again, just for fun, checking counts within elipse Step6: Calculate the UL on Counts This is done using TRolke, since it is not in python we have to import it from Root, fortunately that is easy enough Step7: Effective Area Needto sum all of the Effective Areas from each of the test positions and then work out the expected flux given a test spectrum Issue Step8: We will do the calculation of the spectrum $\times$ the EA within the loop, not sure if it makes any difference but better safe than sorry Remeber, EA is in $m^2$, spectrum is in TeV and live time is in seconds - so flux will be $m^{-2} s^{-1} TeV^{-1}$ Step9: Flux UL from M31 Total UL
Python Code: if plot: fig = plt.figure(figsize=(12, 12)) fig1 = aplpy.FITSFigure(fitsF, figure=fig, subplot=(2,3,1), hdu=5) fig1.show_colorscale() standard_setup(fig1) fig1.set_title("Acceptance") fig1 = aplpy.FITSFigure(fitsF, figure=fig, subplot=(2,3,2), hdu=7) fig1.show_colorscale() standard_setup(fig1) fig1.set_title("Acceptance") fig1 = aplpy.FITSFigure(fitsF, figure=fig, subplot=(2,3,3), hdu=9) fig1.show_colorscale() standard_setup(fig1) fig1.set_title("Acceptance") fig1 = aplpy.FITSFigure(fitsF, figure=fig, subplot=(2,3,4), hdu=11) fig1.show_colorscale() standard_setup(fig1) fig1.set_title("Acceptance") fig1 = aplpy.FITSFigure(fitsF, figure=fig, subplot=(2,3,5), hdu=13) fig1.show_colorscale() standard_setup(fig1) fig1.set_title("Acceptance") fig1 = aplpy.FITSFigure(fitsF, figure=fig, subplot=(2,3,6), hdu=15) fig1.show_colorscale() standard_setup(fig1) fig1.set_title("Acceptance") Explanation: Plot RBM maps to start, just plot the RBM maps, shows what is going on here End of explanation onCounts = 0 onAcc = 0 nbinsOn = 0 onC = [] onA = [] totCounts = 0 grAcc = [] ptAcc = np.zeros([6, nTestReg]) for group in range(6): #counts setup extname = 'RawOnMap'+str(group) onData, onHeader = fits.getdata(fitsF, header=True, extname=extname) wcs_transformation_on = wcs.WCS(onHeader) yOn, xOn = np.mgrid[:onData.shape[0], :onData.shape[1]] raOn, decOn = wcs_transformation_on.all_pix2world(xOn, yOn, 0) onPos = SkyCoord(raOn, decOn, unit='deg', frame='icrs') totCounts += np.nansum(onData) #acceptance setup accData = fits.getdata(fitsF, header=False, extname='AcceptanceMap'+str(group)) #bin area reading (getting this from the root file as easier) fName = "rootdir/St6/All" + cuts + theta + clean + "s6.root" f = TFile(fName, "read") RBM = f.Get("RingBackgroundModelAnalysis/SkyMapOn") for i in range(accData.shape[0]): for j in range(accData.shape[1]): accData[i][j] = accData[i][j] * RBM.GetBinArea(i,j) gAcc = 0 for i in range(nTestReg): Pt = SkyCoord(ULpos['col1'][i], ULpos['col2'][i], unit='deg', frame='icrs') onSep = Pt.separation(onPos) cnts = np.nansum(onData[onSep.deg<sepDist]) accSep = Pt.separation(onPos) acc = np.nansum(accData[onSep.deg<sepDist])*np.nansum(onData) gAcc += acc ptAcc[group, i] = acc if group == 0: onC.append(cnts) onA.append(acc) else: onC[i] += cnts onA[i] += acc grAcc.append(gAcc) #print i+1, ULpos['col1'][i], ULpos['col2'][i], counts, acc print onC print onA onCounts = np.sum(onC) onAcc = np.sum(onA) print "Total", onCounts, onAcc, totCounts print grAcc Explanation: Calculate On Counts and Acceptance This is done by reading in the on counts and integral acceptance maps and using them to determin the total counts and acceptance within each of the test regions. 
Since we are dealing with circular regions we can simply use the standard caclulation for separation from astropy End of explanation onCountsE = 0 onAccE = 0 nbinsOnE = 0 onCE = 0 onAE = 0 totCountsE = 0 grAccE = 0 inElcent1E = SkyCoord(11.5, 42.00, unit='deg', frame='icrs') inElcent2E = SkyCoord(10.0, 40.55, unit='deg', frame='icrs') inElDistE = 1.8 for group in range(6): #counts setup extname = 'RawOnMap' + str(group) onData, onHeader = fits.getdata(fitsF, header=True, extname=extname) wcs_transformation_on = wcs.WCS(onHeader) yOn, xOn = np.mgrid[:onData.shape[0], :onData.shape[1]] raOn, decOn = wcs_transformation_on.all_pix2world(xOn, yOn, 0) onPos = SkyCoord(raOn, decOn, unit='deg', frame='icrs') totCountsE += np.nansum(onData) #acceptance setup accData = fits.getdata(fitsF, header=False, extname='AcceptanceMap'+str(group)) #bin area reading (getting this from the root file as easier) fName = "rootdir/St6/All" + cuts + theta + clean + "s6.root" f = TFile(fName, "read") RBM = f.Get("RingBackgroundModelAnalysis/SkyMapOn") for i in range(accData.shape[0]): for j in range(accData.shape[1]): accData[i][j] = accData[i][j] * RBM.GetBinArea(i,j) gAccE = 0 onSep = ((inElcent1E.separation(onPos).deg + inElcent2E.separation(onPos).deg) > inElDistE) onCE += np.nansum(onData[onSep < inElDistE]) accSep = Pt.separation(onPos) onAE += np.nansum(accData[onSep < inElDistE])*np.nansum(onData) gAccE += acc print onCE print onAE print grAcc Explanation: Just because I can, I am going to see what the integrated counts are within an ellipse to see if the stats are there to say anything End of explanation inElcent1 = SkyCoord(11.5, 42.00, unit='deg', frame='icrs') inElcent2 = SkyCoord(10.0, 40.55, unit='deg', frame='icrs') outElcent1 = inElcent1 outElcent2 = inElcent2 inElDist = 2.2 outElDist = 3.9 inEl = ((inElcent1.separation(onPos).deg + inElcent2.separation(onPos).deg) > inElDist) outEl = ((outElcent1.separation(onPos).deg + outElcent2.separation(onPos).deg) < outElDist) El = np.logical_and(inEl, outEl) nuAndrom = SkyCoord(12.4535, 41.0790, unit='deg', frame='icrs') sepnuAn = (nuAndrom.separation(onPos).deg > 0.4) El = np.logical_and(El, sepnuAn) ! 
rm BGReg.fits hdu = fits.PrimaryHDU(El*1., header=onHeader) hdu.writeto('BGReg.fits') if plot: fig = plt.figure(figsize=(8, 8)) fig1 = aplpy.FITSFigure(fitsF, figure=fig, hdu=1) fig1.show_colorscale(vmin=-5,vmax=5,cmap=cx1) standard_setup(fig1) fig1.set_title("Significance") fig1.show_contour("BGReg.fits", colors='k') for i in range(nTestReg): fig1.show_circles(ULpos['col1'][i], ULpos['col2'][i], sepDist, color='purple', linewidth=2, zorder=5) fig1.add_label(ULpos['col1'][i], ULpos['col2'][i], ULpos['col3'][i], size=16, weight='bold', color='purple') if plot: fig = plt.figure(figsize=(10, 10)) fig1 = aplpy.FITSFigure("M31_IRIS_smoothed.fits", figure=fig) fig1.show_colorscale(cmap='Blues',vmin=0, vmax=7e3) fig1.recenter(10.6847, 41.2687, width=4, height=4) fig1.ticks.show() fig1.ticks.set_color('black') fig1.tick_labels.set_xformat('dd.dd') fig1.tick_labels.set_yformat('dd.dd') fig1.ticks.set_xspacing(1) # degrees fig1.set_frame_color('black') fig1.set_tick_labels_font(size='14') fig1.set_axis_labels_font(size='16') fig1.show_grid() fig1.set_grid_color('k') fig1.add_label(12.4535, 41.09, r'$\nu$' + '-Andromedae', size=10, weight='demi', color='black') #fig1.show_contour("BGReg.fits", colors='k') fig1.show_contour("BGReg.fits", lw=0.5, filled=True, hatches=[None,'/'], colors='none') fig1.show_contour("BGReg.fits", linewidths=1., filled=False, colors='k', levels=1) plt.savefig("Plots/M31BgReg.pdf") Explanation: Calculate Background Counts and Acceptance The background counts are harder, we are trying to use an ellipse in camera coordinates (that is a flat coord scheme) whereas the data is saved in a spherical coord scheme, to overcome this we cheat a bit by defining the eclipses and then saving them as fits files with a header copied from the data, this means that the projections etc are correct. We can now check this ellipse is okay by drawing it (as a contour) over the skymap and checking that the correct regions are included/excluded. 
The background will be integrated between the two ellipses (less the bit cut out for nuAndromadae) End of explanation bgC = [] bgA = [] ptAlpha = np.empty([6, nTestReg]) for group in range(6): #counts setup extname = 'RawOnMap'+str(group) onData, onHeader = fits.getdata(fitsF, header=True, extname=extname) wcs_transformation_on = wcs.WCS(onHeader) yOn, xOn = np.mgrid[:onData.shape[0], :onData.shape[1]] raOn, decOn = wcs_transformation_on.all_pix2world(xOn, yOn, 0) onPos = SkyCoord(raOn, decOn, unit='deg', frame='icrs') #acceptance setup accData = fits.getdata(fitsF, header=False, extname='AcceptanceMap'+str(group)) #bin area reading (getting this from the root file as easier) fName = "rootdir/St6/All" + cuts + theta + clean + "s6.root" f = TFile(fName, "read") RBM = f.Get("RingBackgroundModelAnalysis/SkyMapOn") for i in range(accData.shape[0]): for j in range(accData.shape[1]): accData[i][j] = accData[i][j] * RBM.GetBinArea(i,j) bgC.append(np.nansum(onData[El])) bgA.append(np.nansum(accData[El]))#*np.nansum(onData)) ptAlpha[group,] = ptAcc[group,] / np.nansum(accData[El])/ totCounts #ptAlpha = ptAcc[group, :] / np.nansum(accData[El])/ totCounts grAlpha = np.array(grAcc) / np.array(bgA)/ totCounts ptAlpha = np.sum(ptAlpha, axis=0) alpha = np.sum(grAlpha) bgCounts = np.sum(bgC) excess = onCounts - bgCounts * alpha print "Total:", onCounts, bgCounts, excess, alpha print "Point Alpha:", ptAlpha Explanation: Sum Background counts and acceptance Note: - I have corrected for the varying bin size my multiplying the acc by the bin area End of explanation stats.significance_on_off(onCE, bgCounts, ) if plot: bins = np.linspace(-4.5, 4.5, 100) sigData, sigHeader = fits.getdata(fitsF, header=True, extname="SignificanceMap") fig = plt.figure(figsize=(3, 3)) fig, ax = plt.subplots(1) hist = plt.hist(sigData[(~np.isnan(sigData)) & El], bins=bins, histtype="step") plt.semilogy() hist, bins2 = np.histogram(sigData[(~np.isnan(sigData)) & El], bins = bins) (xf, yf), params, err, chi = fit.fit(fit.gaus, (bins2[0:-1] + bins2[1:])/2, hist) plt.plot(xf, yf, 'r-', label='Fit') textstr1 = '$\mu = %.2f $' % (params[1]) textstr2 = '$ %.3f$\n$\sigma = %.2f$' % (err[1], params[2]) textstr3 = '$ %.3f$' % (err[2]) textstr = textstr1 + u"\u00B1" + textstr2 + u"\u00B1" + textstr3 #textstr = textstr1 + textstr2 + textstr3 props = dict(boxstyle='square', alpha=0.5, fc="white") ax.text(0.95, 0.95, textstr, transform=ax.transAxes, fontsize=14, verticalalignment='top', horizontalalignment='right', bbox=props) plt.ylim(ymin=1e0) Explanation: Again, just for fun, checking counts within elipse End of explanation def RUL(on, off, alpha): rolke = TRolke(rolkeUL) rolke.SetBounding(True) rolke.SetPoissonBkgKnownEff(int(on), int(off), 1./(alpha), 1.) 
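    # Per ROOT's TRolke interface, SetPoissonBkgKnownEff(x, y, tau, e) takes
    # x = observed on counts, y = observed off counts, tau = the off/on
    # exposure ratio (hence 1/alpha above), and e = the signal efficiency.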
return rolke.GetUpperLimit() def FCUL(on, off, alpha): fc = TFeldmanCousins(rolkeUL) return fc.CalculateUpperLimit((on), (off) * alpha) if False: for i in range(nTestReg): ULCounts = RUL((onC[i]), (bgCounts), ptAlpha[i]) excess = onC[i] - bgCounts * ptAlpha[i] print "Point", i print 'On = {0:.0f}, Off = {1:.0f}, alpha = {2:.4f}'.format(onC[i], bgCounts, ptAlpha[i]) print 'Excess = {0:.2f}'.format(excess) print 'Signif = {0:.3f}'.format(gammapy.stats.significance_on_off(onC[i], bgCounts, ptAlpha[i])) print 'ULCount = {0:0.3f}'.format(ULCounts) print '' excess = onCounts - bgCounts * alpha ULCounts = RUL((onCounts), (bgCounts), (alpha)) print 'On = {0:.0f}, Off = {1:.0f}, alpha = {2:.4f}'.format(onCounts, bgCounts, alpha) print 'Excess = {0:.2f}'.format(excess) print 'Signif = {0:.3f}'.format(gammapy.stats.significance_on_off(onCounts, bgCounts, alpha)) print 'ULCount = {0:0.3f}'.format(ULCounts) print '' Explanation: Calculate the UL on Counts This is done using TRolke, since it is not in python we have to import it from Root, fortunately that is easy enough End of explanation pointData = np.copy(onData) pointData.fill(0) pointData[pointData.shape[0]/2., pointData.shape[1]/2.] = 1000 pointData1 = ndimage.gaussian_filter(pointData, sigma=(-sigma/onHeader['CDELT1'], sigma/onHeader['CDELT2']), order=0) wcs_transformation = wcs.WCS(onHeader) initPos = wcs_transformation.wcs_pix2world(pointData.shape[0]/2., pointData.shape[1]/2., 0) pointSourceCor = sumInRegion(pointData1, onHeader, initPos[0], initPos[1], sepDist)/np.sum(pointData1) IRISdata, IRISheader = fits.getdata("M31_IRIS_cropped_ds9.fits", header=True) IRISdata2 = ndimage.gaussian_filter(IRISdata, sigma=(-sigma/IRISheader['CDELT1'], sigma/IRISheader['CDELT2']), order=0) M31total = np.sum(IRISdata2) Explanation: Effective Area Needto sum all of the Effective Areas from each of the test positions and then work out the expected flux given a test spectrum Issue:- This is using a point source EA, we dont have a point source. What we need to correct each EA by the difference in the flux distribtuion in its test region. To do this we use the following relation: $\frac{Frac\; Region\; Flux\; in\; thetaSq}{Frac\; Point\; Source\; Flux\; in\; thetaSq}$ For the bottom bit I take the point source and convolve with PSF, work out the fraction of counts that remain within the thetaSq For the top bit, I take the model, convolve with the PSF and work out the fraction of counts before to after Logic: think, not all counts fall within thetaSq, thus the EA is slightly under estimate, since Flux * EA = counts. Thus we need to undo this for the point source and redo this for the extended source. The smoothing factor is put in such that 68\% of the flux falls within a 0.1deg region (this is the standard quoted number) for a point source. 
I would like to check this with hard cuts etc for sims, but that should be a secondary effect End of explanation %%rootprint nPts = 100 En = np.linspace(-1, 2, num=nPts) Sp1 = (10**En)**index EA = np.empty([nPts]) EA1 = np.empty([nPts]) minSafeE = 0 #this is the minimum safe energy, I will quote the spectrum here decorE = 0 EstCounts1 = 0 for j in range(nTestReg): fName = "rootdir/St6/All" + cuts + theta + clean + str(j+1) + "s6.root" f = TFile(fName, "read") UL = f.Get("UpperLimit/VAUpperLimit") g = UL.GetEffectiveArea() if UL.GetEnergy() > minSafeE: minSafeE = UL.GetEnergy() if UL.GetEdecorr() > decorE: decorE = UL.GetEdecorr() # Weight EAs by expected flux from that region irisReg = sumInRegion(IRISdata, IRISheader, ULpos['col1'][j], ULpos['col2'][j], sepDist) regW = irisReg / M31total # Correct for PSF effects M31RegCor = sumInRegion(IRISdata2, IRISheader, ULpos['col1'][j], ULpos['col2'][j], sepDist) / irisReg print j, regW, M31RegCor for i, xval in np.ndenumerate(En): EA[i] = g.Eval(xval) / pointSourceCor * M31RegCor * regW EA1[i] += g.Eval(xval) / pointSourceCor * M31RegCor * regW Fl1 = Sp1 * EA EstCounts1 += np.trapz(Fl1, 10**En)*livetime FluxULReg1 = ULCounts / EstCounts1 print FluxULReg1 Explanation: We will do the calculation of the spectrum $\times$ the EA within the loop, not sure if it makes any difference but better safe than sorry Remeber, EA is in $m^2$, spectrum is in TeV and live time is in seconds - so flux will be $m^{-2} s^{-1} TeV^{-1}$ End of explanation FluxULM31 = FluxULReg1 FluxULM31_eMin = FluxULM31 * minSafeE **index FluxULM31_eDec = FluxULM31 * decorE **index intULM31_eMin = gammapy.spectrum.powerlaw.power_law_integral_flux(FluxULM31, index, 1, minSafeE, 30) intULM31_eDec = gammapy.spectrum.powerlaw.power_law_integral_flux(FluxULM31, index, 1, decorE, 30) intULM31_eMin_pcCrab = intULM31_eMin /(gammapy.spectrum.crab_integral_flux(minSafeE, 30, 'hess_pl')[0] *1e2) intULM31_eDec_pcCrab = intULM31_eDec /(gammapy.spectrum.crab_integral_flux(decorE, 30, 'hess_pl')[0] *1e2) print 'On = {0:.0f}, Off = {1:.0f}, alpha = {2:.4f}'.format(onCounts, bgCounts, alpha) print 'Excess = {0:.2f}'.format(excess) print 'Signif = {0:.3f}'.format(gammapy.stats.significance_on_off(onCounts, bgCounts, alpha)) print 'ULCount = {0:0.3f}'.format(ULCounts) print '' print "Differential UL @ 1TeV = {0:.3e}".format(FluxULM31) print "Differential UL @ min safe E ({0:.1f}GeV) = {1:.3e}".format(minSafeE*1e3, FluxULM31_eMin) print "Differential UL @ decorrelation ({0:.1f}GeV) = {1:.3e}".format(decorE*1e3, FluxULM31_eDec) print "Differential UL units = TeV-1 m-2 s-1" print "" print "Integral UL between min safe energy and 30TeV = {0:.3e}".format(intULM31_eMin) print "Integral UL between decorrel energy and 30TeV = {0:.3e}".format(intULM31_eDec) print "Integral UL units = m-2 s-1" print "" print "Integral UL between min safe energy and 30TeV = {0:.3f} %Crab".format(intULM31_eMin_pcCrab) print "Integral UL between decorrel energy and 30TeV = {0:.3f} %Crab".format(intULM31_eDec_pcCrab) print 244. / 5.4, 244. / 0.67, 244. / 17.3 print 382. / 7.9, 382. / 3.90, 382. / 25.1 print 65. / 2.4, 65. / 0.30, 65. / 7.75 print 137. / 6.0, 137. / 3.00, 137. / 19.1 Explanation: Flux UL from M31 Total UL End of explanation
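from astropy.coordinates import SkyCoord
import numpy as np

# Arbitrary example positions -- NOT values taken from the M31 analysis above
centre = SkyCoord(10.68, 41.27, unit='deg', frame='icrs')
grid = SkyCoord(np.array([10.5, 10.7, 11.2]), np.array([41.2, 41.3, 41.9]),
                unit='deg', frame='icrs')

# separation() returns the great-circle angular distance, so the mask is
# correct on the sphere even though the map pixels live in a flat projection
sep = centre.separation(grid)
mask = sep.deg < 0.2
print(sep.deg, mask)
Explanation: As a minimal, self-contained sketch of the astropy separation test used for the circular regions above: the coordinates here are hypothetical examples chosen only to show that SkyCoord.separation handles the spherical geometry for us. End of explanation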
Given the following text description, write Python code to implement the functionality described below step by step Description: Executed Step1: Load software and filenames definitions Step2: Data folder Step3: List of data files Step4: Data load Initial loading of the data Step5: Load the leakage coefficient from disk Step6: Load the direct excitation coefficient ($d_{exAA}$) from disk Step7: Update d with the correction coefficients Step8: Laser alternation selection At this point we have only the timestamps and the detector numbers Step9: We need to define some parameters Step10: We should check if everything is OK with an alternation histogram Step11: If the plot looks good we can apply the parameters with Step12: Measurements infos All the measurement data is in the d variable. We can print it Step13: Or check the measurements duration Step14: Compute background Compute the background using automatic threshold Step15: Burst search and selection Step16: Donor Leakage fit Step17: Burst sizes Step18: Fret fit Max position of the Kernel Density Estimation (KDE) Step19: Weighted mean of $E$ of each burst Step20: Gaussian fit (no weights) Step21: Gaussian fit (using burst size as weights) Step22: Stoichiometry fit Max position of the Kernel Density Estimation (KDE) Step23: The Maximum likelihood fit for a Gaussian population is the mean Step24: Computing the weighted mean and weighted standard deviation we get Step25: Save data to file Step26: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved. Step27: This is just a trick to format the different variables
Python Code: ph_sel_name = "None" data_id = "17d" # data_id = "7d" Explanation: Executed: Mon Mar 27 11:38:52 2017 Duration: 7 seconds. usALEX-5samples - Template This notebook is executed through 8-spots paper analysis. For a direct execution, uncomment the cell below. End of explanation from fretbursts import * init_notebook() from IPython.display import display Explanation: Load software and filenames definitions End of explanation data_dir = './data/singlespot/' import os data_dir = os.path.abspath(data_dir) + '/' assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir Explanation: Data folder: End of explanation from glob import glob file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f) ## Selection for POLIMI 2012-11-26 datatset labels = ['17d', '27d', '7d', '12d', '22d'] files_dict = {lab: fname for lab, fname in zip(labels, file_list)} files_dict data_id Explanation: List of data files: End of explanation d = loader.photon_hdf5(filename=files_dict[data_id]) Explanation: Data load Initial loading of the data: End of explanation leakage_coeff_fname = 'results/usALEX - leakage coefficient DexDem.csv' leakage = np.loadtxt(leakage_coeff_fname) print('Leakage coefficient:', leakage) Explanation: Load the leakage coefficient from disk: End of explanation dir_ex_coeff_fname = 'results/usALEX - direct excitation coefficient dir_ex_aa.csv' dir_ex_aa = np.loadtxt(dir_ex_coeff_fname) print('Direct excitation coefficient (dir_ex_aa):', dir_ex_aa) Explanation: Load the direct excitation coefficient ($d_{exAA}$) from disk: End of explanation d.leakage = leakage d.dir_ex = dir_ex_aa Explanation: Update d with the correction coefficients: End of explanation d.ph_times_t, d.det_t Explanation: Laser alternation selection At this point we have only the timestamps and the detector numbers: End of explanation d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0) Explanation: We need to define some parameters: donor and acceptor ch, excitation period and donor and acceptor excitiations: End of explanation plot_alternation_hist(d) Explanation: We should check if everithing is OK with an alternation histogram: End of explanation loader.alex_apply_period(d) Explanation: If the plot looks good we can apply the parameters with: End of explanation d Explanation: Measurements infos All the measurement data is in the d variable. 
We can print it: End of explanation d.time_max Explanation: Or check the measurements duration: End of explanation d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7) dplot(d, timetrace_bg) d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa Explanation: Compute background Compute the background using automatic threshold: End of explanation d.burst_search(L=10, m=10, F=7, ph_sel=Ph_sel('all')) print(d.ph_sel) dplot(d, hist_fret); # if data_id in ['7d', '27d']: # ds = d.select_bursts(select_bursts.size, th1=20) # else: # ds = d.select_bursts(select_bursts.size, th1=30) ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30) n_bursts_all = ds.num_bursts[0] def select_and_plot_ES(fret_sel, do_sel): ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel) ds_do = ds.select_bursts(select_bursts.ES, **do_sel) bpl.plot_ES_selection(ax, **fret_sel) bpl.plot_ES_selection(ax, **do_sel) return ds_fret, ds_do ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1) if data_id == '7d': fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False) do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True) ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) elif data_id == '12d': fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False) do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False) ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) elif data_id == '17d': fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False) do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False) ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) elif data_id == '22d': fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False) do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True) ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) elif data_id == '27d': fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False) do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True) ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) n_bursts_do = ds_do.num_bursts[0] n_bursts_fret = ds_fret.num_bursts[0] n_bursts_do, n_bursts_fret d_only_frac = 1.*n_bursts_do/(n_bursts_do + n_bursts_fret) print('D-only fraction:', d_only_frac) dplot(ds_fret, hist2d_alex, scatter_alpha=0.1); dplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False); Explanation: Burst search and selection End of explanation bandwidth = 0.03 E_range_do = (-0.1, 0.15) E_ax = np.r_[-0.2:0.401:0.0002] E_pr_do_kde = bext.fit_bursts_kde_peak(ds_do, bandwidth=bandwidth, weights='size', x_range=E_range_do, x_ax=E_ax, save_fitter=True) mfit.plot_mfit(ds_do.E_fitter, plot_kde=True, bins=np.r_[E_ax.min(): E_ax.max(): bandwidth]) plt.xlim(-0.3, 0.5) print("%s: E_peak = %.2f%%" % (ds.ph_sel, E_pr_do_kde*100)) Explanation: Donor Leakage fit End of explanation nt_th1 = 50 dplot(ds_fret, hist_size, which='all', add_naa=False) xlim(-0, 250) plt.axvline(nt_th1) Th_nt = np.arange(35, 120) nt_th = np.zeros(Th_nt.size) for i, th in enumerate(Th_nt): ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th) nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th plt.figure() plot(Th_nt, nt_th) plt.axvline(nt_th1) nt_mean = nt_th[np.where(Th_nt == nt_th1)][0] nt_mean Explanation: Burst sizes End of explanation E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size') E_fitter = ds_fret.E_fitter E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03]) E_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5) fig, ax = plt.subplots(1, 2, figsize=(14, 4.5)) mfit.plot_mfit(E_fitter, ax=ax[0]) mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, 
ax=ax[1]) print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100)) display(E_fitter.params*100) Explanation: Fret fit Max position of the Kernel Density Estimation (KDE): End of explanation ds_fret.fit_E_m(weights='size') Explanation: Weighted mean of $E$ of each burst: End of explanation ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None) Explanation: Gaussian fit (no weights): End of explanation ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size') E_kde_w = E_fitter.kde_max_pos[0] E_gauss_w = E_fitter.params.loc[0, 'center'] E_gauss_w_sig = E_fitter.params.loc[0, 'sigma'] E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0])) E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err Explanation: Gaussian fit (using burst size as weights): End of explanation S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True) S_fitter = ds_fret.S_fitter S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03]) S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5) fig, ax = plt.subplots(1, 2, figsize=(14, 4.5)) mfit.plot_mfit(S_fitter, ax=ax[0]) mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1]) print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100)) display(S_fitter.params*100) S_kde = S_fitter.kde_max_pos[0] S_gauss = S_fitter.params.loc[0, 'center'] S_gauss_sig = S_fitter.params.loc[0, 'sigma'] S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0])) S_kde, S_gauss, S_gauss_sig, S_gauss_err Explanation: Stoichiometry fit Max position of the Kernel Density Estimation (KDE): End of explanation S = ds_fret.S[0] S_ml_fit = (S.mean(), S.std()) S_ml_fit Explanation: The Maximum likelihood fit for a Gaussian population is the mean: End of explanation weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.) S_mean = np.dot(weights, S)/weights.sum() S_std_dev = np.sqrt( np.dot(weights, (S - S_mean)**2)/weights.sum()) S_wmean_fit = [S_mean, S_std_dev] S_wmean_fit Explanation: Computing the weighted mean and weighted standard deviation we get: End of explanation sample = data_id Explanation: Save data to file End of explanation variables = ('sample n_bursts_all n_bursts_do n_bursts_fret ' 'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err S_kde S_gauss S_gauss_sig S_gauss_err ' 'E_pr_do_kde nt_mean\n') Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved. End of explanation variables_csv = variables.replace(' ', ',') fmt_float = '{%s:.6f}' fmt_int = '{%s:d}' fmt_str = '{%s}' fmt_dict = {**{'sample': fmt_str}, **{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}} var_dict = {name: eval(name) for name in variables.split()} var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n' data_str = var_fmt.format(**var_dict) print(variables_csv) print(data_str) # NOTE: The file name should be the notebook name but with .csv extension with open('results/usALEX-5samples-PR-leakage-dir-ex-all-ph.csv', 'a') as f: f.seek(0, 2) if f.tell() == 0: f.write(variables_csv) f.write(data_str) Explanation: This is just a trick to format the different variables: End of explanation
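# A toy version of the formatting trick above, with hypothetical variable
# names: each name maps to a format spec, falling back to a 6-digit float.
toy_variables = 'sample n_bursts E_mean'
toy_fmt = {'sample': '{sample}', 'n_bursts': '{n_bursts:d}'}
toy_line = ', '.join(toy_fmt.get(name, '{%s:.6f}' % name)
                     for name in toy_variables.split())
print(toy_line.format(sample='17d', n_bursts=123, E_mean=0.4567))
Explanation: A hedged, standalone illustration of the CSV formatting trick: the variable names here are made up, but the pattern (a dict of per-variable format specs with a float default) is the same one used to build var_fmt above. End of explanation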
Given the following text description, write Python code to implement the functionality described below step by step Description: Cyclical figurate numbers Problem 61 Triangle, square, pentagonal, hexagonal, heptagonal, and octagonal numbers are all figurate (polygonal) numbers and are generated by the following formulae Step1: Cubic permutations Problem 62 The cube, $41063625$ ($345^3$), can be permuted to produce two other cubes Step2: Powerful digit counts Problem 63 The 5-digit number, $16807=7^5$, is also a fifth power. Similarly, the 9-digit number, $134217728=8^9$, is a ninth power. How many $n$-digit positive integers exist which are also an $n$th power? Step3: Odd period square roots Problem 64 All square roots are periodic when written as continued fractions and can be written in the form Step4: Convergents of e Problem 65 The square root of 2 can be written as an infinite continued fraction. The infinite continued fraction can be written, √2 = [1;(2)], (2) indicates that 2 repeats ad infinitum. In a similar way, √23 = [4;(1,3,1,8)]. It turns out that the sequence of partial values of continued fractions for square roots provide the best rational approximations. Let us consider the convergents for √2. Hence the sequence of the first ten convergents for √2 are Step5: Diophantine equation Problem 66 Consider quadratic Diophantine equations of the form Step6: Maximum path sum II Problem 67 By starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23. <p style="text-align Step7: Magic 5-gon ring Problem 68 <div class="problem_content" role="problem"> <p>Consider the following "magic" 3-gon ring, filled with the numbers 1 to 6, and each line adding to nine.</p> <div style="text-align
Python Code: from euler import timer, Seq, fst, snd def p061(): values = ([lambda n: n*(n+1)/2, lambda n: n*n, lambda n: n*(3*n-1)/2, lambda n: n*(2*n-1), lambda n: n*(5*n-3)/2, lambda n: n*(3*n-2)] >> Seq.mapi(lambda n: n) >> Seq.collect(lambda (i,fun): Seq.initInfinite(fun) >> Seq.skipWhile(lambda n: n < 1000) >> Seq.takeWhile(lambda n: n < 10000) >> Seq.map(lambda n: (i+3, str(n)))) >> Seq.toList) result = values >> Seq.map(lambda n: [n]) for i in range(5): result = (result >> Seq.collect(lambda a: values >> Seq.filter(lambda b: (a[0][1][2:] == b[1][:2])) >> Seq.filter(lambda b: b[0] not in (a >> Seq.map(fst))) >> Seq.map(lambda b: [b] + a))) return (result >> Seq.filter(lambda a: a[5][1][:2] == a[0][1][2:]) >> Seq.head >> Seq.sumBy(lambda n: int(n[1]))) timer(p061) Explanation: Cyclical figurate numbers Problem 61 Triangle, square, pentagonal, hexagonal, heptagonal, and octagonal numbers are all figurate (polygonal) numbers and are generated by the following formulae: | | | | |------------|-----------------------|------------------------| | Triangle | $P_{3,n}=n(n+1)/2$ | 1, 3, 6, 10, 15, ... | | Square | $P_{4,n}=n2$ | 1, 4, 9, 16, 25, ... | | Pentagonal | $P_{5,n}=n(3n−1)/2$ | 1, 5, 12, 22, 35, ... | | Hexagonal | $P_{6,n}=n(2n−1)$ | 1, 6, 15, 28, 45, ... | | Heptagonal | $P_{7,n}=n(5n−3)/2$ | 1, 7, 18, 34, 55, ... | | Octagonal | $P_{8,n}=n(3n−2)$ | 1, 8, 21, 40, 65, ... | The ordered set of three 4-digit numbers: 8128, 2882, 8281, has three interesting properties. The set is cyclic, in that the last two digits of each number is the first two digits of the next number (including the last number with the first). Each polygonal type: triangle ($P_{3,127}=8128$), square ($P_{4,91}=8281$), and pentagonal ($P_{5,44}=2882$), is represented by a different number in the set. This is the only set of 4-digit numbers with this property. Find the sum of the only ordered set of six cyclic 4-digit numbers for which each polygonal type: triangle, square, pentagonal, hexagonal, heptagonal, and octagonal, is represented by a different number in the set. End of explanation from euler import timer class Cube: def __init__(self, n, perms): self.n = n self.perms = perms def p062(): def make_largest_perm(n): k = n digits = [0] * 10 ret_val = 0 while (k>0): digits[k%10] += 1 k /= 10 for i in range(9,-1,-1): for j in range(0, digits[i]): ret_val = ret_val * 10 + i return ret_val n = 345 cubes = {} while True: n += 1 smallest_perm = make_largest_perm(n*n*n) if not(cubes.has_key(smallest_perm)): cubes[smallest_perm] = Cube(n, 0) cubes[smallest_perm].perms += 1 if (cubes[smallest_perm].perms == 5): return cubes[smallest_perm].n ** 3 timer(p062) Explanation: Cubic permutations Problem 62 The cube, $41063625$ ($345^3$), can be permuted to produce two other cubes: $56623104$ ($384^3$) and $66430125$ ($405^3$). In fact, $41063625$ is the smallest cube which has exactly three permutations of its digits which are also cube. Find the smallest cube for which exactly five permutations of its digits are cube. End of explanation from euler import timer, Seq def p063(): f = lambda(n): (Seq.initInfinite(lambda n: n+1) >> Seq.map(lambda m: m ** n) >> Seq.skipWhile(lambda m: len(str(m)) < n) >> Seq.takeWhile(lambda m: len(str(m)) == n) >> Seq.length) return (Seq.initInfinite(lambda n: n+1) >> Seq.map(f) >> Seq.takeWhile(lambda l: l > 0) >> Seq.sum) timer(p063) Explanation: Powerful digit counts Problem 63 The 5-digit number, $16807=7^5$, is also a fifth power. 
Similarly, the 9-digit number, $134217728=8^9$, is a ninth power. How many $n$-digit positive integers exist which are also an $n$th power? End of explanation from euler import timer, Seq def p064(): def is_odd_period(n): r = limit = int(sqrt(n)) if limit**2 == n: return False else: k, period = 1, 0 while k !=1 or period == 0: k = (n - r * r) // k r = (limit + r) // k * k - r period += 1 if period % 2 == 1: return True else: return False return (range(2, 10001) >> Seq.filter(is_odd_period) >> Seq.length) timer(p064) Explanation: Odd period square roots Problem 64 All square roots are periodic when written as continued fractions and can be written in the form: $\sqrt{N} = a0 + \frac {1} {a1 + \frac {1} {a2 + \frac {1} {a3 + ...}}}$ Exactly four continued fractions, for $N ≤ 13$, have an odd period. How many continued fractions for $N ≤ 10000$ have an odd period? End of explanation from euler import timer, Seq def p065(): return (str((Seq.initInfinite(lambda x: [1, 2*x, 1]) >> Seq.skip(1) >> Seq.flatten >> Seq.skip(1) >> Seq.scan(lambda (n,n1), i: (i*n+n1, n), (3,2)) >> Seq.skip(98) >> Seq.head)[0]) >> Seq.map(str) >> Seq.map(int) >> Seq.sum) timer(p065) Explanation: Convergents of e Problem 65 The square root of 2 can be written as an infinite continued fraction. The infinite continued fraction can be written, √2 = [1;(2)], (2) indicates that 2 repeats ad infinitum. In a similar way, √23 = [4;(1,3,1,8)]. It turns out that the sequence of partial values of continued fractions for square roots provide the best rational approximations. Let us consider the convergents for √2. Hence the sequence of the first ten convergents for √2 are: 1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169, 577/408, 1393/985, 3363/2378, ... What is most surprising is that the important mathematical constant, e = [2; 1,2,1, 1,4,1, 1,6,1 , ... , 1,2k,1, ...]. The first ten terms in the sequence of convergents for e are: 2, 3, 8/3, 11/4, 19/7, 87/32, 106/39, 193/71, 1264/465, 1457/536, ... The sum of digits in the numerator of the 10th convergent is 1+4+5+7=17. Find the sum of digits in the numerator of the 100th convergent of the continued fraction for e. End of explanation from math import sqrt from euler import timer, Seq, fst def p066(): isqrt = lambda x: int(sqrt(x)) is_square = lambda x: x == isqrt(x) ** 2 def continued_fraction_expansion(s): if is_square(s): return None a0 = isqrt(s) def next((d,m)): d = (s-m*m)/d return ((a0+m)/d, (d, ((a0+m)/d * d-m))) return Seq.unfold(next, (1,a0)) >> Seq.append([a0]) def solve_pell_eq(s): return (continued_fraction_expansion(s) >> Seq.scan(lambda (h, k, h1, k1), a: (a*h+h1, a*k+k1, h, k), (1,0,0,1)) >> Seq.skip(1) >> Seq.find(lambda (h,k,h1,k1): h*h - s*k*k == 1)) return (Seq.initInfinite(lambda x: x+2) >> Seq.filter(lambda x: not(is_square(x))) >> Seq.takeWhile(lambda x: x <= 1000) >> Seq.map(lambda x: (solve_pell_eq(x)[0], x)) >> Seq.maxBy(fst))[1] timer(p066) Explanation: Diophantine equation Problem 66 Consider quadratic Diophantine equations of the form: $x^2 – Dy^2 = 1$ For example, when $D=13$, the minimal solution in $x$ is $649^2 – 13×180^2 = 1$. It can be assumed that there are no solutions in positive integers when $D$ is square. By finding minimal solutions in $x$ for $D = {2, 3, 5, 6, 7}$, we obtain the following: $3^2 – 2×2^2 = 1$ $2^2 – 3×1^2 = 1$ $9^2 – 5×4^2 = 1$ $5^2 – 6×2^2 = 1$ $8^2 – 7×3^2 = 1$ Hence, by considering minimal solutions in $x$ for $D ≤ 7$, the largest $x$ is obtained when $D=5$. 
Find the value of $D ≤ 1000$ in minimal solutions of $x$ for which the largest value of $x$ is obtained. End of explanation from euler import Seq, timer def p067(): return ( open('data/p067.txt').read().splitlines() >> Seq.map(lambda s: s.split(' ') >> Seq.map(int)) >> Seq.rev >> Seq.reduce(lambda a,b: a >> Seq.window(2) >> Seq.map(max) >> Seq.zip(b) >> Seq.map(sum)) >> Seq.head) timer(p067) Explanation: Maximum path sum II Problem 67 By starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23. <p style="text-align:center;font-family:'courier new';font-size:12pt;"><span style="color:#ff0000;"><b>3</b></span><br><span style="color:#ff0000;"><b>7</b></span> 4<br> 2 <span style="color:#ff0000;"><b>4</b></span> 6<br> 8 5 <span style="color:#ff0000;"><b>9</b></span> 3</p> That is, $3 + 7 + 4 + 9 = 23$. Find the maximum total from top to bottom in triangle.txt (right click and 'Save Link/Target As...'), a 15K text file containing a triangle with one-hundred rows. NOTE: This is a much more difficult version of Problem 18. It is not possible to try every route to solve this problem, as there are $2^{99}$ altogether! If you could check one trillion ($10^{12}$) routes every second it would take over twenty billion years to check them all. There is an efficient algorithm to solve it. ;o) End of explanation chunk_by_size = lambda x, i: zip(*[iter(x)]*i) from euler import timer, Seq from itertools import permutations perms = (permutations(range(1,11)) >> Seq.map(lambda x: chunk_by_size(x, 2)) >> Seq.toList) get_sum = lambda a: a[0][0] + a[0][1] + a[1][1] # def pred(perm): # #load "Common.fs" # open Common # #time # let permutations = permute [1..10] |> List.map (fun lst -> groupsOfAtMost 2 lst |> Seq.toList) # let sum [| [| a0; a1 |]; [| _; b1 |] |] = a0 + a1 + b1 # let pred (permutation : int[] list) = # let (hd::_tl as heads) = permutation |> List.map Seq.head # if heads |> List.forall (fun x -> hd <= x) then # let pairs = seq { # yield! permutation |> Seq.windowed 2 # yield [| Seq.last permutation; Seq.head permutation |] # } |> Seq.toArray # let target = sum pairs.[0] # pairs |> Array.forall (sum >> (=) target) # else false # let answer = permutations # |> List.filter pred # |> List.map (fun [a; b; c; d; e] -> # [| # yield! a; yield Seq.last b # yield! b; yield Seq.last c # yield! c; yield Seq.last d # yield! d; yield Seq.last e # yield! e; yield Seq.last a # |] # |> Array.map string # |> fun arr -> System.String.Join("", arr)) # |> List.sort # |> Seq.last Explanation: Magic 5-gon ring Problem 68 <div class="problem_content" role="problem"> <p>Consider the following "magic" 3-gon ring, filled with the numbers 1 to 6, and each line adding to nine.</p> <div style="text-align:center;"> <img src="https://projecteuler.net/project/images/p068_1.gif" alt=""><br></div> <p>Working <b>clockwise</b>, and starting from the group of three with the numerically lowest external node (4,3,2 in this example), each solution can be described uniquely. For example, the above solution can be described by the set: 4,3,2; 6,2,1; 5,1,3.</p> <p>It is possible to complete the ring with four different totals: 9, 10, 11, and 12. 
There are eight solutions in total.</p> <div style="text-align:center;"> <table width="400" cellspacing="0" cellpadding="0"><tbody><tr><td width="100"><b>Total</b></td><td width="300"><b>Solution Set</b></td> </tr><tr><td>9</td><td>4,2,3; 5,3,1; 6,1,2</td> </tr><tr><td>9</td><td>4,3,2; 6,2,1; 5,1,3</td> </tr><tr><td>10</td><td>2,3,5; 4,5,1; 6,1,3</td> </tr><tr><td>10</td><td>2,5,3; 6,3,1; 4,1,5</td> </tr><tr><td>11</td><td>1,4,6; 3,6,2; 5,2,4</td> </tr><tr><td>11</td><td>1,6,4; 5,4,2; 3,2,6</td> </tr><tr><td>12</td><td>1,5,6; 2,6,4; 3,4,5</td> </tr><tr><td>12</td><td>1,6,5; 3,5,4; 2,4,6</td> </tr></tbody></table></div> <p>By concatenating each group it is possible to form 9-digit strings; the maximum string for a 3-gon ring is 432621513.</p> <p>Using the numbers 1 to 10, and depending on arrangements, it is possible to form 16- and 17-digit strings. What is the maximum <b>16-digit</b> string for a "magic" 5-gon ring?</p> <div style="text-align:center;"> <img src="https://projecteuler.net/project/images/p068_2.gif" alt=""><br></div> </div> End of explanation
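from itertools import permutations

def p068_brute():  # hedged stdlib-only sketch; does not use the euler module
    best = 0
    for perm in permutations(range(1, 11)):
        outer, inner = perm[:5], perm[5:]
        # 10 must sit on the outer ring, otherwise it is counted twice and
        # the description string would have 17 digits rather than 16
        if 10 not in outer:
            continue
        # line k runs from outer node k through two adjacent inner nodes
        lines = [(outer[i], inner[i], inner[(i + 1) % 5]) for i in range(5)]
        total = sum(lines[0])
        if any(sum(line) != total for line in lines):
            continue
        # the canonical description starts from the lowest external node
        start = min(range(5), key=lambda i: outer[i])
        s = ''.join(str(d) for i in range(5) for d in lines[(start + i) % 5])
        best = max(best, int(s))
    return best
Explanation: The p068 cell above is left as commented-out pseudocode, so here is a hedged, stdlib-only brute-force sketch of the magic 5-gon search. It simply enumerates all placements of 1 to 10 (slow but simple; unlike the other solutions it does not use the euler helper module or timer), keeps only rings whose five lines share one total, and maximises the canonical 16-digit description string. End of explanation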
Given the following text description, write Python code to implement the functionality described below step by step Description: MongoBase starting guide Step1: ObjectId First, let's talk about ObjectId. Step2: Actually, ObjectId is useful. It is unique, sortable and memory efficient. http Step3: The __structure__ part represents the definition of the model. Basic instructions. (insert, update, find, remove) Let's try basic instructions like inserts, updates, find and remove. Firstly, let's begin by creating an instance to be stored. Step4: Good chicken. Let's save while it is fresh. Step5: Chickens are considered to be unable to fly by default. We can enable it by updating. Step6: You would be able to see 'is_able_to_fly' Step7: Next let's try find methods. Step8: Now we can retrieve the same document from the database. Step9: It is the same chicken, isn't it? Great. Let's clear (eat) it. Step10: Now we get all chickens which we stored so far. Step11: Or we can count with the count() method directly. Step12: Let's check if the latest chicken is equal to the one which we just saved. Step13: That is True, right? Contextual database MongoBase automatically creates a MongoDB client for each process. But in some cases, some instances must be written or read for a different client or db. If you use a db context, it uses a designated database within the context. Let's give it a try. Step14: Bulk Operation Many insert operations take a large computing cost. Fortunately, MongoDB provides an operation named "bulk write". It enables inserting many documents in one operation. Bulk Insert Step15: Bulk Update Step16: Check if all ages are updated Step17: No error? Cool. Step18: Multi Threading and Processing Step19: Threading (using the same memory space) The threading module uses threads, the multiprocessing module uses processes. The difference is that threads run in the same memory space, while processes have separate memory. This makes it a bit harder to share objects between processes with multiprocessing. Since threads use the same memory, precautions have to be taken or two threads will write to the same memory at the same time. This is what the global interpreter lock is for. https Step20: Multiprocessing (using the separated memory for each process) PyMongo is not fork-safe. Care must be taken when using instances of MongoClient with fork(). Specifically, instances of MongoClient must not be copied from a parent process to a child process. Instead, the parent process and each child process must create their own instances of MongoClient. Instances of MongoClient copied from the parent process have a high probability of deadlock in the child process due to the inherent incompatibilities between fork(), threads, and locks described below. PyMongo will attempt to issue a warning if there is a chance of this deadlock occurring. http
Python Code: %load_ext autoreload %autoreload 2 %matplotlib inline import sys import time import threading import multiprocessing import datetime as dt from mongobase.mongobase import MongoBase, db_context from bson import ObjectId Explanation: MongoBase starting guide End of explanation x = ObjectId() time.sleep(1) y = ObjectId() time.sleep(1) z = ObjectId() x str(x) x.generation_time y.generation_time x < y and y < z Explanation: ObjectId First, let's talk about ObjectId. End of explanation class Bird(MongoBase): __collection__ = 'birds' __structure__ = { '_id': ObjectId, 'name': str, 'age': int, 'is_able_to_fly': bool, 'created': dt.datetime, 'updated': dt.datetime } __required_fields__ = ['_id', 'name'] __default_values__ = { '_id': ObjectId(), 'is_able_to_fly': False, 'created': dt.datetime.now(dt.timezone.utc), 'updated': dt.datetime.now(dt.timezone.utc) } __validators__ = {} __indexed_keys__ = {} Explanation: Actually, ObjectId is usuful. It is unique, sortable and memory efficient. http://api.mongodb.com/python/current/api/bson/objectid.html An ObjectId is a 12-byte unique identifier consisting of: a 4-byte value representing the seconds since the Unix epoch, a 3-byte machine identifier, a 2-byte process id, and a 3-byte counter, starting with a random value. And also ObjectId is fast for inserting or indexing. The index size is small. https://github.com/Restuta/mongo.Guid-vs-ObjectId-performance Define a database model So now, we create a simple test collection with MongoBase. End of explanation chicken = Bird({'_id': ObjectId(), 'name': 'chicken', 'age': 3}) chicken chicken._id.generation_time Explanation: The __structure__ part represents the definition of the model. Basic instractions. (insert, update, find, remove) Let's try basic instractions like inserts, updates, find and remove. Firstly, let's begin with creating an instance to be stored. End of explanation chicken.save() Explanation: Good chicken. Let's save while it is fresh. End of explanation chicken.is_able_to_fly chicken.is_able_to_fly = True chicken.update() Explanation: Chickens are considered to be unable to fly by default. We can let it be enable by updating. End of explanation chicken.age = 5 chicken = chicken.update() assert chicken.age == 5, 'something wrong on update()' chicken = Bird.findAndUpdateById(chicken._id, {'age': 6}) assert chicken.age == 6, 'something wrong on findAndUpdateById()' Explanation: You would be able to see 'is_able_to_fly': True. Chickens grow up in several ways. End of explanation mother_chicken = Bird({'_id': ObjectId(), 'name': 'mother chicken', 'age': 63}) mother_chicken.save() Explanation: Next let's try find methods. End of explanation Bird.findOne({'name': 'mother chicken'}) Explanation: Now we can retrieve the same document from database. End of explanation mother_chicken.remove() if not Bird.findOne({'_id': mother_chicken._id}): print('Yes. The mother chicken not found. Someone might ate it.') Explanation: It is the same chicken, isn't it? great. Let's clear (eat) it. End of explanation all_chickens = Bird.find({'name': 'chicken'}, sort=[('_id', 1)]) len(all_chickens) Explanation: Now we get all chickens which we stored so far. End of explanation Bird.count({'name': 'chicken'}) Explanation: Or we can count with count() method directly. End of explanation all_chickens[-1]._id.generation_time == chicken._id.generation_time Explanation: Let's check if the latest chicken is equal to the one which we just saved. 
End of explanation with db_context(db_uri='localhost', db_name='test') as db: print(db) flamingo = Bird({'_id': ObjectId(), 'name': 'flamingo', 'age': 20}) flamingo.save(db=db) flamingo.age = 23 flamingo = flamingo.update(db=db) assert flamingo.age == 23, 'something wrong on update()' flamingo = Bird.findAndUpdateById(flamingo._id, {'age': 24}, db=db) assert flamingo.age == 24, 'something wrong on findAndUpdateById()' n_flamingo = Bird.count({'name': 'flamingo'}, db=db) print(f'{n_flamingo} flamingo found in the test database.') n_flamingo = Bird.count({'name': 'flamingo'}) print(f'{n_flamingo} flamingo found in the default database.') assert n_flamingo == 0 Explanation: Is that True, right? Contextual database MongoBase automatically creates mongodb client for each process. But in some cases, some instances must be written or read for a different client or db. If you use db context, it uses a designated database within the context. Let's get try on it. End of explanation many_pigeon = [] for i in range(10000): many_pigeon += [Bird({'_id': ObjectId(), 'name': f'pigeon', 'age': i})] print(many_pigeon[1]) %%time Bird.bulk_insert(many_pigeon) Bird.count({'name': 'pigeon'}) Explanation: Bulk Operation Many insert operations takes a large computing cost. Fortunately, MongoDB provides an operation named "bulk write". It enables to insert many documents in one operation. Bulk Insert End of explanation updates = [] for pigeon in many_pigeon: pigeon.age *= 3 updates += [pigeon] %%time print(len(updates)) Bird.bulk_update(updates) Explanation: Bulk Update End of explanation %%time for i, pigeon in enumerate(many_pigeon): check = Bird.findOne({'_id': pigeon._id}) assert check.age == i*3 Explanation: Check if all ages are updated End of explanation Bird.delete({'name': 'pigeon'}) Explanation: No error? Cool. End of explanation def breed(i): try: sparrow = Bird({'_id': ObjectId(), 'name': f'sparrow', 'age': 0}) sparrow.save() sparrow.age += 1 sparrow.update() except Exception as e: print(f'Exception occured. {e} in thread {threading.current_thread()}') else: print(f'{i} saved in thread {threading.current_thread()}.') Explanation: Multi Threading and Processing End of explanation %%time for i in range(1000): t = threading.Thread(target=breed, name=f'breed sparrow {i}', args=(i,)) t.start() Bird.delete({'name':'sparrow'}) Explanation: Threading (using the same memory space) The threading module uses threads, the multiprocessing module uses processes. The difference is that threads run in the same memory space, while processes have separate memory. This makes it a bit harder to share objects between processes with multiprocessing. Since threads use the same memory, precautions have to be taken or two threads will write to the same memory at the same time. This is what the global interpreter lock is for. https://stackoverflow.com/questions/3044580/multiprocessing-vs-threading-python End of explanation def breed2(tasks): db = Bird._db() # create a MongoDB Client for the forked process try: for i in range(len(tasks)): sparrow = Bird({'_id': ObjectId(), 'name': f'sparrow', 'age': 0}) sparrow.save(db=db) sparrow.age += 1 sparrow.update(db=db) except Exception as e: print(f'Exception occured. 
{e} in process {multiprocessing.current_process()}') else: print(f'{len(tasks)} sparrow saved in process {multiprocessing.current_process()}.') %%time print(f'{multiprocessing.cpu_count()} cpu resources found.') tasks = [[f'sparrow {i}' for i in range(250)] for j in range(4)] process_pool = multiprocessing.Pool(4) process_pool.map(breed2, tasks) Explanation: Multiprocessing (using the separated memory for each process) PyMongo is not fork-safe. Care must be taken when using instances of MongoClient with fork(). Specifically, instances of MongoClient must not be copied from a parent process to a child process. Instead, the parent process and each child process must create their own instances of MongoClient. Instances of MongoClient copied from the parent process have a high probability of deadlock in the child process due to the inherent incompatibilities between fork(), threads, and locks described below. PyMongo will attempt to issue a warning if there is a chance of this deadlock occurring. http://api.mongodb.com/python/current/faq.html#pymongo-fork-safe%3E End of explanation
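from multiprocessing import Pool

from pymongo import MongoClient

# Standalone illustration of the fork-safety rule above: every worker process
# constructs its own MongoClient instead of inheriting the parent's client.
# Assumes a local mongod on the default port; db/collection names are arbitrary.
def fork_safe_worker(n):
    client = MongoClient('localhost', 27017)  # created inside the child process
    client.test.birds.insert_one({'name': 'sparrow', 'age': n})
    client.close()
    return n

if __name__ == '__main__':
    with Pool(4) as pool:
        print(pool.map(fork_safe_worker, range(8)))
Explanation: A hedged sketch of the per-process-client pattern that the PyMongo FAQ quoted above calls for (and that breed2 implements via Bird._db()): the connection details and collection names here are assumptions for illustration only. End of explanation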
Given the following text description, write Python code to implement the functionality described below step by step Description: Vertex client library Step1: Install the latest GA version of the google-cloud-storage library as well. Step2: Restart the kernel Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages. Step3: Before you begin GPU runtime Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the Vertex APIs and Compute Engine APIs. The Google Cloud SDK is already installed in Google Cloud Notebook. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note Step4: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you. Americas Step5: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial. Step6: Authenticate your Google Cloud account If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps Step7: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for exporting the trained model. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket. Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. Step8: Only if your bucket doesn't already exist Step9: Finally, validate access to your Cloud Storage bucket by examining its contents Step10: Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants Import Vertex client library Import the Vertex client library into our Python environment. Step11: Vertex constants Set up the following constants for Vertex Step12: AutoML constants Set constants unique to AutoML datasets and training Step13: Tutorial Now you are ready to start creating your own AutoML image classification model. Set up clients The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server. You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront. Dataset Service for Dataset resources. Model Service for Model resources. Pipeline Service for training.
Step14: Dataset Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it. Create Dataset resource instance Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following Step15: Now save the unique dataset identifier for the Dataset resource instance you created. Step16: Data preparation The Vertex Dataset resource for images has some requirements for your data Step17: Quick peek at your data You will use a version of the Flowers dataset that is stored in a public Cloud Storage bucket, using a CSV index file. Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows. Step18: Import data Now, import the data into your Vertex Dataset resource. Use this helper function import_data to import the data. The function does the following Step19: Train the model Now train an AutoML image classification model using your Vertex Dataset resource. To train the model, do the following steps Step20: Construct the task requirements Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion. The minimal fields we need to specify are Step21: Now save the unique identifier of the training pipeline you created. Step22: Get information on a training pipeline Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the job client service's get_training_pipeline method, with the following parameter Step23: Deployment Training the above model may take upwards of 30 minutes. Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name. Step24: Model information Now that your model is trained, you can get some information on your model. Evaluate the Model resource Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model. List evaluations for all slices Use this helper function list_model_evaluations, which takes the following parameter Step25: Export as Edge model You can export an AutoML image classification model as an Edge model which you can then custom deploy to an edge device, such as a mobile phone or IoT device, or download locally. Use this helper function export_model to export the model to Google Cloud, which takes the following parameters Step26: Download the TFLite model artifacts Now that you have an exported TFLite version of your model, you can test the exported model locally, after first downloading it from Cloud Storage. Step27: Instantiate a TFLite interpreter The TFLite version of the model is not a TensorFlow SavedModel format. You cannot directly use methods like predict(). Instead, one uses the TFLite interpreter. 
You must first set up the interpreter for the TFLite model as follows Step28: Get test item You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction. Step29: Make a prediction with TFLite model Finally, you do a prediction using your TFLite model, as follows Step30: Cleaning up To clean up all GCP resources used in this project, you can delete the GCP project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial
Python Code: import os import sys # Google Cloud Notebook if os.path.exists("/opt/deeplearning/metadata/env_version"): USER_FLAG = "--user" else: USER_FLAG = "" ! pip3 install -U google-cloud-aiplatform $USER_FLAG Explanation: Vertex client library: AutoML image classification model for export to edge <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_classification_export_edge.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_classification_export_edge.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> </table> <br/><br/><br/> Overview This tutorial demonstrates how to use the Vertex client library for Python to create image classification models to export as an Edge model using Google Cloud's AutoML. Dataset The dataset used for this tutorial is the Flowers dataset from TensorFlow Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of flower an image is from a class of five flowers: daisy, dandelion, rose, sunflower, or tulip. Objective In this tutorial, you create a AutoML image classification model from a Python script using the Vertex client library, and then export the model as an Edge model in TFLite format. You can alternatively create models with AutoML using the gcloud command-line tool or online using the Google Cloud Console. The steps performed include: Create a Vertex Dataset resource. Train the model. Export the Edge model from the Model resource to Cloud Storage. Download the model locally. Make a local prediction. Costs This tutorial uses billable components of Google Cloud (GCP): Vertex AI Cloud Storage Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Installation Install the latest version of Vertex client library. End of explanation ! pip3 install -U google-cloud-storage $USER_FLAG Explanation: Install the latest GA version of google-cloud-storage library as well. End of explanation if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) Explanation: Restart the kernel Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages. End of explanation PROJECT_ID = "[your-project-id]" # @param {type:"string"} if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]": # Get your GCP project id from gcloud shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID:", PROJECT_ID) ! gcloud config set project $PROJECT_ID Explanation: Before you begin GPU runtime Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. 
When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the Vertex APIs and Compute Engine APIs. The Google Cloud SDK is already installed in Google Cloud Notebook. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands. End of explanation REGION = "us-central1" # @param {type: "string"} Explanation: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you. Americas: us-central1 Europe: europe-west4 Asia Pacific: asia-east1 You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation End of explanation from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") Explanation: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial. End of explanation # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. # If on Google Cloud Notebook, then don't execute this code if not os.path.exists("/opt/deeplearning/metadata/env_version"): if "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS '' Explanation: Authenticate your Google Cloud account If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps: In the Cloud Console, go to the Create service account key page. Click Create service account. In the Service account name field, enter a name, and click Create. In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin. Click Create. A JSON file that contains your key downloads to your local environment. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell. End of explanation BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"} if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]": BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP Explanation: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. 
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for exporting the trained model. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket. Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. End of explanation ! gsutil mb -l $REGION $BUCKET_NAME Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket. End of explanation ! gsutil ls -al $BUCKET_NAME Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents: End of explanation import time from google.cloud.aiplatform import gapic as aip from google.protobuf import json_format from google.protobuf.json_format import MessageToJson, ParseDict from google.protobuf.struct_pb2 import Struct, Value Explanation: Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants Import Vertex client library Import the Vertex client library into our Python environment. End of explanation # API service endpoint API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION) # Vertex location root path for your dataset, model and endpoint resources PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION Explanation: Vertex constants Setup up the following constants for Vertex: API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services. PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources. End of explanation # Image Dataset type DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml" # Image Labeling type LABEL_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/ioformat/image_classification_single_label_io_format_1.0.0.yaml" # Image Training task TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_classification_1.0.0.yaml" Explanation: AutoML constants Set constants unique to AutoML datasets and training: Dataset Schemas: Tells the Dataset resource service which type of dataset it is. Data Labeling (Annotations) Schemas: Tells the Dataset resource service how the data is labeled (annotated). Dataset Training Schemas: Tells the Pipeline resource service the task (e.g., classification) to train the model for. End of explanation # client options same for all services client_options = {"api_endpoint": API_ENDPOINT} def create_dataset_client(): client = aip.DatasetServiceClient(client_options=client_options) return client def create_model_client(): client = aip.ModelServiceClient(client_options=client_options) return client def create_pipeline_client(): client = aip.PipelineServiceClient(client_options=client_options) return client clients = {} clients["dataset"] = create_dataset_client() clients["model"] = create_model_client() clients["pipeline"] = create_pipeline_client() for client in clients.items(): print(client) Explanation: Tutorial Now you are ready to start creating your own AutoML image classification model. Set up clients The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server. You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront. 
Dataset Service for Dataset resources. Model Service for Model resources. Pipeline Service for training. End of explanation TIMEOUT = 90 def create_dataset(name, schema, labels=None, timeout=TIMEOUT): start_time = time.time() try: dataset = aip.Dataset( display_name=name, metadata_schema_uri=schema, labels=labels ) operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset) print("Long running operation:", operation.operation.name) result = operation.result(timeout=TIMEOUT) print("time:", time.time() - start_time) print("response") print(" name:", result.name) print(" display_name:", result.display_name) print(" metadata_schema_uri:", result.metadata_schema_uri) print(" metadata:", dict(result.metadata)) print(" create_time:", result.create_time) print(" update_time:", result.update_time) print(" etag:", result.etag) print(" labels:", dict(result.labels)) return result except Exception as e: print("exception:", e) return None result = create_dataset("flowers-" + TIMESTAMP, DATA_SCHEMA) Explanation: Dataset Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it. Create Dataset resource instance Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following: Uses the dataset client service. Creates an Vertex Dataset resource (aip.Dataset), with the following parameters: display_name: The human-readable name you choose to give it. metadata_schema_uri: The schema for the dataset type. Calls the client dataset service method create_dataset, with the following parameters: parent: The Vertex location root path for your Database, Model and Endpoint resources. dataset: The Vertex dataset object instance you created. The method returns an operation object. An operation object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning. You can use the operation object to get status on the operation (e.g., create Dataset resource) or to cancel the operation, by invoking an operation method: | Method | Description | | ----------- | ----------- | | result() | Waits for the operation to complete and returns a result object in JSON format. | | running() | Returns True/False on whether the operation is still running. | | done() | Returns True/False on whether the operation is completed. | | canceled() | Returns True/False on whether the operation was canceled. | | cancel() | Cancels the operation (this may take up to 30 seconds). | End of explanation # The full unique ID for the dataset dataset_id = result.name # The short numeric ID for the dataset dataset_short_id = dataset_id.split("/")[-1] print(dataset_id) Explanation: Now save the unique dataset identifier for the Dataset resource instance you created. End of explanation IMPORT_FILE = ( "gs://cloud-samples-data/vision/automl_classification/flowers/all_data_v2.csv" ) Explanation: Data preparation The Vertex Dataset resource for images has some requirements for your data: Images must be stored in a Cloud Storage bucket. Each image file must be in an image format (PNG, JPEG, BMP, ...). There must be an index file stored in your Cloud Storage bucket that contains the path and label for each image. The index file must be either CSV or JSONL. CSV For image classification, the CSV index file has the requirements: No heading. 
First column is the Cloud Storage path to the image. Second column is the label. Location of Cloud Storage training data. Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage. End of explanation if "IMPORT_FILES" in globals(): FILE = IMPORT_FILES[0] else: FILE = IMPORT_FILE count = ! gsutil cat $FILE | wc -l print("Number of Examples", int(count[0])) print("First 10 rows") ! gsutil cat $FILE | head Explanation: Quick peek at your data You will use a version of the Flowers dataset that is stored in a public Cloud Storage bucket, using a CSV index file. Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows. End of explanation def import_data(dataset, gcs_sources, schema): config = [{"gcs_source": {"uris": gcs_sources}, "import_schema_uri": schema}] print("dataset:", dataset_id) start_time = time.time() try: operation = clients["dataset"].import_data( name=dataset_id, import_configs=config ) print("Long running operation:", operation.operation.name) result = operation.result() print("result:", result) print("time:", int(time.time() - start_time), "secs") print("error:", operation.exception()) print("meta :", operation.metadata) print( "after: running:", operation.running(), "done:", operation.done(), "cancelled:", operation.cancelled(), ) return operation except Exception as e: print("exception:", e) return None import_data(dataset_id, [IMPORT_FILE], LABEL_SCHEMA) Explanation: Import data Now, import the data into your Vertex Dataset resource. Use this helper function import_data to import the data. The function does the following: Uses the Dataset client. Calls the client method import_data, with the following parameters: name: The human readable name you give to the Dataset resource (e.g., flowers). import_configs: The import configuration. import_configs: A Python list containing a dictionary, with the key/value entries: gcs_sources: A list of URIs to the paths of the one or more index files. import_schema_uri: The schema identifying the labeling type. The import_data() method returns a long running operation object. This will take a few minutes to complete. If you are in a live tutorial, this would be a good time to ask questions, or take a personal break. End of explanation def create_pipeline(pipeline_name, model_name, dataset, schema, task): dataset_id = dataset.split("/")[-1] input_config = { "dataset_id": dataset_id, "fraction_split": { "training_fraction": 0.8, "validation_fraction": 0.1, "test_fraction": 0.1, }, } training_pipeline = { "display_name": pipeline_name, "training_task_definition": schema, "training_task_inputs": task, "input_data_config": input_config, "model_to_upload": {"display_name": model_name}, } try: pipeline = clients["pipeline"].create_training_pipeline( parent=PARENT, training_pipeline=training_pipeline ) print(pipeline) except Exception as e: print("exception:", e) return None return pipeline Explanation: Train the model Now train an AutoML image classification model using your Vertex Dataset resource. To train the model, do the following steps: Create an Vertex training pipeline for the Dataset resource. Execute the pipeline to start the training. Create a training pipeline You may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. 
By putting the steps into a pipeline, we gain the benefits of: Being reusable for subsequent training jobs. Can be containerized and ran as a batch job. Can be distributed. All the steps are associated with the same pipeline job for tracking progress. Use this helper function create_pipeline, which takes the following parameters: pipeline_name: A human readable name for the pipeline job. model_name: A human readable name for the model. dataset: The Vertex fully qualified dataset identifier. schema: The dataset labeling (annotation) training schema. task: A dictionary describing the requirements for the training job. The helper function calls the Pipeline client service'smethod create_pipeline, which takes the following parameters: parent: The Vertex location root path for your Dataset, Model and Endpoint resources. training_pipeline: the full specification for the pipeline training job. Let's look now deeper into the minimal requirements for constructing a training_pipeline specification: display_name: A human readable name for the pipeline job. training_task_definition: The dataset labeling (annotation) training schema. training_task_inputs: A dictionary describing the requirements for the training job. model_to_upload: A human readable name for the model. input_data_config: The dataset specification. dataset_id: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier. fraction_split: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML. End of explanation PIPE_NAME = "flowers_pipe-" + TIMESTAMP MODEL_NAME = "flowers_model-" + TIMESTAMP task = json_format.ParseDict( { "multi_label": False, "budget_milli_node_hours": 8000, "model_type": "MOBILE_TF_LOW_LATENCY_1", "disable_early_stopping": False, }, Value(), ) response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task) Explanation: Construct the task requirements Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion. The minimal fields we need to specify are: multi_label: Whether True/False this is a multi-label (vs single) classification. budget_milli_node_hours: The maximum time to budget (billed) for training the model, where 1000 = 1 hour. For image classification, the budget must be a minimum of 8 hours. model_type: The type of deployed model: CLOUD: For deploying to Google Cloud. MOBILE_TF_LOW_LATENCY_1: For deploying to the edge and optimizing for latency (response time). MOBILE_TF_HIGH_ACCURACY_1: For deploying to the edge and optimizing for accuracy. MOBILE_TF_VERSATILE_1: For deploying to the edge and optimizing for a trade off between latency and accuracy. disable_early_stopping: Whether True/False to let AutoML use its judgement to stop training early or train for the entire budget. Finally, create the pipeline by calling the helper function create_pipeline, which returns an instance of a training pipeline object. End of explanation # The full unique ID for the pipeline pipeline_id = response.name # The short numeric ID for the pipeline pipeline_short_id = pipeline_id.split("/")[-1] print(pipeline_id) Explanation: Now save the unique identifier of the training pipeline you created. 
End of explanation def get_training_pipeline(name, silent=False): response = clients["pipeline"].get_training_pipeline(name=name) if silent: return response print("pipeline") print(" name:", response.name) print(" display_name:", response.display_name) print(" state:", response.state) print(" training_task_definition:", response.training_task_definition) print(" training_task_inputs:", dict(response.training_task_inputs)) print(" create_time:", response.create_time) print(" start_time:", response.start_time) print(" end_time:", response.end_time) print(" update_time:", response.update_time) print(" labels:", dict(response.labels)) return response response = get_training_pipeline(pipeline_id) Explanation: Get information on a training pipeline Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the the job client service's get_training_pipeline method, with the following parameter: name: The Vertex fully qualified pipeline identifier. When the model is done training, the pipeline state will be PIPELINE_STATE_SUCCEEDED. End of explanation while True: response = get_training_pipeline(pipeline_id, True) if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED: print("Training job has not completed:", response.state) model_to_deploy_id = None if response.state == aip.PipelineState.PIPELINE_STATE_FAILED: raise Exception("Training Job Failed") else: model_to_deploy = response.model_to_upload model_to_deploy_id = model_to_deploy.name print("Training Time:", response.end_time - response.start_time) break time.sleep(60) print("model to deploy:", model_to_deploy_id) Explanation: Deployment Training the above model may take upwards of 30 minutes time. Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name. End of explanation def list_model_evaluations(name): response = clients["model"].list_model_evaluations(parent=name) for evaluation in response: print("model_evaluation") print(" name:", evaluation.name) print(" metrics_schema_uri:", evaluation.metrics_schema_uri) metrics = json_format.MessageToDict(evaluation._pb.metrics) for metric in metrics.keys(): print(metric) print("logloss", metrics["logLoss"]) print("auPrc", metrics["auPrc"]) return evaluation.name last_evaluation = list_model_evaluations(model_to_deploy_id) Explanation: Model information Now that your model is trained, you can get some information on your model. Evaluate the Model resource Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model. List evaluations for all slices Use this helper function list_model_evaluations, which takes the following parameter: name: The Vertex fully qualified model identifier for the Model resource. This helper function uses the model client service's list_model_evaluations method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric. 
For each evaluation (you probably only have one) we then print all the key names for each metric in the evaluation, and for a small set (logLoss and auPrc) you will print the result. End of explanation MODEL_DIR = BUCKET_NAME + "/" + "flowers" def export_model(name, format, gcs_dest): output_config = { "artifact_destination": {"output_uri_prefix": gcs_dest}, "export_format_id": format, } response = clients["model"].export_model(name=name, output_config=output_config) print("Long running operation:", response.operation.name) result = response.result(timeout=1800) metadata = response.operation.metadata artifact_uri = str(metadata.value).split("\\")[-1][4:-1] print("Artifact Uri", artifact_uri) return artifact_uri model_package = export_model(model_to_deploy_id, "tflite", MODEL_DIR) Explanation: Export as Edge model You can export an AutoML image classification model as an Edge model which you can then custom deploy to an edge device, such as a mobile phone or IoT device, or download locally. Use this helper function export_model to export the model to Google Cloud, which takes the following parameters: name: The Vertex fully qualified identifier for the Model resource. format: The format to save the model format as. gcs_dest: The Cloud Storage location to store the SavedFormat model artifacts to. This function calls the Model client service's method export_model, with the following parameters: name: The Vertex fully qualified identifier for the Model resource. output_config: The destination information for the exported model. artifact_destination.output_uri_prefix: The Cloud Storage location to store the SavedFormat model artifacts to. export_format_id: The format to save the model format as. For AutoML image classification: tf-saved-model: TensorFlow SavedFormat for deployment to a container. tflite: TensorFlow Lite for deployment to an edge or mobile device. edgetpu-tflite: TensorFlow Lite for TPU tf-js: TensorFlow for web client coral-ml: for Coral devices The method returns a long running operation response. We will wait sychronously for the operation to complete by calling the response.result(), which will block until the model is exported. End of explanation ! gsutil ls $model_package # Download the model artifacts ! gsutil cp -r $model_package tflite tflite_path = "tflite/model.tflite" Explanation: Download the TFLite model artifacts Now that you have an exported TFLite version of your model, you can test the exported model locally, but first downloading it from Cloud Storage. End of explanation import tensorflow as tf interpreter = tf.lite.Interpreter(model_path=tflite_path) interpreter.allocate_tensors() input_details = interpreter.get_input_details() output_details = interpreter.get_output_details() input_shape = input_details[0]["shape"] print("input tensor shape", input_shape) Explanation: Instantiate a TFLite interpreter The TFLite version of the model is not a TensorFlow SavedModel format. You cannot directly use methods like predict(). Instead, one uses the TFLite interpreter. You must first setup the interpreter for the TFLite model as follows: Instantiate an TFLite interpreter for the TFLite model. Instruct the interpreter to allocate input and output tensors for the model. Get detail information about the models input and output tensors that will need to be known for prediction. End of explanation test_items = ! 
gsutil cat $IMPORT_FILE | head -n1 test_item = test_items[0].split(",")[0] with tf.io.gfile.GFile(test_item, "rb") as f: content = f.read() test_image = tf.io.decode_jpeg(content) print("test image shape", test_image.shape) test_image = tf.image.resize(test_image, (224, 224)) print("test image shape", test_image.shape, test_image.dtype) test_image = tf.cast(test_image, dtype=tf.uint8).numpy() Explanation: Get test item You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction. End of explanation import numpy as np data = np.expand_dims(test_image, axis=0) interpreter.set_tensor(input_details[0]["index"], data) interpreter.invoke() softmax = interpreter.get_tensor(output_details[0]["index"]) label = np.argmax(softmax) print(label) Explanation: Make a prediction with TFLite model Finally, you do a prediction using your TFLite model, as follows: Convert the test image into a batch of a single image (np.expand_dims) Set the input tensor for the interpreter to your batch of a single image (data). Invoke the interpreter. Retrieve the softmax probabilities for the prediction (get_tensor). Determine which label had the highest probability (np.argmax). End of explanation delete_dataset = True delete_pipeline = True delete_model = True delete_endpoint = True delete_batchjob = True delete_customjob = True delete_hptjob = True delete_bucket = True # Delete the dataset using the Vertex fully qualified identifier for the dataset try: if delete_dataset and "dataset_id" in globals(): clients["dataset"].delete_dataset(name=dataset_id) except Exception as e: print(e) # Delete the training pipeline using the Vertex fully qualified identifier for the pipeline try: if delete_pipeline and "pipeline_id" in globals(): clients["pipeline"].delete_training_pipeline(name=pipeline_id) except Exception as e: print(e) # Delete the model using the Vertex fully qualified identifier for the model try: if delete_model and "model_to_deploy_id" in globals(): clients["model"].delete_model(name=model_to_deploy_id) except Exception as e: print(e) # Delete the endpoint using the Vertex fully qualified identifier for the endpoint try: if delete_endpoint and "endpoint_id" in globals(): clients["endpoint"].delete_endpoint(name=endpoint_id) except Exception as e: print(e) # Delete the batch job using the Vertex fully qualified identifier for the batch job try: if delete_batchjob and "batch_job_id" in globals(): clients["job"].delete_batch_prediction_job(name=batch_job_id) except Exception as e: print(e) # Delete the custom job using the Vertex fully qualified identifier for the custom job try: if delete_customjob and "job_id" in globals(): clients["job"].delete_custom_job(name=job_id) except Exception as e: print(e) # Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job try: if delete_hptjob and "hpt_job_id" in globals(): clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id) except Exception as e: print(e) if delete_bucket and "BUCKET_NAME" in globals(): ! gsutil rm -r $BUCKET_NAME Explanation: Cleaning up To clean up all GCP resources used in this project, you can delete the GCP project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial: Dataset Pipeline Model Endpoint Batch Job Custom Job Hyperparameter Tuning Job Cloud Storage Bucket End of explanation
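One detail the prediction cell above leaves implicit: np.argmax only returns an integer class index. To report a flower name you need the label map that, to my knowledge, AutoML Edge exports alongside the TFLite model (typically a dict.txt with one label per line, in index order) -- treat the file name and ordering below as assumptions to verify against your own export:
# Hedged sketch: map the predicted index back to a class name.
# Assumes a "dict.txt" label file exported next to the TFLite model,
# one label per line in index order -- verify this for your export.
with open("dict.txt") as f:
    labels = [line.strip() for line in f]
print(labels[label])  # uses `label` from the prediction cell above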
5,139
Given the following text description, write Python code to implement the functionality described below step by step Description: In this notebook I'll show a probabilistic interpretation of the nearest neighbours algorithm as a mixture of Gaussians. Following Barber 2012. First I'll give an example of ... Next, I'll show how to reformulate ... using Theory This pretty much follows from Barber, 2012. kNN is simple to understand and implement, and often used as a baseline. Some limitations of this approach: * In metric based methods, how do we measure distance? Euclidean distance does not account for how the data is distributed * The whole dataset needs to be stored to make a classification since the novel point must be compared to all of the train points. * Each distance calculation can be expensive if the datapoints are high dimensional We can reformulate the kNN as a class conditional mixture of Gaussians. Probabilistic NN Barber 2012 shows an interpretation of the nearest neighbour method as the limiting case of a mixture model. What follows is a solution for Exercise 158 from Barber 2012 in Python. Write a routine SoftNearNeigh(xtrain,xtest,trainlabels,sigma) to implement soft nearest neighbours, analogous to nearNeigh.m. Here sigma is the variance $\sigma^2$ in equation (14.3.1). As above, the file NNdata.mat contains training and test data for the handwritten digits 5 and 9. Using leave one out cross-validation, find the optimal $\sigma^2$ and use this to compute the classification accuracy of the method on the test data. Hint Step1: We want to solve a binary classification problem. Step5: Barber follows a generative approach and uses kernel density estimation (Parzen estimator) to interpret kNN as the limiting case of a mixture of Gaussians. An isotropic Gaussian of width $\sigma^2$ is placed at each data point, and a mixture is used to model each class. The Parzen estimator With kernel density estimation we want to approximate a PDF with a mixture of continuous probability distributions. A Parzen estimator centers a probability distribution at each data point $\textbf{x}_n$ as $P(\textbf{x}) = \frac{1}{N} \sum_{n=1}^{N} P(\textbf{x}|\textbf{x}_{n})$ For a D dimensional $\textbf{x}$ we choose an isotropic Gaussian $P(\textbf{x}|\textbf{x}_{n}) = \mathcal{N}(\textbf{x}|\textbf{x}_{n}, \sigma^2 \textbf{I}_{D})$, which gives the mixture $P(\textbf{x}) = \frac{1}{N} \sum_{n=1}^{N} \frac{1}{(2 \pi \sigma^2)^{D/2}} e^{- (\textbf{x} - \textbf{x}_n)^2 / 2\sigma^2}$ Nearest Neighbour classification Given classes $c = \{0, 1\}$, we consider the following mixture model $P(\textbf{x}|c=0) = \frac{1}{N_0} \sum_{n \in \textit{class 0}} \mathcal{N}(\textbf{x}| \textbf{x}_n, \sigma^2\textbf{I}) = \frac{1}{N_0} \frac{1}{(2 \pi \sigma^2)^{\frac{D}{2}}} \sum_{n \in \textit{class 0}} e^{-(\textbf{x} - \textbf{x}_n)^2 / (2 \sigma^2)}$ $P(\textbf{x}|c=1) = \frac{1}{N_1} \sum_{n \in \textit{class 1}} \mathcal{N}(\textbf{x}| \textbf{x}_n, \sigma^2\textbf{I}) = \frac{1}{N_1} \frac{1}{(2 \pi \sigma^2)^{\frac{D}{2}}} \sum_{n \in \textit{class 1}} e^{-(\textbf{x} - \textbf{x}_n)^2 / (2 \sigma^2)}$ To classify a new instance $\textbf{x}_{\star}$ we calculate the posterior for both classes and take the ratio $\frac{P(c=0|\textbf{x}_{\star})}{P(c=1|\textbf{x}_{\star})} = \frac{P(\textbf{x}_{\star}|c=0) P(c=0)}{P(\textbf{x}_{\star}|c=1) P(c=1)}$. If this ratio is $\gt 1$ then we classify $\textbf{x}_{\star}$ as class 0, otherwise as class 1. 
The class probabilities can be determined by maximum likelihood, which gives the class frequencies $P(c) = \frac{N_c}{N}$. TODO Step6: Problem; when c0p and c1p are equal we have a tie! How should we interpret that? My assumption is to assign ties to class 1! kNN
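The hint about computing log(e^a + e^b) refers to the classic log-sum-exp trick: shift by the largest exponent so nothing underflows. A small self-contained demonstration (my own illustration, independent of the notebook code below):
import numpy as np

def logsumexp_demo(values):
    # log(sum(exp(values))), computed stably by factoring out the max
    a = np.max(values)
    return a + np.log(np.sum(np.exp(values - a)))

x = np.array([-1000.0, -1000.5])
print(np.log(np.sum(np.exp(x))))  # -inf: the naive version underflows
print(logsumexp_demo(x))          # approx. -999.53: the stable version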
Python Code: import scipy.io as sio
nndata = sio.loadmat('/Users/gm/Downloads/BRMLtoolkit/data/NNdata.mat')
nndata
Explanation: In this notebook I'll show a probabilistic interpretation of the nearest neighbours algorithm as a mixture of Gaussians. Following Barber 2012. First I'll give an example of ... Next, I'll show how to reformulate ... using Theory This pretty much follows from Barber, 2012. kNN is simple to understand and implement, and often used as a baseline. Some limitations of this approach: * In metric based methods, how do we measure distance? Euclidean distance does not account for how the data is distributed * The whole dataset needs to be stored to make a classification since the novel point must be compared to all of the train points. * Each distance calculation can be expensive if the datapoints are high dimensional We can reformulate the kNN as a class conditional mixture of Gaussians. Probabilistic NN Barber 2012 shows an interpretation of the nearest neighbour method as the limiting case of a mixture model. What follows is a solution for Exercise 158 from Barber 2012 in Python. Write a routine SoftNearNeigh(xtrain,xtest,trainlabels,sigma) to implement soft nearest neighbours, analogous to nearNeigh.m. Here sigma is the variance σ^2 in equation (14.3.1). As above, the file NNdata.mat contains training and test data for the handwritten digits 5 and 9. Using leave one out cross-validation, find the optimal σ^2 and use this to compute the classification accuracy of the method on the test data. Hint: you may have numerical difficulty with this method. To avoid this, consider using the logarithm, and how to numerically compute log(e^a + e^b) for large (negative) a and b. See also logsumexp.m. Data For this exercise we'll be using a subset of the MNIST dataset provided in BRMLtoolkit at Barber 2012. End of explanation
class0_train = nndata['train5']
class0_test = nndata['test5']
class1_train = nndata['train9']
class1_test = nndata['test9']
Explanation: We want to solve a binary classification problem. End of explanation
import numpy as np

def log_sum_exp(x):
    """Numerically stable log(sum(exp(x)))."""
    a = np.max(x)
    return a + np.log(np.sum(np.exp(x-a)))

def log_mean_exp(x):
    """Numerically stable log(mean(exp(x)))."""
    a = np.max(x)
    return a + np.log(np.mean(np.exp(x-a)))

def parzen(x, mu, sigma=1.0):
    """Evaluate the log density of a Parzen estimator whose kernel is an
    isotropic normal distribution with variance sigma, centred at the
    points in mu.

    Parameters
    -----------
    x : numpy array
    Classification input
    mu : numpy matrix
    Contains the data points over which this distribution is based.
    sigma : scalar
    The variance of the normal distribution around each data point.

    Returns
    -------
    log_p : float
    The log of the probability density at the point x."""
    # log normalising constant of the D-dimensional isotropic Gaussian kernel;
    # log_mean_exp already accounts for the 1/N mixture weight
    D = mu.shape[1]
    z = 0.5 * D * np.log(2.0 * np.pi * sigma)
    e = -np.sum((x - mu)**2.0, axis=1) / (2.0 * sigma)
    log_p = log_mean_exp(e)
    return log_p - z
sigmas = [1e-13]
priorC0 = class0_train.T.shape[0] / (class0_train.T.shape[0] + class1_train.T.shape[0])
priorC1 = class1_train.T.shape[0] / (class0_train.T.shape[0] + class1_train.T.shape[0])
for sigma in sigmas:
    correct = 0
    for x in class0_test.T:
        # log posterior (up to a shared constant): log prior + log likelihood
        c0p = np.log(priorC0) + parzen(x, class0_train.T, sigma=sigma)
        c1p = np.log(priorC1) + parzen(x, class1_train.T, sigma=sigma)
        if (c0p - c1p) > 0:  # log posterior ratio > 0 means class 0
            correct += 1
    print(sigma, correct, class0_test.shape[1], correct / class0_test.shape[1])
    correct = 0
    for x in class1_test.T:
        c0p = np.log(priorC0) + parzen(x, class0_train.T, sigma=sigma)
        c1p = np.log(priorC1) + parzen(x, class1_train.T, sigma=sigma)
        if (c0p - c1p) <= 0:  # ties go to class 1
            correct += 1
    print(sigma, correct, class1_test.shape[1], correct / class1_test.shape[1])
Explanation: Barber follows a generative approach and uses kernel density estimation (Parzen estimator) to interpret kNN as the limiting case of a mixture of Gaussians. An isotropic Gaussian of width $\sigma^2$ is placed at each data point, and a mixture is used to model each class. The Parzen estimator With kernel density estimation we want to approximate a PDF with a mixture of continuous probability distributions. A Parzen estimator centers a probability distribution at each data point $\textbf{x}_n$ as $P(\textbf{x}) = \frac{1}{N} \sum_{n=1}^{N} P(\textbf{x}|\textbf{x}_{n})$ For a D dimensional $\textbf{x}$ we choose an isotropic Gaussian $P(\textbf{x}|\textbf{x}_{n}) = \mathcal{N}(\textbf{x}|\textbf{x}_{n}, \sigma^2 \textbf{I}_{D})$, which gives the mixture $P(\textbf{x}) = \frac{1}{N} \sum_{n=1}^{N} \frac{1}{(2 \pi \sigma^2)^{D/2}} e^{- (\textbf{x} - \textbf{x}_n)^2 / 2\sigma^2}$ Nearest Neighbour classification Given classes $c = \{0, 1\}$, we consider the following mixture model $P(\textbf{x}|c=0) = \frac{1}{N_0} \sum_{n \in \textit{class 0}} \mathcal{N}(\textbf{x}| \textbf{x}_n, \sigma^2\textbf{I}) = \frac{1}{N_0} \frac{1}{(2 \pi \sigma^2)^{\frac{D}{2}}} \sum_{n \in \textit{class 0}} e^{-(\textbf{x} - \textbf{x}_n)^2 / (2 \sigma^2)}$ $P(\textbf{x}|c=1) = \frac{1}{N_1} \sum_{n \in \textit{class 1}} \mathcal{N}(\textbf{x}| \textbf{x}_n, \sigma^2\textbf{I}) = \frac{1}{N_1} \frac{1}{(2 \pi \sigma^2)^{\frac{D}{2}}} \sum_{n \in \textit{class 1}} e^{-(\textbf{x} - \textbf{x}_n)^2 / (2 \sigma^2)}$ To classify a new instance $\textbf{x}_{\star}$ we calculate the posterior for both classes and take the ratio $\frac{P(c=0|\textbf{x}_{\star})}{P(c=1|\textbf{x}_{\star})} = \frac{P(\textbf{x}_{\star}|c=0) P(c=0)}{P(\textbf{x}_{\star}|c=1) P(c=1)}$. If this ratio is $\gt 1$ then we classify $\textbf{x}_{\star}$ as class 0, otherwise as class 1. 
The class probabilities can be determined by maximum likelihood, which gives the class frequencies $P(c) = \frac{N_c}{N}$. TODO: proof! To understand how this relates to the nearest neighbour method, we need to consider the case $\sigma^2 \rightarrow 0$. Note that both numerator and denominator are sums of exponentials. Intuitively, if the variance is small, the numerator will be dominated by the term for which point $x_{n_0} \in \text{class } 0$ is closest to $\textbf{x}_{\star}$. The same holds for the denominator and points in class 1. 
$\frac{1}{(2 \pi \sigma^2)^{\frac{D}{2}}}$ cancels out, and for vanishingly small values of $\sigma$ we have $\frac{P(c=0|\textbf{x}_{\star})}{P(c=1|\textbf{x}_{\star})} \approx \frac{e^{-(\textbf{x}_{\star} - \textbf{x}_{n_0})^2 / (2 \sigma^2)} P(c=0)/N_{0}}{e^{-(\textbf{x}_{\star} - \textbf{x}_{n_1})^2 / (2 \sigma^2)} P(c=1)/N_{1}} = \frac{e^{-(\textbf{x}_{\star} - \textbf{x}_{n_0})^2 / (2 \sigma^2)}}{e^{-(\textbf{x}_{\star} - \textbf{x}_{n_1})^2 / (2 \sigma^2)}}$, since $P(c)/N_c = 1/N$ for both classes. In the limit $\sigma^2 \rightarrow 0$ this ratio tends to infinity if $(\textbf{x}_{\star} - \textbf{x}_{n_0})^2 \lt (\textbf{x}_{\star} - \textbf{x}_{n_1})^2$ and to zero otherwise, so we classify $\textbf{x}_{\star}$ as class 0 if $\textbf{x}_{\star}$ has a nearer point in class 0 than the closest point in class 1. 
Implementation End of explanation
X_train = np.append(class0_train.T, class1_train.T, axis=0)
y_train = ['a'] * class0_train.shape[1] + ['b'] * class1_train.shape[1]
X_train.shape
X_test = np.append(class0_test.T, class1_test.T, axis=0)
y_test = ['a'] * class0_test.shape[1] + ['b'] * class1_test.shape[1]
X_test.shape, len(y_test)
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=10, n_jobs=4, metric='euclidean', algorithm='ball_tree')
fitted = knn.fit(X_train, y_train)
?KNeighborsClassifier
y_pred = fitted.predict(X_test)
from sklearn.metrics import accuracy_score, confusion_matrix
print("Accuracy: {}".format(accuracy_score(y_test, y_pred)))
print(confusion_matrix(y_test, y_pred))
Explanation: Problem; when c0p and c1p are equal we have a tie! How should we interpret that? My assumption is to assign ties to class 1! kNN End of explanation
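Note that the exercise asks for leave-one-out cross-validation over $\sigma^2$, which the notebook above never gets to (it only tries sigmas = [1e-13]). A hedged sketch of how it could look, reusing the parzen helper defined above -- the candidate grid is my own illustrative choice, not from the source:
# Hedged sketch: leave-one-out CV over sigma, reusing parzen() from above.
candidates = [0.1, 0.5, 1.0, 2.0, 5.0]  # illustrative grid, not from the source
X = np.append(class0_train.T, class1_train.T, axis=0)
y = np.array([0] * class0_train.shape[1] + [1] * class1_train.shape[1])

best_sigma, best_acc = None, -1.0
for sigma in candidates:
    hits = 0
    for i in range(X.shape[0]):
        # hold out point i, fit the class-conditional Parzen models on the rest
        rest_X, rest_y = np.delete(X, i, axis=0), np.delete(y, i)
        c0 = np.log(np.mean(rest_y == 0)) + parzen(X[i], rest_X[rest_y == 0], sigma=sigma)
        c1 = np.log(np.mean(rest_y == 1)) + parzen(X[i], rest_X[rest_y == 1], sigma=sigma)
        pred = 0 if (c0 - c1) > 0 else 1  # ties go to class 1, as above
        hits += int(pred == y[i])
    acc = hits / X.shape[0]
    if acc > best_acc:
        best_sigma, best_acc = sigma, acc
print(best_sigma, best_acc)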
5,140
Given the following text description, write Python code to implement the functionality described below step by step Description: SPDX-FileCopyrightText Step1: Initialize wind farm To initialize a specific wind farm you need to provide a wind turbine fleet specifying the wind turbines and their number or total installed capacity (in Watt) in the farm. Optionally, you can specify a wind farm efficiency and a name as an identifier. Step2: In the following, a wind farm with a constant efficiency is defined. A wind farm efficiency can also be dependent on the wind speed, in which case it needs to be provided as a dataframe with 'wind_speed' and 'efficiency' columns containing wind speeds in m/s and the corresponding dimensionless wind farm efficiency. Step3: Initialize wind turbine cluster As for a wind farm, you can initialize a wind turbine cluster with a dictionary that contains the basic parameters. A wind turbine cluster is defined by its wind farms. Step4: Use the TurbineClusterModelChain to calculate power output The TurbineClusterModelChain is a class that provides all necessary steps to calculate the power output of a wind farm or wind turbine cluster. Like the ModelChain (see basic example) you can use the TurbineClusterModelChain with default parameters, as shown in this example for the wind farm, or specify custom parameters, as done here for the cluster. If you use the 'run_model' method, first the aggregated power curve and the mean hub height of the wind farm/cluster are calculated, then inherited functions of the ModelChain are used to calculate the wind speed and density (if necessary) at hub height. After that, depending on the parameters, wake losses are applied, and finally the power output is calculated. Step5: Plot results If you have matplotlib installed you can visualize the calculated power output.
Python Code: import pandas as pd import modelchain_example as mc_e from windpowerlib import TurbineClusterModelChain, WindTurbineCluster, WindFarm import logging logging.getLogger().setLevel(logging.DEBUG) # Get weather data weather = mc_e.get_weather_data('weather.csv') print(weather[['wind_speed', 'temperature', 'pressure']][0:3]) # Initialize wind turbines my_turbine, e126, my_turbine2 = mc_e.initialize_wind_turbines() print() print('nominal power of my_turbine: {}'.format(my_turbine.nominal_power)) Explanation: SPDX-FileCopyrightText: 2019 oemof developer group &#99;&#111;&#110;&#116;&#97;&#99;&#116;&#64;&#111;&#101;&#109;&#111;&#102;&#46;&#111;&#114;&#103; SPDX-License-Identifier: MIT SPDX-License-Identifier: CC-BY-4.0 TurbineClusterModelChain example This example shows you how to calculate the power output of wind farms and wind turbine clusters using the windpowerlib. A cluster can be useful if you want to calculate the feed-in of a region for which you want to use one single weather data point. Functions that are used in the ModelChain example, like the initialization of wind turbines, are imported and used without further explanation. Imports and initialization of wind turbines The import of weather data and the initialization of wind turbines is done as in the modelchain_example. Be aware that currently for wind farm and wind cluster calculations wind turbines need to have a power curve as some calculations do not work with the power coefficient curve. End of explanation # specification of wind farm data where turbine fleet is provided in a # pandas.DataFrame # for each turbine type you can either specify the number of turbines of # that type in the wind farm (float values are possible as well) or the # total installed capacity of that turbine type in W wind_turbine_fleet = pd.DataFrame( {'wind_turbine': [my_turbine, e126], # as windpowerlib.WindTurbine 'number_of_turbines': [6, None], 'total_capacity': [None, 12.6e6]} ) # initialize WindFarm object example_farm = WindFarm(name='example_farm', wind_turbine_fleet=wind_turbine_fleet) Explanation: Initialize wind farm To initialize a specific wind farm you need to provide a wind turbine fleet specifying the wind turbines and their number or total installed capacity (in Watt) in the farm. Optionally, you can specify a wind farm efficiency and a name as an identifier. End of explanation # specification of wind farm data (2) containing a wind farm efficiency # wind turbine fleet is provided using the to_group function example_farm_2_data = { 'name': 'example_farm_2', 'wind_turbine_fleet': [my_turbine.to_group(6), e126.to_group(total_capacity=12.6e6)], 'efficiency': 0.9} # initialize WindFarm object example_farm_2 = WindFarm(**example_farm_2_data) print('nominal power of first turbine type of example_farm_2: {}'.format( example_farm_2.wind_turbine_fleet.loc[0, 'wind_turbine'].nominal_power)) Explanation: Following, a wind farm with a constant efficiency is defined. A wind farm efficiency can also be dependent on the wind speed in which case it needs to be provided as a dataframe with 'wind_speed' and 'efficiency' columns containing wind speeds in m/s and the corresponding dimensionless wind farm efficiency. 
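As a hedged illustration of that wind-speed-dependent form (example values of my own, not from the windpowerlib documentation), such an efficiency curve could be set up like this:
# Illustrative wind-speed-dependent efficiency curve (assumed values)
wind_farm_efficiency = pd.DataFrame(
    {'wind_speed': [0.0, 5.0, 10.0, 15.0, 25.0],    # in m/s
     'efficiency': [1.0, 0.98, 0.95, 0.90, 0.90]})  # dimensionless
example_farm_3 = WindFarm(name='example_farm_3',
                          wind_turbine_fleet=wind_turbine_fleet,
                          efficiency=wind_farm_efficiency)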
End of explanation # specification of cluster data example_cluster_data = { 'name': 'example_cluster', 'wind_farms': [example_farm, example_farm_2]} # initialize WindTurbineCluster object example_cluster = WindTurbineCluster(**example_cluster_data) Explanation: Initialize wind turbine cluster Like for a wind farm for the initialization of a wind turbine cluster you can use a dictionary that contains the basic parameters. A wind turbine cluster is defined by its wind farms. End of explanation # power output calculation for example_farm # initialize TurbineClusterModelChain with default parameters and use # run_model method to calculate power output mc_example_farm = TurbineClusterModelChain(example_farm).run_model(weather) # write power output time series to WindFarm object example_farm.power_output = mc_example_farm.power_output # set efficiency of example_farm to apply wake losses example_farm.efficiency = 0.9 # power output calculation for turbine_cluster # own specifications for TurbineClusterModelChain setup modelchain_data = { 'wake_losses_model': 'wind_farm_efficiency', # # 'dena_mean' (default), None, # 'wind_farm_efficiency' or name # of another wind efficiency curve # see :py:func:`~.wake_losses.get_wind_efficiency_curve` 'smoothing': True, # False (default) or True 'block_width': 0.5, # default: 0.5 'standard_deviation_method': 'Staffell_Pfenninger', # # 'turbulence_intensity' (default) # or 'Staffell_Pfenninger' 'smoothing_order': 'wind_farm_power_curves', # # 'wind_farm_power_curves' (default) or # 'turbine_power_curves' 'wind_speed_model': 'logarithmic', # 'logarithmic' (default), # 'hellman' or # 'interpolation_extrapolation' 'density_model': 'ideal_gas', # 'barometric' (default), 'ideal_gas' or # 'interpolation_extrapolation' 'temperature_model': 'linear_gradient', # 'linear_gradient' (def.) or # 'interpolation_extrapolation' 'power_output_model': 'power_curve', # 'power_curve' (default) or # 'power_coefficient_curve' 'density_correction': True, # False (default) or True 'obstacle_height': 0, # default: 0 'hellman_exp': None} # None (default) or None # initialize TurbineClusterModelChain with own specifications and use # run_model method to calculate power output mc_example_cluster = TurbineClusterModelChain( example_cluster, **modelchain_data).run_model(weather) # write power output time series to WindTurbineCluster object example_cluster.power_output = mc_example_cluster.power_output Explanation: Use the TurbineClusterModelChain to calculate power output The TurbineClusterModelChain is a class that provides all necessary steps to calculate the power output of a wind farm or wind turbine cluster. Like the ModelChain (see basic example) you can use the TurbineClusterModelChain with default parameters as shown in this example for the wind farm or specify custom parameters as done here for the cluster. If you use the 'run_model' method first the aggregated power curve and the mean hub height of the wind farm/cluster is calculated, then inherited functions of the ModelChain are used to calculate the wind speed and density (if necessary) at hub height. After that, depending on the parameters, wake losses are applied and at last the power output is calculated. 
End of explanation # try to import matplotlib logging.getLogger().setLevel(logging.WARNING) try: from matplotlib import pyplot as plt # matplotlib inline needed in notebook to plot inline %matplotlib inline except ImportError: plt = None # plot turbine power output if plt: example_cluster.power_output.plot(legend=True, label='example cluster') example_farm.power_output.plot(legend=True, label='example farm') plt.xlabel('Wind speed in m/s') plt.ylabel('Power in W') plt.show() # plot aggregated (and smoothed) power curve of example_cluster if plt: example_cluster.power_curve.plot( x='wind_speed', y='value', style='*') plt.xlabel('Wind speed in m/s') plt.ylabel('Power in W') plt.show() Explanation: Plot results If you have matplotlib installed you can visualize the calculated power output. End of explanation
5,141
Given the following text description, write Python code to implement the functionality described below step by step Description: Compare speed python list vs numpy ops Let's implement a standard deviation function and compare computation times for 10 million numbers. Step1: As we can see, the numpy function is much faster than the one implemented on a python list. There is a built-in function in numpy to compute the standard deviation. Verify that the std computed with all three techniques gives the same result. Step2: Matrix multiplication Use case - solve the Normal Equations http Step3: Covariance Matrix Step4: Eigen Value Decomposition Read about Eigen, SVD, PCA decomposition https Step5: U and V are unitary matrices Step6: Vectors of U and V are orthogonal
Python Code: n = 10 ** 7 # Implementation using python list def std(x:list): x_mean = sum(x)/len(x) y = sum([(v - x_mean) ** 2 for v in x])/len(x) return y**0.5 %time std(range(n)) # Implementation using numpy array function def std_np(x): x_mean = np.sum(x)/len(x) return (((x - x_mean) ** 2).mean())** 0.5 %time std_np(np.arange(n)) Explanation: Compare speed python list vs numpy ops Let's implement standard deviation function. And compare computation time for 10 million numbers. End of explanation %time np.std(np.arange(int(1e7))) %%time n_input = tf.placeholder(dtype=tf.float64) x = tf.range(0, n_input) x_mean = tf.reduce_mean(x) x_std = tf.sqrt(tf.reduce_mean(tf.square(tf.subtract(x, x_mean)))) with tf.Session() as sess: print(sess.run([x_std], feed_dict={n_input: n})) Explanation: As we can see numpy function much fater than that implemtated on python list. There are a built-in function in numpy to compute the standard deviation. Verify the std computed in all three techniques give same result. End of explanation np.random.seed(1) W = np.array([2.3, - 5.7, 8.9]).T b = 1.2 X = np.random.random((10, 3)) y = np.dot(X, W) print("W: ", W) print("y: ", y) W_estimate = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y) W_estimate Explanation: Matrix multiplication Usecase - solve Normal Equations http://mlwiki.org/index.php/Normal_Equation End of explanation np.random.seed(1230) X = np.random.ranf((5, 3)) X n = X.shape[0] X0 = X - np.mean(X, axis = 0) (X0.T).dot(X0)/n np.cov(X, ddof=0, rowvar=False) np.var(X[:, 0]) np.cov(X[:,0], X[:,1], ddof=0) Explanation: Covariance Matrix End of explanation cx = np.cov(X, rowvar=False) cx e, v = np.linalg.eig(cx) e, v Z = X - X.mean(axis=0) Z Z.mean(axis=0) U, D, V = np.linalg.svd(Z) U D V Explanation: Eigen Value Decomposition Read about Eigen, SVD, PCA decomposition https://www.cc.gatech.edu/~dellaert/pubs/svd-note.pdf End of explanation U.dot(U.T) V.dot(V.T) Explanation: U and V are unitary matrix End of explanation U[0].dot(U[1]), U[0].dot(U[2]), U[1].dot(U[2]) V[0].dot(V[1]), V[0].dot(V[2]), V[1].dot(V[2]) X_0 = np.zeros_like(X) np.fill_diagonal(X_0, D) X_0 U.dot(X_0).dot(V) Explanation: Vectors of U and V are orthogonal End of explanation
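A side note on the normal-equation solve in the section above: explicitly inverting X^T X is numerically fragile when X is ill-conditioned. A hedged sketch (my own addition) of the safer route via numpy's least-squares solver, rebuilt on the same seed so it stands alone:
# Hedged sketch: solve the same least-squares problem without forming
# inv(X.T @ X) explicitly -- np.linalg.lstsq is the numerically safer route.
import numpy as np
np.random.seed(1)
W = np.array([2.3, -5.7, 8.9]).T
X_ne = np.random.random((10, 3))
y_ne = X_ne.dot(W)
W_lstsq, residuals, rank, sv = np.linalg.lstsq(X_ne, y_ne, rcond=None)
print(W_lstsq)  # should recover [2.3, -5.7, 8.9] up to floating point error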
5,142
Given the following text description, write Python code to implement the functionality described below step by step Description: Download lists of experiments by injection hemisphere, mouse cre line, and injection structure Download Pvalb experiments injected in the left hemisphere VISp Step1: Download grid data for these experiments at a particular resolution Step2: 10, 25, 50, or 100 micron volumes for Step3: http Step4: Read data (File mode) Step5: Read experiment grid data for these experiments by type 3D numpy arrays worked pretty well for Friday harbor last year Process data
Python Code: # Get the atlas id def query_atlases(search_pattern): return rma.build_query_url(rma.model_stage('Atlas', criteria="[name$il'%s']" % (search_pattern), only=['id', 'name'])) atlases = o.do_query(query_atlases, read_data, 'mouse*') pretty(atlases) # get the structure def query_structure(acronym, ontology_id): return rma.build_query_url(rma.model_stage('Structure', criteria="[acronym$eq'%s'][ontology_id$eq%d]" % (acronym, ontology_id), only=['id','name'])) structure = o.do_query(query_structure, read_data, 'VISp', 1)[0] pretty(structure) def query_hemisphere(name): return rma.build_query_url(rma.model_stage('Hemisphere', criteria="[name$il'%s']" % (name))) left_hemisphere_id = o.do_query(query_hemisphere, read_data, 'left')[0]['id'] left_hemisphere_id mca = MouseConnectivityApi() experiments = mca.get_experiments(structure['id']) # TODO: figure out why this didn't work w/ left hemisphere other_hemisphere_id = 3 # get experiments doesn't take hemisphere into account, so filter the results with a list comprehension left_hemisphere_experiments = [e for e in experiments if any([(injection['primary_injection_structure']['hemisphere_id'] == other_hemisphere_id) for injection in e['specimen']['stereotaxic_injections']])] pretty(left_hemisphere_experiments) Explanation: Download lists of experiments by injection hemisphere, mouse cre line, and injection structure Download Pvalb experiments injected in the left hemisphere VISp End of explanation from allensdk.api.queries.grid_data.grid_data_api import GridDataApi Explanation: Download grid data for these experiments at a particular resolution End of explanation gda = GridDataApi() # TODO: show search to get this section_data_set_id = 183282970 image_list = ['projection_density', 'projection_energy', 'injection_fraction', 'injection_density', 'injection_energy'] resolution = 100 # or 10, 25, 50 # Hmm, this didn't work for an image list of length > 1 for image in image_list: q = gda.build_projection_grid_download_query(section_data_set_id, image=[image], resolution=resolution) print(q) # gda.download_projection_grid_data(section_data_set_id, # image=image, # resolution=resolution) # TODO: data mask Explanation: 10, 25, 50, or 100 micron volumes for: * Projection density * projection energy * injection fraction * Injection density * Injection energy * Data mask (mask of valid voxels per experiment) End of explanation q = gda.build_expression_grid_download_query(section_data_set_id, include=image_list) print q Explanation: http://help.brain-map.org//display/mouseconnectivity/API#API-Expression3DGrids End of explanation #http://api.brain-map.org/api/v2/data/query.csv?criteria= #model::ProjectionStructureUnionize, #rma::criteria,[is_injection$eq'f'],hemisphere,structure,section_data_set[id$eq183282970](specimen(stereotaxic_injections(primary_injection_structure,stereotaxic_injection_coordinates))),rma::include,section_data_set(specimen(stereotaxic_injections(primary_injection_structure))), 
#rma::options[tabular$eq'distinct+specimens.name+as+specimen_name,stereotaxic_injection_coordinates.coordinates_ap,stereotaxic_injection_coordinates.coordinates_dv,stereotaxic_injection_coordinates.coordinates_ml,data_sets.id+as+data_set_id,stereotaxic_injections.primary_injection_structure_id,structures.acronym+as+target_structure,hemispheres.symbol+as+hemisphere,projection_structure_unionizes.is_injection,projection_structure_unionizes.sum_pixels,projection_structure_unionizes.sum_projection_pixels,projection_structure_unionizes.sum_pixel_intensity,projection_structure_unionizes.sum_projection_pixel_intensity,projection_structure_unionizes.projection_density,projection_structure_unionizes.projection_intensity,projection_structure_unionizes.projection_energy,projection_structure_unionizes.volume,projection_structure_unionizes.projection_volume,projection_structure_unionizes.normalized_projection_volume,projection_structure_unionizes.max_voxel_density,projection_structure_unionizes.max_voxel_x,projection_structure_unionizes.max_voxel_y,projection_structure_unionizes.max_voxel_z'][start_row$eq0][num_rows$eq3000] def build_query(section_data_set_id): criteria_string = ''.join(["[is_injection$eq'f'],", "hemisphere,", "structure,", "section_data_set[id$eq%d]" % (section_data_set_id), "(specimen", "(stereotaxic_injections", "(primary_injection_structure,stereotaxic_injection_coordinates)", "))"]) include_string = ''.join(["section_data_set", "(specimen", "(stereotaxic_injections", "(primary_injection_structure)", "))"]) tabular_list = ['distinct+specimens.name+as+specimen_name', 'stereotaxic_injection_coordinates.coordinates_ap', 'stereotaxic_injection_coordinates.coordinates_dv', 'stereotaxic_injection_coordinates.coordinates_ml', 'data_sets.id+as+data_set_id', 'stereotaxic_injections.primary_injection_structure_id', 'structures.acronym+as+target_structure', 'hemispheres.symbol+as+hemisphere', 'projection_structure_unionizes.is_injection', 'projection_structure_unionizes.sum_pixels', 'projection_structure_unionizes.sum_projection_pixels', 'projection_structure_unionizes.sum_pixel_intensity', 'projection_structure_unionizes.sum_projection_pixel_intensity', 'projection_structure_unionizes.projection_density', 'projection_structure_unionizes.projection_intensity', 'projection_structure_unionizes.projection_energy', 'projection_structure_unionizes.volume', 'projection_structure_unionizes.projection_volume', 'projection_structure_unionizes.normalized_projection_volume', 'projection_structure_unionizes.max_voxel_density', 'projection_structure_unionizes.max_voxel_x', 'projection_structure_unionizes.max_voxel_y', 'projection_structure_unionizes.max_voxel_z'] model_stage = rma.model_stage('ProjectionStructureUnionize', criteria=criteria_string, include=include_string, tabular=["'%s'" % ','.join(tabular_list)], # TODO: better handling of tabular quotes num_rows='all') url = rma.build_query_url(model_stage, fmt='csv') return url print(build_query(183282970)) Explanation: Read data (File mode): Read experiment metadata by injection hemisphere, mouse cre line, and injection structure “I download a list of all experiments. 
Which were injected in VISp?” End of explanation resolution=25 q = mca.build_volumetric_data_download_url('annotation/ccf_2015', 'annotation_%d.nrrd' % (resolution)) # mca.download_volumetric_data_download('annotation/ccf_2015', 'annotation_%d.nrrd' % (resolution)) print(q) Explanation: Read experiment grid data for these experiments by type 3D numpy arrays worked pretty well for Friday harbor last year Process data: Mask grid data using the data mask or injection fraction volumes “Give me the projection energy voxels from this experiment that are valid for analysis” “Give me the injection density voxels inside the injection site” Mask grid data arrays by structure “Give me the projection density voxel data for voxels belonging to VISp” This will involve the annotation volume at the correct resolution http://download.alleninstitute.org/informatics-archive/current-release/mouse_ccf/annotation/ccf_2015/ End of explanation
5,143
Given the following text description, write Python code to implement the functionality described below step by step Description: The idea In my previous blog post, we got to know the idea of "indentation-based complexity". We took a static view on the Linux kernel to spot the most complex areas. This time, we wanna track the evolution of the indentation-based complexity of a software system over time. We are especially interested in it's correlation between the lines of code. Because if we have a more or less stable development of the lines of codes of our system, but an increasing number of indentation per source code file, we surely got a complexity problem. Again, this analysis is higly inspired by Adam Tornhill's book "Software Design X-Ray" , which I currently always recommend if you want to get a deep dive into software data analysis. The data For the calculation of the evolution of our software system, we can use data from the version control system. In our case, we can get all changes to Java source code files with Git. We just need so say the right magic words, which is git log -p -- *.java This gives us data like the following Step1: The output is the commit data that I've describe above where each in line the text file represents one row in the DataFrame (without blank lines). Cleansing We skip all the data we don't need for sure. Especially the "extended index header" with the two lines that being with +++ and --- are candidates to mix with the real diff data that begins also with a + or a -. Furtunately, we can identify these rows easily Step2: Extracting metadata Next, we extract some metadata of a commit. We can identify the different entries by using a regular expression that looks up a specific key word for each line. We extract each individual information into a new Series/column because we need it for each change line during the software's history. Step3: To assign each commit's metadata to the remaining rows, we forward fill those rows with the metadata by using the fillna method. Step4: Identifying source code lines We can now focus on the changed source code lines. We can identify Step5: For our later indentation-based complexity calculation, we have to make sure that each line
Python Code: import pandas as pd diff_raw = pd.read_csv( "../../buschmais-spring-petclinic_fork/git_diff.log", sep="\n", names=["raw"]) diff_raw.head(16) Explanation: The idea In my previous blog post, we got to know the idea of "indentation-based complexity". We took a static view on the Linux kernel to spot the most complex areas. This time, we wanna track the evolution of the indentation-based complexity of a software system over time. We are especially interested in it's correlation between the lines of code. Because if we have a more or less stable development of the lines of codes of our system, but an increasing number of indentation per source code file, we surely got a complexity problem. Again, this analysis is higly inspired by Adam Tornhill's book "Software Design X-Ray" , which I currently always recommend if you want to get a deep dive into software data analysis. The data For the calculation of the evolution of our software system, we can use data from the version control system. In our case, we can get all changes to Java source code files with Git. We just need so say the right magic words, which is git log -p -- *.java This gives us data like the following: ``` commit e5254156eca3a8461fa758f17dc5fae27e738ab5 Author: Antoine Rey &#97;&#110;&#116;&#111;&#105;&#110;&#101;&#46;&#114;&#101;&#121;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109; Date: Fri Aug 19 18:54:56 2016 +0200 Convert Controler's integration test to unit test diff --git a/src/test/java/org/springframework/samples/petclinic /web/CrashControllerTests.java b/src/test/java/org/springframework/samples/petclinic/web/CrashControllerTests.java index ee83b8a..a83255b 100644 --- a/src/test/java/org/springframework/samples/petclinic/web/CrashControllerTests.java +++ b/src/test/java/org/springframework/samples/petclinic/web/CrashControllerTests.java @@ -1,8 +1,5 @@ package org.springframework.samples.petclinic.web; -import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get; -import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.*; - import org.junit.Before; import org.junit.Test; import org.junit.runner.RunWith; ``` We have the * commit sha commit e5254156eca3a8461fa758f17dc5fae27e738ab5 * author's name Author: Antoine Rey &lt;antoine.rey@gmail.com&gt; * date of the commit Date: Fri Aug 19 18:54:56 2016 +0200 * commit message Convert Controler's integration test to unit test * names of the files that changes (after and before) diff --git a/src/test/java/org/springframework/samples/petclinic /web/CrashControllerTests.java b/src/test/java/org/springframework/samples/petclinic/web/CrashControllerTests.java * the extended index header index ee83b8a..a83255b 100644 --- a/src/test/java/org/springframework/samples/petclinic/web/CrashControllerTests.java +++ b/src/test/java/org/springframework/samples/petclinic/web/CrashControllerTests.java * and the full file diff where we can see additions or modifications (+) and deletions (-) ``` package org.springframework.samples.petclinic.web; -import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get; -import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.*; - import org.junit.Before; ``` We "just" have to get this data into our favorite data analysis framework, which is, of course, Pandas :-). We can actually do that! Let's see how! Advanced data wangling Reading in such a semi-structured data is a little challenge. But we can do it with some tricks. 
First, we read in the whole Git diff history by standard means, using read_csv and the separator \n to get one row per line. We make sure to give the columns a nice name as well.
End of explanation
index_row = diff_raw.raw.str.startswith("index ")
ignored_diff_rows = (index_row.shift(1) | index_row.shift(2))
diff_raw = diff_raw[~(index_row | ignored_diff_rows)]
diff_raw.head(10)
Explanation: The output is the commit data that I described above, where each line in the text file represents one row in the DataFrame (without blank lines).
Cleansing
We skip all the data we don't need for sure. Especially the two lines of the "extended index header" that begin with +++ and --- are candidates to mix with the real diff data, which also begins with a + or a -. Fortunately, we can identify these rows easily: they are the two rows that follow a row starting with index, so shifting that indicator by one and by two positions flags them, and we can get rid of all those lines.
End of explanation
diff_raw['commit'] = diff_raw.raw.str.split("^commit ").str[1]
diff_raw['timestamp'] = pd.to_datetime(diff_raw.raw.str.split("^Date: ").str[1])
diff_raw['path'] = diff_raw.raw.str.extract("^diff --git.* b/(.*)", expand=True)[0]
diff_raw.head()
Explanation: Extracting metadata
Next, we extract some metadata of a commit. We can identify the different entries by using a regular expression that looks up a specific key word for each line. We extract each individual piece of information into a new Series/column because we need it for each change line during the software's history.
End of explanation
diff_raw = diff_raw.fillna(method='ffill')
diff_raw.head(8)
Explanation: To assign each commit's metadata to the remaining rows, we forward fill those rows with the metadata by using the fillna method.
End of explanation
%%timeit
diff_raw.raw.str.extract("^\+( *).*$", expand=True)[0].str.len()
diff_raw["i"] = diff_raw.raw.str[1:].str.len() - diff_raw.raw.str[1:].str.lstrip().str.len()
diff_raw
%%timeit
diff_raw.raw.str[0] + diff_raw.raw.str[1:].str.lstrip().str.len()
diff_raw['added'] = diff_raw.line.str.extract("^\+( *).*$", expand=True)[0].str.len()
diff_raw['deleted'] = diff_raw.line.str.extract("^-( *).*$", expand=True)[0].str.len()
diff_raw.head()
Explanation: Identifying source code lines
We can identify added lines by a leading + and deleted lines by a leading -; the length of the whitespace that follows the sign gives us the indentation of each changed line.
End of explanation
diff_raw['line'] = diff_raw.raw.str.replace("\t", " ")
diff_raw.head()
diff = \
diff_raw[
(~diff_raw['added'].isnull()) |
(~diff_raw['deleted'].isnull())].copy()
diff.head()
diff['is_comment'] = diff.line.str[1:].str.match(r' *(//|/*\*).*')
diff['is_empty'] = diff.line.str[1:].str.replace(" ","").str.len() == 0
diff['is_source'] = ~(diff['is_empty'] | diff['is_comment'])
diff.head()
diff.raw.str[0].value_counts()
diff['lines_added'] = (~diff.added.isnull()).astype('int')
diff['lines_deleted'] = (~diff.deleted.isnull()).astype('int')
diff.head()
diff = diff.fillna(0)
#diff.to_excel("temp.xlsx")
diff.head()
commits_per_day = diff.set_index('timestamp').resample("D").sum()
commits_per_day.head()
%matplotlib inline
commits_per_day.cumsum().plot()
(commits_per_day.added - commits_per_day.deleted).cumsum().plot()
(commits_per_day.lines_added - commits_per_day.lines_deleted).cumsum().plot()
diff_sum = diff.sum()
diff_sum.lines_added - diff_sum.lines_deleted
3913
Explanation: For our later indentation-based complexity calculation, we have to make sure that each line uses spaces for indentation, so we first replace any tabs with spaces.
End of explanation
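To close the loop on the question posed at the start (does indentation grow faster than the code itself?), here is a small sketch of how the cleaned diff DataFrame above could be combined into one comparison. The column names follow the cells above; the indentation_per_line ratio and the correlation call are my own way of quantifying the relationship, not code from the original post.

```python
import pandas as pd

# Cumulative development of indentation vs. lines of code over time.
timeline = diff.set_index('timestamp').resample("D").sum()

trend = pd.DataFrame({
    'indentation': (timeline.added - timeline.deleted).cumsum(),
    'lines': (timeline.lines_added - timeline.lines_deleted).cumsum()})

# A rising average indentation per line hints at a complexity problem,
# even when the amount of code stays roughly stable.
trend['indentation_per_line'] = trend['indentation'] / trend['lines']
trend[['indentation', 'lines']].plot(secondary_y='indentation')
print(trend['indentation'].corr(trend['lines']))
```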
5,144
Given the following text description, write Python code to implement the functionality described below step by step Description: Boundary conditions This example shows solutions to time-dependent diffusion equations formulated with different boundary conditions. Isolated, Dirichlet (prescribed value) and periodic boundary conditions are tested. Step1: Below transport model allowing only diffusion is implemented. Diffusion coefficient is taken from parameters, where it is specified under key *.D (* denotes the equation) Step2: The function below creates and solves a model. It calls supplied setup_bc to initialize boundary conditions. Step3: This function makes plots of concentrations at different times Step4: Default diffusion coefficients and initial conditions Step5: No-flux (isolated) boundary conditions Step6: Dirichlet boundary conditions $c(0)=0$, $c(L)=0$ Step7: Periodic boundary conditions $c(L)=c(0)$
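For reference, every case below solves the same one-dimensional diffusion problem (a standard statement of the setup, with $D$ the diffusion coefficient taken from the parameters):

$$\frac{\partial c}{\partial t} = D\,\frac{\partial^2 c}{\partial x^2}, \qquad 0 \le x \le L,$$

closed with either no-flux conditions ($\partial c/\partial x = 0$ at both ends), Dirichlet conditions ($c(0)=c(L)=0$), or the periodic condition $c(L)=c(0)$.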
Python Code: %matplotlib inline import matplotlib.pylab as plt from oedes import * init_notebook() from matplotlib import ticker Explanation: Boundary conditions This example shows solutions to time-dependent diffusion equations formulated with different boundary conditions. Isolated, Dirichlet (prescribed value) and periodic boundary conditions are tested. End of explanation def v_D_diffusion_only(ctx, eq): return 0., ctx.param(eq, 'D') Explanation: Below transport model allowing only diffusion is implemented. Diffusion coefficient is taken from parameters, where it is specified under key *.D (* denotes the equation) End of explanation def run(species_D, species_ic, setup_bc, L=1., t=10, dt=1e-9): model = models.BaseModel() mesh = fvm.mesh1d(L) # BaseModel requires presence of Poisson's equation # The species are assumed to be uncharged, therefore it is decoupled from the system # and has no effect model.poisson = Poisson(mesh) model.poisson.bc = [models.AppliedVoltage(boundary) for boundary in mesh.boundaries] # Create equations params=dict() for i,D in enumerate(species_D): k='species%d'%i species = models.AdvectionDiffusion(mesh, k, z=0, v_D=v_D_diffusion_only) # uncharged: z=0 model.species.append(species) params[k+'.D']=D setup_bc(model) model.setUp() # These parameters are irrelevant for this test, but still are required to # evaluate the model params.update({'T': 300., 'electrode0.voltage': 0, 'electrode1.voltage': 0, 'electrode0.workfunction': 0, 'electrode1.workfunction': 0, 'epsilon_r': 3.}) # Create initial conditions xinit = model.X.copy() for species,ic in zip(model.species,species_ic): xinit[species.idx] = ic(mesh.cells['center'] / L) # Run the simulation and return the results c = context(model, x=xinit) c.transient(params,t,dt) return c Explanation: The function below creates and solves a model. It calls supplied setup_bc to initialize boundary conditions. 
End of explanation def plot_times(c,times,shrink=[0.5,0.5]): time_formatter = ticker.EngFormatter('s') times=np.asarray(times) figsize = np.asarray(plt.rcParams['figure.figsize'])*np.asarray(times.shape[::-1])*shrink fig,axes=plt.subplots(nrows=times.shape[0],ncols=times.shape[1],sharex=True,figsize=figsize) axes=np.asarray(axes).ravel() times=times.ravel() for ax,t in zip(axes,times): ct=c.attime(t) mpl=ct.mpl(fig,ax) mpl.allspecies() ax.set_yscale('linear') ax.set_title('t = %s'%time_formatter(t)) # testing support for species in ct.model.species: testing.store(ct.output()[species.name+'.c'],rtol=1e-5) fig.tight_layout() Explanation: This function makes plots of concentrations at different times: End of explanation test_mu = [ 1,10,100] test_ic = [ lambda u: np.abs(u - 0.5) < 0.1, lambda u: np.abs(u - 0.2) < 0.1, lambda u: np.where(u > 0.999, 1e2, 0) ] Explanation: Default diffusion coefficients and initial conditions: End of explanation def setup_bc_isolated(model): pass c=run(test_mu,test_ic,setup_bc_isolated) plot_times(c,[[1e-7,1e-5],[1e-3,1e-1],[1,1e1]]) Explanation: No-flux (isolated) boundary conditions End of explanation def setup_bc_zero(model): for species in model.species: species.bc = [models.Zero('electrode0'),models.Zero('electrode1')] c=run(test_mu,test_ic,setup_bc_zero) plot_times(c,[[1e-7,1e-6],[1e-4,1e-3],[1e-2,1e-1]]) Explanation: Dirichlet boundary conditions $c(0)=0$, $c(L)=0$ End of explanation def setup_bc_periodic(model): for species in model.species: species.bc = [models.Equal(species, 'electrode0')] c=run(test_mu,test_ic,setup_bc_periodic) plot_times(c,[[1e-7,1e-5],[1e-3,1e-2],[1e-1,1]]) Explanation: Periodic boundary conditions $c(L)=c(0)$ End of explanation
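As an independent sanity check on the oedes results above, the same three boundary conditions can be reproduced with a plain explicit finite-difference scheme. This is a minimal sketch, not part of the original example: the grid, the time step, the stability bound $r = D\,\Delta t/\Delta x^2 \le 1/2$, and the mirror treatment of the zero-flux edges are my own choices.

```python
import numpy as np

def diffuse_ftcs(c, D, dx, dt, steps, bc='isolated'):
    # Explicit FTCS update: c_i += r * (c_{i-1} - 2*c_i + c_{i+1})
    r = D * dt / dx**2
    assert r <= 0.5, "explicit scheme is unstable for this time step"
    c = c.astype(float).copy()
    for _ in range(steps):
        if bc == 'periodic':
            c = c + r * (np.roll(c, 1) - 2 * c + np.roll(c, -1))
        else:
            c[1:-1] = c[1:-1] + r * (c[:-2] - 2 * c[1:-1] + c[2:])
            if bc == 'isolated':   # crude zero-flux: mirror the edge cells
                c[0], c[-1] = c[1], c[-2]
            else:                  # 'dirichlet': prescribed value c=0 at both ends
                c[0] = c[-1] = 0.0
    return c

x = np.linspace(0., 1., 101)
c0 = (np.abs(x - 0.5) < 0.1).astype(float)  # same pulse as the first entry of test_ic
print(diffuse_ftcs(c0, D=1., dx=x[1] - x[0], dt=4e-5, steps=1000).sum())
```

For small enough dt, the resulting profiles should track the oedes curves for the corresponding species at matching times.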
5,145
Given the following text description, write Python code to implement the functionality described below step by step Description: DFF and TFF (Toggle Flip-Flop) In this example we create a toggle flip-flop (TFF) from a d-flip-flop (DFF). In Magma, finite state machines can be constructed by composing combinational logic with flop-flops register primitives. Step1: Test using the python simulator. Step2: Generate Verilog Generate verilog with coreir. Step3: Here's an example of testing using fault's staged Tester class and the verilator simulator.
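Before the code, it is worth spelling out what the circuit computes: with the toggle input effectively tied high (the DFF input is just the inverted output), the TFF characteristic equation is simply

$$Q_{t+1} = \overline{Q_t},$$

so the output inverts on every rising clock edge and runs at half the clock frequency.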
Python Code: import magma as m
from mantle import DFF

class TFF(m.Circuit):
    io = m.IO(O=m.Out(m.Bit)) + m.ClockIO()
    # instantiate a DFF to hold the state of the toggle flip-flop - this needs to be done first
    dff = DFF()
    # compute the next state as the not of the old state dff.O
    io.O <= dff(~dff.O)

def tff():
    return TFF()()
Explanation: DFF and TFF (Toggle Flip-Flop) In this example we create a toggle flip-flop (TFF) from a d-flip-flop (DFF). In Magma, finite state machines can be constructed by composing combinational logic with flip-flop register primitives.
End of explanation
from fault import PythonTester

tester = PythonTester(TFF, TFF.CLK)
tester.eval()
val = tester.peek(TFF.O)
assert val == False
for i in range(10):
    val = not val
    tester.step() # toggle clock - now High
    assert val == tester.peek(TFF.O)
    tester.step() # toggle clock - now Low
    assert val == tester.peek(TFF.O)
print("Success!")
Explanation: Test using the Python simulator.
End of explanation
m.compile("build/TFF", TFF, inline=True)
%cat build/TFF.v
%cat build/TFF.json
!coreir -i build/TFF.json -p instancecount
Explanation: Generate Verilog
Generate Verilog with coreir.
End of explanation
import fault
tester = fault.Tester(TFF, TFF.CLK)
for i in range(5):
    tester.step(2)
    tester.print("TFF.O=%d\n", TFF.O)
tester.compile_and_run("verilator", disp_type='realtime')
Explanation: Here's an example of testing using fault's staged Tester class and the verilator simulator.
End of explanation
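For intuition, here is a behavior-level sketch of the same circuit in plain Python, with no magma, mantle, or fault required. The class and method names are mine, chosen for illustration only:

```python
class ToggleFlipFlop:
    # Behavioral model: the output inverts on every rising clock edge.
    def __init__(self):
        self.q = False    # the DFF state
        self.clk = False

    def tick(self, clk):
        if clk and not self.clk:  # rising edge: latch D = ~Q
            self.q = not self.q
        self.clk = bool(clk)
        return self.q

tff = ToggleFlipFlop()
print([tff.tick(clk) for clk in [0, 1, 0, 1, 0, 1]])
# -> [False, True, True, False, False, True]: one output change per clock period
```

The PythonTester loop above asserts exactly this pattern: the expected value flips once per full clock period.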
5,146
Given the following text description, write Python code to implement the functionality described below step by step Description: NBA Player Statistics Workshop Given a dataset of NBA players performance and salary in 2014, use Python to load the dataset and compute the summary statistics for the SALARY field Step2: Fetching the Data You have a couple of options of fetching the data set to begin your analysis Step3: Your turn Step4: Loading the Data Now that we have the CSV file that we're looking for, we need to be able to open the file and read it into memory. The trick is that we want to read only a single line at a time - consider really large CSV files. Python provides memory efficient iteration in the form of generators and the csv.reader module exposes one such generator, that reads the data from the CSV one row at a time. Moreover, we also want to parse our data so that we have specific access to the fields we're looking for. The csv.DictReader class will give you each row as a dictionary, where the keys are derived from the first, header line of the file. Here is a function that reads data from disk one line at a time and yields it to the user. Step5: Your turn Step6: Next step Step7: Are there different ways to print the first n rows of something? Sure! Try using break, which will stop a for loop from running. E.g. the code Step8: Summary Statistics In this section, you'll use the CSV data to write computations for mean, median, mode, minimum, and maximum. Use Python to access the values in the SALARY column. Step9: Nice work! Now... calculating the mode is a bit different. Remember about the Decorate-Sort-Undecorate pattern that we learned about in ThinkPython? That will work here! Step10: The "DSU" approach is a little inefficient. Instead of using a dictionary as our data type to solve the mode problem, we could use counter() from the Collections module. Read more about counter() and try it out here Step12: Putting the pieces together The above summary statistics can actually be computed inside of a single (and elegant!) function. Give it a try! Step13: Keep playing with the above function to get it to work more efficiently or to reduce bad data in the computation - e.g. what are all those zero salaries? Visualization Congratulations if you've made it this far! It's time for the bonus round! You've now had some summary statistics about the salaries of NBA players, but what we're really interested in is the relationship between SALARY and the rest of the fields in the data set. The PER - Player Efficiency Rating, is an aggregate score of all performance statistics; therefore if we determine the relationship of PER to SALARY, we might learn a lot about how to model NBA salaries. In order to explore this, let's create a scatter plot of SALARY to PER, where each point is an NBA player. Visualization is going to require a third party library. You probably already have matplotlib, so that might be the simplest if you're having trouble with installation. If you don't, pip install it now! Follow the documentation to create the scatter plot inline in the notebook in the following cells.
Python Code: # Imports - you'll need some of these later, but it's traditional to put them all at the beginning. import os import csv import json import urllib2 from collections import Counter from operator import itemgetter Explanation: NBA Player Statistics Workshop Given a dataset of NBA players performance and salary in 2014, use Python to load the dataset and compute the summary statistics for the SALARY field: mean median mode minimum maximum You will need to make use of the csv module to load the data and interact with it. Computations should require only simple arithmetic. (For the purposes of this exercise, attempt to use pure Python and no third party dependencies like Pandas - you can then compare and contrast the use of Pandas for this task later). Bonus: Determine the relationship of PER (Player Efficiency Rating) to Salary via a visualization of the data. NBA 2014 Players Dataset: http://bit.ly/gtnbads End of explanation def download(url, path): Downloads a URL and writes it to the specified path. The "path" is like the mailing address for the file - it tells the function where on your computer to send it! Also note the use of "with" to automatically close files - this is a good standard practice to follow. response = urllib2.urlopen(url) with open(path, 'w') as f: f.write(response.read()) response.close() Explanation: Fetching the Data You have a couple of options of fetching the data set to begin your analysis: Click on the link above and Download the file. Write a Python function that automatically downloads the data as a comma-separated value file (CSV) and writes it to disk. In either case, you'll have to be cognizant of where the CSV file lands. Here is a quick implementation of a function to download a URL at a file and write it to disk. Note the many approaches to do this as outlined here: How do I download a file over HTTP using Python?. End of explanation ## Write the Python to execute the function and download the file here: Explanation: Your turn: use the above function to download the data! End of explanation def read_csv(path): # First open the file with open(path, 'r') as f: # Create a DictReader to parse the CSV reader = csv.DictReader(f) for row in reader: # HINT: Convert SALARY column values into integers & PER column into floats. # Otherwise CSVs can turn ints into strs! You'll thank me later :D row['SALARY'] = int(row['SALARY']) row['PER'] = float(row['PER']) # Now yield each row one at a time. yield row Explanation: Loading the Data Now that we have the CSV file that we're looking for, we need to be able to open the file and read it into memory. The trick is that we want to read only a single line at a time - consider really large CSV files. Python provides memory efficient iteration in the form of generators and the csv.reader module exposes one such generator, that reads the data from the CSV one row at a time. Moreover, we also want to parse our data so that we have specific access to the fields we're looking for. The csv.DictReader class will give you each row as a dictionary, where the keys are derived from the first, header line of the file. Here is a function that reads data from disk one line at a time and yields it to the user. End of explanation ## Write the Python to execute our read_csv function. Explanation: Your turn: use the above function to open the file and print out the first row of the CSV! To do this, you'll need to do three things: First, remember where you told the download function to store your file? 
Pass that same path into read_csv: End of explanation
## Now write the Python to print the first row of the CSV here.
Explanation: Next step: The read_csv function "returns" a generator. How can we access just the first row? Remember how to access the next row of a generator?
End of explanation
## Write the Python to print *every* row of the CSV here.
Explanation: Are there different ways to print the first n rows of something? Sure! Try using break, which will stop a for loop from running. E.g. the code:
python for idx in xrange(100): if idx > 10: break
...will stop the for loop after 10 iterations.
Next, write a for loop that can access and print every row.
End of explanation
data = list(read_csv('fixtures/nba_players.csv')) # Put in your own path here.
data = sorted(data, key=itemgetter('SALARY'))

total = 0
count = 0

for row in data:
    count += 1
    total += row['SALARY']

# Total Count
print "There are %d total players." % count

# Write the Python to get the median
median =
print "The median salary is %d." % median

# Write the Python to get the minimum
minimum =
print "The minimum salary is %d." % minimum

# Write the Python to get the maximum
maximum =
print "The maximum salary is %d." % maximum

# Write the Python to get the mean
mean =
print "The mean salary is %d." % mean
Explanation: Summary Statistics In this section, you'll use the CSV data to write computations for mean, median, mode, minimum, and maximum. Use Python to access the values in the SALARY column.
End of explanation
## Write the Python to get the mode of the salaries.
Explanation: Nice work! Now... calculating the mode is a bit different. Remember the Decorate-Sort-Undecorate pattern that we learned about in ThinkPython? That will work here!
End of explanation
## Experiment with using Counter() here.
Explanation: The "DSU" approach is a little inefficient. Instead of using a dictionary as our data type to solve the mode problem, we could use Counter() from the collections module. Read more about Counter() and try it out here:
End of explanation
def statistics(path):
    Takes as input a path to `read_csv` and the field to compute the summary statistics upon.
    # Uncomment below to load the CSV into a list
    # data = list(read_csv(path))
    # Fill in the function here
    stats = {
        'maximum': data[-1]['SALARY'],
        'minimum': data[0]['SALARY'],
        'median': data[count / 2]['SALARY'], # Any potential problems here?
        'mode': freqs.most_common(2),
        'mean': total / count,
    }
    return stats
Explanation: Putting the pieces together The above summary statistics can actually be computed inside of a single (and elegant!) function. Give it a try!
End of explanation
# Insert your Python to create the visualization here
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline # Makes the plot appear inline in your iPython Notebook.

def read_data(path):
    # Pandas is an efficient way to wrangle the data quickly
    return pd.DataFrame(pd.read_csv(path))

def graph_data(path, xkey='PER', ykey='SALARY'):
    data = read_data(path)
    ## Fill this in yourself!
    plt.show()

graph_data('fixtures/nba_players.csv') # Or whatever your path is
Explanation: Keep playing with the above function to get it to work more efficiently or to reduce bad data in the computation - e.g. what are all those zero salaries? Visualization Congratulations if you've made it this far! It's time for the bonus round!
You've now had some summary statistics about the salaries of NBA players, but what we're really interested in is the relationship between SALARY and the rest of the fields in the data set. The PER - Player Efficiency Rating, is an aggregate score of all performance statistics; therefore if we determine the relationship of PER to SALARY, we might learn a lot about how to model NBA salaries. In order to explore this, let's create a scatter plot of SALARY to PER, where each point is an NBA player. Visualization is going to require a third party library. You probably already have matplotlib, so that might be the simplest if you're having trouble with installation. If you don't, pip install it now! Follow the documentation to create the scatter plot inline in the notebook in the following cells. End of explanation
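One possible completion of the bonus exercise, offered as a sketch rather than the workshop's official solution; it assumes the DataFrame columns are named PER and SALARY exactly as in the dataset, and it reuses the read_data helper defined above:

```python
def graph_data(path, xkey='PER', ykey='SALARY'):
    data = read_data(path)
    # Each point is one NBA player; an upward drift suggests PER helps predict pay.
    plt.scatter(data[xkey], data[ykey], alpha=0.5)
    plt.xlabel(xkey)
    plt.ylabel(ykey)
    plt.title('%s vs. %s for 2014 NBA players' % (ykey, xkey))
    plt.show()

graph_data('fixtures/nba_players.csv')  # or wherever your CSV lives
```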
5,147
Given the following text description, write Python code to implement the functionality described below step by step Description: AI Explanations Step1: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Make sure to choose a region where Cloud AI Platform services are available. You can not use a Multi-Regional Storage bucket for training with AI Platform. Step2: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial. Step3: Authenticate your GCP account If you are using AI Platform Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Step4: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you submit a training job using the Cloud SDK, you upload a Python package containing your training code to a Cloud Storage bucket. AI Platform runs the code from this package. In this tutorial, AI Platform also saves the trained model that results from your job in the same bucket. You can then create an AI Platform model version based on this output in order to serve online predictions. Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets. Step5: Only if your bucket doesn't already exist Step6: Import libraries Import the libraries for this tutorial. Step7: Download and preprocess the data This section shows how to download the flower images, use the tf.data API to create a data input pipeline, and split the data into training and validation sets. Step8: The following cell contains some image visualization utility functions. This code isn't essential to training or deploying the model. If you're running this from Colab the cell is hidden. You can look at the code by right clicking on the cell --> "Form" --> "Show form" if you'd like to see it. Step9: Read images and labels from TFRecords In this dataset the images are stored as TFRecords. Step10: Use the visualization utility function provided earlier to preview flower images with their labels. Step11: Create training and validation datasets Step12: Build, train, and evaluate the model This section shows how to build, train, evaluate, and get local predictions from a model by using the TF.Keras Sequential API. Step13: Train the model Train this on a GPU if you have access (in Colab, from the menu select Runtime --> Change runtime type). On a CPU, it'll take ~30 minutes to run training. On a GPU, it takes ~5 minutes. Step14: Visualize local predictions Get predictions on your local model and visualize the images with their predicted labels, using the visualization utility function provided earlier. Step15: Export the model as a TF 2.x SavedModel When using TensorFlow 2.x, you export the model as a SavedModel and load it into Cloud Storage. During export, you need to define a serving function to convert data to the format your model expects. If you send encoded data to AI Platform, your serving function ensures that the data is decoded on the model server before it is passed as input to your model. Serving function for image data Sending base 64 encoded image data to AI Platform is more space efficient. 
Since this deployed model expects input data as raw bytes, you need to ensure that the b64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model. To resolve this, define a serving function (serving_fn) and attach it to the model as a preprocessing step. Add a @tf.function decorator so the serving function is part of the model's graph (instead of upstream on a CPU). When you send a prediction or explanation request, the request goes to the serving function (serving_fn), which preprocesses the b64 bytes into raw numpy bytes (preprocess_fn). At this point, the data can be passed to the model (m_call). Step16: Get input and output signatures Use TensorFlow's saved_model_cli to inspect the model's SignatureDef. You'll use this information when you deploy your model to AI Explanations in the next section. Step17: You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer. You need the signatures for the following layers Step18: Generate explanation metadata In order to deploy this model to AI Explanations, you need to create an explanation_metadata.json file with information about your model inputs, outputs, and baseline. You can use the Explainable AI SDK to generate most of the fields. For image models, using [0,1] as your input baseline represents black and white images. This example uses np.random to generate the baseline because the training images contain a lot of black and white (i.e. daisy petals). Note Step19: Deploy model to AI Explanations This section shows how to use gcloud to deploy the model to AI Explanations, using two different explanation methods for image models. Create the model Step20: Create explainable model versions For image models, we offer two choices for explanation methods Step21: Deploy an XRAI model Step22: Get predictions and explanations This section shows how to prepare test images to send to your deployed model, and how to send a batch prediction request to AI Explanations. Get and prepare test images To prepare the test images Step23: Format your explanation request Prepare a batch of instances. Step24: Send the explanations request and visualize If you deployed both an IG and an XRAI model, you can request explanations for both models and compare the results. If you only deployed one model above, run only the cell for that explanation method. You can use the Explainable AI SDK to send explanation requests to your deployed model and visualize the explanations. Step25: Check explanations and baselines To better make sense of your feature attributions, you can compare them with your model's baseline. For image models, the baseline_score returned by AI Explanations is the score your model would give an image input with the baseline you specified. The baseline is different for each class in the model. Every time your model predicts tulip as the top class, you'll see the same baseline score. Earlier, you used a baseline image of np.random randomly generated values. If you'd like the baseline for your model to be solid black and white images instead, pass [0,1] as the value to input_baselines in your explanation_metadata.json file above. If the baseline_score is very close to the value of example_score, the highlighted pixels may not be meaningful. Calculate the difference between baseline_score and example_score for the three test images above. 
Note that the score values for classification models are probabilities Step26: Explain the baseline image Another way to check your baseline choice is to view explanations for this model's baseline image Step27: Send the explanation request for the baseline image. (To check a baseline image for XRAI, change IG_VERSION to XRAI_VERSION below.) Step28: Visualize the explanation for your random baseline image, highlighting the pixels that contributed to the prediction Step29: The difference between your model's predicted score and the baseline score for this image should be close to 0. Run the following cell to confirm. If there is a difference between these two values, try increasing the number of integral steps used when you deploy your model. Step30: Cleaning up To clean up all GCP resources used in this project, you can delete the GCP project you used for the tutorial. Alternatively, you can clean up individual resources by running the following commands
Python Code: PROJECT_ID = "[your-project-id]" #@param {type:"string"} if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]": # Get your GCP project id from gcloud shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID:", PROJECT_ID) !gcloud config set project $PROJECT_ID Explanation: AI Explanations: Deploying an image model <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/main/notebooks/samples/explanations/tf2/ai-explanations-image.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/tree/main/notebooks/samples/explanations/tf2/ai-explanations-image.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> </table> Overview This tutorial shows how to train a Keras classification model on image data and deploy it to the AI Platform Explanations service to get feature attributions on your deployed model. If you've already got a trained model and want to deploy it to AI Explanations, skip to the Export the model as a TF 2 SavedModel section. Dataset The dataset used for this tutorial is the flowers dataset from TensorFlow Datasets. Objective The goal of this tutorial is to train a model on a simple image dataset (flower classification) to understand how you can use AI Explanations with image models. For image models, AI Explanations returns an image with the pixels highlighted that signaled your model's prediction the most. This tutorial focuses more on deploying the model to AI Platform with Explanations than on the design of the model itself. Costs This tutorial uses billable components of Google Cloud Platform (GCP): AI Platform for: Prediction Explanation: AI Explanations comes at no extra charge to prediction prices. However, explanation requests take longer to process than normal predictions, so heavy usage of Explanations along with auto-scaling may result in more nodes being started and thus more charges Cloud Storage for: Storing model files for deploying to Cloud AI Platform Learn about AI Platform pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Before you begin Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime --> Change runtime type This tutorial assumes you are running the notebook either in Colab or Cloud AI Platform Notebooks. Set up your GCP project The following steps are required, regardless of your notebook environment. Select or create a GCP project. Make sure that billing is enabled for your project. Enable the AI Platform Training & Prediction and Compute Engine APIs. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands. Project ID If you don't know your project ID. You might able to get your project ID using gcloud command, by executing the second code cell below. 
End of explanation REGION = 'us-central1' #@param {type: "string"} Explanation: Region You can also change the REGION variable, which is used for operations throughout the rest of this notebook. Make sure to choose a region where Cloud AI Platform services are available. You can not use a Multi-Regional Storage bucket for training with AI Platform. End of explanation from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") Explanation: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial. End of explanation import sys import os # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your Google Cloud account. This provides access # to your Cloud Storage bucket and lets you submit training jobs and prediction # requests. # If on AI Platform, then don't execute this code if not os.path.exists('/opt/deeplearning/metadata/env_version'): if 'google.colab' in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this tutorial in a notebook locally, replace the string # below with the path to your service account key and run this cell to # authenticate your Google Cloud account. else: %env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json Explanation: Authenticate your GCP account If you are using AI Platform Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. End of explanation BUCKET_NAME = "[your-bucket-name]" #@param {type:"string"} if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]": BUCKET_NAME = PROJECT_ID + "_xai_flowers_" + TIMESTAMP Explanation: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you submit a training job using the Cloud SDK, you upload a Python package containing your training code to a Cloud Storage bucket. AI Platform runs the code from this package. In this tutorial, AI Platform also saves the trained model that results from your job in the same bucket. You can then create an AI Platform model version based on this output in order to serve online predictions. Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets. End of explanation ! gsutil mb -l $REGION gs://$BUCKET_NAME Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket. End of explanation import io import random import numpy as np import PIL import tensorflow as tf from matplotlib import pyplot as plt from base64 import b64encode # should be >= 2.1 print("Tensorflow version " + tf.__version__) if tf.__version__ < "2.1": raise Exception("TF 2.1 or greater is required") AUTO = tf.data.experimental.AUTOTUNE print("AUTO", AUTO) !pip install explainable-ai-sdk import explainable_ai_sdk Explanation: Import libraries Import the libraries for this tutorial. 
End of explanation GCS_PATTERN = 'gs://flowers-public/tfrecords-jpeg-192x192-2/*.tfrec' IMAGE_SIZE = [192, 192] BATCH_SIZE = 32 VALIDATION_SPLIT = 0.19 CLASSES = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips'] # do not change, maps to the labels in the data (folder names) # Split data files between training and validation filenames = tf.io.gfile.glob(GCS_PATTERN) random.shuffle(filenames) split = int(len(filenames) * VALIDATION_SPLIT) training_filenames = filenames[split:] validation_filenames = filenames[:split] print("Pattern matches {} data files. Splitting dataset into {} training files and {} validation files".format(len(filenames), len(training_filenames), len(validation_filenames))) validation_steps = int(3670 // len(filenames) * len(validation_filenames)) // BATCH_SIZE steps_per_epoch = int(3670 // len(filenames) * len(training_filenames)) // BATCH_SIZE print("With a batch size of {}, there will be {} batches per training epoch and {} batch(es) per validation run.".format(BATCH_SIZE, steps_per_epoch, validation_steps)) Explanation: Download and preprocess the data This section shows how to download the flower images, use the tf.data API to create a data input pipeline, and split the data into training and validation sets. End of explanation # @title display utilities [RUN ME] def dataset_to_numpy_util(dataset, N): dataset = dataset.batch(N) if tf.executing_eagerly(): # In eager mode, iterate in the Dataset directly. for images, labels in dataset: numpy_images = images.numpy() numpy_labels = labels.numpy() break else: # In non-eager mode, must get the TF note that # yields the nextitem and run it in a tf.Session. get_next_item = dataset.make_one_shot_iterator().get_next() with tf.Session() as ses: numpy_images, numpy_labels = ses.run(get_next_item) return numpy_images, numpy_labels def title_from_label_and_target(label, correct_label): label = np.argmax(label, axis=-1) # one-hot to class number correct_label = np.argmax(correct_label, axis=-1) # one-hot to class number correct = (label == correct_label) return "{} [{}{}{}]".format(CLASSES[label], str(correct), ', shoud be ' if not correct else '', CLASSES[correct_label] if not correct else ''), correct def display_one_flower(image, title, subplot, red=False): plt.subplot(subplot) plt.axis('off') plt.imshow(image) plt.title(title, fontsize=16, color='red' if red else 'black') return subplot + 1 def display_9_images_from_dataset(dataset): subplot = 331 plt.figure(figsize=(13, 13)) images, labels = dataset_to_numpy_util(dataset, 9) for i, image in enumerate(images): title = CLASSES[np.argmax(labels[i], axis=-1)] subplot = display_one_flower(image, title, subplot) if i >= 8: break plt.tight_layout() plt.subplots_adjust(wspace=0.1, hspace=0.1) plt.show() def display_9_images_with_predictions(images, predictions, labels): subplot = 331 plt.figure(figsize=(13, 13)) for i, image in enumerate(images): title, correct = title_from_label_and_target(predictions[i], labels[i]) subplot = display_one_flower(image, title, subplot, not correct) if i >= 8: break plt.tight_layout() plt.subplots_adjust(wspace=0.1, hspace=0.1) plt.show() def display_training_curves(training, validation, title, subplot): if subplot % 10 == 1: # set up the subplots on the first call plt.subplots(figsize=(10, 10), facecolor='#F0F0F0') plt.tight_layout() ax = plt.subplot(subplot) ax.set_facecolor('#F8F8F8') ax.plot(training) ax.plot(validation) ax.set_title('model ' + title) ax.set_ylabel(title) ax.set_xlabel('epoch') ax.legend(['train', 'valid.']) Explanation: The 
following cell contains some image visualization utility functions. This code isn't essential to training or deploying the model. If you're running this from Colab the cell is hidden. You can look at the code by right clicking on the cell --> "Form" --> "Show form" if you'd like to see it. End of explanation def read_tfrecord(example): features = { "image": tf.io.FixedLenFeature([], tf.string), # tf.string means bytestring "class": tf.io.FixedLenFeature([], tf.int64), # shape [] means scalar "one_hot_class": tf.io.VarLenFeature(tf.float32), } example = tf.io.parse_single_example(example, features) image = tf.image.decode_jpeg(example['image'], channels=3) image = tf.cast(image, tf.float32) / 255.0 # convert image to floats in [0, 1] range image = tf.reshape(image, [*IMAGE_SIZE, 3]) # explicit size will be needed for TPU one_hot_class = tf.sparse.to_dense(example['one_hot_class']) one_hot_class = tf.reshape(one_hot_class, [5]) return image, one_hot_class def load_dataset(filenames): # Read data from TFRecords dataset = tf.data.Dataset.from_tensor_slices(filenames) dataset = dataset.interleave(tf.data.TFRecordDataset, cycle_length=16, num_parallel_calls=AUTO) # faster dataset = dataset.map(read_tfrecord, num_parallel_calls=AUTO) return dataset Explanation: Read images and labels from TFRecords In this dataset the images are stored as TFRecords. End of explanation display_9_images_from_dataset(load_dataset(training_filenames)) Explanation: Use the visualization utility function provided earlier to preview flower images with their labels. End of explanation def get_batched_dataset(filenames): dataset = load_dataset(filenames) dataset = dataset.cache() # This dataset fits in RAM dataset = dataset.repeat() dataset = dataset.shuffle(2048) dataset = dataset.batch(BATCH_SIZE) dataset = dataset.prefetch(AUTO) # prefetch next batch while training (autotune prefetch buffer size) # For proper ordering of map/batch/repeat/prefetch, see Dataset performance guide: https://www.tensorflow.org/guide/performance/datasets return dataset def get_training_dataset(): return get_batched_dataset(training_filenames) def get_validation_dataset(): return get_batched_dataset(validation_filenames) some_flowers, some_labels = dataset_to_numpy_util(load_dataset(validation_filenames), 8 * 20) Explanation: Create training and validation datasets End of explanation from tensorflow.keras import Sequential from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, GlobalAveragePooling2D, BatchNormalization from tensorflow.keras.optimizers import Adam model = Sequential([ # Stem Conv2D(kernel_size=3, filters=16, padding='same', activation='relu', input_shape=[*IMAGE_SIZE, 3]), BatchNormalization(), Conv2D(kernel_size=3, filters=32, padding='same', activation='relu'), BatchNormalization(), MaxPooling2D(pool_size=2), # Conv Group Conv2D(kernel_size=3, filters=64, padding='same', activation='relu'), BatchNormalization(), MaxPooling2D(pool_size=2), Conv2D(kernel_size=3, filters=96, padding='same', activation='relu'), BatchNormalization(), MaxPooling2D(pool_size=2), # Conv Group Conv2D(kernel_size=3, filters=128, padding='same', activation='relu'), BatchNormalization(), MaxPooling2D(pool_size=2), Conv2D(kernel_size=3, filters=128, padding='same', activation='relu'), BatchNormalization(), # 1x1 Reduction Conv2D(kernel_size=1, filters=32, padding='same', activation='relu'), BatchNormalization(), # Classifier GlobalAveragePooling2D(), Dense(5, activation='softmax') ]) model.compile( optimizer=Adam(lr=0.005, decay=0.98), 
loss='categorical_crossentropy', metrics=['accuracy']) model.summary() Explanation: Build, train, and evaluate the model This section shows how to build, train, evaluate, and get local predictions from a model by using the TF.Keras Sequential API. End of explanation EPOCHS = 20 # Train for 60 epochs for higher accuracy, 20 should get you ~75% history = model.fit(get_training_dataset(), steps_per_epoch=steps_per_epoch, epochs=EPOCHS, validation_data=get_validation_dataset(), validation_steps=validation_steps) Explanation: Train the model Train this on a GPU if you have access (in Colab, from the menu select Runtime --> Change runtime type). On a CPU, it'll take ~30 minutes to run training. On a GPU, it takes ~5 minutes. End of explanation # Randomize the input so that you can execute multiple times to change results permutation = np.random.permutation(8 * 20) some_flowers, some_labels = (some_flowers[permutation], some_labels[permutation]) predictions = model.predict(some_flowers, batch_size=16) evaluations = model.evaluate(some_flowers, some_labels, batch_size=16) print(np.array(CLASSES)[np.argmax(predictions, axis=-1)].tolist()) print('[val_loss, val_acc]', evaluations) display_9_images_with_predictions(some_flowers, predictions, some_labels) Explanation: Visualize local predictions Get predictions on your local model and visualize the images with their predicted labels, using the visualization utility function provided earlier. End of explanation export_path = 'gs://' + BUCKET_NAME + '/explanations/mymodel' def _preprocess(bytes_input): decoded = tf.io.decode_jpeg(bytes_input, channels=3) decoded = tf.image.convert_image_dtype(decoded, tf.float32) resized = tf.image.resize(decoded, size=(192, 192)) return resized @tf.function(input_signature=[tf.TensorSpec([None], tf.string)]) def preprocess_fn(bytes_inputs): with tf.device("cpu:0"): decoded_images = tf.map_fn(_preprocess, bytes_inputs, dtype=tf.float32) return {"numpy_inputs": decoded_images} # User needs to make sure the key matches model's input m_call = tf.function(model.call).get_concrete_function([tf.TensorSpec(shape=[None, 192, 192, 3], dtype=tf.float32, name="numpy_inputs")]) @tf.function(input_signature=[tf.TensorSpec([None], tf.string)]) def serving_fn(bytes_inputs): images = preprocess_fn(bytes_inputs) prob = m_call(**images) return prob tf.saved_model.save(model, export_path, signatures={ 'serving_default': serving_fn, 'xai_preprocess': preprocess_fn, # Required for XAI 'xai_model': m_call # Required for XAI }) Explanation: Export the model as a TF 2.x SavedModel When using TensorFlow 2.x, you export the model as a SavedModel and load it into Cloud Storage. During export, you need to define a serving function to convert data to the format your model expects. If you send encoded data to AI Platform, your serving function ensures that the data is decoded on the model server before it is passed as input to your model. Serving function for image data Sending base 64 encoded image data to AI Platform is more space efficient. Since this deployed model expects input data as raw bytes, you need to ensure that the b64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model. To resolve this, define a serving function (serving_fn) and attach it to the model as a preprocessing step. Add a @tf.function decorator so the serving function is part of the model's graph (instead of upstream on a CPU). 
When you send a prediction or explanation request, the request goes to the serving function (serving_fn), which preprocesses the b64 bytes into raw numpy bytes (preprocess_fn). At this point, the data can be passed to the model (m_call). End of explanation ! saved_model_cli show --dir $export_path --all Explanation: Get input and output signatures Use TensorFlow's saved_model_cli to inspect the model's SignatureDef. You'll use this information when you deploy your model to AI Explanations in the next section. End of explanation loaded = tf.saved_model.load(export_path) input_name = list(loaded.signatures['xai_model'].structured_input_signature[1].keys())[0] print(input_name) output_name = list(loaded.signatures['xai_model'].structured_outputs.keys())[0] print(output_name) preprocess_name = list(loaded.signatures['xai_preprocess'].structured_input_signature[1].keys())[0] print(preprocess_name) Explanation: You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer. You need the signatures for the following layers: Serving function input layer Model input layer Model output layer End of explanation from explainable_ai_sdk.metadata.tf.v2 import SavedModelMetadataBuilder # We want to explain 'xai_model' signature. builder = SavedModelMetadataBuilder(export_path, signature_name='xai_model') random_baseline = np.random.rand(192, 192, 3) builder.set_image_metadata( 'numpy_inputs', input_baselines=[random_baseline.tolist()]) builder.save_metadata(export_path) Explanation: Generate explanation metadata In order to deploy this model to AI Explanations, you need to create an explanation_metadata.json file with information about your model inputs, outputs, and baseline. You can use the Explainable AI SDK to generate most of the fields. For image models, using [0,1] as your input baseline represents black and white images. This example uses np.random to generate the baseline because the training images contain a lot of black and white (i.e. daisy petals). Note: for the explanation request, use the model's signature for the input and output tensors. Do not use the serving function signature. End of explanation import datetime MODEL = 'flowers' + TIMESTAMP print(MODEL) # Create the model if it doesn't exist yet (you only need to run this once) ! gcloud ai-platform models create $MODEL --enable-logging --region=$REGION Explanation: Deploy model to AI Explanations This section shows how to use gcloud to deploy the model to AI Explanations, using two different explanation methods for image models. Create the model End of explanation # Each time you create a version the name should be unique IG_VERSION = 'v_ig' ! gcloud beta ai-platform versions create $IG_VERSION --region=$REGION \ --model $MODEL \ --origin $export_path \ --runtime-version 2.2 \ --framework TENSORFLOW \ --python-version 3.7 \ --machine-type n1-standard-4 \ --explanation-method integrated-gradients \ --num-integral-steps 25 # Make sure the IG model deployed correctly. State should be `READY` in the following log ! gcloud ai-platform versions describe $IG_VERSION --model $MODEL Explanation: Create explainable model versions For image models, we offer two choices for explanation methods: * Integrated Gradients (IG) * XRAI You can find more info on each method in the documentation. You can deploy a version with both so that you can compare results. 
If you already know which explanation method you'd like to use, you can deploy one version and skip the code blocks for the other method. Creating the version will take ~5-10 minutes. Note that your first deploy may take longer. Deploy an Integrated Gradients model End of explanation # Each time you create a version the name should be unique XRAI_VERSION = 'v_xrai' # Create the XRAI version with gcloud ! gcloud beta ai-platform versions create $XRAI_VERSION --region=$REGION \ --model $MODEL \ --origin $export_path \ --runtime-version 2.2 \ --framework TENSORFLOW \ --python-version 3.7 \ --machine-type n1-standard-4 \ --explanation-method xrai \ --num-integral-steps 25 # Make sure the XRAI model deployed correctly. State should be `READY` in the following log ! gcloud ai-platform versions describe $XRAI_VERSION --model $MODEL Explanation: Deploy an XRAI model End of explanation # Download test flowers from public bucket ! mkdir flowers ! gsutil -m cp gs://flowers_model/test_flowers/* ./flowers # Resize the images to what your model is expecting (192,192) test_filenames = [] for i in os.listdir('flowers'): img_path = 'flowers/' + i with PIL.Image.open(img_path) as ex_img: resize_img = ex_img.resize([192, 192]) resize_img.save(img_path) test_filenames.append(img_path) Explanation: Get predictions and explanations This section shows how to prepare test images to send to your deployed model, and how to send a batch prediction request to AI Explanations. Get and prepare test images To prepare the test images: Download a small sample of images from the flowers dataset -- just enough for a batch prediction. Resize the images to match the input shape (192, 192) of the model. Save the resized images back to your bucket. End of explanation # Prepare your images to send to your Cloud model instances = [] for image_path in test_filenames: img_bytes = tf.io.read_file(image_path) b64str = b64encode(img_bytes.numpy()).decode('utf-8') instances.append({preprocess_name: {'b64': b64str}}) Explanation: Format your explanation request Prepare a batch of instances. End of explanation # IG EXPLANATIONS remote_ig_model = explainable_ai_sdk.load_model_from_ai_platform(PROJECT_ID, MODEL, IG_VERSION) ig_response = remote_ig_model.explain(instances) for response in ig_response: response.visualize_attributions() # XRAI EXPLANATIONS remote_xrai_model = explainable_ai_sdk.load_model_from_ai_platform(PROJECT_ID, MODEL, XRAI_VERSION) xrai_response = remote_xrai_model.explain(instances) for response in xrai_response: response.visualize_attributions() Explanation: Send the explanations request and visualize If you deployed both an IG and an XRAI model, you can request explanations for both models and compare the results. If you only deployed one model above, run only the cell for that explanation method. You can use the Explainable AI SDK to send explanation requests to your deployed model and visualize the explanations. End of explanation for i, response in enumerate(ig_response): attr = response.get_attribution() baseline_score = attr.baseline_score predicted_score = attr.example_score print('Baseline score: ', baseline_score) print('Predicted score: ', predicted_score) print('Predicted - Baseline: ', predicted_score - baseline_score, '\n') Explanation: Check explanations and baselines To better make sense of your feature attributions, you can compare them with your model's baseline. 
For image models, the baseline_score returned by AI Explanations is the score your model would give an image input with the baseline you specified. The baseline is different for each class in the model. Every time your model predicts tulip as the top class, you'll see the same baseline score. Earlier, you used a baseline image of randomly generated values (via np.random). If you'd like the baseline for your model to be solid black and white images instead, pass [0,1] as the value to input_baselines in your explanation_metadata.json file above. If the baseline_score is very close to the value of example_score, the highlighted pixels may not be meaningful. Calculate the difference between baseline_score and example_score for the three test images above. Note that the score values for classification models are probabilities: the confidence your model has in its predicted class. A score of 0.90 for tulip means your model has classified the image as a tulip with 90% confidence. The code below checks baselines for the IG model. To inspect your XRAI model, swap out the ig_response and IG_VERSION variables below. End of explanation # Convert your baseline from above to a base64 string rand_test_img = PIL.Image.fromarray((random_baseline * 255).astype('uint8')) buffer = io.BytesIO() rand_test_img.save(buffer, format="PNG") new_image_string = b64encode(np.asarray(buffer.getvalue())).decode("utf-8") # Preview it plt.imshow(rand_test_img) sanity_check_img = {preprocess_name: {'b64': new_image_string}} Explanation: Explain the baseline image Another way to check your baseline choice is to view explanations for this model's baseline image: an image array of randomly generated values using np.random. First, convert the same np.random baseline array generated earlier to a base64 string and preview it. This encodes the random noise as if it's a PNG image. Additionally, you must convert the byte buffer to a numpy array, because this is the format the underlying model expects for input when you send the explain request. End of explanation # Sanity-check the explanations for the baseline image sanity_check_response = remote_ig_model.explain([sanity_check_img]) Explanation: Send the explanation request for the baseline image. (To check a baseline image for XRAI, use remote_xrai_model in place of remote_ig_model below.) End of explanation sanity_check_response[0].visualize_attributions() Explanation: Visualize the explanation for your random baseline image, highlighting the pixels that contributed to the prediction. End of explanation attr = sanity_check_response[0].get_attribution() baseline_score = attr.baseline_score example_score = attr.example_score print(abs(baseline_score - example_score)) Explanation: The difference between your model's predicted score and the baseline score for this image should be close to 0. Run the following cell to confirm. If there is a difference between these two values, try increasing the number of integral steps used when you deploy your model. End of explanation # Delete model version resource ! gcloud ai-platform versions delete $IG_VERSION --quiet --model $MODEL ! gcloud ai-platform versions delete $XRAI_VERSION --quiet --model $MODEL # Delete model resource ! gcloud ai-platform models delete $MODEL --quiet # Delete Cloud Storage objects that were created ! gsutil -m rm -r gs://$BUCKET_NAME Explanation: Cleaning up To clean up all GCP resources used in this project, you can delete the GCP project you used for the tutorial.
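If you go that route, note that deleting the project is irreversible and removes every resource inside it. A one-line sketch, assuming PROJECT_ID is the variable set earlier in this notebook:

! gcloud projects delete $PROJECT_ID --quiet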
Alternatively, you can clean up individual resources by running the following commands: End of explanation
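After the deletions finish, you can double-check that nothing was left behind. A quick sanity check, assuming the same $REGION and $BUCKET_NAME variables from above; the model list should no longer include your model, and the bucket listing should come back empty or fail:

! gcloud ai-platform models list --region=$REGION
! gsutil ls gs://$BUCKET_NAME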
5,148
Given the following text description, write Python code to implement the functionality described below step by step Description: apsis on the BRML cluster Generally, apsis consists of a server, whose task it is to generate new candidates and receive updates, and several worker processes, who evaluate the actual machine learning algorithm and update the server. Right now, it's best if you start the server on your own computer, and the worker processes as jobs on the cluster. To start with, you need to install apsis and one requirement. To do so, first clone the apsis repo. git clone https Step1: This is the Connection object, which we'll use to interface with the server. I've used PC-HIWI6 Step2: Not surprisingly, there still aren't any experiments. Time to change that; let's build a simple experiment. We need to define several parameters for that Step3: Now, parameter definitions is interesting. It is a dictionary with string keys (the parameter names) and a dictionary defining the parameter. The latter dictionary contains the type field (defining the type of parameter definition). The other entries are the kwargs-like field to initialize the parameter definitions. For example, let's say we have two parameters. x is a numeric parameter between 0 and 10, and class is one of "A", "B" or "C". This, we define like this Step4: We'll ignore optimizer_params for now. Usually, you could use it to set the number of samples initially evaluated via RandomSearch instead of BayOpt, or the optimizer used for the acquisition function, or the acquisition function etc. Instead, we'll start with the initialization Step5: The experiment id is important for specifiying the experiment which you want to update, from which you want to get results etc. It can be set in init_experiment, but in doing so you have to be extremely careful not to use one already in use. If not specified, it's a newly generated uuid4 hex, and is guarenteed not to occur multiple times. Now, we had looked at all available experiment IDs before (when no experiment had been initialized). Let's do it again now. Step6: As you can see, the experiment now exists. Are there candidates already evaluated? Of course now, which the following can show us Step7: This function shows us three lists of candidates (currently empty). finished are all candidates that have been evaluated and are, well, finished. pending are candidates which have been generated, have possibly begun evaluating and then been paused. working are candidates currently in progress. How do we get candidates? Simple, via the get_next_candidate function Step8: A candidate is nothing but a dictionary with the following fields Step9: And let's extract the parameters. Depending on your evaluation function, you can also just use the param entry dictionary directly (for example for sklearn functions). Step10: Now, we'll just update the candidate with the result, and update the server Step11: And let's look at the candidates again Step12: Yay, it worked! And that's basically it. Every worker only has to use a few of the lines above (initializing the connection, getting the next candidate, evaluating and update).
Python Code: from apsis_client.apsis_connection import Connection conn = Connection(server_address="http://localhost:5000") Explanation: apsis on the BRML cluster Generally, apsis consists of a server, whose task it is to generate new candidates and receive updates, and several worker processes, who evaluate the actual machine learning algorithm and update the server. Right now, it's best if you start the server on your own computer, and the worker processes as jobs on the cluster. To start with, you need to install apsis and one requirement. To do so, first clone the apsis repo. git clone https://github.com/FrederikDiehl/apsis.git And add it to the python path (or call sys.path.append(YOUR_PATH) every time you need it). Additionally, you'll need a newer requests version; locally at least. pip install --upgrade --user requests Now, change to the cloned apsis directory, and change to the brml_dev branch git checkout brml_dev I'll keep the current mostly-stable version with some hacks for brml there. You can then either start the server in a python shell, or with the REST_start_script in code/webservice. In the python shell (don't do this here, because it blocks the shell), do from apsis.webservice.REST_start_script import start_rest start_rest(port=5000) Or whichever port you want to use. You can do the rest here, now. But, first of all, try to access HOSTNAME:5000 via browser. You should see an overview page. Congratulations; that means the server is working. Now let's look at the experiments page. The site to access (also via browser) is simply HOSTNAME:5000/experiments; the result should look like this: { "result": [] } This means the result of our request (getting all experiment ids) was successful, but we have no experiments started. Let's change that! End of explanation conn.get_all_experiment_ids() Explanation: This is the Connection object, which we'll use to interface with the server. I've used PC-HIWI6:5116 as my hostname (and yes, the http is important); yours will vary. We can do the same we've done before, and look for experiment ids. End of explanation name = "BraninHoo" optimizer = "BayOpt" minimization = True Explanation: Not surprisingly, there still aren't any experiments. Time to change that; let's build a simple experiment. We need to define several parameters for that: * name: The human-readable name of the experiment. * optimizer: The string defining the optimizer; can be either RandomSearch or BayOpt * param_defs: The parameter definition dictionary, we'll come back to that in a bit. * optimizer_arguments: The parameters for how the optimizer is supposed to work. * minimization: Bool stating whether the problem is one of minimization or maximization. Let's begin defining them. End of explanation param_defs = { "x": { "type": "MinMaxNumericParamDef", "lower_bound": -5, "upper_bound": 10 }, "y": { "type": "MinMaxNumericParamDef", "lower_bound": 0, "upper_bound": 15}, } Explanation: Now, parameter definitions is interesting. It is a dictionary with string keys (the parameter names) and a dictionary defining the parameter. The latter dictionary contains the type field (defining the type of parameter definition). The other entries are the kwargs-like field to initialize the parameter definitions. For example, let's say we have two parameters. x is a numeric parameter between 0 and 10, and class is one of "A", "B" or "C".
This, we define like this: param_defs = { "x": { "type": "MinMaxNumericParamDef", "lower_bound": 0, "upper_bound": 10 }, "class": { "type": "NominalParamDef", "values": ["A", "B", "C"] } } And that's it! For our example, we'll use the BraninHoo function, so we need two parameters, called x and y (or, sometimes, called x_0 and x_1, but that's ugly to type). x is between -5 and 10, y between 0 and 15. End of explanation exp_id = conn.init_experiment(name, optimizer, param_defs, minimization=minimization) print(exp_id) Explanation: We'll ignore optimizer_params for now. Usually, you could use it to set the number of samples initially evaluated via RandomSearch instead of BayOpt, or the optimizer used for the acquisition function, or the acquisition function etc. Instead, we'll start with the initialization: End of explanation conn.get_all_experiment_ids() Explanation: The experiment id is important for specifying the experiment which you want to update, from which you want to get results etc. It can be set in init_experiment, but in doing so you have to be extremely careful not to use one already in use. If not specified, it's a newly generated uuid4 hex, and is guaranteed not to occur multiple times. Now, we had looked at all available experiment IDs before (when no experiment had been initialized). Let's do it again now. End of explanation conn.get_all_candidates(exp_id) Explanation: As you can see, the experiment now exists. Are there candidates already evaluated? Of course not, which the following can show us: End of explanation cand = conn.get_next_candidate(exp_id) print(cand) Explanation: This function shows us three lists of candidates (currently empty). finished are all candidates that have been evaluated and are, well, finished. pending are candidates which have been generated, have possibly begun evaluating and then been paused. working are candidates currently in progress. How do we get candidates? Simple, via the get_next_candidate function: End of explanation import math def branin_func(x, y, a=1, b=5.1/(4*math.pi**2), c=5/math.pi, r=6, s=10, t=1/(8*math.pi)): # see http://www.sfu.ca/~ssurjano/branin.html. result = a*(y-b*x**2+c*x-r)**2 + s*(1-t)*math.cos(x)+s return result Explanation: A candidate is nothing but a dictionary with the following fields: * cost: The cost of evaluating this candidate. It is currently unused, but can be used for statistics or - later - for Expected Improvement Per Second. * params: The parameter dictionary. This contains one entry for each parameter, with each value being the parameter value for this candidate. * id: The id of the candidate. Not really important for you. * worker_information: This can be used to specify continuation information, for example. It will never be changed by apsis. * result: The interesting field. The result of your evaluation. Now it's time for your work: evaluating the parameters. Here, let's use the BraninHoo function. End of explanation x = cand["params"]["x"] y = cand["params"]["y"] result = branin_func(x, y) print(result) Explanation: And let's extract the parameters. Depending on your evaluation function, you can also just use the param entry dictionary directly (for example for sklearn functions).
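For instance, a purely hypothetical sketch (not part of this walkthrough): if an experiment's parameters were named after sklearn keyword arguments, say C and gamma for a support vector machine, the params dictionary could be unpacked straight into the estimator:

from sklearn.svm import SVC

clf = SVC(**cand["params"])   # hypothetical: assumes the parameter names match SVC's kwargs
clf.fit(X_train, y_train)     # X_train/y_train are placeholders for your own data
cand["result"] = clf.score(X_test, y_test)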
End of explanation cand["result"] = result conn.update(exp_id, cand, status="finished") Explanation: Now, we'll just update the candidate with the result, and update the server: End of explanation conn.get_all_candidates(exp_id) Explanation: And let's look at the candidates again: End of explanation def eval_one_cand(): cand = conn.get_next_candidate(exp_id) x = cand["params"]["x"] y = cand["params"]["y"] result = branin_func(x, y) cand["result"] = result conn.update(exp_id, cand, status="finished") for i in range(20): eval_one_cand() Explanation: Yay, it worked! And that's basically it. Every worker only has to use a few of the lines above (initializing the connection, getting the next candidate, evaluating, and updating). End of explanation
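Putting it all together, a minimal standalone worker script, sketched under the assumption that the server address and experiment id are handed to the cluster job, and that branin_func is defined as above, could look like this:

from apsis_client.apsis_connection import Connection

def run_worker(server_address, exp_id, num_evals=20):
    # One connection per worker; each iteration pulls a candidate, evaluates it,
    # and reports the result back to the server.
    conn = Connection(server_address=server_address)
    for _ in range(num_evals):
        cand = conn.get_next_candidate(exp_id)
        cand["result"] = branin_func(cand["params"]["x"], cand["params"]["y"])
        conn.update(exp_id, cand, status="finished")

run_worker("http://localhost:5000", exp_id)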
5,149
Given the following text description, write Python code to implement the functionality described below step by step Description: Multivariate Regression Let's grab a small little data set of Blue Book car values Step1: We can use pandas to split up this matrix into the feature vectors we're interested in, and the value we're trying to predict. Note how we use pandas.Categorical to convert textual category data (model name) into an ordinal number that we can work with.
Python Code: import pandas as pd df = pd.read_excel('http://cdn.sundog-soft.com/Udemy/DataScience/cars.xls') df.head() Explanation: Multivariate Regression Let's grab a small little data set of Blue Book car values: End of explanation import statsmodels.api as sm df['Model_ord'] = pd.Categorical(df.Model).codes X = df[['Mileage', 'Model_ord', 'Doors']] y = df[['Price']] X1 = sm.add_constant(X) est = sm.OLS(y, X1).fit() est.summary() y.groupby(df.Doors).mean() Explanation: We can use pandas to split up this matrix into the feature vectors we're interested in, and the value we're trying to predict. Note how we use pandas.Categorical to convert textual category data (model name) into an ordinal number that we can work with. End of explanation
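With the fitted model in hand, you can also price a hypothetical car. A small sketch, assuming the est result object from above; the regressor order is const, Mileage, Model_ord, Doors, and the feature values here are made up purely for illustration:

import numpy as np

# const=1.0, Mileage=45000, Model_ord=10, Doors=4 (illustrative values only)
hypothetical_car = np.array([[1.0, 45000.0, 10.0, 4.0]])
print(est.predict(hypothetical_car))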
5,150
Given the following text description, write Python code to implement the functionality described below step by step Description: Deep Learning with TensorFlow Credits Step1: First reload the data we generated in notmnist.ipynb. Step2: Reformat into a shape that's more adapted to the models we're going to train Step3: Problem 1 Introduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor t using nn.l2_loss(t). The right amount of regularization should improve your validation / test accuracy. Please refer to - https Step4: Problem 2 Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens? Problem 3 Introduce Dropout on the hidden layer of the neural network. Remember
Python Code: # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. import cPickle as pickle import numpy as np import tensorflow as tf Explanation: Deep Learning with TensorFlow Credits: Forked from TensorFlow by Google Setup Refer to the setup instructions. Exercise 3 Previously in 2_fullyconnected.ipynb, you trained a logistic regression and a neural network model. The goal of this exercise is to explore regularization techniques. End of explanation pickle_file = 'notMNIST.pickle' with open(pickle_file, 'rb') as f: save = pickle.load(f) train_dataset = save['train_dataset'] train_labels = save['train_labels'] valid_dataset = save['valid_dataset'] valid_labels = save['valid_labels'] test_dataset = save['test_dataset'] test_labels = save['test_labels'] del save # hint to help gc free up memory print 'Training set', train_dataset.shape, train_labels.shape print 'Validation set', valid_dataset.shape, valid_labels.shape print 'Test set', test_dataset.shape, test_labels.shape Explanation: First reload the data we generated in notmnist.ipynb. End of explanation image_size = 28 num_labels = 10 def reformat(dataset, labels): dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32) # Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...] labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32) return dataset, labels train_dataset, train_labels = reformat(train_dataset, train_labels) valid_dataset, valid_labels = reformat(valid_dataset, valid_labels) test_dataset, test_labels = reformat(test_dataset, test_labels) print 'Training set', train_dataset.shape, train_labels.shape print 'Validation set', valid_dataset.shape, valid_labels.shape print 'Test set', test_dataset.shape, test_labels.shape def accuracy(predictions, labels): return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1)) / predictions.shape[0]) Explanation: Reformat into a shape that's more adapted to the models we're going to train: - data as a flat matrix, - labels as float 1-hot encodings. End of explanation import numpy as np import tensorflow as tf batch_size = 128 image_size = 28 num_labels = 10 graph = tf.Graph() with graph.as_default(): # Input data. For the training data, we use a placeholder that will be fed # at run time with a training minibatch. tf_train_dataset = tf.placeholder(tf.float32, shape=(None, image_size * image_size)) tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) # Variables. l1_size = 1000 l2_size = 30 weights_l1 = tf.Variable( tf.truncated_normal([image_size * image_size, l1_size],seed=1)) biases_l1 = tf.Variable(tf.zeros([l1_size])) weights_output = tf.Variable( tf.truncated_normal([l1_size ,num_labels],seed=1)) biases_output = tf.Variable(tf.zeros([num_labels])) # Training computation. l1_output = tf.nn.relu(tf.matmul(tf_train_dataset,weights_l1) + biases_l1) logits = tf.matmul(l1_output, weights_output) + biases_output loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels)) + tf.nn.l2_loss(weights_l1)*.01 # Optimizer. optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) train_prediction = tf.nn.softmax(logits) num_steps = 3001 with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print "Initialized" for step in xrange(num_steps): # Pick an offset within the training data, which has been randomized.
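# The modulus wraps the offset around, so offset + batch_size always
# stays within the bounds of the shuffled training set.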
# Note: we could use better randomization across epochs. offset = (step * batch_size) % (train_labels.shape[0] - batch_size) # Generate a minibatch. batch_data = train_dataset[offset:(offset + batch_size), :] batch_labels = train_labels[offset:(offset + batch_size), :] # Prepare a dictionary telling the session where to feed the minibatch. # The key of the dictionary is the placeholder node of the graph to be fed, # and the value is the numpy array to feed to it. feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels} _, l = session.run( [optimizer, loss], feed_dict=feed_dict) if (step % 500 == 0): print "Minibatch loss at step", step, ":", l print "Validation Set Accuracy",accuracy( session.run(train_prediction,feed_dict={tf_train_dataset:valid_dataset,}),valid_labels) print "Test Set Accuracy",accuracy( session.run(train_prediction,feed_dict={tf_train_dataset:test_dataset,}),test_labels) Explanation: Problem 1 Introduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor t using nn.l2_loss(t). The right amount of regularization should improve your validation / test accuracy. Please refer to - https://msdn.microsoft.com/zh-tw/magazine/dn904675.aspx Please refer to - http://blog.csdn.net/zouxy09/article/details/24971995 The basic idea is to avoid overfitting: put plainly, we don't want the weight values to grow too large, which would make the prediction function overly contorted. End of explanation batch_size = 128 image_size = 28 num_labels = 10 graph = tf.Graph() with graph.as_default(): # Input data. For the training data, we use a placeholder that will be fed # at run time with a training minibatch. tf_train_dataset = tf.placeholder(tf.float32, shape=(None, image_size * image_size)) tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) # Variables. l1_size = 1000 l2_size = 30 weights_l1 = tf.Variable( tf.truncated_normal([image_size * image_size, l1_size],seed=1)) biases_l1 = tf.Variable(tf.zeros([l1_size])) weights_output = tf.Variable( tf.truncated_normal([l1_size ,num_labels],seed=1)) biases_output = tf.Variable(tf.zeros([num_labels])) # Training computation. drop_weights_l1 = tf.nn.dropout(weights_l1,keep_prob=0.5) drop_l1_output = tf.nn.relu(tf.matmul(tf_train_dataset,drop_weights_l1) + biases_l1) drop_logits = tf.matmul(drop_l1_output, weights_output) + biases_output loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(drop_logits, tf_train_labels)) + tf.nn.l2_loss(drop_weights_l1)*.01 # prediction graph l1_output = tf.nn.relu(tf.matmul(tf_train_dataset,weights_l1) + biases_l1) logits = tf.matmul(l1_output, weights_output) + biases_output train_prediction = tf.nn.softmax(logits) # Optimizer. optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) num_steps = 3001 with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print "Initialized" for step in xrange(num_steps): # Pick an offset within the training data, which has been randomized. # Note: we could use better randomization across epochs. offset = (step * batch_size) % (train_labels.shape[0] - batch_size) # Generate a minibatch. batch_data = train_dataset[offset:(offset + batch_size), :] batch_labels = train_labels[offset:(offset + batch_size), :] # Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed, # and the value is the numpy array to feed to it. feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels} _, l = session.run( [optimizer, loss], feed_dict=feed_dict) if (step % 500 == 0): print "Minibatch loss at step", step, ":", l print "Validation Set Accuracy",accuracy( session.run(train_prediction,feed_dict={tf_train_dataset:valid_dataset,}),valid_labels) print "Test Set Accuracy",accuracy( session.run(train_prediction,feed_dict={tf_train_dataset:test_dataset,}),test_labels) Explanation: Problem 2 Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens? Problem 3 Introduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. TensorFlow provides nn.dropout() for that, but you have to make sure it's only inserted during training. What happens to our extreme overfitting case? End of explanation
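Problem 2 above is posed without a worked solution. One common way to force the overfitting, a sketch that reuses the training loop shown earlier, is to compute the offset modulo a handful of batches so the network keeps seeing the same few hundred examples:

num_batches = 3  # restrict training to just 3 minibatches
offset = (step % num_batches) * batch_size

With so little data, the minibatch loss collapses toward zero while validation and test accuracy stall well below the full-data results: the network memorizes the few batches instead of generalizing.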
5,151
Given the following text description, write Python code to implement the functionality described below step by step Description: Latent Dirichlet Allocation for Text Data In this assignment you will apply standard preprocessing techniques on Wikipedia text data use GraphLab Create to fit a Latent Dirichlet allocation (LDA) model explore and interpret the results, including topic keywords and topic assignments for documents Recall that a major feature distinguishing the LDA model from our previously explored methods is the notion of mixed membership. Throughout the course so far, our models have assumed that each data point belongs to a single cluster. k-means determines membership simply by shortest distance to the cluster center, and Gaussian mixture models suppose that each data point is drawn from one of their component mixture distributions. In many cases, though, it is more realistic to think of data as genuinely belonging to more than one cluster or category - for example, if we have a model for text data that includes both "Politics" and "World News" categories, then an article about a recent meeting of the United Nations should have membership in both categories rather than being forced into just one. With this in mind, we will use GraphLab Create tools to fit an LDA model to a corpus of Wikipedia articles and examine the results to analyze the impact of a mixed membership approach. In particular, we want to identify the topics discovered by the model in terms of their most important words, and we want to use the model to predict the topic membership distribution for a given document. Note to Amazon EC2 users Step1: In the original data, each Wikipedia article is represented by a URI, a name, and a string containing the entire text of the article. Recall from the video lectures that LDA requires documents to be represented as a bag of words, which ignores word ordering in the document but retains information on how many times each word appears. As we have seen in our previous encounters with text data, words such as 'the', 'a', or 'and' are by far the most frequent, but they appear so commonly in the English language that they tell us almost nothing about how similar or dissimilar two documents might be. Therefore, before we train our LDA model, we will preprocess the Wikipedia data in two steps Step2: Model fitting and interpretation In the video lectures we saw that Gibbs sampling can be used to perform inference in the LDA model. In this assignment we will use a GraphLab Create method to learn the topic model for our Wikipedia data, and our main emphasis will be on interpreting the results. We'll begin by creating the topic model using create() from GraphLab Create's topic_model module. Note Step3: GraphLab provides a useful summary of the model we have fitted, including the hyperparameter settings for alpha, gamma (note that GraphLab Create calls this parameter beta), and K (the number of topics); the structure of the output data; and some useful methods for understanding the results. Step4: It is certainly useful to have pre-implemented methods available for LDA, but as with our previous methods for clustering and retrieval, implementing and fitting the model gets us only halfway towards our objective. We now need to analyze the fitted model to understand what it has done with our data and whether it will be useful as a document classification system. This can be a challenging task in itself, particularly when the model that we use is complex. 
We will begin by outlining a sequence of objectives that will help us understand our model in detail. In particular, we will get the top words in each topic and use these to identify topic themes predict topic distributions for some example documents compare the quality of LDA "nearest neighbors" to the NN output from the first assignment understand the role of model hyperparameters alpha and gamma Load a fitted topic model The method used to fit the LDA model is a randomized algorithm, which means that it involves steps that are random; in this case, the randomness comes from Gibbs sampling, as discussed in the LDA video lectures. Because of these random steps, the algorithm will be expected to yield slightly different output for different runs on the same data - note that this is different from previously seen algorithms such as k-means or EM, which will always produce the same results given the same input and initialization. It is important to understand that variation in the results is a fundamental feature of randomized methods. However, in the context of this assignment this variation makes it difficult to evaluate the correctness of your analysis, so we will load and analyze a pre-trained model. We recommend that you spend some time exploring your own fitted topic model and compare our analysis of the pre-trained model to the same analysis applied to the model you trained above. Step5: Identifying topic themes by top words We'll start by trying to identify the topics learned by our model with some major themes. As a preliminary check on the results of applying this method, it is reasonable to hope that the model has been able to learn topics that correspond to recognizable categories. In order to do this, we must first recall what exactly a 'topic' is in the context of LDA. In the video lectures on LDA we learned that a topic is a probability distribution over words in the vocabulary; that is, each topic assigns a particular probability to every one of the unique words that appears in our data. Different topics will assign different probabilities to the same word Step6: Quiz Question Step7: Let's look at the top 10 words for each topic to see if we can identify any themes Step8: We propose the following themes for each topic Step9: Measuring the importance of top words We can learn more about topics by exploring how they place probability mass (which we can think of as a weight) on each of their top words. We'll do this with two visualizations of the weights for the top words in each topic Step10: In the above plot, each line corresponds to one of our ten topics. Notice how for each topic, the weights drop off sharply as we move down the ranked list of most important words. This shows that the top 10-20 words in each topic are assigned a much greater weight than the remaining words - and remember from the summary of our topic model that our vocabulary has 547462 words in total! Next we plot the total weight assigned by each topic to its top 10 words Step11: Here we see that, for our topic model, the top 10 words only account for a small fraction (in this case, between 5% and 13%) of their topic's total probability mass. So while we can use the top words to identify broad themes for each topic, we should keep in mind that in reality these topics are more complex than a simple 10-word summary.
Finally, we observe that some 'junk' words appear highly rated in some topics despite our efforts to remove unhelpful words before fitting the model; for example, the word 'born' appears as a top 10 word in three different topics, but it doesn't help us describe these topics at all. Topic distributions for some example documents As we noted in the introduction to this assignment, LDA allows for mixed membership, which means that each document can partially belong to several different topics. For each document, topic membership is expressed as a vector of weights that sum to one; the magnitude of each weight indicates the degree to which the document represents that particular topic. We'll explore this in our fitted model by looking at the topic distributions for a few example Wikipedia articles from our data set. We should find that these articles have the highest weights on the topics whose themes are most relevant to the subject of the article - for example, we'd expect an article on a politician to place relatively high weight on topics related to government, while an article about an athlete should place higher weight on topics related to sports or competition. Topic distributions for documents can be obtained using GraphLab Create's predict() function. GraphLab Create uses a collapsed Gibbs sampler similar to the one described in the video lectures, where only the word assignments variables are sampled. To get a document-specific topic proportion vector post-facto, predict() draws this vector from the conditional distribution given the sampled word assignments in the document. Notice that, since these are draws from a distribution over topics that the model has learned, we will get slightly different predictions each time we call this function on a document - we can see this below, where we predict the topic distribution for the article on Barack Obama Step12: To get a more robust estimate of the topics for each document, we can average a large number of predictions for the same document Step13: Quiz Question Step14: Quiz Question Step15: Comparing LDA to nearest neighbors for document retrieval So far we have found that our topic model has learned some coherent topics, we have explored these topics as probability distributions over a vocabulary, and we have seen how individual documents in our Wikipedia data set are assigned to these topics in a way that corresponds with our expectations. In this section, we will use the predicted topic distribution as a representation of each document, similar to how we have previously represented documents by word count or TF-IDF. This gives us a way of computing distances between documents, so that we can run a nearest neighbors search for a given document based on its membership in the topics that we learned from LDA. We can contrast the results with those obtained by running nearest neighbors under the usual TF-IDF representation, an approach that we explored in a previous assignment. We'll start by creating the LDA topic distribution representation for each document Step16: Next we add the TF-IDF document representations Step17: For each of our two different document representations, we can use GraphLab Create to compute a brute-force nearest neighbors model Step18: Let's compare these nearest neighbor models by finding the nearest neighbors under each representation on an example document. 
For this example we'll use Paul Krugman, an American economist Step19: Notice that there is no overlap between the two sets of top 10 nearest neighbors. This doesn't necessarily mean that one representation is better or worse than the other, but rather that they are picking out different features of the documents. With TF-IDF, documents are distinguished by the frequency of uncommon words. Since similarity is defined based on the specific words used in the document, documents that are "close" under TF-IDF tend to be similar in terms of specific details. This is what we see in the example Step20: Quiz Question Step21: Understanding the role of LDA model hyperparameters Finally, we'll take a look at the effect of the LDA model hyperparameters alpha and gamma on the characteristics of our fitted model. Recall that alpha is a parameter of the prior distribution over topic weights in each document, while gamma is a parameter of the prior distribution over word weights in each topic. In the video lectures, we saw that alpha and gamma can be thought of as smoothing parameters when we compute how much each document "likes" a topic (in the case of alpha) or how much each topic "likes" a word (in the case of gamma). In both cases, these parameters serve to reduce the differences across topics or words in terms of these calculated preferences; alpha makes the document preferences "smoother" over topics, and gamma makes the topic preferences "smoother" over words. Our goal in this section will be to understand how changing these parameter values affects the characteristics of the resulting topic model. Quiz Question Step22: Quiz Question Step23: We'll start by loading some topic models that have been trained using different settings of alpha and gamma. Specifically, we will start by comparing the following two models to our original topic model Step24: Changing the hyperparameter alpha Since alpha is responsible for smoothing document preferences over topics, the impact of changing its value should be visible when we plot the distribution of topic weights for the same document under models fit with different alpha values. In the code below, we plot the (sorted) topic weights for the Wikipedia article on Barack Obama under models fit with high, original, and low settings of alpha. Step25: Here we can clearly see the smoothing enforced by the alpha parameter - notice that when alpha is low most of the weight in the topic distribution for this article goes to a single topic, but when alpha is high the weight is much more evenly distributed across the topics. Quiz Question Step26: Quiz Question Step27: Changing the hyperparameter gamma Just as we were able to see the effect of alpha by plotting topic weights for a document, we expect to be able to visualize the impact of changing gamma by plotting word weights for each topic. In this case, however, there are far too many words in our vocabulary to do this effectively. Instead, we'll plot the total weight of the top 100 words and bottom 1000 words for each topic. Below, we plot the (sorted) total weights of the top 100 words and bottom 1000 from each topic in the high, original, and low gamma models. Now we will consider the following two models Step28: From these two plots we can see that the low gamma model results in higher weight placed on the top words and lower weight placed on the bottom words for each topic, while the high gamma model places relatively less weight on the top words and more weight on the bottom words.
Thus increasing gamma results in topics that have a smoother distribution of weight across all the words in the vocabulary. Quiz Question Step29: Quiz Question
Python Code: import graphlab as gl import numpy as np import matplotlib.pyplot as plt %matplotlib inline '''Check GraphLab Create version''' from distutils.version import StrictVersion assert (StrictVersion(gl.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.' # import wiki data wiki = gl.SFrame('people_wiki.gl/') wiki Explanation: Latent Dirichlet Allocation for Text Data In this assignment you will apply standard preprocessing techniques on Wikipedia text data use GraphLab Create to fit a Latent Dirichlet allocation (LDA) model explore and interpret the results, including topic keywords and topic assignments for documents Recall that a major feature distinguishing the LDA model from our previously explored methods is the notion of mixed membership. Throughout the course so far, our models have assumed that each data point belongs to a single cluster. k-means determines membership simply by shortest distance to the cluster center, and Gaussian mixture models suppose that each data point is drawn from one of their component mixture distributions. In many cases, though, it is more realistic to think of data as genuinely belonging to more than one cluster or category - for example, if we have a model for text data that includes both "Politics" and "World News" categories, then an article about a recent meeting of the United Nations should have membership in both categories rather than being forced into just one. With this in mind, we will use GraphLab Create tools to fit an LDA model to a corpus of Wikipedia articles and examine the results to analyze the impact of a mixed membership approach. In particular, we want to identify the topics discovered by the model in terms of their most important words, and we want to use the model to predict the topic membership distribution for a given document. Note to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook. Text Data Preprocessing We'll start by importing our familiar Wikipedia dataset. The following code block will check if you have the correct version of GraphLab Create. Any version later than 1.8.5 will do. To upgrade, read this page. End of explanation wiki_docs = gl.text_analytics.count_words(wiki['text']) wiki_docs = wiki_docs.dict_trim_by_keys(gl.text_analytics.stopwords(), exclude=True) Explanation: In the original data, each Wikipedia article is represented by a URI, a name, and a string containing the entire text of the article. Recall from the video lectures that LDA requires documents to be represented as a bag of words, which ignores word ordering in the document but retains information on how many times each word appears. As we have seen in our previous encounters with text data, words such as 'the', 'a', or 'and' are by far the most frequent, but they appear so commonly in the English language that they tell us almost nothing about how similar or dissimilar two documents might be. Therefore, before we train our LDA model, we will preprocess the Wikipedia data in two steps: first, we will create a bag of words representation for each article, and then we will remove the common words that don't help us to distinguish between documents. 
For both of these tasks we can use pre-implemented tools from GraphLab Create: End of explanation topic_model = gl.topic_model.create(wiki_docs, num_topics=10, num_iterations=200) Explanation: Model fitting and interpretation In the video lectures we saw that Gibbs sampling can be used to perform inference in the LDA model. In this assignment we will use a GraphLab Create method to learn the topic model for our Wikipedia data, and our main emphasis will be on interpreting the results. We'll begin by creating the topic model using create() from GraphLab Create's topic_model module. Note: This may take several minutes to run. End of explanation topic_model Explanation: GraphLab provides a useful summary of the model we have fitted, including the hyperparameter settings for alpha, gamma (note that GraphLab Create calls this parameter beta), and K (the number of topics); the structure of the output data; and some useful methods for understanding the results. End of explanation topic_model = gl.load_model('topic_models/lda_assignment_topic_model') Explanation: It is certainly useful to have pre-implemented methods available for LDA, but as with our previous methods for clustering and retrieval, implementing and fitting the model gets us only halfway towards our objective. We now need to analyze the fitted model to understand what it has done with our data and whether it will be useful as a document classification system. This can be a challenging task in itself, particularly when the model that we use is complex. We will begin by outlining a sequence of objectives that will help us understand our model in detail. In particular, we will get the top words in each topic and use these to identify topic themes predict topic distributions for some example documents compare the quality of LDA "nearest neighbors" to the NN output from the first assignment understand the role of model hyperparameters alpha and gamma Load a fitted topic model The method used to fit the LDA model is a randomized algorithm, which means that it involves steps that are random; in this case, the randomness comes from Gibbs sampling, as discussed in the LDA video lectures. Because of these random steps, the algorithm will be expected to yield slighty different output for different runs on the same data - note that this is different from previously seen algorithms such as k-means or EM, which will always produce the same results given the same input and initialization. It is important to understand that variation in the results is a fundamental feature of randomized methods. However, in the context of this assignment this variation makes it difficult to evaluate the correctness of your analysis, so we will load and analyze a pre-trained model. We recommend that you spend some time exploring your own fitted topic model and compare our analysis of the pre-trained model to the same analysis applied to the model you trained above. End of explanation topic_model.get_topics([0], num_words=3) Explanation: Identifying topic themes by top words We'll start by trying to identify the topics learned by our model with some major themes. As a preliminary check on the results of applying this method, it is reasonable to hope that the model has been able to learn topics that correspond to recognizable categories. In order to do this, we must first recall what exactly a 'topic' is in the context of LDA. 
In the video lectures on LDA we learned that a topic is a probability distribution over words in the vocabulary; that is, each topic assigns a particular probability to every one of the unique words that appears in our data. Different topics will assign different probabilities to the same word: for instance, a topic that ends up describing science and technology articles might place more probability on the word 'university' than a topic that describes sports or politics. Looking at the highest probability words in each topic will thus give us a sense of its major themes. Ideally we would find that each topic is identifiable with some clear theme and that all the topics are relatively distinct. We can use the GraphLab Create function get_topics() to view the top words (along with their associated probabilities) from each topic. Quiz Question: Identify the top 3 most probable words for the first topic. End of explanation sum(topic_model.get_topics([2], num_words=50)['score']) Explanation: Quiz Question: What is the sum of the probabilities assigned to the top 50 words in the 3rd topic? End of explanation [x['words'] for x in topic_model.get_topics(output_type='topic_words', num_words=10)] Explanation: Let's look at the top 10 words for each topic to see if we can identify any themes: End of explanation themes = ['science and research','team sports','music, TV, and film','American college and politics','general politics', \ 'art and publishing','Business','international athletics','Great Britain and Australia','international music'] Explanation: We propose the following themes for each topic: topic 0: Science and research topic 2: Team sports topic 3: Music, TV, and film topic 4: American college and politics topic 5: General politics topic 6: Art and publishing topic 7: Business topic 8: International athletics topic 9: Great Britain and Australia topic 10: International music We'll save these themes for later: End of explanation for i in range(10): plt.plot(range(100), topic_model.get_topics(topic_ids=[i], num_words=100)['score']) plt.xlabel('Word rank') plt.ylabel('Probability') plt.title('Probabilities of Top 100 Words in each Topic') Explanation: Measuring the importance of top words We can learn more about topics by exploring how they place probability mass (which we can think of as a weight) on each of their top words. We'll do this with two visualizations of the weights for the top words in each topic: - the weights of the top 100 words, sorted by the size - the total weight of the top 10 words Here's a plot for the top 100 words by weight in each topic: End of explanation top_probs = [sum(topic_model.get_topics(topic_ids=[i], num_words=10)['score']) for i in range(10)] ind = np.arange(10) width = 0.5 fig, ax = plt.subplots() ax.bar(ind-(width/2),top_probs,width) ax.set_xticks(ind) plt.xlabel('Topic') plt.ylabel('Probability') plt.title('Total Probability of Top 10 Words in each Topic') plt.xlim(-0.5,9.5) plt.ylim(0,0.15) plt.show() Explanation: In the above plot, each line corresponds to one of our ten topics. Notice how for each topic, the weights drop off sharply as we move down the ranked list of most important words. This shows that the top 10-20 words in each topic are assigned a much greater weight than the remaining words - and remember from the summary of our topic model that our vocabulary has 547462 words in total! 
Next we plot the total weight assigned by each topic to its top 10 words: End of explanation obama = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Barack Obama')[0])]]) pred1 = topic_model.predict(obama, output_type='probability') pred2 = topic_model.predict(obama, output_type='probability') print(gl.SFrame({'topics':themes, 'predictions (first draw)':pred1[0], 'predictions (second draw)':pred2[0]})) Explanation: Here we see that, for our topic model, the top 10 words only account for a small fraction (in this case, between 5% and 13%) of their topic's total probability mass. So while we can use the top words to identify broad themes for each topic, we should keep in mind that in reality these topics are more complex than a simple 10-word summary. Finally, we observe that some 'junk' words appear highly rated in some topics despite our efforts to remove unhelpful words before fitting the model; for example, the word 'born' appears as a top 10 word in three different topics, but it doesn't help us describe these topics at all. Topic distributions for some example documents As we noted in the introduction to this assignment, LDA allows for mixed membership, which means that each document can partially belong to several different topics. For each document, topic membership is expressed as a vector of weights that sum to one; the magnitude of each weight indicates the degree to which the document represents that particular topic. We'll explore this in our fitted model by looking at the topic distributions for a few example Wikipedia articles from our data set. We should find that these articles have the highest weights on the topics whose themes are most relevant to the subject of the article - for example, we'd expect an article on a politician to place relatively high weight on topics related to government, while an article about an athlete should place higher weight on topics related to sports or competition. Topic distributions for documents can be obtained using GraphLab Create's predict() function. GraphLab Create uses a collapsed Gibbs sampler similar to the one described in the video lectures, where only the word assignments variables are sampled. To get a document-specific topic proportion vector post-facto, predict() draws this vector from the conditional distribution given the sampled word assignments in the document. Notice that, since these are draws from a distribution over topics that the model has learned, we will get slightly different predictions each time we call this function on a document - we can see this below, where we predict the topic distribution for the article on Barack Obama: End of explanation def average_predictions(model, test_document, num_trials=100): avg_preds = np.zeros((model.num_topics)) for i in range(num_trials): avg_preds += model.predict(test_document, output_type='probability')[0] avg_preds = avg_preds/num_trials result = gl.SFrame({'topics':themes, 'average predictions':avg_preds}) result = result.sort('average predictions', ascending=False) return result print average_predictions(topic_model, obama, 100) Explanation: To get a more robust estimate of the topics for each document, we can average a large number of predictions for the same document: End of explanation george_bush = gl.SArray([wiki_docs[int(np.where(wiki['name']=='George W. Bush')[0])]]) print average_predictions(topic_model, george_bush, 100) Explanation: Quiz Question: What is the topic most closely associated with the article about former US President George W. Bush? 
Use the average results from 100 topic predictions. End of explanation steven_gerrard = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Steven Gerrard')[0])]]) print average_predictions(topic_model, steven_gerrard, 100) Explanation: Quiz Question: What are the top 3 topics corresponding to the article about English football (soccer) player Steven Gerrard? Use the average results from 100 topic predictions. End of explanation wiki['lda'] = topic_model.predict(wiki_docs, output_type='probability') Explanation: Comparing LDA to nearest neighbors for document retrieval So far we have found that our topic model has learned some coherent topics, we have explored these topics as probability distributions over a vocabulary, and we have seen how individual documents in our Wikipedia data set are assigned to these topics in a way that corresponds with our expectations. In this section, we will use the predicted topic distribution as a representation of each document, similar to how we have previously represented documents by word count or TF-IDF. This gives us a way of computing distances between documents, so that we can run a nearest neighbors search for a given document based on its membership in the topics that we learned from LDA. We can contrast the results with those obtained by running nearest neighbors under the usual TF-IDF representation, an approach that we explored in a previous assignment. We'll start by creating the LDA topic distribution representation for each document: End of explanation wiki['word_count'] = gl.text_analytics.count_words(wiki['text']) wiki['tf_idf'] = gl.text_analytics.tf_idf(wiki['word_count']) Explanation: Next we add the TF-IDF document representations: End of explanation model_tf_idf = gl.nearest_neighbors.create(wiki, label='name', features=['tf_idf'], method='brute_force', distance='cosine') model_lda_rep = gl.nearest_neighbors.create(wiki, label='name', features=['lda'], method='brute_force', distance='cosine') Explanation: For each of our two different document representations, we can use GraphLab Create to compute a brute-force nearest neighbors model: End of explanation model_tf_idf.query(wiki[wiki['name'] == 'Paul Krugman'], label='name', k=10) model_lda_rep.query(wiki[wiki['name'] == 'Paul Krugman'], label='name', k=10) Explanation: Let's compare these nearest neighbor models by finding the nearest neighbors under each representation on an example document. For this example we'll use Paul Krugman, an American economist: End of explanation # Get a list of 'reference_label' based on knn alex_rodriguez_tfidf = list(model_tf_idf.query(wiki[wiki['name'] == 'Alex Rodriguez'], label='name', k=5000)['reference_label']) print 'value of k:', alex_rodriguez_tfidf.index('Mariano Rivera') Explanation: Notice that that there is no overlap between the two sets of top 10 nearest neighbors. This doesn't necessarily mean that one representation is better or worse than the other, but rather that they are picking out different features of the documents. With TF-IDF, documents are distinguished by the frequency of uncommon words. Since similarity is defined based on the specific words used in the document, documents that are "close" under TF-IDF tend to be similar in terms of specific details. This is what we see in the example: the top 10 nearest neighbors are all economists from the US, UK, or Canada. Our LDA representation, on the other hand, defines similarity between documents in terms of their topic distributions. 
This means that documents can be "close" if they share similar themes, even though they may not share many of the same keywords. For the article on Paul Krugman, we expect the most important topics to be 'American college and politics' and 'science and research'. As a result, we see that the top 10 nearest neighbors are academics from a wide variety of fields, including literature, anthropology, and religious studies. Quiz Question: Using the TF-IDF representation, compute the 5000 nearest neighbors for American baseball player Alex Rodriguez. For what value of k is Mariano Rivera the k-th nearest neighbor to Alex Rodriguez? (Hint: Once you have a list of the nearest neighbors, you can use mylist.index(value) to find the index of the first instance of value in mylist.) End of explanation # Get a list of 'reference_label' based on knn alex_rodriguez_lda = list(model_lda_rep.query(wiki[wiki['name'] == 'Alex Rodriguez'], label='name', k=5000)['reference_label']) print 'value of k:', alex_rodriguez_lda.index('Mariano Rivera') Explanation: Quiz Question: Using the LDA representation, compute the 5000 nearest neighbors for American baseball player Alex Rodriguez. For what value of k is Mariano Rivera the k-th nearest neighbor to Alex Rodriguez? (Hint: Once you have a list of the nearest neighbors, you can use mylist.index(value) to find the index of the first instance of value in mylist.) End of explanation topic_model['alpha'] Explanation: Understanding the role of LDA model hyperparameters Finally, we'll take a look at the effect of the LDA model hyperparameters alpha and gamma on the characteristics of our fitted model. Recall that alpha is a parameter of the prior distribution over topic weights in each document, while gamma is a parameter of the prior distribution over word weights in each topic. In the video lectures, we saw that alpha and gamma can be thought of as smoothing parameters when we compute how much each document "likes" a topic (in the case of alpha) or how much each topic "likes" a word (in the case of gamma). In both cases, these parameters serve to reduce the differences across topics or words in terms of these calculated preferences; alpha makes the document preferences "smoother" over topics, and gamma makes the topic preferences "smoother" over words. Our goal in this section will be to understand how changing these parameter values affects the characteristics of the resulting topic model. Quiz Question: What was the value of alpha used to fit our original topic model? End of explanation topic_model['beta'] Explanation: Quiz Question: What was the value of gamma used to fit our original topic model? Remember that GraphLab Create uses "beta" instead of "gamma" to refer to the hyperparameter that influences topic distributions over words. End of explanation tpm_low_alpha = gl.load_model('topic_models/lda_low_alpha') tpm_high_alpha = gl.load_model('topic_models/lda_high_alpha') Explanation: We'll start by loading some topic models that have been trained using different settings of alpha and gamma. 
Specifically, we will start by comparing the following two models to our original topic model: - tpm_low_alpha, a model trained with alpha = 1 and default gamma - tpm_high_alpha, a model trained with alpha = 50 and default gamma End of explanation a = np.sort(tpm_low_alpha.predict(obama,output_type='probability')[0])[::-1] b = np.sort(topic_model.predict(obama,output_type='probability')[0])[::-1] c = np.sort(tpm_high_alpha.predict(obama,output_type='probability')[0])[::-1] ind = np.arange(len(a)) width = 0.3 def param_bar_plot(a,b,c,ind,width,ylim,param,xlab,ylab): fig = plt.figure() ax = fig.add_subplot(111) b1 = ax.bar(ind, a, width, color='lightskyblue') b2 = ax.bar(ind+width, b, width, color='lightcoral') b3 = ax.bar(ind+(2*width), c, width, color='gold') ax.set_xticks(ind+width) ax.set_xticklabels(range(10)) ax.set_ylabel(ylab) ax.set_xlabel(xlab) ax.set_ylim(0,ylim) ax.legend(handles = [b1,b2,b3],labels=['low '+param,'original model','high '+param]) plt.tight_layout() param_bar_plot(a,b,c,ind,width,ylim=1.0,param='alpha', xlab='Topics (sorted by weight of top 100 words)',ylab='Topic Probability for Obama Article') Explanation: Changing the hyperparameter alpha Since alpha is responsible for smoothing document preferences over topics, the impact of changing its value should be visible when we plot the distribution of topic weights for the same document under models fit with different alpha values. In the code below, we plot the (sorted) topic weights for the Wikipedia article on Barack Obama under models fit with high, original, and low settings of alpha. End of explanation paul_krugman = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Paul Krugman')[0])]]) paul_krugman_low_alpha = average_predictions(tpm_low_alpha, paul_krugman, 100) print paul_krugman_low_alpha[(paul_krugman_low_alpha['average predictions'] > 0.3) | (paul_krugman_low_alpha['average predictions'] < 0.05) ] print len(paul_krugman_low_alpha[(paul_krugman_low_alpha['average predictions'] > 0.3) | (paul_krugman_low_alpha['average predictions'] < 0.05) ]) Explanation: Here we can clearly see the smoothing enforced by the alpha parameter - notice that when alpha is low most of the weight in the topic distribution for this article goes to a single topic, but when alpha is high the weight is much more evenly distributed across the topics. Quiz Question: How many topics are assigned a weight greater than 0.3 or less than 0.05 for the article on Paul Krugman in the low alpha model? Use the average results from 100 topic predictions. End of explanation paul_krugman_high_alpha = average_predictions(tpm_high_alpha, paul_krugman, 100) print paul_krugman_low_alpha[(paul_krugman_high_alpha['average predictions'] > 0.3) | (paul_krugman_high_alpha['average predictions'] < 0.05) ] print len(paul_krugman_low_alpha[(paul_krugman_high_alpha['average predictions'] > 0.3) | (paul_krugman_high_alpha['average predictions'] < 0.05) ]) Explanation: Quiz Question: How many topics are assigned a weight greater than 0.3 or less than 0.05 for the article on Paul Krugman in the high alpha model? Use the average results from 100 topic predictions. 
End of explanation
del tpm_low_alpha
del tpm_high_alpha
tpm_low_gamma = gl.load_model('topic_models/lda_low_gamma')
tpm_high_gamma = gl.load_model('topic_models/lda_high_gamma')
a_top = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]
b_top = np.sort([sum(topic_model.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]
c_top = np.sort([sum(tpm_high_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]
a_bot = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]
b_bot = np.sort([sum(topic_model.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]
c_bot = np.sort([sum(tpm_high_gamma.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]
ind = np.arange(len(a_top))  # one bar position per topic
width = 0.3
param_bar_plot(a_top, b_top, c_top, ind, width, ylim=0.6, param='gamma', xlab='Topics (sorted by weight of top 100 words)', ylab='Total Probability of Top 100 Words')
param_bar_plot(a_bot, b_bot, c_bot, ind, width, ylim=0.0002, param='gamma', xlab='Topics (sorted by weight of bottom 1000 words)', ylab='Total Probability of Bottom 1000 Words')
Explanation: Changing the hyperparameter gamma Just as we were able to see the effect of alpha by plotting topic weights for a document, we expect to be able to visualize the impact of changing gamma by plotting word weights for each topic. In this case, however, there are far too many words in our vocabulary to do this effectively. Instead, we'll plot the total weight of the top 100 words and bottom 1000 words for each topic. Below, we plot the (sorted) total weights of the top 100 words and bottom 1000 from each topic in the high, original, and low gamma models. Now we will consider the following two models: - tpm_low_gamma, a model trained with gamma = 0.02 and default alpha - tpm_high_gamma, a model trained with gamma = 0.5 and default alpha End of explanation
def calculate_avg_words(model, num_words=547462, cdf_cutoff=0.5, num_topics=10):
    # Use the function's own arguments; the original version hard-coded the
    # vocabulary size 547462 and a cutoff of .5 inside the loop, silently
    # ignoring the num_words and cdf_cutoff parameters.
    avg_num_of_words = []
    for i in range(num_topics):
        avg_num_of_words.append(len(model.get_topics(topic_ids=[i], num_words=num_words, cdf_cutoff=cdf_cutoff)))
    avg_num_of_words = np.mean(avg_num_of_words)
    return avg_num_of_words
calculate_avg_words(tpm_low_gamma)
Explanation: From these two plots we can see that the low gamma model results in higher weight placed on the top words and lower weight placed on the bottom words for each topic, while the high gamma model places relatively less weight on the top words and more weight on the bottom words. Thus increasing gamma results in topics that have a smoother distribution of weight across all the words in the vocabulary. Quiz Question: For each topic of the low gamma model, compute the number of words required to make a list with total probability 0.5. What is the average number of words required across all topics? (HINT: use the get_topics() function from GraphLab Create with the cdf_cutoff argument). End of explanation
calculate_avg_words(tpm_high_gamma)
Explanation: Quiz Question: For each topic of the high gamma model, compute the number of words required to make a list with total probability 0.5. What is the average number of words required across all topics? (HINT: use the get_topics() function from GraphLab Create with the cdf_cutoff argument). End of explanation
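As a footnote to the cdf_cutoff questions above, the same quantity can be computed by hand from any probability vector: sort the word probabilities in descending order and count how many are needed before the running total reaches 0.5. The vector below is made up; only the counting logic matters, and it only roughly mirrors what get_topics() does with a cdf_cutoff:
import numpy as np
word_probs = np.array([0.30, 0.20, 0.15, 0.10, 0.10, 0.08, 0.07])
sorted_probs = np.sort(word_probs)[::-1]
n_words = np.searchsorted(np.cumsum(sorted_probs), 0.5) + 1
print n_words  # number of top words whose total probability first reaches 0.5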
5,152
Given the following text description, write Python code to implement the functionality described below step by step Description: Stage 1 Step1: In the code bellow, resize image into the special resolution Step2: 1.1 Create a standard training dataset Step4: Generate tfrecords Step5: Read a batch images Step6: Example shuffle dataset Step7: Example cPickle Step8: Example reshape
Python Code: %matplotlib inline import glob import os import numpy as np from scipy.misc import imread, imresize import matplotlib.pyplot as plt import tensorflow as tf raw_image = imread('model/datasets/nudity_dataset/3.jpg') # Define a tensor placeholder to store an image image = tf.placeholder("uint8", [None, None, 3]) image1 = tf.image.convert_image_dtype(image, dtype=tf.float32) image2 = tf.image.central_crop(image1, central_fraction=0.875) # Crop the central region of raw image model = tf.initialize_all_variables() # Quan trong print raw_image.shape with tf.Session() as session: session.run(model) result = session.run(image2, feed_dict={image: raw_image}) print result.dtype print("The shape of result: ",result.shape) print result.shape ## Draw image fig = plt.figure() a = fig.add_subplot(1,2,1) plt.imshow(raw_image) a = fig.add_subplot(1,2,2) plt.imshow(result) plt.show() Explanation: Stage 1: Preprocess VNG's data In this stage, we will read raw data from a given dataset. The dataset consists of variable-resolution images, while our system requires a constant input dimensionality. Therefore, we need to down-sampled the images to a fixed resolution (270 x 270) Examples of processing In the bellow code, we will crop the central region of raw image. End of explanation import numpy as np from scipy.misc import imread, imresize import matplotlib.pyplot as plt import tensorflow as tf raw_image = imread('model/datasets/nudity_dataset/3.jpg') image = tf.placeholder("uint8", [None, None, 3]) image1 = tf.image.convert_image_dtype(image, dtype = tf.float32) image1_t = tf.expand_dims(image1, 0) image2 = tf.image.resize_bilinear(image1_t, [270, 270], align_corners=False) image2 = tf.squeeze(image2, [0]) image3 = tf.sub(image2, 0.5) image3 = tf.mul(image2, 2.0) model = tf.initialize_all_variables() with tf.Session() as session: session.run(model) result = session.run(image3, feed_dict={image:raw_image}) ## Draw image fig = plt.figure() a = fig.add_subplot(1,2,1) plt.imshow(raw_image) a = fig.add_subplot(1,2,2) plt.imshow(result) plt.show() Explanation: In the code bellow, resize image into the special resolution End of explanation %matplotlib inline %load_ext autoreload %autoreload 2 import tensorflow as tf import numpy as np import matplotlib.pyplot as plt import cPickle as pickle from model.datasets.data import generate_standard_dataset # Load Normal and Nude images into the train dataset image_normal_ls, file_name_normal = generate_standard_dataset('/home/cpu11757/workspace/Nudity_Detection/src/model/datasets/train/normal') nudity_ls, file_name_nudity = generate_standard_dataset('/home/cpu11757/workspace/Nudity_Detection/src/model/datasets/train/nude') init_op = tf.initialize_all_variables() labels = np.zeros(3000, dtype = np.uint) database = [] with tf.Session() as session: session.run(init_op) # Start populating the filename queue coord = tf.train.Coordinator() tf.train.start_queue_runners(coord=coord) for i in range(3000): #print i if i % 2 == 0: image = image_normal_ls.eval() else: image = nudity_ls.eval() labels[i] = 1 database.append(image) coord.request_stop() database = np.array(database) from Dataset.data import generate_standard_dataset import numpy as np import tensorflow as tf img_nudity, _ = generate_standard_dataset('/media/taivu/Data/Project/Nudity_Detection/src/model/datasets/AdditionalDataset/vng/sex') labels = np.ones(100, dtype = np.uint) dataset = [] with tf.Session() as sess: sess.run(tf.global_variables_initializer()) coord = tf.train.Coordinator() 
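# Note added for clarity: in this pre-tf.data input pipeline the Coordinator and
# the queue-runner threads started on the next line work as a pair. The runners
# fill the filename/image queues in background threads, each eval() call pulls
# one element out, and coord.request_stop() after the loop shuts the threads down.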
tf.train.start_queue_runners(coord=coord) for i in range(100): image = img_nudity.eval() dataset.append(image) coord.request_stop() database = np.array(dataset) print file_name_normal[1123] Explanation: 1.1 Create a standard training dataset End of explanation import os import tensorflow as tf def _int64_feature(value): return tf.train.Feature(int64_list=tf.train.Int64List(value=[value])) def _bytes_feature(value): return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value])) def convert_to(data_dir, dataset, labels, name): Converts a dataset to tfrecords. images = dataset labels = labels num_examples = dataset.shape[0] rows, cols, depth = dataset[0].shape filename = os.path.join(data_dir, name + '.tfrecords') writer = tf.python_io.TFRecordWriter(filename) for idx in range(num_examples): image_raw = images[idx].tostring() example = tf.train.Example(features = tf.train.Features(feature={ 'height': _int64_feature(rows), 'width': _int64_feature(cols), 'depth': _int64_feature(depth), 'label': _int64_feature(int(labels[idx])), 'image_raw': _bytes_feature(image_raw) })) writer.write(example.SerializeToString()) writer.close() convert_to('/home/taivu/workspace/NudityDetection/Dataset', database, labels, 'nudity_test_set') Explanation: Generate tfrecords End of explanation import tensorflow as tf import matplotlib.pyplot as plt def read_and_decode(filename_queue): reader = tf.TFRecordReader() _, serialized_example = reader.read(filename_queue) features = tf.parse_single_example( serialized_example, features={ 'image_raw': tf.FixedLenFeature([], tf.string), 'label': tf.FixedLenFeature([], tf.int64), 'depth': tf.FixedLenFeature([], tf.int64), 'width': tf.FixedLenFeature([], tf.int64), 'height': tf.FixedLenFeature([], tf.int64) }) image = tf.decode_raw(features['image_raw'], tf.float32) image = tf.reshape(image,[34,34,3]) label = tf.cast(features['label'], tf.int32) height = tf.cast(features['height'], tf.int32) width = tf.cast(features['width'], tf.int32) depth = tf.cast(features['depth'], tf.int32) return image, label, height, width, depth def data_input(data_dir, batch_size): filename_queue = tf.train.string_input_producer([data_dir], num_epochs = None) image, label, height, width, depth = read_and_decode(filename_queue) images_batch, labels_batch = tf.train.shuffle_batch( [image, label], batch_size = batch_size, capacity = 2000, min_after_dequeue = 80 ) return images_batch, labels_batch #filename_queue = tf.train.string_input_producer(['/home/cpu11757/workspace/Nudity_Detection/src/model/datasets/vng_dataset.tfrecords'], num_epochs = None) #image, label, height,_,depth = read_and_decode(filename_queue) img_batch, lb_batch = data_input('/home/cpu11757/workspace/Nudity_Detection/src/model/datasets/vng_dataset.tfrecords',500) init_op = tf.initialize_all_variables() fig = plt.figure() with tf.Session() as session: session.run(init_op) coord = tf.train.Coordinator() tf.train.start_queue_runners(coord=coord) images, labels = session.run([img_batch, lb_batch]) coord.request_stop() import matplotlib.pyplot as plt fig = plt.figure() plt.imshow(images[1]) print labels[0] plt.show() Explanation: Read a batch images End of explanation import tensorflow as tf f = ["f1", "f2", "f3", "f4", "f5", "f6", "f7", "f8"] l = ["l1", "l2", "l3", "l4", "l5", "l6", "l7", "l8"] fv = tf.constant(f) lv = tf.constant(l) rsq = tf.RandomShuffleQueue(10, 0, [tf.string, tf.string], shapes=[[],[]]) do_enqueues = rsq.enqueue_many([fv, lv]) gotf, gotl = rsq.dequeue() with tf.Session() as sess: 
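# The session block below drains two elements from the RandomShuffleQueue above.
# Each (f, l) pair stays matched because both components are enqueued together,
# but the order in which pairs come back is random from run to run.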
sess.run(tf.initialize_all_variables()) coord = tf.train.Coordinator() tf.train.start_queue_runners(sess=sess,coord = coord) sess.run(do_enqueues) for i in xrange(2): one_f, one_l = sess.run([gotf, gotl]) print "F: ", one_f, "L: ", one_l coord.request_stop() Explanation: Example shuffle dataset End of explanation import cPickle as pickle dict1 = {'name':[],'id':[]} dict2 = {'local':[], 'paza':[]} #with open('test.p', 'wb') as fp: # pickle.dump(dict1,fp) # pickle.dump(dict2,fp) with open('test.p', 'rb') as fp: d1 = pickle.load(fp) d2 = pickle.load(fp) print len(d1) print len(d2) Explanation: Example cPickle End of explanation import tensorflow as tf import numpy as np a = tf.constant(np.array([[.1]])) init = tf.initialize_all_variables() with tf.Session() as session: session.run(init) b = session.run(tf.nn.softmax(a)) c = session.run(tf.nn.softmax_cross_entropy_with_logits([0.6, 0.4],[0,1])) #print b #print c label = np.array([[0], [1], [1]]) idx = np.arange(3) * 2 print ('IDX') print idx labels_one_hot = np.zeros((3,2)) print ('labels_one_hot') print labels_one_hot labels_one_hot.flat[idx + label.ravel()] = 1 print ('IDX + label.ravel()') print idx + label.ravel() import tensorflow as tf import matplotlib.pyplot as plt from Dataset.data import preprocess_image import numpy as np filename_queue = tf.train.string_input_producer(tf.train.match_filenames_once( '/home/taivu/workspace/NudityDetection/Dataset/train/normal/*.jpg')) img_reader = tf.WholeFileReader() _, img_file = img_reader.read(filename_queue) image = tf.image.decode_jpeg(img_file, 3) image = preprocess_image(image, 34, 34) images = tf.train.batch([image], batch_size = 10, capacity = 50, name = 'input') coord = tf.train.Coordinator() with tf.Session() as sess: sess.run(tf.global_variables_initializer()) threads = tf.train.start_queue_runners(coord=coord) result_img = sess.run([images]) result_img = np.array(result_img) coord.request_stop() coord.join(threads) fig = plt.figure() plt.imshow(result_img[0][1]) plt.show() import tensorflow as tf import numpy as np from execute_model import evaluate from Dataset.data import data_input import matplotlib.pyplot as plt dt, _ = data_input('/home/taivu/workspace/NudityDetection/Dataset/vng_dataset_validation.tfrecords', 10, False) coord = tf.train.Coordinator() with tf.Session() as sess: sess.run(tf.global_variables_initializer()) threads = tf.train.start_queue_runners(coord=coord) result_img = sess.run([dt]) coord.request_stop() coord.join(threads) #fig = plt.figure() result_img = np.array(result_img) print result_img.shape print result_img.dtype #plt.show() import tensorflow as tf import numpy as np from execute_model import evaluate from Dataset.data import data_input import matplotlib.pyplot as plt dt = data_input('/home/taivu/workspace/NudityDetection/Dataset/vng_dataset_validation.tfrecords', 10, False, False) coord = tf.train.Coordinator() with tf.Session() as sess: sess.run(tf.global_variables_initializer()) threads = tf.train.start_queue_runners(coord=coord) result_img = sess.run([dt]) coord.request_stop() coord.join(threads) #fig = plt.figure() result_img = np.array(result_img) print result_img.shape print result_img.dtype #plt.show() import tensorflow as tf import os import glob from Dataset.data import preprocess_image import matplotlib.pyplot as plt data_dir = '/home/taivu/workspace/AddPic' filenames = [] for pathAndFilename in glob.iglob(os.path.join(data_dir, '*.jpg')): filenames.append(pathAndFilename) filename_queue = tf.train.string_input_producer(filenames, shuffle = None) 
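# Dequeuing the filename tensor itself (next line) alongside the decoded image
# lets tf.train.batch group the two together, so every image in a batch can be
# traced back to the file it came from.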
filename = filename_queue.dequeue()
# img_reader = tf.WholeFileReader()
img_file = tf.read_file(filename)
#_, img_file = img_reader.read(filename)
img = tf.image.decode_jpeg(img_file, 3)
img = preprocess_image(img, 34, 34)
filename_batch, img_batch = tf.train.batch([filename, img], batch_size = 3, capacity=200, name = 'input')
init = tf.global_variables_initializer()
coord = tf.train.Coordinator()
with tf.Session() as sess:
    sess.run(init)
    tf.train.start_queue_runners(sess, coord)
    ls_img, ls_nf = sess.run([img_batch, filename_batch])
    fig = plt.figure()  # needed below; was commented out in the original
    print ls_nf
    for i in range(3):
        a = fig.add_subplot(1, 3, i + 1)  # subplot indices are 1-based
        a.set_title('%d' % i)
        plt.imshow(ls_img[i])
    plt.show()
    coord.request_stop()
print ls_nf[0]
import tensorflow as tf
import numpy as np
a = [[1,2,3]]
b = [[4,5,6]]
np.column_stack((a,b))
import math
print int(math.ceil(float(5)/3))
Explanation: Example reshape End of explanation
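One addendum on the earlier one-hot example: the line labels_one_hot.flat[idx + label.ravel()] = 1 can look like magic. It works because .flat views the (3, 2) matrix as a flat vector of length 6, so row r, column c lives at flat index r * 2 + c; idx supplies the r * 2 part and label.ravel() the column. A self-contained restatement of the same trick:
import numpy as np
label = np.array([[0], [1], [1]])
num_classes = 2
one_hot = np.zeros((label.shape[0], num_classes))
one_hot[np.arange(label.shape[0]), label.ravel()] = 1  # fancy indexing, same result
print one_hot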
5,153
Given the following text description, write Python code to implement the functionality described below step by step Description: Step7: Arbres binaires Le but de ce TP est d'implanter les fonctions usuelles telles que la génération exhaustive (fabriquer tous les éléments de l'ensemble), rank et unrank sur l'ensemble des arbres binaires. Pour représenter les arbres binaires en python, on utilisera la structure suivante. Exécutez les cellules suivantes et observez les exemples. Step8: Il y a 5 arbres binaires de taille 3. L'un deux est celui que nous venons de construire. Construisez explicitement les 4 autres Step19: Le but de ce TP est d'implanter les fonctions de la classe BinaryTrees ci-dessous (avec un "s" à la fin) qui représente l'ensemble des arbres binaires d'une taille donnée. La structure de la classe vous est donnée ainsi que les méthodes de base. Complétez les méthodes manquantes puis exécutez les exemples ci-dessous. Step20: La suite de tests que nous avions définies sur les permutations peut aussi s'appliquer sur les arbres binaires. Exécutez la cellule suivante puis vérifiez que les tests passent sur les exemples. Step23: Voici une fonction qui calcule un arbre binaire aléatoire. On se demande si chaque arbre est obenu avec une probabilité uniforme. Exécutez les cellules ci-dessous puis déterminez expérimentalment si la distribution de probabilité est uniforme. Step24: La hauteur d'un arbre se calcule récursivement
Python Code: class BinaryTree(): def __init__(self, children = None): A binary tree is either a leaf or a node with two subtrees. INPUT: - children, either None (for a leaf), or a list of size excatly 2 of either two binary trees or 2 objects that can be made into binary trees self._isleaf = (children is None) if not self._isleaf: if len(children) != 2: raise ValueError("A binary tree needs exactly two children") self._children = tuple(c if isinstance(c,BinaryTree) else BinaryTree(c) for c in children) self._size = None def __repr__(self): if self.is_leaf(): return "leaf" return str(self._children) def __eq__(self, other): Return true if other represents the same binary tree as self if not isinstance(other, BinaryTree): return False if self.is_leaf(): return other.is_leaf() return self.left() == other.left() and self.right() == other.right() def left(self): Return the left subtree of self return self._children[0] def right(self): Return the right subtree of self return self._children[1] def is_leaf(self): Return true is self is a leaf return self._isleaf def _compute_size(self): Recursively computes the size of self if self.is_leaf(): self._size = 0 else: self._size = self.left().size() + self.right().size() +1 def size(self): Return the number of nodes (non leaves) in the binary tree if self._size is None: self._compute_size() return self._size leaf = BinaryTree() t = BinaryTree() t t.size() t = BinaryTree([[leaf,leaf], leaf]) # a tree of size 2 t t.size() t = BinaryTree([leaf, [leaf,leaf]]) # a different tree of size 2 t t.size() t = BinaryTree([[leaf, leaf], [leaf, leaf]]) # a tree of size 3 t t.size() Explanation: Arbres binaires Le but de ce TP est d'implanter les fonctions usuelles telles que la génération exhaustive (fabriquer tous les éléments de l'ensemble), rank et unrank sur l'ensemble des arbres binaires. Pour représenter les arbres binaires en python, on utilisera la structure suivante. Exécutez les cellules suivantes et observez les exemples. End of explanation # t1 = BinaryTree(...) # t2 = BinaryTree(...) # t3 = BinaryTree(...) # t4 = BinaryTree(...) Explanation: Il y a 5 arbres binaires de taille 3. L'un deux est celui que nous venons de construire. 
Construisez explicitement les 4 autres End of explanation import math import random class BinaryTrees(): def __init__(self, size): The combinatorial set of binary trees of size `size` INPUT: - size a non negative integers self._size = size def size(self): Return the size of the binary trees of the set return self._size def __repr__(self): Default string repr of ``self`` return "Binary Trees of size " + str(self._size) def cardinality(self): Return the cardinality of the set # This is given to you n = self._size f = math.factorial(n) return math.factorial(2*n)//(f*f*(n+1)) def __iter__(self): Iterator on the elements of the set # écrire le code ici def first(self): Return the first element of the set for t in self: return t def rank(self,t): Return the rank of the binary tree t in the generation order of the set (starting at 0) INPUT: - t, a binary tree # écrire le code ici def unrank(self,i): Return the binary tree corresponding to the rank ``i`` INPUT: - i, a integer between 0 and the cardinality minus 1 # écrire le code ici def next(self,t): Return the next element following t in self INPUT : - t a binary tree OUPUT : The next binary tree or None if t is the last permutation of self # écrire le code ici def random_element(self): Return a random element of ``self`` with uniform probability # écrire le code ici BinaryTrees(0) list(BinaryTrees(0)) BinaryTrees(1) list(BinaryTrees(1)) BinaryTrees(2) list(BinaryTrees(2)) BT3 = BinaryTrees(3) BT3 list(BT3) t = BinaryTree(((leaf, leaf), (leaf, leaf))) BT3.rank(t) BT3.unrank(2) BT3.next(t) BT3.random_element() Explanation: Le but de ce TP est d'implanter les fonctions de la classe BinaryTrees ci-dessous (avec un "s" à la fin) qui représente l'ensemble des arbres binaires d'une taille donnée. La structure de la classe vous est donnée ainsi que les méthodes de base. Complétez les méthodes manquantes puis exécutez les exemples ci-dessous. End of explanation def test_cardinality_iter(S): assert len(list(S)) == S.cardinality() def test_rank(S): assert [S.rank(p) for p in S] == list(range(S.cardinality())) def test_unrank(S): assert list(S) == [S.unrank(i) for i in range(S.cardinality())] def test_next(S): L = [S.first()] while True: p = S.next(L[-1]) if p == None: break L.append(p) assert L == list(S) def all_tests(S): tests = {"Cardinality / iter": test_cardinality_iter, "Rank": test_rank, "Unrank": test_unrank, "Next": test_next} for k in tests: print ("Testsing: "+ k) try: tests[k](S) print ("Passed") except AssertionError: print ("Not passed") all_tests(BinaryTrees(3)) all_tests(BinaryTrees(4)) all_tests(BinaryTrees(5)) all_tests(BinaryTrees(6)) Explanation: La suite de tests que nous avions définies sur les permutations peut aussi s'appliquer sur les arbres binaires. Exécutez la cellule suivante puis vérifiez que les tests passent sur les exemples. End of explanation import random def random_grow(t): Randomly grows a binary tree INPUT: - t, a binary tree of size n OUTPUT: a binary tree of size n+1 if t.is_leaf(): return BinaryTree([leaf,leaf]) c = [t.left(),t.right()] i = random.randint(0,1) c[i] = random_grow(c[i]) return BinaryTree(c) def random_binary_tree(n): Return a random binary tree of size n t = leaf for i in range(n): t = random_grow(t) return t random_binary_tree(10) Explanation: Voici une fonction qui calcule un arbre binaire aléatoire. On se demande si chaque arbre est obenu avec une probabilité uniforme. Exécutez les cellules ci-dessous puis déterminez expérimentalment si la distribution de probabilité est uniforme. 
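Par exemple, pour tester empiriquement l'uniformité, on peut échantillonner un grand nombre d'arbres et compter la fréquence de chaque forme (esquisse indicative, à adapter) :
from collections import Counter
compteur = Counter(repr(random_binary_tree(3)) for _ in range(10000))
for arbre, n in compteur.most_common():
    print(arbre, n / 10000.0)
# Il y a 5 arbres de taille 3 : si la distribution était uniforme,
# chaque fréquence serait proche de 0.2.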
End of explanation assert BinaryTree([[leaf,leaf], leaf]).height() == 2 assert BinaryTree([leaf,[leaf, leaf]]).height() == 2 assert BinaryTree([[leaf,leaf], [leaf,leaf]]).height() == 2 assert BinaryTree([[leaf,[leaf,leaf]], [leaf,leaf]]).height() == 3 Explanation: La hauteur d'un arbre se calcule récursivement : pour une feuille, la hauteur est 0, sinon c'est le max de la hauteur des fils +1. Rajoutez une méthode height à la classe des arbres binaires et vérifiez son fonctionnement avec les tests suivants. End of explanation
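Pour référence, une implantation possible de la méthode height demandée ci-dessus (esquisse à ajouter dans la classe BinaryTree) :
def height(self):
    # La hauteur d'une feuille vaut 0 ; sinon 1 + le max des hauteurs des fils.
    if self.is_leaf():
        return 0
    return 1 + max(self.left().height(), self.right().height())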
5,154
Given the following text description, write Python code to implement the functionality described below step by step Description: Hello world! The beginning of almost everything in computer programming Step1: <a id='Jupyter'></a> 2. Interacting with Jupyter Notebook This interface (what you are reading now) is known as Jupyter Notebook, an interactive document, which is a mix of Markdown and Python code executed by IPython Step2: <a id='Python'></a> 3. Interacting with the Python interpreter Run python in a shell and type Step3: <a id='Scripts'></a> 4. Running Python programs (modules) as scripts
Python Code: !python -c "print('Hello world!')" Explanation: Hello world! The beginning of almost everything in computer programming :-) Let's see different alternatives to run Python code. Contents "Batch" running of single commands. Interacting with Jupyter Notebooks. Interacting with Python interpreters. "Batch" running of scripts (modules). <a id='Batch'></a> 1. "Batch" execution of single Python commands End of explanation print("Hello world!") # Modify me and push <SHIFT> + <RETURN> Explanation: <a id='Jupyter'></a> 2. Interacting with Jupyter Notebook This interface (what you are reading now) is know as Jupyter Notebook, an interactive document, which is a mix of Markdown and Python code executed by IPython: End of explanation def hello(): print('Hello world!') import dis dis.dis(hello) Explanation: <a id='Python'></a> 3. Interacting with the Python interpreter Run python in a shell and type: print("Hello world!") &lt;enter&gt; quit(). ``` $ python Python 2.7.10 (default, Oct 23 2015, 19:19:21) [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)] on darwin Type "help", "copyright", "credits" or "license" for more information. print("Hello world!") Hello world! quit() $ ``` Alternatively, instead of python we can use ipython, which provides dynamic object introspection, command completion, access to the system shell, etc. ``` $ ipython Python 3.5.1rc1 (v3.5.1rc1:948ef16a6951, Nov 22 2015, 11:29:13) Type "copyright", "credits" or "license" for more information. IPython 5.1.0 -- An enhanced Interactive Python. ? -> Introduction and overview of IPython's features. %quickref -> Quick reference. help -> Python's own help system. object? -> Details about 'object', use 'object??' for extra details. In [1]: print("Hello world!") Hello world! In [2]: help(print) Help on built-in function print in module builtins: print(...) print(value, ..., sep=' ', end='\n', file=sys.stdout, flush=False) Prints the values to a stream, or to sys.stdout by default. Optional keyword arguments: file: a file-like object (stream); defaults to the current sys.stdout. sep: string inserted between values, default a space. end: string appended after the last value, default a newline. flush: whether to forcibly flush the stream. (type: <q> to exit) In [3]: quit() # <ctrl> + <d> also works in Unixes $ ``` Interpreted? Python is an interpreted programming language. When we run a Python program, we are executing the translation to bytecode of each Python statement of our program over the Python Virtual Machine (PVM). The .pyc files that appear after running a collection of modules as a script for the first time, contains the bytecode of such modules. This is used by Python to speed up their future executions. End of explanation !cat hello_world.py # Check the code (optional) !pyflakes3 hello_world.py !./hello_world.py # Specific of Unix !python hello_world.py %run hello_world.py # Specific of Ipython Explanation: <a id='Scripts'></a> 4. Running Python programs (modules) as scripts End of explanation
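For reference, a minimal hello_world.py of the kind run above could look as follows; the actual file from the course materials is not shown here, so treat this as an assumption:
#!/usr/bin/env python
# The shebang line above is what makes ./hello_world.py work on Unix.
def main():
    print("Hello world!")
if __name__ == '__main__':
    # Runs only when the module is executed as a script, not when imported.
    main()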
5,155
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1> Creating a custom Word2Vec embedding on your data </h1> This notebook illustrates Step2: Creating a training dataset The training dataset simply consists of a bunch of words separated by spaces extracted from your documents. The words are simply in the order that they appear in the documents and words from successive documents are simply appended together. In other words, there is not "document separator". <p> The only preprocessing that I do is to replace anything that is not a letter or hyphen by a space. <p> Recall that word2vec is unsupervised. There is no label. Step3: This is what the resulting file looks like Step4: Running word2vec We can run the existing tutorial code as-is. Step5: The actual evaluation dataset doesn't matter. Let's just make sure to have some words in the input also in the eval. The analogy dataset is of the form <pre> Athens Greece Cairo Egypt Baghdad Iraq Beijing China </pre> i.e. four words per line where the model is supposed to predict the fourth given the first three. But we'll just make up a junk file. Step6: Examine the created embedding Let's load up the embedding file in TensorBoard. Start up TensorBoard, switch to the "Projector" tab and then click on the button to "Load data". Load the vocab.txt that is in the output directory of the model. Step7: Here, for example, is the word "founders" in context -- it's near doing, creative, difficult, and fight, which sounds about right ... The numbers next to the words reflect the count -- we should try to get a large enough vocabulary that we can use --min_count=10 when training word2vec, but that would also take too long for a classroom situation. <img src="embeds.png" /> Step8: Export the embedding vectors into a text file Let's export the embedding into a text file, so that we can use it the way we used the Glove embeddings in txtcls2.ipynb. Notice that we have written out our vocabulary and vectors into two files. We just have to merge them now. Step9: Training model with custom embedding Now, you can use this embedding file instead of the Glove embedding used in txtcls2.ipynb
Python Code: # change these to try this notebook out BUCKET = 'alexhanna-dev-ml' PROJECT = 'alexhanna-dev' REGION = 'us-central1' import os os.environ['BUCKET'] = BUCKET os.environ['PROJECT'] = PROJECT os.environ['REGION'] = REGION Explanation: <h1> Creating a custom Word2Vec embedding on your data </h1> This notebook illustrates: <ol> <li> Creating a training dataset <li> Running word2vec <li> Examining the created embedding <li> Export the embedding into a file you can use in other models <li> Training the text classification model of [txtcls2.ipynb](txtcls2.ipynb) with this custom embedding. </ol> End of explanation import google.datalab.bigquery as bq query= SELECT CONCAT( LOWER(REGEXP_REPLACE(title, '[^a-zA-Z $-]', ' ')), " ", LOWER(REGEXP_REPLACE(text, '[^a-zA-Z $-]', ' '))) AS text FROM `bigquery-public-data.hacker_news.stories` WHERE LENGTH(title) > 100 AND LENGTH(text) > 100 df = bq.Query(query).execute().result().to_dataframe() df[:5] with open('word2vec/words.txt', 'w') as ofp: for txt in df['text']: ofp.write(txt + " ") Explanation: Creating a training dataset The training dataset simply consists of a bunch of words separated by spaces extracted from your documents. The words are simply in the order that they appear in the documents and words from successive documents are simply appended together. In other words, there is not "document separator". <p> The only preprocessing that I do is to replace anything that is not a letter or hyphen by a space. <p> Recall that word2vec is unsupervised. There is no label. End of explanation !cut -c-1000 word2vec/words.txt Explanation: This is what the resulting file looks like: End of explanation %%bash cd word2vec TF_CFLAGS=( $(python -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_compile_flags()))') ) TF_LFLAGS=( $(python -c 'import tensorflow as tf; print(" ".join(tf.sysconfig.get_link_flags()))') ) g++ -std=c++11 \ -shared word2vec_ops.cc word2vec_kernels.cc \ -o word2vec_ops.so -fPIC ${TF_CFLAGS[@]} ${TF_LFLAGS[@]} \ -O2 -D_GLIBCXX_USE_CXX11_ABI=0 # -I/usr/local/lib/python2.7/dist-packages/tensorflow/include/external/nsync/public \ Explanation: Running word2vec We can run the existing tutorial code as-is. End of explanation %%writefile word2vec/junk.txt : analogy-questions-ignored the user plays several levels of the game puzzle vote down the negative %%bash cd word2vec rm -rf trained python word2vec.py \ --train_data=./words.txt --eval_data=./junk.txt --save_path=./trained \ --min_count=1 --embedding_size=10 --window_size=2 Explanation: The actual evaluation dataset doesn't matter. Let's just make sure to have some words in the input also in the eval. The analogy dataset is of the form <pre> Athens Greece Cairo Egypt Baghdad Iraq Beijing China </pre> i.e. four words per line where the model is supposed to predict the fourth given the first three. But we'll just make up a junk file. End of explanation from google.datalab.ml import TensorBoard TensorBoard().start('word2vec/trained') Explanation: Examine the created embedding Let's load up the embedding file in TensorBoard. Start up TensorBoard, switch to the "Projector" tab and then click on the button to "Load data". Load the vocab.txt that is in the output directory of the model. End of explanation for pid in TensorBoard.list()['pid']: TensorBoard().stop(pid) print('Stopped TensorBoard with pid {}'.format(pid)) Explanation: Here, for example, is the word "founders" in context -- it's near doing, creative, difficult, and fight, which sounds about right ... 
The numbers next to the words reflect the count -- we should try to get a large enough vocabulary that we can use --min_count=10 when training word2vec, but that would also take too long for a classroom situation. <img src="embeds.png" /> End of explanation !wc word2vec/trained/*.txt !head -3 word2vec/trained/*.txt import pandas as pd vocab = pd.read_csv("word2vec/trained/vocab.txt", sep="\s+", header=None, names=('word', 'count')) vectors = pd.read_csv("word2vec/trained/vectors.txt", sep="\s+", header=None) vectors = pd.concat([vocab, vectors], axis=1) del vectors['count'] vectors.to_csv("word2vec/trained/embedding.txt.gz", sep=" ", header=False, index=False, index_label=False, compression='gzip') !zcat word2vec/trained/embedding.txt.gz | head -3 Explanation: Export the embedding vectors into a text file Let's export the embedding into a text file, so that we can use it the way we used the Glove embeddings in txtcls2.ipynb. Notice that we have written out our vocabulary and vectors into two files. We just have to merge them now. End of explanation %%bash gsutil cp word2vec/trained/embedding.txt.gz gs://${BUCKET}/txtcls2/custom_embedding.txt.gz %%bash OUTDIR=gs://${BUCKET}/txtcls2/trained_model JOBNAME=txtcls_$(date -u +%y%m%d_%H%M%S) echo $OUTDIR $REGION $JOBNAME gsutil -m rm -rf $OUTDIR gsutil cp txtcls1/trainer/*.py $OUTDIR gcloud ml-engine jobs submit training $JOBNAME \ --region=$REGION \ --module-name=trainer.task \ --package-path=$(pwd)/txtcls1/trainer \ --job-dir=$OUTDIR \ --staging-bucket=gs://$BUCKET \ --scale-tier=BASIC_GPU \ --runtime-version=1.4 \ -- \ --bucket=${BUCKET} \ --output_dir=${OUTDIR} \ --glove_embedding=gs://${BUCKET}/txtcls2/custom_embedding.txt.gz \ --train_steps=36000 Explanation: Training model with custom embedding Now, you can use this embedding file instead of the Glove embedding used in txtcls2.ipynb End of explanation
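As a final sanity check, the exported embedding can be read back the same way the Glove file is consumed, one word followed by its vector per line. A hedged sketch, assuming the file path produced above:
import gzip
import numpy as np
embeddings = {}
with gzip.open('word2vec/trained/embedding.txt.gz', 'rt') as f:
    for line in f:
        parts = line.rstrip().split(' ')
        embeddings[parts[0]] = np.array(parts[1:], dtype=np.float32)
print(len(embeddings), 'words loaded')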
5,156
Given the following text description, write Python code to implement the functionality described below step by step Description: BEM++ overview https Step1: Q Step2: What mean these arguments? The first argument is always grid object The second argument can be discontinious polynomial ("DP"), polynomial ("P") or some special function space ("DUAL") The third argument is the order of polynomial Study degrees of freedom for different types and order of function spaces. Grid function After introducing space we can obtain grid function, which is representation of data on given grid. This object consists of a set of basis function coefficients and a corresponding space object Step3: Operators Boundary operators $$ A Step4: General operator $A
Python Code: import bempp.api import numpy as np grid = bempp.api.shapes.regular_sphere(3) grid.plot() Explanation: BEM++ overview https://bempp.com/ Overview Overview presentation here. Applicable only for 3D problems Support Laplace, Helmholtz and Maxwell equations with Dirichlet and Neumann boundary conditions Support H-matrices and itertive solvers for linear systems Installation https://bitbucket.org/bemppsolutions/bempp Conda environment preparation Create conda environment conda create -n bempp python=3.6 Activate this environment source activate bempp Install in this environment NumPy stack + MPI packages Numpy Scipy Matplotlib mpi4py Cython Jupyter Notebook kernel conda install jupyter notebook Threading building block BEM++ uses TBB package, so you should install it before bulding BEM++. For OS X: brew install tbb For Ubuntu: sudo apt-get install libtbb-dev Boost Probably you will need install boost library if it had not already installed. For OS X: brew install boost For Ubuntu: sudo apt-get install libboost-all-dev Cloning source code and build from source Clone BEM++ source code in your folder with lib source codes cd your-folder git clone git@bitbucket.org:bemppsolutions/bempp.git Got to the cloned folder and build C/C++ core of BEM++ cd ./bempp/ mkdir build cd ./build cmake .. make -j4 The last command runs build process and parallel it in 4 threads. Create and install Python package After successfully finished building in the previous step go to the home folder cd .. and run python setup.py install This command initiates converting C/C++ core of BEM++ to Python package and installs it to the target place inside your environment. Check correctness of installation To check that BEM++ is installed correctly, run python inside environment, where you have installed BEM++, and import it python import bempp.api Next steps... Now we are ready to try BEM++ for solving different integral equations, but how to give integral equation for this package? Main ingredients Remember first-kind integral equation $$ \int_{\partial \Omega} \frac{q(y)}{\Vert x - y \Vert} dy = f(x), \quad x \in \partial \Omega. $$ and list steps required to solve it numerically. From IE to LS Discretization Local basis functions Test functions Operator Linear system solver Grids End of explanation space = bempp.api.function_space(grid, "DP", 0) Explanation: Q: what means argument of the function? Plot the following objects - cube - ellipsoid - rectangle with hole Import/export You can import your mesh in Gmsh file bempp.api.import_grid(filename) You can export mesh from Gmsh file bempp.api.export(grid=grid, file_name=filename) Function space After setting grid, we can define functions which are used for discretization in this mesh. End of explanation import numpy as np def fun(x, normal, domain_index, result): result[0] = np.exp(1j * x[0]) grid_fun = bempp.api.GridFunction(space, fun=fun) grid_fun.plot() Explanation: What mean these arguments? The first argument is always grid object The second argument can be discontinious polynomial ("DP"), polynomial ("P") or some special function space ("DUAL") The third argument is the order of polynomial Study degrees of freedom for different types and order of function spaces. Grid function After introducing space we can obtain grid function, which is representation of data on given grid. 
This object consists of a set of basis function coefficients and a corresponding space object End of explanation slp = bempp.api.operators.boundary.laplace.single_layer(space, space, space) scaled_operator = 1.5 * slp sum_operator = slp + slp squared_operator = slp * slp Explanation: Operators Boundary operators $$ A: D \to R $$ is defined by domain space ($D$), range space ($R$) and dual-to-range ($V$). The last space is neccessary for weak reformulation and it will be discussed later. Potential operators map from given space to set of external points Operators in BEM++ All available operators are described here Main groups of operators are operators for Laplace equation, for Maxwell equations and for Helmholtz equation Algebra in operators is implemented, so you can perform the following operatios with operators: sum, multiply by scalar, squared and others... Lazy evaluation: discretization is performed not after defining operators, but only at the moment of solving linear system End of explanation slp_discrete = slp.weak_form() print("Shape of the matrix: {0}".format(slp_discrete.shape)) print("Type of the matrix: {0}".format(slp_discrete.dtype)) x = np.random.rand(slp_discrete.shape[1]) y = slp_discrete * x Explanation: General operator $A: D \to R$ mapping has the form $$ Au= f, $$ where $u \in D$, $f \in R$ We can inner multiply both side on some function from dual space (here we get dual-to-rage space!) and get $$ \langle Au, v\rangle = \langle f, v\rangle $$ In case of integral equation $$ \langle Au, v\rangle = \int_{\Gamma} Au\bar{v}(y)dy. $$ End of explanation
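Since slp_discrete only exposes a matrix-vector product, a natural next step is to hand it to an iterative solver. A sketch using SciPy's GMRES; the right-hand side b is a made-up placeholder here, standing in for the projected data of a real boundary-value problem:
from scipy.sparse.linalg import LinearOperator, gmres
n = slp_discrete.shape[0]
A = LinearOperator((n, n), matvec=lambda x: slp_discrete * x, dtype=slp_discrete.dtype)
b = np.random.rand(n)  # placeholder right-hand side
solution, info = gmres(A, b, tol=1e-8)
print('GMRES converged' if info == 0 else 'GMRES returned info {0}'.format(info))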
5,157
Given the following text description, write Python code to implement the functionality described below step by step Description: DV360 Automation Step1: 0.2 Setup your GCP project To utilise the DV360 API, you need a Google Cloud project. For the purpose of this workshop, we've done this for you, but normally you'd have to complete the following steps, before you can make requests using the DV360 API Step2: 0.4 Set DV360 account settings Next, we need to set our DV360 parameters, and generate a sandbox (test) campaign. Note, if you'd prefer to use an existing campaign, update CAMPAIGN_ID below. Step4: Create a new 'sandbox' campaign to use with the rest of the exercises Executing the following code block will overwrite any CAMPAIGN_ID used above. Step5: 1A) SDF using DBM API (sunset) Important Step6: Define a boilerplate targeting template that all Line Items should adhere too Step7: Modify latest SDF LineItems file and update the columns according to the targeting template Step8: Upload the output .csv file in the DV360 UI Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details' 1.3 SDF + Entity Read Files What are Entity Read Files (ERFs)? ERFs are flat files (.JSON) in Google Cloud Storage that contain lookup values for DV360 entities like geographies, creatives, etc. Each DV360 entity (Advertiser, Campaign, LineItem, etc) has a corresponding .JSON file in Cloud Storage retained free-of-charge for 60 days from their processing date. ERFs consist of 1 file per entity type, written x1 per day to two seperate Cloud buckets Step9: Retrieve a list of country codes / IDs from GeoLocation.json for each of our store locations Step10: Download the latest SDF LineItems (because we've made changes since our last download) Step11: Modify the contents of the latest SDF output, then save a new CSV with updated Geo Targeting IDs Step12: Upload the output .csv file in the DV360 UI Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details' 1.4 SDF + Cloud Vision API Next, let's look at how we you can utilise external APIs. Download the 'product_feed' tab from Google Store as CSV (File >> Download >> CSV) Execute the following code block and upload 'product_feed.csv' This will create a new Python dictionary (key Step14: Define a function to send images to the Cloud Vision API Step15: Run our images through the function, and return a lookup table Step16: Now we have our new labels from the Vision API, we need to write these into the keywords targeting field Step18: Upload the output .csv file in the DV360 UI Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details' 1.5 Optimisation using Reports Next, we'll look at how you could combine reporting data, with operations such as optimising bid multipliers or deactivating activity. Note Step19: Download an updated SDF LineItems file, and if the LineItem ID is in the poor performers list, add a Geo bid multiplier to half the bids (0.5) Step20: Note the only rows included in the output, are those that we want to modify. 
Upload the output .csv file in the DV360 UI Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details' 1.6 Challenge Challenge Step21: Solution Step23: Upload the output .csv file in the DV360 UI 1B) SDF using DV360 API Reference Step24: Define a boilerplate targeting template that all Line Items should adhere too Step25: Modify latest SDF LineItems file and update the columns according to the targeting template Step26: Upload the output .csv file in the DV360 UI Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details' 1.3 SDF + Entity Read Files What are Entity Read Files (ERFs)? ERFs are flat files (.JSON) in Google Cloud Storage that contain lookup values for DV360 entities like geographies, creatives, etc. Each DV360 entity (Advertiser, Campaign, LineItem, etc) has a corresponding .JSON file in Cloud Storage retained free-of-charge for 60 days from their processing date. ERFs consist of 1 file per entity type, written x1 per day to two seperate Cloud buckets Step27: Retrieve a list of country codes / IDs from GeoLocation.json for each of our store locations Step28: Download the latest SDF LineItems (because we've made changes since our last download) Step29: Modify the contents of the latest SDF output, then save a new CSV with updated Geo Targeting IDs Step30: Upload the output .csv file in the DV360 UI Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details' 1.4 SDF + Cloud Vision API Next, let's look at how we you can utilise external APIs. Download the 'product_feed' tab from Google Store as CSV (File >> Download >> CSV) Execute the following code block and upload 'product_feed.csv' This will create a new Python dictionary (key Step32: Define a function to send images to the Cloud Vision API Step33: Run our images through the function, and return a lookup table (reference) Step34: View the results of our Vision analysis Step35: Download the latest SDF LineItems (because we've made changes since our last download) Step36: Now we have our new labels from the Vision API, we need to write these into the keywords targeting field Step37: Upload the output .csv file in the DV360 UI Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details' 1.5 Optimisation using Reports Next, we'll look at how you could combine reporting data, with operations such as optimising bid multipliers or deactivating activity. Note Step38: Download an updated SDF LineItems file, and if the LineItem ID is in the poor performers list, add a Geo bid multiplier to half the bids (0.5) Step39: Upload the output .csv file in the DV360 UI Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details' 1.6 Challenge Challenge Step40: Solution Step42: Upload the output .csv file in the DV360 UI 2) Display & Video 360 API What is the Display & Video 360 API? The Display & Video 360 API (formly known as the DV360 Write API) is the programmatic interface for the Display & Video 360 platform. It allows developers to easily and efficiently automate complex Display & Video 360 workflows, such as creating insertion orders and setting targeting options for individual line items. We'll use it now to build upon the campaign we created earlier using SDF. 
Reference Step43: Upload the extended feed for Google Store's new territories Step45: Create Insertion Order template Here we're defining a new a function called 'create_insertion_order'. Note Step47: Create LineItem template Here we define a new function called 'create_lineitem', based on a template we specified. Note Step48: Build our new campaign First, we'll loop through the list of countries generated at the beginning, and for each country, create a new Insertion Order by calling our function 'create_insertion_order'. Within that loop, we find every product that is sold in the corresponding country-code, and create a new Line Item for every matching product using our function 'create_lineitem'. Sit tight, this one can take a while (~10 mins)... Link to DV360 UI Step49: If successful, the result should look similar to the below in DV360 Step50: Apply individual targeting criteria to single entity Step51: Applying individual targeting criteria to multiple entities Step53: 2.3 Bulk targeting Bulk updates using templated targeting controls Step54: Retrieve list of active LineItems, and Apply bulk targeting Step56: 2.4 Optimisation (external trigger) The following optimisations will be completed on your campaign, created earlier. Create functions to 'deactivate' or 'optimise' Lineitems Step57: Creat list of out of stock products Step58: Process optimisation Step59: 2.5 Optimisation (reporting data) As your new campaign has no performance data, the following optimisations will be completed on an existing campaign with historical data. Create new performance report and fetch results Step60: Load report to Pandas DataFrame Step61: Create two lists of poorly performing LineItems 1. LineItems that should be paused 2. Lineitems to reduce bids Step62: Process optimisation Step64: 2.6 Creative upload Uploading Display creatives from remote storage (http) The following demonstrates how to upload image assets from remote storage, but it's also possible to upload from local storage. Reference Step65: Upload image creatives Note, all of the following assets are the same dimension (300x250) and type 'CREATIVE_TYPE_STANDARD'. When uploading assets of multiple sizes, the creatives.create body must reflect this. Step66: 2.7 Challenge Challenge Step67: Solution Step73: Link to DV360 UI Resources Getting started with SDF in DV360 guide Structured Data Files (SDF) developer guide Getting started with the Display & Video 360 API developer guide Getting started with the DoubleClick Bid Manager API developer guide How to access Entity Read Files Quickstart
Python Code: !pip install google-api-python-client !pip install google-cloud-vision import csv import datetime import io import json import pprint from google.api_core import retry from google.cloud import vision from google.colab import files from google_auth_oauthlib.flow import InstalledAppFlow from googleapiclient import discovery from googleapiclient import http import pandas as pd import requests print('Successfully imported Python libraries!') Explanation: DV360 Automation: codelab Author: Matt Lynam Objective Enable Display & Video 360 (DV360) advertisers to increase workflow efficiency by utilising the right automation solution according to their needs, resources and technical capability. Goals * Provide an overview of the current automation suite available in DV360 * Demonstrate the capabilities and limitations of DV360's UI and APIs * Explore common advertiser use cases and pitfalls * Acquire hands-on experience by applying key concepts using a fictional case study 0) Setup and authentication Google Colab primer Google Colaboratory, or "Colab" for short, allows you to write and execute Python in your browser, with: - Zero configuration required - Free access to GPUs - Easy sharing & colaboration A notebook is a list of cells, containing either explanatory text or executable code and its output. This is a text cell. Useful Colab tips * Double-click within the cell to edit * Code cells can be executed by clicking the Play icon in the left gutter of the cell; or with Cmd/Ctrl + Enter to run the cell in place; * Use Cmd/Ctrl + / to comment out a line of code 0.1 Install Python client libraries Run the following block to install the latest Google Python Client Library and import additional libraries used for this workshop. End of explanation API_SCOPES = ['https://www.googleapis.com/auth/doubleclickbidmanager', 'https://www.googleapis.com/auth/display-video', 'https://www.googleapis.com/auth/devstorage.read_only', 'https://www.googleapis.com/auth/cloud-vision'] # Authenticate using user credentials stored in client_secrets.json client_secrets_file = files.upload() client_secrets_json = json.loads(next(iter(client_secrets_file.values()))) flow = InstalledAppFlow.from_client_config(client_secrets_json, API_SCOPES) credentials = flow.run_console() print('Success!') # Build DBM Read API service object dbm_service = discovery.build( 'doubleclickbidmanager', 'v1.1', credentials=credentials) print('DBM API service object created') # Build Google Cloud Storage Read API service object gcs_service = discovery.build('storage', 'v1', credentials=credentials) print('GCS service object created') # Create Display Video API service object display_video_service = discovery.build( 'displayvideo', 'v1', credentials=credentials) print('Display Video API service object created') Explanation: 0.2 Setup your GCP project To utilise the DV360 API, you need a Google Cloud project. For the purpose of this workshop, we've done this for you, but normally you'd have to complete the following steps, before you can make requests using the DV360 API: Select or create a Google Cloud Platform project. Enable billing on your project. Enable the 'Display & Video 360' and 'DoubleClick Bid Manager' API from the API library Create GCP credentials We've also generated credentials for you, but if you needed to generate new credentials, this would be the process: Go to the API credentials page in the Cloud Platform Console. Fill out the required fields on the OAuth consent screen. 
On the credentials page, click Create credentials >> OAuth client ID. Select Other as the application type, and then click Create. Download the credentials by clicking the Download JSON button Reference: https://developers.google.com/display-video/api/guides/how-tos/authorizing 0.3 Authentication Next, we'll permission the application to submit authorised API requests on our behalf using OAuth authentication. The following scopes are specified in an array: * DBM API * Display Video API * GCP Storage Read * Cloud Vision API Reference: * Example OAuth2 Python Library * Google scopes End of explanation PARTNER_ID = '234340' #@param {type:"string"} ADVERTISER_ID = '2436036' #@param {type:"string"} CAMPAIGN_ID = '4258803' #@param {type:"string"} # For use with legacy DBM API SDF_VERSION = '5.3' #@param {type:"string"} # For use with DV360 API SDF_VERSION_DV360 = 'SDF_VERSION_5_3' #@param {type:"string"} print('DV360 settings saved!') Explanation: 0.4 Set DV360 account settings Next, we need to set our DV360 parameters, and generate a sandbox (test) campaign. Note, if you'd prefer to use an existing campaign, update CAMPAIGN_ID below. End of explanation YOUR_NAME = 'Matt' #@param {type:"string"} # Set dates for new campaign month = datetime.datetime.today().strftime('%m') day = datetime.datetime.today().strftime('%d') year = datetime.datetime.today().strftime('%Y') month_plus30 = (datetime.datetime.today() + datetime.timedelta(days=30)).strftime('%m') day_plus30 = (datetime.datetime.today() + datetime.timedelta(days=30)).strftime('%d') year_plus30 = (datetime.datetime.today() + datetime.timedelta(days=30)).strftime('%Y') def create_campaign(YOUR_NAME): Creates a new DV360 Campaign object. campaign_name = f'{year}-{month}-{day} | {YOUR_NAME}' campaign_obj = { 'displayName': campaign_name, 'entityStatus': 'ENTITY_STATUS_ACTIVE', 'campaignGoal': { 'campaignGoalType': 'CAMPAIGN_GOAL_TYPE_ONLINE_ACTION', 'performanceGoal': { 'performanceGoalType': 'PERFORMANCE_GOAL_TYPE_CPC', 'performanceGoalAmountMicros': 1000000 } }, 'campaignFlight': { 'plannedSpendAmountMicros': 1000000, 'plannedDates': { 'startDate': { 'year': year, 'month': month, 'day': day }, 'endDate': { 'year': year_plus30, 'month': month_plus30, 'day': day_plus30 } } }, 'frequencyCap': { 'maxImpressions': 10, 'timeUnit': 'TIME_UNIT_DAYS', 'timeUnitCount': 1 } } # Create the campaign. campaign = display_video_service.advertisers().campaigns().create( advertiserId=ADVERTISER_ID, body=campaign_obj ).execute() return campaign new_campaign = create_campaign(YOUR_NAME) # Display the new campaign. CAMPAIGN_ID = new_campaign['campaignId'] print(f"\nCampaign '{new_campaign['name']}' was created." f"\nCampaign id: '{new_campaign['campaignId']}'" f"\nCampaign name: '{new_campaign['displayName']}'" f"\nCampaign status: '{new_campaign['entityStatus']}'") Explanation: Create a new 'sandbox' campaign to use with the rest of the exercises Executing the following code block will overwrite any CAMPAIGN_ID used above. 
End of explanation # Configure the sdf.download request request_body = { 'fileTypes': ['LINE_ITEM'], 'filterType': 'CAMPAIGN_ID', 'filterIds': [CAMPAIGN_ID], 'version': SDF_VERSION } # Make the request to download all SDF LineItems for your new campaign request = dbm_service.sdf().download(body=request_body) response = request.execute() # Load SDF response to Pandas DataFrame sdf_df = pd.read_csv(io.StringIO(response['lineItems'])) # Show sample (5 rows) of DataFrame sdf_df.head() Explanation: 1A) SDF using DBM API (sunset) Important: the SDF resource (sdf.download) for the DBM API has migrated to a new endpoint (displayvideo.googleapis.com). SDF methods using this (doubleclickbidmanager.googleapis.com) endpoint have been sunset, and will not be updated moving forward. Please follow track 1B, for code samples using the DV360 API. Reference: https://developers.google.com/bid-manager/v1.1/sdf Structured Data Files (SDF) are a way of using spreadsheets to make bulk changes to DV360 entities, including Campaigns, Insertion Orders, Line Items, TrueView Ad Groups, TrueView Ads and deals. SDF are the first step on the path to full automation in DV360, but only allow you to automate so far, as we'll explore now... 1.1 Manually create SDF Create a copy of the Google Store product feed Update the highlighted cells (B2:B3) on the tab called "sdf_insertionorders" Save the updated "sdf_insertionorders" tab and "sdf_lineitems" tab to .CSV (File >> Download >> CSV) Upload the two .CSV files together in the DV360 UI This will create a very basic campaign, with 2 insertion orders, and 10 lineitems per insertion order. 1.2 Editing SDF programmatically Our new LineItems are missing some important targeting and inventory controls: * Channels (e.g. groups of publisher URLs) * Inventory source * Brand safety * Geo targeting Let’s use software to make these changes for us... 
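Once it has run, you can sanity-check the result with a campaigns.get() call. A small sketch, using the CAMPAIGN_ID captured from the response above:

sanity_check = display_video_service.advertisers().campaigns().get(
    advertiserId=ADVERTISER_ID, campaignId=CAMPAIGN_ID).execute()
# Should echo back the display name and ENTITY_STATUS_ACTIVE
print(sanity_check['displayName'], sanity_check['entityStatus'])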
End of explanation
# Configure the sdf.download request
request_body = {
'fileTypes': ['LINE_ITEM'],
'filterType': 'CAMPAIGN_ID',
'filterIds': [CAMPAIGN_ID],
'version': SDF_VERSION
}
# Make the request to download all SDF LineItems for your new campaign
request = dbm_service.sdf().download(body=request_body)
response = request.execute()
# Load SDF response to Pandas DataFrame
sdf_df = pd.read_csv(io.StringIO(response['lineItems']))
# Show sample (5 rows) of DataFrame
sdf_df.head()
Explanation: 1A) SDF using DBM API (sunset)
Important: the SDF resource (sdf.download) for the DBM API has migrated to a new endpoint (displayvideo.googleapis.com). SDF methods using this (doubleclickbidmanager.googleapis.com) endpoint have been sunset, and will not be updated moving forward. Please follow track 1B for code samples using the DV360 API.
Reference: https://developers.google.com/bid-manager/v1.1/sdf
Structured Data Files (SDF) are a way of using spreadsheets to make bulk changes to DV360 entities, including Campaigns, Insertion Orders, Line Items, TrueView Ad Groups, TrueView Ads and deals. SDF are the first step on the path to full automation in DV360, but only allow you to automate so far, as we'll explore now...
1.1 Manually create SDF
Create a copy of the Google Store product feed
Update the highlighted cells (B2:B3) on the tab called "sdf_insertionorders"
Save the updated "sdf_insertionorders" tab and "sdf_lineitems" tab to .CSV (File >> Download >> CSV)
Upload the two .CSV files together in the DV360 UI
This will create a very basic campaign, with 2 insertion orders, and 10 lineitems per insertion order.
1.2 Editing SDF programmatically
Our new LineItems are missing some important targeting and inventory controls:
* Channels (e.g. groups of publisher URLs)
* Inventory source
* Brand safety
* Geo targeting
Let’s use software to make these changes for us...
End of explanation
targeting_template = {
'Channel Targeting - Include': '2580510;',
'Channel Targeting - Exclude': '2580509;',
'Inventory Source Targeting - Include': '1;',
'Inventory Source Targeting - Exclude':
'6; 8; 9; 10; 2; 11; 12; 13; 16; 20; 23; 27; 29; 30; 31; 34; 35; 36; '
'38; 43; 46; 50; 51; 56; 60; 63; 67; 74;',
'Digital Content Labels - Exclude': 'G; PG; T;',
'Brand Safety Sensitivity Setting': 'Use custom',
'Brand Safety Custom Settings':
'Adult; Alcohol; Derogatory; Downloads & Sharing; Drugs; Gambling; '
'Politics; Profanity; Religion; Sensitive social issues; Suggestive; '
'Tobacco; Tragedy; Transportation Accidents; Violence; Weapons;'
}
Explanation: Define a boilerplate targeting template that all Line Items should adhere to
End of explanation
# Overwrite targeting columns using 'targeting_template'
sdf_df['Channel Targeting - Include'] = targeting_template[
'Channel Targeting - Include']
sdf_df['Channel Targeting - Exclude'] = targeting_template[
'Channel Targeting - Exclude']
sdf_df['Inventory Source Targeting - Include'] = targeting_template[
'Inventory Source Targeting - Include']
sdf_df['Inventory Source Targeting - Exclude'] = targeting_template[
'Inventory Source Targeting - Exclude']
sdf_df['Digital Content Labels - Exclude'] = targeting_template[
'Digital Content Labels - Exclude']
sdf_df['Brand Safety Sensitivity Setting'] = targeting_template[
'Brand Safety Sensitivity Setting']
sdf_df['Brand Safety Custom Settings'] = targeting_template[
'Brand Safety Custom Settings']
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update1_controls.csv', index=False)
# Show sample (5 rows) of DataFrame
sdf_df.head()
# Download modified csv to local storage
files.download('sdf_update1_controls.csv')
print(
"Success, check your downloads for a file called 'sdf_update1_controls.csv'"
)
Explanation: Modify latest SDF LineItems file and update the columns according to the targeting template
End of explanation
# Actually today-7 to avoid issues with collection
yesterday = datetime.date.today() - datetime.timedelta(7)
# Download public ERF for geolocation info
request = gcs_service.objects().get_media(
bucket='gdbm-public',
object='entity/' + yesterday.strftime('%Y%m%d') + '.0.GeoLocation.json')
response = request.execute()
geolocations = json.loads(response)
print('GeoLocation.json successfully downloaded \n')
print("Here are the first 5 entries:\n")
pprint.pprint(geolocations[0:5])
Explanation: Upload the output .csv file in the DV360 UI
Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details'
1.3 SDF + Entity Read Files
What are Entity Read Files (ERFs)?
ERFs are flat files (.JSON) in Google Cloud Storage that contain lookup values for DV360 entities like geographies, creatives, etc. Each DV360 entity (Advertiser, Campaign, LineItem, etc.) has a corresponding .JSON file in Cloud Storage retained free-of-charge for 60 days from their processing date.
ERFs consist of 1 file per entity type, written once per day to two separate Cloud buckets:
Public (10 .JSON files) - contain common public data such as GeoLocation and Language which are stored in the gdbm-public bucket (the same bucket for every DV360 user).
Private (13 .JSON files) - contain information about the DV360 Partner's campaigns, creatives, budgets and other private data and are stored in Partner-specific buckets (restricted to specific users)
Reference: https://developers.google.com/bid-manager/guides/entity-read/overview
ERFs can be used to speed up, and automate, the creation of SDF files. Let's explore this now...
Download yesterday's GeoLocation.json from public ERF bucket using Google Cloud Storage API
End of explanation
# Provide a list of store locations
store_locations = ['United Kingdom', 'France', 'Spain', 'Germany', 'Portugal']
# Create a new dictionary to save the country code and ID later on
geo_targeting_ids = {}
# Note: GeoLocation.json is over 800,000 lines
for location in geolocations:
if location['canonical_name'] in store_locations:
geo_targeting_ids[location['country_code']] = location['id']
print(location)
print(geo_targeting_ids)
Explanation: Retrieve a list of country codes / IDs from GeoLocation.json for each of our store locations
End of explanation
# Configure the sdf.download request
request_body = {
'fileTypes': ['LINE_ITEM'],
'filterType': 'CAMPAIGN_ID',
'filterIds': [CAMPAIGN_ID],
'version': SDF_VERSION
}
# Make the request to download all SDF LineItems for your new campaign
request = dbm_service.sdf().download(body=request_body)
response = request.execute()
# Load SDF response to Pandas DataFrame
sdf_df = pd.read_csv(io.StringIO(response['lineItems']))
# Show sample (5 rows) of DataFrame
sdf_df.head()
Explanation: Download the latest SDF LineItems (because we've made changes since our last download)
End of explanation
for country in geo_targeting_ids:
target_country = geo_targeting_ids[country]
sdf_df.loc[sdf_df.Name.str.contains(country),
'Geography Targeting - Include'] = f'{target_country};'
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update2_geo.csv', index=False)
# Display updated DataFrame
sdf_df.head()
# Download modified csv to local storage
files.download('sdf_update2_geo.csv')
print("Success, look for a file called 'sdf_update2_geo.csv' in your downloads folder")
Explanation: Modify the contents of the latest SDF output, then save a new CSV with updated Geo Targeting IDs
End of explanation
# Upload product feed using Colab's upload utility
product_feed_csv = files.upload()
contents = next(iter(product_feed_csv.values())).decode('utf-8')
products = csv.DictReader(io.StringIO(contents))
image_url_list = {}
# Iterate through each row and update dict() with sku:link
for row in products:
image_url_list[row['sku']] = row['image_link']
pprint.pprint(image_url_list)
Explanation: Upload the output .csv file in the DV360 UI
Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details'
1.4 SDF + Cloud Vision API
Next, let's look at how you can utilise external APIs.
Download the 'product_feed' tab from Google Store as CSV (File >> Download >> CSV)
Execute the following code block and upload 'product_feed.csv'
This will create a new Python dictionary (key:value pairing), mapping SKUs with their image link
Warning: Cloud Vision API is a paid product; utilising the following example in your own Cloud project will incur costs. Try out the Cloud Vision API for free at cloud.google.com/vision
End of explanation
def vision_analysis(image_url):
Process images using the Cloud Vision API.
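# Note: two parallel lists are returned below -- the raw label
# descriptions (used later as DV360 keyword targeting values) and the
# same labels annotated with the model's confidence score for display.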
# Assign image URL image = vision.Image() image.source.image_uri = image_url # Instantiates a Vision client client = vision.ImageAnnotatorClient(credentials=credentials) # Performs label detection on the image file vision_response = client.label_detection(image=image) dv360_targeting_keywords = [] labels = [] for label in vision_response.label_annotations: dv360_targeting_keywords.append(label.description) label = f'{label.description} ({label.score:.2%})' labels.append(label) return dv360_targeting_keywords, labels Explanation: Define a function to send images to the Cloud Vision API End of explanation imageslookup = {} for sku, url in image_url_list.items(): imageslookup[sku], vision_labels = vision_analysis(url) print(f'Analysis completed for: {url}') print('Labels (confidence score):') pprint.pprint(vision_labels, indent=4) print('=' * 30) print('\n\nLookup table:') pprint.pprint(imageslookup, indent=4) Explanation: Run our images through the function, and return a lookup table End of explanation # Configure the sdf.download request request_body = { 'fileTypes': ['LINE_ITEM'], 'filterType': 'CAMPAIGN_ID', 'filterIds': [CAMPAIGN_ID], 'version': SDF_VERSION } request = dbm_service.sdf().download(body=request_body) response = request.execute() # Load SDF response to Pandas DataFrame sdf_df = pd.read_csv(io.StringIO(response['lineItems'])) for product in imageslookup: sdf_df.loc[sdf_df.Name.str.contains(product), 'Keyword Targeting - Include'] = ';'.join( imageslookup[product]).lower() # Save modified dataframe to remote storage in Colab sdf_df.to_csv('sdf_update3_keywords.csv', index=False) # Show sample (5 rows) of DataFrame sdf_df.head() # Download modified csv to local storage files.download('sdf_update3_keywords.csv') print("Success, look for the file called 'sdf_update3_keywords.csv' in your downloads folder") Explanation: Now we have our new labels from the Vision API, we need to write these into the keywords targeting field End of explanation # Define DV360 report definition (i.e. metrics and filters) report_definition = { 'params': { 'type': 'TYPE_GENERAL', 'metrics': [ 'METRIC_IMPRESSIONS', 'METRIC_CLICKS', 'METRIC_CTR', 'METRIC_REVENUE_ADVERTISER' ], 'groupBys': [ 'FILTER_ADVERTISER', 'FILTER_INSERTION_ORDER', 'FILTER_LINE_ITEM', 'FILTER_ADVERTISER_CURRENCY' ], 'filters': [{ 'type': 'FILTER_ADVERTISER', 'value': ADVERTISER_ID }], }, 'metadata': { 'title': 'DV360 Automation API-generated report', 'dataRange': 'LAST_90_DAYS', 'format': 'csv' }, 'schedule': { 'frequency': 'ONE_TIME' } } # Create new query using report definition operation = dbm_service.queries().createquery(body=report_definition).execute() pprint.pprint(operation) # Runs the given Queries.getquery request, retrying with an exponential # backoff. Returns completed operation. Will raise an exception if the # operation takes more than five hours to complete. @retry.Retry( predicate=retry.if_exception_type(Exception), initial=5, maximum=60, deadline=18000) def check_get_query_completion(getquery_request): Queries metadata to check for completion. 
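# While the report query is still running, raising an exception here
# triggers another attempt under the @retry.Retry policy applied above.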
completion_response = getquery_request.execute() pprint.pprint(completion_response) if completion_response['metadata']['running']: raise Exception('The operation has not completed.') return completion_response getquery_request = dbm_service.queries().getquery(queryId=operation['queryId']) getquery_response = check_get_query_completion(getquery_request) report_url = getquery_response['metadata'][ 'googleCloudStoragePathForLatestReport'] # Use skipfooter to remove report footer from data report_df = pd.read_csv(report_url, skipfooter=16, engine='python') report_df.head(10) # Define our 'KPIs' ctr_target = 0.15 imp_threshold = 10000 # Convert IDs to remove decimal point, then string report_df['Line Item ID'] = report_df['Line Item ID'].apply(int) poor_performers = report_df.query( 'Impressions > @imp_threshold & (Clicks / Impressions)*100 < @ctr_target') # Convert results to Python list poor_performers = list(poor_performers['Line Item ID']) print(f'There are {len(poor_performers)} LineItems with a CTR' f' < {ctr_target}% and over {imp_threshold} impressions:' f'\n{poor_performers}') Explanation: Upload the output .csv file in the DV360 UI Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details' 1.5 Optimisation using Reports Next, we'll look at how you could combine reporting data, with operations such as optimising bid multipliers or deactivating activity. Note: your new campaign has no performance history, so we'll use an existing campaign for this exercise. End of explanation # Configure the sdf.download request request_body = { 'fileTypes': ['LINE_ITEM'], 'filterType': 'CAMPAIGN_ID', 'filterIds': ['1914007'], 'version': SDF_VERSION } request = dbm_service.sdf().download(body=request_body) response = request.execute() # Load SDF response to Pandas DataFrame sdf_df = pd.read_csv(io.StringIO(response['lineItems'])) for li in poor_performers: geo = sdf_df.loc[sdf_df['Line Item Id'] == li, 'Geography Targeting - Include'].iloc[0] sdf_df.loc[sdf_df['Line Item Id'] == li, 'Bid Multipliers'] = f'(geo; {geo} 0.5;);' # Save modified dataframe to remote storage in Colab sdf_df.to_csv('sdf_update4_bidmultipliers.csv', index=False) # Display updated DataFrame sdf_df.head() files.download('sdf_update4_bidmultipliers.csv') print('Success, your new SDF file has been downloaded') Explanation: Download an updated SDF LineItems file, and if the LineItem ID is in the poor performers list, add a Geo bid multiplier to half the bids (0.5) End of explanation #TODO Explanation: Note the only rows included in the output, are those that we want to modify. Upload the output .csv file in the DV360 UI Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details' 1.6 Challenge Challenge: update your campaign with both language and audience targeting. 
All Lineitems should target the following Google audiences
Affinity Categories » Technology » Mobile Enthusiasts
Affinity Categories » Technology » Technophiles » High-End Computer Aficionado
In-Market Categories » Consumer Electronics
LineItems for France should be targeted at French speakers
LineItems for Great Britain should be targeted at English speakers
Tips
Google Audience IDs can be found in the DV360 UI or by downloading an SDF with an existing audience applied
Language IDs can be found in the Language.json ERF file or by downloading an SDF with the language already applied
End of explanation
# Format today-2 in required date format
yesterday = (datetime.date.today() - datetime.timedelta(2)).strftime('%Y%m%d')
# Download ERF for Language.json from public GCS bucket
request = gcs_service.objects().get_media(
bucket='gdbm-public', object='entity/' + yesterday + '.0.Language.json')
response = request.execute()
languages = json.loads(response)
language_targets = ['en', 'fr']
lang_targeting_ids = {}
# Search language.json for language targets 'en' and 'fr'
for lang in languages:
if lang['code'] in language_targets:
lang_targeting_ids[lang['code']] = lang['id']
print(lang)
print(lang_targeting_ids)
# Define targeting template
targeting_template = {
'Affinity & In Market Targeting - Include': '4569529;4586809;4497529;',
}
# Configure the sdf.download request
request_body = {
'fileTypes': ['LINE_ITEM'],
'filterType': 'CAMPAIGN_ID',
'filterIds': [CAMPAIGN_ID],
'version': SDF_VERSION
}
request = dbm_service.sdf().download(body=request_body)
response = request.execute()
# Load SDF response to Pandas DataFrame
sdf_df = pd.read_csv(io.StringIO(response['lineItems']))
# Update DataFrame with Language and Audience targeting
sdf_df.loc[sdf_df.Name.str.contains('GB'),
'Language Targeting - Include'] = f"{lang_targeting_ids['en']};"
sdf_df.loc[sdf_df.Name.str.contains('FR'),
'Language Targeting - Include'] = f"{lang_targeting_ids['fr']};"
sdf_df['Affinity & In Market Targeting - Include'] = targeting_template[
'Affinity & In Market Targeting - Include']
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update5_challenge.csv', index=False)
# Display updated DataFrame
sdf_df.head()
# Download file to disk using Colab syntax
files.download('sdf_update5_challenge.csv')
print("Success, check your downloads for a file called 'sdf_update5_challenge.csv'")
Explanation: Solution
End of explanation
def download_sdf(sdf_body):
Download the sdf .zip, extract the .csv files, and load the LineItems into a Pandas DataFrame.
# Create the sdfdownloadtask
sdf_operation = display_video_service.sdfdownloadtasks().create(
body=sdf_body).execute()
print(f'Operation {sdf_operation["name"]} was created.')
# Configure the operations.get request
get_request = display_video_service.sdfdownloadtasks().operations().get(
name=sdf_operation['name'])
# Runs the given operations.get request, retrying with an exponential
# backoff. Returns completed operation. Will raise an exception if the
# operation takes more than five hours to complete.
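# initial/maximum below are the backoff bounds in seconds; deadline
# (18000 seconds = 5 hours) caps the total time spent polling.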
@retry.Retry(predicate=retry.if_exception_type(Exception),
initial=5,
maximum=60,
deadline=18000)
def check_sdf_downloadtask_completion(get_request):
operation = get_request.execute()
if 'done' not in operation:
raise Exception('The operation has not completed.')
return operation
# Get current status of operation with exponential backoff retry logic
operation = check_sdf_downloadtask_completion(get_request)
# Check if the operation finished with an error and return
if 'error' in operation:
raise Exception(f'The operation finished in error with code '
f'{operation["error"]["code"]} {operation["error"]["message"]}')
print('The operation completed successfully.')
print(f'Resource {operation["response"]["resourceName"]} was created.')
# Extract download file resource name to use in download request
resource_name = operation['response']['resourceName']
# Configure the Media.download request
download_request = display_video_service.media().download_media(
resourceName=resource_name)
output_file = f"{resource_name.replace('/','-')}.zip"
# Create output stream for downloaded file
outstream = io.FileIO(output_file, mode='wb')
# Make downloader object
downloader = http.MediaIoBaseDownload(outstream, download_request)
# Download media file in chunks until finished
download_finished = False
while download_finished is False:
_, download_finished = downloader.next_chunk()
print(f'File downloaded to {output_file}')
# Load output into a Pandas dataframe
df = pd.read_csv(output_file, compression='zip')
return df
print('Download SDF function created')
Explanation: Upload the output .csv file in the DV360 UI
1B) SDF using DV360 API
Reference: https://developers.google.com/display-video/api/reference/rest/v1/sdfdownloadtasks/create
Structured Data Files (SDF) are a way of using spreadsheets to make bulk changes to DV360 entities, including Campaigns, Insertion Orders, Line Items, TrueView Ad Groups, TrueView Ads and deals. SDF are the first step on the path to full automation in DV360, but only allow you to automate so far, as we'll explore now...
1.1 Manually create SDF
Create a copy of the Google Store product feed
Update the highlighted cells (B2:B3) on the tab called "sdf_insertionorders"
Save the updated "sdf_insertionorders" tab and "sdf_lineitems" tab to .CSV (File >> Download >> CSV)
Upload the two .CSV files together in the DV360 UI
This will create a very basic campaign, with 2 insertion orders, and 10 lineitems per insertion order.
1.2 Editing SDF programmatically
Our new LineItems are missing some important targeting and inventory controls:
* Channels (e.g. groups of publisher URLs)
* Inventory source
* Brand safety
* Geo targeting
Let’s use software to make these changes for us...
Create a function to download SDFs
As we'll be downloading multiple SDF files in the next exercises, we've created a function to handle the download process for us.
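As a quick illustration before the next cells use it for real, a call to the helper looks like this (the request body mirrors the sdfdownloadtasks.create cells that follow):

sdf_body = {
    'version': SDF_VERSION_DV360,
    'advertiserId': ADVERTISER_ID,
    'parentEntityFilter': {
        'fileType': ['FILE_TYPE_LINE_ITEM'],
        'filterType': 'FILTER_TYPE_CAMPAIGN_ID',
        'filterIds': [CAMPAIGN_ID]
    }
}
# Returns the downloaded LineItems as a Pandas DataFrame
sdf_df = download_sdf(sdf_body)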
End of explanation targeting_template = { 'Channel Targeting - Include': '2580510;', 'Channel Targeting - Exclude': '2580509;', 'Inventory Source Targeting - Include': '1;', 'Inventory Source Targeting - Exclude': '6; 8; 9; 10; 2; 11; 12; 13; 16; 20; 23; 27; 29; 30; 31; 34; 35; 36; ' '38; 43; 46; 50; 51; 56; 60; 63; 67; 74;', 'Digital Content Labels - Exclude': 'G; PG; T;', 'Brand Safety Sensitivity Setting': 'Use custom', 'Brand Safety Custom Settings': 'Adult; Alcohol; Derogatory; Downloads & Sharing; Drugs; Gambling; ' 'Politics; Profanity; Religion; Sensitive social issues; Suggestive; ' 'Tobacco; Tragedy; Transportation Accidents; Violence; Weapons;' } Explanation: Define a boilerplate targeting template that all Line Items should adhere too End of explanation # Configure the sdfdownloadtasks.create request sdf_body = { 'version': SDF_VERSION_DV360, 'advertiserId': ADVERTISER_ID, 'parentEntityFilter': { 'fileType': ['FILE_TYPE_LINE_ITEM'], 'filterType': 'FILTER_TYPE_CAMPAIGN_ID', 'filterIds': [CAMPAIGN_ID] } } # Fetch updated SDF lineitem sdf_df = download_sdf(sdf_body) # Overwrite targeting columns using 'targeting_template' sdf_df['Channel Targeting - Include'] = targeting_template[ 'Channel Targeting - Include'] sdf_df['Channel Targeting - Exclude'] = targeting_template[ 'Channel Targeting - Exclude'] sdf_df['Inventory Source Targeting - Include'] = targeting_template[ 'Inventory Source Targeting - Include'] sdf_df['Inventory Source Targeting - Exclude'] = targeting_template[ 'Inventory Source Targeting - Exclude'] sdf_df['Digital Content Labels - Exclude'] = targeting_template[ 'Digital Content Labels - Exclude'] sdf_df['Brand Safety Sensitivity Setting'] = targeting_template[ 'Brand Safety Sensitivity Setting'] sdf_df['Brand Safety Custom Settings'] = targeting_template[ 'Brand Safety Custom Settings'] # Save modified dataframe to remote storage in Colab sdf_df.to_csv('sdf_update1_controls.csv', index=False) # Show sample (5 rows) of DataFrame sdf_df.head() # Download modified csv to local storage files.download('sdf_update1_controls.csv') print( "Success, check your downloads for a file called 'sdf_update1_controls.csv'" ) Explanation: Modify latest SDF LineItems file and update the columns according to the targeting template End of explanation # Actually today-7 to avoid issues with collection yesterday = datetime.date.today() - datetime.timedelta(7) # Download public ERF for geolocation info request = gcs_service.objects().get_media( bucket='gdbm-public', object='entity/' + yesterday.strftime('%Y%m%d') + '.0.GeoLocation.json') response = request.execute() geolocations = json.loads(response) print('GeoLocation.json successfully downloaded \n') print("Here's a random sample of 5 entries:\n") pprint.pprint(geolocations[0:5]) Explanation: Upload the output .csv file in the DV360 UI Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details' 1.3 SDF + Entity Read Files What are Entity Read Files (ERFs)? ERFs are flat files (.JSON) in Google Cloud Storage that contain lookup values for DV360 entities like geographies, creatives, etc. Each DV360 entity (Advertiser, Campaign, LineItem, etc) has a corresponding .JSON file in Cloud Storage retained free-of-charge for 60 days from their processing date. 
ERFs consist of 1 file per entity type, written once per day to two separate Cloud buckets:
Public (10 .JSON files) - contain common public data such as GeoLocation and Language which are stored in the gdbm-public bucket (the same bucket for every DV360 user).
Private (13 .JSON files) - contain information about the DV360 Partner's campaigns, creatives, budgets and other private data and are stored in Partner-specific buckets (restricted to specific users)
Reference: https://developers.google.com/bid-manager/guides/entity-read/overview
ERFs can be used to speed up, and automate, the creation of SDF files. Let's explore this now...
Download yesterday's GeoLocation.json from public ERF bucket using Google Cloud Storage API
End of explanation
# Provide a list of store locations
store_locations = ['United Kingdom', 'France', 'Spain', 'Germany', 'Portugal']
# Create a new dictionary to save the country code and ID later on
geo_targeting_ids = {}
# Note: GeoLocation.json is over 800,000 lines
for location in geolocations:
if location['canonical_name'] in store_locations:
geo_targeting_ids[location['country_code']] = location['id']
print(location)
print(geo_targeting_ids)
Explanation: Retrieve a list of country codes / IDs from GeoLocation.json for each of our store locations
End of explanation
# Configure the sdfdownloadtasks.create request
sdf_body = {
'version': SDF_VERSION_DV360,
'advertiserId': ADVERTISER_ID,
'parentEntityFilter': {
'fileType': ['FILE_TYPE_LINE_ITEM'],
'filterType': 'FILTER_TYPE_CAMPAIGN_ID',
'filterIds': [CAMPAIGN_ID]
}
}
sdf_df = download_sdf(sdf_body)
sdf_df.head()
Explanation: Download the latest SDF LineItems (because we've made changes since our last download)
End of explanation
for country in geo_targeting_ids:
target_country = geo_targeting_ids[country]
sdf_df.loc[sdf_df.Name.str.contains(country),
'Geography Targeting - Include'] = f'{target_country};'
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update2_geo.csv', index=False)
# Display updated DataFrame
sdf_df.head()
# Download modified csv to local storage
files.download('sdf_update2_geo.csv')
print("Success, see file 'sdf_update2_geo.csv' in your downloads folder")
Explanation: Modify the contents of the latest SDF output, then save a new CSV with updated Geo Targeting IDs
End of explanation
# Upload product feed using Colab's upload utility
product_feed_csv = files.upload()
contents = next(iter(product_feed_csv.values())).decode('utf-8')
products = csv.DictReader(io.StringIO(contents))
image_url_list = {}
# Iterate through each row and update dict() with sku:link
for row in products:
image_url_list[row['sku']] = row['image_link']
pprint.pprint(image_url_list)
Explanation: Upload the output .csv file in the DV360 UI
Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details'
1.4 SDF + Cloud Vision API
Next, let's look at how you can utilise external APIs.
Download the 'product_feed' tab from Google Store as CSV (File >> Download >> CSV)
Execute the following code block and upload 'product_feed.csv'
This will create a new Python dictionary (key:value pairing), mapping SKUs with their image link
Warning: Cloud Vision API is a paid product; utilising the following example in your own Cloud project will incur costs. Try out the Cloud Vision API for free at cloud.google.com/vision
End of explanation
def vision_analysis(image_url):
Process images using the Cloud Vision API.
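# As in track 1A: the first returned list feeds DV360 keyword
# targeting; the second adds the confidence score for readability.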
# Assign image URL image = vision.Image() image.source.image_uri = image_url # Instantiates a Vision client client = vision.ImageAnnotatorClient(credentials=credentials) # Performs label detection on the image file response = client.label_detection(image=image) dv360_targeting_keywords = [] vision_labels = [] for label in response.label_annotations: dv360_targeting_keywords.append(label.description) label = f'{label.description} ({label.score:.2%})' vision_labels.append(label) return dv360_targeting_keywords, vision_labels print("Vision function created") Explanation: Define a function to send images to the Cloud Vision API End of explanation imageslookup = {} for sku, url in image_url_list.items(): imageslookup[sku], vision_labels = vision_analysis(url) print(f'Analysis completed for: {url}') print('Labels (confidence score):') pprint.pprint(vision_labels, indent=4) print('=' * 30) Explanation: Run our images through the function, and return a lookup table (reference) End of explanation print('\n\nLookup table:') pprint.pprint(imageslookup, indent=4) Explanation: View the results of our Vision analysis: End of explanation # Configure the sdfdownloadtasks.create request sdf_body = { 'version': SDF_VERSION_DV360, 'advertiserId': ADVERTISER_ID, 'parentEntityFilter': { 'fileType': ['FILE_TYPE_LINE_ITEM'], 'filterType': 'FILTER_TYPE_CAMPAIGN_ID', 'filterIds': [CAMPAIGN_ID] } } sdf_df = download_sdf(sdf_body) sdf_df.head() Explanation: Download the latest SDF LineItems (because we've made changes since our last download) End of explanation for product in imageslookup: sdf_df.loc[sdf_df.Name.str.contains(product), 'Keyword Targeting - Include'] = ';'.join( imageslookup[product]).lower() # Save modified dataframe to remote storage in Colab sdf_df.to_csv('sdf_update3_keywords.csv', index=False) sdf_df.head() # Download modified csv to local storage files.download('sdf_update3_keywords.csv') print("Success, see 'sdf_update3_keywords.csv' in your downloads folder") Explanation: Now we have our new labels from the Vision API, we need to write these into the keywords targeting field End of explanation # Define DV360 report definition (i.e. metrics and filters) report_definition = { 'params': { 'type': 'TYPE_GENERAL', 'metrics': [ 'METRIC_IMPRESSIONS', 'METRIC_CLICKS', 'METRIC_CTR', 'METRIC_REVENUE_ADVERTISER' ], 'groupBys': [ 'FILTER_ADVERTISER', 'FILTER_INSERTION_ORDER', 'FILTER_LINE_ITEM', 'FILTER_ADVERTISER_CURRENCY' ], 'filters': [{ 'type': 'FILTER_ADVERTISER', 'value': ADVERTISER_ID }], }, 'metadata': { 'title': 'DV360 Automation API-generated report', 'dataRange': 'LAST_90_DAYS', 'format': 'csv' }, 'schedule': { 'frequency': 'ONE_TIME' } } # Create new query using report definition operation = dbm_service.queries().createquery(body=report_definition).execute() pprint.pprint(operation) # Runs the given Queries.getquery request, retrying with an exponential # backoff. Returns completed operation. Will raise an exception if the # operation takes more than five hours to complete. 
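# Same polling pattern as track 1A: retry every 5-60 seconds until the
# report's 'running' flag clears, for at most 5 hours.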
@retry.Retry(predicate=retry.if_exception_type(Exception), initial=5, maximum=60, deadline=18000) def check_get_query_completion(getquery_request): response = getquery_request.execute() pprint.pprint(response) if response['metadata']['running']: raise Exception('The operation has not completed.') return response getquery_request = dbm_service.queries().getquery(queryId=operation['queryId']) response = check_get_query_completion(getquery_request) report_url = response['metadata']['googleCloudStoragePathForLatestReport'] # Use skipfooter to remove report footer from data report_df = pd.read_csv(report_url, skipfooter=16, engine='python') report_df.head(10) # Define our 'KPIs' ctr_target = 0.15 imp_threshold = 1000 # Convert IDs to remove decimal point, then string report_df['Line Item ID'] = report_df['Line Item ID'].apply(int) poor_performers = report_df.query( 'Impressions > @imp_threshold & (Clicks / Impressions)*100 < @ctr_target') # Convert results to Python list poor_performers = list(poor_performers['Line Item ID']) print(f'There are {len(poor_performers)} LineItems with a CTR' f' < {ctr_target}% and over {imp_threshold} impressions:' f'\n{poor_performers}') Explanation: Upload the output .csv file in the DV360 UI Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details' 1.5 Optimisation using Reports Next, we'll look at how you could combine reporting data, with operations such as optimising bid multipliers or deactivating activity. Note: your new campaign has no performance history, so we'll use an existing campaign for this exercise. End of explanation # Configure the sdfdownloadtasks.create request sdf_body = { 'version': SDF_VERSION_DV360, 'advertiserId': ADVERTISER_ID, 'parentEntityFilter': { 'fileType': ['FILE_TYPE_LINE_ITEM'], 'filterType': 'FILTER_TYPE_CAMPAIGN_ID', 'filterIds': ['1914007'] } } sdf_df = download_sdf(sdf_body) sdf_df.head() for li in poor_performers: geo = sdf_df.loc[sdf_df['Line Item Id'] == li, 'Geography Targeting - Include'].iloc[0] sdf_df.loc[sdf_df['Line Item Id'] == li, 'Bid Multipliers'] = f'(geo; {geo} 0.5;);' # Save modified dataframe to remote storage in Colab sdf_df.to_csv('sdf_update4_bidmultipliers.csv', index=False) # Display updated DataFrame sdf_df.head() files.download('sdf_update4_bidmultipliers.csv') print('Success, your new SDF file has been downloaded') Explanation: Download an updated SDF LineItems file, and if the LineItem ID is in the poor performers list, add a Geo bid multiplier to half the bids (0.5) End of explanation #TODO Explanation: Upload the output .csv file in the DV360 UI Once the changes have been applied successfully, check the 'Targeting' controls within 'Line Item details' 1.6 Challenge Challenge: update your campaign with both language and audience targeting. 
All Lineitems should target the following Google audiences
Affinity Categories » Technology » Mobile Enthusiasts
Affinity Categories » Technology » Technophiles » High-End Computer Aficionado
In-Market Categories » Consumer Electronics
LineItems for France should be targeted at French speakers
LineItems for Great Britain should be targeted at English speakers
Tips
Google Audience IDs can be found in the DV360 UI or by downloading an SDF with an existing audience applied
Language IDs can be found in the Language.json ERF file or by downloading an SDF with the language already applied
End of explanation
# Format today-7 in required date format
yesterday = (datetime.date.today() - datetime.timedelta(7)).strftime('%Y%m%d')
# Download ERF for Language.json from public GCS bucket
request = gcs_service.objects().get_media(
bucket='gdbm-public', object='entity/' + yesterday + '.0.Language.json')
response = request.execute()
languages = json.loads(response)
language_targets = ['en', 'fr']
lang_targeting_ids = {}
# Search language.json for language targets 'en' and 'fr'
for lang in languages:
if lang['code'] in language_targets:
lang_targeting_ids[lang['code']] = lang['id']
print(lang)
print(lang_targeting_ids)
# Define targeting template
targeting_template = {
'Affinity & In Market Targeting - Include': '4569529;4586809;4497529;',
}
# Configure the sdfdownloadtasks.create request
sdf_body = {
'version': SDF_VERSION_DV360,
'advertiserId': ADVERTISER_ID,
'parentEntityFilter': {
'fileType': ['FILE_TYPE_LINE_ITEM'],
'filterType': 'FILTER_TYPE_CAMPAIGN_ID',
'filterIds': [CAMPAIGN_ID]
}
}
sdf_df = download_sdf(sdf_body)
# Update DataFrame with Language and Audience targeting
sdf_df.loc[sdf_df.Name.str.contains('GB'),
'Language Targeting - Include'] = f"{lang_targeting_ids['en']};"
sdf_df.loc[sdf_df.Name.str.contains('FR'),
'Language Targeting - Include'] = f"{lang_targeting_ids['fr']};"
sdf_df['Affinity & In Market Targeting - Include'] = targeting_template[
'Affinity & In Market Targeting - Include']
# Save modified dataframe to remote storage in Colab
sdf_df.to_csv('sdf_update5_challenge.csv', index=False)
# Display updated DataFrame
sdf_df.head()
# Download file to disk using Colab syntax
files.download('sdf_update5_challenge.csv')
print("Success, see downloads folder for file 'sdf_update5_challenge.csv'")
Explanation: Solution
End of explanation
request = display_video_service.advertisers().lineItems().list(
advertiserId=ADVERTISER_ID,
filter='entityStatus="ENTITY_STATUS_ACTIVE"',
pageSize=1
)
response = request.execute()
# Check if response is empty.
if not response:
print('Advertiser has no active Line Items')
else:
pprint.pprint(response['lineItems'])
def get_active_lineitems(ADVERTISER_ID, CAMPAIGN_ID):
Returns list of Lineitems with active status.
list_lineitems = display_video_service.advertisers().lineItems().list(
advertiserId=ADVERTISER_ID,
filter=f'entityStatus="ENTITY_STATUS_ACTIVE" AND campaignId="{CAMPAIGN_ID}"',
fields='lineItems(lineItemId,displayName)' # Return only two fields
).execute()
active_lineitems = [li['lineItemId'] for li in list_lineitems['lineItems']]
return active_lineitems
Explanation: Upload the output .csv file in the DV360 UI
2) Display & Video 360 API
What is the Display & Video 360 API?
The Display & Video 360 API (formerly known as the DV360 Write API) is the programmatic interface for the Display & Video 360 platform.
It allows developers to easily and efficiently automate complex Display & Video 360 workflows, such as creating insertion orders and setting targeting options for individual line items. We'll use it now to build upon the campaign we created earlier using SDF. Reference: https://developers.google.com/display-video/api/reference/rest 2.1 Campaign builds Check Advertiser (ADVERTISER_ID) has active Lineitems End of explanation # Upload product feed using Colab's upload utility product_feed_csv = files.upload() contents = next(iter(product_feed_csv.values())).decode('utf-8') products = list(csv.DictReader(io.StringIO(contents))) # Create unique list of country-codes -- set() automatically de dupes unique_country_codes = set([row['country code'] for row in products]) print(unique_country_codes) Explanation: Upload the extended feed for Google Store's new territories: Spain, Germany and Portugal. End of explanation def create_insertion_order(parent_campaign_id, new_io_name): Creates a new DV360 insertion order object. # Define our new Insertion Order boilerplate new_insertion_order = { 'campaignId': parent_campaign_id, 'displayName': new_io_name, # Define naming convention 'entityStatus': 'ENTITY_STATUS_DRAFT', 'pacing': { 'pacingPeriod': 'PACING_PERIOD_DAILY', 'pacingType': 'PACING_TYPE_EVEN', 'dailyMaxMicros': '1000000' # Equiv to $1 or local currency }, 'frequencyCap': { 'unlimited': False, 'timeUnit': 'TIME_UNIT_MONTHS', 'timeUnitCount': 1, 'maxImpressions': 5 }, 'performanceGoal': { 'performanceGoalType': 'PERFORMANCE_GOAL_TYPE_CPC', 'performanceGoalAmountMicros': '1000000', # $1 CPM/CPC target }, 'bidStrategy': { 'fixedBid': { 'bidAmountMicros': '0' }, }, 'budget': { 'automationType': 'INSERTION_ORDER_AUTOMATION_TYPE_NONE', 'budgetUnit': 'BUDGET_UNIT_CURRENCY', 'budgetSegments': [{ 'budgetAmountMicros': '30000000', # Equiv to $30 or local currency 'description': 'My first segment', 'dateRange': { 'startDate': { 'year': year, 'month': month, 'day': day }, 'endDate': { 'year': year_plus30, 'month': month_plus30, 'day': day_plus30 } } }] } } # API create() request to generate new Insertion Order newinsertionorder_request = display_video_service.advertisers( ).insertionOrders().create( advertiserId=ADVERTISER_ID, body=new_insertion_order).execute() # Define patch to activate new Insertion Order afer creation patch = { 'entityStatus': 'ENTITY_STATUS_ACTIVE', } # API patch() request display_video_service.advertisers().insertionOrders().patch( advertiserId=ADVERTISER_ID, insertionOrderId=newinsertionorder_request['insertionOrderId'], updateMask='entityStatus', body=patch).execute() print(newinsertionorder_request) return newinsertionorder_request print('Insertion Order function created') Explanation: Create Insertion Order template Here we're defining a new a function called 'create_insertion_order'. Note: all new Insertion Orders and Line Items created using the DV360 API are created in 'Draft' mode (as a safety mechanism), and must be activated with a second API call, or via the UI (e.g. manually by a trader). End of explanation def create_lineitem(parent_io_id, new_li_name): Creates a new DV360 lineitem object. 
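# Reminder: this template deliberately ships without targeting controls;
# apply targeting (sections 2.2/2.3) before activating for real delivery.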
# Define our new LineItem boilerplate new_lineitem = { 'advertiserId': ADVERTISER_ID, 'insertionOrderId': parent_io_id, 'displayName': new_li_name, # Define naming convention 'lineItemType': 'LINE_ITEM_TYPE_DISPLAY_DEFAULT', 'entityStatus': 'ENTITY_STATUS_DRAFT', 'flight': { 'flightDateType': 'LINE_ITEM_FLIGHT_DATE_TYPE_INHERITED', }, 'pacing': { 'pacingPeriod': 'PACING_PERIOD_DAILY', 'pacingType': 'PACING_TYPE_EVEN', 'dailyMaxMicros': '1000000' }, 'frequencyCap': { 'timeUnit': 'TIME_UNIT_MONTHS', 'timeUnitCount': 1, 'maxImpressions': 5 }, 'partnerRevenueModel': { 'markupType': 'PARTNER_REVENUE_MODEL_MARKUP_TYPE_TOTAL_MEDIA_COST_MARKUP' }, 'budget': { 'budgetAllocationType': 'LINE_ITEM_BUDGET_ALLOCATION_TYPE_UNLIMITED', 'budgetUnit': 'BUDGET_UNIT_CURRENCY' }, 'bidStrategy': { 'fixedBid': { 'bidAmountMicros': '1000000' } } } # API create() request to generate new Lineitem newlineitem_request = display_video_service.advertisers().lineItems().create( advertiserId=ADVERTISER_ID, body=new_lineitem).execute() # Define patch to activate new Line Item afer creation patch = { 'entityStatus': 'ENTITY_STATUS_ACTIVE', } # API patch() request display_video_service.advertisers().lineItems().patch( advertiserId=ADVERTISER_ID, lineItemId=newlineitem_request['lineItemId'], updateMask='entityStatus', body=patch).execute() print(newlineitem_request) return newlineitem_request print('LineItem function created') Explanation: Create LineItem template Here we define a new function called 'create_lineitem', based on a template we specified. Note: the following template does not include any targeting controls by default. Normally, we strongly encourage the addition of targeting before activating a line item. End of explanation %%time for country_code in unique_country_codes: # Create() and patch() new Insertion Order io_name = f'Google Store | {country_code} | Display | Prospecting' insertionorder = create_insertion_order(CAMPAIGN_ID, io_name) for row in products: if country_code in row['country code']: # Create() and patch() new LineItem li_name = f"{row['country code']} | {row['title']} | {row['sku']}" lineitem = create_lineitem(insertionorder['insertionOrderId'], li_name) print('Process completed') Explanation: Build our new campaign First, we'll loop through the list of countries generated at the beginning, and for each country, create a new Insertion Order by calling our function 'create_insertion_order'. Within that loop, we find every product that is sold in the corresponding country-code, and create a new Line Item for every matching product using our function 'create_lineitem'. Sit tight, this one can take a while (~10 mins)... Link to DV360 UI End of explanation # Create the page token variable. next_page_token = '' while True: # Request the targeting options list. response = display_video_service.targetingTypes().targetingOptions().list( advertiserId=ADVERTISER_ID, targetingType='TARGETING_TYPE_BROWSER', pageToken=next_page_token).execute() # Check if response is empty. if not response: print('List request returned no Targeting Options') break # Iterate over retrieved targeting options. options_dict = {} for option in response['targetingOptions']: options_dict[ option['targetingOptionId']] = option['browserDetails']['displayName'] # Break out of loop if there is no next page. if 'nextPageToken' not in response: break # Update the next page token. 
next_page_token = response['nextPageToken'] pprint.pprint(options_dict) Explanation: If successful, the result should look similar to the below in DV360: 2.2 Individual targeting Reference: https://developers.google.com/display-video/api/guides/managing-line-items/targeting Retrieve a list of available targeting options using targetingTypes().targetingOptions() The following example demonstrates retrieving of Browser targeting options only. The "BrowserDetails" field is only applicable with "TARGETING_TYPE_BROWSER". End of explanation # Return list of Lineitems with active status active_lineitems = get_active_lineitems(ADVERTISER_ID, CAMPAIGN_ID) # Fetch first Lineitem ID lineitem_id = active_lineitems[0] # Create a assigned targeting option object. assigned_targeting_option_obj = { 'browserDetails': { 'targetingOptionId': '500072' } } # Create the assigned targeting option. assigned_targeting_option = display_video_service.advertisers().lineItems( ).targetingTypes().assignedTargetingOptions().create( advertiserId=ADVERTISER_ID, lineItemId=f'{lineitem_id}', targetingType='TARGETING_TYPE_BROWSER', body=assigned_targeting_option_obj ).execute() # Display the new assigned targeting option. print(f"Assigned Targeting Option {assigned_targeting_option['name']} created.") Explanation: Apply individual targeting criteria to single entity End of explanation # Create the page token variable. next_page_token = '' while True: # Request the targeting options list. response = display_video_service.googleAudiences().list( advertiserId=ADVERTISER_ID, filter='displayName : "Technology"', pageToken=next_page_token).execute() # Check if response is empty. if not response: print('List request returned no Targeting Options') break # Iterate over retrieved targeting options. options_dict = {} for option in response['googleAudiences']: options_dict[option['googleAudienceId']] = [ option['displayName'], option['googleAudienceType'] ] # Break out of loop if there is no next page. if 'nextPageToken' not in response: break # Update the next page token. next_page_token = response['nextPageToken'] pprint.pprint(response) google_audience_id = '92948' # Return list of Lineitems with active status active_lineitems = get_active_lineitems(ADVERTISER_ID, CAMPAIGN_ID) # Create a assigned targeting option object. assigned_targeting_option_obj = { 'audienceGroupDetails': { 'includedGoogleAudienceGroup': { 'settings': [{ 'googleAudienceId': f'{google_audience_id}' }] } } } pprint.pprint(assigned_targeting_option_obj) # Update bulk targeting for li in active_lineitems: # Create the assigned targeting option. assigned_targeting_option = display_video_service.advertisers().lineItems( ).targetingTypes().assignedTargetingOptions().create( advertiserId=ADVERTISER_ID, lineItemId=f'{li}', targetingType='TARGETING_TYPE_AUDIENCE_GROUP', body=assigned_targeting_option_obj).execute() # Display the new assigned targeting option. print(f"Targeting Option {assigned_targeting_option['name']} created.") Explanation: Applying individual targeting criteria to multiple entities End of explanation def set_default_li_targeting(lineitem_id): Sets default LineItem targeting according to standard template. 
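# The channel, inventory-source and exclusion IDs hard-coded below are
# specific to this workshop's demo setup -- substitute your own before reuse.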
# Define 'Channels' create_channel_assigned_targetingoptions = [] for targeting_id in ['1777746835', '1778039430']: create_channel_assigned_targetingoptions.append( {'channelDetails': { 'channelId': targeting_id, 'negative': False }}) # Define 'Inventory' create_inventory_assigned_targetingoptions = [] for targeting_id in ['1']: create_inventory_assigned_targetingoptions.append( {'inventorySourceDetails': {'inventorySourceId': targeting_id}} ) # Define 'Sensitive categories' create_sensitive_cat_assigned_targetingoptions = [] sensitive_category = [ '1163177997', '1163178297', '118521027123', '118521027843', '118521028083', '118521028563', '118521028803', '1596254697' ] for targeting_id in sensitive_category: create_sensitive_cat_assigned_targetingoptions.append({ 'sensitiveCategoryExclusionDetails': { 'excludedTargetingOptionId': targeting_id } }) # Define 'Digital content labels' create_digital_content_assigned_targetingoptions = [] content_rating_tier = ['19875634320', '19875634200', '19875634080'] for targeting_id in content_rating_tier: create_digital_content_assigned_targetingoptions.append({ 'digitalContentLabelExclusionDetails': { 'excludedTargetingOptionId': targeting_id } }) # Contruct request bulk_edit_line_item_request = { 'createRequests': [ { 'targetingType': 'TARGETING_TYPE_CHANNEL', 'assignedTargetingOptions': [ create_channel_assigned_targetingoptions ] }, { 'targetingType': 'TARGETING_TYPE_INVENTORY_SOURCE', 'assignedTargetingOptions': [ create_inventory_assigned_targetingoptions ] }, { 'targetingType': 'TARGETING_TYPE_SENSITIVE_CATEGORY_EXCLUSION', 'assignedTargetingOptions': [ create_sensitive_cat_assigned_targetingoptions ] }, { 'targetingType': 'TARGETING_TYPE_DIGITAL_CONTENT_LABEL_EXCLUSION', 'assignedTargetingOptions': [ create_digital_content_assigned_targetingoptions ] }, ] } # Edit the line item targeting. bulk_request = display_video_service.advertisers().lineItems( ).bulkEditLineItemAssignedTargetingOptions( advertiserId=ADVERTISER_ID, lineItemId=lineitem_id, body=bulk_edit_line_item_request ) bulk_response = bulk_request.execute() # Check if response is empty. # If not, iterate over and display new assigned targeting options. if not bulk_response: print('Bulk edit request created no new AssignedTargetingOptions') else: for assigned_targeting_option in bulk_response[ 'createdAssignedTargetingOptions']: print(f"Targeting Option {assigned_targeting_option['name']} created.") print('Lineitem targeting function created') Explanation: 2.3 Bulk targeting Bulk updates using templated targeting controls End of explanation # Return list of Lineitems with active status active_lineitems = get_active_lineitems(ADVERTISER_ID, CAMPAIGN_ID) # Update bulk targeting for li in active_lineitems: set_default_li_targeting(li) Explanation: Retrieve list of active LineItems, and Apply bulk targeting End of explanation def optimise_lineitem(lineitem_id, action): Optimises lineitem according to given parameter. 
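# 'pause' flips an active line item to ENTITY_STATUS_PAUSED, while
# 'optimise' keeps it live and drops the fixed bid to 500000 micros (0.5).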
lineitem_object = display_video_service.advertisers().lineItems().get(
advertiserId=ADVERTISER_ID,
lineItemId=lineitem_id).execute()
if lineitem_object['entityStatus'] == 'ENTITY_STATUS_ACTIVE':
if action == 'pause':
patch = {
'entityStatus': 'ENTITY_STATUS_PAUSED',
}
lineitem_patched = display_video_service.advertisers().lineItems().patch(
advertiserId=ADVERTISER_ID,
lineItemId=lineitem_id,
updateMask='entityStatus',
body=patch).execute()
print(f"LineItemID {lineitem_patched['name']} was paused")
elif action == 'optimise':
patch = {'bidStrategy': {'fixedBid': {'bidAmountMicros': '500000'},}}
lineitem_patched = display_video_service.advertisers().lineItems().patch(
advertiserId=ADVERTISER_ID,
lineItemId=lineitem_id,
updateMask='bidStrategy',
body=patch).execute()
print(f"{lineitem_patched['name']} was optimised")
else:
print("Not a valid action, must be either 'pause' or 'optimise'")
else:
print(
f"{lineitem_object['name']} already paused/archived - no action taken")
print('Optimisation function created')
Explanation: 2.4 Optimisation (external trigger)
The following optimisations will be completed on your campaign, created earlier.
Create functions to 'deactivate' or 'optimise' Lineitems
End of explanation
out_of_stock_list = []
products = csv.DictReader(io.StringIO(contents))
# Iterate through each row, checking for products where availability = 0
for row in products:
if row['availability'] == '0':
out_of_stock_list.append(row['sku'])
# This should generate a list of 9 SKUs that are no longer in stock
print(
f'Found {len(out_of_stock_list)} out-of-stock products {out_of_stock_list}')
Explanation: Create list of out-of-stock products
End of explanation
# Fetch active Lineitems with their display names, so each SKU can be
# matched against Lineitem names (get_active_lineitems only returns IDs)
list_lineitems = display_video_service.advertisers().lineItems().list(
advertiserId=ADVERTISER_ID,
filter=f'entityStatus="ENTITY_STATUS_ACTIVE" AND campaignId="{CAMPAIGN_ID}"',
fields='lineItems(lineItemId,displayName)').execute()
active_lineitems = {li['displayName']: li['lineItemId']
for li in list_lineitems.get('lineItems', [])}
# Iterate through out-of-stock list. If sku is found in lineitem's name, perform optimisation.
for product in out_of_stock_list:
for name, li_id in active_lineitems.items():
if product in name:
optimise_lineitem(li_id, 'pause')
Explanation: Process optimisation
End of explanation
# Define DV360 report definition (i.e. metrics and filters)
report_definition = {
'params': {
'type': 'TYPE_GENERAL',
'metrics': [
'METRIC_IMPRESSIONS',
'METRIC_CLICKS',
'METRIC_CTR',
'METRIC_REVENUE_ADVERTISER'
],
'groupBys': [
'FILTER_ADVERTISER',
'FILTER_INSERTION_ORDER',
'FILTER_LINE_ITEM',
'FILTER_ADVERTISER_CURRENCY'
],
'filters': [{
'type': 'FILTER_ADVERTISER',
'value': ADVERTISER_ID
}],
},
'metadata': {
'title': 'DV360 Automation API-generated report',
'dataRange': 'LAST_90_DAYS',
'format': 'csv'
},
'schedule': {
'frequency': 'ONE_TIME'
}
}
# Create new query using report definition
operation = dbm_service.queries().createquery(body=report_definition).execute()
pprint.pprint(operation)
# Runs the given Queries.getquery request, retrying with an exponential
# backoff. Returns completed operation. Will raise an exception if the
# operation takes more than five hours to complete.
@retry.Retry(
predicate=retry.if_exception_type(Exception),
initial=5,
maximum=60,
deadline=18000)
def check_get_query_completion(getquery_request):
response = getquery_request.execute()
pprint.pprint(response)
if response['metadata']['running']:
raise Exception('The operation has not completed.')
return response
getquery_request = dbm_service.queries().getquery(queryId=operation['queryId'])
response = check_get_query_completion(getquery_request)
Explanation: 2.5 Optimisation (reporting data)
As your new campaign has no performance data, the following optimisations will be completed on an existing campaign with historical data.
Create new performance report and fetch results
End of explanation
# Capture report URL from response
report_url = response['metadata']['googleCloudStoragePathForLatestReport']
# Use skipfooter to remove report footer from data
report_df = pd.read_csv(report_url, skipfooter=16, engine='python')
report_df.head(10)
Explanation: Load report to Pandas DataFrame
End of explanation
# Define our 'KPIs'
ctr_to_pause = 0.1
ctr_to_optimise = 0.3
imp_threshold = 5000
# Convert IDs to remove decimal point, then string
report_df['Line Item ID'] = report_df['Line Item ID'].apply(int)
lineitems_to_pause = report_df.query(
'Impressions > @imp_threshold and (Clicks / Impressions)*100 < @ctr_to_pause')
lineitems_to_reducebid = report_df.query(
'Impressions > @imp_threshold and (Clicks / Impressions)*100 > @ctr_to_pause '
'and (Clicks / Impressions)*100 < @ctr_to_optimise')
# Convert results to Python list
lineitems_to_pause = list(lineitems_to_pause['Line Item ID'])
lineitems_to_reducebid = list(lineitems_to_reducebid['Line Item ID'])
print(f'Found {len(lineitems_to_pause)} LineItems with a CTR'
f' < {ctr_to_pause}% and > {imp_threshold} impressions:'
f'\n{lineitems_to_pause}')
print(f'Found {len(lineitems_to_reducebid)} LineItems with a CTR'
f' between {ctr_to_pause}%-{ctr_to_optimise}% and > {imp_threshold}'
f' impressions:\n{lineitems_to_reducebid}')
Explanation: Create two lists of poorly performing LineItems
1. LineItems that should be paused
2. LineItems whose bids should be reduced
End of explanation
%%time
if lineitems_to_pause:
for lineitem in lineitems_to_pause:
optimise_lineitem(str(lineitem), 'pause')
if lineitems_to_reducebid:
for lineitem in lineitems_to_reducebid:
optimise_lineitem(str(lineitem), 'optimise')
print('Optimisation completed')
Explanation: Process optimisation
End of explanation
def upload_creative_image_asset(asset_url, click_url):
Creates a new DV360 creative object.
# Fetch asset from cloud storage using requests library
asset = requests.get(asset_url)
# Create upload object from http image url
fh = io.BytesIO(asset.content)
media_body = http.MediaIoBaseUpload(fh, mimetype='image/png',
chunksize=1024*1024, resumable=True)
# Extract filename from url path
filename = str(asset_url.rsplit(sep='/', maxsplit=1)[1])
# Create the request body
body = {'filename': filename}
# Upload the asset
asset_request = display_video_service.advertisers().assets().upload(
advertiserId=ADVERTISER_ID, body=body, media_body=media_body).execute()
# Display the new asset media ID
print(f"Asset was created with media ID {asset_request['asset']['mediaId']}")
display_name = f'{filename}'.split(sep='.')[0].lower() + ' 300x250'
# Create a creative object.
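# (All assets in this exercise are 300x250 standard display images; the
# dimensions and creativeType below would change for other formats.)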
creative_obj = { 'displayName': f'{display_name}', 'entityStatus': 'ENTITY_STATUS_ACTIVE', 'creativeType': 'CREATIVE_TYPE_STANDARD', 'hostingSource': 'HOSTING_SOURCE_HOSTED', 'dimensions': { 'widthPixels': 300, 'heightPixels': 250 }, 'assets': [{ 'asset': { 'mediaId': asset_request['asset']['mediaId'] }, 'role': 'ASSET_ROLE_MAIN' }], 'exitEvents': [{ 'type': 'EXIT_EVENT_TYPE_DEFAULT', 'url': f'{click_url}', }] } creative_request = display_video_service.advertisers().creatives().create( advertiserId=ADVERTISER_ID, body=creative_obj ).execute() # Display the new creative ID print(f"Creative was created with ID {creative_request['creativeId']}" f" and DisplayName '{creative_request['displayName']}'") pprint.pprint(creative_request) print('Creative upload function defined') Explanation: 2.6 Creative upload Uploading Display creatives from remote storage (http) The following demonstrates how to upload image assets from remote storage, but it's also possible to upload from local storage. Reference: https://developers.google.com/display-video/api/guides/creating-creatives/overview End of explanation image_assets = { 'https://github.com/google/dv360-automation/blob/master/docs/images/googlestore/pixelbook.png?raw=true': 'https://store.google.com/product/google_pixelbook', 'https://github.com/google/dv360-automation/blob/master/docs/images/googlestore/googlehome.png?raw=true': 'https://store.google.com/product/google_home_hub', 'https://github.com/google/dv360-automation/blob/master/docs/images/googlestore/googlehomemini.png?raw=true': 'https://store.google.com/product/google_home_mini', 'https://github.com/google/dv360-automation/blob/master/docs/images/googlestore/pixel2.png?raw=true': 'https://store.google.com/product/pixel_2', 'https://github.com/google/dv360-automation/blob/master/docs/images/googlestore/chromecastultra.png?raw=true': 'https://store.google.com/product/chromecast_ultra' } for asset, click_url in image_assets.items(): upload_creative_image_asset(asset, click_url) Explanation: Upload image creatives Note, all of the following assets are the same dimension (300x250) and type 'CREATIVE_TYPE_STANDARD'. When uploading assets of multiple sizes, the creatives.create body must reflect this. End of explanation #TODO Explanation: 2.7 Challenge Challenge: build a new campaign for 'Google Airways' using the flights feed provided here. 
Tips You don't need to rewrite any functions, reuse the existing ones Don't forget to use print() statements to see progress within a for loop Your final campaign should look similar to the below: End of explanation %%time # Load flight information from CSV file googleairways_routes = files.upload() contents = next(iter(googleairways_routes.values())).decode('utf-8') routes = list(csv.DictReader(io.StringIO(contents))) # Create a unique set (de-duped) of cities from the routes provided unique_cities = set() for row in routes: unique_cities.add(row['airport-city']) print(unique_cities) # Create Campaign and Patch() new_campaign = create_campaign('Google Airways') print(new_campaign) # Step through each city within our unique set of cities for city in unique_cities: # Create Insertion Order and Patch() io_name = f'Flights | {city}' create_io = create_insertion_order(new_campaign['campaignId'], io_name) # Step through each route(row) of the CSV upload for row in routes: if city == row['airport-city']: # Create LineItems and Patch() li_name = f"Flight {row['flightno']} | {row['depairport-city']} to {row['arrairport-city']}" create_lis = create_lineitem(create_io['insertionOrderId'], li_name) print('Process completed') Explanation: Solution End of explanation # Exclude following campaigns in the reset process protected_campaigns = ['1914007','985747'] def reset_demo_account(): Reset DV360 account to earlier state. print('Resetting DV360 account...') # Reactivate Campaigns list_campaigns = display_video_service.advertisers().campaigns().list( advertiserId=ADVERTISER_ID, filter='entityStatus="ENTITY_STATUS_ACTIVE"').execute() results = list_campaigns['campaigns'] print(f'Found {len(results)} active campaigns') for index, campaign in enumerate(results, start=1): print(f'Campaign {index} of {len(results)}') pause_campaign(campaign['campaignId']) # Reactivate LineItems list_lineitems = display_video_service.advertisers().lineItems().list( advertiserId=ADVERTISER_ID, filter='entityStatus="ENTITY_STATUS_PAUSED" AND campaignId="1914007"' ).execute() if not list_lineitems: print('No paused lineitems found') else: for index, li in enumerate(list_lineitems['lineItems'], start=1): print(f"Lineitem {index} of {len(list_lineitems['lineItems'])}") lineitem_id = li['lineItemId'] activate_lineitem(lineitem_id) print('Account reset completed') def delete_campaign(campaign_id): Updates DV360 campaign object status to deleted. if campaign_id in protected_campaigns: print(f'Campaign ID {campaign_id} not deleted (protected campaign)') else: try: display_video_service.advertisers().campaigns().delete( advertiserId=ADVERTISER_ID, campaignId=campaign_id).execute() print(f'{campaign_id} successfully deleted') except Exception: print('Could not delete campaign') def archive_campaign(campaign_id): Updates DV360 campaign object status to archived. patch = {'entityStatus': 'ENTITY_STATUS_ARCHIVED'} if campaign_id in protected_campaigns: print(f'Campaign ID {campaign_id} not archived (protected campaign)') else: archive_campaign = display_video_service.advertisers().campaigns().patch( advertiserId=ADVERTISER_ID, campaignId=campaign_id, updateMask='entityStatus', body=patch).execute() print(f'Campaign ID {campaign_id} successfully archived') def pause_campaign(campaign_id): Updates DV360 campaign object status to paused. 
patch = {'entityStatus': 'ENTITY_STATUS_PAUSED'} if campaign_id in protected_campaigns: print(f'Campaign ID {campaign_id} not paused (protected campaign)') else: display_video_service.advertisers().campaigns().patch( advertiserId=ADVERTISER_ID, campaignId=campaign_id, updateMask='entityStatus', body=patch).execute() print(f'Campaign ID {campaign_id} successfully paused') def activate_lineitem(lineitem_id): Updates DV360 lineitem object status to active. patch = {'entityStatus': 'ENTITY_STATUS_ACTIVE'} display_video_service.advertisers().lineItems().patch( lineItemId=lineitem_id, advertiserId=ADVERTISER_ID, updateMask='entityStatus', body=patch).execute() print(f'Lineitem ID {lineitem_id} reactivated') # @title { display-mode: "form" } #@markdown Reset DV360 account # Call main function to intialise reset procedure reset_demo_account() Explanation: Link to DV360 UI Resources Getting started with SDF in DV360 guide Structured Data Files (SDF) developer guide Getting started with the Display & Video 360 API developer guide Getting started with the DoubleClick Bid Manager API developer guide How to access Entity Read Files Quickstart: Setup the Vision API Please help us improve this workshop by completing the satisfaction survey Thank you! Clean up To clean up all of the DV360 resources used during these exercises, you can run the following script. Warning: this will remove all Campaigns from the DV360 advertiser specified in ADVERTISER_ID, unless they are explicitly defined as a 'protected_campaign' End of explanation
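The reset above assumes every active campaign arrives in a single list() response. The DV360 API pages its list results, so a defensive variant walks nextPageToken. A minimal sketch — the loop shape is the standard Google API client pattern rather than code from this workshop:

def list_all_active_campaigns(service, advertiser_id):
    """Collect active campaigns across every result page."""
    campaigns, page_token = [], None
    while True:
        response = service.advertisers().campaigns().list(
            advertiserId=advertiser_id,
            filter='entityStatus="ENTITY_STATUS_ACTIVE"',
            pageToken=page_token).execute()
        campaigns.extend(response.get('campaigns', []))
        page_token = response.get('nextPageToken')
        if not page_token:  # no further pages
            return campaigns

Dropping this in for the single list() call inside reset_demo_account would make the reset safe for advertisers with more campaigns than one page holds.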
Given the following text description, write Python code to implement the functionality described below step by step
Description: Sprachenvielfalt
Figures for https://mexico.werthmuller.org/kulturgeschichte/sprachenvielfalt
Step1: Sprachenvielfalt
http://mexico.werthmuller.org/kulturgeschichte/sprachenvielfalt
Step2: Prozentualer Anteil der Bevölkerung
Step3: Analphabetismus in Prozent
Step4: Karte Analphabetismus
Step5: Karte indigene Sprachen
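The three bar-chart cells in this entry repeat the same barh-plus-right-aligned-label pattern. A small helper captures it; this is a sketch only, and the tick/label layout mirrors the notebook's cells rather than anything from the original source:

import matplotlib.pyplot as plt

def barh_with_labels(ax, values, labels, label_x, fmt='{:,}'):
    # One horizontal bar per value, with a right-aligned text label
    for i, v in enumerate(values):
        ax.barh(i + 0.6, v)
        ax.text(label_x, i + 1, fmt.format(v), va='center', ha='right')
    ax.set_yticks(range(1, len(values) + 1))
    ax.set_yticklabels(labels)
    ax.xaxis.set_ticks_position('none')
    ax.yaxis.set_ticks_position('none')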
Python Code: import numpy as np import travelmaps2 as tm import matplotlib.pyplot as plt from mpl_toolkits.basemap import Basemap from matplotlib import cm, colors, rcParams plt.style.use('ggplot') # Adjust dpi, so figure on screen and savefig looks the same dpi = 100 rcParams['figure.dpi'] = dpi rcParams['savefig.dpi'] = dpi fpath = '../mexico.werthmuller.org/content/images/' Explanation: Sprachenvielfalt Figures for https://mexico.werthmuller.org/kulturgeschichte/sprachenvielfalt. End of explanation fig = plt.figure() ax = fig.add_subplot(1, 1, 1) lal = 1300000 ax.barh(10.6, 1544968); ax.text(lal, 11, "1'544'968", va='center', ha='right') ax.barh( 9.6, 786113); ax.text(lal, 10, " 786'113", va='center', ha='right') ax.barh( 8.6, 477995); ax.text(lal, 9, " 477'995", va='center', ha='right') ax.barh( 7.6, 450429); ax.text(lal, 8, " 450'429", va='center', ha='right') ax.barh( 6.6, 445856); ax.text(lal, 7, " 445'856", va='center', ha='right') ax.barh( 5.6, 404704); ax.text(lal, 6, " 404'704", va='center', ha='right') ax.barh( 4.6, 284992); ax.text(lal, 5, " 284'992", va='center', ha='right') ax.barh( 3.6, 244033); ax.text(lal, 4, " 244'033", va='center', ha='right') ax.barh( 2.6, 223073); ax.text(lal, 3, " 223'073", va='center', ha='right') ax.barh( 1.6, 212117); ax.text(lal, 2, " 212'117", va='center', ha='right') ax.barh( 0.6, 1620948); ax.text(lal, 1, "1'620'948", va='center', ha='right') ax.xaxis.set_ticks_position('none') ax.yaxis.set_ticks_position('none') plt.xticks([500000, 1000000, 1500000], ()) plt.yticks(np.arange(11)+1, ('Alle anderen', 'Chol', 'Mazateco', 'Totonaca', 'Otomi', 'Tzotzil', 'Tzetzal', 'Zapotecas', 'Mixtecas', 'Maya', 'Náhuatl')) plt.title("Bevölkerung mit indigener Hauptsprache") #plt.savefig(fpath+'sprachenvielfalt/BevSprache.png', bbox_inches='tight') plt.show() Explanation: Sprachenvielfalt http://mexico.werthmuller.org/kulturgeschichte/sprachenvielfalt Bevölkerung mit indigener Hauptsprache End of explanation fig = plt.figure() ax = fig.add_subplot(1, 1, 1) lal = 5 ax.barh(10.6, 33.8); ax.text(lal, 11, "33.8 %", va='center', ha='right') ax.barh( 9.6, 29.6); ax.text(lal, 10, "29.6 %", va='center', ha='right') ax.barh( 8.6, 27.3); ax.text(lal, 9, "27.3 %", va='center', ha='right') ax.barh( 7.6, 16.2); ax.text(lal, 8, "16.2 %", va='center', ha='right') ax.barh( 6.6, 15.2); ax.text(lal, 7, "15.2 %", va='center', ha='right') ax.barh( 5.6, 14.8); ax.text(lal, 6, "14.8 %", va='center', ha='right') ax.barh( 4.6, 12.0); ax.text(lal, 5, "12.0 %", va='center', ha='right') ax.barh( 3.6, 11.5); ax.text(lal, 4, "11.5 %", va='center', ha='right') ax.barh( 2.6, 10.6); ax.text(lal, 3, "10.6 %", va='center', ha='right') ax.barh( 1.6, 9.3); ax.text(lal, 2, " 9.3 %", va='center', ha='right') ax.barh( 0.6, 6.6); ax.text(lal, 1, " 6.6 %", va='center', ha='right') ax.xaxis.set_ticks_position('none') ax.yaxis.set_ticks_position('none') plt.xticks([10, 20, 30], ()) plt.yticks(np.arange(11)+1, ('Mexiko Durch.', 'Veracruz', 'San Luis Potosí', 'Puebla', 'Campeche', 'Hidalgo', 'Guerrero', 'Quintana Roo', 'Chiapas', 'Yucatán', 'Oaxaca')) plt.title("Prozentualer Anteil der Bevölkerung") #plt.savefig(fpath+'sprachenvielfalt/ProzBev.png', bbox_inches='tight') plt.show() Explanation: Prozentualer Anteil der Bevölkerung End of explanation fig = plt.figure() ax = fig.add_subplot(1, 1, 1) lal = 3 ax.barh( 7.6, 17.8); ax.text(lal, 8, "17.8 %", va='center', ha='right') ax.barh( 6.6, 16.7); ax.text(lal, 7, "16.7 %", va='center', ha='right') ax.barh( 5.6, 16.3); ax.text(lal, 6, "16.3 %", va='center', 
ha='right') ax.barh( 4.6, 11.4); ax.text(lal, 5, "11.4 %", va='center', ha='right') ax.barh( 3.6, 10.4); ax.text(lal, 4, "10.4 %", va='center', ha='right') ax.barh( 2.6, 10.2); ax.text(lal, 3, "10.2 %", va='center', ha='right') ax.barh( 1.6, 10.2); ax.text(lal, 2, "10.2 %", va='center', ha='right') ax.barh( 0.6, 6.9); ax.text(lal, 1, " 6.9 %", va='center', ha='right') ax.xaxis.set_ticks_position('none') ax.yaxis.set_ticks_position('none') plt.xticks([5, 10, 15], ()) plt.yticks(np.arange(8)+1, ('Mexiko Durch.', 'Hidalgo', 'Michoacán', 'Puebla', 'Veracruz', 'Oaxaca', 'Guerrero', 'Chiapas')) plt.title("Analphabetismus in Prozent") #plt.savefig(fpath+'sprachenvielfalt/Analfabetismo.png', bbox_inches='tight') plt.show() Explanation: Analphabetismus in Prozent End of explanation tm.setup_noxkcd(200) fig_x = plt.figure(figsize=(tm.cm2in([11, 6]))) # Create basemap m_x = Basemap(width=3500000, height=2300000, resolution='c', projection='tmerc', lat_0=24, lon_0=-102) m_x.drawmapboundary(fill_color='#99ccff') # Fill non-visited countries (fillcontinents does a bad job) countries = ['USA', 'BLZ', 'GTM', 'HND', 'SLV', 'NIC', 'CUB'] tm.country(countries, m_x, fc='.8', ec='.5', lw=.5) # Fill states cols = np.array([3.3, # 0 Aguascalientes 3.2, # 1 Baja California Sur 2.6, # 2 Baja California 8.3, # 3 Campeche 17.8, # 4 Chiapas 3.7, # 5 Chihuahua 2.6, # 6 Coahuila 5.1, # 7 Colima 2.1, # 8 Distrito Federal 3.8, # 9 Durango 8.2, # 10 Guanajuato 16.7, # 11 Guerrero 10.2, # 12 Hidalgo 4.4, # 13 Jalisco 4.4, # 14 México 10.2, # 15 Michoacán 6.4, # 16 Morelos 6.3, # 17 Nayarit 2.2, # 18 Nuevo León 16.3, # 19 Oaxaca 10.4, # 20 Puebla 6.3, # 21 Querétaro 4.8, # 22 Quintana Roo 7.9, # 23 San Luis Potosí 5.0, # 24 Sinaloa 3.0, # 25 Sonora 7.1, # 26 Tabasco 3.6, # 27 Tamaulipas 5.2, # 28 Tlaxcala 11.4, # 29 Veracruz 9.2, # 30 Yucatán 5.5]) # 31 Zacatecas cols = cm.Greens((cols-2.1)/(17.8-2.1)) fcs = 32*[''] for i in range(32): fcs[i] = colors.rgb2hex(cols[i,:]) tm.country('MEX', bmap=m_x, fc=fcs, ec='k', lw=.5, adm=1) # Add visited cities tm.city([0, 0], '', m_x) # Save-path #plt.savefig(fpath+'sprachenvielfalt/KarteAnalphabetismus.png', bbox_inches='tight') plt.show() Explanation: Karte Analphabetismus End of explanation tm.setup_noxkcd(200) fig_x = plt.figure(figsize=(tm.cm2in([11, 6]))) # Create basemap m_x = Basemap(width=3500000, height=2300000, resolution='c', projection='tmerc', lat_0=24, lon_0=-102) m_x.drawmapboundary(fill_color='#99ccff') # Fill non-visited countries (fillcontinents does a bad job) countries = ['USA', 'BLZ', 'GTM', 'HND', 'SLV', 'NIC', 'CUB'] tm.country(countries, m_x, fc='.8', ec='.5', lw=.5) # Fill states cols = np.array([0.2, # 0 Aguascalientes 1.8, # 1 Baja California Sur 1.4, # 2 Baja California 12.0, # 3 Campeche 27.3, # 4 Chiapas 3.5, # 5 Chihuahua 0.2, # 6 Coahuila 0.7, # 7 Colima 1.5, # 8 Distrito Federal 2.2, # 9 Durango 0.3, # 10 Guanajuato 15.2, # 11 Guerrero 14.8, # 12 Hidalgo 0.8, # 13 Jalisco 2.7, # 14 México 3.5, # 15 Michoacán 1.9, # 16 Morelos 5.2, # 17 Nayarit 0.9, # 18 Nuevo León 33.8, # 19 Oaxaca 11.5, # 20 Puebla 1.8, # 21 Querétaro 16.2, # 22 Quintana Roo 10.6, # 23 San Luis Potosí 0.9, # 24 Sinaloa 2.5, # 25 Sonora 2.9, # 26 Tabasco 0.8, # 27 Tamaulipas 2.5, # 28 Tlaxcala 9.3, # 29 Veracruz 29.6, # 30 Yucatán 0.4]) # 31 Zacatecas cols = cm.Greens((cols-0.2)/(33.8-.2)) fcs = 32*[''] for i in range(32): fcs[i] = colors.rgb2hex(cols[i,:]) tm.country('MEX', bmap=m_x, fc=fcs, ec='k', lw=.5, adm=1) # Add visited cities tm.city([0, 0], '', m_x) # Save-path 
#plt.savefig(fpath+'sprachenvielfalt/KarteEinheimischeSprachen.png', bbox_inches='tight') plt.show() Explanation: Karte indigene Sprachen End of explanation
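Both map cells normalise the per-state values by hand before looking up colours. The same dance, factored into a helper — a sketch only; tm.country and the fixed 32-state ordering come from the notebook's own travelmaps2 module:

from matplotlib import cm, colors

def state_facecolors(values, cmap=cm.Greens):
    # Scale linearly between the min and max, then convert to hex colours
    vmin, vmax = min(values), max(values)
    return [colors.rgb2hex(cmap((v - vmin) / (vmax - vmin))) for v in values]

# Usage: fcs = state_facecolors(per_state_values)
#        tm.country('MEX', bmap=m_x, fc=fcs, ec='k', lw=.5, adm=1)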
Given the following text description, write Python code to implement the functionality described below step by step
Description: Deep Learning with MNIST
This is the mnist_mlp code with all the blanks filled in for you. The original Keras example is here: https://github.com/fchollet/keras
Step1: Each of our 60,000 handwritten digits comes prepackaged as a 28x28 matrix, which you can think of as a primitive 28x28 pixel photo of a digit. The elements in the matrix take on values between 0 and 255, depending on how dark the writing is in that "pixel". But we usually train our models on feature vectors, not matrices. Go ahead and use numpy's reshape method to reshape our matrix into a 28x28 = 784-dimensional vector.
Signature np.reshape(a, newshape, order='C')
Docstring Gives a new shape to an array without changing its data.
Parameters a
Step2: Do the same for our test set, X_test. Don't forget to check how many data points it has with the shape method.
Step3: But since we're training on the CPU and don't want to wait all day for our models to train, we'll use only the first 6000 data points for our actual training set.
Step4: This part just makes sure numpy casts our data values to the correct type.
Step5: When training neural networks, it's important to normalize our data. There are multiple reasons for this. Normalizing lets us not worry about any scaling effects our input might have on our model (for example, one feature is measured in millimeters, and another in miles). Furthermore, when we randomly initialize the weights of our neurons, we won't have to worry (as much) that our neurons will become saturated. Saturation occurs when the output of our neuron is very near one or zero, no matter the input. Remember, the output of a (sigmoidal) neuron is a linear combination of its inputs passed into a logistic function. The gradient of the logistic function is very small when its output is near 1 or 0, hence the gradient in the backpropagation algorithm will be very small and our neurons' weights will update too slowly.
Normalize (Well, standardize, technically) X_train so that its mean is 0 and standard deviation 1. (Hint
Step6: Since our output is a linear combination of some input from our hidden layer, we need to make sure our classes are one-hot encoded. For example, if our neural net is 50% sure a digit is a 2 and 50% sure it is a 4, we don't want our network predicting a 3!
Step7: Now we can finally start building our model! Keras has two model types. Sequential (below) is a typical neural network with layers stacked one on top of the other. The other model, Graph, models any network that can be represented as a directed acyclic graph (DAG). Can you imagine a neural network that violates one or both of the DAG properties?
Step8: Here is the blueprint for our model. It's your job to write the code. Reference the Keras docs (or look over this quick introduction for all the code you need to know).
Add a dense hidden layer 512 neurons wide that accepts our 784 dimensional input. (What does it mean for a layer to be dense?). Give these neurons a 'sigmoid' activation function.
Add another dense hidden layer 512 neurons wide, each neuron with a sigmoid activation function.
Finally, add a dense layer 10 neurons wide with a 'softmax' activation function. (What do you think this layer is doing?)
Step9: Our model requires two other parameters
Step10: At last, the heavy lifting portion of our code, the fitting of the model. Call model.fit with our training data, a batch_size of 128, and 1 epoch.
A batch is how many datapoints our neural net uses to calculate the gradient before taking a step in the gradient descent process. Batches are taken sequentially from the entire training set during optimization. An epoch is a single pass through the data during optimization. For example, if we are training on 100 data points, and our batch size is 10, then we will have completed an epoch after processing 10 batches.
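A quick sketch of that arithmetic; the numbers are illustrative and match the 6,000-sample training set this notebook uses:

import math

n_train, batch_size, epochs = 6000, 128, 1
steps_per_epoch = math.ceil(n_train / batch_size)  # 47 gradient updates per epoch
total_updates = steps_per_epoch * epochs
print(f'{steps_per_epoch} batches per epoch, {total_updates} updates in total')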
Python Code: from __future__ import print_function import numpy as np np.random.seed(1337) # for reproducibility from keras.datasets import mnist from keras.models import Sequential from keras.layers.core import Dense, Dropout, Activation from keras.optimizers import SGD, Adam, RMSprop from keras.utils import np_utils (X_train, y_train), (X_test, y_test) = mnist.load_data() X_train X_train.shape Explanation: Deep Learning with MNIST This is the mnist_mlp code with all the blanks filled in for you. The original Keras example is here: https://github.com/fchollet/keras This version is meant to be run on the CPU. It trains on less data and will give you less impressive predictive performance than the original example code (which is meant to be run on a GPU). Train a simple deep NN on the MNIST dataset. 16 seconds per 6,000 data instance epoch on Intel Celeron N2840 @ 2.16GHz (Sigmoid) 14 seconds per 6,000 data instance epoch on Intel Celeron N2480 @ 2.16GHz (ReLU) 20 seconds per 6,000 data instance epoch on Intel Celeron N2480 @ 2.16GHz (ReLU, validation_data) 2 seconds per 60,000 data instance epoch on a K520 GPU. End of explanation # Reshape the 60,000 28x28 matrices in X_train into a 60000x784 dimensional matrix X_train = X_train.reshape(60000, 784) Explanation: Each of our 60,000 handwritten digits comes prepackaged as a 28x28 matrix, which you can think of as a primitive 28x28 pixel photo of a digit. The elements in the matrix take on values between 0 and 255, depending on how dark the writing is in that "pixel". But we usually train our models on feature vectors, not matrices. Go ahead and use numpy's reshape method to reshape our matrix into a 28x28 = 784-dimensional vector. Signature np.reshape(a, newshape, order='C') Docstring Gives a new shape to an array without changing its data. Parameters a : array_like Array to be reshaped. newshape : int or tuple of ints The new shape should be compatible with the original shape. If an integer, then the result will be a 1-D array of that length. One shape dimension can be -1. In this case, the value is inferred from the length of the array and remaining dimensions. Returns reshaped_array : ndarray This will be a new view object if possible; otherwise, it will be a copy. See Also ndarray.reshape : Equivalent method. End of explanation # Do the same for X_test X_test = X_test.reshape(10000, 784) Explanation: Do the same for our test set, X_test. Don't forget to check how many data points it has with the shape method. End of explanation X_train = X_train[:6000] y_train = y_train[:6000] X_test = X_test[:5000] # we'll use a half-sized test set, too y_test = y_test[:5000] Explanation: But since we're training on the CPU and don't want to wait all day for our models to train, we'll use only the first 6000 data points for our actual training set. End of explanation X_train = X_train.astype('float32') X_test = X_test.astype('float32') Explanation: This part just makes sure numpy casts our data values to the correct type. End of explanation # Standardize X_train and X_test train_mean = np.mean(X_train) train_std = np.std(X_train) X_train -= train_mean X_train /= train_std X_test -= train_mean X_test /= train_std Explanation: When training neural networks, it's important to normalize our data. There are multiple reasons for this. Normalizing lets us not worry about any scaling effects our input might have on our model (for example, one feature is measured in millimeters, and another in miles). 
Furthermore, when we randomly initialize the weights of our neurons, we won't have to worry (as much) that our neurons will become saturated. Saturation occurs when the output of our neuron is very near one or zero, no matter the input. Remember, the output of a (sigmoidal) neuron is a linear combination of its inputs passed into a logistic function. The gradient of the logistic function is very small when its output is near 1 or 0, hence the gradient in the backpropagation algorithm will be very small and our neurons' weights will update too slowly.
Normalize (Well, standardize, technically) X_train so that its mean is 0 and standard deviation 1. (Hint: subtract the mean from each datapoint then divide by the standard deviation). Also standardize X_test, but use the mean and standard deviation from the training data.
End of explanation
print("Class representation before one hot encoding:\n", y_train[0:3])
Y_train = np_utils.to_categorical(y_train, 10)
Y_test = np_utils.to_categorical(y_test, 10)
print("Class representation after one hot encoding:\n", Y_train[0:3])
Explanation: Since our output is a linear combination of some input from our hidden layer, we need to make sure our classes are one-hot encoded. For example, if our neural net is 50% sure a digit is a 2 and 50% sure it is a 4, we don't want our network predicting a 3!
End of explanation
model = Sequential()
Explanation: Now we can finally start building our model! Keras has two model types. Sequential (below) is a typical neural network with layers stacked one on top of the other. The other model, Graph, models any network that can be represented as a directed acyclic graph (DAG). Can you imagine a neural network that violates one or both of the DAG properties? :-)
End of explanation
# Your multi-layer perceptron here
# sigmoid, no dropout
model.add(Dense(output_dim=684, input_dim=784))
model.add(Activation('sigmoid'))
model.add(Dense(684))
model.add(Activation('sigmoid'))
model.add(Dense(10))
model.add(Activation('softmax'))

# ReLU, no dropout
'''
model.add(Dense(output_dim=684, input_dim=784))
model.add(Activation('relu'))
model.add(Dense(684))
model.add(Activation('relu'))
model.add(Dense(10))
model.add(Activation('softmax'))
'''

'''
# ReLU, dropout
model.add(Dense(output_dim=684, input_dim=784))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(684))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(10))
model.add(Activation('softmax'))
'''
Explanation: Here is the blueprint for our model. It's your job to write the code. Reference the Keras docs (or look over this quick introduction for all the code you need to know).
Add a dense hidden layer 512 neurons wide that accepts our 784 dimensional input. (What does it mean for a layer to be dense?). Give these neurons a 'sigmoid' activation function.
Add another dense hidden layer 512 neurons wide, each neuron with a sigmoid activation function.
Finally, add a dense layer 10 neurons wide with a 'softmax' activation function. (What do you think this layer is doing?)
End of explanation
model.compile(loss='categorical_crossentropy', optimizer='sgd')
Explanation: Our model requires two other parameters: a loss (objective) function and an optimization method. For our loss function we will use what Keras calls 'categorical_crossentropy', which also goes by 'multiclass logloss'. Keras has a fair number of objective functions to choose from, but categorical_crossentropy is the only one that really makes sense here.
We will use stochastic gradient descent ('sgd') for our optimization method. Stochastic, because it randomly selects a sample from our training data to calculate the gradient during backpropagation (or iterates sequentially over a random shuffling of the data). Gradient descent because... err, it's doing gradient descent.
End of explanation
# Call model.fit with batch_size = 128 and nb_epoch = 1
# If you get an I/O error, make sure your model
# completely finishes compiling before running this line.
model.fit(X_train, Y_train, batch_size=128, nb_epoch=1,
          validation_data=(X_test, Y_test), show_accuracy=True)

results = model.evaluate(X_test, Y_test, show_accuracy=True)
print("Loss on Test:", results[0])
print("Accuracy on Test:", results[1])
Explanation: At last, the heavy lifting portion of our code, the fitting of the model. Call model.fit with our training data, a batch_size of 128, and 1 epoch.
A batch is how many datapoints our neural net uses to calculate the gradient before taking a step in the gradient descent process. Batches are taken sequentially from the entire training set during optimization. An epoch is a single pass through the data during optimization. For example, if we are training on 100 data points, and our batch size is 10, then we will have completed an epoch after processing 10 batches.
End of explanation
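The nb_epoch and show_accuracy arguments date from Keras 0.x. In any recent Keras release the same cells would read roughly as follows — a sketch, since metric plumbing differs slightly across versions:

model.compile(loss='categorical_crossentropy', optimizer='sgd',
              metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=128, epochs=1,
          validation_data=(X_test, Y_test))
loss, acc = model.evaluate(X_test, Y_test)
print("Loss on Test:", loss)
print("Accuracy on Test:", acc)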
Given the following text description, write Python code to implement the functionality described below step by step
Description: Step1: Image Classification
In this project, you will classify images from the CIFAR-10 dataset. The dataset contains airplanes, cats, dogs, and other objects. You will preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You will apply what you have learned to build convolutional, max pooling, dropout, and fully connected layers. Finally, you will see the neural network's predictions on sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset (Python version).
Step2: Explore the Data
The dataset is split into several parts/batches so that your machine does not run out of memory during computation. The CIFAR-10 dataset consists of 5 parts, named data_batch_1, data_batch_2, and so on. Each part contains labels and images for the following classes:
airplane
automobile
bird
cat
deer
dog
frog
horse
ship
truck
Understanding the dataset is a necessary step towards making predictions on it. You can explore the code cell below by changing batch_id and sample_id. batch_id is the ID of one part of the dataset (1 to 5). sample_id is the ID of an image-and-label pair within that part.
Ask yourself: "What are the possible labels?", "What is the value range of the image data?", "Are the labels in order or random?". Thinking through questions like these helps you preprocess the data and makes the predictions more accurate.
Step5: Implement the Preprocessing Functions
Normalization
In the cell below, implement the normalize function: it takes image data x and returns it as a normalized Numpy array. Values should be in the range 0 to 1, inclusive. The returned object should have the same shape as x.
Step8: One-hot Encoding
As in the previous code cell, you will implement a preprocessing function. This time, implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible label values are 0 to 9. The function should return the same encoding for each value every time one_hot_encode is called, so make sure you save the encoding map outside the function.
Hint: don't reinvent the wheel.
Step10: Randomize the Data
While exploring the data earlier, you saw that the order of the samples is already random. Randomizing again wouldn't hurt, but it isn't necessary for this dataset.
Preprocess All the Data and Save It
Run the code cell below to preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Checkpoint
This is your first checkpoint. If you ever decide to come back to this notebook or need to restart it, you can start from here. The preprocessed data has been saved locally.
Step17: Build the Network
For this neural network, you will build each layer as a function. Most of the code you see lives outside of functions. To test your code more thoroughly, we ask you to put each layer in a function. This lets us give better feedback and catch simple mistakes with our unit tests before you submit the project.
Note: if you find it hard to set aside enough time each week for this course, we offer a small shortcut for this project. For the next several problems, you may use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except for the layers in the "Convolution and Max Pooling layers" section. TF Layers is similar to Keras and TFLearn layers, so it is easy to pick up.
However, if you want to get the most out of this course, try to solve every problem yourself without using any classes from the TF Layers package. You can still use classes from other packages whose names match those in TF Layers! For example, you can use the TF Neural Network version of the conv2d class, tf.nn.conv2d, instead of the TF Layers version, tf.layers.conv2d.
Let's get started!
Inputs
The neural network needs to read the image data, the one-hot encoded labels, and the dropout keep probability. Implement the following functions:
Implement neural_net_image_input
Return a TF Placeholder
Set the shape using image_shape, with the batch size set to None
Name the TensorFlow placeholder "x" using the TensorFlow name parameter of TF Placeholder
Implement neural_net_label_input
Return a TF Placeholder
Set the shape using n_classes, with the batch size set to None
Name the TensorFlow placeholder "y" using the TensorFlow name parameter of TF Placeholder
Implement neural_net_keep_prob_input
Return a TF Placeholder for the dropout keep probability
Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter of TF Placeholder
These names will be used at the end of the project to load the saved model.
Note: None in TensorFlow means the shape can be a dynamic size.
Step20: Convolution and Max Pooling Layers
Convolutional layers are well suited to images. For this code cell, you should implement the function conv2d_maxpool to apply a convolution followed by max pooling:
Create the weight and bias using conv_ksize, conv_num_outputs, and the shape of x_tensor.
Apply a convolution to x_tensor using the weight and conv_strides. We recommend the suggested padding, but you may use any other padding.
Add the bias.
Add a nonlinear activation to the convolution.
Apply max pooling using pool_ksize and pool_strides. We recommend the suggested padding, but you may use any other padding.
Note: for this layer, do not use TensorFlow Layers or TensorFlow Layers (contrib), but you may still use TensorFlow's Neural Network package. For all other layers, you may still use the shortcuts.
Step23: Flatten Layer
Implement the flatten function to change x_tensor from a 4-D tensor to a 2-D tensor. The output should have the shape (Batch Size, Flattened Image Size). Shortcut: for this layer, you may use a class from the TensorFlow Layers or TensorFlow Layers (contrib) packages. For a bigger challenge, use only other TensorFlow packages.
Step26: Fully Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut: for this layer, you may use a class from the TensorFlow Layers or TensorFlow Layers (contrib) packages. For a bigger challenge, use only other TensorFlow packages.
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut: for this layer, you may use a class from the TensorFlow Layers or TensorFlow Layers (contrib) packages. For a bigger challenge, use only other TensorFlow packages.
Note: this layer should not apply an activation, softmax, or cross entropy.
Step32: Create a Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to build this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Use keep_prob to apply TensorFlow's Dropout to one or more layers in the model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to perform a single optimization. The optimization should use optimizer to optimize in session, with a feed_dict containing the following parameters:
x for the image input
y for the labels
keep_prob for the dropout keep probability
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: nothing needs to be returned. This function only optimizes the neural network.
Step37: Show Stats
Implement the function print_stats to print the loss and validation accuracy. Use the global variables valid_features and valid_labels to compute the validation accuracy. Use a keep probability of 1.0 to compute the loss and validation accuracy.
Step38: Hyperparameters
Tune the following hyperparameters:
* Set epochs to the number of iterations at which the network stops learning or starts overfitting
* Set batch_size to the largest size your machine's memory allows. Most people set it to one of the common memory sizes:
64
128
256
...
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches, let's start with a single batch. This saves time and lets you iterate on the model to improve its accuracy. Once the final validation accuracy reaches 50% or higher, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you are getting decent accuracy on a single CIFAR-10 batch, try all five batches.
Step45: Checkpoint
The model has been saved locally.
Test the Model
Test your model against the test dataset. This will be the final accuracy. Your accuracy should be higher than 50%. If it isn't, keep tuning the model architecture and parameters.
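The "don't reinvent the wheel" hint in the one-hot step points at ready-made encoders. Besides the np.eye trick used in the solution below, scikit-learn ships one; a sketch, assuming scikit-learn is installed in the environment:

from sklearn import preprocessing
import numpy as np

lb = preprocessing.LabelBinarizer()
lb.fit(range(10))  # fix the class set once so encodings stay consistent

def one_hot_encode_sk(x):
    # Same contract as the notebook's one_hot_encode, backed by scikit-learn
    return lb.transform(x)

print(one_hot_encode_sk([0, 9, 3]))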
Python Code:
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile

cifar10_dataset_folder_path = 'cifar-10-batches-py'

# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
    tar_gz_path = floyd_cifar10_location
else:
    tar_gz_path = 'cifar-10-python.tar.gz'

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile(tar_gz_path):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
        urlretrieve(
            'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
            tar_gz_path,
            pbar.hook)

if not isdir(cifar10_dataset_folder_path):
    with tarfile.open(tar_gz_path) as tar:
        tar.extractall()
        tar.close()

tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you will classify images from the CIFAR-10 dataset. The dataset contains airplanes, cats, dogs, and other objects. You will preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You will apply what you have learned to build convolutional, max pooling, dropout, and fully connected layers. Finally, you will see the neural network's predictions on sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset (Python version).
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np

# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is split into several parts/batches so that your machine does not run out of memory during computation. The CIFAR-10 dataset consists of 5 parts, named data_batch_1, data_batch_2, and so on. Each part contains labels and images for the following classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.
Understanding the dataset is a necessary step towards making predictions on it. You can explore the code cell below by changing batch_id and sample_id. batch_id is the ID of one part of the dataset (1 to 5). sample_id is the ID of an image-and-label pair within that part.
Ask yourself: "What are the possible labels?", "What is the value range of the image data?", "Are the labels in order or random?". Thinking through questions like these helps you preprocess the data and makes the predictions more accurate.
End of explanation
def normalize(x):
    """Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data. The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
    """
    # TODO: Implement Function
    return x/255

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
Explanation: Implement the Preprocessing Functions
Normalization
In the cell below, implement the normalize function: it takes image data x and returns it as a normalized Numpy array. Values should be in the range 0 to 1, inclusive. The returned object should have the same shape as x.
End of explanation
def one_hot_encode(x):
    """One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    # TODO: Implement Function
    return np.eye(10)[x]

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot Encoding
As in the previous code cell, you will implement a preprocessing function. This time, implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible label values are 0 to 9. The function should return the same encoding for each value every time one_hot_encode is called, so make sure you save the encoding map outside the function.
Hint: don't reinvent the wheel.
End of explanation
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize the Data
While exploring the data earlier, you saw that the order of the samples is already random. Randomizing again wouldn't hurt, but it isn't necessary for this dataset.
Preprocess All the Data and Save It
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper

# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Checkpoint
This is your first checkpoint. If you ever decide to come back to this notebook or need to restart it, you can start from here. The preprocessed data has been saved locally.
End of explanation
import tensorflow as tf

def neural_net_image_input(image_shape):
    """Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    # TODO: Implement Function
    return tf.placeholder(tf.float32, (None, *image_shape), name='x')

def neural_net_label_input(n_classes):
    """Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    # TODO: Implement Function
    return tf.placeholder(tf.float32, (None, n_classes), name='y')

def neural_net_keep_prob_input():
    """Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    # TODO: Implement Function
    return tf.placeholder(tf.float32, name='keep_prob')

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the Network
For this neural network, you will build each layer as a function. Most of the code you see lives outside of functions. To test your code more thoroughly, we ask you to put each layer in a function. This lets us give better feedback and catch simple mistakes with our unit tests before you submit the project.
Note: if you find it hard to set aside enough time each week for this course, we offer a small shortcut for this project. For the next several problems, you may use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except for the layers in the "Convolution and Max Pooling layers" section. TF Layers is similar to Keras and TFLearn layers, so it is easy to pick up.
However, if you want to get the most out of this course, try to solve every problem yourself without using any classes from the TF Layers package. You can still use classes from other packages whose names match those in TF Layers! For example, you can use the TF Neural Network version of the conv2d class, tf.nn.conv2d, instead of the TF Layers version, tf.layers.conv2d.
Let's get started!
Inputs
The neural network needs to read the image data, the one-hot encoded labels, and the dropout keep probability. Implement the following functions:
Implement neural_net_image_input
Return a TF Placeholder
Set the shape using image_shape, with the batch size set to None
Name the TensorFlow placeholder "x" using the TensorFlow name parameter of TF Placeholder
Implement neural_net_label_input
Return a TF Placeholder
Set the shape using n_classes, with the batch size set to None
Name the TensorFlow placeholder "y" using the TensorFlow name parameter of TF Placeholder
Implement neural_net_keep_prob_input
Return a TF Placeholder for the dropout keep probability
Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter of TF Placeholder
These names will be used at the end of the project to load the saved model.
Note: None in TensorFlow means the shape can be a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    # TODO: Implement Function
    # variables
    input_depth = x_tensor.shape[-1].value
    weight = tf.Variable(tf.truncated_normal(
        [conv_ksize[0], conv_ksize[1], input_depth, conv_num_outputs], stddev=0.1))
    bias = tf.Variable(tf.zeros([conv_num_outputs]))
    # conv2d
    conv_layer = tf.nn.conv2d(x_tensor, weight,
                              strides=[1, conv_strides[0], conv_strides[1], 1],
                              padding='SAME')
    conv_layer = tf.nn.bias_add(conv_layer, bias)
    # conv_layer = tf.nn.relu(conv_layer)  # moved after maxpool: ReLU is
    # monotonic, so the result is identical but the activation runs on fewer values
    # maxpool
    conv_layer = tf.nn.max_pool(
        conv_layer,
        ksize=[1, pool_ksize[0], pool_ksize[1], 1],
        strides=[1, pool_strides[0], pool_strides[1], 1],
        padding='SAME')
    return tf.nn.relu(conv_layer)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layers
Convolutional layers are well suited to images. For this code cell, you should implement the function conv2d_maxpool to apply a convolution followed by max pooling:
Create the weight and bias using conv_ksize, conv_num_outputs, and the shape of x_tensor.
Apply a convolution to x_tensor using the weight and conv_strides. We recommend the suggested padding, but you may use any other padding.
Add the bias.
Add a nonlinear activation to the convolution.
Apply max pooling using pool_ksize and pool_strides. We recommend the suggested padding, but you may use any other padding.
Note: for this layer, do not use TensorFlow Layers or TensorFlow Layers (contrib), but you may still use TensorFlow's Neural Network package. For all other layers, you may still use the shortcuts.
End of explanation
def flatten(x_tensor):
    """Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    # TODO: Implement Function
    return tf.contrib.layers.flatten(x_tensor)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change x_tensor from a 4-D tensor to a 2-D tensor. The output should have the shape (Batch Size, Flattened Image Size). Shortcut: for this layer, you may use a class from the TensorFlow Layers or TensorFlow Layers (contrib) packages. For a bigger challenge, use only other TensorFlow packages.
End of explanation
def fully_conn(x_tensor, num_outputs):
    """Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # TODO: Implement Function
    return tf.contrib.layers.fully_connected(x_tensor, num_outputs)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
Explanation: Fully Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut: for this layer, you may use a class from the TensorFlow Layers or TensorFlow Layers (contrib) packages. For a bigger challenge, use only other TensorFlow packages.
End of explanation
def output(x_tensor, num_outputs):
    """Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # TODO: Implement Function
    return tf.contrib.layers.fully_connected(x_tensor, num_outputs, activation_fn=None)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut: for this layer, you may use a class from the TensorFlow Layers or TensorFlow Layers (contrib) packages. For a bigger challenge, use only other TensorFlow packages.
Note: this layer should not apply an activation, softmax, or cross entropy.
End of explanation
def conv_net(x, keep_prob):
    """Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that holds the dropout keep probability.
    : return: Tensor that represents logits
    """
    # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
    #    Play around with different number of outputs, kernel size and stride
    # Function Definition from Above:
    #    conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    # Note: each call below takes x, so only the last conv/pool stage feeds the network
    conv_layer = conv2d_maxpool(x, 32, (3, 3), (1, 1), (2, 2), (2, 2))
    conv_layer = conv2d_maxpool(x, 64, (3, 3), (1, 1), (2, 2), (2, 2))
    conv_layer = conv2d_maxpool(x, 128, (3, 3), (3, 3), (2, 2), (2, 2))
    conv_layer = tf.nn.dropout(conv_layer, keep_prob)

    # TODO: Apply a Flatten Layer
    # Function Definition from Above:
    #   flatten(x_tensor)
    flat_layer = flatten(conv_layer)

    # TODO: Apply 1, 2, or 3 Fully Connected Layers
    #    Play around with different number of outputs
    # Function Definition from Above:
    #   fully_conn(x_tensor, num_outputs)
    fc_layer = fully_conn(flat_layer, 512)
    fc_layer = tf.nn.dropout(fc_layer, keep_prob)

    # TODO: Apply an Output Layer
    #    Set this to the number of classes
    # Function Definition from Above:
    #   output(x_tensor, num_outputs)
    output_layer = output(fc_layer, 10)

    # TODO: return output
    return output_layer

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name the logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)
Explanation: Create a Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to build this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Use keep_prob to apply TensorFlow's Dropout to one or more layers in the model
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    # TODO: Implement Function
    session.run(optimizer, feed_dict={
        x: feature_batch,
        y: label_batch,
        keep_prob: keep_probability})

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to perform a single optimization. The optimization should use optimizer to optimize in session, with a feed_dict containing the following parameters:
x for the image input
y for the labels
keep_prob for the dropout keep probability
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: nothing needs to be returned. This function only optimizes the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    # TODO: Implement Function
    loss = session.run(cost, feed_dict={
        x: feature_batch,
        y: label_batch,
        keep_prob: 1.})
    valid_acc = session.run(accuracy, feed_dict={
        x: valid_features,
        y: valid_labels,
        keep_prob: 1.})
    print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, valid_acc))
Explanation: Show Stats
Implement the function print_stats to print the loss and validation accuracy. Use the global variables valid_features and valid_labels to compute the validation accuracy. Use a keep probability of 1.0 to compute the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 30
batch_size = 256
keep_probability = 0.7
Explanation: Hyperparameters
Tune the following hyperparameters:
* Set epochs to the number of iterations at which the network stops learning or starts overfitting
* Set batch_size to the largest size your machine's memory allows. Most people set it to one of the common memory sizes:
64
128
256
...
Set keep_probability to the probability of keeping a node when using dropout
End of explanation
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    # Training cycle
    for epoch in range(epochs):
        batch_i = 1
        for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
            train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
        print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
        print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches, let's start with a single batch. This saves time and lets you iterate on the model to improve its accuracy. Once the final validation accuracy reaches 50% or higher, run the model on all the data in the next section.
End of explanation
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'

print('Training...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    # Training cycle
    for epoch in range(epochs):
        # Loop over all batches
        n_batches = 5
        for batch_i in range(1, n_batches + 1):
            for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)

    # Save Model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you are getting decent accuracy on a single CIFAR-10 batch, try all five batches.
End of explanation
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import tensorflow as tf
import pickle
import helper
import random

# Set batch size if not already set
try:
    if batch_size:
        pass
except NameError:
    batch_size = 64

save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3

def test_model():
    """Test the saved model against the test dataset"""
    test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
    loaded_graph = tf.Graph()

    with tf.Session(graph=loaded_graph) as sess:
        # Load model
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')

        # Get accuracy in batches for memory limitations
        test_batch_acc_total = 0
        test_batch_count = 0

        for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
            test_batch_acc_total += sess.run(
                loaded_acc,
                feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
            test_batch_count += 1

        print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))

        # Print Random Samples
        random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)

test_model()
Explanation: Checkpoint
The model has been saved locally.
Test the Model
Test your model against the test dataset. This will be the final accuracy. Your accuracy should be higher than 50%. If it isn't, keep tuning the model architecture and parameters.
End of explanation
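One loose end from the output-layer cell above: the output stays linear because tf.nn.softmax_cross_entropy_with_logits applies softmax internally, which is also the numerically safe place to do it. A small numpy illustration of the log-sum-exp trick that such fused implementations rely on — illustrative only, not part of the project code:

import numpy as np

logits = np.array([1000.0, 1001.0, 1002.0])   # naive np.exp(logits) overflows here
shifted = logits - logits.max()               # subtract the max first
log_softmax = shifted - np.log(np.exp(shifted).sum())
print(log_softmax)                            # finite log-probabilities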
Given the following text description, write Python code to implement the functionality described below step by step
Description: Sistema Nacional de Información e Indicadores de Vivienda (National Housing Information and Indicator System)
ID |Descripción
---|:----------
P0405|Viviendas Verticales
P0406|Viviendas urbanas en PCU U1 y U2
P0411|Subsidios CONAVI
Step1: 2. Downloading the data
The data is downloaded through a connection to a SOAP service provided by the SNIIV. To access the data offered by each of the SNIIV services, a POST request has to be made, specifying in the header the service to be accessed and, in the request body, an XML with parameters so that the SNIIV server can return a response
Step2: The above is an example of how data can be obtained from the service exposed by the SNIIV. Based on this example, it is possible to write a function that performs the requests more compactly in each case.
Step3: With the script in place, it is possible to iterate over all the operations the SNIIV server exposes, in order to explore the available data and extract data for the parameters.
Step4: The following will be reviewed
Python Code:
descripciones = {
    'P0405': 'Viviendas Verticales',
    'P0406': 'Viviendas urbanas en PCU U1 y U2',
    'P0411': 'Subsidios CONAVI'
}

# Libraries used
import pandas as pd
import sys
import urllib
import os
import csv
import zeep
import requests
from lxml import etree
import xmltodict
import ast
import collections

# System configuration
print('Python {} on {}'.format(sys.version, sys.platform))
print('Pandas version: {}'.format(pd.__version__))
import platform; print('Running on {} {}'.format(platform.system(), platform.release()))
Explanation: Sistema Nacional de Información e Indicadores de Vivienda
ID |Descripción
---|:----------
P0405|Viviendas Verticales
P0406|Viviendas urbanas en PCU U1 y U2
P0411|Subsidios CONAVI
End of explanation
# This cell contains the standard texts for the headers and the body of the
# operation requested from the server.
# The {} placeholders specify the operation to be accessed
scheme = r'http://www.conavi.gob.mx:8080/WS_App_SNIIV.asmx?WSDL'  # The scheme is always the same
SOAPAction = ('http://www.conavi.gob.mx:8080/WS_App_SNIIV/{}')
xmlbody = (
    '<?xml version="1.0" encoding="utf-8"?>'
    '<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    '<soap:Body>'
    '<{} xmlns="http://www.conavi.gob.mx:8080/WS_App_SNIIV">'
    '<dat></dat>'
    '</{}>'
    '</soap:Body>'
    '</soap:Envelope>'
)

# Connection and data download
operacion = 'Subsidios'
heads = {'Content-Type': 'text/xml; charset=utf-8',
         'SOAPAction': SOAPAction.format(operacion)}
body = xmlbody.format(operacion, operacion)
r = requests.post(scheme, data=body, headers=heads)
print(r.content[0:300])
list(r.headers.keys())
r = xmltodict.parse(r.content)
r.keys()
# After parsing the response into a Python dictionary, the data sits several
# levels down, so those levels have to be walked to reach the useful data
rodict = r['soap:Envelope']['soap:Body']['{}Response'.format(operacion)]['{}Result'.format(operacion)]['app_sniiv_rep_subs']
rodict[0]
# With the data parsed as an OrderedDict it is now possible to build a DataFrame as follows.
pd.DataFrame(rodict, columns=rodict[0].keys()).head()
Explanation: 2. Downloading the data
The data is downloaded through a connection to a SOAP service provided by the SNIIV.
To access the data offered by each of the SNIIV services, a POST request has to be made, specifying in the header the service to be accessed and, in the request body, an XML with parameters so that the SNIIV server can return a response
End of explanation
def getsoap(operacion):
    scheme = r'http://www.conavi.gob.mx:8080/WS_App_SNIIV.asmx?WSDL'  # The scheme is always the same
    SOAPAction = ('http://www.conavi.gob.mx:8080/WS_App_SNIIV/{}')
    xmlbody = (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        '<soap:Body>'
        '<{} xmlns="http://www.conavi.gob.mx:8080/WS_App_SNIIV">'
        '<dat></dat>'
        '</{}>'
        '</soap:Body>'
        '</soap:Envelope>'
    )
    heads = {'Content-Type': 'text/xml; charset=utf-8',
             'SOAPAction': SOAPAction.format(operacion)}
    body = xmlbody.format(operacion, operacion)
    r = requests.post(scheme, data=body, headers=heads)
    if r.status_code != 200:
        # print('status: {}\n**Operation finished**'.format(r.status_code))
        return None
    else:
        print(operacion)
        print('status: {}\nContent Type: {}'.format(r.status_code, r.headers['Content-Type']))
        print('Date: {}\nContent Length: {}'.format(r.headers['Date'], r.headers['Content-Length']))
        r = xmltodict.parse(r.content)
        try:
            # Both the envelope key ('soap:Envelope') and the operation-specific
            # payload key are taken from the response itself, so this works for
            # any operation (the original referenced a global `test` that does
            # not exist on the first call)
            result = r[list(r.keys())[0]]['soap:Body']['{}Response'.format(operacion)]['{}Result'.format(operacion)]
            rodict = result[list(result.keys())[0]]
            df = pd.DataFrame(rodict, columns=rodict[0].keys())
            print('dataframe created')
            return df
        except Exception:
            print('---------\nCould not build a dataframe. Returning OrderedDict')
            return r

# Script test
test = getsoap('get_tot_fech')
test
Explanation: The above is an example of how data can be obtained from the service exposed by the SNIIV. Based on this example, it is possible to write a function that performs the requests more compactly in each case.
End of explanation operaciones = { 'viv_vig_x_avnc': 'Descripción: Obtiene la Vivienda Vigente por Avance de Obra a la última fecha de actualización, como parámetro pase una cadena vacía', 'get_tot_fech': 'Descripción: Obtiene las últimas fecha de actualización de la información referente a financiamientos, subsidios y vivienda vigente, como parámetro pase una cadena vacía', 'Financiamientos': 'Descripción: Obtiene datos de los Financiamientos por Organismo, Destino y Agrupación a la última fecha de actualización, como parámetro pase una cadena vacía', 'viv_vig_x_pcu': 'Descripción: Obtiene la Vivienda Vigente por PCU a la última fecha de actualización, como parámetro pase una cadena vacía', 'Subsidios': 'Descripción: Obtiene datos de los Subsidios CONAVI por Tipo de Entidad Ejecutora y Modalidad a la última fecha de actualización, como parámetro pase una cadena vacía', 'viv_vig_x_tipo': 'Descripción: Obtiene la Vivienda Vigente por Tipo a la última fecha de actualización, como parámetro pase una cadena vacía', 'get_tot_ini': 'Descripción: Obtiene el total de financiamientos, subsidios y vivienda vigente a la última fecha de actualización (acciones y monto), como parámetro pase una cadena vacía', 'viv_vig_x_valor': 'Descripción: Obtiene la Vivienda Vigente por Valor a la última fecha de actualización, como parámetro pase una cadena vacía', 'financiamientos_gpo_org': 'Descripción: Obtiene datos de los Financiamientos por Grupo y Organismo a nivel Nacional y Estatal a la última fecha de actualización, como parámetro pase una cadena vacía', 'get_avnc_vv_mun': 'Descripción: Obtiene la Oferta de Vivienda Vigente a nivel Municipio (Top 3) por Avance de Construcción', 'get_cont_vv_mun': 'Descripción: Obtiene la Oferta de Vivienda Vigente a nivel Municipio (Top 3) por PCU', 'get_fechas_act': 'Descripción: Obtiene últimas fecha de actualización de la información referente a financiamientos, subsidios y vivienda vigente (dd/mm/aaaa), como parámetro pase una cadena vacía', 'get_finan_evol': 'Descripción: Obtiene la evolución de Acciones y Monto de Financiamientos en los últimos 3 años (mes a mes)', 'get_finan_evol_acum': 'Descripción: Obtiene la evolución de Acciones y Monto de Financiamientos en los últimos 3 años (acumulado)', 'get_finan_rg_mun': 'Descripción: Obtiene el Reporte General de Financiamientos a nivel Municipio (Top 3)', 'get_finan_x_rgoing': 'Descripción: Obtiene Acciones y Monto de Financiamientos por Rango de Ingreso VSMM a Nivel Estatal', 'get_finan_x_rgoing_mun': 'Descripción: Obtiene Acciones y Monto de Financiamientos por Rango de Ingreso VSMM a nivel Municipio (Top 3)', 'get_finan_x_valviv': 'Descripción: Obtiene Acciones y Monto de Financiamientos por Valor de la Vivienda a Nivel Estatal', 'get_finan_x_valviv_mun': 'Descripción: Obtiene Acciones y Monto de Financiamientos por Valor de la Vivienda a nivel Municipio (Top 3)', 'get_regviv_evol': 'Descripción: Obtiene la evolución de Registro de Vivienda en los últimos 3 años (mes a mes)', 'get_regviv_evol_acum': 'Descripción: Obtiene la evolución de Registro de Vivienda en los últimos 3 años (acumulado)', 'get_subs_evol': 'Descripción: Obtiene la evolución de Acciones y Monto de Subsidios en los últimos 3 años (mes a mes)', 'get_subs_evol_acum': 'Descripción: Obtiene la evolución de Acciones y Monto de Subsidios en los últimos 3 años (acumulado)', 'get_subs_rg_mun': 'Descripción: Obtiene el Reporte General de Subsidios a nivel Municipio (Top 3)', 'get_subs_x_rgoing': 'Descripción: Obtiene Acciones y Monto de Subsidios CONAVI por 
Rango de Ingreso VSMM a Nivel Estatal',
    'get_subs_x_rgoing_mun': 'Descripción: Obtiene Acciones y Monto de Subsidios CONAVI por Rango de Ingreso VSMM a nivel Municipio (Top 3)',
    'get_subs_x_valviv': 'Descripción: Obtiene Acciones y Monto de Subsidios CONAVI por Valor de la Vivienda a Nivel Estatal',
    'get_subs_x_valviv_mun': 'Descripción: Obtiene Acciones y Monto de Subsidios CONAVI por Valor de la Vivienda a nivel Municipio (Top 3)',
    'get_tipohv_vv_mun': 'Descripción: Obtiene la Oferta de Vivienda Vigente a nivel Municipio (Top 3) por Tipo de Vivienda Horizontal - Vertical',
    'get_tipol_vv_mun': 'Descripción: Obtiene la Oferta de Vivienda Vigente a nivel Municipio (Top 3) por Tipología de Vivienda'
}

# Iterate over all the operations
datos = {}
for operacion, descripcion in operaciones.items():
    print('{}\n{}:\n{}'.format('||'*30, operacion, descripcion))
    datos[operacion] = getsoap(operacion)
Explanation: With the script in place, it is possible to iterate over all the operations the SNIIV server exposes, in order to explore the available data and extract data for the parameters.
End of explanation
# Standard wrapper keys for every xml response
senv = 'soap:Envelope'
sbo = 'soap:Body'
revdata = datos['get_subs_x_valviv_mun'][senv][sbo]['get_subs_x_valviv_munResponse']['get_subs_x_valviv_munResult']
revdata
# The data the server returned for this operation looks like JSON, but arrives as a str
type(revdata)
revdic = ast.literal_eval(revdata)
revdic = collections.OrderedDict(revdic)
revdic
revdic.keys()
# Build one OrderedDict per state before inspecting or tabulating it
revdic2 = {}
for estado in revdic.keys():
    revdic2[estado] = collections.OrderedDict(revdic[estado][0])
revdic2['01']
revdic2['01']['AGUASCALIENTES']
pd.DataFrame([revdic2['01']], columns=list(revdic2['01'].keys()))
Explanation: The following will be reviewed:
get_subs_x_valviv_mun - Subsidios CONAVI
get_subs_x_rgoing_mun - Subsidios CONAVI
get_regviv_evol_acum - Vivienda en PCU
get_cont_vv_mun - Vivienda en PCU
get_tipohv_vv_mun - Vivienda Vigente por tipo
ID |Descripción
---|:----------
P0405|Viviendas Verticales
P0406|Viviendas urbanas en PCU U1 y U2
P0411|Subsidios CONAVI
Subsidios CONAVI
The data for this indicator may be found in:
get_subs_x_valviv_mun
get_subs_x_rgoing_mun
End of explanation
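The notebook imports zeep but never uses it. Since the endpoint publishes a WSDL, a SOAP client could replace the hand-built envelopes entirely. A sketch — it assumes zeep can parse this particular WSDL and that each operation takes the single dat argument seen in the XML bodies above:

import zeep

wsdl = 'http://www.conavi.gob.mx:8080/WS_App_SNIIV.asmx?WSDL'
client = zeep.Client(wsdl=wsdl)

# Every WSDL operation becomes a plain Python method on client.service
subsidios = client.service.Subsidios(dat='')
print(subsidios)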
5,162
Given the following text description, write Python code to implement the functionality described below step by step Description: Lecture 2 continued regular expressions Provides a way to search text. Looking for matching patterns Step3: Object-oriented programming Creating classes of objects with data and methods(functions) that operate on the data. Step4: Functional Tools sometimes you want to change behavior based on the passed value types Step5: enumerate enumerates creates a tuple (index,element) Step6: Zip and argument unpacking zip forms two more more lists for other lists Step7: args and kwargs higher order function support - functions on functions
Python Code: import re

print all([
    not re.match("a","cat"),
    re.search("a","cat"),
    not re.search("c","dog"),
    3 == len(re.split("[ab]","carbs")),
    "R-D-" == re.sub("[0-9]","-","R2D2")
])  # prints true if all are true
Explanation: Lecture 2 continued regular expressions Provides a way to search text. Looking for matching patterns End of explanation
class Set:
    def __init__(self, values=None):
        """This is the constructor"""
        self.dict = {}
        if values is not None:
            for value in values:
                self.add(value)

    def __repr__(self):
        """String to represent the object, like a to_string function"""
        return "set:" + str(self.dict.keys())

    def add(self, value):
        self.dict[value] = True

    def contains(self, value):
        return value in self.dict

    def remove(self, value):
        del self.dict[value]

# using the class
s = Set([1,2,3])
s.add(4)
s.remove(3)
print s.contains(3)
print s
Explanation: Object-oriented programming Creating classes of objects with data and methods (functions) that operate on the data. End of explanation
def exp(base, power):
    return base**power

# specialize exp by fixing base=2
def two_to_the(power):
    return exp(2, power)

print( two_to_the(3) )

def multiply(x, y):
    return x*y

products = map(multiply, [1,2], [4,5])
print products
Explanation: Functional Tools sometimes you want to change behavior based on the passed value types End of explanation
# example use
documents = ["a","b"]

def do_something(index, item):
    print "index:" + str(index) + " Item:" + item

for i, document in enumerate(documents):
    do_something(i, document)

def do_something(index):
    print "index:" + str(index)

for i, _ in enumerate(documents):
    do_something(i)
Explanation: enumerate enumerate creates tuples of (index, element) End of explanation
list1 = ['a','b','c']
list2 = [1,2,3]
zip(list1, list2)

pairs = [('a', 1), ('b', 2), ('c', 3)]
letters, numbers = zip(*pairs)  # * performs argument unpacking
print letters
print numbers
Explanation: Zip and argument unpacking zip pairs up two or more lists element by element End of explanation
def doubler(f):
    def g(x):
        return 2 * f(x)
    return g

def f1(x):
    return x+1

g = doubler(f1)
print g(3)

def magic(*args, **kwargs):
    print "unnamed args", args
    print "keyword args", kwargs

magic(1, 2, key1="word1", key2="word2")
Explanation: args and kwargs higher order function support - functions on functions End of explanation
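A note on the last two ideas: the hand-written two_to_the wrapper and the doubler closure both generalize. functools.partial from the standard library builds the same kind of specialized function, and *args/**kwargs let a wrapper forward any call signature instead of exactly one argument. A small sketch in the same Python 2 style as the lecture, reusing exp and multiply from above:

from functools import partial

two = partial(exp, 2)   # same specialization as two_to_the above
print two(3)            # 8

def doubler_general(f):
    # forwards any positional/keyword arguments to the wrapped function
    def g(*args, **kwargs):
        return 2 * f(*args, **kwargs)
    return g

print doubler_general(multiply)(3, 4)  # 24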
5,163
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: I have my data in a pandas DataFrame, and it looks like the following:
Problem: import pandas as pd

df = pd.DataFrame({'cat': ['A', 'B', 'C'],
                   'val1': [7, 10, 5],
                   'val2': [10, 2, 15],
                   'val3': [0, 1, 6],
                   'val4': [19, 14, 16]})

def g(df):
    # Divide every value column by its row total so each row sums to 1
    df = df.set_index('cat')
    res = df.div(df.sum(axis=1), axis=0)
    return res.reset_index()

df = g(df.copy())
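As a quick sanity check (a sketch, assuming the df produced above), each row of the normalized value columns should now sum to 1:

normalized = df.set_index('cat')
print(normalized.sum(axis=1))                      # every row should print 1.0
assert (normalized.sum(axis=1).round(10) == 1).all()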
5,164
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: I have dfs as follows:
Problem: import pandas as pd

df1 = pd.DataFrame({'id': [1, 2, 3, 4, 5],
                    'city': ['bj', 'bj', 'sh', 'sh', 'sh'],
                    'district': ['ft', 'ft', 'hp', 'hp', 'hp'],
                    'date': ['2019/1/1', '2019/1/1', '2019/1/1', '2019/1/1', '2019/1/1'],
                    'value': [1, 5, 9, 13, 17]})
df2 = pd.DataFrame({'id': [3, 4, 5, 6, 7],
                    'date': ['2019/2/1', '2019/2/1', '2019/2/1', '2019/2/1', '2019/2/1'],
                    'value': [1, 5, 9, 13, 17]})

def g(df1, df2):
    # Pull city/district onto df2 by id, stack the two frames, then
    # render the dates as dd-Mon-YYYY and sort by id and date
    df = pd.concat([df1, df2.merge(df1[['id', 'city', 'district']], how='left', on='id')],
                   sort=False).reset_index(drop=True)
    df['date'] = pd.to_datetime(df['date'])
    df['date'] = df['date'].dt.strftime('%d-%b-%Y')
    return df.sort_values(by=['id', 'date']).reset_index(drop=True)

result = g(df1.copy(), df2.copy())
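A short check of the expected shape of result, a sketch based on the frames defined above: ids 1 to 5 keep their city/district, ids 6 and 7 only appear in df2 so those columns are NaN, and every date is rendered like 01-Jan-2019:

print(result[result['id'] == 3])   # one January row and one February row, same city/district
print(result['date'].str.match(r'\d{2}-[A-Z][a-z]{2}-\d{4}').all())  # True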
5,165
Given the following text description, write Python code to implement the functionality described below step by step Description: Logistic Regression In this note, I am going to train a logistic regression model with gradient descent estimation. A logistic regression model can be thought of as a neural network without a hidden layer, and is therefore a good entry point for learning deep learning models. Overview This note will cover Step1: Loss function, chain rule and its derivative The model can be described as Step2: Plot the cost function; as you can see, it is convex and has a global minimum. Step3: The gradient and delta_w are just the simple equations we derived above. Step4: Start training, iterating for 10 steps. w = w-dw is the key step where we update w during each iteration. Step5: Plot just 4 iterations and you can see it moving toward the global minimum.
Python Code: import numpy as np  # Matrix and vector computation package
np.seterr(all='ignore')  # ignore numpy warnings like multiplication of inf
import matplotlib.pyplot as plt  # Plotting library
from matplotlib.colors import colorConverter, ListedColormap  # some plotting functions
from matplotlib import cm  # Colormaps
# Allow matplotlib to plot inside this notebook
%matplotlib inline
# Set the seed of the numpy random number generator so that the tutorial is reproducible
np.random.seed(seed=1)

# Define and generate the samples
nb_of_samples_per_class = 20  # The number of samples in each class
red_mean = [-1,0]   # The mean of the red class
blue_mean = [1,0]   # The mean of the blue class
std_dev = 1.2       # standard deviation of both classes
# Generate samples from both classes
x_red = np.random.randn(nb_of_samples_per_class, 2) * std_dev + red_mean
x_blue = np.random.randn(nb_of_samples_per_class, 2) * std_dev + blue_mean

# Merge samples in set of input variables x, and corresponding set of output variables t
X = np.vstack((x_red, x_blue))  # 40x2
t = np.vstack((np.zeros((nb_of_samples_per_class,1)), np.ones((nb_of_samples_per_class,1))))  # 40x1

# Plot both classes on the x1, x2 plane
plt.plot(x_red[:,0], x_red[:,1], 'ro', label='class red')
plt.plot(x_blue[:,0], x_blue[:,1], 'bo', label='class blue')
plt.grid()
plt.legend(loc=2)
plt.xlabel('$x_1$', fontsize=15)
plt.ylabel('$x_2$', fontsize=15)
plt.axis([-4, 4, -4, 4])
plt.title('red vs. blue classes in the input space')
plt.show()
Explanation: Logistic Regression In this note, I am going to train a logistic regression model with gradient descent estimation. A logistic regression model can be thought of as a neural network without a hidden layer, and is therefore a good entry point for learning deep learning models. Overview This note will cover: * Prepare the data * Loss function, chain rule and its derivative * Code implementation Prepare the data Here we are generating 20 data points from each of 2 class distributions: blue $(t=1)$ and red $(t=0)$ End of explanation
# Define the logistic function
def logistic(z):
    return 1 / (1 + np.exp(-z))

# Define the neural network function y = 1 / (1 + numpy.exp(-x*w))
# x: 40x2 and w: 1x2, so use w.T here
def nn(x, w):
    return logistic(x.dot(w.T))  # 40x1 -> this is y

# Define the neural network prediction function that only returns
# 1 or 0 depending on the predicted class
def nn_predict(x, w):
    return np.around(nn(x, w))

# Define the cost function
def cost(y, t):
    return - np.sum(np.multiply(t, np.log(y)) + np.multiply((1-t), np.log(1-y)))  # y and t are both 40x1
Explanation: Loss function, chain rule and its derivative The model can be described as: $$ y = \sigma(\mathbf{x} * \mathbf{w}^T) $$ $$\sigma(z) = \frac{1}{1+e^{-z}}$$ The parameter set $w$ can be optimized by maximizing the likelihood: $$\underset{\theta}{\text{argmax}}\; \mathcal{L}(\theta|t,z) = \underset{\theta}{\text{argmax}} \prod_{i=1}^{n} \mathcal{L}(\theta|t_i,z_i)$$ The likelihood can be described as the joint distribution of $t$ and $z$ given $\theta$: $$P(t,z|\theta) = P(t|z,\theta)P(z|\theta)$$ We don't care about the probability of $z$, so $$\mathcal{L}(\theta|t,z) = P(t|z,\theta) = \prod_{i=1}^{n} P(t_i|z_i,\theta)$$ and $t_i$ is a Bernoulli variable. 
so $$\begin{split} P(t|z) & = \prod_{i=1}^{n} P(t_i=1|z_i)^{t_i} * (1 - P(t_i=1|z_i))^{1-t_i} \\ & = \prod_{i=1}^{n} y_i^{t_i} * (1 - y_i)^{1-t_i} \end{split}$$ The cross entropy cost function can be defined as (by taking the negative $\log$): $$\begin{split} \xi(t,y) & = - \log \mathcal{L}(\theta|t,z) \\ & = - \sum_{i=1}^{n} \left[ t_i \log(y_i) + (1-t_i)\log(1-y_i) \right] \\ & = - \sum_{i=1}^{n} \left[ t_i \log(\sigma(z_i)) + (1-t_i)\log(1-\sigma(z_i)) \right] \end{split}$$ and $t$ can only be 0 or 1, so the above can be expressed as: $$\xi(t,y) = -t * \log(y) - (1-t) * \log(1-y)$$ Gradient descent can be defined as: $$w(k+1) = w(k) - \Delta w(k)$$ $$\Delta w(k) = \mu\frac{\partial \xi}{\partial w} \;\;\; \text{where } \mu \text{ is the learning rate}$$ Simply apply the chain rule here: $$\frac{\partial \xi_i}{\partial \mathbf{w}} = \frac{\partial z_i}{\partial \mathbf{w}} \frac{\partial y_i}{\partial z_i} \frac{\partial \xi_i}{\partial y_i}$$ (1) $$\begin{split} \frac{\partial \xi}{\partial y} & = \frac{\partial (-t * \log(y) - (1-t) \log(1-y))}{\partial y} = \frac{\partial (-t * \log(y))}{\partial y} + \frac{\partial (- (1-t)\log(1-y))}{\partial y} \\ & = -\frac{t}{y} + \frac{1-t}{1-y} = \frac{y-t}{y(1-y)} \end{split}$$ (2) $$\frac{\partial y}{\partial z} = \frac{\partial \sigma(z)}{\partial z} = \frac{\partial \frac{1}{1+e^{-z}}}{\partial z} = \frac{-1}{(1+e^{-z})^2} \cdot e^{-z} \cdot (-1) = \frac{1}{1+e^{-z}} \cdot \frac{e^{-z}}{1+e^{-z}} = \sigma(z) * (1- \sigma(z)) = y (1-y)$$ (3) $$\frac{\partial z}{\partial \mathbf{w}} = \frac{\partial (\mathbf{x} * \mathbf{w})}{\partial \mathbf{w}} = \mathbf{x}$$ So combining (1) - (3): $$\frac{\partial \xi_i}{\partial \mathbf{w}} = \frac{\partial z_i}{\partial \mathbf{w}} \frac{\partial y_i}{\partial z_i} \frac{\partial \xi_i}{\partial y_i} = \mathbf{x} * y_i (1 - y_i) * \frac{y_i - t_i}{y_i (1-y_i)} = \mathbf{x} * (y_i-t_i)$$ Finally, we get: $$\Delta w_j = \mu * \sum_{i=1}^{N} x_{ij} (y_i - t_i)$$ Code implementation First of all, define the logistic function logistic and the model nn. The cost function is the sum of the cross entropy over all training samples. End of explanation
# Plot the cost in function of the weights
# Define a vector of weights for which we want to plot the cost
nb_of_ws = 100  # compute the cost nb_of_ws times in each dimension
ws1 = np.linspace(-5, 5, num=nb_of_ws)  # weight 1
ws2 = np.linspace(-5, 5, num=nb_of_ws)  # weight 2
ws_x, ws_y = np.meshgrid(ws1, ws2)  # generate grid
cost_ws = np.zeros((nb_of_ws, nb_of_ws))  # initialize cost matrix
# Fill the cost matrix for each combination of weights
for i in range(nb_of_ws):
    for j in range(nb_of_ws):
        cost_ws[i,j] = cost(nn(X, np.asmatrix([ws_x[i,j], ws_y[i,j]])), t)
# Plot the cost function surface
plt.contourf(ws_x, ws_y, cost_ws, 20, cmap=cm.pink)
cbar = plt.colorbar()
cbar.ax.set_ylabel('$\\xi$', fontsize=15)
plt.xlabel('$w_1$', fontsize=15)
plt.ylabel('$w_2$', fontsize=15)
plt.title('Cost function surface')
plt.grid()
plt.show()
Explanation: Plot the cost function; as you can see, it is convex and has a global minimum. End of explanation
# define the gradient function.
def gradient(w, x, t):
    return (nn(x, w) - t).T * x

# define the update function delta w which returns the
# delta w for each weight in a vector
def delta_w(w_k, x, t, learning_rate):
    return learning_rate * gradient(w_k, x, t)
Explanation: The gradient and delta_w are just the simple equations we derived above. End of explanation
# Set the initial weight parameter
w = np.asmatrix([-4, -2])
# Set the learning rate
learning_rate = 0.05

# Start the gradient descent updates and plot the iterations
nb_of_iterations = 10  # Number of gradient descent updates
w_iter = [w]  # List to store the weight values over the iterations
for i in range(nb_of_iterations):
    dw = delta_w(w, X, t, learning_rate)  # Get the delta w update
    w = w-dw  # Update the weights
    w_iter.append(w)  # Store the weights for plotting
Explanation: Start training, iterating for 10 steps. w = w-dw is the key step where we update w during each iteration. End of explanation
# Plot the first weight updates on the error surface
# Plot the error surface
plt.contourf(ws_x, ws_y, cost_ws, 20, alpha=0.9, cmap=cm.pink)
cbar = plt.colorbar()
cbar.ax.set_ylabel('cost')

# Plot the updates
for i in range(1, 4):
    w1 = w_iter[i-1]
    w2 = w_iter[i]
    # Plot the weight-cost value and the line that represents the update
    plt.plot(w1[0,0], w1[0,1], 'bo')  # Plot the weight cost value
    plt.plot([w1[0,0], w2[0,0]], [w1[0,1], w2[0,1]], 'b-')
    plt.text(w1[0,0]-0.2, w1[0,1]+0.4, '$w({})$'.format(i), color='b')
w1 = w_iter[3]  # Plot the last weight
plt.plot(w1[0,0], w1[0,1], 'bo')
plt.text(w1[0,0]-0.2, w1[0,1]+0.4, '$w({})$'.format(4), color='b')
# Show figure
plt.xlabel('$w_1$', fontsize=15)
plt.ylabel('$w_2$', fontsize=15)
plt.title('Gradient descent updates on cost surface')
plt.grid()
plt.show()
Explanation: Plot just 4 iterations and you can see it moving toward the global minimum. End of explanation
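Before trusting the update rule, the analytic gradient $\mathbf{x} * (y - t)$ derived above can be verified numerically with central differences on the cost. A small sketch using the functions defined in this note (the test weight vector here is arbitrary):

eps = 1e-5
w_test = np.asmatrix([0.3, -0.7])
analytic = gradient(w_test, X, t)
numeric = np.zeros((1, 2))
for j in range(2):
    w_plus = w_test.copy();  w_plus[0, j] += eps
    w_minus = w_test.copy(); w_minus[0, j] -= eps
    # central difference approximation of d(cost)/dw_j
    numeric[0, j] = (cost(nn(X, w_plus), t) - cost(nn(X, w_minus), t)) / (2 * eps)
print(np.allclose(analytic, numeric, atol=1e-6))  # expect True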
End of explanation # Set the initial weight parameter w = np.asmatrix([-4, -2]) # Set the learning rate learning_rate = 0.05 # Start the gradient descent updates and plot the iterations nb_of_iterations = 10 # Number of gradient descent updates w_iter = [w] # List to store the weight values over the iterations for i in range(nb_of_iterations): dw = delta_w(w, X, t, learning_rate) # Get the delta w update w = w-dw # Update the weights w_iter.append(w) # Store the weights for plotting Explanation: Start trining and just interating for 10 steps. w = w-dw is key point we update w during each integration. End of explanation # Plot the first weight updates on the error surface # Plot the error surface plt.contourf(ws_x, ws_y, cost_ws, 20, alpha=0.9, cmap=cm.pink) cbar = plt.colorbar() cbar.ax.set_ylabel('cost') # Plot the updates for i in range(1, 4): w1 = w_iter[i-1] w2 = w_iter[i] # Plot the weight-cost value and the line that represents the update plt.plot(w1[0,0], w1[0,1], 'bo') # Plot the weight cost value plt.plot([w1[0,0], w2[0,0]], [w1[0,1], w2[0,1]], 'b-') plt.text(w1[0,0]-0.2, w1[0,1]+0.4, '$w({})$'.format(i), color='b') w1 = w_iter[3] # Plot the last weight plt.plot(w1[0,0], w1[0,1], 'bo') plt.text(w1[0,0]-0.2, w1[0,1]+0.4, '$w({})$'.format(4), color='b') # Show figure plt.xlabel('$w_1$', fontsize=15) plt.ylabel('$w_2$', fontsize=15) plt.title('Gradient descent updates on cost surface') plt.grid() plt.show() Explanation: Plot just 4 itegrations and you can see it toward to global minimum. End of explanation
5,166
Given the following text description, write Python code to implement the functionality described below step by step Description: Homework 6 Step1: Problem set #2 Step2: Problem set #3 Step3: Problem set #4 Step4: Problem set #5 Step5: Specifying a field other than name, area or elevation for the sort parameter should fail silently, defaulting to sorting alphabetically. Expected output Step6: Paste your code Please paste the code for your entire Flask application in the cell below, in case we want to take a look when grading or debugging your assignment.
Python Code: import requests data = requests.get('http://localhost:5000/lakes').json() print(len(data), "lakes") for item in data[:10]: print(item['name'], "- elevation:", item['elevation'], "m / area:", item['area'], "km^2 / type:", item['type']) Explanation: Homework 6: Web Applications For this homework, you're going to write a web API for the lake data in the MONDIAL database. (Make sure you've imported the data as originally outlined in our week 1 tutorial.) The API should perform the following tasks: A request to /lakes should return a JSON list of dictionaries, with the information from the name, elevation, area and type fields from the lake table in MONDIAL. The API should recognize the query string parameter sort. When left blank or set to name, the results should be sorted by the name of the lake (in alphabetical order). When set to area or elevation, the results should be sorted by the requested field, in descending order. The API should recognize the query string parameter type. When specified, the results should only include rows that have the specified value in the type field. You should be able to use both the sort and type parameters in any request. This notebook contains only test requests to your API. Write the API as a standalone Python program, start the program and then run the code in the cells below to ensure that your API produces the expected output. When you're done, paste the source code in the final cell (so we can check your work, if needed). Hints when writing your API code: You'll need to construct the SQL query as a string, piece by piece. This will likely involve a somewhat messy tangle of if statements. Lean into the messy tangle. Make sure to use parameter placeholders (%s) in the query. If you're getting SQL errors, print out your SQL statement in the request handler function so you can debug it. (When you use print() in Flask, the results will display in your terminal window.) When in doubt, return to the test code. Examine it carefully and make sure you know exactly what it's trying to do. Problem set #1: A list of lakes Your API should return a JSON list of dictionaries (objects). Use the code below to determine what the keys of the dictionaries should be. (For brevity, this example only prints out the first ten records, but of course your API should return all of them.) Expected output: 143 lakes Ammersee - elevation: 533 m / area: 46 km^2 / type: None Arresoe - elevation: None m / area: 40 km^2 / type: None Atlin Lake - elevation: 668 m / area: 798 km^2 / type: None Balaton - elevation: 104 m / area: 594 km^2 / type: None Barrage de Mbakaou - elevation: None m / area: None km^2 / type: dam Bodensee - elevation: 395 m / area: 538 km^2 / type: None Brienzersee - elevation: 564 m / area: 29 km^2 / type: None Caspian Sea - elevation: -28 m / area: 386400 km^2 / type: salt Chad Lake - elevation: 250 m / area: 23000 km^2 / type: salt Chew Bahir - elevation: 520 m / area: 800 km^2 / type: salt The API should recognize the query string parameter sort. When left blank or set to name, the results should be sorted by the name of the lake (in alphabetical order). When set to area or elevation, the results should be sorted by the requested field, in descending order. The API should recognize the query string parameter type. When specified, the results should only include rows that have the specified value in the type field. You should be able to use both the sort and type parameters in any request. 
End of explanation import requests data = requests.get('http://localhost:5000/lakes?type=salt').json() avg_area = sum([x['area'] for x in data if x['area'] is not None]) / len(data) avg_elev = sum([x['elevation'] for x in data if x['elevation'] is not None]) / len(data) print("average area:", int(avg_area)) print("average elevation:", int(avg_elev)) Explanation: Problem set #2: Lakes of a certain type The following code fetches all lakes of type salt and finds their average area and elevation. Expected output: average area: 18880 average elevation: 970 End of explanation import requests data = requests.get('http://localhost:5000/lakes?sort=elevation').json() for item in [x['name'] for x in data if x['elevation'] is not None][:15]: print("*", item) Explanation: Problem set #3: Lakes in order The following code fetches lakes in reverse order by their elevation and prints out the name of the first fifteen, excluding lakes with an empty elevation field. Expected output: * Licancabur Crater Lake * Nam Co * Lago Junin * Lake Titicaca * Poopo * Salar de Uyuni * Koli Sarez * Lake Irazu * Qinghai Lake * Segara Anak * Lake Tahoe * Crater Lake * Lake Tana * Lake Van * Issyk-Kul End of explanation import requests data = requests.get('http://localhost:5000/lakes?sort=area&type=caldera').json() for item in data: print("*", item['name']) Explanation: Problem set #4: Order and type The following code prints the names of the largest caldera lakes, ordered in reverse order by area. Expected output: * Lake Nyos * Lake Toba * Lago Trasimeno * Lago di Bolsena * Lago di Bracciano * Crater Lake * Segara Anak * Laacher Maar End of explanation import requests data = requests.get('http://localhost:5000/lakes', params={'type': "' OR true; --"}).json() data Explanation: Problem set #5: Error handling Your API should work fine even when faced with potential error-causing inputs. For example, the expected output for this statement is an empty list ([]), not every row in the table. End of explanation import requests data = requests.get('http://localhost:5000/lakes', params={'sort': "florb"}).json() [x['name'] for x in data[:5]] Explanation: Specifying a field other than name, area or elevation for the sort parameter should fail silently, defaulting to sorting alphabetically. 
Expected output: ['Ammersee', 'Arresoe', 'Atlin Lake', 'Balaton', 'Barrage de Mbakaou'] End of explanation import pg8000 import decimal from flask import Flask, request, jsonify app = Flask(__name__) @app.route('/lakes') def give_lakes(): conn = pg8000.connect(database = 'mondial', user = 'rebeccaschuetz') cursor = conn.cursor() sort = request.args.get('sort', 'name') type_param = request.args.get('type', None) # to get rid of not valid type_params: cursor.execute('SELECT name, elevation, area, type FROM lake WHERE type = %s LIMIT 1', [type_param]) if not cursor.fetchone(): lakes_list = [] if type_param: if sort == 'elevation' or sort == 'area': cursor.execute('SELECT name, elevation, area, type FROM lake WHERE type = %s ORDER BY ' + sort + ' desc', [type_param]) else: sort = 'name' cursor.execute('SELECT name, elevation, area, type FROM lake WHERE type = %s ORDER BY ' + sort, [type_param]) else: if sort == 'elevation' or sort == 'area': cursor.execute('SELECT name, elevation, area, type FROM lake ORDER BY ' + sort + ' desc') else: sort = 'name' cursor.execute('SELECT name, elevation, area, type FROM lake ORDER BY ' + sort) lakes_list = [] for item in cursor.fetchall(): def decimal_to_int(x): if isinstance(x, decimal.Decimal): return int(x) else: return None # elevation = item[1] # if elevation: # elevation = int(elevation) # area = item[2] # if area: # area = int(area) # lakes_dict = {'name': item[0], # 'elevation': elevation, # 'area': area, # 'type': item[3]} lakes_dict = {'name': item[0], 'elevation': decimal_to_int(item[1]), 'area': decimal_to_int(item[2]), 'type': item[3]} lakes_list.append(lakes_dict) for dictionary in lakes_list: print(dictionary) return jsonify(lakes_list) app.run() Explanation: Paste your code Please paste the code for your entire Flask application in the cell below, in case we want to take a look when grading or debugging your assignment. End of explanation
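The essential safety pattern in the handler above is worth isolating: values always travel through %s placeholders, while the sort column is only ever taken from a fixed whitelist. A compact sketch of that pattern (build_query is a hypothetical helper name, not part of the assignment):

def build_query(sort, type_param):
    # Whitelisting means neither "' OR true; --" nor "florb" can alter the SQL
    if sort not in ('name', 'area', 'elevation'):
        sort = 'name'
    direction = '' if sort == 'name' else ' desc'
    sql = 'SELECT name, elevation, area, type FROM lake'
    params = []
    if type_param is not None:
        sql += ' WHERE type = %s'   # the value is bound, never interpolated
        params.append(type_param)
    sql += ' ORDER BY ' + sort + direction
    return sql, params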
5,167
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2021 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https Step1: Load KIP data KIP data specified by columns in a dataframe architecture Step2: Load LS Data LS data specified by columns in a dataframe architecture Step3: Load Checkpoint
Python Code: import pandas as pd import tensorflow as tf import numpy as np pd.set_option('expand_frame_repr', True) pd.set_option("display.max_rows", 100) pd.set_option('max_colwidth',90) Explanation: Copyright 2021 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Instructions: Public URLS are scoped by https://storage.googleapis.com/kip-datasets/ and can be downloaded directly from the browser while gsutil URLs are scoped by gs://kip-datasets and can be opened using tf.io.gfile.GFile. The KIP and LS datasets can be obtained using the URLs stored in csv files at gs://kip-datasets/kip/kip.csv and gs://kip-datasets/ls/ls.csv. Their contents are shown below. URLs point to datasets given by the parameters corresponding to the given row in the dataframe. Datasets are stored as .npz files. They have keys 'images' and 'labels' pointing to the respective numpy arrays. End of explanation with tf.io.gfile.GFile('gs://kip-datasets/kip/kip.csv') as f: df = pd.read_csv(f) df Explanation: Load KIP data KIP data specified by columns in a dataframe architecture: ConvNet (the main 4 layer convolutional neural network with pooling used) and ConvNet3 (a 3 layer version); see nn_training.ipynb dataset: cifar10, cifar100, mnist, fashion_mnist, svhn_cropped ssize (support size): 10, 100, 500 for non CIFAR-100, and 100, 500 for CIFAR-100 zca: True or False (corresponding to 'zca' or 'nozca' in filename) l (label learning): True or False (corresponding to 'l' or 'nol' in filename) aug: True or False (corresponding to 'aug' or 'noaug' in filename) ckpt: 0, 1, 2, 4, 6, 8, 12, 18, 26, 37, 54, 78, 112, 162, 233, 335, 483, 695, 1000, and thereafter checkpoints in step size 500 up to the length of training (maximum of 50000) Remaining columns specify metadata: test acc, test_mse, and URL to npz file stored in GCS End of explanation with tf.io.gfile.GFile('gs://kip-datasets/ls/ls.csv') as f: df = pd.read_csv(f) df Explanation: Load LS Data LS data specified by columns in a dataframe architecture: ConvNet (the main 4 layer convolutional neural network with pooling used) and ConvNet3 (a 3 layer version); see nn_training.ipynb dataset: cifar10, cifar100, mnist, fashion_mnist, svhn_cropped ssize (support size): 10, 100, 500 for non CIFAR-100, and 100, 500 for CIFAR-100 zca: True, False Remaining columns specify metadata: test acc, test_mse, and URL to npz file stored in GCS End of explanation with tf.io.gfile.GFile('gs://kip-datasets/kip/cifar10/ConvNet_ssize100_nozca_l_noaug_ckpt1000.npz', 'rb') as f: npz = np.load(f) list(npz.keys()) Explanation: Load Checkpoint End of explanation
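A sketch of going from catalog to data: filter the KIP catalog down to one configuration, then open its .npz with the 'images'/'labels' keys noted above. The 'url' column name is an assumption here; use whichever metadata column in the dataframe actually holds the GCS path.

with tf.io.gfile.GFile('gs://kip-datasets/kip/kip.csv') as f:
    kip = pd.read_csv(f)

row = kip[(kip['dataset'] == 'cifar10') & (kip['ssize'] == 100)].iloc[0]
with tf.io.gfile.GFile(row['url'], 'rb') as f:   # 'url' column name assumed
    npz = np.load(f)
images, labels = npz['images'], npz['labels']
print(images.shape, labels.shape)   # support size 100 -> 100 distilled examples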
5,168
Given the following text description, write Python code to implement the functionality described below step by step Description: Graph Gmail inbox data with IPython notebook Step1: Download your Gmail inbox as a ".mbox" file by clicking on "Account" under your Gmail user menu, then "Download data" Install the Python libraries mailbox and dateutils with <code>sudo pip install mailbox</code> and <code>sudo pip install dateutils</code> Step2: Open your ".mbox" file with <code>mailbox</code> Step3: Sort your mailbox by date Step4: Organize dates of email receipt as a list Step5: Count and graph emails received per day Step6: Restyle the chart in Plotly's GUI Step7: Custom css styling
Python Code: from IPython.display import Image Image('http://i.imgur.com/SYija2N.png') Explanation: Graph Gmail inbox data with IPython notebook End of explanation import mailbox from email.utils import parsedate from dateutil.parser import parse import itertools import plotly.plotly as py from plotly.graph_objs import * path = '/Users/jack/Desktop/All mail Including Spam and Trash.mbox' Explanation: Download your Gmail inbox as a ".mbox" file by clicking on "Account" under your Gmail user menu, then "Download data" Install the Python libraries mailbox and dateutils with <code>sudo pip install mailbox</code> and <code>sudo pip install dateutils</code> End of explanation mbox = mailbox.mbox(path) Explanation: Open your ".mbox" file with <code>mailbox</code> End of explanation def extract_date(email): date = email.get('Date') return parsedate(date) sorted_mails = sorted(mbox, key=extract_date) mbox.update(enumerate(sorted_mails)) mbox.flush() Explanation: Sort your mailbox by date End of explanation all_dates = [] mbox = mailbox.mbox(path) for message in mbox: all_dates.append( str( parse( message['date'] ) ).split(' ')[0] ) Explanation: Organize dates of email receipt as a list End of explanation email_count = [(g[0], len(list(g[1]))) for g in itertools.groupby(all_dates)] email_count[0] x = [] y = [] for date, count in email_count: x.append(date) y.append(count) py.iplot( Data([ Scatter( x=x, y=y ) ]) ) Explanation: Count and graph emails received per day End of explanation import plotly.tools as tls tls.embed('https://plot.ly/~jackp/3266') Explanation: Restyle the chart in Plotly's GUI End of explanation from IPython.core.display import HTML import urllib2 HTML(urllib2.urlopen('http://bit.ly/1Bf5Hft').read()) Explanation: Custom css styling End of explanation
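One caveat worth noting: itertools.groupby only merges adjacent equal keys, so the per-day counts above are correct only because the mailbox was sorted by date first. An order-independent tally (a sketch) produces the same x and y lists:

from collections import Counter

daily = Counter(all_dates)        # works regardless of message order
x = sorted(daily)
y = [daily[d] for d in x]
py.iplot(Data([Scatter(x=x, y=y)]))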
5,169
Given the following text description, write Python code to implement the functionality described below step by step Description: Zillow’s Home Value Prediction (Zestimate) Zillow's Zestimate is created to give consumers as much information as possible about homes and the housing market, marking consumers had access to this type of home value information. It's based on 7.5 million statistical and machine learning models that analyze hundreds of data points on each property. We will develop an algorithm to help push the accuracy of the Zestimate even further. Team members Step1: 1. Data Description and Preprocession The dataset we used is provided by Zillow price. The response variable is logerror, which is defined as $logerror = \log(Zestimate)-\log(SalePrice)$. There are 58 predictor variables available to be chosen. Some of them are qualitative variables, and there's heavy missing value situation in lots of variables. Here, we will do some explorative work for furthur modelling. The part will cover following topics. - Basic information - Missing Value Analysis - Correlation Analysis - Univariate Analysis 1.1 load data Step2: The train dataset includes 90811 housing trade records from 2016-01-01 to 2016-12-30. In the same time the test datset date is from 2016-10 to 2017-12. The property datatable includes 298517 properties in three places Step3: We draw a barplot to see the monthly trade frequency of our dataset. Obviously there are much fewer training data from Oct 2016 to Dec 2016, because they split part of them as test data. The transaction number of first two months are less than other months, which implies the time tendency of housing transaction. 1.2 Missing value analysis Step4: We decide to give up variables whose missing ratio is more than 25%. Here, we just show the missing data pattern in variables that we choose. The first plot shows the missing rate of these variables, whose black part means data exist. Based on the visualization of data missing situation with records, there's some pattern about missing between variables in plot 2. In many records, value of variables are missing together, like 'finishedsquarefeet12' and 'lotsizesquarefeet'. So we can't simply impute data according to their own distribution. We also draw the geographical graph about missing situation. It automatically splits the dataset into statistically significant chunks and colorizes them based on the average nullity of data points within them. Later we will compare and find the best way to impute rest missing data. 1.3 Correlation Analysis Here we will explore the relationship between some important variables. These variables are gotten through XGBoost Regression from an initial experimental model which includes all variables left. Step5: From this correlation matrix, some important variables are positive related. So we need to use Lasso or other penalty function to reduce this multicolinearity effect. 1.4 Univariate Analysis First, let's look at the distribution of response variable -- logerror. Removing the influence of outliners, this histogram shows that logerror has a nice normal like distribution, which fits the assumption of the linear regression model. This datset only includes trade records for 1 year. There may exists a seasonal effect in the logerror. But considering the period, we won't use this information to train our model. Step6: 2. Feature Selection Based on the result of explorative variable analysis, we already know that our response variable is normal-like distributed. 
And We select predictors whose missing rate is less than 25% roughly. Now we want to deal with our X variables Step7: 3.1.1 Ridge regression Step8: 3.1.2 Random Forest 1) Tune Max_depth and max_leaf_node Step9: the answer show that for both value, the higher the node and depth, the better the performance, in order to find the optimal one, I will try to set higher value. Step10: the answer show that the best max_depth is 14, and best max_leaf_nodes are 400. 2) Tune min_samples_leaf and min_samples_spl using the best parameter for best max_depth and best max_leaf_nodes, we can continually tune other parameters. Step11: best parameter for random forest Step12: 3.1.4 Perform Stacking Method Step13: 3.2 XGBOOST For this part, we firstly fill the missing data with three different value, the mean, median and mode. Then, for all these dataset, we use grid search cross validation to find the best parameters one by one. We only provide the training process of fill-with-median model here. The best parameter for each model and predcition value are listed below, the answer show that fill missing value with mode will provide the best answer. |Mean|Median|Mode ---- | ---|---|--- learning_rate|0.1|0.1|0.1 n_estimators|140|1000|140 max_depth|5|5|5 min_child_weight|6.5|6|8.5 gamma|0|0.1|0.01 subsample|0.85|0.85|0.8 colsample_bytree|0.85|0.85|0.75 reg_alpha|5|5|1 reg_lambda|100|50|500 scale_pos_weight|1|1|1 logerror|0.0660177|0.0673185|0.0657067 Step14: 3.2.1 Tune max_depth and min_child_weight We tune these first as they will have the highest impact on model outcome. To start with, let’s set wider ranges and then we will perform another iteration for smaller ranges. Step15: Here, we have run 12 combinations with wider intervals between values. The ideal values are 3 for max_depth and 3 for min_child_weight. Lets go one step deeper and look for optimum values. We’ll search for values 2 below the optimum values because I find there exist the trend of lower the best. Step16: Here, we get the optimum values as 5 for max_depth and 6 for min_child_weight 3.2.2 Tune gamma Now lets tune gamma value using the parameters already tuned above. Gamma can take various values but I’ll check for 5 values here. You can go into more precise values as. Step17: This shows that our original value of gamma, i.e. 0.1 is the optimum one. Before proceeding, a good idea would be to re-calibrate the number of boosting rounds for the updated parameters. so the best parameters for now is Step18: Here, we found 0.8,0.8 as the optimum value for both subsample and colsample_bytree. Now we should try values in 0.05 interval around these. Step19: Then we got the optimum values are Step20: the values tried are very widespread, we should try values closer to the optimum here (0.1) to see if we get something better. Step21: You can see that we got a best value which are 5 and 50. Now we can apply this regularization in the model and look at the impact Step22: adding all tuned model below and show importance 3.2.5 Fitting model Step23: From the figure above, we can see that taxamount, structure tax value dollar count and the location of the house is very important for predict the logerror. However, there is another variable called region county has less importance, probably because the information it contains has been added into model by other variables which can convey location message. 
Step24: 3.3 Neuro Network This is the interestest part, we use TensorFlow to build a neuro network which has four hidden layer, each hidden layer has 20, 15 ,10, 5 neuro( the feature number is 24). We have tried different parameters, train iteration, and the answer show that with higher train itereation, the predicted answer will not perform better, and the prediciton value has logerror 0.067474, which does not ourperform XGBoost. We believe the reason is that neuro net work does not have obvious advantage when operate on simple problem, although the answer is good enough. Step25: 4. Analysis and Comparision In this project, we have tried 3 methods. The one having best results is XGboost with missing value filled by mode. This dataset is from Kaggle and we donot have the true response of test dataset. So we can just using the score which is given by Kaggle Website after uploading our predictions to get the accuracy of models. For the first method, we stack three simple regression models
Python Code: import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import numpy as np import missingno as msno import xgboost as xgb from sklearn.preprocessing import scale from xgboost.sklearn import XGBRegressor from sklearn.linear_model import Ridge from sklearn import cross_validation, metrics #Additional scklearn functions from sklearn.model_selection import GridSearchCV #Perforing grid search from sklearn.ensemble import RandomForestRegressor import matplotlib.pyplot as plt import seaborn as sns color = sns.color_palette() from sklearn.svm import SVR import itertools from sklearn.model_selection import KFold, cross_val_score plt.style.use('ggplot') %matplotlib inline Explanation: Zillow’s Home Value Prediction (Zestimate) Zillow's Zestimate is created to give consumers as much information as possible about homes and the housing market, marking consumers had access to this type of home value information. It's based on 7.5 million statistical and machine learning models that analyze hundreds of data points on each property. We will develop an algorithm to help push the accuracy of the Zestimate even further. Team members: - 914507521 Jingxian Liao - 914443957 Weizhuo Xiong - 914521950 Mengye Liu Introduction Data Description and Preprocession Feature Selection Regression Models Simple Regression and Stacking Ridge regression Random Forest Support Vector Machine Model Stakcing XGBoost model Neuro Network model Analysis and Comparison Conclusion Introduction A home is often the largest and most expensive purchase a person makes in his or her lifetime. Ensuring homeowners have a trusted way to monitor this asset is incredibly important. The Zestimate was created to give consumers as much information as possible about homes and the housing market, marking the first time consumers had access to this type of home value information at no cost. For this competition, we will develop an algorithm that makes predictions about the future sale prices of homes. Global Imports End of explanation train_df = pd.read_csv("originalinput/train_2016.csv", parse_dates=["transactiondate"]) properties = pd.read_csv('originalinput/properties_2016.csv') train_df.tail() train_df.shape, properties.shape Explanation: 1. Data Description and Preprocession The dataset we used is provided by Zillow price. The response variable is logerror, which is defined as $logerror = \log(Zestimate)-\log(SalePrice)$. There are 58 predictor variables available to be chosen. Some of them are qualitative variables, and there's heavy missing value situation in lots of variables. Here, we will do some explorative work for furthur modelling. The part will cover following topics. - Basic information - Missing Value Analysis - Correlation Analysis - Univariate Analysis 1.1 load data End of explanation train_df = pd.merge(train_df, properties, on='parcelid', how='left') train_df.head() train_df['month']=[a.month for a in train_df['transactiondate']] monthcount = train_df.groupby(train_df['month']).count()['logerror'] month = np.arange('2016-01', '2017-01', dtype='datetime64[M]') fig, ax = plt.subplots(figsize=(5,3)) ax.bar(range(1,13),monthcount,color='red') ax.vlines(10,0,10000) ax.set_xlim(1,13) ax.set_xticks(range(1,13)) ax.set_title("Distribution of transaction dates") Explanation: The train dataset includes 90811 housing trade records from 2016-01-01 to 2016-12-30. In the same time the test datset date is from 2016-10 to 2017-12. The property datatable includes 298517 properties in three places:.... 
So we neeed to merge the property into train_df to find the correspoinding predictor information. End of explanation missing_df = train_df.isnull().sum(axis=0).reset_index() missing_df.columns = ['column_name', 'missing_count'] missing_df['missing_ratio'] = missing_df['missing_count'] / train_df.shape[0] deletecolumn = missing_df[missing_df['missing_ratio']>0.25]["column_name"] #deletecolumns are the columns whose data is missing more than 25%. #missing ratio of variables chosen usetrain_df = train_df[['bathroomcnt', 'bedroomcnt', 'calculatedbathnbr', 'calculatedfinishedsquarefeet', 'finishedsquarefeet12', 'fips', 'fullbathcnt', 'latitude', 'longitude', 'lotsizesquarefeet', 'propertylandusetypeid', 'rawcensustractandblock', 'regionidcity', 'regionidcounty', 'regionidzip', 'roomcnt', 'yearbuilt', 'structuretaxvaluedollarcnt', 'taxvaluedollarcnt', 'assessmentyear', 'landtaxvaluedollarcnt', 'taxamount', 'censustractandblock']] missingValueColumns = usetrain_df.columns[usetrain_df.isnull().any()].tolist() msno.bar(usetrain_df[missingValueColumns],\ figsize=(24,8),color='black',fontsize=10) msno.matrix(usetrain_df[missingValueColumns],width_ratios=(10,1),\ figsize=(20,8),color=(0,0, 0),fontsize=12,sparkline=True,labels=True) msno.geoplot(usetrain_df, x='longitude', y='latitude') Explanation: We draw a barplot to see the monthly trade frequency of our dataset. Obviously there are much fewer training data from Oct 2016 to Dec 2016, because they split part of them as test data. The transaction number of first two months are less than other months, which implies the time tendency of housing transaction. 1.2 Missing value analysis End of explanation corrMatt = train_df[['finishedfloor1squarefeet','numberofstories','regionidcity','poolcnt','regionidzip', 'buildingclasstypeid','calculatedbathnbr','fireplacecnt','taxdelinquencyflag','pooltypeid10', 'pooltypeid7','structuretaxvaluedollarcnt','propertyzoningdesc','finishedsquarefeet13', 'garagetotalsqft','censustractandblock','finishedsquarefeet6','calculatedfinishedsquarefeet', 'taxdelinquencyyear','latitude']].corr() mask = np.array(corrMatt) mask[np.tril_indices_from(mask)] = False fig,ax= plt.subplots() fig.set_size_inches(8,5) sns.heatmap(corrMatt, mask=mask,vmax=.4, square=True) Explanation: We decide to give up variables whose missing ratio is more than 25%. Here, we just show the missing data pattern in variables that we choose. The first plot shows the missing rate of these variables, whose black part means data exist. Based on the visualization of data missing situation with records, there's some pattern about missing between variables in plot 2. In many records, value of variables are missing together, like 'finishedsquarefeet12' and 'lotsizesquarefeet'. So we can't simply impute data according to their own distribution. We also draw the geographical graph about missing situation. It automatically splits the dataset into statistically significant chunks and colorizes them based on the average nullity of data points within them. Later we will compare and find the best way to impute rest missing data. 1.3 Correlation Analysis Here we will explore the relationship between some important variables. These variables are gotten through XGBoost Regression from an initial experimental model which includes all variables left. 
End of explanation #upper = np.percentile(train_df.logerror.values, 97.5) #lower = np.percentile(train_df.logerror.values, 2.5) train_df['logerror'].loc[train_df['logerror']>upper] = upper train_df['logerror'].loc[train_df['logerror']<lower] = lower fig,ax = plt.subplots() fig.set_size_inches(8,3) sns.distplot(train_df.logerror.values, bins=50,kde=False,color="blue",ax=ax) ax.set(xlabel='logerror', ylabel='Frequency',title="Distribution Of Response") traingroupedMonth = train_df.groupby(["month"])["logerror"].mean().to_frame().reset_index() fig,ax1= plt.subplots() fig.set_size_inches(10,3) sns.pointplot(x=traingroupedMonth["month"], y=traingroupedMonth["logerror"], data=traingroupedMonth, join=True) ax1.set(xlabel='Month Of Year 2016', ylabel='logerror',title="Average logerror",label='big') Explanation: From this correlation matrix, some important variables are positive related. So we need to use Lasso or other penalty function to reduce this multicolinearity effect. 1.4 Univariate Analysis First, let's look at the distribution of response variable -- logerror. Removing the influence of outliners, this histogram shows that logerror has a nice normal like distribution, which fits the assumption of the linear regression model. This datset only includes trade records for 1 year. There may exists a seasonal effect in the logerror. But considering the period, we won't use this information to train our model. End of explanation X = pd.read_table('adjustedinput/mode/x_train_mode.csv',sep=',') X = X.iloc[:,1:] y = pd.read_csv('adjustedinput/train_y.csv') X_train = scale(X) y_train = np.array(y).ravel() Explanation: 2. Feature Selection Based on the result of explorative variable analysis, we already know that our response variable is normal-like distributed. And We select predictors whose missing rate is less than 25% roughly. Now we want to deal with our X variables: factorize categorical variables, figure out the importance of these variables and then select some of them as our final predictors. After checking the meaning of these variables, the following variables are cateogircal: fips, propertylandusetypeid, regionidcounty. There are only three values in regionidcounty: 3101,1286 and 2061. So we just factorise it as 0,1 and 2. The value of propertyliandsetypeid are meaningless. And there are so many unique values, we need to sort them into larger classes based on their meanings and frequencies. And then, we use oneencode method to make them as dummy variables. Consider the property of fips(Federal Information Processing Standard code),we try to use this code to get the geography information of these properties. To figure out the relationship between logerror and other variables, we use xgboost model to see the importance of these variables. We use correlation matrix to see whether some variables are basically independent with logerror in the beginning, but the result shows that their correlations with logerror are so similar that we can't tell which one is larger. So, we use this way to do some digging. The code part is very similar as the xgboost model we build, therefore we omit them and simply explain the result. Here the dataset we use includes all the variables in case some extremely important variables will be omitted by the rough selection. All the missing values are filled as -999. 
The first 10 important variables are 'finishedfloor1squarefeet','numberofstories','regionidcity','poolcnt','regionidzip','buildingclasstypeid','calculatedbathnbr','fireplacecnt','taxdelinquencyflag','pooltypeid10'. These variables involves different aspects of housing situations, like location, housewares, housing size, interior design, etc. We will consider these situations when we choose our final features. 3. Regression Models 3.1 Simple Regression and Stacking For this part, we use three different regression method: ridge, randomforest, support vector machine to train the model, then we using the stacking method to combine them together. For the dataset, we fill missing data with mode since which will provide the best prediciton. For each seperate regression method, we use grid search cross validation to find the best parameters that are needed in final model. The best parameters are shown below, when we upload the predicted value, the logerror returned is 0.0870198. Ridge||RandomForest||SVM|| ---- | ---|---|---|---|---| parameter | value|parameter | value|parameter | value alpha| 1|max_depth |14|bandwidth|8.07e-05 ||max_leaf_nodes|400|| ||min_samples_leaf | 1|| ||min_sample_split | 5|| End of explanation ridge_test = { 'alpha':[1e-15, 1e-10, 1e-8, 1e-5,1e-4, 1e-3,1e-2, 1, 5, 10,30,50,100]} gsearch = GridSearchCV(estimator = Ridge(), param_grid = ridge_test,n_jobs=4,iid=False, cv=5,scoring='neg_mean_squared_error') gsearch.fit(X_train,y_train) gsearch.grid_scores_, gsearch.best_params_, gsearch.best_score_ Explanation: 3.1.1 Ridge regression End of explanation maxd, maxleaf, scores = [], [], [] for i,j in itertools.product(range(2,10,2),[10,50,100,200,500]): RFC=RandomForestRegressor(max_depth=i,max_leaf_nodes=j, min_samples_leaf=1, min_samples_split=2,min_weight_fraction_leaf=0.0,oob_score=True) score = RFC.fit(X_train,y_train).oob_score_ maxd.append(i); maxleaf.append(j);scores.append(abs(score)) print('max_depth:',i,'max_leaf_nodes:',j,'score:%.4f'%abs(score)) index=np.argmax(scores) print('best:','max_depth:',maxd[index],'max_leaf_nodes:',maxleaf[index],'score',scores[index]) Explanation: 3.1.2 Random Forest 1) Tune Max_depth and max_leaf_node End of explanation maxd, maxleaf, scores = [], [], [] for i,j in itertools.product(range(8,20,3),[150,200,250,300,400]): RFC=RandomForestRegressor(max_depth=i,max_leaf_nodes=j, min_samples_leaf=1, min_samples_split=2,min_weight_fraction_leaf=0.0,oob_score=True) score = RFC.fit(X_train,y_train).oob_score_ maxd.append(i); maxleaf.append(j);scores.append(abs(score)) print('max_depth:',i,'max_leaf_nodes:',j,'score:%.4f'%abs(score)) index=np.argmax(scores) print('best:','max_depth:',maxd[index],'max_leaf_nodes:',maxleaf[index],'score',scores[index]) Explanation: the answer show that for both value, the higher the node and depth, the better the performance, in order to find the optimal one, I will try to set higher value. 
End of explanation minleaf, minsplit, scores = [], [], [] for i,j in itertools.product([1,10,30],[2,5,10,50]): RFC=RandomForestRegressor(max_depth=14,max_leaf_nodes=400, min_samples_leaf=i, min_samples_split=j,min_weight_fraction_leaf=0.0,oob_score=True) score = RFC.fit(X_train,y_train).oob_score_ minleaf.append(i); minsplit.append(j);scores.append(abs(score)) print('min_samples_leaf:',i,'min_samples_spl:',j,'score',"%.6f"%abs(score)) index=np.argmax(scores) print('best:','min_samples_leaf:',minleaf[index],'min_samples_spl:',minsplit[index],'score',scores[index]) Explanation: the answer show that the best max_depth is 14, and best max_leaf_nodes are 400. 2) Tune min_samples_leaf and min_samples_spl using the best parameter for best max_depth and best max_leaf_nodes, we can continually tune other parameters. End of explanation import hpsklearn from hpsklearn import HyperoptEstimator, svc from hyperopt import tpe from sklearn.svm import SVR X_train = X_train.astype('float64') y_train = y_train.astype('float64') mysvr = SVR() estim = HyperoptEstimator( regressor =hpsklearn.components.svr('mysvr'), algo=tpe.suggest, trial_timeout=100) estim.fit(X_train,y_train) estim.best_model() Explanation: best parameter for random forest: 1. max_depth is 14 2. max_leaf_nodes is 400 3. min_samples_leaf is 1 4. min_sample_split is 5 3.1.3 SVM End of explanation from mlxtend.regressor import StackingRegressor ridge = Ridge(alpha=1) svr = SVR(C=8.07639000981e-05, cache_size=512, coef0=0.0, degree=1, epsilon=561.857419396, gamma='auto', kernel='linear', max_iter=76417421.0, shrinking=False, tol=0.00924829940784, verbose=False) rf = RandomForestRegressor(max_depth=14,max_leaf_nodes=400, min_samples_leaf=1, min_samples_split=5) stregr = StackingRegressor(regressors=[ridge,rf],meta_regressor=svr) stregr.fit(X_train, y_train) ypred = stregr.predict(X_test) print("Mean Squared Error: %.4f"% np.mean((ypred - y_test) ** 2)) test = pd.read_table('originalinput/sample_submission.csv',sep=',') test_df_remaining = pd.read_table("adjustedinput/mode/test_df_remaining_mode.csv",sep=',') test_df_remaining = test_df_remaining.iloc[:,1:] test_df_remaining.head() #estimate reponse variable in test data y_predict_stack = stregr.predict(test_df_remaining) for i in range(1,7): test.iloc[:,i]=y_predict_stack Explanation: 3.1.4 Perform Stacking Method End of explanation def feat_imp_plot(xgbmodel): plt xgbmodel.fit(train[predictors], train[target],eval_metric='auc') feat_imp = pd.Series(xgbmodel.booster().get_fscore()).sort_values(ascending=False) plt.figure(figsize=(15,5)) #feat_imp = pd.Series(xgb1.booster().get_fscore()).sort_values(ascending=False) feat_imp.plot(kind='bar', title='Feature Importances') plt.ylabel('Feature Importance Score') train = pd.read_table("x_train_mode.csv",sep=',') y = pd.read_table("train_y.csv",sep=',') train['logerror']=y.iloc[:,0] target = 'logerror' train = train.iloc[:,1:] predictors = [x for x in train.columns if x not in [target]] train.head() Explanation: 3.2 XGBOOST For this part, we firstly fill the missing data with three different value, the mean, median and mode. Then, for all these dataset, we use grid search cross validation to find the best parameters one by one. We only provide the training process of fill-with-median model here. The best parameter for each model and predcition value are listed below, the answer show that fill missing value with mode will provide the best answer. 
|Mean|Median|Mode ---- | ---|---|--- learning_rate|0.1|0.1|0.1 n_estimators|140|1000|140 max_depth|5|5|5 min_child_weight|6.5|6|8.5 gamma|0|0.1|0.01 subsample|0.85|0.85|0.8 colsample_bytree|0.85|0.85|0.75 reg_alpha|5|5|1 reg_lambda|100|50|500 scale_pos_weight|1|1|1 logerror|0.0660177|0.0673185|0.0657067 End of explanation param_test1 = { 'max_depth':list(range(3,10,2)), 'min_child_weight':list(range(1,6,2)) } gsearch1 = GridSearchCV(estimator = XGBRegressor(learning_rate =0.1, n_estimators=140, max_depth=5, min_child_weight=1, gamma=0, subsample=0.8, colsample_bytree=0.8, nthread=4, scale_pos_weight=1, seed=27), param_grid = param_test1, scoring='neg_mean_squared_error',n_jobs=4,iid=False, cv=5) gsearch1.fit(train[predictors],train[target]) gsearch1.grid_scores_, gsearch1.best_params_, gsearch1.best_score_ Explanation: 3.2.1 Tune max_depth and min_child_weight We tune these first as they will have the highest impact on model outcome. To start with, let’s set wider ranges and then we will perform another iteration for smaller ranges. End of explanation param_test2 = { 'max_depth':[3,4,5,6,7], 'min_child_weight':[3,4,5,6,7] } gsearch2 = GridSearchCV(estimator = XGBRegressor(learning_rate =0.1, n_estimators=140, max_depth=5, min_child_weight=1, gamma=0, subsample=0.8, colsample_bytree=0.8, nthread=4, scale_pos_weight=1, seed=27), param_grid = param_test2, scoring='neg_mean_squared_error',n_jobs=4,iid=False, cv=5) gsearch2.fit(train[predictors],train[target]) gsearch2.grid_scores_, gsearch2.best_params_, gsearch2.best_score_ Explanation: Here, we have run 12 combinations with wider intervals between values. The ideal values are 3 for max_depth and 3 for min_child_weight. Lets go one step deeper and look for optimum values. We’ll search for values 2 below the optimum values because I find there exist the trend of lower the best. End of explanation param_test3 = { 'gamma':[i/10.0 for i in range(0,5)] } gsearch3 = GridSearchCV(estimator = XGBRegressor( learning_rate =0.1, n_estimators=140, max_depth=5, min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8,nthread=4, scale_pos_weight=1,seed=27), param_grid = param_test3, scoring='neg_mean_squared_error',n_jobs=4,iid=False, cv=5) gsearch3.fit(train[predictors],train[target]) gsearch3.grid_scores_, gsearch3.best_params_, gsearch3.best_score_ Explanation: Here, we get the optimum values as 5 for max_depth and 6 for min_child_weight 3.2.2 Tune gamma Now lets tune gamma value using the parameters already tuned above. Gamma can take various values but I’ll check for 5 values here. You can go into more precise values as. End of explanation param_test4 = { 'subsample':[i/10.0 for i in range(6,10)], 'colsample_bytree':[i/10.0 for i in range(6,10)] } gsearch4 = GridSearchCV(estimator = XGBRegressor( learning_rate =0.1, n_estimators=140, max_depth=5, min_child_weight=6, gamma=0.1, subsample=0.8, colsample_bytree=0.8, nthread=4, scale_pos_weight=1,seed=27), param_grid = param_test4, scoring='neg_mean_squared_error',n_jobs=4,iid=False, cv=5) gsearch4.fit(train[predictors],train[target]) gsearch4.grid_scores_, gsearch4.best_params_, gsearch4.best_score_ Explanation: This shows that our original value of gamma, i.e. 0.1 is the optimum one. Before proceeding, a good idea would be to re-calibrate the number of boosting rounds for the updated parameters. so the best parameters for now is : max_depth:5 min_child_weight:6 gamma:0.1 3.2.3 Tune subsample and colsample_bytree The next step would be try different subsample and colsample_bytree values. 
Lets do this in 2 stages as well and take values 0.6,0.7,0.8,0.9 for both to start with. End of explanation param_test5 = { 'subsample':[i/100.0 for i in range(70,95,5)], 'colsample_bytree':[i/100.0 for i in range(70,95,5)] } gsearch5 = GridSearchCV(estimator = XGBRegressor( learning_rate =0.1, n_estimators=140, max_depth=5, min_child_weight=6, gamma=0.1, subsample=0.8, colsample_bytree=0.8, nthread=4, scale_pos_weight=1,seed=27), param_grid = param_test5, scoring='neg_mean_squared_error',n_jobs=4,iid=False, cv=5) gsearch5.fit(train[predictors],train[target]) gsearch5.grid_scores_, gsearch5.best_params_, gsearch5.best_score_ Explanation: Here, we found 0.8,0.8 as the optimum value for both subsample and colsample_bytree. Now we should try values in 0.05 interval around these. End of explanation param_test6 = { 'reg_alpha':[1e-5, 1e-2, 0.1, 1, 100], 'reg_lambda':[1e-5, 1e-2, 0.1, 1, 100] } gsearch6 = GridSearchCV(estimator = XGBRegressor( learning_rate =0.1, n_estimators=140, max_depth=5, min_child_weight=6, gamma=0.1, subsample=0.85, colsample_bytree=0.85, nthread=4, scale_pos_weight=1,seed=27), param_grid = param_test6, scoring='neg_mean_squared_error',n_jobs=4,iid=False, cv=5) gsearch6.fit(train[predictors],train[target]) gsearch6.grid_scores_, gsearch6.best_params_, gsearch6.best_score_ Explanation: Then we got the optimum values are: subsample: 0.85 colsample_bytree: 0.85 3.2.4 Tuning Regularization Parameters Next step is to apply regularization to reduce overfitting. End of explanation param_test8 = { 'reg_alpha':[0.2,0.5,0.7,1,2,5,10], 'reg_lambda':[50,100,300,500] } gsearch8 = GridSearchCV(estimator = XGBRegressor( learning_rate =0.1, n_estimators=140, max_depth=5, min_child_weight=6, gamma=0.1, subsample=0.85, colsample_bytree=0.85, nthread=4, scale_pos_weight=1,seed=27), param_grid = param_test8, scoring='neg_mean_squared_error',n_jobs=4,iid=False, cv=5) gsearch8.fit(train[predictors],train[target]) gsearch8.grid_scores_, gsearch8.best_params_, gsearch8.best_score_ Explanation: the values tried are very widespread, we should try values closer to the optimum here (0.1) to see if we get something better. End of explanation param_test7 = { 'reg_alpha':[0, 0.001, 0.005, 0.01, 0.05], 'reg_lambda':[0, 0.001, 0.005, 0.01, 0.05] } gsearch7 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=140, max_depth=4, min_child_weight=6, gamma=0.1, subsample=0.8, colsample_bytree=0.8, nthread=4, scale_pos_weight=1,random_state=27), param_grid = param_test7, scoring='neg_mean_squared_error',n_jobs=4,iid=False, cv=5) gsearch7.fit(train[predictors],train[target]) gsearch7.grid_scores_, gsearch7.best_params_, gsearch7.best_score_ Explanation: You can see that we got a best value which are 5 and 50. Now we can apply this regularization in the model and look at the impact: 3.2.5 Reducing Learning Rate adding more trees Lastly, we should lower the learning rate and add more trees. Lets use the cv function of XGBoost to do the job again. 
xgb1 = XGBRegressor(learning_rate=0.1, n_estimators=1000, max_depth=5, min_child_weight=6,
                    gamma=0.1, subsample=0.85, colsample_bytree=0.85, nthread=4,
                    reg_alpha=5, reg_lambda=50, scale_pos_weight=1, seed=27)
xgb1.fit(train[predictors], train[target])
# booster() was renamed get_booster() in the XGBoost sklearn wrapper
feat_imp = pd.Series(xgb1.get_booster().get_fscore()).sort_values(ascending=False)
plt.figure(figsize=(15, 5))
feat_imp.plot(kind='bar', title='Feature Importances')
plt.ylabel('Feature Importance Score')
Explanation: Adding the fully tuned model below and showing its feature importances. 3.2.6 Fitting the model
End of explanation
import joblib  # sklearn.externals.joblib, used in the original, is deprecated
import xgboost as xgb  # needed for plot_tree below; the import was missing in the original
median_xgb = joblib.load("xgb_median.m")
plt.show()
xgb.plot_tree(median_xgb, num_trees=0, rankdir='LR')
Explanation: From the figure above, we can see that the tax amount, the structure tax value dollar count, and the location of the house are very important for predicting the logerror. The region county variable, however, has less importance, probably because the information it carries is already captured by other variables that convey location.
End of explanation
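get_fscore() counts how many times each feature is used in a split ('weight' importance). If you want to sanity-check the ranking, gain-based importance is a useful alternative view; this one-liner is an illustration of such a check, not part of the original analysis:
gain_imp = pd.Series(xgb1.get_booster().get_score(importance_type='gain')).sort_values(ascending=False)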
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf  # NOTE: this block targets the TensorFlow 1.x graph API
from tensorflow.contrib import learn
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn import datasets, linear_model
from sklearn.model_selection import train_test_split  # the original imported sklearn.cross_validation but called train_test_split directly
import numpy as np
X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.2, random_state=42)
total_len = X_train.shape[0]
# Parameters
learning_rate = 0.001
training_epochs = 500
batch_size = 10
display_step = 1
dropout_rate = 0.9
# Network Parameters
n_hidden_1 = 20  # 1st layer number of features
n_hidden_2 = 15  # 2nd layer number of features
n_hidden_3 = 10
n_hidden_4 = 5
n_input = X_train.shape[1]
n_classes = 1
# tf Graph input (note: this rebinds y from the label array to a placeholder)
x = tf.placeholder("float", [None, 24])
y = tf.placeholder("float", [None])
# Create model
def multilayer_perceptron(x, weights, biases):
    # Hidden layer with RELU activation
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)
    # Hidden layer with RELU activation
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    layer_2 = tf.nn.relu(layer_2)
    # Hidden layer with RELU activation
    layer_3 = tf.add(tf.matmul(layer_2, weights['h3']), biases['b3'])
    layer_3 = tf.nn.relu(layer_3)
    # Hidden layer with RELU activation
    layer_4 = tf.add(tf.matmul(layer_3, weights['h4']), biases['b4'])
    layer_4 = tf.nn.relu(layer_4)
    # Output layer with linear activation
    out_layer = tf.matmul(layer_4, weights['out']) + biases['out']
    return out_layer
# Store layers weight & bias
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1], 0, 0.1)),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2], 0, 0.1)),
    'h3': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_3], 0, 0.1)),
    'h4': tf.Variable(tf.random_normal([n_hidden_3, n_hidden_4], 0, 0.1)),
    'out': tf.Variable(tf.random_normal([n_hidden_4, n_classes], 0, 0.1))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1], 0, 0.1)),
    'b2': tf.Variable(tf.random_normal([n_hidden_2], 0, 0.1)),
    'b3': tf.Variable(tf.random_normal([n_hidden_3], 0, 0.1)),
    'b4': tf.Variable(tf.random_normal([n_hidden_4], 0, 0.1)),
    'out': tf.Variable(tf.random_normal([n_classes], 0, 0.1))
}
# Construct model
pred = multilayer_perceptron(x, weights, biases)
pred = tf.transpose(pred)
# Define loss and optimizer
cost = tf.reduce_mean(tf.square(pred - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# Launch the graph
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(total_len / batch_size)
        # Loop over all batches
        for i in range(total_batch - 1):
            batch_x = X_train[i * batch_size:(i + 1) * batch_size]
            batch_y = Y_train[i * batch_size:(i + 1) * batch_size]
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c, p = sess.run([optimizer, cost, pred], feed_dict={x: batch_x, y: batch_y})
            # Compute average loss
            avg_cost += c / total_batch
            # sample prediction
            label_value = batch_y
            estimate = p
            err = label_value - estimate
    print("Optimization Finished!")
    predicted_vals = sess.run(pred, feed_dict={x: X_test})
Explanation: 3.3 Neural Network
This is the most interesting part: we use TensorFlow to build a neural network with four hidden layers of 20, 15, 10, and 5 neurons (the number of input features is 24). We tried different parameters and training iterations, and the results show that more training iterations do not yield better predictions; the network reaches a logerror of 0.067474, which does not outperform XGBoost. We believe the reason is that a neural network has no obvious advantage on a problem this simple, although its answer is good enough.
End of explanation
mode_xgb = joblib.load("xgb_final.m")
plt.show()
xgb.plot_tree(mode_xgb, num_trees=0, rankdir='LR')
feat_imp = pd.Series(mode_xgb.get_booster().get_fscore()).sort_values(ascending=False)
plt.figure(figsize=(15, 5))
# feat_imp = pd.Series(xgb1.get_booster().get_fscore()).sort_values(ascending=False)
feat_imp.plot(kind='bar', title='Feature Importances')
plt.ylabel('Feature Importance Score of final model')
Explanation: 4. Analysis and Comparison
In this project, we tried three methods. The one with the best results is XGBoost with missing values filled by the mode. This dataset is from Kaggle and we do not have the true responses for the test set, so we can only judge model accuracy by the score the Kaggle website reports after we upload our predictions. For the first method, we stacked three simple regression models: ridge, SVM, and random forest. They use different kernels: ridge assumes the variables are linearly related, the SVM uses a sigmoid kernel, and random forest is a meta-estimator over decision tree regressors. We hoped the cross-validation used to construct the second-layer data would help us fit better, but unfortunately the stacking model did not perform as well as we expected. The likely reason lies in the base models: they may learn the same parts of the information in our dataset, while other important information is lost. For the second method, we tried XGBoost on three datasets that differ only in how the missing values were filled; the regression objective is linear. After tuning the parameters, the mode-filled variant is the best, and it is also the best among all three methods. Filling with the mean can be problematic for categorical variables: for example, we cannot interpret the meaning of 2.7 bedrooms. The median may be a better estimator of the central tendency when extreme values exist, but here many values are very close together, so filling with the mode gives a smaller probability of picking a wrong value than the other two methods in terms of the posterior probability.
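To make the imputation comparison concrete, here is a minimal pandas sketch of the three variants; df is a hypothetical placeholder, since the project's actual preprocessing code is not included in this section:
df_mean = df.fillna(df.mean(numeric_only=True))      # can produce values like 2.7 bedrooms
df_median = df.fillna(df.median(numeric_only=True))  # robust to extreme values
df_mode = df.fillna(df.mode().iloc[0])               # mode() returns a frame; take its first row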
XGBoost is an ensemble method: it uses many trees to make a decision, so it gains power through repetition, and it can gain a large advantage by building thousands of trees. Gradient boosting requires much more care in setting up, and it is pretty much meaningless to train XGBoost without cross-validation, so most of the time XGBoost performs better than a random forest. For the third method, we tried different parameters and training iterations, and the results show that more training iterations do not make the predictions better; the network reaches a logerror of 0.067474, which does not outperform XGBoost. We believe the reason is that a neural network has no obvious advantage on a simple problem, although its answer is good enough. 5. Conclusions
Finally, we choose 21 features and XGBoost with mode-filled missing data. Its tuned parameters are shown in part 3.2, and the barplot of feature importances as well as an XGBoost tree are shown below.
End of explanation
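The .m model files loaded above (xgb_median.m, xgb_final.m) were presumably saved in an earlier session; the matching save call is not shown anywhere in this notebook, but under that assumption it would simply be:
joblib.dump(xgb1, 'xgb_final.m')  # persist the tuned regressor for later plotting sessions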
5,170
Given the following text description, write Python code to implement the functionality described below step by step Description: License Copyright 2017 J. Patrick Hall, jphall@gwu.edu Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions Step1: Load and prepare data for modeling Step2: Impute missing values Step3: Train a predictive model Step4: Determine important variables for use in sensitivity analysis Step6: Helper function for finding quantile indices Step7: Get validation data ranges Step8: This result alone is interesting. The model appears to be struggling to accurately predict low and high values for SalePrice. This behavior should be corrected to increase the accuracy of predictions. A strategy for improving predictions for these homes with extreme values might be to weight them higher during training using observation weights, or they may need their own models. Now use trained model to test predictions for interesting situations How will the model handle making the home with the lowest predicted price even less desirable? Step9: While the model does not seem to handle low-valued homes very well, making the home with the lowest predicted price less appealling does not seem to make the model's predictions any worse. While this prediction behavior appears somewhat stable, which would normally be desirable, this is not particularly good news as the underlying prediction is so inaccurate. How will the model handle making the home with the highest predicted price even more desirable? Step10: This result may point to unstable predictions for the higher end of SalesPrice. Shutdown H2O
Python Code: # imports import h2o import numpy as np import pandas as pd from h2o.estimators.gbm import H2OGradientBoostingEstimator # start h2o h2o.init() h2o.remove_all() Explanation: License Copyright 2017 J. Patrick Hall, jphall@gwu.edu Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Sensitivity Analysis Preliminaries: imports, start h2o, load and clean data End of explanation # load data path = '../../03_regression/data/train.csv' frame = h2o.import_file(path=path) # assign target and inputs y = 'SalePrice' X = [name for name in frame.columns if name not in [y, 'Id']] Explanation: Load and prepare data for modeling End of explanation # determine column types # impute reals, enums = [], [] for key, val in frame.types.items(): if key in X: if val == 'enum': enums.append(key) else: reals.append(key) _ = frame[reals].impute(method='median') _ = frame[enums].impute(method='mode') # split into training and validation train, valid = frame.split_frame([0.7], seed=12345) Explanation: Impute missing values End of explanation # train GBM model model = H2OGradientBoostingEstimator(ntrees=100, max_depth=10, distribution='huber', learn_rate=0.1, stopping_rounds=5, seed=12345) model.train(y=y, x=X, training_frame=train, validation_frame=valid) preds = valid.cbind(model.predict(valid)) Explanation: Train a predictive model End of explanation model.varimp_plot() Explanation: Determine important variables for use in sensitivity analysis End of explanation def get_quantile_dict(y, id_, frame): Returns the percentiles of a column y as the indices for another column id_. Args: y: Column in which to find percentiles. id_: Id column that stores indices for percentiles of y. frame: H2OFrame containing y and id_. Returns: Dictionary of percentile values and index column values. 
    quantiles_df = frame.as_data_frame()
    quantiles_df.sort_values(y, inplace=True)
    quantiles_df.reset_index(inplace=True)
    percentiles_dict = {}
    percentiles_dict[0] = quantiles_df.loc[0, id_]
    percentiles_dict[99] = quantiles_df.loc[quantiles_df.shape[0]-1, id_]
    inc = quantiles_df.shape[0]//10
    for i in range(1, 10):
        percentiles_dict[i * 10] = quantiles_df.loc[i * inc, id_]
    return percentiles_dict
sale_quantile_dict = get_quantile_dict('SalePrice', 'Id', preds)
pred_quantile_dict = get_quantile_dict('predict', 'Id', preds)
print('SalePrice quantiles:\n', sale_quantile_dict)
print()
print('prediction quantiles:\n', pred_quantile_dict)
Explanation: Helper function for finding quantile indices
End of explanation
print('lowest SalePrice:\n', preds[preds['Id'] == int(sale_quantile_dict[0])]['SalePrice'])
print('lowest prediction:\n', preds[preds['Id'] == int(pred_quantile_dict[0])]['predict'])
print('highest SalePrice:\n', preds[preds['Id'] == int(sale_quantile_dict[99])]['SalePrice'])
print('highest prediction:\n', preds[preds['Id'] == int(pred_quantile_dict[99])]['predict'])
Explanation: Get validation data ranges
End of explanation
# look at current row
print(preds[preds['Id'] == int(pred_quantile_dict[0])])
# find current error
observed = preds[preds['Id'] == int(pred_quantile_dict[0])]['SalePrice'][0,0]
predicted = preds[preds['Id'] == int(pred_quantile_dict[0])]['predict'][0,0]
print('Error: %.2f%%' % (100*(abs(observed - predicted)/observed)))
# change value of important variables
test_case = preds[preds['Id'] == int(pred_quantile_dict[0])]
test_case = test_case.drop('predict')
test_case['OverallQual'] = 0
test_case['Neighborhood'] = 'IDOTRR'
test_case['GrLivArea'] = 500
test_case = test_case.cbind(model.predict(test_case))
print(test_case)
# recalculate error
observed = test_case['SalePrice'][0,0]
predicted = test_case['predict'][0,0]
print('Error: %.2f%%' % (100*(abs(observed - predicted)/observed)))
Explanation: This result alone is interesting. The model appears to be struggling to accurately predict low and high values for SalePrice. This behavior should be corrected to increase the accuracy of predictions. A strategy for improving predictions for these homes with extreme values might be to weight them higher during training using observation weights, or they may need their own models. Now use the trained model to test predictions for interesting situations. How will the model handle making the home with the lowest predicted price even less desirable?
End of explanation
# look at current row
print(preds[preds['Id'] == int(pred_quantile_dict[99])])
# find current error
observed = preds[preds['Id'] == int(pred_quantile_dict[99])]['SalePrice'][0,0]
predicted = preds[preds['Id'] == int(pred_quantile_dict[99])]['predict'][0,0]
print('Error: %.2f%%' % (100*(abs(observed - predicted)/observed)))
# change value of important variables
test_case = preds[preds['Id'] == int(pred_quantile_dict[99])]
test_case = test_case.drop('predict')
test_case['Neighborhood'] = 'StoneBr'
test_case['GrLivArea'] = 5000
test_case = test_case.cbind(model.predict(test_case))
print(test_case)
# recalculate error
observed = test_case['SalePrice'][0,0]
predicted = test_case['predict'][0,0]
print('Error: %.2f%%' % (100*(abs(observed - predicted)/observed)))
Explanation: While the model does not seem to handle low-valued homes very well, making the home with the lowest predicted price less appealing does not seem to make the model's predictions any worse.
While this prediction behavior appears somewhat stable, which would normally be desirable, this is not particularly good news as the underlying prediction is so inaccurate. How will the model handle making the home with the highest predicted price even more desirable?
End of explanation
h2o.cluster().shutdown(prompt=True)
Explanation: This result may point to unstable predictions for the higher end of SalePrice. Shutdown H2O
End of explanation
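A natural extension of these two spot checks is a systematic one-variable sweep: hold a row fixed, vary one important input over a grid, and watch the prediction move. The sketch below is an assumption about what that could look like (it reuses preds, model, and pred_quantile_dict from above, so it would need to run before the h2o.cluster().shutdown() call):
row = preds[preds['Id'] == int(pred_quantile_dict[50])].drop('predict')
sweep = []
for area in [500, 1000, 2000, 3000, 4000, 5000]:  # hypothetical grid of living areas
    row['GrLivArea'] = area  # perturb a single input at a time
    sweep.append(model.predict(row)['predict'][0, 0])
print(sweep)  # a smooth, roughly monotone response suggests stable local behavior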
5,171
Given the following text description, write Python code to implement the functionality described below step by step Description: Inverted encoding model Step1: In this example, we will assume that the stimuli are patches of different motion directions. These stimuli span a 360-degree, circular feature space. We will build an encoding model that has 6 channels, or basis functions, which also span this feature space. Step2: Now we'll generate synthetic data. Ideally, each voxel that we measure from is roughly tuned to some part of the feature space (see Sprague, Boynton, Serences, 2019). So we will generate data that has a receptive field (RF). We can define the RF along the same feature axis as the channels that we generated above. The following two functions will generate the voxel RFs, and then generate several trials of that dataset. There are options to add uniform noise to either the RF or the trials. Step3: Now let's generate some training data and look at it. This code will create a plot that depicts the response of an example voxel for different trials. Step4: Using this synthetic training data, we can fit the IEM. Step5: Calling the IEM fit method defines the channels, or the basis set, which span the feature domain. We can examine the channels and plot them to check that they look appropriate. Remember that the plot below is in circular space. Hence, the channels wrap around the x-axis. For example, the channel depicted in blue is centered at 0 degrees (far left of plot), which is the same as 360 degrees (far right of plot). We can check whether the channels properly tile the feature space by summing across all of them. This is shown on the right plot. It should be a straight horizontal line. Step6: Now we can generate test data and see how well we can predict the test stimuli. Step7: In addition to predicting the exact feature, we can examine the model-based reconstructions in the feature domain. That is, instead of getting single predicted values for each feature, we can look at a reconstructed function which peaks at the predicted feature. Below we will plot all of the reconstructions. There will be some variability because of the noise added during the synthetic data generation. Step8: For a sanity check, let's check how R^2 changes as the number of voxels increases. We can write a quick wrapper function to train and test on a given set of motion directions, as below. Step9: We'll iterate through the list and look at the resulting R^2 values.
Python Code: import numpy as np from brainiak.reconstruct import iem as IEM import matplotlib.pyplot as plt import numpy.matlib as matlib import scipy.signal Explanation: Inverted encoding model End of explanation # Set up parameters n_channels = 6 cos_exponent = 5 range_start = 0 range_stop = 360 feature_resolution = 360 iem_obj = IEM.InvertedEncoding1D(n_channels, cos_exponent, stimulus_mode='circular', range_start=range_start, range_stop=range_stop, channel_density=feature_resolution) # You can also try the half-circular space. Here's the associated code: # range_stop = 180 # since 0 and 360 degrees are the same, we want to stop shy of 360 # feature_resolution = 180 # iem_obj = IEM.InvertedEncoding1D(n_channels, cos_exponent, stimulus_mode='halfcircular', range_start=range_start, # range_stop=range_stop, channel_density=feature_resolution, verbose=True) stim_vals = np.linspace(0, feature_resolution - (feature_resolution/6), 6).astype(int) Explanation: In this example, we will assume that the stimuli are patches of different motion directions. These stimuli span a 360-degree, circular feature space. We will build an encoding model that has 6 channels, or basis functions, which also span this feature space. End of explanation # Generate synthetic data s.t. each voxel has a Gaussian tuning function def generate_voxel_RFs(n_voxels, feature_resolution, random_tuning=True, RF_noise=0.): if random_tuning: # Voxel selectivity is random voxel_tuning = np.floor((np.random.rand(n_voxels) * range_stop) + range_start).astype(int) else: # Voxel selectivity is evenly spaced along the feature axis voxel_tuning = np.linspace(range_start, range_stop, n_voxels+1) voxel_tuning = voxel_tuning[0:-1] voxel_tuning = np.floor(voxel_tuning).astype(int) gaussian = scipy.signal.gaussian(feature_resolution, 15) voxel_RFs = np.zeros((n_voxels, feature_resolution)) for i in range(0, n_voxels): voxel_RFs[i, :] = np.roll(gaussian, voxel_tuning[i] - ((feature_resolution//2)-1)) voxel_RFs += np.random.rand(n_voxels, feature_resolution)*RF_noise # add noise to voxel RFs voxel_RFs = voxel_RFs / np.max(voxel_RFs, axis=1)[:, None] return voxel_RFs, voxel_tuning def generate_voxel_data(voxel_RFs, n_voxels, trial_list, feature_resolution, trial_noise=0.25): one_hot = np.eye(feature_resolution) # Generate trial-wise responses based on voxel RFs if range_start > 0: trial_list = trial_list + range_start elif range_start < 0: trial_list = trial_list - range_start stim_X = one_hot[:, trial_list] #@ basis_set.transpose() trial_data = voxel_RFs @ stim_X trial_data += np.random.rand(n_voxels, trial_list.size)*(trial_noise*np.max(trial_data)) return trial_data Explanation: Now we'll generate synthetic data. Ideally, each voxel that we measure from is roughly tuned to some part of the feature space (see Sprague, Boynton, Serences, 2019). So we will generate data that has a receptive field (RF). We can define the RF along the same feature axis as the channels that we generated above. The following two functions will generate the voxel RFs, and then generate several trials of that dataset. There are options to add uniform noise to either the RF or the trials. 
End of explanation np.random.seed(100) n_voxels = 50 n_train_trials = 120 training_stim = np.repeat(stim_vals, n_train_trials/6) voxel_RFs, voxel_tuning = generate_voxel_RFs(n_voxels, feature_resolution, random_tuning=False, RF_noise=0.1) train_data = generate_voxel_data(voxel_RFs, n_voxels, training_stim, feature_resolution, trial_noise=0.25) print(np.linalg.cond(train_data)) # print("Voxels are tuned to: ", voxel_tuning) # Generate plots to look at the RF of an example voxel. voxi = 20 f = plt.figure() plt.subplot(1, 2, 1) plt.plot(train_data[voxi, :]) plt.xlabel("trial") plt.ylabel("activation") plt.title("Activation over trials") plt.subplot(1, 2, 2) plt.plot(voxel_RFs[voxi, :]) plt.xlabel("degrees (motion direction)") plt.axvline(voxel_tuning[voxi]) plt.title("Receptive field at {} deg".format(voxel_tuning[voxi])) plt.suptitle("Example voxel") plt.figure() plt.imshow(train_data) plt.ylabel('voxel') plt.xlabel('trial') plt.suptitle('Simulated data from each voxel') Explanation: Now let's generate some training data and look at it. This code will create a plot that depicts the response of an example voxel for different trials. End of explanation # Fit an IEM iem_obj.fit(train_data.transpose(), training_stim) Explanation: Using this synthetic training data, we can fit the IEM. End of explanation # Let's visualize the basis functions. channels = iem_obj.channels_ feature_axis = iem_obj.channel_domain print(channels.shape) plt.figure() plt.subplot(1, 2, 1) for i in range(0, channels.shape[0]): plt.plot(feature_axis, channels[i,:]) plt.title('Channels (i.e. basis functions)') plt.subplot(1, 2, 2) plt.plot(np.sum(channels, 0)) plt.ylim(0, 2.5) plt.title('Sum across channels') Explanation: Calling the IEM fit method defines the channels, or the basis set, which span the feature domain. We can examine the channels and plot them to check that they look appropriate. Remember that the plot below is in circular space. Hence, the channels wrap around the x-axis. For example, the channel depicted in blue is centered at 0 degrees (far left of plot), which is the same as 360 degrees (far right of plot). We can check whether the channels properly tile the feature space by summing across all of them. This is shown on the right plot. It should be a straight horizontal line. End of explanation # Generate test data n_test_trials = 12 test_stim = np.repeat(stim_vals, n_test_trials/len(stim_vals)) np.random.seed(330) test_data = generate_voxel_data(voxel_RFs, n_voxels, test_stim, feature_resolution, trial_noise=0.25) # Predict test stim & get R^2 score pred_feature = iem_obj.predict(test_data.transpose()) R2 = iem_obj.score(test_data.transpose(), test_stim) print("Predicted features are: {} degrees.".format(pred_feature)) print("Actual features are: {} degrees.".format(test_stim)) print("Test R^2 is {}".format(R2)) Explanation: Now we can generate test data and see how well we can predict the test stimuli. End of explanation # Now get the model-based reconstructions, which are continuous # functions that should peak at each test stimulus feature recons = iem_obj._predict_feature_responses(test_data.transpose()) f = plt.figure() for i in range(0, n_test_trials-1): plt.plot(feature_axis, recons[:, i]) for i in stim_vals: plt.axvline(x=i, color='k', linestyle='--') plt.title("Reconstructions of {} degrees".format(np.unique(test_stim))) Explanation: In addition to predicting the exact feature, we can examine the model-based reconstructions in the feature domain. 
That is, instead of getting single predicted values for each feature, we can look at a reconstructed function which peaks at the predicted feature. Below we will plot all of the reconstructions. There will be some variability because of the noise added during the synthetic data generation. End of explanation iem_obj.verbose = False def train_and_test(nvox, ntrn, ntst, rfn, tn): vRFs, vox_tuning = generate_voxel_RFs(nvox, feature_resolution, random_tuning=True, RF_noise=rfn) trn = np.repeat(stim_vals, ntrn/6).astype(int) trnd = generate_voxel_data(vRFs, nvox, trn, feature_resolution, trial_noise=tn) tst = np.repeat(stim_vals, ntst/6).astype(int) tstd = generate_voxel_data(vRFs, nvox, tst, feature_resolution, trial_noise=tn) iem_obj.fit(trnd.transpose(), trn) recons = iem_obj._predict_feature_responses(tstd.transpose()) pred_ori = iem_obj.predict(tstd.transpose()) R2 = iem_obj.score(tstd.transpose(), tst) return recons, pred_ori, R2, tst Explanation: For a sanity check, let's check how R^2 changes as the number of voxels increases. We can write a quick wrapper function to train and test on a given set of motion directions, as below. End of explanation np.random.seed(300) vox_list = (5, 10, 15, 25, 50) R2_list = np.zeros(len(vox_list)) for idx, nvox in enumerate(vox_list): recs, preds, R2_list[idx], test_features = train_and_test(nvox, 120, 30, 0.1, 0.25) print("The R2 values for increasing numbers of voxels: ") print(R2_list) Explanation: We'll iterate through the list and look at the resulting R^2 values. End of explanation
5,172
Given the following text description, write Python code to implement the functionality described below step by step Description: Week 1 Step1: Make a vector with 6 elements Step2: Get some information about the vector Step3: Create a matrix like this Step4: Get some information about the matrix Step5: A very powerful feature of NumPy and Python are List Comprehensions. These can replace many for loops and are much more efficient to run. Here we square every element in the vector a from above Step6: Using NumPy we can select rows and columns of data very easily (known as array slicing). For example, we can print the first row of the matrix, m Step7: Or we can slice the first column only. Using the , symbol we can ask for specfic rows and columns. The first integer always specifies the rows, which is followed by , and the second integer specifies the columns. The colon character Step8: We can select a specific element using the matrix's column and row index, for example we want to select the item in the second column's second row Step9: Entire books have been written about NumPy. Let's move on to Matplotlib. Matplotlib First we will load Matplotlib, a 2D plotting library, and run %matplotlib inline. This magic function (functions beginning with the % symbol are called magic functions) will display any plots inline in the notebook. Step10: Using the plt.plot() function, you can plot many types of data and Matplotlib will try to figure out what you want to do with the data Step11: The plt.plot() function will take an arbritrary number of x y argument pairs Step12: Or plot three lines at once by supplying three pairs of x y values Step13: Combined with functions, more complex curves can be plotted. Let's plot the sigmoid curve Step14: Pandas Let's now import Pandas. This library provides R-style dataframe table functionality. Like NumPy, it is convention to import the Pandas library as pd for brevity. Step15: Let's load the well known Wisconsin Breast Cancer dataset Step16: You can view a summary of the data using the describe() function Step17: Let's rename the column names Step18: See the first few rows Step19: Notice how the Class column (the last column in the table) consists of 2s and 4s. In this case 2 stands for malignant and 4 stands for benign. You can check this quickly using Step20: So you see only 2s and 4s are contained in this column. See a breakdown of the counts using the value_counts() function Step21: However, it is convention in machine learning to use a 0-based index to represent classes. Let's replace the 2s with 0s and the 4s with 1s Step22: Perhaps now we would like to drop the ID column Step23: As you can see you can use Pandas to quickly manipulate and access tabular data. Here we access the first 10 rows of the Size_Uniformity column Step24: Columns can also be accessed using the name of the column as an index Step25: You can examine the data types (Pandas dataframes can contain multiple types) Step26: The column Bare_Nucleoli appears as type object as it contains some missing data, which appear as ? in the dataset. Later in the course we will learn how to handle missing data. You can also use Pandas to perform quick statistical analyses. Here we calculate the standard deviation for each column Step27: Or calculate the standard deviation for a certain column Step28: Pandas also provides useful plotting tools. To look for correlations in data, a scatter matrix is often useful. 
Here we will plot only three columns of the data, and only the first 100 rows of the data, as a scatter plot with so many columns can take some time to render and can result in a very large plot. Step29: SciKit-Learn Last but not least, we shall import some modules from SciKit-Learn. SciKit-Learn is the main machine learning library for Python. It is a large library and is not normally loaded entirely; in general you load only the modules you need from the main library. Here we will load the datasets module and the k-nearest neighbours module (KNeighborsClassifier) Step30: Load the Iris dataset (a flower data set often used for demonstration purposes) Step31: Convention states that matrices are represented using uppercase letters, often the letter X, and label vectors are represented using lower case letters, often y Step32: The k-Nearest Neighbour algorithm is possibly the simplest classifier. Given a new observation, take the label of the sample closest to it in the n-dimensional feature space. First we must randomise the data, but we must ensure we randomise the labels as well in sync. We can use a NumPy feature to create indices that then correlate to both the targets and the data Step33: Then, the data must be split into a test set and a training set (again we are using naming conventions here for the training and test data X_train and X_test and their labels y_train and y_test) Step34: Now we will try to fit the k-nearest neighbours classifier to the training data Step35: The classifier has now been trained on the training data (X_train). We can now check how well it predicts newly seen data (using our test set, X_test) Step36: We can now element-wise compare our predicted results, in y_pred, with the true labels stored in y_test
Python Code: import numpy as np Explanation: Week 1: Getting Started with Jupyter Notebooks In this notebook, we will make sure all the packages required for this course are properly installed and working. To use this notebook, select the input cells (shown as In [x]) in order and press Shift-Enter to execute the code. Your installation is properly working if none of the cells below return any errors. Loading and Testing the Course's Required Packages In this notebook we will load and test each of packages that we will mainly be using during the course: Numpy, Matplotlib, Pandas, and SciKit-Learn. Numpy First, we will import NumPy. NumPy is a linear algebra library, and provides useful vector and matrix functionality, similar to MATLAB. It is convention to define NumPy as np for the sake of brevity: End of explanation a = np.array([1,2,3,4,5,6]) # Print the contents of a a Explanation: Make a vector with 6 elements: End of explanation print("The vector a has " + str(a.ndim) + " dimension(s) and has the shape " + str(a.shape) + ".") Explanation: Get some information about the vector: End of explanation m = np.array([[1,2,3], [4,5,6]]) m Explanation: Create a matrix like this: End of explanation print("The matrix m has " + str(m.ndim) + " dimension(s) and has the shape " + str(m.shape) + ".") Explanation: Get some information about the matrix: End of explanation a_squared = [i**2 for i in a] a_squared Explanation: A very powerful feature of NumPy and Python are List Comprehensions. These can replace many for loops and are much more efficient to run. Here we square every element in the vector a from above: End of explanation m[0] Explanation: Using NumPy we can select rows and columns of data very easily (known as array slicing). For example, we can print the first row of the matrix, m: End of explanation m[:,0] Explanation: Or we can slice the first column only. Using the , symbol we can ask for specfic rows and columns. The first integer always specifies the rows, which is followed by , and the second integer specifies the columns. The colon character : is shorthand for all rows or all columns. Here we select all rows of matrix m using : and select the first column, using 0: End of explanation m[1,1] Explanation: We can select a specific element using the matrix's column and row index, for example we want to select the item in the second column's second row: End of explanation import matplotlib.pyplot as plt %matplotlib inline Explanation: Entire books have been written about NumPy. Let's move on to Matplotlib. Matplotlib First we will load Matplotlib, a 2D plotting library, and run %matplotlib inline. This magic function (functions beginning with the % symbol are called magic functions) will display any plots inline in the notebook. 
End of explanation plt.plot([1,2,3,4,5]) Explanation: Using the plt.plot() function, you can plot many types of data and Matplotlib will try to figure out what you want to do with the data: End of explanation # x**y is shorthand for x to the power of y plt.plot([1,2,3,4,5],[1**2,2**2,3**2,4**2,5**2]) Explanation: The plt.plot() function will take an arbritrary number of x y argument pairs: End of explanation p = np.arange(1,10) # Get a range of numbers form 1 to 10 plt.plot(p, p, p, p**2, p, p**3) Explanation: Or plot three lines at once by supplying three pairs of x y values: End of explanation import math def sigmoid(x): a = [] for item in x: a.append(1/(1+math.exp(-item))) return a x = np.arange(-10., 10., 0.1) sig = sigmoid(x) plt.plot(x,sig) Explanation: Combined with functions, more complex curves can be plotted. Let's plot the sigmoid curve: End of explanation import pandas as pd Explanation: Pandas Let's now import Pandas. This library provides R-style dataframe table functionality. Like NumPy, it is convention to import the Pandas library as pd for brevity. End of explanation df = pd.read_csv("https://raw.githubusercontent.com/mdbloice/Machine-Learning-for-Health-Informatics/master/data/breast-cancer-wisconsin.csv") Explanation: Let's load the well known Wisconsin Breast Cancer dataset: End of explanation df.describe() Explanation: You can view a summary of the data using the describe() function: End of explanation df.columns = ["ID","Clump_Thickness","Size_Uniformity","Shape_Uniformity","Marginal_Adhesion","Epithelial_Size","Bare_Nucleoli","Bland_Chromatin","Normal_Nucleoli","Mitoses","Class"] # Print the new header names: df.columns Explanation: Let's rename the column names: End of explanation df.head() Explanation: See the first few rows: End of explanation df.Class.unique() Explanation: Notice how the Class column (the last column in the table) consists of 2s and 4s. In this case 2 stands for malignant and 4 stands for benign. You can check this quickly using: End of explanation df.Class.value_counts() Explanation: So you see only 2s and 4s are contained in this column. See a breakdown of the counts using the value_counts() function: End of explanation df = df.replace({"Class": {2: 0, 4: 1}}) df.Class.value_counts() Explanation: However, it is convention in machine learning to use a 0-based index to represent classes. Let's replace the 2s with 0s and the 4s with 1s: End of explanation df = df.drop(["ID"], axis=1) df.describe() Explanation: Perhaps now we would like to drop the ID column: End of explanation df.Size_Uniformity[0:10] Explanation: As you can see you can use Pandas to quickly manipulate and access tabular data. Here we access the first 10 rows of the Size_Uniformity column: End of explanation df['Size_Uniformity'][0:10] Explanation: Columns can also be accessed using the name of the column as an index: End of explanation df.dtypes Explanation: You can examine the data types (Pandas dataframes can contain multiple types): End of explanation df.std() Explanation: The column Bare_Nucleoli appears as type object as it contains some missing data, which appear as ? in the dataset. Later in the course we will learn how to handle missing data. You can also use Pandas to perform quick statistical analyses. 
Here we calculate the standard deviation for each column: End of explanation df.Clump_Thickness.std() Explanation: Or calculate the standard deviation for a certain column: End of explanation from pandas.tools.plotting import scatter_matrix # Manually select three of the table's columns by passing an array of column names: df_subset = df[['Clump_Thickness','Size_Uniformity', 'Shape_Uniformity']] # The semi colon at the end of this line is to suppress informational output (we only want to see the plot) scatter_matrix(df_subset.head(100), alpha=0.2, figsize=(6,6), diagonal='kde'); Explanation: Pandas also provides useful plotting tools. To look for correlations in data, a scatter matrix is often useful. Here we will plot only three columns of the data, and only the first 100 rows of the data, as a scatter plot with so many columns can take some time to render and can result in a very large plot. End of explanation from sklearn import datasets from sklearn.neighbors import KNeighborsClassifier Explanation: SciKit-Learn Last but not least, we shall import some modules from SciKit-Learn. SciKit-Learn is the main machine learning library for Python. It is a large library and is not normally loaded entirely; in general you load only the modules you need from the main library. Here we will load the datasets module and the k-nearest neighbours module (KNeighborsClassifier): End of explanation iris = datasets.load_iris() Explanation: Load the Iris dataset (a flower data set often used for demonstration purposes): End of explanation X = iris.data y = iris.target Explanation: Convention states that matrices are represented using uppercase letters, often the letter X, and label vectors are represented using lower case letters, often y: End of explanation np.random.seed(376483) random_indices = np.random.permutation(len(y)) random_indices Explanation: The k-Nearest Neighbour algorithm is possibly the simplest classifier. Given a new observation, take the label of the sample closest to it in the n-dimensional feature space. First we must randomise the data, but we must ensure we randomise the labels as well in sync. We can use a NumPy feature to create indices that then correlate to both the targets and the data: End of explanation X_train = X[random_indices[:-10]] X_test = X[random_indices[-10:]] y_train = y[random_indices[:-10]] y_test = y[random_indices[-10:]] print("Number of training samples: %d. Number of test samples: %d." % (len(X_train), len(X_test)) ) Explanation: Then, the data must be split into a test set and a training set (again we are using naming conventions here for the training and test data X_train and X_test and their labels y_train and y_test): End of explanation knn = KNeighborsClassifier() # Initialise the classifier. knn.fit(X_train, y_train) # Fit the classifier. Explanation: Now we will try to fit the k-nearest neighbours classifier to the training data: End of explanation y_pred = knn.predict(X_test) # The classifier's predicted labels are now contained in y_pred: y_pred Explanation: The classifier has now been trained on the training data (X_train). We can now check how well it predicts newly seen data (using our test set, X_test): End of explanation y_pred == y_test Explanation: We can now element-wise compare our predicted results, in y_pred, with the true labels stored in y_test: End of explanation
5,173
Given the following text description, write Python code to implement the functionality described below step by step Description: Content and Objectives Show effects of multipath on a pulse and on a pulse-shaped data signal for random data Import Step1: Function for determining the impulse response of an RC filter Step2: Parameters Step3: Define Channel Step4: Pulse Shape and Effect of Multi-Path on Pulse Shape Step5: show pulse and version after multi-path Step6: show spectra Note Step7: Note Step8: show signals Step9: Note I Step10: Note
Python Code: # importing import numpy as np import matplotlib.pyplot as plt import matplotlib # showing figures inline %matplotlib inline # plotting options font = {'size' : 20} plt.rc('font', **font) plt.rc('text', usetex=True) matplotlib.rc('figure', figsize=(18, 10) ) Explanation: Content and Objectives Show effects of multipath on a pulse and on a pulse-shaped data signal for random data Import End of explanation ######################## # find impulse response of an RRC filter ######################## def get_rrc_ir(K, n_sps, t_symbol, beta): ''' Determines coefficients of an RRC filter Formula out of: J. Huber, Trelliscodierung, Springer, 1992, S. 15 NOTE: roll-off factor must not equal zero NOTE: Length of the IR has to be an odd number IN: length of IR, sps factor, symbol time, roll-off factor OUT: filter coefficients ''' if beta == 0: beta = 1e-32 K = int(K) if ( K%2 == 0): raise ValueError('Length of the impulse response should be an odd number') # initialize np.array rrc = np.zeros( K ) # find sample time and initialize index vector t_sample = t_symbol / n_sps time_ind = range( -(K-1)//2, (K-1)//2+1) # assign values of rrc for t_i in time_ind: t = (t_i)* t_sample if t_i == 0: rrc[ int( t_i+(K-1)//2 ) ] = (1-beta+4*beta/np.pi) elif np.abs(t) == t_symbol / ( 4 * beta ): rrc[ int( t_i+(K-1)//2 ) ] = beta*np.sin( np.pi/(4*beta)*(1+beta) ) \ - 2*beta/np.pi*np.cos(np.pi/(4*beta)*(1+beta)) else: rrc[ int( t_i+(K-1)//2 ) ] = ( 4 * beta * t / t_symbol * np.cos( np.pi*(1+beta)*t/t_symbol ) \ + np.sin( np.pi * (1-beta) * t / t_symbol ) ) / ( np.pi * t / t_symbol * (1-(4*beta*t/t_symbol)**2) ) rrc = rrc / np.sqrt(t_symbol) return rrc Explanation: Function for determining the impulse response of an RC filter End of explanation # parameters of the filter beta = 0.33 n_sps = 8 # samples per symbol syms_per_filt = 4 # symbols per filter (plus minus in both directions) K_filt = 2 * syms_per_filt * n_sps + 1 # length of the fir filter # set symbol time t_symb = 1.0 # switch for normalizing all signals in order to only look at their shape and not their energy normalize_signals = 1 Explanation: Parameters End of explanation # defining delays of multi-path and their weight # NOTE: No "sanity check" if lengths correspond, so errors may occurr # choose whether delays of multipath are # (0) in multiples of the symbol time t_sym or # (1) in multiples of the sampling time t_sym / n_sps channel_factors = [ 1.0, .5, .1 ] delay_type = 1 if delay_type == 0: # construction based on delays w.r.t. symbol time channel_delays_syms = np.array( [ 1, 3, 5 ] ) channel_delays_samples = n_sps * channel_delays_syms else: # "fractional delays", i.e. delays w.r.t. 
t_sym / n_sps channel_delays_samples = np.array( [ 1, 12, 15 ] ) h_channel = np.zeros( np.max(channel_delays_samples ) + 1 ) for k in np.arange( len(channel_delays_samples) ): h_channel[ channel_delays_samples[k] ] = channel_factors[k] Explanation: Define Channel End of explanation rrc = get_rrc_ir( K_filt, n_sps, t_symb, beta) r_rrc = np.convolve( rrc, h_channel ) if normalize_signals: rrc /= np.linalg.norm( rrc ) r_rrc /= np.linalg.norm( r_rrc ) Explanation: Pulse Shape and Effect of Multi-Path on Pulse Shape End of explanation plt.figure() plt.plot( np.arange(len(rrc)), rrc, linewidth=2.0, label='$g_{\\mathrm{rrc}}(t)$') plt.plot( np.arange(len(r_rrc)), r_rrc, linewidth=2.0, label='$r(t)$') plt.legend(loc='upper right') plt.grid(True) plt.xlabel('$n/t$') Explanation: show pulse and version after multi-path End of explanation rrc_padded = np.hstack( ( rrc , np.zeros( 9*len(rrc) ) ) ) rrc_padded = np.roll( rrc_padded, -( K_filt - 1 ) // 2 ) RRC = np.fft.fft( rrc_padded ) r_rrc_padded = np.hstack( ( r_rrc , np.zeros( 10*len(rrc) - len( r_rrc ) ) ) ) r_rrc_padded = np.roll( r_rrc_padded, -( K_filt - 1 ) // 2 ) R_rrc = np.fft.fft( r_rrc_padded ) if normalize_signals: RRC /= np.linalg.norm( RRC ) R_rrc /= np.linalg.norm( R_rrc ) f = np.linspace( -n_sps/(2*t_symb), n_sps/(2*t_symb), len(rrc_padded) ) plt.figure() plt.plot( f, np.abs( RRC )**2, linewidth=2.0, label='$|G_{\\mathrm{rrc}}(f)|^2$') plt.plot( f, np.abs( R_rrc )**2, linewidth=2.0, label='$|R(f)|^2$') plt.legend(loc='upper right') plt.grid(True) plt.xlabel('$f$') Explanation: show spectra Note: Both spectra are normalized to have equal energy in frequency regime End of explanation # number of symbols and samples per symbol n_symb = 8 # modulation scheme and constellation points constellation = np.array( [ 1 , -1] ) # generate random binary vector and modulate the specified modulation scheme d = np.random.randint( 2, size = n_symb ) s = constellation[ d ] # prepare sequence to be filtered by upsampling s_up = np.zeros( n_symb * n_sps, dtype=complex) s_up[ : : n_sps ] = s s_up_delayed = np.hstack( ( np.zeros( (K_filt-1) // 2 ) , s_up ) ) s_up_delayed_delayed = np.hstack( ( np.zeros( K_filt-1 ), s_up ) ) # apply RRC filtering s_Tx = np.convolve( s_up, rrc) rc = np.convolve( rrc, rrc) s_rc = np.convolve( s_up, rc) # get received signal r = np.convolve( s_Tx, h_channel) # apply MF at the Rx y_mf_rrc = np.convolve(r, rrc) y_mf_no_channel = np.convolve( s_Tx, rrc ) # normalize signals if applicable if normalize_signals: s_Tx /= np.linalg.norm( s_Tx ) r /= np.linalg.norm( r ) y_mf_rrc /= np.linalg.norm( y_mf_rrc ) y_mf_no_channel /= np.linalg.norm( y_mf_no_channel ) Explanation: Note: Spectrum is shown to emphasize that pulse shape might be distorted. 
Real Data Modulated Tx-signal End of explanation plt.plot( np.arange(len(s_up)), np.max(np.abs(s_Tx)) * np.real(s_up),'vr', ms=12, label='$A_n$') plt.plot( np.arange(len(s_Tx)), np.real(s_Tx), linewidth=2.0, label='$s(t)=\sum A_ng_{\\mathrm{rrc}}(t-nT)$') plt.plot( np.arange(len(s_up_delayed)), np.max(np.abs(s_Tx)) * np.real(s_up_delayed),'xg', ms=18, label='$A_{n-\\tau_\mathrm{g}}$') plt.title('Tx signal and symbols') plt.legend(loc='upper right') plt.grid(True) plt.xlabel('$n$') Explanation: show signals End of explanation plt.plot( np.arange(len(s_up_delayed)), np.max(np.abs(s_Tx)) * np.real(s_up_delayed),'x', ms=18, label='$A_{n-\\tau_\mathrm{g}}$') plt.plot( np.arange(len(s_up_delayed_delayed)), np.max(np.abs(s_Tx)) * np.real(s_up_delayed_delayed),'D', ms=12, label='$A_{n-2\\tau_\mathrm{g}}$') plt.plot( np.arange(len(s_Tx)), np.real(s_Tx), linewidth=2.0, label='$s(t)=\sum A_ng_{\\mathrm{rrc}}(t-nT)$') plt.plot( np.arange(len(y_mf_no_channel)), np.real(y_mf_no_channel), linewidth=2.0, label='$y_{\\mathrm{mf}}(t)=s(t)* g_{\\mathrm{rrc}}(t)$') plt.title('Rx after mf, without channel') plt.legend(loc='upper right') plt.grid(True) plt.xlabel('$n$') Explanation: Note I: The sybol values (except for scaling) may be obtained by sampling the signal $s(t)$ at times being delayed by $\tau_\mathrm{g}$, equaling the group delay of the pulse shaping filter. Note II: So, in upcoming plots symbols $A_n$ (without delay) will be omitted and only $A_{n-\tau_\mathrm{g}}$ will be shown. End of explanation plt.plot( np.arange(len(s_up_delayed)), np.max(np.abs(s_Tx)) * np.real(s_up_delayed),'x', ms=18, label='$A_{n-\\tau_\mathrm{g}}$') plt.plot( np.arange(len(s_up_delayed_delayed)), np.max(np.abs(s_Tx)) * np.real(s_up_delayed_delayed),'D', ms=12, label='$A_{n-2\\tau_\mathrm{g}}$') plt.plot( np.arange(len(s_Tx)), np.real(s_Tx), linewidth=2.0, label='$s(t)=\sum A_ng_{\\mathrm{rrc}}(t-nT)$') #plt.plot( np.arange(len(r)), np.real(r), linewidth=2.0, label='$r(t)=s(t)*h(t)$') plt.plot( np.arange(len(y_mf_rrc)), np.real(y_mf_rrc), linewidth=2.0, label='$y_{\\mathrm{mf}}(t)=r(t)* g_{\\mathrm{rrc}}(t)$') plt.title('Transmission with channel') plt.legend(loc='upper right') plt.grid(True) plt.xlabel('$n$') Explanation: Note: If the channel is perfect, i.e, no channel is active at all, then MF at the receiver--matched to rrc--leads to signal $y_\mathrm{rrc}(t)$ whose samples with delay $2\tau_\mathrm{g}$ corresponds to the transmitted symbols. End of explanation
5,174
Given the following text description, write Python code to implement the functionality described below step by step Description: Testing Nsaba Functionality Current Methods get_aba_ge() ge_ratio() get_ns_act() make_ge_ns_mat() coords_to_ge() Step1: Coordinates to gene expression Step2: Visualization Methods (testing)
Python Code: %matplotlib inline from nsaba.nsaba import Nsaba from nsaba.nsaba.visualizer import NsabaVisualizer import numpy as np import os import matplotlib.pyplot as plt import pandas as pd import itertools %load_ext line_profiler # Simon Path IO data_dir = '../../data_dir' os.chdir(data_dir) Nsaba.aba_load() Nsaba.ns_load() #Torben Path IO ns_path = "/Users/Torben/Documents/ABI analysis/current_data_new/" aba_path = '/Users/Torben/Documents/ABI analysis/normalized_microarray_donor9861/' Nsaba.aba_load(aba_path) Nsaba.ns_load(ns_path) # Loading gene expression for all ABA registered Entrez IDs. A = Nsaba() A.load_ge_pickle('Nsaba_ABA_ge.pkl') %time A.get_ns_act('attention', thresh=-1) A.get_ns_act('reward', thresh=-1) # Testing ge_ratio() A = Nsaba() A.ge_ratio((1813,1816)) Explanation: Testing Nsaba Functionality Current Methods get_aba_ge() ge_ratio() get_ns_act() make_ge_ns_mat() coords_to_ge() End of explanation rand = lambda null: np.random.uniform(-10,10,3).tolist() coord_num = 20 coords = [rand(0) for i in range(coord_num)] A.coords_to_ge(coords, entrez_ids=[1813,1816], search_radii=8) A.get_aba_ge([733,33,88]) A.get_ns_act("attention", thresh=-1, method='knn') # You can use the sphere method too, if you want to weight by bucket. # e.g: # A.get_ns_act("attention", thresh=.3, method='sphere') A.make_ge_ns_mat('attention', [733, 33, 88]) A.make_ge_ns_mat('attention', [733, 33, 88]) Explanation: Coordinates to gene expression: Provide a list of coordinates and entrez_ids and the function will return matrix of coordinates by gene expression End of explanation NV = NsabaVisualizer(A) NV.visualize_ge([1813]) NV.visualize_ns('attention', alpha=.3) NV.lstsq_ns_ge('attention', [1813]) NV.lstsq_ge_ge(1813, 1816); NV.lstsq_ns_ns('attention', 'reward') Explanation: Visualization Methods (testing) End of explanation
5,175
Given the following text description, write Python code to implement the functionality described below step by step Description: xlsxWriter tutorial install sudo pip install XlsxWriter 아나콘다 설치시 기본적으로 내장되어 있음 Step1: tutorial 1 Step2: <img src="https Step3: Tutorial 2 Step4: Tutorial 3 Step5: write() method 여기에 더 많은 정보가 있어요 write_string() write_number() write_blank() write_formula() write_datetime() write_boolean() write_url() workbook class constant_memory Step6: <img src="https Step7: <img src="https Step8: <img src="https
Python Code: import xlsxwriter workbook = xlsxwriter.Workbook('hello.xlsx') worksheet = workbook.add_worksheet() worksheet.write('A1', 'Hello world') workbook.close() Explanation: xlsxWriter tutorial install sudo pip install XlsxWriter 아나콘다 설치시 기본적으로 내장되어 있음 End of explanation expenses = ( ['Rent', 1000], ['Gas', 100], ['Food', 300], ['Gym', 50], ) workbook = xlsxwriter.Workbook('Expenses01.xlsx') # worksheet은 sheet1, sheet2, ... 기본 이름이지만 이름을 별도로 붙일 수 있음! worksheet = workbook.add_worksheet() # start from the first cell! zero index row = 0 col = 0 for item, cost in (expenses): worksheet.write(row, col, item) worksheet.write(row, col+1, cost) row += 1 # 데이터 입력할 경우 write 사용 worksheet.write(row, 0, 'Total') worksheet.write(row, 1, '=SUM(B1:B4)') workbook.close() # 항상 닫아줘야함 Explanation: tutorial 1: Created a Simple XLSX file End of explanation # default는 sheet1, sheet2 .... 임 worksheet1 = workbook.add_worksheet() # sheet1 worksheet2 = workbook.add_worksheet('Data') worksheet3 = workbook.add_worksheet() # sheet3 Explanation: <img src="https://xlsxwriter.readthedocs.io/_images/tutorial01.png"> XlsxWriter can only create new files. It cannot read or modify existing files. 수정하진 못함 주륵..! End of explanation workbook = xlsxwriter.Workbook("Expenses02.xlsx") worksheet = workbook.add_worksheet("Sheet1") # bold 처리 bold = workbook.add_format({"bold": True}) # format for cells money = workbook.add_format({"num_format": "$#,##0"}) # header 설정 worksheet.write('A1', 'Item', bold) worksheet.write('B1', 'Cost', bold) expenses = ( ['Rent', 1000], ['Gas', 100], ['Food', 300], ['Gym', 50], ) row = 1 # header를 작성했기에 row가 1부터 시작 col = 0 for item, cost in (expenses): worksheet.write(row, col, item) worksheet.write(row, col+1, cost, money) # write(row, col, 넣을 값(token), 형식[format]) row += 1 worksheet.write(row, 0, "Total", bold) worksheet.write(row, 1, "=sum(B2:B5)", money) workbook.close() Explanation: Tutorial 2 : Adding formatting to the XLSX file 특정 포맷..! bold처리 같은 것들 <img src="https://xlsxwriter.readthedocs.io/_images/tutorial02.png"> End of explanation from datetime import datetime workbook = xlsxwriter.Workbook("Expenses03.xlsx") worksheet = workbook.add_worksheet() bold = workbook.add_format({"bold":1}) money_format = workbook.add_format({"num_format": "$#,##0"}) date_format = workbook.add_format({"num_format": "mmmm d yyyy"}) worksheet.set_column(1, 1, 15) # column의 width 조절 worksheet.write("A1", "Item", bold) worksheet.write("B1", "Date", bold) worksheet.write("C1", "Cost", bold) expenses = ( ['Rent', '2013-01-13', 1000], ['Gas', '2013-01-14', 100], ['Food', '2013-01-16', 300], ['Gym', '2013-01-20', 50], ) # Start from the first cell below the headers. row = 1 col = 0 for item, date_str, cost in (expenses): date = datetime.strptime(date_str, "%Y-%m-%d") worksheet.write_string (row, col, item) worksheet.write_datetime(row, col + 1, date, date_format ) worksheet.write_number (row, col + 2, cost, money_format) row += 1 # Write a total using a formula. worksheet.write(row, 0, 'Total', bold) worksheet.write(row, 2, '=SUM(C2:C5)', money_format) workbook.close() Explanation: Tutorial 3: Writing different typs of data to the XLSX file 다른 타입의 데이터를 넣기! 
<img src="https://xlsxwriter.readthedocs.io/_images/tutorial03.png"> End of explanation import pandas as pd df = pd.DataFrame({"Data": [10, 20, 30, 20, 15, 30, 45]}) df ## pandas로 xlsxwriter 접근하기 df = pd.DataFrame({'Data': [10, 20, 30, 20, 15, 30, 45]}) writer = pd.ExcelWriter("pandas_simple.xlsx", engine="xlsxwriter") df.to_excel(writer, sheet_name="Sheet1") workbook = writer.book worksheet = writer.sheets["Sheet1"] # chart 객체 생성 chart = workbook.add_chart({"type":'column'}) # dataframe 데이터에서 chart 범위설정 chart.add_series({"values":"=Sheet1!$B$2:$B$8"}) # worksheet에 chart 삽입 worksheet.insert_chart("D2", chart) writer.save() Explanation: write() method 여기에 더 많은 정보가 있어요 write_string() write_number() write_blank() write_formula() write_datetime() write_boolean() write_url() workbook class constant_memory : 메모리에 있는 데이터를 효율적으로 관리 workbook = xlsxwriter.Workbook(filename, {"constant_memory":True}) tmpdir : 임시 파일을 저장할 장소..! workbook = xlsxwriter.Workbook(filename, {'tmpdir': '/home/user/tmp'}) working with python pandas and xlsxwriter xlwt와 openpyxl or xlsxWriter 모듈사용 End of explanation # 차트 속성을 수정하려면 어떻게 해야할까..! # dataframe formatting 색깔 설정 worksheet.conditional_format('B2:B8', {'type': '3_color_scale'}) # 이걸 하면 알록달록 아래처럼 나옴 Explanation: <img src="https://xlsxwriter.readthedocs.io/_images/pandas_chart.png"> End of explanation df1 = pd.DataFrame({'Data': [11, 12, 13, 14]}) df2 = pd.DataFrame({'Data': [21, 22, 23, 24]}) df3 = pd.DataFrame({'Data': [31, 32, 33, 34]}) df4 = pd.DataFrame({'Data': [41, 42, 43, 44]}) # writer 설정 writer = pd.ExcelWriter('pandas_positioning.xlsx', engine='xlsxwriter') # df1을 넣는 위치 설정 df1.to_excel(writer, sheet_name='Sheet1') # 기초 A1 df2.to_excel(writer, sheet_name='Sheet1', startcol=3) # col 4번째(3)부터 시작 df3.to_excel(writer, sheet_name='Sheet1', startrow=6) # row 7번째부터 시작 # header와 index 없이 설정가능! 단 시작 위치는 고려해야함 df4.to_excel(writer, sheet_name='Sheet1', startrow=7, startcol=4, header=False, index=False) # writer를 닫음 writer.save() Explanation: <img src="https://xlsxwriter.readthedocs.io/_images/pandas_conditional.png"> End of explanation list_data = [10, 20, 30, 20, 15, 30, 45] df = pd.DataFrame(list_data) excel_file = 'column.xlsx' sheet_name = 'Sheet1' writer = pd.ExcelWriter(excel_file, engine='xlsxwriter') df.to_excel(writer, sheet_name=sheet_name) workbook = writer.book worksheet = writer.sheets[sheet_name] chart = workbook.add_chart({'type': 'column'}) chart.add_series({ 'values': '=Sheet1!$B$2:$B$8', 'gap': 2 # 여백 }) # chart : y axis 설정 chart.set_y_axis({'major_gridlines': {'visible': False}}) # chart legend(범주) 해제 chart.set_legend({'position': 'none'}) # insert the chart into the worksheet worksheet.insert_chart('D2', chart) # d2는 에러나고 D2라고 해야되요 writer.save() Explanation: <img src="https://xlsxwriter.readthedocs.io/_images/pandas_positioning.png"> pandas + xlsx writer + vincent 로 차트 그리기 End of explanation
Given the following text description, write Python code to implement the functionality described below step by step Description: Analysis of Seattle Fremont Bridge Bike Traffic Step1: Get Data Step2: This shows a graph of the data on a weekly basis. Let's investigate what the pattern is when we look at hourly rates on individual days... Pivot Plot hourly traffic rates for all days in the data. Step3: We can see two types of lines in this graph... One type with two peaks, and another type that has a peak in the middle of the day. We hypothesize that this is a difference between weekdays and weekends. Let's investigate further. Principal Component Analysis Step4: Unsupervised Clustering Step5: Comparing with Day of Week Step6: Analyzing Outliers The following points are weekdays in the "weekend" cluster
Python Code: %matplotlib inline
import matplotlib.pyplot as plt
from jubiiworkflow.data import get_data
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

plt.style.use('seaborn')
Explanation: Analysis of Seattle Fremont Bridge Bike Traffic End of explanation
data = get_data()
p = data.resample('W').sum().plot()
p.set_ylim(0, None)
Explanation: Get Data End of explanation
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01)
Explanation: This shows a graph of the data on a weekly basis. Let's investigate what the pattern is when we look at hourly rates on individual days... Pivot Plot hourly traffic rates for all days in the data. End of explanation
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:, 0], X2[:, 1])
Explanation: We can see two types of lines in this graph... One type with two peaks, and another type that has a peak in the middle of the day. We hypothesize that this is a difference between weekdays and weekends. Let's investigate further. Principal Component Analysis End of explanation
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow')
plt.colorbar()

fig, ax = plt.subplots(1, 2, figsize=(14, 6))
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1, ax=ax[0])
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.1, ax=ax[1])
ax[0].set_title('Purple Cluster')
ax[1].set_title('Red Cluster')
Explanation: Unsupervised Clustering End of explanation
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:, 0], X2[:, 1], c=dayofweek, cmap='rainbow')
plt.colorbar()
Explanation: Comparing with Day of Week End of explanation
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 0) & (dayofweek < 5)]
Explanation: Analyzing Outliers The following points are weekdays in the "weekend" cluster End of explanation
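As a quick numeric check on the weekday/weekend hypothesis, we can score how often the GMM cluster label agrees with a simple weekday indicator. This is a sketch using the labels and dayofweek arrays defined above; since the cluster ids 0/1 are assigned arbitrarily by the fit, we take the better of the two orientations:

is_weekday = dayofweek < 5
agreement = np.mean(labels == is_weekday)
# the cluster numbering is arbitrary, so flip it if the opposite labeling matches better
agreement = max(agreement, 1 - agreement)
print('cluster / weekday agreement: {:.1%}'.format(agreement))

An agreement close to 1 supports reading the two clusters as weekday vs. weekend behavior.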
Given the following text description, write Python code to implement the functionality described below step by step Description: Python Tuples Reference Table of contents <a href="#1.-Creation">Creation</a> <a href="#2.-Basic-Operations">Basic Operations</a> <a href="#3.-Unpacking">Unpacking</a> <a href="#4.-Comparing">Comparing</a> <a href="#5.-Named-Tuples">Named Tuples</a> 1. Creation Step1: 2. Basic Operations Step2: 3. Unpacking Step3: 4. Comparing Step4: 5. Named Tuples collections.namedtuple() is a factory method that returns a subclass of the standard Python tuple type. You feed it a type name, and the fields it should have, and it returns a class that you can instantiate, passing in values for the fields you’ve defined, and so on. Step5: A subtle use of the _replace() method is that it can be a convenient way to populate named tuples that have optional or missing fields. To do this, you make a prototype tuple containing the default values and then use _replace() to create new instances with values replaced.
Python Code: # Create a tuple directly
digits = (0, 1, 'two')
digits

# Create a tuple from a list
digits = tuple([0, 1, 'two'])
digits

# For a single-item tuple, a trailing comma is required to tell the interpreter it's a tuple
zero = (0,)
zero
Explanation: Python Tuples Reference Table of contents <a href="#1.-Creation">Creation</a> <a href="#2.-Basic-Operations">Basic Operations</a> <a href="#3.-Unpacking">Unpacking</a> <a href="#4.-Comparing">Comparing</a> <a href="#5.-Named-Tuples">Named Tuples</a> 1. Creation End of explanation
# elements of a tuple cannot be modified (this would throw an error)
# digits[2] = 2

# concatenate tuples
digits = digits + (3, 4)
digits

# Create a single tuple with elements repeated (also works with lists)
(3, 4) * 2

# sort a list of tuples
tens = [(20, 60), (10, 40), (20, 30)]
sorted(tens)  # sorts by first element in tuple, then second element
Explanation: 2. Basic Operations End of explanation
bart = ('male', 10, 'simpson')  # create a tuple
(sex, age, surname) = bart
print(sex)
print(age)
print(surname)

# use the star expression to load multiple values into a list
record = ('Dave', 'dave@example.com', '773-555-1212', '847-555-1212')
name, email, *phone_numbers = record
phone_numbers

# star expressions can be used in the middle of the iterable too!
def drop_first_last(grades):
    first, *middle, last = grades
    return float(sum(middle)) / len(middle)

drop_first_last([3, 8, 8, 8, 8, 8, 8, 100])
Explanation: 3. Unpacking End of explanation
# Tuples are compared by comparing their members in order
(3, 9) < (5, 3)
(3, 'banana') < (3, 'apple')
(3, 'banana') == (3, 'ban')

# Tuple members are only compared if needed.
# Here is a non-comparable type
class mytype:
    def go(self):
        pass

a = mytype()
b = mytype()
(3, a) < (5, b)
# but the following code would generate an error because a and b would need to be compared
# (3, a) < (3, b)
Explanation: 4. Comparing End of explanation
from collections import namedtuple

# namedtuple(<type-name>, [field1, field2, field3, ...])
Subscriber = namedtuple('Subscriber', ['addr', 'joined'])

jonesy = Subscriber('jonesy@example.com', '2012-10-19')
jonesy
len(jonesy)
jonesy.addr

# because a namedtuple is a tuple, it is immutable.
# the following code would fail if uncommented
# jonesy.addr = 'another@example.com'

# instead, use _replace()
jonesy = jonesy._replace(addr='another@example.com')
jonesy
Explanation: 5. Named Tuples collections.namedtuple() is a factory method that returns a subclass of the standard Python tuple type. You feed it a type name, and the fields it should have, and it returns a class that you can instantiate, passing in values for the fields you've defined, and so on. End of explanation
Stock = namedtuple('Stock', ['name', 'shares', 'price', 'date', 'time'])

# Create a prototype instance
stock_prototype = Stock('', 0, 0.0, None, None)

# Function to convert a dictionary to a Stock
def dict_to_stock(s):
    return stock_prototype._replace(**s)

a = {'name': 'ACME', 'shares': 100, 'price': 123.45}
dict_to_stock(a)

b = {'name': 'ACME', 'shares': 100, 'price': 123.45, 'date': '12/17/2012'}
dict_to_stock(b)
Explanation: A subtle use of the _replace() method is that it can be a convenient way to populate named tuples that have optional or missing fields. To do this, you make a prototype tuple containing the default values and then use _replace() to create new instances with values replaced. End of explanation
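Going the other way is also convenient: the _asdict() method converts a named tuple back into a dictionary, which pairs nicely with the dict_to_stock() helper defined above (a small sketch using the same Stock type):

s = dict_to_stock({'name': 'ACME', 'shares': 100, 'price': 123.45})
d = s._asdict()        # back to a plain mapping
d['shares'] = 75       # tweak a field
s2 = dict_to_stock(d)  # rebuild the tuple from the edited mapping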
Given the following text description, write Python code to implement the functionality described below step by step Description: Homework 4 Step1: Now, let's connect to your Postgres database. On your Heroku Postgres details, look at the credentials for the database. Take the long URI in the credentials and replace the portion of the code that reads &lt;replace_me&gt; with the URI. It should start with postgres Step2: Table Descriptions Here is a list of the tables in the database. Each table links to the documentation on the FEC page for the dataset. Note that the table names here are slightly different from the ones in lecture. Consult the FEC page for the descriptions of the tables to find out what the correspondence is. cand Step3: For longer queries, you can save your query into a string, then use it in the %sql statement. The $query in the %sql statement pulls in the value in the Python variable query. Step4: In addition, you can assign the SQL statement to a variable and then call .DataFrame() on it to get a Pandas DataFrame. However, it will often be more efficient to express your computation directly in SQL. For this homework, we will be grading your SQL expressions so be sure to do all computation in SQL (unless otherwise requested). Step6: Question 1a We are interested in finding the PACs that donated large sums to the candidates. To begin to answer this question, we will look at the inter_comm table. We'll find all the transactions that exceed \$5,000. However, if there are a lot of transactions like that, it might not be useful to list them all. So before actually finding the transactions, find out how many such transactions there are. Use only SQL to compute the answer. (It should be a table with a single column called count and a single entry, the number of transactions.) We will be grading the query string query_q1a. You may modify our template but the result should contain the same information with the same names. Step8: Question 1b Having seen that there aren't too many transactions that exceed \$5,000, let's find them all. Using only SQL, construct a table containing the recipient committee's name, the ID of the donor committee, and the transaction amount, for transactions that exceed $5,000 dollars. Sort the transactions in decreasing order by amount. We will be grading the query string query_q1b. You may modify our template but the result should contain the same information with the same names. Step9: Question 1c Of course, individual transactions could be misleading. A more interesting question is Step11: If you peruse the results of your last query, you should notice that some names are listed twice with slightly different spellings. Perhaps this causes some contributions to be split extraneously. Question 1d Find a field that uniquely identifies recipient committees and repeat your analysis from the previous question using that new identifier. We will be grading the query string query_q1d. You may modify our template but the result should contain the same information with the same names. Step12: Question 1e Of course, your results are probably not very informative. Let's join these results with the comm table (perhaps twice?) to get the names of the committees involved in these transactions. As before, limit your results to the top 20 by total donation. We will be grading the query string query_q1e. You may modify our template but the result should contain the same information with the same names. Remember that the name column of inter_comm is not consistent. 
We found this out in 1(c) where we found that the same committees were named slightly differently. Because of this, you cannot use the name column of inter_comm to get the names of the committees. Step13: Question 2 What is the distribution of committees by state? Write a SQL query which computes for each state the number of committees in the comm table that are registered in that state. Display the results in descending order by count. We will be grading the query string query_q2. You may modify our template but the result should contain the same information with the same names. Step14: Question 3 Political Action Committees are major sources of funding for campaigns. They typically represent business, labor, or ideological interests and influence campaigns through their funding. Because of this, we'd like to know how much money each committee received from PACs. For each committee, list the total amount of donations they got from Political Action Committees. If they got no such donations, the total should be listed as null. Order the result by pac_donations, then cmte_nm. We will be grading you on the query string query_q3. You may modify our template but the result should contain the same information with the same names. Step15: Question 4 Committees can also contribute to other committees. When does this happen? Perhaps looking at the data can help us figure it out. Find the names of the top 10 (directed) committee pairs that are affiliated with the Republican Party, who have the highest number of intercommittee transactions. By directed, we mean that a transaction where C1 donates to C2 is not the same as one where C2 donates to C1. We will be grading you on the query string query_q4. You may modify our template but the result should contain the same information with the same names. Step16: Question 5 Some committees received donations from a common contributor. Perhaps they were ideologically similar. Find the names of distinct candidate pairs that share a common committee contributor from Florida. If you list a pair ("Washington", "Lincoln") you should also list ("Lincoln", "Washington"). Save the result in q5. Hint Step18: Part 2 Step19: Question 8 We want to know what proportion of this money came from small donors — individuals who donated \$200 or less. For example, if Hillary raised \$1000, and \$300 of that came from small donors, her proportion of small donors would be 0.3. Compute this proportion for each candidate by filling in the SQL query below. The resulting table should have three columns Step22: Question 9 Let's now do a bit of EDA. Fill in the SQL statements below to make histograms of the transaction amounts for both Hillary and Bernie. Note that we do take your entire result and put it into a dataframe. This is not scalable. If indiv_sample were large, your computer would run out of memory trying to store it in a dataframe. The better way to compute the histogram would be to use SQL to generate bins and count the number of contributions in each bin using the built-in width_bucket function. Step23: Question 10 Looks like there is a difference. Let's see if it's statistically significant. State appropriate null and alternative hypotheses for this problem. Fill in your answer here. Constructing a Bootstrap CI We want to create a bootstrap confidence interval of the proportion of funds contributed to Hillary Clinton by small donors. To do this in SQL, we need to number the rows we want to bootstrap. The following cell creates a view called hillary. Views are like tables.
However, instead of storing the rows in the database, Postgres will recompute the values in the view each time you query it. It adds a row_id column to each row in indiv_sample corresponding to a contribution to Hillary. Note that we use your hillary_cmte_id variable by including $hillary_cmte_id in the SQL. We'll do the same for Bernie, creating a view called bernie. Step25: Question 11 Let's contruct a view containing the rows we want to sample for each bootstrap trial. For example, if we want to create 100 bootstrap samples of 3 contributions to Hillary, we want something that looks like Step26: Question 12 Construct a view called hillary_trials that uses the hillary and hillary_design views to compute the total amount contributed by small donors for each trial as well as the overall amount. It should have three columns Step27: Question 13 Now, create a view called hillary_props that contains two columns Step29: Question 14 Now, repeat the process to bootstrap Bernie's proportion of funds raised by small donors. You should be able to mostly copy-paste your code for Hillary's bootstrap CI. Step30: Plotting the sample distribution Run the following cell to make a plot of the distribution of proportions for both Hillary and Bernie. Again, this would not be scalable if we took many bootstrap samples. However, 500 floats is reasonable to fit in memory. Step31: Computing the Confidence Interval Run the following cell to compute confidence intervals based on your hillary_props and bernie_props views. Think about what the intervals mean. Question 15 Based on your confidence intervals, should we reject the null? Are there any other factors that should be taken into consideration when making this conclusion? Write your answer here, replacing this text. Congrats! You finished the homework. Submitting your assignment First, run the next cell to run all the tests at once. Step32: Then, we'll submit the assignment to OkPy so that the staff will know to grade it. You can submit as many times as you want, and you can choose which submission you want us to grade by going to https
Python Code: import numpy as np import pandas as pd %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns import sqlalchemy !pip install -U okpy from client.api.notebook import Notebook ok = Notebook('hw4.ok') Explanation: Homework 4: SQL, FEC Data, and Small Donors Due: 11:59pm Tuesday, March 14 Note: The due date has changed from March 7 to March 14. Happy studying! In this homework, we're going to explore the Federal Election Commission's data on the money exchanged during the 2016 election. This homework has two main parts: Answering questions and computing descriptive statistics on the data Conducting a hypothesis test This is very similar to what you've done before in this class. However, in this homework almost all of our computations will be done using SQL. Getting Started For this assignment, you're going to use a popular cloud services provider: Heroku. This will give you some experience provisioning a database in the cloud and working on that database from your computer. Since the free tier of Heroku's Postgres service limits users to 10,000 rows of data, we've provided a subset of the FEC dataset for you to work with. If you're interested, you can download and load the entire dataset from http://www.fec.gov/finance/disclosure/ftpdet.shtml. It is about 4GB and contains around 24 million rows. (With Heroku and other cloud services, it is relatively straightforward to rent clusters of machines to work on much larger datasets. In particular, it would be easy to rerun your analyses in this assignment on the full dataset.) Provisioning the Postgres DB Visit https://signup.heroku.com/postgres-home-button and sign up for an account if you don't have one already. Now, install the Heroku CLI: https://devcenter.heroku.com/articles/heroku-cli. Then, run heroku login to log into Heroku from your CLI. Now, visit https://dashboard.heroku.com/apps and click New -> App. Name the app whatever you want. You should be sent to the app details page. Click Resources in the navbar, then in the Add-on search bar, type "Postgres". You should be able to select Heroku Postgres. Make sure the free tier (Hobby Dev) is selected and click Provision. Now you should see Heroku Postgres :: Database in your Add-ons list. Loading the data into the Heroku DB (1) Run the lines below in your terminal to install necessary libraries. conda install -y psycopg2 conda install -y postgresql pip install ipython-sql (2) Click the Heroku Postgres :: Database link in your app's Add-ons list. (3) In the Heroku Data page you got redirected to, you should see the name of your database. Scroll down to Administration and click View Credentials. These are the credentials that allow you to connect to the database. The last entry of the list contains a line that looks like: heroku pg:psql db_name --app app_name In your terminal, take that command and add "&lt; fec.sql" to the end to get something like: heroku pg:psql db_name --app app_name &lt; fec.sql Run that command. It will run the commands in fec.sql, which load the dataset into the database. Now you should be able to run the command without the "&lt; fec.sql" to have a postgres prompt. Try typing "\d+" at the prompt. 
You should get something like: ds100-hw4-db::DATABASE=&gt; \d+ List of relations Schema | Name | Type | Owner | Size | Description --------+--------------+-------+----------------+------------+------------- public | cand | table | vibrgrsqevmzkj | 16 kB | public | comm | table | vibrgrsqevmzkj | 168 kB | public | indiv | table | vibrgrsqevmzkj | 904 kB | public | indiv_sample | table | vibrgrsqevmzkj | 600 kB | public | inter_comm | table | vibrgrsqevmzkj | 296 kB | public | link | table | vibrgrsqevmzkj | 8192 bytes | (6 rows) Congrats! You now have a Postgres database running containing the data you need for this project. Part 1: Descriptive Statistics End of explanation my_URI = <replace_me> %load_ext sql %sql $my_URI engine = sqlalchemy.create_engine(my_URI) connection = engine.connect() Explanation: Now, let's connect to your Postgres database. On your Heroku Postgres details, look at the credentials for the database. Take the long URI in the credentials and replace the portion of the code that reads &lt;replace_me&gt; with the URI. It should start with postgres://. End of explanation # We use `LIMIT 5` to avoid displaying a huge table. # Although our tables shouldn't get too large to display, # this is generally good practice when working in the # notebook environment. Jupyter notebooks don't handle # very large outputs well. %sql SELECT * from cand LIMIT 5 Explanation: Table Descriptions Here is a list of the tables in the database. Each table links to the documentation on the FEC page for the dataset. Note that the table names here are slightly different from the ones in lecture. Consult the FEC page for the descriptions of the tables to find out what the correspondence is. cand: Candidates table. Contains names and party affiliation. comm: Committees table. Contains committee names and types. link: Committee to candidate links. indiv: Individual contributions. Contains recipient committee ID and transaction amount. inter_comm: Committee-to-candidate and committee-to-committee contributions. Contains donor and recipient IDs and transaction amount. indiv_sample: Sample of individual contributions to Hillary Clinton and Bernie Sanders. Used in Part 2 only. Writing SQL queries You can write SQL directly in the notebook by using the %sql magic, as demonstrated in the next cell. Be careful when doing this. If you try to run a SQL query that returns a lot of rows (100k or more is a good rule of thumb) your browser will probably crash. This is why in this homework, we will strongly prefer using SQL as much as possible, only materializing the SQL queries when they are small. Because of this, your queries should work even as the size of your data goes into the terabyte range! This is the primary advantage of working with SQL as opposed to only dataframes. End of explanation query = ''' SELECT cand_id, cand_name FROM cand WHERE cand_pty_affiliation = 'REP' LIMIT 5 ''' %sql $query Explanation: For longer queries, you can save your query into a string, then use it in the %sql statement. The $query in the %sql statement pulls in the value in the Python variable query. End of explanation res = %sql select * from cand limit 5 res_df = res.DataFrame() res_df['cand_id'] Explanation: In addition, you can assign the SQL statement to a variable and then call .DataFrame() on it to get a Pandas DataFrame. However, it will often be more efficient to express your computation directly in SQL. 
For this homework, we will be grading your SQL expressions so be sure to do all computation in SQL (unless otherwise requested). End of explanation # complete the query string query_q1a = SELECT ... FROM ... WHERE ... q1a = %sql $query_q1a q1a _ = ok.grade('q01a') _ = ok.backup() Explanation: Question 1a We are interested in finding the PACs that donated large sums to the candidates. To begin to answer this question, we will look at the inter_comm table. We'll find all the transactions that exceed \$5,000. However, if there are a lot of transactions like that, it might not be useful to list them all. So before actually finding the transactions, find out how many such transactions there are. Use only SQL to compute the answer. (It should be a table with a single column called count and a single entry, the number of transactions.) We will be grading the query string query_q1a. You may modify our template but the result should contain the same information with the same names. End of explanation # complete the query string query_q1b = SELECT ... AS donor_cmte_id ... AS recipient_name ... AS transaction_amt FROM ... WHERE ... ORDER BY ... q1b = %sql $query_q1b q1b _ = ok.grade('q01b') _ = ok.backup() Explanation: Question 1b Having seen that there aren't too many transactions that exceed \$5,000, let's find them all. Using only SQL, construct a table containing the recipient committee's name, the ID of the donor committee, and the transaction amount, for transactions that exceed $5,000 dollars. Sort the transactions in decreasing order by amount. We will be grading the query string query_q1b. You may modify our template but the result should contain the same information with the same names. End of explanation # complete the query string query_q1c = ''' SELECT ... AS donor_cmte_id ... AS recipient_name ... AS total_transaction_amt FROM inter_comm GROUP BY ... ORDER BY ... DESC LIMIT ... ''' q1c = %sql $query_q1c q1c ok.grade('q01c') _ = ok.backup() Explanation: Question 1c Of course, individual transactions could be misleading. A more interesting question is: How much did each group give in total to each committee? Find the total transaction amounts after grouping by the recipient committee's name and the ID of the donor committee. This time, just use LIMIT 20 to limit your results to the top 20 total donations. We will be grading the query string query_q1c. You may modify our template but the result should contain the same information with the same names. End of explanation # complete the query string query_q1d = SELECT ... AS donor_cmte_id, ... AS recipient_id, ... AS total_transaction_amt FROM ... GROUP BY ... ORDER BY ... DESC LIMIT 20 q1d = %sql $query_q1d q1d _ = ok.grade('q01d') _ = ok.backup() Explanation: If you peruse the results of your last query, you should notice that some names are listed twice with slightly different spellings. Perhaps this causes some contributions to be split extraneously. Question 1d Find a field that uniquely identifies recipient committees and repeat your analysis from the previous question using that new identifier. We will be grading the query string query_q1d. You may modify our template but the result should contain the same information with the same names. End of explanation # complete the query string query_q1e = ''' SELECT ... AS donor_name, ... AS recipient_name, ... AS total_transaction_amt FROM ... WHERE ... GROUP BY ... ORDER BY ... 
DESC LIMIT 20 ''' q1e = %sql $query_q1e q1e _ = ok.grade('q01e') _ = ok.backup() Explanation: Question 1e Of course, your results are probably not very informative. Let's join these results with the comm table (perhaps twice?) to get the names of the committees involved in these transactions. As before, limit your results to the top 20 by total donation. We will be grading the query string query_q1e. You may modify our template but the result should contain the same information with the same names. Remember that the name column of inter_comm is not consistent. We found this out in 1(c) where we found that the same committees were named slightly differently. Because of this, you cannot use the name column of inter_comm to get the names of the committees. End of explanation # complete the query string query_q2 = ''' SELECT ... AS state, ... AS count FROM ... ... ''' q2 = %sql $query_q2 q2 _ = ok.grade('q02') _ = ok.backup() Explanation: Question 2 What is the distribution of committee by state? Write a SQL query which computes for each state the number of committees in the comm table that are registered in that state. Display the results in descending order by count. We will be grading the query string query_q2. You may modify our template but the result should contain the same information with the same names. End of explanation query_q3 = ''' WITH pac_donations(cmte_id, pac_donations) AS ( ... ) SELECT ... AS cmte_name, ... AS pac_donations FROM ... ORDER BY pac_donations, cmte_nm LIMIT 20 ''' q3 = %sql $query_q3 q3 _ = ok.grade('q03') _ = ok.backup() Explanation: Question 3 Political Action Committees are major sources funding for campaigns. They typically represent business, labor, or ideological interests and influence campaigns through their funding. Because of this, we'd like to know how much money each committee received from PACs. For each committee, list the total amount of donations they got from Political Action Committees. If they got no such donations, the total should be listed as null. Order the result by pac_donations, then cmte_nm. We will be grading you on the query string query_q3. You may modify our template but the result should contain the same information with the same names. End of explanation query_q4 = ''' SELECT ... AS from_cmte_name, ... AS to_cmte_name FROM ... WHERE ... GROUP BY ... ORDER ... DESC LIMIT 10 ''' q4 = %sql $query_q4 q4 _ = ok.grade('q04') _ = ok.backup() Explanation: Question 4 Committees can also contribute to other committees. When does this happen? Perhaps looking at the data can help us figure it out. Find the names of the top 10 (directed) committee pairs that are affiliated with the Republican Party, who have the highest number of intercommittee transactions. By directed, we mean that a transaction where C1 donates to C2 is not the same as one where C2 donates to C1. We will be grading you on the query string query_q4. You may modify our template but the result should contain the same information with the same names. End of explanation query_q5 = ''' SELECT DISTINCT ... AS cand_1, ... AS cand_2 FROM ... WHERE ... ... ''' q5 = %sql $query_q5 q5 _ = ok.grade('q05') _ = ok.backup() Explanation: Question 5 Some committees received donations from a common contributor. Perhaps they were ideologically similar. Find the names of distinct candidate pairs that share a common committee contributor from Florida. If you list a pair ("Washington", "Lincoln") you should also list ("Lincoln, Washington"). Save the result in q5. 
Hint: In SQL, the "not equals" operator is &lt;&gt; (it's != in Python). We will be grading you on the query string query_q5. You may modify our template but the result should contain the same information with the same names. End of explanation # Fill in the query query_q7 = SELECT comm.cmte_nm AS cmte_nm, sum(indiv_sample.transaction_amt) AS total_transaction_amt FROM ... WHERE ... GROUP BY ... HAVING ... # Do not change anything below this line res = %sql $query_q7 q7 = res.DataFrame().set_index("cmte_nm") q7 # q7 will be graded _ = ok.grade('q07') _ = ok.backup() Explanation: Part 2: Hypothesis Testing and Bootstrap in SQL In this part, we're going to perform a hypothesis test using SQL! This article describes a statement by Hillary Clinton where where she claims that the majority of her campaign was funded by small donors. The article argues that her statement is false, so we ask a slightly different question: Is there a difference in the proportion of money contributed by small donors between Hillary Clinton's and Bernie Sanders' campaigns? For these questions, we define small donors as individuals that donated $200 or less to a campaign. For review, we suggest looking over this chapter on Hypothesis Testing from the Data 8 textbook: https://www.inferentialthinking.com/chapters/10/testing-hypotheses.html Question 6 Before we begin, please think about and answer the following questions. For each question, state "Yes" or "No", followed by a one-sentence explanation. (a) If we were working with the entire FEC dataset instead of a sample, would we still conduct a hypothesis test? Why or why not? (b) If we were working with the entire FEC dataset instead of a sample, would we still conduct bootstrap resampling? Why or why not? (c) Let's suppose we take our sample and compute the proportion of money contributed by small donors to Hillary and Bernie's campaign. We find that the difference is 0.0 — they received the exact same proportion of small donations. Would we still need to conduct a hypothesis test? Why or why not? (d) Let's suppose we take our sample and compute the proportion of money contributed by small donors to Hillary and Bernie's campaign. We find that the difference is 0.3. Would we still need to conduct a hypothesis test? Why or why not? (a) Enter in your answer for (a) here, replacing this sentence. (b) Enter in your answer for (b) here, replacing this sentence. (c) Enter in your answer for (c) here, replacing this sentence. (d) Enter in your answer for (d) here, replacing this sentence. Question 7 We've taken a sample of around 2700 rows of the original FEC data for individual contributions that only include contributions to Clinton and Sanders. This sample is stored in the table indiv_sample. The individual contributions of donors are linked to committees, not candidates directly. Hillary's primary committee was called HILLARY FOR AMERICA, and Bernie's was BERNIE 2016. Fill in the SQL query below to compute the total contributions for each candidate's committee. We will be grading you on the query string query_q7. You may modify our template but the result should contain the same information with the same names. End of explanation # Fill in the query query_q8 = ''' SELECT comm.cmte_id AS cmte_id, comm.cmte_nm AS cmte_name, SUM (...) / SUM(...) AS prop_funds FROM ... WHERE ... GROUP BY ... HAVING ... 
''' # Do not change anything below this line res = %sql $query_q8 small_donor_funds_prop = res.DataFrame() small_donor_funds_prop _ = ok.grade('q08') _ = ok.backup() Explanation: Question 8 We want to know what proportion of this money came from small donors — individuals who donated \$200 or less. For example, if Hillary raised \$1000, and \$300 of that came from small donors, her proportion of small donors would be 0.3. Compute this proportion for each candidate by filling in the SQL query below. The resulting table should have three columns: cmte_id, which contains Hillary's and Bernie's committee IDs; cmte_name, which contains Hillary's and Bernie's committee names; and prop_funds, which contains the proportion of funds contributed by small donors. You may not create a dataframe for this problem. By keeping the calculations in SQL, this query will also work on the original dataset of individual contributions (~ 3GB). Hint: Try using Postgres' CASE statement to filter out transactions under $200. Hint: Remember that you can append ::float to a column name to convert its values to float. You'll have to do this to perform division correctly. We will be grading you on the query string query_q8. You may modify our template but the result should contain the same information with the same names. End of explanation # Finish the SQL query to render the histogram of individual contributions # for 'HILLARY FOR AMERICA' query_q9a = SELECT transaction_amt FROM ... WHERE ... # Do not change anything below this line res = %sql $query_q9a hillary_contributions = res.DataFrame() print(hillary_contributions.head()) # Make the Plot sns.distplot(hillary_contributions) plt.title('Distribution of Contribution Amounts to Hillary') plt.xlim((-50, 3000)) plt.ylim((0, 0.02)) # Finish the SQL query to render the histogram of individual contributions # for 'BERNIE 2016' query_q9b = SELECT transaction_amt FROM ... WHERE ... # Do not change anything below this line res = %sql $query_q9b bernie_contributions = res.DataFrame() print(bernie_contributions.head()) sns.distplot(bernie_contributions) plt.title('Distribution of Contribution Amounts to Bernie') plt.xlim((-50, 3000)) plt.ylim((0, 0.02)) _ = ok.grade('q09') _ = ok.backup() Explanation: Question 9 Let's now do a bit of EDA. Fill in the SQL statements below to make histograms of the transaction amounts for both Hillary and Bernie. Note that we do take your entire result and put it into a dataframe. This is not scalable. If indiv_sample were large, your computer would run out of memory trying to store it in a dataframe. The better way to compute the histogram would be to use SQL to generate bins and count the number of contributions in each bin using the built-in width_bucket function. End of explanation %%sql DROP VIEW IF EXISTS hillary CASCADE; DROP VIEW IF EXISTS bernie CASCADE; CREATE VIEW hillary AS SELECT row_number() over () AS row_id, indiv_sample.* FROM indiv_sample, comm WHERE indiv_sample.cmte_id = comm.cmte_id AND comm.cmte_nm = 'HILLARY FOR AMERICA'; CREATE VIEW bernie AS SELECT row_number() over () AS row_id, indiv_sample.* FROM indiv_sample, comm WHERE indiv_sample.cmte_id = comm.cmte_id AND comm.cmte_nm = 'BERNIE 2016'; SELECT * FROM hillary LIMIT 5 Explanation: Question 10 Looks like there is a difference. Let's see if it's statistically significant. State appropriate null and alternative hypotheses for this problem. Fill in your answer here. 
Constructing a Bootstrap CI We want to create a bootstrap confidence interval of the proportion of funds contributed to Hillary Clinton by small donors. To do this in SQL, we need to number the rows we want to bootstrap. The following cell creates a view called hillary. Views are like tables.
Constructing a Bootstrap CI We want to create a bootstrap confidence interval of the proportion of funds contributed to Hillary Clinton by small donors. To do this in SQL, we need to number the rows we want to bootstrap. The following cell creates a view called hillary. Views are like tables. However, instead of storing the rows in the database, Postgres will recompute the values in the view each time you query it. It adds a row_id column to each row in indiv_sample corresponding to a contribution to Hillary. Note that we use your hillary_cmte_id variable by including $hillary_cmte_id in the SQL. We'll do the same for Bernie, creating a view called bernie. End of explanation n_hillary_rows = 1524 n_trials = 500 seed = 0.42 query_q11 = CREATE VIEW hillary_design AS SELECT ... AS trial_id, ... AS row_id FROM ... # Do not change anything below this line # Fill in the $ variables set in the above string import string query_q11 = string.Template(query_q11).substitute(locals()) %sql drop view if exists hillary_design cascade %sql SET SEED TO $seed %sql $query_q11 %sql select * from hillary_design limit 5 _ = ok.grade('q11') _ = ok.backup() Explanation: Question 11 Let's contruct a view containing the rows we want to sample for each bootstrap trial. For example, if we want to create 100 bootstrap samples of 3 contributions to Hillary, we want something that looks like: trial_id | row_id ======== | ====== 1 | 1002 1 | 208 1 | 1 2 | 1524 2 | 1410 2 | 1023 3 | 423 3 | 68 3 | 925 ... | ... 100 | 10 This will let us later construct a join on the hillary view that computes the bootstrap sample for each trial by sampling with replacement. Create a view called hillary_design that contains two columns: trial_id and row_id. It should contain the IDs corresponding to 500 samples of the entire hillary view. The hillary view contains 1524 rows, so the hillary_design view should have a total of 500 * 1524 = 762000 rows. Hint: Recall how we generated a matrix of random numbers in class. Start with that, then start tweaking it until you get the view you want. Our solution uses the Postgres functions generate_series, floor, and random. End of explanation query_q12 = ''' CREATE VIEW hillary_trials as SELECT ... AS trial_id, ... AS small_donor_sum, ... AS total FROM ... WHERE ... GROUP BY ... ''' # Do not change anything below this line %sql drop view if exists hillary_trials cascade %sql SET SEED TO $seed %sql $query_q12 %sql select * from hillary_trials limit 5 _ = ok.grade('q12') _ = ok.backup() Explanation: Question 12 Construct a view called hillary_trials that uses the hillary and hillary_design views to compute the total amount contributed by small donors for each trial as well as the overall amount. It should have three columns: trial_id: The number of the trial, from 1 to 500 small_donor_sum: The total contributions from small donors in the trial total: The total contributions of all donations in the trial Hint: Our solution uses the CASE WHEN statement inside of a SUM() function call to compute the small_donor_sum. End of explanation query_q13 = ''' CREATE VIEW hillary_props as SELECT trial_id, ... 
AS small_donor_prop FROM hillary_trials ''' # Do not change anything below this line %sql drop view if exists hillary_props cascade %sql SET SEED TO $seed %sql $query_q13 %sql select * from hillary_props limit 5 _ = ok.grade('q13') _ = ok.backup() Explanation: Question 13 Now, create a view called hillary_props that contains two columns: trial_id: The number of the trial, from 1 to 500 small_donor_prop: The proportion contributed by small donors for each trial Hint: Remember that you can append ::float to a column name to convert its values to float. You'll have to do this to perform division correctly. End of explanation n_bernie_rows = 1173 n_trials = 500 create_bernie_design = CREATE VIEW bernie_design AS SELECT ... AS trial_id, ... AS row_id FROM ... create_bernie_trials = ''' CREATE VIEW bernie_trials as SELECT ... AS trial_id, ... AS small_donor_sum, ... AS total FROM ... WHERE ... GROUP BY ... ''' create_bernie_props = ''' CREATE VIEW bernie_props as SELECT trial_id, ... AS small_donor_prop FROM bernie_trials ''' # Do not change anything below this line # Fill in the $ variables set in the above string import string create_bernie_design = (string.Template(create_bernie_design) .substitute(locals())) %sql drop view if exists bernie_design cascade %sql $create_bernie_design %sql drop view if exists bernie_trials cascade %sql $create_bernie_trials %sql drop view if exists bernie_props %sql $create_bernie_props %sql SET SEED TO $seed %sql select * from bernie_props limit 5 _ = ok.grade('q14') _ = ok.backup() Explanation: Question 14 Now, repeat the process to bootstrap Bernie's proportion of funds raised by small donors. You should be able to mostly copy-paste your code for Hillary's bootstrap CI. End of explanation res = %sql select * from hillary_props hillary_trials_df = res.DataFrame() res = %sql select * from bernie_props bernie_trials_df = res.DataFrame() ax = plt.subplot(1,2,1) sns.distplot(hillary_trials_df['small_donor_prop'], ax=ax) plt.title('Hillary Bootstrap Prop') plt.xlim(0.1, 0.9) plt.ylim(0, 25) ax = plt.subplot(1,2,2) sns.distplot(bernie_trials_df['small_donor_prop'], ax=ax) plt.title('Bernie Bootstrap Prop') plt.xlim(0.1, 0.9) plt.ylim(0, 25) Explanation: Plotting the sample distribution Run the following cell to make a plot of the distribution of proportions for both Hillary and Bernie. Again, this would not be scalable if we took many bootstrap samples. However, 500 floats is reasonable to fit in memory. End of explanation _ = ok.grade_all() Explanation: Computing the Confidence Interval Run the following cell to compute confidence intervals based on your hillary_props and bernie_props views. Think about what the intervals mean. Question 15 Based on your confidence intervals, should we reject the null? Are there any other factors that should be taken into consideration when making this conclusion? Write your answer here, replacing this text. Congrats! You finished the homework. Submitting your assignment First, run the next cell to run all the tests at once. End of explanation # Now, we'll submit to okpy _ = ok.submit() Explanation: Then, we'll submit the assignment to OkPy so that the staff will know to grade it. You can submit as many times as you want, and you can choose which submission you want us to grade by going to https://okpy.org/cal/data100/sp17/. After you've done that, make sure you've pushed your changes to Github as well! End of explanation
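The cell that actually computes the confidence intervals is referenced above but not shown; here is a minimal sketch of one way to do it, using Postgres' ordered-set aggregate percentile_cont over the views from Questions 13–14. The 2.5/97.5 percentile choice (an approximate 95% interval) is an assumption here, not part of the assignment text:

query_ci = '''
SELECT percentile_cont(0.025) WITHIN GROUP (ORDER BY small_donor_prop) AS lo,
       percentile_cont(0.975) WITHIN GROUP (ORDER BY small_donor_prop) AS hi
FROM hillary_props
'''
%sql $query_ci

Running the same query against bernie_props gives Bernie's interval; non-overlapping intervals would be informal evidence against the null.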
Given the following text description, write Python code to implement the functionality described below step by step Description: Example plot for LFPy Step1: Fetch Mainen&Sejnowski 1996 model files Step2: Main script, set parameters and create cell, synapse and electrode objects Step3: Plot simulation output
Python Code: import LFPy import numpy as np import os import sys from urllib.request import urlopen import ssl import zipfile import matplotlib.pyplot as plt from matplotlib.collections import PolyCollection from os.path import join Explanation: Example plot for LFPy: Single-synapse contribution to the LFP Copyright (C) 2017 Computational Neuroscience Group, NMBU. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. End of explanation if not os.path.isfile(join('cells', 'cells', 'j4a.hoc')): #get the model files: u = urlopen('http://senselab.med.yale.edu/ModelDB/eavBinDown.asp?o=2488&a=23&mime=application/zip', context=ssl._create_unverified_context()) localFile = open('patdemo.zip', 'wb') localFile.write(u.read()) localFile.close() #unzip: myzip = zipfile.ZipFile('patdemo.zip', 'r') myzip.extractall('.') myzip.close() Explanation: Fetch Mainen&Sejnowski 1996 model files: End of explanation # Define cell parameters cell_parameters = { 'morphology' : join('cells', 'cells', 'j4a.hoc'), # from Mainen & Sejnowski, J Comput Neurosci, 1996 'cm' : 1.0, # membrane capacitance 'Ra' : 150., # axial resistance 'v_init' : -65., # initial crossmembrane potential 'passive' : True, # turn on NEURONs passive mechanism for all sections 'passive_parameters' : {'g_pas' : 1./30000, 'e_pas' : -65}, 'nsegs_method' : 'lambda_f', # spatial discretization method 'lambda_f' : 100., # frequency where length constants are computed 'dt' : 2.**-3, # simulation time step size 'tstart' : 0., # start time of simulation, recorders start at t=0 'tstop' : 100., # stop simulation at 100 ms. } # Create cell cell = LFPy.Cell(**cell_parameters) # Align cell cell.set_rotation(x=4.98919, y=-4.33261, z=0.) # Define synapse parameters synapse_parameters = { 'idx' : cell.get_closest_idx(x=0., y=0., z=900.), 'e' : 0., # reversal potential 'syntype' : 'ExpSyn', # synapse type 'tau' : 10., # syn. time constant 'weight' : .001, # syn. weight 'record_current' : True, } # Create synapse and set time of synaptic input synapse = LFPy.Synapse(cell, **synapse_parameters) synapse.set_spike_times(np.array([20.])) # Run simulation, record transmembrane currents cell.simulate(rec_imem=True) # Create a grid of measurement locations, in (mum) X, Z = np.mgrid[-500:501:50, -400:1201:50] Y = np.zeros(X.shape) # Define electrode parameters grid_electrode_parameters = { 'sigma' : 0.3, # extracellular conductivity 'x' : X.flatten(), # electrode requires 1d vector of positions 'y' : Y.flatten(), 'z' : Z.flatten() } # Create electrode objects grid_electrode = LFPy.RecExtElectrode(cell, **grid_electrode_parameters) # Calculate LFPs as product between linear transform and currents. 
# Create reference to data on class object grid_electrode.LFP = grid_electrode.get_transformation_matrix() @ cell.imem Explanation: Main script, set parameters and create cell, synapse and electrode objects: End of explanation fig = plt.figure(dpi=160, figsize=(12, 6)) ax = fig.add_axes([.1,.1,.25,.8], frameon=False) ax.axis('off') ax1 = fig.add_axes([.35,.1,.6,.8], aspect='auto', frameon=False) ax1.axis('off') from matplotlib.textpath import TextPath from matplotlib.font_manager import FontProperties from matplotlib.patches import PathPatch fp = FontProperties(family="Courier New", weight='roman') path = TextPath((0, 0), 'LFPy', prop=fp, size=50, ) ax1.add_patch(PathPatch(path, facecolor='C1', clip_on=False, linewidth=3)) ax1.axis([0, 120, -15, 35]) LFP = grid_electrode.LFP[:, cell.tvec==30].reshape(X.shape) linthresh = 1E-5 vmax = 2E-4 C0 = plt.cm.colors.hex2color('k') C1 = plt.cm.colors.hex2color('C4') C2 = plt.cm.colors.hex2color('C1') C3 = plt.cm.colors.hex2color('C3') C4 = plt.cm.colors.hex2color('k') cmap = plt.cm.colors.LinearSegmentedColormap.from_list('C0C1C2', colors=[C0, C1, C2, C3, C4], N=256, gamma=1) im = ax.contourf(X, Z, LFP, norm=plt.cm.colors.SymLogNorm(linthresh=linthresh, linscale=1, vmin=-vmax, vmax=vmax), levels=101, cmap=cmap, zorder=-2) #plot morphology zips = [] for x, z in cell.get_idx_polygons(): zips.append(list(zip(x, z))) polycol = PolyCollection(zips, edgecolors='k', facecolors='k', lw=0.2) ax.add_collection(polycol) ax.plot(synapse.x, synapse.z, 'o', ms=10, markeredgecolor='k', markerfacecolor='C1') fig.savefig('LFPy-logo.png', bbox_inches='tight') fig.savefig('LFPy-logo.svg', bbox_inches='tight') Explanation: Plot simulation output: End of explanation
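As a small follow-up sketch (not part of the original example), the same grid_electrode.LFP array can be probed numerically — for instance to find the electrode position with the largest signal and plot its full time course. The unit conversion assumes LFPy's usual mV convention for extracellular potentials:

# index of the contact with the largest absolute LFP over the whole simulation
imax = np.unravel_index(np.abs(grid_electrode.LFP).argmax(), grid_electrode.LFP.shape)[0]

plt.figure()
plt.plot(cell.tvec, grid_electrode.LFP[imax] * 1e3)  # mV -> uV, assuming LFPy's mV convention
plt.xlabel('t (ms)')
plt.ylabel('extracellular potential (uV)')
plt.title('LFP at x={:.0f}, z={:.0f} um'.format(X.flatten()[imax], Z.flatten()[imax]))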
Given the following text description, write Python code to implement the functionality described below step by step Description: This notebook provides an interactive overview of some of the ideas developed in a paper entitled "Error Statistics", by Mayo and Spanos. Goals For a sample of data Step1: All hypotheses discussed herein will be expressed with Gaussian / normal distributions. Let's look at the properties of this distribution. Start by plotting it. We'll set the mean to 0 and the width to 1...the standard normal distribution. Step2: Now look at the cumulative distribution function of the standard normal, which integrates from negative infinity up to the function argument, on a unit-normalized distribution. Step3: The function also accepts a list. Step4: Now let's be more explicit about the parameters of the distribution. Step5: In addition to exploring properties of the exact function, we can sample points from it. Step6: We can also approximate the exact distribution by sampling a large number of points from it. Step7: Data samples If we have a sample of points, we can summarize them in a model-nonspecific way by calculating the mean. Here, we draw them from a Gaussian for convenience. Step8: Now let's generate a large number of data samples and plot the corresponding distribution of sample means. Step9: Note that by increasing the number of data points, the variation on the mean decreases. Notation Step10: Let's numerically determine the sampling distribution under the hypothesis Step11: With this sampling distribution (which can be calculated exactly), we know exactly how likely a particular result $d(X_0)$ is. We also know how likely it is to observe a result that is even less probable than $d(X_0)$, $P(d(X) > d(X_0); \mu)$. Rejecting the null This probability is the famous p-value. When the p-value for a particular experimental outcome is less than some pre-determined amount (usually called $\alpha$), we can Step12: Now, imagine that we observe $\bar X_0 = 0.4$. The probability of $\bar X > 0.4$ is about $2\%$ under $H_0$, so let's say we've rejected $H_0$. Question: what regions of $\mu$ (defined as $\mu > \mu_1$) have been severely tested? $SEV(\mu>\mu_1) = P(d(X)<d(X_0);\ \lnot(\mu>\mu_1)) = P(d(X)<d(X_0);\ \mu \le \mu_1) \rightarrow P(d(X)<d(X_0);\ \mu = \mu_1)$ So we only need to calculate the probability of a result less anomalous than $d(X_0)$, given $\mu_1$. Step13: Calculate the severity of an outcome that is rather unlike (is lower than) the lower bound of a range of alternate hypotheses ($\mu > \mu_1$). Step14: Calculate the severity for a set of observations.
Python Code: from scipy.stats import norm    # properties of the distribution
from numpy.random import normal  # samples from the distribution
import numpy as np
import scipy
from matplotlib import pyplot as plt
%matplotlib inline
Explanation: This notebook provides an interactive overview of some of the ideas developed in a paper entitled "Error Statistics", by Mayo and Spanos. Goals For a sample of data: (1) quantify the extent to which the sample is consistent with coming from a particular, hypothetical data source (2) if inconsistent, determine which other, particular data sources the sample is consistent with. Introduction Two notions of probability: Frequentist: probabilities represent relative frequency of occurrence. e.g. $P(X;\mu)$ speaks to the probability of outcome $X$, given $\mu$. Bayesian: probabilities represent degrees of belief e.g. $P(\mu;X)$ speaks to the probability of $\mu$ being true, given $X$. Both are useful. Bayesian analyses: * incorporate prior knowledge * produce posterior probabilities Frequentist analyses: * allow for lack of prior knowledge * do not produce posterior probabilities Let's explore the use of frequentist statistics. $P(X;\mu)$ describes a set of probabilities for observed data $X$ given an assumption about the world, parameterized by $\mu$. Specifically, let's look at the statistics of errors of inference. Exploration End of explanation
x = np.arange(-10, 10, 0.001)
plt.plot(x, norm.pdf(x, 0, 1))  # final arguments are mean and width
Explanation: All hypotheses discussed herein will be expressed with Gaussian / normal distributions. Let's look at the properties of this distribution. Start by plotting it. We'll set the mean to 0 and the width to 1...the standard normal distribution. End of explanation
norm.cdf(0)
Explanation: Now look at the cumulative distribution function of the standard normal, which integrates from negative infinity up to the function argument, on a unit-normalized distribution. End of explanation
norm.cdf([-1., 0, 1])
Explanation: The function also accepts a list. End of explanation
mu = 0
sigma = 1
n = norm(loc=mu, scale=sigma)  # a "frozen" distribution with fixed parameters
n.cdf([-1., 0, 1])

sigma = 2
mu = 0
n = norm(loc=mu, scale=sigma)
n.cdf([-1., 0, 1])
Explanation: Now let's be more explicit about the parameters of the distribution. End of explanation
[normal() for _ in range(5)]
Explanation: In addition to exploring properties of the exact function, we can sample points from it. End of explanation
size = 1000000
num_bins = 300
plt.hist([normal() for _ in range(size)], num_bins)
plt.xlim([-10, 10])
Explanation: We can also approximate the exact distribution by sampling a large number of points from it. End of explanation
n = 10
my_sample = [normal() for _ in range(n)]
my_sample_mean = np.mean(my_sample)
print(my_sample_mean)
Explanation: Data samples If we have a sample of points, we can summarize them in a model-nonspecific way by calculating the mean. Here, we draw them from a Gaussian for convenience.
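(A side note on what to expect: for $n$ independent draws from a normal distribution with standard deviation $\sigma$, the sample mean is itself normally distributed with standard deviation $\sigma/\sqrt{n}$ — the shrinking spread that the next few cells demonstrate empirically. A two-line check of that formula:

n = 10
sample_means = [np.mean([normal() for _ in range(n)]) for _ in range(1000)]
print(np.std(sample_means), 1. / np.sqrt(n))  # the two numbers should be close

)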
End of explanation
n = 10
means_10 = []
for _ in range(10000):
    my_sample = [normal() for _ in range(n)]
    my_sample_mean = np.mean(my_sample)
    means_10.append(my_sample_mean)

plt.hist(means_10, 100)
plt.xlim([-1.5, 1.5])
plt.xlabel("P(mean(X))")
plt.show()

n = 100
means_100 = []
for _ in range(10000):
    my_sample = [normal() for _ in range(n)]
    my_sample_mean = np.mean(my_sample)
    means_100.append(my_sample_mean)

plt.hist(means_100, 100)
plt.xlim([-1.5, 1.5])
plt.xlabel("P(mean(X))")
plt.show()

# show 1/sqrt(n) scaling of deviation
n_s = []
std_100 = []
for i in range(1, 1000, 50):
    means_100 = []
    for _ in range(5000):
        my_sample = [normal() for _ in range(i)]
        my_sample_mean = np.mean(my_sample)
        means_100.append(my_sample_mean)
    my_sample_std = np.std(means_100)
    std_100.append(1. / (my_sample_std * my_sample_std))
    n_s.append(i)

plt.scatter(n_s, std_100)
plt.xlim([0, 1000])
plt.ylabel("std(mean(X;sample))")
plt.xlabel("sample")
plt.show()
Explanation: Now let's generate a large number of data samples and plot the corresponding distribution of sample means. End of explanation
def d(X=[0], mu=0, sigma=1):
    X_bar = np.mean(X)
    return (X_bar - mu) / sigma * np.sqrt(len(X))

n = 10
my_sample = [normal() for _ in range(n)]
d(my_sample)
Explanation: Note that by increasing the number of data points, the variation on the mean decreases. Notation: the variable containing all possible n-sized sets of samples is called $X$. A specific $X$, like the one actually observed in an experiment, is called $X_0$. What can we say about the data? are the data consistent with having been sampled from a certain distribution? if not, what distribution are they consistent with? Hypotheses In our tutorial, a hypothesis is expressed as a distribution from which the data may have been drawn. Our goal is to provide a procedure for rejection of the null hypothesis, and, in the case of rejecting the null, provide warranted inference of one or more alternate hypotheses. Simplification: the hypothesis space is defined as all normal distributions with variable mean $\mu$ and fixed variance. Generalizing this assumption changes almost nothing. Corollary: the hypothesis space is one-dimensional, and the logical not of a hypothesis is simple to comprehend. A Test Statistic To relate observed data to hypotheses, we need to define a test statistic, which summarizes a particular experimental result. This statistic is also a function of the hypothesis, and will have different sampling distributions under different hypotheses. $d(X;H_\mu) = (\bar X - \mu)/(\sigma/\sqrt n)$, where $\bar X$ is the mean of $X$. For Gaussian hypotheses, $d$ is distributed as a unit normal.
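Since $d$ is unit-normal under $H_\mu$, tail probabilities come directly from the exact CDF rather than from simulation — e.g. an upper-tail probability for an observed sample is simply:

p_upper = 1 - norm.cdf(d(my_sample))  # P(d(X) > d(X_0); mu = 0, sigma = 1)

(the simulation in the next cell is just a numerical confirmation of this.)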
End of explanation
size = 100000
n = 10
d_sample = []
for _ in range(size):
    my_sample = [normal() for _ in range(n)]  # get a sample of size n
    d_sample.append(d(my_sample))             # add test statistic for this sample to the list

plt.hist(d_sample, 100)
plt.xlabel("P(d(X);H0)")
Explanation: Let's numerically determine the sampling distribution under the hypothesis: $H_0$: $\mu = 0, \sigma = 1$ End of explanation
# look at the distributions of sample means for two hypotheses
def make_histograms(mu0=0, mu1=1, num_samples=10000, n=100, sigma=1):
    # d0_sample = []
    # d1_sample = []
    m0_sample = []
    m1_sample = []
    for _ in range(num_samples):
        H0_sample = [normal(loc=mu0, scale=sigma) for _ in range(n)]  # get a sample of size n from H0
        H1_sample = [normal(loc=mu1, scale=sigma) for _ in range(n)]  # get a sample of size n from H1
        m0_sample.append(np.mean(H0_sample))  # add mean for this sample to the m0 list
        m1_sample.append(np.mean(H1_sample))  # add mean for this sample to the m1 list
        # remember that the test statistic is unit-normal-distributed for Gaussian hypotheses,
        # so the two d-statistic distributions would be identical:
        # d0_sample.append( d(H0_sample, mu0, sigma) )  # add test statistic for this sample to the d0 list
        # d1_sample.append( d(H1_sample, mu1, sigma) )  # add test statistic for this sample to the d1 list
    plt.hist(m0_sample, 100, label="H0")
    plt.hist(m1_sample, 100, label="H1")
    plt.xlabel(r"$\bar{X}$")
    plt.legend()

num_samples = 10000
n = 100
mu0 = 0
mu1 = 1
sigma = 2
make_histograms(mu0=mu0, mu1=mu1, num_samples=num_samples, n=n, sigma=sigma)
Explanation: With this sampling distribution (which can be calculated exactly), we know exactly how likely a particular result $d(X_0)$ is. We also know how likely it is to observe a result that is even less probable than $d(X_0)$, $P(d(X) > d(X_0); \mu)$. Rejecting the null This probability is the famous p-value. When the p-value for a particular experimental outcome is less than some pre-determined amount (usually called $\alpha$), we can: infer that $H_0$ is falsified at level $\alpha$ take the action that has been specified for this situation infer that $X_0$ indicates something about an alternate hypothesis. If $H_0$ corresponds to $\mu = \mu_0$, then we infer that $\mu > \mu_0 + \delta$. If $H_0$ is rejected, we can now also begin to speak about statistical properties of $H_1$ where $H_1 \ne H_0$. Neyman-Pearson digression The traditional frequentist procedure (due to Neyman and Pearson) is to construct a test that fixes the probability of rejecting $H_0$ when it's true, and maximizes the power: the probability of statistical similarity with $H_1$ when it is true. In other words, for a fixed probability of rejecting $H_0$ when it's true, maximize the probability of accepting $H_1$ when it's true. The N-P construction is fixed before $X_0$ is observed. We wish to extend this and, when $H_0$ is rejected, infer regions of alternate parameter space that are severely tested by the outcome $X_0$. Inference of an alternate hypothesis When the null hypothesis is rejected, we are interested in ranges of alternate hypotheses that, if not true, are highly likely to have produced a test statistic less significant than $d(X_0)$. We say these ranges of parameter space, which can be thought of as composite hypotheses, have been severely tested. We call this level of testing the severity; it is a function of the observed data ($X_0$), the range of alternate hypotheses ($H$), and the test construction itself. 
This is the point of the tutorial: we are warranted to infer ranges of hypothesis space when that range has been severely tested.
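(In the Gaussian setting used here, this severity has a closed form that the next code cell implements directly: $SEV(\mu > \mu_1; X_0) = P(d(X) \le d(X_0);\ \mu = \mu_1) = \Phi\!\left(\frac{\bar X_0 - \mu_1}{\sigma/\sqrt{n}}\right)$, where $\Phi$ is the standard normal CDF.)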
This is the point of the tutorial: we are warranted to infer ranges of hypothesis space when that range has been severely tested.
End of explanation
# severity for the interval: mu > mu_1
# note that we calculate the probability in terms of the _lower bound_ of the interval,
# since it will provide the _lowest_ severity
from scipy.stats import norm  # made explicit so this cell is self-contained

def severity(mu_1=0, x=(0,), sigma=1, n=100):
    # find the mean of the observed data
    x_bar = np.mean(x)
    # calculate the test statistic w.r.t. mu_1
    dx = (x_bar - mu_1) / sigma * np.sqrt(n)
    # the test statistic is distributed as a unit normal
    rv = norm()
    return rv.cdf(dx)

Explanation: Now, imagine that we observe $\bar X_0 = 0.4$. The probability of $\bar X > 0.4$ is less than $2\%$ under $H_0$, so let's say we've rejected $H_0$. Question: what regions of $\mu$ (defined as $\mu > \mu_1$) have been severely tested?

$$SEV(\mu>\mu_1) = P(d(X)<d(X_0);\, !(\mu>\mu_1)) = P(d(X)<d(X_0);\, \mu \le \mu_1) \rightarrow P(d(X)<d(X_0);\, \mu = \mu_1)$$

So we only need to calculate the probability of a result less anomalous than $d(X_0)$, given $\mu_1$.
End of explanation
sigma = 2
mu_1 = 0.2
x = [0.4]
severity(mu_1=mu_1, x=x, sigma=sigma)

num_samples = 10000
n = 100
mu0 = 0
mu1 = 0.2
sigma = 2
make_histograms(mu0=mu0, mu1=mu1, num_samples=num_samples, n=n, sigma=sigma)

Explanation: Calculate the severity of an outcome that is rather unlike (is lower than) the lower bound of a range of alternate hypotheses ($\mu > \mu_1$).
End of explanation
x_bar_values = [[0.4], [0.6], [1.]]
color_indices = ["b", "k", "r"]
for x, color_idx in zip(x_bar_values, color_indices):
    mu_values = np.linspace(0, 1, 100)  # scipy.linspace is deprecated; use numpy
    sev = [severity(mu_1=mu_1, x=x, sigma=sigma) for mu_1 in mu_values]
    plt.plot(mu_values, sev, color_idx, label=x)
plt.ylim(0, 1.1)
plt.ylabel(r"severity for $H: \mu > \mu_1$")
plt.legend(loc="lower left")
plt.xlabel(r"$\mu_1$")

Explanation: Calculate the severity for a set of observations.
End of explanation
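A quick closed-form cross-check of the severity curves above (my addition, not part of the original tutorial; it assumes the same observed mean 0.4, sigma = 2 and n = 100 used there): for a normal test statistic, SEV(mu > mu_1) = 0.95 exactly when mu_1 = x_bar - z_0.95 * sigma / sqrt(n).
x_bar, sigma_0, n_0 = 0.4, 2.0, 100
mu_95 = x_bar - norm.ppf(0.95) * sigma_0 / np.sqrt(n_0)
print(mu_95)  # ~0.071: claims of the form mu > 0.071 are severely tested at level 0.95
print(severity(mu_1=mu_95, x=[x_bar], sigma=sigma_0, n=n_0))  # ~0.95 by construction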
5,181
Given the following text description, write Python code to implement the functionality described below step by step Description: Script to copy files to Anaconda paths so you can import and use scripts Step1: find main path for install Step2: make list of paths need to copy files to & add main path Step3: add paths to list for conda environments currently available MAKE SURE HAS 'module name' AT END OF PATH!! Step5: function to copy files to list of paths Step6: Run code for each location
Python Code: module_name = 'bradlib'

Explanation: Script to copy files to Anaconda paths so you can import and use scripts
End of explanation
from distutils.sysconfig import get_python_lib  #; print(get_python_lib())
path_main = get_python_lib()
path_main
path_main.split('Anaconda3')

Explanation: find main path for install
End of explanation
dest_paths_list = []
dest_paths_list.append(path_main + '\\' + module_name)
dest_paths_list

Explanation: make list of paths need to copy files to & add main path
End of explanation
x = !conda env list
#x[2:-2]
print('------------------------------------------------')
print('Conda environments found which will install to:')
for i in x[2:-2]:
    y = i.split(' ')
    print(y[0])
    new_path = path_main.split('Anaconda3')[0] + 'Anaconda3\\envs\\' + y[0] + '\\Lib\\site-packages\\' + module_name
    #print(new_path)
    dest_paths_list.append(new_path)
#dest_paths_list

Explanation: add paths to list for conda environments currently available MAKE SURE HAS 'module name' AT END OF PATH!!
End of explanation
import os

def copy_to_paths(source, dest):
    """Function takes source and destination folders and copies files."""
    # Source and dest needed in format below:
    # source = ".\\bradlib"
    # dest = "C:\\Users\\bjk1y13\\dev\\garbage\\bradlib"

    #### Remove __pycache__ folder as is not required
    pycache_loc = source + "\\__pycache__"
    if os.path.isdir(pycache_loc) == True:
        print("__pycache__ found in source and being deleted...")
        !rmdir $pycache_loc /S /Q

    #### Copy files to new destination
    print('------------------------')
    print('Destination: ', dest)
    print('---------')
    folder_exists = os.path.isdir(dest)
    if folder_exists == True:
        print('Folder exists')
        ### delete older version folder
        print('Deleting old folder...')
        !rmdir $dest /S /Q
        print('Copying new folder...')
        !xcopy $source $dest /E /I
    elif folder_exists == False:
        print('Folder does not exist')
        print('Copying new folder...')
        !xcopy $source $dest /E /I
    else:
        print('Something has gone wrong!!')
    print('COMPLETE')
    print('------------------------')
    return

source = ".\\" + module_name

Explanation: function to copy files to list of paths
End of explanation
for destination in dest_paths_list:
    print(destination)
    copy_to_paths(source, destination)
print('INSTALL SUCCESSFUL')

Explanation: Run code for each location
End of explanation
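For reference, a cross-platform sketch of the same copy step (my addition; the version above shells out to the Windows-only rmdir and xcopy commands, so this variant uses only the standard library):
import shutil

def copy_to_paths_portable(source, dest):
    # Equivalent of copy_to_paths above without any shell commands
    pycache_loc = os.path.join(source, "__pycache__")
    if os.path.isdir(pycache_loc):
        shutil.rmtree(pycache_loc)   # drop __pycache__ before copying
    if os.path.isdir(dest):
        shutil.rmtree(dest)          # remove any previously installed version
    shutil.copytree(source, dest)    # recursive copy, like xcopy /E /I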
5,182
Given the following text description, write Python code to implement the functionality described below step by step Description: scikit-learn-linear-reg Credits Step1: Linear Regression Linear Regression is a supervised learning algorithm that models the relationship between a scalar dependent variable y and one or more explanatory variables (or independent variable) denoted X. Generate some data Step2: Fit the model
Python Code: %matplotlib inline import numpy as np import matplotlib.pyplot as plt import seaborn; from sklearn.linear_model import LinearRegression import pylab as pl seaborn.set() Explanation: scikit-learn-linear-reg Credits: Forked from PyCon 2015 Scikit-learn Tutorial by Jake VanderPlas Linear Regression End of explanation # Create some simple data import numpy as np np.random.seed(0) X = np.random.random(size=(20, 1)) y = 3 * X.squeeze() + 2 + np.random.randn(20) plt.plot(X.squeeze(), y, 'o'); Explanation: Linear Regression Linear Regression is a supervised learning algorithm that models the relationship between a scalar dependent variable y and one or more explanatory variables (or independent variable) denoted X. Generate some data: End of explanation model = LinearRegression() model.fit(X, y) # Plot the data and the model prediction X_fit = np.linspace(0, 1, 100)[:, np.newaxis] y_fit = model.predict(X_fit) plt.plot(X.squeeze(), y, 'o') plt.plot(X_fit.squeeze(), y_fit); Explanation: Fit the model: End of explanation
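As a small follow-up (my addition, not in the original notebook): the fitted parameters can be read straight off the model and should land near the generating values slope = 3 and intercept = 2 used above.
print("slope:", model.coef_[0])        # ~3
print("intercept:", model.intercept_)  # ~2
print("R^2:", model.score(X, y))       # fraction of variance explained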
5,183
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: Given the following example:
Problem: import numpy as np from sklearn.feature_selection import SelectKBest from sklearn.linear_model import LogisticRegression from sklearn.pipeline import Pipeline import pandas as pd data, target = load_data() pipe = Pipeline(steps=[ ('select', SelectKBest(k=2)), ('clf', LogisticRegression())] ) select_out = pipe.named_steps['select'].fit_transform(data, target)
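A common follow-up to this pipeline (sketch, my addition): map the SelectKBest output back to column names. This assumes the `data` returned by the unspecified load_data() above is a pandas DataFrame with named columns.
mask = pipe.named_steps['select'].get_support()  # boolean mask over input features
selected_columns = data.columns[mask]
print(selected_columns)  # the two columns that survived k=2 selection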
5,184
Given the following text description, write Python code to implement the functionality described below step by step Description: Advanced Functions Test Solutions For this test, you should use the built-in functions to be able to write the requested functions in one line. Problem 1 Use map to create a function which finds the length of each word in the phrase (broken by spaces) and return the values in a list. The function will have an input of a string, and output a list of integers. Step1: Problem 2 Use reduce to take a list of digits and return the number that they correspond to. Do not convert the integers to strings! Step2: Problem 3 Use filter to return the words from a list of words which start with a target letter. Step3: Problem 4 Use zip and list comprehension to return a list of the same length where each value is the two strings from L1 and L2 concatenated together with connector between them. Look at the example output below Step4: Problem 5 Use enumerate and other skills to return a dictionary which has the values of the list as keys and the index as the value. You may assume that a value will only appear once in the given list. Step5: Problem 6 Use enumerate and other skills from above to return the count of the number of items in the list whose value equals its index.
Python Code: def word_lengths(phrase): return list(map(len, phrase.split())) word_lengths('How long are the words in this phrase') Explanation: Advanced Functions Test Solutions For this test, you should use the built-in functions to be able to write the requested functions in one line. Problem 1 Use map to create a function which finds the length of each word in the phrase (broken by spaces) and return the values in a list. The function will have an input of a string, and output a list of integers. End of explanation def digits_to_num(digits): return reduce(lambda x,y: x*10 + y,digits) digits_to_num([3,4,3,2,1]) Explanation: Problem 2 Use reduce to take a list of digits and return the number that they correspond to. Do not convert the integers to strings! End of explanation def filter_words(word_list, letter): return filter(lambda word: word[0]==letter,word_list) l = ['hello','are','cat','dog','ham','hi','go','to','heart'] filter_words(l,'h') Explanation: Problem 3 Use filter to return the words from a list of words which start with a target letter. End of explanation def concatenate(L1, L2, connector): return [word1+connector+word2 for (word1,word2) in zip(L1,L2)] concatenate(['A','B'],['a','b'],'-') Explanation: Problem 4 Use zip and list comprehension to return a list of the same length where each value is the two strings from L1 and L2 concatenated together with connector between them. Look at the example output below: End of explanation def d_list(L): return {key:value for value,key in enumerate(L)} d_list(['a','b','c']) Explanation: Problem 5 Use enumerate and other skills to return a dictionary which has the values of the list as keys and the index as the value. You may assume that a value will only appear once in the given list. End of explanation def count_match_index(L): return len([num for count,num in enumerate(L) if num == count]) count_match_index([0,2,2,1,5,5,6,10]) Explanation: Problem 6 Use enumerate and other skills from above to return the count of the number of items in the list whose value equals its index. End of explanation
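Two Python 3 caveats for the solutions above (my addition): reduce is no longer a builtin, and filter returns a lazy iterator, so Problems 2 and 3 need small adjustments to run as shown.
from functools import reduce  # required on Python 3 for digits_to_num
print(reduce(lambda x, y: x * 10 + y, [3, 4, 3, 2, 1]))  # 34321
print(list(filter_words(['hello', 'are', 'cat', 'ham'], 'h')))  # wrap in list() to materialize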
5,185
Given the following text description, write Python code to implement the functionality described below step by step Description: Chapter 11. Null Hypothesis Significance Testing Exercise 11.1 Exercise 11.2 Exercise 11.3 Exercise 11.1 Purpose Step1: Part B fixed z Step2: Exercise 11.2 Purpose Step3: Part A fixed N Step4: Part B fixed z Step5: Exercise 11.3 Purpose
Python Code: import numpy as np from scipy.misc import factorial N = 45 z = 3 theta = 1/6 def binomial(theta, N, z): coef = factorial(N) / factorial(N-z) / factorial(z) p = coef * theta**z * (1 - theta)**(N-z) return p tail = np.arange(z+1) tail p = binomial(theta, N, tail).sum() * 2 # left and right tail probability p Explanation: Chapter 11. Null Hypothesis Significance Testing Exercise 11.1 Exercise 11.2 Exercise 11.3 Exercise 11.1 Purpose: To compute p values for stopping at fixed N and fixed z. Part A fixed N End of explanation right_tail = np.arange(z, N) p_right = z / right_tail * binomial(theta, right_tail, z) p = (1 - p_right.sum()) * 2 p Explanation: Part B fixed z End of explanation N = 45 z = 3 Explanation: Exercise 11.2 Purpose: To determine NHST CIs, and notice that they depend on the experimenter’s intention. End of explanation left_tail = np.arange(z+1) theta = np.arange(0.170, 0.190, 0.001) p = map(lambda t: binomial(t, N, left_tail).sum()*2, theta) p = list(p) list(zip(theta, p)) p = np.array(p) p_idx = np.nonzero(p > 0.05)[0][-1] theta1 = theta[p_idx] p[p_idx], theta1 right_tail = np.arange(z, N) theta = np.arange(0.005, 0.020, 0.001) p = map(lambda t: binomial(t, N, right_tail).sum()*2, theta) p = list(p) p = np.array(p) p_idx = np.nonzero(p > 0.05)[0][0] theta2 = theta[p_idx] p[p_idx], theta2 theta2, theta1 Explanation: Part A fixed N End of explanation theta = np.arange(0.150, 0.160, 0.001) low_tail = np.arange(z, N) def p_greated_than(theta): p_right = z / low_tail * binomial(theta, low_tail, z) p = (1 - p_right.sum()) * 2 return p p = map(p_greated_than, theta) p = list(p) list(zip(theta, p)) p_idx = np.nonzero(np.array(p) > 0.05)[0][-1] theta1 = theta[p_idx] p[p_idx], theta1 theta = np.arange(0.005, 0.020, 0.001) high_tail = np.arange(z+1) high_tail = np.arange(z, N+1) def p_less_than(theta): p = z / high_tail * binomial(theta, high_tail, z) p = 2 * p.sum() return p p = map(p_less_than, theta) p = list(p) p = np.array(p) p_idx = np.nonzero(p > 0.05)[0][0] theta2 = theta[p_idx] p[p_idx], theta2 theta2, theta1 Explanation: Part B fixed z End of explanation N = 45 z = 3 theta = 1/6 Ns = np.arange(40, 51) p_N = np.ones_like(Ns) / len(Ns) p_total = 0 for i, n in enumerate(Ns): # For the current `n`, determine the max z that is in the low tail: z_max = np.arange(0, n+1) / n z_max = np.nonzero(z_max <= z/N)[0][-1] low_tail = np.arange(0, z_max+1) p = 2*binomial(theta, n, low_tail).sum() p_total += p_N[i] * p print(n, p) p_total Explanation: Exercise 11.3 Purpose: To determine the p value when data collection stops at a fixed duration. End of explanation
5,186
Given the following text description, write Python code to implement the functionality described below step by step Description: https Step1: https Step2: Symmetric Difference https
Python Code: # Это единственный комментарий который имеет смысл # I s def find_index(m,a): try: return a.index(m) except : return -1 def find_two_sum(a, s): ''' >>> (3, 5) == find_two_sum([1, 3, 5, 7, 9], 12) True ''' if len(a)<2: return (-1,-1) idx = dict( (v,i) for i,v in enumerate(a) ) for i in a: m = s - i k = idx.get(m,-1) if k != -1 : return (i,k) return (-1, -1) print(find_two_sum([1, 3, 5, 7, 9], 12)) if __name__ == '__main__': import doctest; doctest.testmod() Explanation: https://www.testdome.com/questions/python/two-sum/14289?questionIds=14288,14289&generatorId=92&type=fromtest&testDifficulty=Easy Write a function that, given a list and a target sum, returns zero-based indices of any two distinct elements whose sum is equal to the target sum. If there are no such elements, the function should return (-1, -1). For example, find_two_sum([1, 3, 5, 7, 9], 12) should return a tuple containing any of the following pairs of indices: 1 and 4 (3 + 9 = 12) 2 and 3 (5 + 7 = 12) 3 and 2 (7 + 5 = 12) 4 and 1 (9 + 3 = 12) End of explanation %%javascript IPython.keyboard_manager.command_shortcuts.add_shortcut('g', { handler : function (event) { var input = IPython.notebook.get_selected_cell().get_text(); var cmd = "f = open('.toto.py', 'w');f.close()"; if (input != "") { cmd = '%%writefile .toto.py\n' + input; } IPython.notebook.kernel.execute(cmd); //cmd = "import os;os.system('open -a /Applications/MacVim.app .toto.py')"; //cmd = "!open -a /Applications/MacVim.app .toto.py"; cmd = "!code .toto.py"; IPython.notebook.kernel.execute(cmd); return false; }} ); IPython.keyboard_manager.command_shortcuts.add_shortcut('u', { handler : function (event) { function handle_output(msg) { var ret = msg.content.text; IPython.notebook.get_selected_cell().set_text(ret); } var callback = {'output': handle_output}; var cmd = "f = open('.toto.py', 'r');print(f.read())"; IPython.notebook.kernel.execute(cmd, {iopub: callback}, {silent: false}); return false; }} ); # v=getattr(a, 'pop')(1) s='print 4 7 ' commands={ 'print':print, 'len':len } def exec_string(s): global commands chunks=s.split() func_name=chunks[0] if len(chunks) else 'blbl' func=commands.get(func_name,None) params=[int(x) for x in chunks[1:]] if func: func(*params) exec_string(s) Explanation: https://stackoverflow.com/questions/28309430/edit-ipython-cell-in-an-external-editor Edit IPython cell in an external editor This is what I came up with. I added 2 shortcuts: 'g' to launch gvim with the content of the current cell (you can replace gvim with whatever text editor you like). 'u' to update the content of the current cell with what was saved by gvim. So, when you want to edit the cell with your preferred editor, hit 'g', make the changes you want to the cell, save the file in your editor (and quit), then hit 'u'. 
Just execute this cell to enable these features: End of explanation M = int(input()) m =set((map(int,input().split()))) N = int(input()) n =set((map(int,input().split()))) m ^ n S='add 5 6' method, *args = S.split() print(method) print(*map(int,args)) method,(*map(int,args)) # methods # (*map(int,args)) # command='add'.split() # method, args = command[0], list(map(int,command[1:])) # method, args for _ in range(2): met, *args = input().split() print(met, args) try: pass # methods[met](*list(map(int,args))) except: pass class Stack: def __init__(self): self.data = [] def is_empty(self): return self.data == [] def size(self): return len(self.data) def push(self, val): self.data.append(val) def clear(self): self.data.clear() def pop(self): return self.data.pop() def __repr__(self): return "Stack("+str(self.data)+")" def sum_list(ls): if len(ls)==0: return 0 elif len(ls)==1: return ls[0] else: return ls[0] + sum_list(ls[1:]) def max_list(ls): print(ls) if len(ls)==0: return None elif len(ls)==1: return ls[0] else: m = max_list(ls[1:]) return ls[0] if ls[0]>m else m def reverse_list(ls): if len(ls)<2: return ls return reverse_list(ls[1:])+ls[0:1] def is_ana(s=''): if len(s)<2: return True return s[0]==s[-1] and is_ana(s[1:len(s)-1]) print(is_ana("abc")) import turtle myTurtle = turtle.Turtle() myWin = turtle.Screen() def drawSpiral(myTurtle, lineLen): if lineLen > 0: myTurtle.forward(lineLen) myTurtle.right(90) drawSpiral(myTurtle,lineLen-5) drawSpiral(myTurtle,100) # myWin.exitonclick() t.forward(100) from itertools import combinations_with_replacement list(combinations_with_replacement([1,1,3,3,3],2)) hash((1,2)) # 4 # a a c d # 2 from itertools import combinations # N=int(input()) # s=input().split() # k=int(input()) s='a a c d'.split() k=2 combs=list(combinations(s,k)) print('{:.4f}'.format(len([x for x in combs if 'a' in x])/len(combs))) # ------------------------------------------ import random num_trials=10000 num_found=0 for i in range(num_trials): if 'a' in random.sample(s,k): num_found+=1 print('{:.4f}'.format(num_found/num_trials)) dir(5) Explanation: Symmetric Difference https://www.hackerrank.com/challenges/symmetric-difference/problem Task Given sets of integers, and , print their symmetric difference in ascending order. The term symmetric difference indicates those values that exist in either or but do not exist in both. Input Format The first line of input contains an integer, . The second line contains space-separated integers. The third line contains an integer, . The fourth line contains space-separated integers. Output Format Output the symmetric difference integers in ascending order, one per line. Sample Input 4 2 4 5 9 4 2 4 11 12 Sample Output 5 9 11 12 End of explanation
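For completeness (my addition): the scratch cells above compute m ^ n but never print it in the format HackerRank expects, so here is a compact end-to-end solution to the Symmetric Difference task described above.
input()                               # M, not needed once we have the values
a = set(map(int, input().split()))
input()                               # N
b = set(map(int, input().split()))
print(*sorted(a ^ b), sep='\n')       # ascending order, one integer per line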
5,187
Given the following text description, write Python code to implement the functionality described below step by step Description: PROV Templates in Python Author Step1: Generate an example prov template and print (and store) it's provn representation Step2: Generate an example binding from a python dictionaries (one for entitities and one for variable bindings) Step3: Convert example to xml xml and rdf are the supported exchange formats for python prov (besides provn) Step4: Instantiate prov template the instantiation is done via the provconf.instantate_template function for comparison the template file as well as the instance are printed Step5: Compare with original ProvToolbox example Step6: Instantiate PROV template using provconvert and compare results the python prov implementation does not support deserialization form provn and ttl - thus converting to xml or rdf is necessary .. Step7: Show provconvert result to compare Test
Python Code: # Define namespaces used and generate a new empty # python prov template instance # Define the variable settings in the template as a dictionary from provtemplates import provconv import prov.model as prov import six import itertools ns_dict = { 'prov':'http://www.w3.org/ns/prov#', 'var':'http://openprovenance.org/var#', 'vargen':'http://openprovenance.org/vargen#', 'tmpl':'http://openprovenance.org/tmpl#', 'foaf':'http://xmlns.com/foaf/0.1/', 'ex': 'http://example.org/', 'orcid':'http://orcid.org/', #document.set_default_namespace('http://example.org/0/') 'rdf':'http://www.w3.org/1999/02/22-rdf-syntax-ns#', 'rdfs':'http://www.w3.org/2000/01/rdf-schema#', 'xsd':'http://www.w3.org/2001/XMLSchema#', 'ex1': 'http://example.org/1/', 'ex2': 'http://example.org/2/' } entity_dict = { 'var:author':['orcid:0000-0002-3494-120X','orcid:1111-1111-1111-111X'], #'var:author':'orcid:0000-0002-3494-120X', 'var:quote':['ex:quote1','ex:quote2'] } attr_dict = { 'var:value':['A Little Provenance Goes a Long Way','Test Test Test'], 'var:name':'Luc Moreau', } instance_dict = entity_dict.copy() instance_dict.update(attr_dict) doc0 = provconv.set_namespaces(ns_dict,prov.ProvDocument()) binding0 = provconv.set_namespaces(ns_dict,prov.ProvDocument()) Explanation: PROV Templates in Python Author: Stephan Kindermann Affiliation: DKRZ Community: ENES (Earth System Sciences) Version: 0.4 (July 2018) Motivation: * The adoption of PROV templates in ENES community workflows is hindered by the following aspects: * template generation in ENES oftenly needs to be tool based (template structure depends on project/experiment configuration) * template instanciation in ENES oftenly is done in a scripting environment (most oftenly using python) * template expansion currently is only supported by the **provconvert** java tool * the transition from Java/provconvert to Python/prov is not fully supported (no import of provconvert output in prov supported) The sharing PROV adoption narratives is well supported by using jupyter notebooks and python. Core infrastructure services in ENES are implemented using python. Therfore: * The java based provconvert tool thus is difficult to exploit in ENES use cases * Using prov templates is by now only usefull for "documentation" purposes - no impact on community approaches. * To have an impact on community approaches in the short term simple python based wrappers are needed which can be integrated in our community workflows. Approach taken A simple template instantiation package (called provtemplates) was implemented supporting the instantiation of prov templates in python. This approach on the one hand side allows to generate and use PROV templates in python and which on the other hand side allows for pure python based template instantiations. Drawback (by now) is that the elaborated prov expansion algorithm is not yet implemented fully. But expansion rules can implemented easyly explicitely in python when needed (and later integrated into the python package). To make the expansion explicit on the basis of python can also seen as an advantage as the community PROV adopters don't need to dig into the expansion algorithm implemented as part of provconvert (and eventual errors therein). The approach taken is illustrated in the following: * PROV templates (being standard PROV documents) are generated in Python based on the prov library alongside PROV template instances. 
* A very simple instantiation algorithm is used to instantiate templates based on dictionaries containing the variable settings .. this instantiation algorithm can be stepwise expanded in the future Note: New location for provtemplate development The provtemplate package is now further developed as part of the ENVRI+ project and development continues at https://github.com/EnvriPlus-PROV/EnvriProvTemplates Generate a PROV template PROV templates are generated in functions with all RROV variables as parameters. This function is called with prov template variable names to generate prov templates. When called with instances the result is an prov document corresponding to the instantiated prov template. In the following the approach is illustrated based on a concrete examplec (corresponding to the first example in the provconvert tutorial). Initialize empty PROV template with namespaces needed End of explanation bundle = doc0.bundle('vargen:bundleid') #bundle.set_default_namespace('http://example.org/0/') #bundle = prov_doc (for test with doc without bundles) quote = bundle.entity('var:quote',( ('prov:value','var:value'), )) author = bundle.agent('var:author',( (prov.PROV_TYPE, "prov:Person"), ('foaf:name','var:name') )) bundle.wasAttributedTo('var:quote','var:author') #doc1 = provconv.save_and_show(doc0,'C:\\Users\\snkin\\Repos\\enes_graph_use_case\\prov_templates\\test\\xxxx') doc1 = provconv.save_and_show(doc0,'C:\\Users\\snkin\\Repos\\enes_graph_use_case\\prov_templates\\test\\xxxx') author.get_type() #author.get_asserted_types() #test = bundle.entity("var:test") #test.add_asserted_type() prov. Explanation: Generate an example prov template and print (and store) it's provn representation End of explanation binding1 = provconv.make_binding(binding0,entity_dict,attr_dict) print(binding1.get_provn()) Explanation: Generate an example binding from a python dictionaries (one for entitities and one for variable bindings) End of explanation print(doc1.serialize(format='xml')) Explanation: Convert example to xml xml and rdf are the supported exchange formats for python prov (besides provn) End of explanation new = provconv.instantiate_template(doc0,instance_dict) print(doc0.get_provn()) print(new.get_provn()) %matplotlib inline doc1.plot() new.plot() Explanation: Instantiate prov template the instantiation is done via the provconf.instantate_template function for comparison the template file as well as the instance are printed End of explanation # %load Downloads/ProvToolbox-Tutorial4-0.7.0/src/main/resources/template1.provn document prefix var <http://openprovenance.org/var#> prefix vargen <http://openprovenance.org/vargen#> prefix tmpl <http://openprovenance.org/tmpl#> prefix foaf <http://xmlns.com/foaf/0.1/> bundle vargen:bundleId entity(var:quote, [prov:value='var:value']) entity(var:author, [prov:type='prov:Person', foaf:name='var:name']) wasAttributedTo(var:quote,var:author) endBundle endDocument # take same instantiation as in the tutorial: # %load Downloads/ProvToolbox-Tutorial4-0.7.0/src/main/resources/binding1.ttl @prefix prov: <http://www.w3.org/ns/prov#> . @prefix xsd: <http://www.w3.org/2001/XMLSchema#> . @prefix tmpl: <http://openprovenance.org/tmpl#> . @prefix var: <http://openprovenance.org/var#> . @prefix ex: <http://example.com/#> . var:author a prov:Entity; tmpl:value_0 <http://orcid.org/0000-0002-3494-120X>. var:name a prov:Entity; tmpl:2dvalue_0_0 "Luc Moreau". var:quote a prov:Entity; tmpl:value_0 ex:quote1. 
var:value a prov:Entity; tmpl:2dvalue_0_0 "A Little Provenance Goes a Long Way". Explanation: Compare with original ProvToolbox example: End of explanation !provconvert -infile test/template1.provn -bindings test/binding1.ttl -outfile test/doc1.provn !provconvert -infile test/template1.provn -outfile test/template1.xml !provconvert -infile test/binding1.ttl -outfile test/binding1.xml !provconvert -infile test/doc1.provn -outfile test/doc1.xml !provconvert -infile test/template1.provn -outfile test/template1.rdf !provconvert -infile test/doc1.provn -outfile test/doc1.rdf # import template and generated provn representation in python with open('test/template1.xml') as in_file: prov_d = prov.ProvDocument() prov_d = prov_d.deserialize(source=in_file,format='xml') with open('test/binding1.xml') as in_file: prov_t = prov.ProvDocument() prov_t = prov_d.deserialize(source=in_file,format='xml') with open('test/doc1.xml') as in_file: prov_i = prov.ProvDocument() prov_i = prov_d.deserialize(source=in_file,format='xml') print(prov_d.get_provn()) print(prov_t.get_provn()) print(prov_i.get_provn()) print(prov_i.serialize(format='rdf')) # %load test/doc1.provn document bundle uuid:672b425e-a7db-470f-8653-7318d9ae8ec1 prefix foaf <http://xmlns.com/foaf/0.1/> prefix pre_0 <http://orcid.org/> prefix ex <http://example.com/#> prefix uuid <urn:uuid:> entity(ex:quote1,[prov:value = "A Little Provenance Goes a Long Way" %% xsd:string]) entity(pre_0:0000-0002-3494-120X,[prov:type = 'prov:Person', foaf:name = "Luc Moreau" %% xsd:string]) wasAttributedTo(ex:quote1, pre_0:0000-0002-3494-120X) endBundle endDocument !provconvert -infile test/doc1.provn -outfile test/doc1.png !provconvert -infile test/template1.provn -outfile test/template1.png # %load Downloads/ProvToolbox-Tutorial4-0.7.0/target/doc1.provn document bundle uuid:4c7236d5-6420-4a88-b192-6089e27aa88e prefix foaf <http://xmlns.com/foaf/0.1/> prefix pre_0 <http://orcid.org/> prefix ex <http://example.com/#> prefix uuid <urn:uuid:> entity(ex:quote1,[prov:value = "A Little Provenance Goes a Long Way" %% xsd:string]) entity(pre_0:0000-0002-3494-120X,[prov:type = 'prov:Person', foaf:name = "Luc Moreau" %% xsd:string]) wasAttributedTo(ex:quote1, pre_0:0000-0002-3494-120X) endBundle endDocument Explanation: Instantiate PROV template using provconvert and compare results the python prov implementation does not support deserialization form provn and ttl - thus converting to xml or rdf is necessary .. End of explanation import networkx as nx from bokeh.models import Range1d, Plot from bokeh.models.graphs import from_networkx from bokeh.io import show, output_notebook from bokeh.plotting import figure output_notebook() #prov.graph.prov_to_graph(prov_document) import prov.graph as graph nwx = graph.prov_to_graph(doc1) plot = Plot(x_range=Range1d(1, 10), y_range=Range1d(1, 10)) # Create a Bokeh graph from the NetworkX input using nx.spring_layout graph = from_networkx(nwx, nx.spring_layout, scale=1.8, center=(0,0)) plot.renderers.append(graph) show(plot) Explanation: Show provconvert result to compare Test: try bokeh for graph visualization End of explanation
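One last step that may be useful here (sketch, my addition; it assumes the `new` ProvDocument produced by provconv.instantiate_template earlier in this notebook): persist the pure-Python instantiation next to the provconvert output so the two expansions can be diffed on disk.
with open('test/doc1_python.provn', 'w') as f:
    f.write(new.get_provn())          # PROV-N view of the Python-side expansion
new.serialize('test/doc1_python.xml', format='xml')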
5,188
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2019 The TensorFlow Authors. Step1: 检查 TensorFlow 图 <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: 定义一个 Keras 模型 在此示例中,分类器是一个简单的四层顺序模型。 Step3: 下载并准备训练数据 Step4: 训练模型并记录数据 训练之前,请定义 Keras TensorBoard callback, 并指定日志目录。通过将此回调传递给 Model.fit(), 可以确保在 TensorBoard 中记录图形数据以进行可视化。 Step5: op-level graph 启动 TensorBoard,然后等待几秒钟以加载 UI。通过点击顶部的 “graph” 来选择图形仪表板。 Step6: 默认情况下,TensorBoard 显示 op-level图。(在左侧,您可以看到已选择 “Default” 标签。)请注意,图是倒置的。 数据从下到上流动,因此与代码相比是上下颠倒的。 但是,您可以看到该图与 Keras 模型定义紧密匹配,并具有其他计算节点的额外边缘。 图通常很大,因此您可以操纵图的可视化效果: 滚动到 zoom 来放大和缩小 拖到 pan 平移 双击切换 node expansion 来进行节点扩展(一个节点可以是其他节点的容器) 您还可以通过单击节点来查看元数据。这使您可以查看输入,输出,形状和其他详细信息。 <img class="tfo-display-only-on-site" src="https
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2019 The TensorFlow Authors. End of explanation # Load the TensorBoard notebook extension. %load_ext tensorboard from __future__ import absolute_import from __future__ import division from __future__ import print_function from datetime import datetime from packaging import version import tensorflow as tf from tensorflow import keras print("TensorFlow version: ", tf.__version__) assert version.parse(tf.__version__).release[0] >= 2, \ "This notebook requires TensorFlow 2.0 or above." # Clear any logs from previous runs !rm -rf ./logs/ Explanation: 检查 TensorFlow 图 <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://tensorflow.google.cn/tensorboard/graphs"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />在 TensorFlow.google.cn 访问</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tensorboard/graphs.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" />在 Google Colab 运行</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tensorboard/graphs.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" />在 GitHub 查看源代码</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tensorboard/graphs.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" />下载此 notebook</a> </td> </table> 概述 TensorBoard 的 图仪表盘 是检查 TensorFlow 模型的强大工具。您可以快速查看模型结构的预览图,并确保其符合您的预期想法。 您还可以查看操作级图以了解 TensorFlow 如何理解您的程序。检查操作级图可以使您深入了解如何更改模型。例如,如果训练进度比预期的慢,则可以重新设计模型。 本教程简要概述了如何在 TensorBoard 的 图仪表板中生成图诊断数据并将其可视化。您将为 Fashion-MNIST 数据集定义和训练一个简单的 Keras 序列模型,并学习如何记录和检查模型图。您还将使用跟踪API为使用新的 tf.function 注释创建的函数生成图数据。 设置 End of explanation # Define the model. model = keras.models.Sequential([ keras.layers.Flatten(input_shape=(28, 28)), keras.layers.Dense(32, activation='relu'), keras.layers.Dropout(0.2), keras.layers.Dense(10, activation='softmax') ]) model.compile( optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) Explanation: 定义一个 Keras 模型 在此示例中,分类器是一个简单的四层顺序模型。 End of explanation (train_images, train_labels), _ = keras.datasets.fashion_mnist.load_data() train_images = train_images / 255.0 Explanation: 下载并准备训练数据 End of explanation # Define the Keras TensorBoard callback. logdir="logs/fit/" + datetime.now().strftime("%Y%m%d-%H%M%S") tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir) # Train the model. model.fit( train_images, train_labels, batch_size=64, epochs=5, callbacks=[tensorboard_callback]) Explanation: 训练模型并记录数据 训练之前,请定义 Keras TensorBoard callback, 并指定日志目录。通过将此回调传递给 Model.fit(), 可以确保在 TensorBoard 中记录图形数据以进行可视化。 End of explanation %tensorboard --logdir logs Explanation: op-level graph 启动 TensorBoard,然后等待几秒钟以加载 UI。通过点击顶部的 “graph” 来选择图形仪表板。 End of explanation # The function to be traced. @tf.function def my_func(x, y): # A simple hand-rolled layer. 
return tf.nn.relu(tf.matmul(x, y)) # Set up logging. stamp = datetime.now().strftime("%Y%m%d-%H%M%S") logdir = 'logs/func/%s' % stamp writer = tf.summary.create_file_writer(logdir) # Sample data for your function. x = tf.random.uniform((3, 3)) y = tf.random.uniform((3, 3)) # Bracket the function call with # tf.summary.trace_on() and tf.summary.trace_export(). tf.summary.trace_on(graph=True, profiler=True) # Call only one tf.function when tracing. z = my_func(x, y) with writer.as_default(): tf.summary.trace_export( name="my_func_trace", step=0, profiler_outdir=logdir) %tensorboard --logdir logs/func Explanation: 默认情况下,TensorBoard 显示 op-level图。(在左侧,您可以看到已选择 “Default” 标签。)请注意,图是倒置的。 数据从下到上流动,因此与代码相比是上下颠倒的。 但是,您可以看到该图与 Keras 模型定义紧密匹配,并具有其他计算节点的额外边缘。 图通常很大,因此您可以操纵图的可视化效果: 滚动到 zoom 来放大和缩小 拖到 pan 平移 双击切换 node expansion 来进行节点扩展(一个节点可以是其他节点的容器) 您还可以通过单击节点来查看元数据。这使您可以查看输入,输出,形状和其他详细信息。 <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/graphs_computation.png?raw=1"/> <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/graphs_computation_detail.png?raw=1"/> 概念图 除了执行图,TensorBoard 还显示一个“概念图”。 这只是 Keras 模型的视图。 如果您要重新使用保存的模型并且想要检查或验证其结构,这可能会很有用。 要查看概念图,请选择 “keras” 标签。 在此示例中,您将看到一个折叠的 Sequential 节点。 双击节点以查看模型的结构: <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/graphs_tag_selection.png?raw=1"/> <br/> <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/graphs_conceptual.png?raw=1"/> tf.functions 的图 到目前为止的示例已经描述了 Keras 模型的图,其中这些图是通过定义 Keras 层并调用 Model.fit() 创建的。 您可能会遇到需要使用 tf.function 注释来[autograph]的情况,即将 Python 计算函数转换为高性能 TensorFlow 图。对于这些情况,您可以使用 TensorBoard 中的 TensorFlow Summary Trace API 记录签名函数以进行可视化。 要使用 Summary Trace API ,请执行以下操作: 使用 tf.function 定义和注释功能 在函数调用站点之前立即使用 tf.summary.trace_on() 通过传递 profiler=True 将配置文件信息(内存,CPU时间)添加到图中 使用摘要文件编写器,调用 tf.summary.trace_export() 保存日志数据 然后,您可以使用 TensorBoard 查看函数的行为。 End of explanation
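The same trace_on / trace_export bracket generalizes to any other tf.function (sketch, my addition, reusing the writer, x and y defined above):
@tf.function
def my_func2(x, y):
    # two chained hand-rolled layers, traced exactly like my_func
    return tf.nn.relu(tf.matmul(tf.nn.relu(tf.matmul(x, y)), y))

tf.summary.trace_on(graph=True)
z2 = my_func2(x, y)
with writer.as_default():
    tf.summary.trace_export(name="my_func2_trace", step=0)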
5,189
Given the following text description, write Python code to implement the functionality described below step by step Description: Non-parametric 1 sample cluster statistic on single trial power This script shows how to estimate significant clusters in time-frequency power estimates. It uses a non-parametric statistical procedure based on permutations and cluster level statistics. The procedure consists in Step1: Set parameters Step2: Compute statistic Step3: View time-frequency plots
Python Code: # Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr> # # License: BSD (3-clause) import numpy as np import matplotlib.pyplot as plt import mne from mne.time_frequency import tfr_morlet from mne.stats import permutation_cluster_1samp_test from mne.datasets import sample print(__doc__) Explanation: Non-parametric 1 sample cluster statistic on single trial power This script shows how to estimate significant clusters in time-frequency power estimates. It uses a non-parametric statistical procedure based on permutations and cluster level statistics. The procedure consists in: extracting epochs compute single trial power estimates baseline line correct the power estimates (power ratios) compute stats to see if ratio deviates from 1. End of explanation data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif' tmin, tmax, event_id = -0.3, 0.6, 1 # Setup for reading the raw data raw = mne.io.read_raw_fif(raw_fname) events = mne.find_events(raw, stim_channel='STI 014') include = [] raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more # picks MEG gradiometers picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True, stim=False, include=include, exclude='bads') # Load condition 1 event_id = 1 epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks, baseline=(None, 0), preload=True, reject=dict(grad=4000e-13, eog=150e-6)) # Take only one channel ch_name = 'MEG 1332' epochs.pick_channels([ch_name]) evoked = epochs.average() # Factor to down-sample the temporal dimension of the TFR computed by # tfr_morlet. Decimation occurs after frequency decomposition and can # be used to reduce memory usage (and possibly computational time of downstream # operations such as nonparametric statistics) if you don't need high # spectrotemporal resolution. decim = 5 freqs = np.arange(8, 40, 2) # define frequencies of interest sfreq = raw.info['sfreq'] # sampling in Hz tfr_epochs = tfr_morlet(epochs, freqs, n_cycles=4., decim=decim, average=False, return_itc=False, n_jobs=1) # Baseline power tfr_epochs.apply_baseline(mode='logratio', baseline=(-.100, 0)) # Crop in time to keep only what is between 0 and 400 ms evoked.crop(0., 0.4) tfr_epochs.crop(0., 0.4) epochs_power = tfr_epochs.data[:, 0, :, :] # take the 1 channel Explanation: Set parameters End of explanation threshold = 2.5 n_permutations = 100 # Warning: 100 is too small for real-world analysis. T_obs, clusters, cluster_p_values, H0 = \ permutation_cluster_1samp_test(epochs_power, n_permutations=n_permutations, threshold=threshold, tail=0) Explanation: Compute statistic End of explanation evoked_data = evoked.data times = 1e3 * evoked.times plt.figure() plt.subplots_adjust(0.12, 0.08, 0.96, 0.94, 0.2, 0.43) # Create new stats image with only significant clusters T_obs_plot = np.nan * np.ones_like(T_obs) for c, p_val in zip(clusters, cluster_p_values): if p_val <= 0.05: T_obs_plot[c] = T_obs[c] vmax = np.max(np.abs(T_obs)) vmin = -vmax plt.subplot(2, 1, 1) plt.imshow(T_obs, cmap=plt.cm.gray, extent=[times[0], times[-1], freqs[0], freqs[-1]], aspect='auto', origin='lower', vmin=vmin, vmax=vmax) plt.imshow(T_obs_plot, cmap=plt.cm.RdBu_r, extent=[times[0], times[-1], freqs[0], freqs[-1]], aspect='auto', origin='lower', vmin=vmin, vmax=vmax) plt.colorbar() plt.xlabel('Time (ms)') plt.ylabel('Frequency (Hz)') plt.title('Induced power (%s)' % ch_name) ax2 = plt.subplot(2, 1, 2) evoked.plot(axes=[ax2], time_unit='s') plt.show() Explanation: View time-frequency plots End of explanation
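To make the 0.05 screening used in the final plot explicit (my addition, reusing the variables computed above):
for i, (c, p_val) in enumerate(zip(clusters, cluster_p_values)):
    marker = "significant" if p_val <= 0.05 else "n.s."
    print(f"cluster {i}: p = {p_val:.3f} ({marker})")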
5,190
Given the following text description, write Python code to implement the functionality described below step by step Description: Inital setup Step1: loading summary file that I previously created by scanning through L1A darks. Step2: These are the columns I have available in this dataset Step3: Correct mean DN for binning When binning was applied, each pixel in the stored array carries the sum of the pixels that have been binned. Hence I have to correct the mean DN value for the applied binning. Step4: Integration time correction This is just to normalize for integration time to create DN/s. I also put here the time of the observation into the index and sort the data by time. This way, people that are more aware of the timeline could correlate special events to irregularities in the data. Step5: Different dark modes Two different kinds of dark modes have been taken Step6: Calibrate temperatures I implemented my own DN_to_degC converter using the polynoms from our L1B pipeline. Step7: Checking how the stats look like for calibrated DET_TEMP Step8: Remaining trends in the DN_per_s Now that I should have removed influences from INT_TIME, different methods of dark imaging and binning counts, I can focus on temperature effects. Step9: This is a scatter matrix for the above chosen columns. Step10: Note that the DN_per_s and DET_TEMP scatter plot still shows different families. Because longer exposure times allow more cosmic rays to hit, it is to be expected that despite the correction for INT_TIME towards DN_per_s still shows differences per initial INT_TIME. Step11: Above we see that the longest INT_TIME shows the highest inherent scatter, most likely due to the CRs. Let's see how the DN_per_s develop in general over time of observations and how that compares to DET_TEMP. Step12: As previously known, we have a strong correlation between DET_TEMP and DN_per_s created. Looking at a scatter plot, there seem to exist identifiable situations that create different relationships between DET_TEMP and DN_per_s Step13: I was worried that I have to treat sets of dark images differently for some reason, so I filter out any data that has a valid NAXIS3 and look only at data that has no valid NAXIS3. (i.e. focusing on single dark images, which is the majority anyway, as seen here with the NaN entry). Step14: But the result looks the same, so this is no issue. Nevertheless, to be sure I will use this subframe from now on. INT_TIME dependencies As mentioned before, different integration times offer different probabilities for disturbant factors to happen. So let's focus on particular INT_TIMEs. Here's how the dark INT_TIMEs distribute over the L1A dataset. One also has to divide the MUV and FUV data. Step15: Looping over a chosen set of INT_TIMEs to create the following overview plot, where things are separated for different INT_TIMES. First, the MUV data. MUV DN_per_s vs DET_TEMP Step16: FUV DN_per_s vs DET_TEMP Step17: Following the interesting consistent separation in families of scatter points, let's look at what the DET_TEMP does over time during these different INT_TIMES. MUV DET_TEMP over Time Step18: FUV DET_TEMP over TIME
Python Code: %matplotlib inline plt.rcParams['figure.figsize'] = (10,10) from matplotlib.pyplot import subplots Explanation: Inital setup End of explanation import pandas as pd df = pd.read_hdf('/home/klay6683/to_keep/l1a_dark_stats.h5','df') df.DET_TEMP.plot( Explanation: loading summary file that I previously created by scanning through L1A darks. End of explanation print(df.columns.values) Explanation: These are the columns I have available in this dataset: End of explanation def correct_mean_value(row): return row['SPE_SIZE'] * row['SPA_SIZE'] df['pixelsum'] = df.apply(correct_mean_value, axis=1) df['corrected_mean'] = df.dark_mean / df.pixelsum Explanation: Correct mean DN for binning When binning was applied, each pixel in the stored array carries the sum of the pixels that have been binned. Hence I have to correct the mean DN value for the applied binning. End of explanation df['DN_per_s'] = df.corrected_mean / (df.INT_TIME/1000) df.index = df.FILENAME.map(lambda x: io.Filename(x).time) df.index.name = 'Time' df.sort_index(inplace=True) Explanation: Integration time correction This is just to normalize for integration time to create DN/s. I also put here the time of the observation into the index and sort the data by time. This way, people that are more aware of the timeline could correlate special events to irregularities in the data. End of explanation df.MCP_VOLT.value_counts() import numpy as np df[df.MCP_VOLT > 0] = np.nan Explanation: Different dark modes Two different kinds of dark modes have been taken: Using the shutter to block photon transport to the detector Setting the MCP_Voltage to approx zero to avoid creating signal in the detector Because of reduced reproducability for darks from mode 1, it was decided to use mode 2 for the future darks. I therefore filter out case 1 for now and set it to NAN. Below one can see that most of the darks have been taken in mode 2 anyway: End of explanation from iuvs import calib df.DET_TEMP = calib.iuvs_dn_to_temp(df.DET_TEMP, det_temp=True) df.CASE_TEMP = calib.iuvs_dn_to_temp(df.CASE_TEMP, det_temp=False) Explanation: Calibrate temperatures I implemented my own DN_to_degC converter using the polynoms from our L1B pipeline. End of explanation df.DET_TEMP.describe() df[df.XUV=='FUV'].DET_TEMP['2014-10-19':'2015-01-31'].plot(label='FUV') df[df.XUV=='MUV'].DET_TEMP['2014-10-19':'2015-01-31'].plot(label='MUV') import seaborn seaborn.set_context('talk') d1 = '2014-12-14' d2 = '2014-12-14' gb = df[d1:d2].groupby('XUV') gb.DET_TEMP.plot(style='*', legend=True, markersize=14, title='{}--{}'.format(d1,d2),x_compat=True) Explanation: Checking how the stats look like for calibrated DET_TEMP End of explanation cols = 'DN_per_s CASE_TEMP DET_TEMP INT_TIME'.split() Explanation: Remaining trends in the DN_per_s Now that I should have removed influences from INT_TIME, different methods of dark imaging and binning counts, I can focus on temperature effects. End of explanation pd.scatter_matrix(df[cols], figsize=(10,10)); Explanation: This is a scatter matrix for the above chosen columns. End of explanation df.plot(x='INT_TIME', y='DN_per_s', kind='scatter') Explanation: Note that the DN_per_s and DET_TEMP scatter plot still shows different families. Because longer exposure times allow more cosmic rays to hit, it is to be expected that despite the correction for INT_TIME towards DN_per_s still shows differences per initial INT_TIME. 
End of explanation _, axes = subplots(nrows=2) df[['DET_TEMP', 'DN_per_s']].plot(secondary_y='DN_per_s',ax=axes[0]) df.CASE_TEMP.plot(ax=axes[1], ylim=(4,6)) Explanation: Above we see that the longest INT_TIME shows the highest inherent scatter, most likely due to the CRs. Let's see how the DN_per_s develop in general over time of observations and how that compares to DET_TEMP. End of explanation df.plot(x='DET_TEMP', y='DN_per_s',kind='scatter') Explanation: As previously known, we have a strong correlation between DET_TEMP and DN_per_s created. Looking at a scatter plot, there seem to exist identifiable situations that create different relationships between DET_TEMP and DN_per_s: End of explanation df.NAXIS3.value_counts(dropna=False) subdf = df[df.NAXIS3.isnull()] subdf.plot(x='DET_TEMP', y='DN_per_s', kind='scatter') Explanation: I was worried that I have to treat sets of dark images differently for some reason, so I filter out any data that has a valid NAXIS3 and look only at data that has no valid NAXIS3. (i.e. focusing on single dark images, which is the majority anyway, as seen here with the NaN entry). End of explanation df.INT_TIME.value_counts() fuv = subdf[subdf.XUV=='FUV'] muv = subdf[subdf.XUV=='MUV'] Explanation: But the result looks the same, so this is no issue. Nevertheless, to be sure I will use this subframe from now on. INT_TIME dependencies As mentioned before, different integration times offer different probabilities for disturbant factors to happen. So let's focus on particular INT_TIMEs. Here's how the dark INT_TIMEs distribute over the L1A dataset. One also has to divide the MUV and FUV data. End of explanation inttimes = [14400, 10200, 6000, 4200, 4000, 1400] fig, axes = subplots(nrows=len(inttimes), figsize=(12,13)) for ax, inttime in zip(axes, inttimes): muv[muv.INT_TIME==inttime].plot(x='DET_TEMP', y='DN_per_s',kind='scatter', ax=ax, sharex=False) ax.set_title('INT_TIME = {}'.format(inttime)) fig.suptitle('MUV DN_per_s vs DET_TEMP, sorted by INT_TIME', fontsize=20) fig.tight_layout() Explanation: Looping over a chosen set of INT_TIMEs to create the following overview plot, where things are separated for different INT_TIMES. First, the MUV data. MUV DN_per_s vs DET_TEMP End of explanation inttimes = [14400, 10200, 6000, 4200, 4000, 1400] fig, axes = subplots(nrows=len(inttimes), figsize=(12,13)) for ax, inttime in zip(axes, inttimes): fuv[fuv.INT_TIME==inttime].plot(x='DET_TEMP', y='DN_per_s',kind='scatter', ax=ax, sharex=False) ax.set_title('INT_TIME = {}'.format(inttime)) fig.suptitle('FUV DN_per_s vs DET_TEMP, sorted by INT_TIME', fontsize=20) fig.tight_layout() Explanation: FUV DN_per_s vs DET_TEMP End of explanation fig, axes = subplots(nrows=len(inttimes), figsize=(13,14)) for ax,inttime in zip(axes, inttimes): muv[muv.INT_TIME==inttime]['DET_TEMP'].plot(ax=ax, style='*', sharex=False) ax.set_title('MUV INT_TIME={}'.format(inttime)) fig.tight_layout() Explanation: Following the interesting consistent separation in families of scatter points, let's look at what the DET_TEMP does over time during these different INT_TIMES. MUV DET_TEMP over Time End of explanation fig, axes = subplots(nrows=len(inttimes), figsize=(13,14)) for ax,inttime in zip(axes, inttimes): fuv[fuv.INT_TIME==inttime]['DET_TEMP'].plot(ax=ax, style='*', sharex=False) ax.set_title('FUV INT_TIME={}'.format(inttime)) fig.tight_layout() Explanation: FUV DET_TEMP over TIME End of explanation
5,191
Given the following text description, write Python code to implement the functionality described below step by step Description: Objective Overview of ML Model Build Process Logistic Regression Introduction Model Evaluations Step1: Model Building Process Step2: Dataset Step3: Logistic Regression - Model Take a weighted sum of the features and add a bias term to get the logit. Sqash this weighted sum to arange between 0-1 via a Sigmoid function. Sigmoid Function <img src="images/sigmoid.png",width=500> $$f(x) = \frac{e^x}{1+e^x}$$ Step4: Dataset - Take 2 Step5: Other Evaluation Methods Confusion Matrix
Python Code: from __future__ import print_function # Python 2/3 compatibility from IPython.display import Image import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline Explanation: Objective Overview of ML Model Build Process Logistic Regression Introduction Model Evaluations End of explanation Image("images/model-pipeline.png") Explanation: Model Building Process End of explanation centers = np.array([[0, 0]] * 100 + [[1, 1]] * 100) np.random.seed(42) X = np.random.normal(0, 0.2, (200, 2)) + centers y = np.array([0] * 100 + [1] * 100) plt.scatter(X[:,0], X[:,1], c=y, cmap=plt.cm.RdYlBu) plt.colorbar(); X[:5] y[:5], y[-5:] Explanation: Dataset End of explanation Image("images/logistic-regression.png") ## Build the Model from sklearn.linear_model import LogisticRegression ## Step 1 - Instantiate the Model with Hyper Parameters (We don't have any here) model = LogisticRegression() ## Step 2 - Fit the Model model.fit(X, y) ## Step 3 - Evaluate the Model model.score(X, y) def plot_decision_boundaries(model, X, y): pred_labels = model.predict(X) plt.scatter(X[:,0], X[:,1], c=y, cmap=plt.cm.RdYlBu, vmin=0.0, vmax=1) xx = np.linspace(-1, 2, 100) w0, w1 = model.coef_[0] bias = model.intercept_ yy = -w0 / w1 * xx - bias / w1 plt.plot(xx, yy, 'k') plt.axis((-1,2,-1,2)) plt.colorbar() plot_decision_boundaries(model, X, y) Explanation: Logistic Regression - Model Take a weighted sum of the features and add a bias term to get the logit. Sqash this weighted sum to arange between 0-1 via a Sigmoid function. Sigmoid Function <img src="images/sigmoid.png",width=500> $$f(x) = \frac{e^x}{1+e^x}$$ End of explanation centers = np.array([[0, 0]] * 100 + [[1, 1]] * 100) np.random.seed(42) X = np.random.normal(0, 0.5, (200, 2)) + centers y = np.array([0] * 100 + [1] * 100) plt.scatter(X[:,0], X[:,1], c=y, cmap=plt.cm.RdYlBu) plt.colorbar(); # Instantiate, Fit, Evalaute model = LogisticRegression() model.fit(X, y) print(model.score(X, y)) y_pred = model.predict(X) plot_decision_boundaries(model, X, y) Explanation: Dataset - Take 2 End of explanation from sklearn.metrics import confusion_matrix cm = confusion_matrix(y, y_pred) cm pd.crosstab(y, y_pred, rownames=['Actual'], colnames=['Predicted'], margins=True) Explanation: Other Evaluation Methods Confusion Matrix End of explanation
5,192
Given the following text description, write Python code to implement the functionality described below step by step Description: Training Collaborative Experts on MSR-VTT This notebook shows how to download code that trains a Collaborative Experts model with GPT-1 + NetVLAD on the MSR-VTT Dataset. Setup Download Code and Dependencies Import Modules Download Language Model Weights Download Datasets Generate Encodings for Dataset Captions Code Downloading and Dependency Downloading Specify tensorflow version Clone repository from Github cd into the correct directory Install the requirements Step1: Importing Modules Step2: Language Model Downloading Download GPT-1 Step3: Dataset downloading Downlaod Datasets Download Precomputed Features Step4: Note Step5: Embeddings Generation Generate Embeddings for MSR-VTT Note Step6: Training Build Train Datasets Initialize Models Compile Encoders Fit Model Test Model Datasets Generation Step7: Model Initialization Step8: Encoder Compliation Step9: Model fitting Step10: Tests
Python Code: %tensorflow_version 2.x !git clone https://github.com/googleinterns/via-content-understanding.git %cd via-content-understanding/videoretrieval/ !pip install -r requirements.txt !pip install --upgrade tensorflow_addons Explanation: Training Collaborative Experts on MSR-VTT This notebook shows how to download code that trains a Collaborative Experts model with GPT-1 + NetVLAD on the MSR-VTT Dataset. Setup Download Code and Dependencies Import Modules Download Language Model Weights Download Datasets Generate Encodings for Dataset Captions Code Downloading and Dependency Downloading Specify tensorflow version Clone repository from Github cd into the correct directory Install the requirements End of explanation import tensorflow as tf import languagemodels import train.encoder_datasets import train.language_model import experts import datasets import datasets.msrvtt.constants import os import models.components import models.encoder import helper.precomputed_features from tensorflow_addons.activations import mish import tensorflow_addons as tfa import metrics.loss Explanation: Importing Modules End of explanation gpt_model = languagemodels.OpenAIGPTModel() Explanation: Language Model Downloading Download GPT-1 End of explanation datasets.msrvtt_dataset.download_dataset() Explanation: Dataset downloading Downlaod Datasets Download Precomputed Features End of explanation url = datasets.msrvtt.constants.features_tar_url path = datasets.msrvtt.constants.features_tar_path os.system(f"curl {url} > {path}") helper.precomputed_features.cache_features( datasets.msrvtt_dataset, datasets.msrvtt.constants.expert_to_features, datasets.msrvtt.constants.features_tar_path,) Explanation: Note: The system curl is more memory efficent than the download function in our codebase, so here curl is used rather than the download function in our codebase. 
End of explanation train.language_model.generate_and_cache_contextual_embeddings( gpt_model, datasets.msrvtt_dataset) Explanation: Embeddings Generation Generate Embeddings for MSR-VTT Note: this will take 20-30 minutes on a colab, depending on the GPU End of explanation experts_used = [ experts.i3d, experts.r2p1d, experts.resnext, experts.senet, experts.speech_expert, experts.ocr_expert, experts.audio_expert, experts.densenet, experts.face_expert] train_ds, valid_ds, test_ds = ( train.encoder_datasets.generate_encoder_datasets( gpt_model, datasets.msrvtt_dataset, experts_used)) Explanation: Training Build Train Datasets Initialize Models Compile Encoders Fit Model Test Model Datasets Generation End of explanation class MishLayer(tf.keras.layers.Layer): def call(self, inputs): return mish(inputs) mish(tf.Variable([1.0])) text_encoder = models.components.TextEncoder( len(experts_used), num_netvlad_clusters=28, ghost_clusters=1, language_model_dimensionality=768, encoded_expert_dimensionality=512, residual_cls_token=False, ) video_encoder = models.components.VideoEncoder( num_experts=len(experts_used), experts_use_netvlad=[False, False, False, False, True, True, True, False, False], experts_netvlad_shape=[None, None, None, None, 19, 43, 8, None, None], expert_aggregated_size=512, encoded_expert_dimensionality=512, g_mlp_layers=3, h_mlp_layers=0, make_activation_layer=MishLayer) encoder = models.encoder.EncoderForFrozenLanguageModel( video_encoder, text_encoder, 0.0938, [1, 5, 10, 50], 20) Explanation: Model Initialization End of explanation def build_optimizer(lr=0.001): learning_rate_scheduler = tf.keras.optimizers.schedules.ExponentialDecay( initial_learning_rate=lr, decay_steps=101, decay_rate=0.95, staircase=True) return tf.keras.optimizers.Adam(learning_rate_scheduler) encoder.compile(build_optimizer(0.1), metrics.loss.bidirectional_max_margin_ranking_loss) train_ds_prepared = (train_ds .shuffle(1000) .batch(64, drop_remainder=True) .prefetch(tf.data.experimental.AUTOTUNE)) encoder.video_encoder.trainable = True encoder.text_encoder.trainable = True Explanation: Encoder Compliation End of explanation encoder.fit( train_ds_prepared, epochs=100, ) Explanation: Model fitting End of explanation captions_per_video = 20 num_videos_upper_bound = 100000 ranks = [] for caption_index in range(captions_per_video): batch = next(iter(test_ds.shard(captions_per_video, caption_index).batch( num_videos_upper_bound))) video_embeddings, text_embeddings, mixture_weights = encoder.forward_pass( batch, training=False) similarity_matrix = metrics.loss.build_similarity_matrix( video_embeddings, text_embeddings, mixture_weights, batch[-1]) rankings = metrics.rankings.compute_ranks(similarity_matrix) ranks += list(rankings.numpy()) def recall_at_k(ranks, k): return len(list(filter(lambda i: i <= k, ranks))) / len(ranks) median_rank = sorted(ranks)[len(ranks)//2] mean_rank = sum(ranks)/len(ranks) print(f"Median Rank: {median_rank}") print(f"Mean Rank: {mean_rank}") for k in [1, 5, 10, 50]: recall = recall_at_k(ranks, k) print(f"R@{k}: {recall}") Explanation: Tests End of explanation
5,193
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: Hi, I've read a lot of questions here on Stack Overflow about this problem, but I have a slightly different task.
Problem: import pandas as pd df = pd.DataFrame({'DateTime': ['2000-01-04', '2000-01-05', '2000-01-06', '2000-01-07', '2000-01-08'], 'Close': [1460, 1470, 1480, 1480, 1450]}) df['DateTime'] = pd.to_datetime(df['DateTime']) def g(df): # compare each day's Close with the next day's: # 1 if tomorrow's Close is lower, 0 if equal, -1 if higher label = [] for i in range(len(df)-1): if df.loc[i, 'Close'] > df.loc[i+1, 'Close']: label.append(1) elif df.loc[i, 'Close'] == df.loc[i+1, 'Close']: label.append(0) else: label.append(-1) # the last row has no following day to compare against, so it is labelled 1 label.append(1) df['label'] = label # reformat the dates as e.g. 04-Jan-2000 df["DateTime"] = df["DateTime"].dt.strftime('%d-%b-%Y') return df df = g(df.copy())
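A quick sanity check of what g() returns for the sample frame above; the expected result is shown as comments (column spacing approximate):
print(df)
#       DateTime  Close  label
# 0  04-Jan-2000   1460     -1
# 1  05-Jan-2000   1470     -1
# 2  06-Jan-2000   1480      0
# 3  07-Jan-2000   1480      1
# 4  08-Jan-2000   1450      1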
5,194
Given the following text description, write Python code to implement the functionality described below step by step Description: Sentiment Analysis - Text Classification with Universal Embeddings Textual data, in spite of being highly unstructured, can be classified into two major types of documents. - Factual documents which typically depict some form of statements or facts with no specific feelings or emotion attached to them. These are also known as objective documents. - Subjective documents on the other hand have text which expresses feelings, mood, emotions and opinion. Sentiment Analysis is also popularly known as opinion analysis or opinion mining. The key idea is to use techniques from text analytics, NLP, machine learning and linguistics to extract important information or data points from unstructured text. This in turn can help us derive the sentiment from text data. Here we will be looking at building supervised sentiment analysis classification models thanks to the advantage of labeled data! The dataset we will be working with is the IMDB Large Movie Review Dataset having 50000 reviews classified into positive and negative sentiment. I have provided a compressed version of the dataset in this repository itself for your benefit! Do remember that the focus here is not sentiment analysis but text classification by leveraging universal sentence embeddings. We will leverage the following sentence encoders here for demonstration from TensorFlow Hub Step1: Load up Dependencies Step2: Check if GPU is available for use! Step3: Load and View Dataset Step4: Build train, validation and test datasets Step5: Basic Text Wrangling Step6: Build Data Ingestion Functions Step7: Build Deep Learning Model with Universal Sentence Encoder Step8: Train for approx 12 epochs Step9: Model Training Step10: Model Evaluation Step11: Build a Generic Model Trainer on any Input Sentence Encoder Step12: Train Deep Learning Models on different Sentence Encoders NNLM - pre-trained and fine-tuning USE - pre-trained and fine-tuning Step13: Model Evaluations
Python Code: !pip install tensorflow-hub Explanation: Sentiment Analysis - Text Classification with Universal Embeddings Textual data in spite of being highly unstructured, can be classified into two major types of documents. - Factual documents which typically depict some form of statements or facts with no specific feelings or emotion attached to them. These are also known as objective documents. - Subjective documents on the other hand have text which expresses feelings, mood, emotions and opinion. Sentiment Analysis is also popularly known as opinion analysis or opinion mining. The key idea is to use techniques from text analytics, NLP, machine learning and linguistics to extract important information or data points from unstructured text. This in turn can help us derive the sentiment from text data Here we will be looking at building supervised sentiment analysis classification models thanks to the advantage of labeled data! The dataset we will be working with is the IMDB Large Movie Review Dataset having 50000 reviews classified into positive and negative sentiment. I have provided a compressed version of the dataset in this repository itself for your benefit! Do remember that the focus here is not sentiment analysis but text classification by leveraging universal sentence embeddings. We will leverage the following sentence encoders here for demonstration from TensorFlow Hub: Neural-Net Language Model (nnlm-en-dim128) Universal Sentence Encoder (universal-sentence-encoder) Developed by Dipanjan (DJ) Sarkar Install Tensorflow Hub End of explanation import tensorflow as tf import tensorflow_hub as hub import numpy as np import pandas as pd Explanation: Load up Dependencies End of explanation tf.test.is_gpu_available() tf.test.gpu_device_name() Explanation: Check if GPU is available for use! 
End of explanation dataset = pd.read_csv('movie_reviews.csv.bz2', compression='bz2') dataset.info() dataset['sentiment'] = [1 if sentiment == 'positive' else 0 for sentiment in dataset['sentiment'].values] dataset.head() Explanation: Load and View Dataset End of explanation reviews = dataset['review'].values sentiments = dataset['sentiment'].values train_reviews = reviews[:30000] train_sentiments = sentiments[:30000] val_reviews = reviews[30000:35000] val_sentiments = sentiments[30000:35000] test_reviews = reviews[35000:] test_sentiments = sentiments[35000:] train_reviews.shape, val_reviews.shape, test_reviews.shape Explanation: Build train, validation and test datasets End of explanation !pip install contractions !pip install beautifulsoup4 import contractions from bs4 import BeautifulSoup import unicodedata import re def strip_html_tags(text): soup = BeautifulSoup(text, "html.parser") [s.extract() for s in soup(['iframe', 'script'])] stripped_text = soup.get_text() stripped_text = re.sub(r'[\r|\n|\r\n]+', '\n', stripped_text) return stripped_text def remove_accented_chars(text): text = unicodedata.normalize('NFKD', text).encode('ascii', 'ignore').decode('utf-8', 'ignore') return text def expand_contractions(text): return contractions.fix(text) def remove_special_characters(text, remove_digits=False): pattern = r'[^a-zA-Z0-9\s]' if not remove_digits else r'[^a-zA-Z\s]' text = re.sub(pattern, '', text) return text def pre_process_document(document): # strip HTML document = strip_html_tags(document) # lower case document = document.lower() # remove extra newlines (often might be present in really noisy text) document = document.translate(document.maketrans("\n\t\r", " ")) # remove accented characters document = remove_accented_chars(document) # expand contractions document = expand_contractions(document) # remove special characters and\or digits # insert spaces between special characters to isolate them special_char_pattern = re.compile(r'([{.(-)!}])') document = special_char_pattern.sub(" \\1 ", document) document = remove_special_characters(document, remove_digits=True) # remove extra whitespace document = re.sub(' +', ' ', document) document = document.strip() return document pre_process_corpus = np.vectorize(pre_process_document) train_reviews = pre_process_corpus(train_reviews) val_reviews = pre_process_corpus(val_reviews) test_reviews = pre_process_corpus(test_reviews) Explanation: Basic Text Wrangling End of explanation # Training input on the whole training set with no limit on training epochs. train_input_fn = tf.estimator.inputs.numpy_input_fn( {'sentence': train_reviews}, train_sentiments, batch_size=256, num_epochs=None, shuffle=True) # Prediction on the whole training set. predict_train_input_fn = tf.estimator.inputs.numpy_input_fn( {'sentence': train_reviews}, train_sentiments, shuffle=False) # Prediction on the whole validation set. predict_val_input_fn = tf.estimator.inputs.numpy_input_fn( {'sentence': val_reviews}, val_sentiments, shuffle=False) # Prediction on the test set. 
predict_test_input_fn = tf.estimator.inputs.numpy_input_fn( {'sentence': test_reviews}, test_sentiments, shuffle=False) Explanation: Build Data Ingestion Functions End of explanation embedding_feature = hub.text_embedding_column( key='sentence', module_spec="https://tfhub.dev/google/universal-sentence-encoder/2", trainable=False) dnn = tf.estimator.DNNClassifier( hidden_units=[512, 128], feature_columns=[embedding_feature], n_classes=2, activation_fn=tf.nn.relu, dropout=0.1, optimizer=tf.train.AdagradOptimizer(learning_rate=0.005)) Explanation: Build Deep Learning Model with Universal Sentence Encoder End of explanation 256*1500 / 30000 Explanation: Train for approx 12 epochs End of explanation tf.logging.set_verbosity(tf.logging.ERROR) import time TOTAL_STEPS = 1500 STEP_SIZE = 100 for step in range(0, TOTAL_STEPS+1, STEP_SIZE): print() print('-'*100) print('Training for step =', step) start_time = time.time() dnn.train(input_fn=train_input_fn, steps=STEP_SIZE) elapsed_time = time.time() - start_time print('Train Time (s):', elapsed_time) print('Eval Metrics (Train):', dnn.evaluate(input_fn=predict_train_input_fn)) print('Eval Metrics (Validation):', dnn.evaluate(input_fn=predict_val_input_fn)) Explanation: Model Training End of explanation dnn.evaluate(input_fn=predict_train_input_fn) dnn.evaluate(input_fn=predict_test_input_fn) Explanation: Model Evaluation End of explanation import time TOTAL_STEPS = 1500 STEP_SIZE = 500 my_checkpointing_config = tf.estimator.RunConfig( keep_checkpoint_max = 2, # Retain the 2 most recent checkpoints. ) def train_and_evaluate_with_sentence_encoder(hub_module, train_module=False, path=''): embedding_feature = hub.text_embedding_column( key='sentence', module_spec=hub_module, trainable=train_module) print() print('='*100) print('Training with', hub_module) print('Trainable is:', train_module) print('='*100) dnn = tf.estimator.DNNClassifier( hidden_units=[512, 128], feature_columns=[embedding_feature], n_classes=2, activation_fn=tf.nn.relu, dropout=0.1, optimizer=tf.train.AdagradOptimizer(learning_rate=0.005), model_dir=path, config=my_checkpointing_config) for step in range(0, TOTAL_STEPS+1, STEP_SIZE): print('-'*100) print('Training for step =', step) start_time = time.time() dnn.train(input_fn=train_input_fn, steps=STEP_SIZE) elapsed_time = time.time() - start_time print('Train Time (s):', elapsed_time) print('Eval Metrics (Train):', dnn.evaluate(input_fn=predict_train_input_fn)) print('Eval Metrics (Validation):', dnn.evaluate(input_fn=predict_val_input_fn)) train_eval_result = dnn.evaluate(input_fn=predict_train_input_fn) test_eval_result = dnn.evaluate(input_fn=predict_test_input_fn) return { "Model Dir": dnn.model_dir, "Training Accuracy": train_eval_result["accuracy"], "Test Accuracy": test_eval_result["accuracy"], "Training AUC": train_eval_result["auc"], "Test AUC": test_eval_result["auc"], "Training Precision": train_eval_result["precision"], "Test Precision": test_eval_result["precision"], "Training Recall": train_eval_result["recall"], "Test Recall": test_eval_result["recall"] } Explanation: Build a Generic Model Trainer on any Input Sentence Encoder End of explanation tf.logging.set_verbosity(tf.logging.ERROR) results = {} results["nnlm-en-dim128"] = train_and_evaluate_with_sentence_encoder( "https://tfhub.dev/google/nnlm-en-dim128/1", path='/storage/models/nnlm-en-dim128_f/') results["nnlm-en-dim128-with-training"] = train_and_evaluate_with_sentence_encoder( "https://tfhub.dev/google/nnlm-en-dim128/1", train_module=True, 
path='/storage/models/nnlm-en-dim128_t/') results["use-512"] = train_and_evaluate_with_sentence_encoder( "https://tfhub.dev/google/universal-sentence-encoder/2", path='/storage/models/use-512_f/') results["use-512-with-training"] = train_and_evaluate_with_sentence_encoder( "https://tfhub.dev/google/universal-sentence-encoder/2", train_module=True, path='/storage/models/use-512_t/') Explanation: Train Deep Learning Models on different Sentence Encoders NNLM - pre-trained and fine-tuning USE - pre-trained and fine-tuning End of explanation results_df = pd.DataFrame.from_dict(results, orient="index") results_df best_model_dir = results_df[results_df['Test Accuracy'] == results_df['Test Accuracy'].max()]['Model Dir'].values[0] best_model_dir embedding_feature = hub.text_embedding_column( key='sentence', module_spec="https://tfhub.dev/google/universal-sentence-encoder/2", trainable=True) dnn = tf.estimator.DNNClassifier( hidden_units=[512, 128], feature_columns=[embedding_feature], n_classes=2, activation_fn=tf.nn.relu, dropout=0.1, optimizer=tf.train.AdagradOptimizer(learning_rate=0.005), model_dir=best_model_dir) dnn def get_predictions(estimator, input_fn): return [x["class_ids"][0] for x in estimator.predict(input_fn=input_fn)] predictions = get_predictions(estimator=dnn, input_fn=predict_test_input_fn) predictions[:10] !pip install seaborn import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline with tf.Session() as session: cm = tf.confusion_matrix(test_sentiments, predictions).eval() LABELS = ['negative', 'positive'] sns.heatmap(cm, annot=True, xticklabels=LABELS, yticklabels=LABELS, fmt='g') xl = plt.xlabel("Predicted") yl = plt.ylabel("Actuals") from sklearn.metrics import classification_report print(classification_report(y_true=test_sentiments, y_pred=predictions, target_names=LABELS)) Explanation: Model Evaluations End of explanation
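One convenient way to eyeball the four encoder configurations side by side is a small bar plot over the results_df built above (a sketch; the column names follow the dictionary keys returned by train_and_evaluate_with_sentence_encoder):
import matplotlib.pyplot as plt

ax = results_df[['Training Accuracy', 'Test Accuracy']].plot(
    kind='bar', figsize=(10, 5), rot=15, ylim=(0.5, 1.0))
ax.set_ylabel('accuracy')
ax.set_title('Pre-trained vs. fine-tuned sentence encoders')
plt.tight_layout()
plt.show()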
5,195
Given the following text description, write Python code to implement the functionality described below step by step Description: Speed Step1: prior to trying to fix #217
Python Code: %%timeit -n 15 lasio.examples.open("6038187_v1.2.las") Explanation: Speed End of explanation import pickle las = lasio.examples.open("logging_levels.las") len(pickle.dumps(las)) las = lasio.examples.open("6038187_v1.2.las") len(pickle.dumps(las)) Explanation: prior to trying to fix #217: 6038187_v1.2.las = 321 ms ± 34.8 ms per loop (mean ± std. dev. of 7 runs, 15 loops each) read-quoted-strings branch: 6038187_v1.2.las = 216 ms ± 37.9 ms per loop (mean ± std. dev. of 7 runs, 15 loops each) Memory End of explanation
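The same comparison can also be scripted outside IPython with the standard timeit module (a rough sketch; the absolute numbers will differ from the %%timeit magic above because of caching and run counts):
import timeit

runs = timeit.repeat(
    stmt='lasio.examples.open("6038187_v1.2.las")',
    setup='import lasio.examples',
    repeat=7, number=15)
# report the per-call time of the best run, mirroring %%timeit -n 15
print(min(runs) / 15)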
5,196
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Logistic-Regression" data-toc-modified-id="Logistic-Regression-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Logistic Regression</a></span><ul class="toc-item"><li><span><a href="#Logistic-Function" data-toc-modified-id="Logistic-Function-1.1"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Logistic Function</a></span></li><li><span><a href="#Interpreting-the-Intercept" data-toc-modified-id="Interpreting-the-Intercept-1.2"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Interpreting the Intercept</a></span></li><li><span><a href="#Defining-The-Cost-Function" data-toc-modified-id="Defining-The-Cost-Function-1.3"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Defining The Cost Function</a></span></li><li><span><a href="#Gradient" data-toc-modified-id="Gradient-1.4"><span class="toc-item-num">1.4&nbsp;&nbsp;</span>Gradient</a></span></li><li><span><a href="#Stochastic/Mini-batch-Gradient" data-toc-modified-id="Stochastic/Mini-batch-Gradient-1.5"><span class="toc-item-num">1.5&nbsp;&nbsp;</span>Stochastic/Mini-batch Gradient</a></span></li><li><span><a href="#Implementation" data-toc-modified-id="Implementation-1.6"><span class="toc-item-num">1.6&nbsp;&nbsp;</span>Implementation</a></span></li><li><span><a href="#Comparing-Result-and-Convergence-Behavior" data-toc-modified-id="Comparing-Result-and-Convergence-Behavior-1.7"><span class="toc-item-num">1.7&nbsp;&nbsp;</span>Comparing Result and Convergence Behavior</a></span></li><li><span><a href="#Pros-and-Cons-of-Logistic-Regression" data-toc-modified-id="Pros-and-Cons-of-Logistic-Regression-1.8"><span class="toc-item-num">1.8&nbsp;&nbsp;</span>Pros and Cons of Logistic Regression</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Reference</a></span></li></ul></div> Step1: Logistic Regression Logistic regression is an excellent tool to know for classification problems, which are problems where the output value that we wish to predict only takes on only a small number of discrete values. Here we'll focus on the binary classification problem, where the output can take on only two distinct classes. To make our examples more concrete, we will consider the Glass dataset. Step2: Our task is to predict the household column using the al column. Let's visualize the relationship between the input and output and also train the logsitic regression to see the outcome that it produces. Step3: As we can see, logistic regression can output the probabilities of observation belonging to a specific class and these probabilities can be converted into class predictions by choosing a cutoff value (e.g. probability higher than 0.5 is classified as class 1). Logistic Function In Logistic Regression, the log-odds of a categorical response being "true" (1) is modeled as a linear combination of the features Step5: The logistic function has some nice properties. The y-value represents the probability and it is always bounded between 0 and 1, which is want we wanted for probabilities. For an x value of 0 you get a 0.5 probability. Also as you get more positive x value you get a higher probability, on the other hand, a more negative x value results in a lower probability. 
Toy sample code of how to predict the probability given the data and the weight is provided below. Step6: Interpreting the Intercept We can check logistic regression's coefficient does in fact generate the log-odds. Step7: Interpretation Step9: Defining The Cost Function When utilizing logistic regression, we are trying to learn the $w$ values in order to maximize the probability of correctly classifying our glasses. Let's say someone did give us some $w$ values of the logistic regression model, how would we determine if they were good values or not? What we would hope is that for the household of class 1, the probability values are close to 1 and for the household of class 0 the probability is close to 0. But we don't care about getting the correct probability for just one observation, we want to correctly classify all our observations. If we assume our data are independent and identically distributed (think of it as all of them are treated equally), we can just take the product of all our individually calculated probabilities and that becomes the objective function we want to maximize. So in math Step12: Note Step13: Comparing Result and Convergence Behavior We'll use the logistic regression code that we've implemented and compare the predicted auc score with scikit-learn's implementation. This only serves to check that the predicted results are similar and that our toy code is correctly implemented. Then we'll also explore the convergence difference between batch gradient descent and stochastic gradient descent.
Python Code: # code for loading the format for the notebook import os # path : store the current path to convert back to it later path = os.getcwd() os.chdir(os.path.join('..', 'notebook_format')) from formats import load_style load_style(plot_style = False) os.chdir(path) # 1. magic for inline plot # 2. magic to print version # 3. magic so that the notebook will reload external python modules # 4. magic to enable retina (high resolution) plots # https://gist.github.com/minrk/3301035 %matplotlib inline %load_ext watermark %load_ext autoreload %autoreload 2 %config InlineBackend.figure_format = 'retina' import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn import metrics from sklearn.linear_model import LogisticRegression %watermark -a 'Ethen' -d -t -v -p numpy,pandas,matplotlib,sklearn Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Logistic-Regression" data-toc-modified-id="Logistic-Regression-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Logistic Regression</a></span><ul class="toc-item"><li><span><a href="#Logistic-Function" data-toc-modified-id="Logistic-Function-1.1"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Logistic Function</a></span></li><li><span><a href="#Interpreting-the-Intercept" data-toc-modified-id="Interpreting-the-Intercept-1.2"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Interpreting the Intercept</a></span></li><li><span><a href="#Defining-The-Cost-Function" data-toc-modified-id="Defining-The-Cost-Function-1.3"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Defining The Cost Function</a></span></li><li><span><a href="#Gradient" data-toc-modified-id="Gradient-1.4"><span class="toc-item-num">1.4&nbsp;&nbsp;</span>Gradient</a></span></li><li><span><a href="#Stochastic/Mini-batch-Gradient" data-toc-modified-id="Stochastic/Mini-batch-Gradient-1.5"><span class="toc-item-num">1.5&nbsp;&nbsp;</span>Stochastic/Mini-batch Gradient</a></span></li><li><span><a href="#Implementation" data-toc-modified-id="Implementation-1.6"><span class="toc-item-num">1.6&nbsp;&nbsp;</span>Implementation</a></span></li><li><span><a href="#Comparing-Result-and-Convergence-Behavior" data-toc-modified-id="Comparing-Result-and-Convergence-Behavior-1.7"><span class="toc-item-num">1.7&nbsp;&nbsp;</span>Comparing Result and Convergence Behavior</a></span></li><li><span><a href="#Pros-and-Cons-of-Logistic-Regression" data-toc-modified-id="Pros-and-Cons-of-Logistic-Regression-1.8"><span class="toc-item-num">1.8&nbsp;&nbsp;</span>Pros and Cons of Logistic Regression</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Reference</a></span></li></ul></div> End of explanation url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/glass/glass.data' col_names = ['id', 'ri', 'na', 'mg', 'al', 'si', 'k', 'ca', 'ba', 'fe', 'glass_type'] glass = pd.read_csv(url, names = col_names, index_col = 'id') glass.sort_values('al', inplace = True) # convert the glass type into binary outcome # types 1, 2, 3 are window glass # types 5, 6, 7 are household glass glass['household'] = glass['glass_type'].map({1: 0, 2: 0, 3: 0, 5: 1, 6: 1, 7: 1}) glass.head() Explanation: Logistic Regression Logistic regression is an excellent tool to know for classification problems, which are problems where the output value that we wish to predict only takes on only a small number of discrete values. 
Here we'll focus on the binary classification problem, where the output can take on only two distinct classes. To make our examples more concrete, we will consider the Glass dataset. End of explanation logreg = LogisticRegression(C = 1e9) X = glass['al'].values.reshape(-1, 1) # sklearn doesn't accept 1d-array, convert it to 2d y = np.array(glass['household']) logreg.fit(X, y) # predict the probability that each observation belongs to class 1 # The first column indicates the predicted probability of class 0, # and the second column indicates the predicted probability of class 1 glass['household_pred_prob'] = logreg.predict_proba(X)[:, 1] # plot the predicted probability (familiarize yourself with the S-shape) # change default figure and font size plt.rcParams['figure.figsize'] = 8, 6 plt.rcParams['font.size'] = 12 plt.scatter(glass['al'], glass['household']) plt.plot(glass['al'], glass['household_pred_prob']) plt.xlabel('al') plt.ylabel('household') plt.show() Explanation: Our task is to predict the household column using the al column. Let's visualize the relationship between the input and output and also train the logsitic regression to see the outcome that it produces. End of explanation x_values = np.linspace(-5, 5, 100) y_values = [1 / (1 + np.exp(-x)) for x in x_values] plt.plot(x_values, y_values) plt.title('Logsitic Function') plt.show() Explanation: As we can see, logistic regression can output the probabilities of observation belonging to a specific class and these probabilities can be converted into class predictions by choosing a cutoff value (e.g. probability higher than 0.5 is classified as class 1). Logistic Function In Logistic Regression, the log-odds of a categorical response being "true" (1) is modeled as a linear combination of the features: \begin{align} \log \left({p\over 1-p}\right) &= w_0 + w_1x_1, ..., w_jx_j \nonumber \ &= w^Tx \nonumber \end{align} Where: $w_{0}$ is the intercept term, and $w_1$ to $w_j$ represents the parameters for all the other features (a total of j features). By convention of we can assume that $x_0 = 1$, so that we can re-write the whole thing using the matrix notation $w^Tx$. This is called the logit function. The equation can be re-arranged into the logistic function: $$p = \frac{e^{w^Tx}} {1 + e^{w^Tx}}$$ Or in the more commonly seen form: $$h_w(x) = \frac{1}{ 1 + e^{-w^Tx} }$$ Let's take a look at the plot of the function: End of explanation def predict_probability(data, weights): probability predicted by the logistic regression score = np.dot(data, weights) predictions = 1 / (1 + np.exp(-score)) return predictions Explanation: The logistic function has some nice properties. The y-value represents the probability and it is always bounded between 0 and 1, which is want we wanted for probabilities. For an x value of 0 you get a 0.5 probability. Also as you get more positive x value you get a higher probability, on the other hand, a more negative x value results in a lower probability. Toy sample code of how to predict the probability given the data and the weight is provided below. End of explanation # compute predicted log-odds for al = 2 using the equation # convert log-odds to odds # convert odds to probability logodds = logreg.intercept_ + logreg.coef_[0] * 2 odds = np.exp(logodds) prob = odds / (1 + odds) print(prob) logreg.predict_proba(2)[:, 1] # examine the coefficient for al print('a1', logreg.coef_[0]) Explanation: Interpreting the Intercept We can check logistic regression's coefficient does in fact generate the log-odds. 
End of explanation # increasing al by 1 (so that al now becomes 3) # increases the log-odds by 4.18 logodds = logodds + logreg.coef_[0] odds = np.exp(logodds) prob = odds / (1 + odds) print(prob) logreg.predict_proba(3)[:, 1] Explanation: Interpretation: 1 unit increase in al is associated with a 4.18 unit increase in the log-odds of the observation being classified as household 1. We can confirm that again by doing the calculation ourselves. End of explanation def compute_avg_log_likelihood(data, label, weights): the function uses a simple check to prevent overflow problem, where numbers gets too large to represent and is converted to inf an example of overflow is provided below, when this problem occurs, simply use the original score (without taking the exponential) scores = np.array( [ -10000, 200, 300 ] ) logexp = np.log( 1 + np.exp(-scores) ) logexp scores = np.dot(data, weights) logexp = np.log(1 + np.exp(-scores)) # simple check to prevent overflow mask = np.isinf(logexp) logexp[mask] = -scores[mask] log_likelihood = np.sum((label - 1) * scores - logexp) / data.shape[0] return log_likelihood Explanation: Defining The Cost Function When utilizing logistic regression, we are trying to learn the $w$ values in order to maximize the probability of correctly classifying our glasses. Let's say someone did give us some $w$ values of the logistic regression model, how would we determine if they were good values or not? What we would hope is that for the household of class 1, the probability values are close to 1 and for the household of class 0 the probability is close to 0. But we don't care about getting the correct probability for just one observation, we want to correctly classify all our observations. If we assume our data are independent and identically distributed (think of it as all of them are treated equally), we can just take the product of all our individually calculated probabilities and that becomes the objective function we want to maximize. So in math: $$\prod_{class1}h_w(x)\prod_{class0}1 - h_w(x)$$ The $\prod$ symbol means take the product of the $h_w(x)$ for the observations that are classified as that class. You will notice that for observations that are labeled as class 0, we are taking 1 minus the logistic function. That is because we are trying to find a value to maximize, and since observations that are labeled as class 0 should have a probability close to zero, 1 minus the probability should be close to 1. This procedure is also known as the maximum likelihood estimation and the following link contains a nice discussion of maximum likelihood using linear regression as an example. Blog: The Principle of Maximum Likelihood Next we will re-write the original cost function as: $$\ell(w) = \sum_{i=1}^{N}y_{i}log(h_w(x_{i})) + (1-y_{i})log(1-h_w(x_{i}))$$ Where: We define $y_{i}$ to be 1 when the $i_{th}$ observation is labeled class 1 and 0 when labeled as class 0, then we only compute $h_w(x_{i})$ for observations that are labeled class 1 and $1 - h_w(x_{i})$ for observations that are labeled class 0, which is still the same idea as the original function. Next we'll transform the original $h_w(x_{i})$ by taking the log. As we'll later see this logarithm transformation will make our cost function more convenient to work with, and because the logarithm is a monotonically increasing function, the logarithm of a function achieves its maximum value at the same points as the function itself. When we take the log, our product across all data points, it becomes a sum. 
See log rules for more details (Hint: log(ab) = log(a) + log(b)). The $N$ simply represents the total number of the data. Often times you'll also see the notation above be simplified in the form of a maximum likelihood estimator: $$ \ell(w) = \sum_{i=1}^{N} log \big( P( y_i \mid x_i, w ) \big) $$ The equation above simply denotes the idea that , $\mathbf{w}$ represents the parameters we would like to estimate the parameters $w$ by maximizing conditional probability of $y_i$ given $x_i$. Now by definition of probability in the logistic regression model: $h_w(x_{i}) = 1 \big/ 1 + e^{-w^T x_i}$ and $1- h_w(x_{i}) = e^{ -w^T x_i } \big/ ( 1 + e^{ -w^T x_i } )$. By substituting these expressions into our $\ell(w)$ equation and simplifying it further we can obtain a simpler expression. $$ \begin{align} \ell(w) &= \sum_{i=1}^{N}y_{i}log(h_w(x_{i})) + (1-y_{i})log(1-h_w(x_{i})) \nonumber \ &= \sum_{i=1}^{N} y_{i} log( \frac{1}{ 1 + e^{ -w^T x_i } } ) + ( 1 - y_{i} ) log( \frac{ e^{ -w^T x_i } }{ 1 + e^{ -w^T x_i } } ) \nonumber \ &= \sum_{i=1}^{N} -y_{i} log( 1 + e^{ -w^T x_i } ) + ( 1 - y_{i} ) ( -w^T x_i - log( 1 + e^{ -w^T x_i } ) ) \nonumber \ &= \sum_{i=1}^{N} ( y_{i} - 1 ) ( w^T x_i ) - log( 1 + e^{ -w^T x_i } ) \nonumber \end{align} $$ We'll use the formula above to compute the log likelihood for the entire dataset, which is used to assess the convergence of the algorithm. Toy code provided below. End of explanation # put the code together into one cell def predict_probability(data, weights): probability predicted by the logistic regression score = np.dot(data, weights) predictions = 1 / (1 + np.exp(-score)) return predictions def compute_avg_log_likelihood(data, label, weights): the function uses a simple check to prevent overflow problem, where numbers gets too large to represent and is converted to inf an example of overflow is provided below, when this problem occurs, simply use the original score (without taking the exponential) scores = np.array([-10000, 200, 300]) logexp = np.log(1 + np.exp(-scores)) logexp scores = np.dot(data, weights) logexp = np.log(1 + np.exp(-scores)) # simple check to prevent overflow mask = np.isinf(logexp) logexp[mask] = -scores[mask] log_likelihood = np.sum((label - 1) * scores - logexp) / data.shape[0] return log_likelihood def logistic_regression(data, label, step_size, batch_size, max_iter): # weights of the model are initialized as zero data_num = data.shape[0] feature_num = data.shape[1] weights = np.zeros(data.shape[1]) # `i` keeps track of the starting index of current batch # and shuffle the data before starting i = 0 permutation = np.random.permutation(data_num) data, label = data[permutation], label[permutation] # do a linear scan over data, for each iteration update the weight using # batches of data, and store the log likelihood record to visualize convergence log_likelihood_record = [] for _ in range(max_iter): # extract the batched data and label use it to compute # the predicted probability using the current weight and the errors batch = slice(i, i + batch_size) batch_data, batch_label = data[batch], label[batch] predictions = predict_probability(batch_data, weights) errors = batch_label - predictions # loop over each coefficient to compute the derivative and update the weight for j in range(feature_num): derivative = np.dot(errors, batch_data[:, j]) weights[j] += step_size * derivative / batch_size # track whether log likelihood is increasing after # each weight update log_likelihood = compute_avg_log_likelihood( data = batch_data, label = 
batch_label, weights = weights ) log_likelihood_record.append(log_likelihood) # update starting index of for the batches # and if we made a complete pass over data, shuffle again # and refresh the index that keeps track of the batch i += batch_size if i + batch_size > data_num: permutation = np.random.permutation(data_num) data, label = data[permutation], label[permutation] i = 0 # We return the list of log likelihoods for plotting purposes. return weights, log_likelihood_record Explanation: Note: We made one tiny modification to the log likelihood function We added a ${1/N}$ term which averages the log likelihood across all data points. The ${1/N}$ term will make it easier for us to compare stochastic gradient ascent with batch gradient ascent later. Gradient Now that we obtain the formula to assess our algorithm, we'll dive into the meat of the algorithm, which is to derive the gradient for the formula (the derivative of the formula with respect to each coefficient): $$\ell(w) = \sum_{i=1}^{N} ( y_{i} - 1 ) ( w^T x_i ) - log( 1 + e^{ -w^T x_i } )$$ And it turns out the derivative of log likelihood with respect to to a single coefficient $w_j$ is as follows (the form is the same for all coefficients): $$ \frac{\partial\ell(w)}{\partial w_j} = \sum_{i=1}^N (x_{ij})\left( y_i - \frac{1}{ 1 + e^{-w^Tx_i} } \right ) $$ To compute it, you simply need the following two terms: $\left( y_i - \frac{1}{ 1 + e^{-w^Tx_i} } \right )$ is the vector containing the difference between the predicted probability and the original label. $x_{ij}$ is the vector containing the $j_{th}$ feature's value. For a step by step derivation, consider going through the following link. Blog: Maximum likelihood and gradient descent demonstration, it uses a slightly different notation, but the walkthrough should still be pretty clear. Stochastic/Mini-batch Gradient The problem with computing the gradient (or so called batched gradient) is the term $\sum_{i=1}^{N}$. This means that we must sum the contributions over all the data points to calculate the gradient, and this can be problematic if the dataset we're studying is extremely large. Thus, in stochastic gradient, we can use a single point as an approximation to the gradient: $$ \frac{\partial\ell_i(w)}{\partial w_j} = (x_{ij})\left( y_i - \frac{1}{ 1 + e^{-w^Tx_i} } \right ) $$ Note1: Because the Stochastic Gradient algorithm uses each row of data in turn to update the gradient, if our data has some sort of implicit ordering, this will negatively affect the convergence of the algorithm. At an extreme, what if we had the data sorted so that all positive reviews came before negative reviews? In that case, even if most reviews are negative, we might converge on an answer of +1 because we never get to see the other data. To avoid this, one practical trick is to shuffle the data before we begin so the rows are in random order. Note2: Stochastic gradient compute the gradient using only 1 data point to update the the parameters, while batch gradient uses all $N$ data points. An alternative to these two extremes is a simple change that allows us to use a mini-batch of $B \leq N$ data points to calculate the gradient. This simple approach is faster than batch gradient but less noisy than stochastic gradient that uses only 1 data point. 
Given a mini-batch (or a set of data points) $\mathbf{x}{i}, \mathbf{x}{i+1} \ldots \mathbf{x}_{i+B}$, the gradient function for this mini-batch of data points is given by: $$ \sum_{s = i}^{i+B} \frac{\partial\ell_s(w)}{\partial w_j} = \frac{1}{B} \sum_{s = i}^{i+B} (x_{sj})\left( y_i - \frac{1}{ 1 + e^{-w^Tx_i} } \right ) $$ Here, the $\frac{1}{B}$ means that we are normalizing the gradient update rule by the batch size $B$. In other words, we update the coefficients using the average gradient over data points (instead of using a pure summation). By using the average gradient, we ensure that the magnitude of the gradient is approximately the same for all batch sizes. This way, we can more easily compare various batch sizes and study the effect it has on the algorithm. Implementation Recall our task is to find the optimal value for each individual weight to lower the cost. This requires taking the partial derivative of the cost/error function with respect to a single weight, and then running gradient descent for each individual weight to update them. Thus, for any individual weight $w_j$, we'll compute the following: $$ w_j^{(t + 1)} = w_j^{(t)} + \alpha * \sum_{s = i}^{i+B} \frac{\partial\ell_s(w)}{\partial w_j}$$ Where: $\alpha$ denotes the the learning rate or so called step size, in other places you'll see it denoted as $\eta$. $w_j^{(t)}$ denotes the weight of the $j_{th}$ feature at iteration $t$. And we'll do this iteratively for each weight, many times, until the whole network's cost function is minimized. End of explanation # manually append the coefficient term, # every good open-source library does not # require this additional step from the user data = np.c_[np.ones(X.shape[0]), X] # using our logistic regression code weights_batch, log_likelihood_batch = logistic_regression( data = data, label = np.array(y), step_size = 5e-1, batch_size = X.shape[0], # batch gradient descent max_iter = 200 ) # compare both logistic regression's auc score logreg = LogisticRegression(C = 1e9) logreg.fit(X, y) pred_prob = logreg.predict_proba(X)[:, 1] proba = predict_probability(data, weights_batch) # check that the auc score is similar auc1 = metrics.roc_auc_score(y, pred_prob) auc2 = metrics.roc_auc_score(y, proba) print('auc', auc1, auc2) weights_sgd, log_likelihood_sgd = logistic_regression( data = data, label = y, step_size = 5e-1, batch_size = 30, # stochastic gradient descent max_iter = 200 ) weights_minibatch, log_likelihood_minibatch = logistic_regression( data = data, label = y, step_size = 5e-1, batch_size = 100, # mini-batch gradient descent max_iter = 200 ) plt.figure(figsize = (10, 7)) plt.plot(log_likelihood_sgd, label = 'stochastic gradient descent') plt.plot(log_likelihood_batch, label = 'batch gradient descent') plt.plot(log_likelihood_minibatch, label = 'mini-batch gradient descent') plt.legend(loc = 'best') plt.xlabel('# of iterations') plt.ylabel('Average log likelihood') plt.title('Convergence Plot') plt.show() Explanation: Comparing Result and Convergence Behavior We'll use the logistic regression code that we've implemented and compare the predicted auc score with scikit-learn's implementation. This only serves to check that the predicted results are similar and that our toy code is correctly implemented. Then we'll also explore the convergence difference between batch gradient descent and stochastic gradient descent. End of explanation
5,197
Given the following text description, write Python code to implement the functionality described below step by step Description: Decision Trees and Random Forests in Python Learning Objectives Explore and analyze data using a Pairplot Train a single Decision Tree Predict and evaluate the Decision Tree Compare the Decision Tree model to a Random Forest Introduction In this lab, you explore and analyze data using a Pairplot, train a single Decision Tree, predict and evaluate the Decision Tree, and compare the Decision Tree model to a Random Forest. Recall that the Decision Tree algorithm belongs to the family of supervised learning algorithms. Unlike other supervised learning algorithms, the decision tree algorithm can be used for solving both regression and classification problems too. Simply, the goal of using a Decision Tree is to create a training model that can use to predict the class or value of the target variable by learning simple decision rules inferred from prior data(training data). Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook Step1: Restart the kernel before proceeding further (On the Notebook menu, select Kernel > Restart Kernel > Restart). Load necessary libraries We will start by importing the necessary libraries for this lab. Step2: Get the Data Step3: Exploratory Data Analysis Lab Task #1 Step4: Train Test Split Let's split up the data into a training set and a test set! Step5: Decision Trees Lab Task #2 Step6: Prediction and Evaluation Lab Task #3 Step7: Tree Visualization Scikit learn actually has some built-in visualization capabilities for decision trees, you won't use this often and it requires you to install the pydot library, but here is an example of what it looks like and the code to execute this Step8: Random Forests Lab Task #4
Python Code: !pip install scikit-learn==0.22.2 Explanation: Decision Trees and Random Forests in Python Learning Objectives Explore and analyze data using a Pairplot Train a single Decision Tree Predict and evaluate the Decision Tree Compare the Decision Tree model to a Random Forest Introduction In this lab, you explore and analyze data using a Pairplot, train a single Decision Tree, predict and evaluate the Decision Tree, and compare the Decision Tree model to a Random Forest. Recall that the Decision Tree algorithm belongs to the family of supervised learning algorithms. Unlike many other supervised learning algorithms, the decision tree algorithm can be used for solving both regression and classification problems. Simply put, the goal of using a Decision Tree is to create a training model that can be used to predict the class or value of the target variable by learning simple decision rules inferred from prior data (training data). Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook. End of explanation # Importing necessary tensorflow library and printing the TF version. import tensorflow as tf print("TensorFlow version: ",tf.version.VERSION) import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline Explanation: Restart the kernel before proceeding further (On the Notebook menu, select Kernel > Restart Kernel > Restart). Load necessary libraries We will start by importing the necessary libraries for this lab. End of explanation # Reading "kyphosis.csv" file using the read_csv() function included in the pandas library df = pd.read_csv('../kyphosis.csv') df.head() Explanation: Get the Data End of explanation # Use the pairplot() function to plot multiple pairwise bivariate distributions in a dataset # TODO 1 # TODO -- Your code here. Explanation: Exploratory Data Analysis Lab Task #1: Check a pairplot for this small dataset. End of explanation from sklearn.model_selection import train_test_split X = df.drop('Kyphosis',axis=1) y = df['Kyphosis'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30) Explanation: Train Test Split Let's split up the data into a training set and a test set! End of explanation from sklearn.tree import DecisionTreeClassifier dtree = DecisionTreeClassifier() # Train Decision Tree Classifier # TODO 2 # TODO -- Your code here. Explanation: Decision Trees Lab Task #2: Train a single decision tree. End of explanation predictions = dtree.predict(X_test) from sklearn.metrics import classification_report,confusion_matrix # build a text report showing the main classification metrics # TODO 3a # TODO -- Your code here. # compute confusion matrix to evaluate the accuracy of a classification # TODO 3b # TODO -- Your code here. Explanation: Prediction and Evaluation Lab Task #3: Evaluate our decision tree.
End of explanation from IPython.display import Image from sklearn.externals.six import StringIO from sklearn.tree import export_graphviz import pydot features = list(df.columns[1:]) features dot_data = StringIO() export_graphviz(dtree, out_file=dot_data,feature_names=features,filled=True,rounded=True) graph = pydot.graph_from_dot_data(dot_data.getvalue()) Image(graph[0].create_png()) Explanation: Tree Visualization Scikit-learn actually has some built-in visualization capabilities for decision trees. You won't use this often, and it requires you to install the pydot library, but here is an example of what it looks like and the code to execute it: End of explanation from sklearn.ensemble import RandomForestClassifier rfc = RandomForestClassifier(n_estimators=100) rfc.fit(X_train, y_train) rfc_pred = rfc.predict(X_test) # compute confusion matrix to evaluate the accuracy # TODO 4a # TODO -- Your code here. # build a text report showing the main metrics # TODO 4b # TODO -- Your code here. Explanation: Random Forests Lab Task #4: Compare the decision tree model to a random forest. End of explanation
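For reference, one possible completion of the #TODO cells above (a plain-sklearn sketch, not the official solution notebook; note that TODO 2 must run before the prediction cell that calls dtree.predict):
# TODO 1: pairwise distributions, colored by the target
sns.pairplot(df, hue='Kyphosis', palette='Set1')

# TODO 2: fit the tree on the training split
dtree.fit(X_train, y_train)

# TODO 3a / 3b: text report and confusion matrix for the tree
print(classification_report(y_test, predictions))
print(confusion_matrix(y_test, predictions))

# TODO 4a / 4b: same evaluation for the random forest
print(confusion_matrix(y_test, rfc_pred))
print(classification_report(y_test, rfc_pred))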
5,198
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Atmos MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties --&gt; Overview 2. Key Properties --&gt; Resolution 3. Key Properties --&gt; Timestepping 4. Key Properties --&gt; Orography 5. Grid --&gt; Discretisation 6. Grid --&gt; Discretisation --&gt; Horizontal 7. Grid --&gt; Discretisation --&gt; Vertical 8. Dynamical Core 9. Dynamical Core --&gt; Top Boundary 10. Dynamical Core --&gt; Lateral Boundary 11. Dynamical Core --&gt; Diffusion Horizontal 12. Dynamical Core --&gt; Advection Tracers 13. Dynamical Core --&gt; Advection Momentum 14. Radiation 15. Radiation --&gt; Shortwave Radiation 16. Radiation --&gt; Shortwave GHG 17. Radiation --&gt; Shortwave Cloud Ice 18. Radiation --&gt; Shortwave Cloud Liquid 19. Radiation --&gt; Shortwave Cloud Inhomogeneity 20. Radiation --&gt; Shortwave Aerosols 21. Radiation --&gt; Shortwave Gases 22. Radiation --&gt; Longwave Radiation 23. Radiation --&gt; Longwave GHG 24. Radiation --&gt; Longwave Cloud Ice 25. Radiation --&gt; Longwave Cloud Liquid 26. Radiation --&gt; Longwave Cloud Inhomogeneity 27. Radiation --&gt; Longwave Aerosols 28. Radiation --&gt; Longwave Gases 29. Turbulence Convection 30. Turbulence Convection --&gt; Boundary Layer Turbulence 31. Turbulence Convection --&gt; Deep Convection 32. Turbulence Convection --&gt; Shallow Convection 33. Microphysics Precipitation 34. Microphysics Precipitation --&gt; Large Scale Precipitation 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics 36. Cloud Scheme 37. Cloud Scheme --&gt; Optical Cloud Properties 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution 40. Observation Simulation 41. Observation Simulation --&gt; Isscp Attributes 42. Observation Simulation --&gt; Cosp Attributes 43. Observation Simulation --&gt; Radar Inputs 44. Observation Simulation --&gt; Lidar Inputs 45. Gravity Waves 46. Gravity Waves --&gt; Orographic Gravity Waves 47. Gravity Waves --&gt; Non Orographic Gravity Waves 48. Solar 49. Solar --&gt; Solar Pathways 50. Solar --&gt; Solar Constant 51. Solar --&gt; Orbital Parameters 52. Solar --&gt; Insolation Ozone 53. Volcanos 54. Volcanos --&gt; Volcanoes Treatment 1. Key Properties --&gt; Overview Top level key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Model Family Is Required Step7: 1.4. Basic Approximations Is Required Step8: 2. Key Properties --&gt; Resolution Characteristics of the model resolution 2.1. Horizontal Resolution Name Is Required Step9: 2.2. Canonical Horizontal Resolution Is Required Step10: 2.3. Range Horizontal Resolution Is Required Step11: 2.4. Number Of Vertical Levels Is Required Step12: 2.5. High Top Is Required Step13: 3. Key Properties --&gt; Timestepping Characteristics of the atmosphere model time stepping 3.1. Timestep Dynamics Is Required Step14: 3.2. Timestep Shortwave Radiative Transfer Is Required Step15: 3.3. Timestep Longwave Radiative Transfer Is Required Step16: 4. Key Properties --&gt; Orography Characteristics of the model orography 4.1. Type Is Required Step17: 4.2. Changes Is Required Step18: 5. Grid --&gt; Discretisation Atmosphere grid discretisation 5.1. 
Overview Is Required Step19: 6. Grid --&gt; Discretisation --&gt; Horizontal Atmosphere discretisation in the horizontal 6.1. Scheme Type Is Required Step20: 6.2. Scheme Method Is Required Step21: 6.3. Scheme Order Is Required Step22: 6.4. Horizontal Pole Is Required Step23: 6.5. Grid Type Is Required Step24: 7. Grid --&gt; Discretisation --&gt; Vertical Atmosphere discretisation in the vertical 7.1. Coordinate Type Is Required Step25: 8. Dynamical Core Characteristics of the dynamical core 8.1. Overview Is Required Step26: 8.2. Name Is Required Step27: 8.3. Timestepping Type Is Required Step28: 8.4. Prognostic Variables Is Required Step29: 9. Dynamical Core --&gt; Top Boundary Type of boundary layer at the top of the model 9.1. Top Boundary Condition Is Required Step30: 9.2. Top Heat Is Required Step31: 9.3. Top Wind Is Required Step32: 10. Dynamical Core --&gt; Lateral Boundary Type of lateral boundary condition (if the model is a regional model) 10.1. Condition Is Required Step33: 11. Dynamical Core --&gt; Diffusion Horizontal Horizontal diffusion scheme 11.1. Scheme Name Is Required Step34: 11.2. Scheme Method Is Required Step35: 12. Dynamical Core --&gt; Advection Tracers Tracer advection scheme 12.1. Scheme Name Is Required Step36: 12.2. Scheme Characteristics Is Required Step37: 12.3. Conserved Quantities Is Required Step38: 12.4. Conservation Method Is Required Step39: 13. Dynamical Core --&gt; Advection Momentum Momentum advection scheme 13.1. Scheme Name Is Required Step40: 13.2. Scheme Characteristics Is Required Step41: 13.3. Scheme Staggering Type Is Required Step42: 13.4. Conserved Quantities Is Required Step43: 13.5. Conservation Method Is Required Step44: 14. Radiation Characteristics of the atmosphere radiation process 14.1. Aerosols Is Required Step45: 15. Radiation --&gt; Shortwave Radiation Properties of the shortwave radiation scheme 15.1. Overview Is Required Step46: 15.2. Name Is Required Step47: 15.3. Spectral Integration Is Required Step48: 15.4. Transport Calculation Is Required Step49: 15.5. Spectral Intervals Is Required Step50: 16. Radiation --&gt; Shortwave GHG Representation of greenhouse gases in the shortwave radiation scheme 16.1. Greenhouse Gas Complexity Is Required Step51: 16.2. ODS Is Required Step52: 16.3. Other Flourinated Gases Is Required Step53: 17. Radiation --&gt; Shortwave Cloud Ice Shortwave radiative properties of ice crystals in clouds 17.1. General Interactions Is Required Step54: 17.2. Physical Representation Is Required Step55: 17.3. Optical Methods Is Required Step56: 18. Radiation --&gt; Shortwave Cloud Liquid Shortwave radiative properties of liquid droplets in clouds 18.1. General Interactions Is Required Step57: 18.2. Physical Representation Is Required Step58: 18.3. Optical Methods Is Required Step59: 19. Radiation --&gt; Shortwave Cloud Inhomogeneity Cloud inhomogeneity in the shortwave radiation scheme 19.1. Cloud Inhomogeneity Is Required Step60: 20. Radiation --&gt; Shortwave Aerosols Shortwave radiative properties of aerosols 20.1. General Interactions Is Required Step61: 20.2. Physical Representation Is Required Step62: 20.3. Optical Methods Is Required Step63: 21. Radiation --&gt; Shortwave Gases Shortwave radiative properties of gases 21.1. General Interactions Is Required Step64: 22. Radiation --&gt; Longwave Radiation Properties of the longwave radiation scheme 22.1. Overview Is Required Step65: 22.2. Name Is Required Step66: 22.3. Spectral Integration Is Required Step67: 22.4. 
Transport Calculation Is Required Step68: 22.5. Spectral Intervals Is Required Step69: 23. Radiation --&gt; Longwave GHG Representation of greenhouse gases in the longwave radiation scheme 23.1. Greenhouse Gas Complexity Is Required Step70: 23.2. ODS Is Required Step71: 23.3. Other Flourinated Gases Is Required Step72: 24. Radiation --&gt; Longwave Cloud Ice Longwave radiative properties of ice crystals in clouds 24.1. General Interactions Is Required Step73: 24.2. Physical Reprenstation Is Required Step74: 24.3. Optical Methods Is Required Step75: 25. Radiation --&gt; Longwave Cloud Liquid Longwave radiative properties of liquid droplets in clouds 25.1. General Interactions Is Required Step76: 25.2. Physical Representation Is Required Step77: 25.3. Optical Methods Is Required Step78: 26. Radiation --&gt; Longwave Cloud Inhomogeneity Cloud inhomogeneity in the longwave radiation scheme 26.1. Cloud Inhomogeneity Is Required Step79: 27. Radiation --&gt; Longwave Aerosols Longwave radiative properties of aerosols 27.1. General Interactions Is Required Step80: 27.2. Physical Representation Is Required Step81: 27.3. Optical Methods Is Required Step82: 28. Radiation --&gt; Longwave Gases Longwave radiative properties of gases 28.1. General Interactions Is Required Step83: 29. Turbulence Convection Atmosphere Convective Turbulence and Clouds 29.1. Overview Is Required Step84: 30. Turbulence Convection --&gt; Boundary Layer Turbulence Properties of the boundary layer turbulence scheme 30.1. Scheme Name Is Required Step85: 30.2. Scheme Type Is Required Step86: 30.3. Closure Order Is Required Step87: 30.4. Counter Gradient Is Required Step88: 31. Turbulence Convection --&gt; Deep Convection Properties of the deep convection scheme 31.1. Scheme Name Is Required Step89: 31.2. Scheme Type Is Required Step90: 31.3. Scheme Method Is Required Step91: 31.4. Processes Is Required Step92: 31.5. Microphysics Is Required Step93: 32. Turbulence Convection --&gt; Shallow Convection Properties of the shallow convection scheme 32.1. Scheme Name Is Required Step94: 32.2. Scheme Type Is Required Step95: 32.3. Scheme Method Is Required Step96: 32.4. Processes Is Required Step97: 32.5. Microphysics Is Required Step98: 33. Microphysics Precipitation Large Scale Cloud Microphysics and Precipitation 33.1. Overview Is Required Step99: 34. Microphysics Precipitation --&gt; Large Scale Precipitation Properties of the large scale precipitation scheme 34.1. Scheme Name Is Required Step100: 34.2. Hydrometeors Is Required Step101: 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics Properties of the large scale cloud microphysics scheme 35.1. Scheme Name Is Required Step102: 35.2. Processes Is Required Step103: 36. Cloud Scheme Characteristics of the cloud scheme 36.1. Overview Is Required Step104: 36.2. Name Is Required Step105: 36.3. Atmos Coupling Is Required Step106: 36.4. Uses Separate Treatment Is Required Step107: 36.5. Processes Is Required Step108: 36.6. Prognostic Scheme Is Required Step109: 36.7. Diagnostic Scheme Is Required Step110: 36.8. Prognostic Variables Is Required Step111: 37. Cloud Scheme --&gt; Optical Cloud Properties Optical cloud properties 37.1. Cloud Overlap Method Is Required Step112: 37.2. Cloud Inhomogeneity Is Required Step113: 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution Sub-grid scale water distribution 38.1. Type Is Required Step114: 38.2. Function Name Is Required Step115: 38.3. Function Order Is Required Step116: 38.4. Convection Coupling Is Required Step117: 39. 
Cloud Scheme --&gt; Sub Grid Scale Ice Distribution Sub-grid scale ice distribution 39.1. Type Is Required Step118: 39.2. Function Name Is Required Step119: 39.3. Function Order Is Required Step120: 39.4. Convection Coupling Is Required Step121: 40. Observation Simulation Characteristics of observation simulation 40.1. Overview Is Required Step122: 41. Observation Simulation --&gt; Isscp Attributes ISSCP Characteristics 41.1. Top Height Estimation Method Is Required Step123: 41.2. Top Height Direction Is Required Step124: 42. Observation Simulation --&gt; Cosp Attributes CFMIP Observational Simulator Package attributes 42.1. Run Configuration Is Required Step125: 42.2. Number Of Grid Points Is Required Step126: 42.3. Number Of Sub Columns Is Required Step127: 42.4. Number Of Levels Is Required Step128: 43. Observation Simulation --&gt; Radar Inputs Characteristics of the cloud radar simulator 43.1. Frequency Is Required Step129: 43.2. Type Is Required Step130: 43.3. Gas Absorption Is Required Step131: 43.4. Effective Radius Is Required Step132: 44. Observation Simulation --&gt; Lidar Inputs Characteristics of the cloud lidar simulator 44.1. Ice Types Is Required Step133: 44.2. Overlap Is Required Step134: 45. Gravity Waves Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources. 45.1. Overview Is Required Step135: 45.2. Sponge Layer Is Required Step136: 45.3. Background Is Required Step137: 45.4. Subgrid Scale Orography Is Required Step138: 46. Gravity Waves --&gt; Orographic Gravity Waves Gravity waves generated due to the presence of orography 46.1. Name Is Required Step139: 46.2. Source Mechanisms Is Required Step140: 46.3. Calculation Method Is Required Step141: 46.4. Propagation Scheme Is Required Step142: 46.5. Dissipation Scheme Is Required Step143: 47. Gravity Waves --&gt; Non Orographic Gravity Waves Gravity waves generated by non-orographic processes. 47.1. Name Is Required Step144: 47.2. Source Mechanisms Is Required Step145: 47.3. Calculation Method Is Required Step146: 47.4. Propagation Scheme Is Required Step147: 47.5. Dissipation Scheme Is Required Step148: 48. Solar Top of atmosphere solar insolation characteristics 48.1. Overview Is Required Step149: 49. Solar --&gt; Solar Pathways Pathways for solar forcing of the atmosphere 49.1. Pathways Is Required Step150: 50. Solar --&gt; Solar Constant Solar constant and top of atmosphere insolation characteristics 50.1. Type Is Required Step151: 50.2. Fixed Value Is Required Step152: 50.3. Transient Characteristics Is Required Step153: 51. Solar --&gt; Orbital Parameters Orbital parameters and top of atmosphere insolation characteristics 51.1. Type Is Required Step154: 51.2. Fixed Reference Date Is Required Step155: 51.3. Transient Method Is Required Step156: 51.4. Computation Method Is Required Step157: 52. Solar --&gt; Insolation Ozone Impact of solar insolation on stratospheric ozone 52.1. Solar Ozone Impact Is Required Step158: 53. Volcanos Characteristics of the implementation of volcanoes 53.1. Overview Is Required Step159: 54. Volcanos --&gt; Volcanoes Treatment Treatment of volcanoes in the atmosphere 54.1. Volcanoes Implementation Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'cnrm-esm2-1', 'atmos') Explanation: ES-DOC CMIP6 Model Properties - Atmos MIP Era: CMIP6 Institute: CNRM-CERFACS Source ID: CNRM-ESM2-1 Topic: Atmos Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. Properties: 156 (127 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:52 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties --&gt; Overview 2. Key Properties --&gt; Resolution 3. Key Properties --&gt; Timestepping 4. Key Properties --&gt; Orography 5. Grid --&gt; Discretisation 6. Grid --&gt; Discretisation --&gt; Horizontal 7. Grid --&gt; Discretisation --&gt; Vertical 8. Dynamical Core 9. Dynamical Core --&gt; Top Boundary 10. Dynamical Core --&gt; Lateral Boundary 11. Dynamical Core --&gt; Diffusion Horizontal 12. Dynamical Core --&gt; Advection Tracers 13. Dynamical Core --&gt; Advection Momentum 14. Radiation 15. Radiation --&gt; Shortwave Radiation 16. Radiation --&gt; Shortwave GHG 17. Radiation --&gt; Shortwave Cloud Ice 18. Radiation --&gt; Shortwave Cloud Liquid 19. Radiation --&gt; Shortwave Cloud Inhomogeneity 20. Radiation --&gt; Shortwave Aerosols 21. Radiation --&gt; Shortwave Gases 22. Radiation --&gt; Longwave Radiation 23. Radiation --&gt; Longwave GHG 24. Radiation --&gt; Longwave Cloud Ice 25. Radiation --&gt; Longwave Cloud Liquid 26. Radiation --&gt; Longwave Cloud Inhomogeneity 27. Radiation --&gt; Longwave Aerosols 28. Radiation --&gt; Longwave Gases 29. Turbulence Convection 30. Turbulence Convection --&gt; Boundary Layer Turbulence 31. Turbulence Convection --&gt; Deep Convection 32. Turbulence Convection --&gt; Shallow Convection 33. Microphysics Precipitation 34. Microphysics Precipitation --&gt; Large Scale Precipitation 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics 36. Cloud Scheme 37. Cloud Scheme --&gt; Optical Cloud Properties 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution 40. Observation Simulation 41. Observation Simulation --&gt; Isscp Attributes 42. Observation Simulation --&gt; Cosp Attributes 43. Observation Simulation --&gt; Radar Inputs 44. Observation Simulation --&gt; Lidar Inputs 45. Gravity Waves 46. Gravity Waves --&gt; Orographic Gravity Waves 47. Gravity Waves --&gt; Non Orographic Gravity Waves 48. Solar 49. Solar --&gt; Solar Pathways 50. Solar --&gt; Solar Constant 51. Solar --&gt; Orbital Parameters 52. 
Solar --&gt; Insolation Ozone 53. Volcanos 54. Volcanos --&gt; Volcanoes Treatment 1. Key Properties --&gt; Overview Top level key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "AGCM" # "ARCM" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of atmospheric model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "primitive equations" # "non-hydrostatic" # "anelastic" # "Boussinesq" # "hydrostatic" # "quasi-hydrostatic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the atmosphere. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Resolution Characteristics of the model resolution 2.1. Horizontal Resolution Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.4. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on the computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! 
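# --- Illustrative aside (not part of the ES-DOC template): the TODO cells are
# completed by replacing the placeholder argument of DOC.set_value. A plain
# INTEGER property such as number_of_vertical_levels (above) takes an int, and
# a BOOLEAN flag such as high_top (below) takes a Python bool. The values here
# are hypothetical, e.g. for a 91-level, high-top configuration:
# DOC.set_value(91)    # INTEGER property, no quotes
# DOC.set_value(True)  # BOOLEAN property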
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 2.5. High Top Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Timestepping Characteristics of the atmosphere model time stepping 3.1. Timestep Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the dynamics, e.g. 30 min. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.2. Timestep Shortwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the shortwave radiative transfer, e.g. 1.5 hours. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Timestep Longwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the longwave radiative transfer, e.g. 3 hours. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "modified" # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Orography Characteristics of the model orography 4.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the orography. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.changes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "related to ice sheets" # "related to tectonics" # "modified mean" # "modified variance if taken into account in model (cf gravity waves)" # TODO - please enter value(s) Explanation: 4.2. Changes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N If the orography type is modified describe the time adaptation changes. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Grid --&gt; Discretisation Atmosphere grid discretisation 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of grid discretisation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "spectral" # "fixed grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6. 
Grid --&gt; Discretisation --&gt; Horizontal Atmosphere discretisation in the horizontal 6.1. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "finite elements" # "finite volumes" # "finite difference" # "centered finite difference" # TODO - please enter value(s) Explanation: 6.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "second" # "third" # "fourth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.3. Scheme Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation function order End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "filter" # "pole rotation" # "artificial island" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.4. Horizontal Pole Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal discretisation pole singularity treatment End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Gaussian" # "Latitude-Longitude" # "Cubed-Sphere" # "Icosahedral" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.5. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "isobaric" # "sigma" # "hybrid sigma-pressure" # "hybrid pressure" # "vertically lagrangian" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 7. Grid --&gt; Discretisation --&gt; Vertical Atmosphere discretisation in the vertical 7.1. Coordinate Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Type of vertical coordinate system End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Dynamical Core Characteristics of the dynamical core 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere dynamical core End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the dynamical core of the model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
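# --- Illustrative aside (not part of the template): for an ENUM property with
# cardinality 1.1, such as the timestepping framework type below, a completed
# cell passes exactly one string copied verbatim from the "Valid Choices" list,
# e.g. the hypothetical pick:
# DOC.set_value("semi-implicit")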
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Adams-Bashforth" # "explicit" # "implicit" # "semi-implicit" # "leap frog" # "multi-step" # "Runge Kutta fifth order" # "Runge Kutta second order" # "Runge Kutta third order" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.3. Timestepping Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestepping framework type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "surface pressure" # "wind components" # "divergence/curl" # "temperature" # "potential temperature" # "total water" # "water vapour" # "water liquid" # "water ice" # "total water moments" # "clouds" # "radiation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of the model prognostic variables End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9. Dynamical Core --&gt; Top Boundary Type of boundary layer at the top of the model 9.1. Top Boundary Condition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary condition End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.2. Top Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary heat treatment End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.3. Top Wind Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary wind treatment End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. Dynamical Core --&gt; Lateral Boundary Type of lateral boundary condition (if the model is a regional model) 10.1. Condition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Type of lateral boundary condition End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11. Dynamical Core --&gt; Diffusion Horizontal Horizontal diffusion scheme 11.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal diffusion scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "iterated Laplacian" # "bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal diffusion scheme method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heun" # "Roe and VanLeer" # "Roe and Superbee" # "Prather" # "UTOPIA" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12. Dynamical Core --&gt; Advection Tracers Tracer advection scheme 12.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Tracer advection scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Eulerian" # "modified Euler" # "Lagrangian" # "semi-Lagrangian" # "cubic semi-Lagrangian" # "quintic semi-Lagrangian" # "mass-conserving" # "finite volume" # "flux-corrected" # "linear" # "quadratic" # "quartic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.2. Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme characteristics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "dry mass" # "tracer mass" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.3. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme conserved quantities End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Priestley algorithm" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.4. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracer advection scheme conservation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "VanLeer" # "Janjic" # "SUPG (Streamline Upwind Petrov-Galerkin)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Dynamical Core --&gt; Advection Momentum Momentum advection scheme 13.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Momentum advection schemes name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "2nd order" # "4th order" # "cell-centred" # "staggered grid" # "semi-staggered grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. 
Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme characteristics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa D-grid" # "Arakawa E-grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.3. Scheme Staggering Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme staggering type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Angular momentum" # "Horizontal momentum" # "Enstrophy" # "Mass" # "Total energy" # "Vorticity" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.4. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme conserved quantities End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.5. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme conservation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.aerosols') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "sulphate" # "nitrate" # "sea salt" # "dust" # "ice" # "organic" # "BC (black carbon / soot)" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "polar stratospheric ice" # "NAT (nitric acid trihydrate)" # "NAD (nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particle)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Radiation Characteristics of the atmosphere radiation process 14.1. Aerosols Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Aerosols whose radiative effect is taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Radiation --&gt; Shortwave Radiation Properties of the shortwave radiation scheme 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of shortwave radiation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme spectral integration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Shortwave radiation transport calculation methods End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme number of spectral intervals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16. Radiation --&gt; Shortwave GHG Representation of greenhouse gases in the shortwave radiation scheme 16.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.2. ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.3. 
Other Flourinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17. Radiation --&gt; Shortwave Cloud Ice Shortwave radiative properties of ice crystals in clouds 17.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud ice crystals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18. Radiation --&gt; Shortwave Cloud Liquid Shortwave radiative properties of liquid droplets in clouds 18.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud liquid droplets End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
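# --- Illustrative aside (not part of the template): for an ENUM property with
# cardinality 1.N, such as the optical methods below, the "# PROPERTY VALUE(S)"
# convention suggests calling DOC.set_value once per selected item (an
# assumption about the pyesdoc API, not confirmed here). Hypothetical picks:
# DOC.set_value("geometric optics")
# DOC.set_value("Mie theory")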
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19. Radiation --&gt; Shortwave Cloud Inhomogeneity Cloud inhomogeneity in the shortwave radiation scheme 19.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20. Radiation --&gt; Shortwave Aerosols Shortwave radiative properties of aerosols 20.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with aerosols End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21. Radiation --&gt; Shortwave Gases Shortwave radiative properties of gases 21.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with gases End of explanation # PROPERTY ID - DO NOT EDIT ! 
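# --- Illustrative aside (not part of the template): free-text STRING properties,
# such as the longwave radiation overview below, take an ordinary Python string.
# A hypothetical entry, reusing terms from this section's valid choices:
# DOC.set_value("Longwave fluxes computed with a correlated-k, two-stream scheme.")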
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22. Radiation --&gt; Longwave Radiation Properties of the longwave radiation scheme 22.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of longwave radiation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the longwave radiation scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme spectral integration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Longwave radiation transport calculation methods End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 22.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme number of spectral intervals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Radiation --&gt; Longwave GHG Representation of greenhouse gases in the longwave radiation scheme 23.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.2. 
ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.3. Other Flourinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24. Radiation --&gt; Longwave Cloud Ice Longwave radiative properties of ice crystals in clouds 24.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud ice crystals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24.2. Physical Reprenstation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Radiation --&gt; Longwave Cloud Liquid Longwave radiative properties of liquid droplets in clouds 25.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud liquid droplets End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Radiation --&gt; Longwave Cloud Inhomogeneity Cloud inhomogeneity in the longwave radiation scheme 26.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Radiation --&gt; Longwave Aerosols Longwave radiative properties of aerosols 27.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with aerosols End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 28. Radiation --&gt; Longwave Gases Longwave radiative properties of gases 28.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with gases End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29. Turbulence Convection Atmosphere Convective Turbulence and Clouds 29.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere convection and turbulence End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Mellor-Yamada" # "Holtslag-Boville" # "EDMF" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30. Turbulence Convection --&gt; Boundary Layer Turbulence Properties of the boundary layer turbulence scheme 30.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Boundary layer turbulence scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "TKE prognostic" # "TKE diagnostic" # "TKE coupled with water" # "vertical profile of Kz" # "non-local diffusion" # "Monin-Obukhov similarity" # "Coastal Buddy Scheme" # "Coupled with convection" # "Coupled with gravity waves" # "Depth capped at cloud base" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Boundary layer turbulence scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Closure Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boundary layer turbulence scheme closure order End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 30.4. Counter Gradient Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Uses boundary layer turbulence scheme counter gradient End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 31. Turbulence Convection --&gt; Deep Convection Properties of the deep convection scheme 31.1. 
Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Deep convection scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "adjustment" # "plume ensemble" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CAPE" # "bulk" # "ensemble" # "CAPE/WFN based" # "TKE/CIN based" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vertical momentum transport" # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "updrafts" # "downdrafts" # "radiative effect of anvils" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of deep convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.5. Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32. Turbulence Convection --&gt; Shallow Convection Properties of the shallow convection scheme 32.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Shallow convection scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "cumulus-capped boundary layer" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N shallow convection scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "same as deep (unified)" # "included in boundary layer turbulence" # "separate diagnosis" # TODO - please enter value(s) Explanation: 32.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 shallow convection scheme method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of shallow convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.5. Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for shallow convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33. Microphysics Precipitation Large Scale Cloud Microphysics and Precipitation 33.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of large scale cloud microphysics and precipitation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34. Microphysics Precipitation --&gt; Large Scale Precipitation Properties of the large scale precipitation scheme 34.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the large scale precipitation parameterisation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "liquid rain" # "snow" # "hail" # "graupel" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 34.2. Hydrometeors Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Precipitating hydrometeors taken into account in the large scale precipitation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics Properties of the large scale cloud microphysics scheme 35.1. 
Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the microphysics parameterisation scheme used for large scale clouds. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mixed phase" # "cloud droplets" # "cloud ice" # "ice nucleation" # "water vapour deposition" # "effect of raindrops" # "effect of snow" # "effect of graupel" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 35.2. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Large scale cloud microphysics processes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36. Cloud Scheme Characteristics of the cloud scheme 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the atmosphere cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "atmosphere_radiation" # "atmosphere_microphysics_precipitation" # "atmosphere_turbulence_convection" # "atmosphere_gravity_waves" # "atmosphere_solar" # "atmosphere_volcano" # "atmosphere_cloud_simulator" # TODO - please enter value(s) Explanation: 36.3. Atmos Coupling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Atmosphere components that are linked to the cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 36.4. Uses Separate Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "entrainment" # "detrainment" # "bulk cloud" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 36.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Processes included in the cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 36.6. Prognostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a prognostic scheme? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 36.7. Diagnostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a diagnostic scheme? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud amount" # "liquid" # "ice" # "rain" # "snow" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 36.8. Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List the prognostic variables used by the cloud scheme, if applicable. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "random" # "maximum" # "maximum-random" # "exponential" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 37. Cloud Scheme --&gt; Optical Cloud Properties Optical cloud properties 37.1. Cloud Overlap Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account overlapping of cloud layers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.2. Cloud Inhomogeneity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account cloud inhomogeneity End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) Explanation: 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution Sub-grid scale water distribution 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 38.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 38.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function order End of explanation
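For illustration, a completed pair of these cells might look as follows. This is a minimal sketch: the DOC.set_id and DOC.set_value calls are the ones the cells above already use, but the function name and order shown are hypothetical placeholder values, not real model metadata.

# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# Hypothetical STRING value; substitute your model's actual distribution function name.
DOC.set_value("triangular PDF")

# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# INTEGER property (cardinality 1.1): note the bare, unquoted literal.
DOC.set_value(2)

As the templates suggest, STRING and ENUM properties take a quoted value, while INTEGER, FLOAT and BOOLEAN properties take a bare Python literal.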
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) Explanation: 38.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale water distribution coupling with convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) Explanation: 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution Sub-grid scale ice distribution 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 39.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 39.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function order End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) Explanation: 39.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale ice distribution coupling with convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 40. Observation Simulation Characteristics of observation simulation 40.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of observation simulator characteristics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "no adjustment" # "IR brightness" # "visible optical depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41. Observation Simulation --&gt; Isscp Attributes ISSCP Characteristics 41.1. Top Height Estimation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator ISSCP top height estimation method End of explanation
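Several of the ENUM properties in this section have cardinality 1.N and also offer an "Other: [Please specify]" escape hatch. A hedged sketch of both patterns follows; it assumes DOC.set_value can simply be called once per selected choice, so check your pyesdoc version for the exact multi-value and 'Other' syntax, and note that the text after 'Other:' is a hypothetical example rather than a recognised vocabulary term.

# Two of the listed choices for a 1.N ENUM property:
DOC.set_value("IR brightness")
DOC.set_value("visible optical depth")

# Or a custom method supplied through the 'Other' choice:
DOC.set_value("Other: lidar-based height ranking")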
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "lowest altitude level" # "highest altitude level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41.2. Top Height Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator ISSCP top height direction End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Inline" # "Offline" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 42. Observation Simulation --&gt; Cosp Attributes CFMIP Observational Simulator Package attributes 42.1. Run Configuration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP run configuration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 42.2. Number Of Grid Points Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of grid points End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 42.3. Number Of Sub Columns Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of sub-columns used to simulate sub-grid variability End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 42.4. Number Of Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of levels End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 43. Observation Simulation --&gt; Radar Inputs Characteristics of the cloud radar simulator 43.1. Frequency Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar frequency (Hz) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "surface" # "space borne" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 43.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 43.3. Gas Absorption Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses gas absorption End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 43.4.
Effective Radius Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses effective radius End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "ice spheres" # "ice non-spherical" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 44. Observation Simulation --&gt; Lidar Inputs Characteristics of the cloud lidar simulator 44.1. Ice Types Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator lidar ice type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "max" # "random" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 44.2. Overlap Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator lidar overlap End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 45. Gravity Waves Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources. 45.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of gravity wave parameterisation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Rayleigh friction" # "Diffusive sponge layer" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 45.2. Sponge Layer Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sponge layer in the upper levels in order to avoid gravity wave reflection at the top. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "continuous spectrum" # "discrete spectrum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 45.3. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background wave distribution End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "effect on drag" # "effect on lifting" # "enhanced topography" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 45.4. Subgrid Scale Orography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Subgrid scale orography effects taken into account. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 46. Gravity Waves --&gt; Orographic Gravity Waves Gravity waves generated due to the presence of orography 46.1. 
Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the orographic gravity wave scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "linear mountain waves" # "hydraulic jump" # "envelope orography" # "low level flow blocking" # "statistical sub-grid scale variance" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 46.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave source mechanisms End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "non-linear calculation" # "more than two cardinal directions" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 46.3. Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave calculation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "includes boundary layer ducting" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 46.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave propagation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 46.5. Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave dissipation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 47. Gravity Waves --&gt; Non Orographic Gravity Waves Gravity waves generated by non-orographic processes. 47.1. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the non-orographic gravity wave scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convection" # "precipitation" # "background spectrum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 47.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave source mechanisms End of explanation
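Because the valid choices for these ENUM properties live only in comments, a typo in a hand-entered value is easy to make and hard to spot. A small, self-contained guard like the sketch below (a hypothetical helper, not part of the ES-DOC API) can validate a value before it reaches DOC.set_value:

def checked_value(value, valid_choices):
    # Accept listed choices verbatim, plus free text via the 'Other:' escape.
    if value in valid_choices or value.startswith("Other: "):
        return value
    raise ValueError("%r is not one of %r" % (value, valid_choices))

DOC.set_value(checked_value("linear theory", ["linear theory", "non-linear theory"]))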
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "spatially dependent" # "temporally dependent" # TODO - please enter value(s) Explanation: 47.3. Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave calculation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 47.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave propagation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 47.5. Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave dissipation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 48. Solar Top of atmosphere solar insolation characteristics 48.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of solar insolation of the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SW radiation" # "precipitating energetic particles" # "cosmic rays" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 49. Solar --&gt; Solar Pathways Pathways for solar forcing of the atmosphere 49.1. Pathways Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Pathways for the solar forcing of the atmosphere model domain End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) Explanation: 50. Solar --&gt; Solar Constant Solar constant and top of atmosphere insolation characteristics 50.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the solar constant. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 50.2. Fixed Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the solar constant is fixed, enter the value of the solar constant (W m-2). End of explanation # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 50.3. Transient Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 solar constant transient characteristics (W m-2) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) Explanation: 51. Solar --&gt; Orbital Parameters Orbital parameters and top of atmosphere insolation characteristics 51.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of orbital parameters End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 51.2. Fixed Reference Date Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date for fixed orbital parameters (yyyy) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 51.3. Transient Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of transient orbital parameters End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Berger 1978" # "Laskar 2004" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 51.4. Computation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used for computing orbital parameters. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 52. Solar --&gt; Insolation Ozone Impact of solar insolation on stratospheric ozone 52.1. Solar Ozone Impact Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does top of atmosphere insolation impact on stratospheric ozone? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 53. Volcanos Characteristics of the implementation of volcanoes 53.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the implementation of volcanic effects in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "high frequency solar constant anomaly" # "stratospheric aerosols optical thickness" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 54. Volcanos --&gt; Volcanoes Treatment Treatment of volcanoes in the atmosphere 54.1. 
Volcanoes Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How volcanic effects are modeled in the atmosphere. End of explanation
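When many TODO cells need filling, it can be easier to drive the notebook from a plain dictionary than to edit each cell by hand. The sketch below assumes only the two calls used throughout the notebook above (DOC.set_id and DOC.set_value); every property value shown is a hypothetical placeholder, not real model metadata.

ANSWERS = {
    'cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation':
        ["stratospheric aerosols optical thickness"],
    'cmip6.atmos.solar.solar_constant.type':
        ["transient"],
}

for prop_id, values in ANSWERS.items():
    DOC.set_id(prop_id)
    for v in values:  # repeat set_value for multi-valued (1.N) properties
        DOC.set_value(v)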
Given the following text description, write Python code to implement the functionality described below step by step Description: Lecture 1. Introduction to Tensorflow Deep Learning Library Theano, Torch, Tensorflow가 가장 많이 사용되는 lib | Software | Creator |Opensource | Interface | |------------------|----------------------------------------------------|------------|-------------| |Apache SINGA | Apache Incubator |Yes | Python, C++, Java | |Caffe | Berkeley Vision and Learning Center |Yes | Python, MATLAB | |Deeplearning4j | Skymind engineering team; Deeplearning4j community; originally Adam Gibson |Yes | Java, Scala, Clojure, Python (Keras) | |Dlib | Davis King |Yes | C++ | |Keras | François Chollet |Yes | Python | |Microsoft Cognitive Toolkit | Microsoft Research |Yes | Python, C++, Command line, BrainScript (.NET on roadmap) | |MXNet | Distributed (Deep) Machine Learning Community |Yes | C++, Python, Julia, Matlab, JavaScript, Go, R, Scala, Perl | |Neural Designer | Artelnics |No | Graphical user interface | |OpenNN | Artelnics |Yes | C++ | |TensorFlow | Google Brain team |Yes | Python (Keras), C/C++, Java, Go, R | |Theano | Université de Montréal |Yes | Python | |Torch | Ronan Collobert, Koray Kavukcuoglu, Clement Farabet |Yes | Lua, LuaJIT, C, utility library for C++/OpenCL | |Wolfram Mathematica | Wolfram Research |Yes | Command line, Java, C++ | Why TensorFlow? Python API Portability Step1: 2. TF-slim(tf.contrib.slim) Contrib 중 하나의 library로, 상위 수준의 개념(argument scoping, layer, variable)으로 모델을 짧고 쉽게 정의할 수 있게 만듦 많이 사용되는 regularizer를 사용하여 모델을 단순하게 함. VGG, AlexNet과 같이 많이 쓰이는 모델을 개발 해놓음 without TF-Slim Step2: with TF-Slim Step3: Data Flow Graph Step4: How to get the value of a? Session을 시작해줘야 operation이 작동함 Step5: More graphs Step6: Why graphs 계산 저장 가능 계산을 작은 단위로 나누어 자동차별화를 용이하게 해줌 분산 컴퓨팅을 가능케하고 CPU, GPU 혹은 여러 장치를 한번에 사용할 수 있게 해줌 많은 기계학습 모델들이 이미 그래프를 통해 가르쳐지고 시각화되고 있기 때문 Lecture 2 Step7: 텐서보드 실행법 터미널에서 python [yourprogram.py] tensorboard --logdir="./graphs" http Step8: 텐서의 원소로 특정한 값을 생성할 수 있음 유사함수 Step9: tf.zeros_like(input, dtype=None, name=None, opitmize=True) 모든 원소를 0으로 만드는 함수 Step10: tf.one(shape, dtype=tf.float32, name=None) 모든 원소가 1인 텐서쉐입을 만드는 함수 Step11: tf.ones_like(input_tensor, dtype=None, name=None, optimize=True) 모든 원소를 1로 만드는 함수 Step12: tf.fill(dims, value, name=None) 텐서를 한가지 스칼라값으로 채움 Step13: tf.linspace(start, stop, num, name=None) 특정 구간에서 균등하게 증가(개수만큼)하는 수열 생성 Step14: tf.range(start, limit=None, delta=1, dtype=None, name='range') 등차수열 생성 Step15: 텐서는 반복문에 사용할 수 없음 Step16: 특정 분포에서 난수를 생성할 수 있음 tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None) 정규분포로부터의 난수 생성<br><br/> tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None,name=None) 절단정규분포로부터의 난수 생성<br><br/> tf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None,name=None) 균등분포로부터의 난수 생성<br><br/> tf.multinomial(logits, num_samples, seed=None, name=None) 다항분포로부터의 난수 생성<br><br/> tf.random_gamma(shape, alpha, beta=None, dtype=tf.float32, seed=None, name=None) 감마분포로부터의 난수 생성<br><br/> tf.random_shuffle(value, seed=None, name=None) 값의 첫번째 차원을 기준으로 랜덤하게 섞어줌<br><br/> tf.random_crop(value, size, seed=None, name=None) 텐서를 주어진 value만큼 랜덤하게 잘라냄 Step17: 3. Math Operations Step18: 4. 
Data Types ### Python Native Types Step19: TensorFlow Native Types 텐서플로우는 Numpy 처럼 tf.int32, tf.float32 독자적인 데이터 타입을 사용함 https Step20: Declare variables tf.Variable 함수로 생성 Step21: 변수를 사용하기 전에는 항상 변수를 초기화해야함 tf.global_variables_initializer() 함수는 모든 변수를 초기화해줌 Step22: To initialize only as subset of varuables tf.variables_initializer() Step23: Evaluate values of variables 그냥 프린트하면 텐서와 유형, 쉐입만 볼수 있음 Step24: eval()함수를 사용하면 값까지 볼수 있음 Step25: Assign values to variables tf.Variable.assign()함수를 사용함 Step26: Tensorflow session은 각각 유지됨 각각의 session은 그래프에서 정의된 변수들의 현재값을 각각 가질 수 있다 Step27: 다른 변수를 사용해서 변수를 만들수 있음 Step28: 6. InteractiveSession 인터렉티브 세션은 그것 자체로 디폴트세션으로 작동함 따로 run()을 선언하지 않아도 실행됨 Step29: 7. Control Dependencies 두개 이상의 오퍼레이션이 존재할 때, 오퍼레이션의 실행 순서 지정 tf.Graph.control_dependencies(control_inputs) 함수사용 Step30: 8. Placeholders and feed_dict 우리는 값을 알지 못하는 상태에서 그래프 먼저 그려야한다! 그래프를 그리고 나중에 데이터를 공급해주기 위해 placeholder사용함 ### tf.placeholder(dtype, shape=None, name=None) Step31: 꼭 placeholder가 아니여도 feed 가능 tf.Graph.is_feedable(tensor) 위의 함수를 사용하여 feed가능 여부를 확인할 수 있음 Step32: 9. The trap of lazy loading 속도를 조금 더 빠르게 해주는 팁 연산을 최대한 뒤로 미루는 것
Python Code: # Load Module import numpy as np from sklearn import datasets from sklearn import metrics from sklearn import model_selection import tensorflow as tf # Load dataset. iris = datasets.load_iris() # 총 150개의 붓꽃 사진과 class load x_train, x_test, y_train, y_test = model_selection.train_test_split(iris.data, iris.target, test_size=0.2, random_state=42) print('train and test ready') x_train[:10] # 각 열은 꽃받침 길이, 꽃받침 너비, 꽃잎 길이, 꽃잎 너비 y_train[:10] # 0,1,2는 꽃의 종 의미 # 10, 20, 10 단위로 각각 3층 DNN 생성 feature_columns = tf.contrib.learn.infer_real_valued_columns_from_input(x_train) # list feature column classifier = tf.contrib.learn.DNNClassifier(feature_columns=feature_columns, hidden_units=[10, 20, 10], n_classes=3) # Train. classifier.fit(x_train, y_train, steps=200) predictions = list(classifier.predict(x_test, as_iterable=True)) # Score with sklearn. score = metrics.accuracy_score(y_test, predictions) print('Accuracy: {0:f}'.format(score)) new_samples = np.array( [[6.4, 3.2, 4.5, 1.5], [5.8, 3.1, 5.0, 1.7]], dtype=float) y = list(classifier.predict(new_samples, as_iterable=True)) print('Predictions: {}'.format(str(y))) Explanation: Lecture 1. Introduction to Tensorflow Deep Learning Library Theano, Torch, Tensorflow가 가장 많이 사용되는 lib | Software | Creator |Opensource | Interface | |------------------|----------------------------------------------------|------------|-------------| |Apache SINGA | Apache Incubator |Yes | Python, C++, Java | |Caffe | Berkeley Vision and Learning Center |Yes | Python, MATLAB | |Deeplearning4j | Skymind engineering team; Deeplearning4j community; originally Adam Gibson |Yes | Java, Scala, Clojure, Python (Keras) | |Dlib | Davis King |Yes | C++ | |Keras | François Chollet |Yes | Python | |Microsoft Cognitive Toolkit | Microsoft Research |Yes | Python, C++, Command line, BrainScript (.NET on roadmap) | |MXNet | Distributed (Deep) Machine Learning Community |Yes | C++, Python, Julia, Matlab, JavaScript, Go, R, Scala, Perl | |Neural Designer | Artelnics |No | Graphical user interface | |OpenNN | Artelnics |Yes | C++ | |TensorFlow | Google Brain team |Yes | Python (Keras), C/C++, Java, Go, R | |Theano | Université de Montréal |Yes | Python | |Torch | Ronan Collobert, Koray Kavukcuoglu, Clement Farabet |Yes | Lua, LuaJIT, C, utility library for C++/OpenCL | |Wolfram Mathematica | Wolfram Research |Yes | Command line, Java, C++ | Why TensorFlow? Python API Portability: 단일 API를 사용하여 데스크톱, 서버 또는 모바일 장치의 하나 이상의 CPU 또는 GPU에서 계산 가능 Flexibility(유연성): from Raspberry Pi, Android, Windows, iOS, Linux to server farms Visualization (TensorBoard) Checkpoints (for managing experiments) Getting started with one-liner Tensorflow 1. TF Learn(tf.contrib.learn) End of explanation input = ... with tf.name_scope('conv1_1') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(input, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv1 = tf.nn.relu(bias, name=scope) Explanation: 2. TF-slim(tf.contrib.slim) Contrib 중 하나의 library로, 상위 수준의 개념(argument scoping, layer, variable)으로 모델을 짧고 쉽게 정의할 수 있게 만듦 많이 사용되는 regularizer를 사용하여 모델을 단순하게 함. VGG, AlexNet과 같이 많이 쓰이는 모델을 개발 해놓음 without TF-Slim End of explanation input = ... 
net = slim.conv2d(input, 128, [3, 3], scope='conv1_1') Explanation: with TF-Slim End of explanation import tensorflow as tf a = tf.add(2, 3) a = tf.add(3, 5) print (a) Explanation: Data Flow Graph End of explanation sess = tf.Session() sess.run(a) a = tf.add(3, 5) with tf.Session() as sess: print (sess.run(a)) Explanation: How to get the value of a? Session을 시작해줘야 operation이 작동함 End of explanation x = 2 y = 3 op1 = tf.add(x, y) op2 = tf.multiply(x, y) useless = tf.multiply(x, op1) op3 = tf.pow(op2, op1) with tf.Session() as sess: op3 = sess.run(op3) x = 2 y = 3 op1 = tf.add(x, y) op2 = tf.multiply(x, y) useless = tf.multiply(x, op1) op3 = tf.pow(op2, op1) with tf.Session() as sess: op3, not_useless = sess.run([op3, useless]) # Creates a graph. with tf.device("/cpu:0"): # 연산장치 선택 가능 a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape = [2,3], name='a') b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape = [3,2], name='b') c = tf.matmul(a, b) # Creates a session with log_device_placement set to True. sess = tf.Session(config=tf.ConfigProto(log_device_placement=True)) # tf.ConfigProto(log_device_placement=True) cpu와 gpu연산이 모두 가능할 때 gpu선택 # Runs the op. print (sess.run(a)) print (sess.run(b)) print (sess.run(c)) Explanation: More graphs End of explanation a = tf.constant(2) b = tf.constant(3) x = tf.add(a, b) with tf.Session() as sess: writer = tf.summary.FileWriter('./graphs', sess.graph) # 텐서보드에서 볼 수 있는 그래프 저장 print (sess.run(x)) # close the writer when you’re done using it writer.close() Explanation: Why graphs 계산 저장 가능 계산을 작은 단위로 나누어 자동차별화를 용이하게 해줌 분산 컴퓨팅을 가능케하고 CPU, GPU 혹은 여러 장치를 한번에 사용할 수 있게 해줌 많은 기계학습 모델들이 이미 그래프를 통해 가르쳐지고 시각화되고 있기 때문 Lecture 2: Tensorflow Ops 1. Fun with TensorBoard 텐서보드 활성화를 위해 그래프의 학습루프가 실행되기 전 코드 삽입 writer = tf.summary.FileWriter(logs_dir, sess.graph) tf.train.FileWriter -> 에러 tf.train.SummaryWriter -> 에러 이벤트의 로그가 지정된 폴더에 저장됨 End of explanation # constant of 1d tensor (vector) a = tf.constant([2, 2], name="vector") # constant of 2x2 tensor (matrix) b = tf.constant([[0, 1], [2, 3]], name="b") with tf.Session() as sess: print(sess.run(a)) print(sess.run(b)) Explanation: 텐서보드 실행법 터미널에서 python [yourprogram.py] tensorboard --logdir="./graphs" http://localhost:6006/ 로 이동<br><br/> jupyter 터미널에서도 실행 가능 윈도우버전 jupyter 실행 불가 2. 
Constant types 상수유형 공식문서 https://www.tensorflow.org/api_docs/python/constant_op/ constant 함수로 상수, 스칼라, 텐서값 등을 생성할 수 있음 tf.constant(value, dtype=None, shape=None, name='const', verify=False) End of explanation with tf.Session() as sess: print (sess.run(tf.zeros([2, 3], tf.int32))) # [[0, 0, 0], [0, 0, 0]] import numpy as np np.zeros((2,3), dtype=np.int32) Explanation: 텐서의 원소로 특정한 값을 생성할 수 있음 유사함수: numpy.zeros, numpy.zeros_like, numpy.ones, numpy.ones_like tf.zero(shape, dtype=tf.float32, name=None) 모든 원소가 0인 텐서쉐입을 만드는 함수 End of explanation input_tensor = [[0, 1], [2, 3], [4, 5]] with tf.Session() as sess: print (sess.run(tf.zeros_like(input_tensor))) # [[0, 0], [0, 0], [0, 0]] np.zeros_like(input_tensor) Explanation: tf.zeros_like(input, dtype=None, name=None, opitmize=True) 모든 원소를 0으로 만드는 함수 End of explanation with tf.Session() as sess: print(sess.run(tf.ones([2, 3], tf.int32))) # [[1, 1, 1], [1, 1, 1]] np.ones([2,3], dtype=np.int32) Explanation: tf.one(shape, dtype=tf.float32, name=None) 모든 원소가 1인 텐서쉐입을 만드는 함수 End of explanation input_tensor = [[0, 1], [2, 3], [4, 5]] with tf.Session() as sess: print(sess.run(tf.ones_like(input_tensor))) # [[1, 1], [1, 1], [1, 1]] np.ones_like(input_tensor) Explanation: tf.ones_like(input_tensor, dtype=None, name=None, optimize=True) 모든 원소를 1로 만드는 함수 End of explanation with tf.Session() as sess: print(sess.run(tf.fill([2, 3], 8))) # [[8, 8, 8], [8, 8, 8]] Explanation: tf.fill(dims, value, name=None) 텐서를 한가지 스칼라값으로 채움 End of explanation with tf.Session() as sess: print(sess.run(tf.linspace(10.0, 13.0, 4, name="linspace"))) # [10.0 11.0 12.0 13.0] Explanation: tf.linspace(start, stop, num, name=None) 특정 구간에서 균등하게 증가(개수만큼)하는 수열 생성 End of explanation with tf.Session() as sess: print(sess.run(tf.range(3, 18, 3))) # [3, 6, 9, 12, 15] Explanation: tf.range(start, limit=None, delta=1, dtype=None, name='range') 등차수열 생성 End of explanation for _ in range(4):# OK a for _ in tf.range(4): # TypeError("'Tensor' object is not iterable.") a Explanation: 텐서는 반복문에 사용할 수 없음 End of explanation with tf.Session() as sess: print(sess.run(tf.random_normal(shape = [2,3]))) with tf.Session() as sess: print(sess.run(tf.truncated_normal(shape = [2,3]))) with tf.Session() as sess: print(sess.run(tf.multinomial(tf.random_normal(shape = [2,3]),5))) with tf.Session() as sess: print(sess.run(tf.random_gamma(shape = [2,3], alpha = 1))) a = tf.constant([[2,1], [3,2], [7,3]]) with tf.Session() as sess: print(sess.run(tf.random_shuffle(a))) with tf.Session() as sess: print(sess.run(tf.random_crop(a, [2,1]))) Explanation: 특정 분포에서 난수를 생성할 수 있음 tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None) 정규분포로부터의 난수 생성<br><br/> tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None,name=None) 절단정규분포로부터의 난수 생성<br><br/> tf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None,name=None) 균등분포로부터의 난수 생성<br><br/> tf.multinomial(logits, num_samples, seed=None, name=None) 다항분포로부터의 난수 생성<br><br/> tf.random_gamma(shape, alpha, beta=None, dtype=tf.float32, seed=None, name=None) 감마분포로부터의 난수 생성<br><br/> tf.random_shuffle(value, seed=None, name=None) 값의 첫번째 차원을 기준으로 랜덤하게 섞어줌<br><br/> tf.random_crop(value, size, seed=None, name=None) 텐서를 주어진 value만큼 랜덤하게 잘라냄 End of explanation a = tf.constant([3, 6]) b = tf.constant([2, 2]) with tf.Session() as sess: print(sess.run(tf.add(a, b))) # >> [5 8], 2개의 input을 받아 덧셈 with tf.Session() as sess: print(sess.run(tf.add_n([a, b, b]))) # >> [7 10]. 
모든 input을 덧셈 with tf.Session() as sess: print(sess.run(tf.multiply(a, b))) # >> [6 12] because mul is element wise # matmul: 2차원이상의 텐서간의 곱 with tf.Session() as sess: print(sess.run(tf.matmul(tf.reshape(a, shape=[1, 2]), tf.reshape(b, shape=[2, 1])))) with tf.Session() as sess: print(sess.run(tf.div(a, b))) # >> [1 3], 나눗셈 실행 with tf.Session() as sess: print(sess.run(tf.mod(a, b))) # >> [1 0], 나머지 반환 Explanation: 3. Math Operations End of explanation # 0차원 상수텐서 - 스칼라 t_0 = 19 with tf.Session() as sess: print(sess.run(tf.zeros_like(t_0))) # ==> 0 print(sess.run(tf.ones_like(t_0))) # ==> 1 # 1차원 텐서 - 벡터 t_1 = [b"apple", b"peach", b"grape"] with tf.Session() as sess: print(sess.run(tf.zeros_like(t_1))) # ==> ['' '' ''] print(sess.run(tf.ones_like(t_1))) # ==> TypeError: Expected string, got 1 of type 'int' instead. # 2차원 텐서 - 메트릭스 t_2 = [[True, False, False], [False, False, True], [False, True, False]] with tf.Session() as sess: print(sess.run(tf.zeros_like(t_2))) # ==> 2x2 tensor, 모든 원소값 False print(sess.run(tf.ones_like(t_2))) # ==> 2x2 tensor, 모든 원소값 True Explanation: 4. Data Types ### Python Native Types : 불린, 숫자, 스트링 - 단일 값은 0차원 텐서 (스칼라) - 값의 리스트는 1차원 텐서 (벡터) - 리스트의 리스트는 2차원 텐서 (매트릭스) End of explanation my_const = tf.constant([1.0, 2.0], name="my_const") print (tf.get_default_graph().as_graph_def()) Explanation: TensorFlow Native Types 텐서플로우는 Numpy 처럼 tf.int32, tf.float32 독자적인 데이터 타입을 사용함 https://www.tensorflow.org/versions/r0.11/resources/dims_types 5. Variables 변수는 할당될수 있고 변경 될수 있음 상수는 그래프에 값이 저장되어 있어 그래프를 로딩할때 함께 로딩됨 변수는 그래프와 별도로 저장됨(파라미터 서버에 살고 있음) End of explanation # a를 스칼라 값으로 생성 a = tf.Variable(2, name="scalar") # b를 벡터로 생성 b = tf.Variable([2, 3], name="vector") # c를 2x2 matrix로 생성 c = tf.Variable([[0, 1], [2, 3]], name="matrix") # W를 0으로 채워진 784x10 tensor로 생성 W = tf.Variable(tf.zeros([784,10])) Explanation: Declare variables tf.Variable 함수로 생성 End of explanation init = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init) Explanation: 변수를 사용하기 전에는 항상 변수를 초기화해야함 tf.global_variables_initializer() 함수는 모든 변수를 초기화해줌 End of explanation init_ab = tf.variables_initializer([a, b], name = "init_ab") with tf.Session() as sess: sess.run(init) Explanation: To initialize only as subset of varuables tf.variables_initializer() End of explanation W = tf.Variable(tf.truncated_normal([700, 10])) with tf.Session() as sess: sess.run(W.initializer) print (W) Explanation: Evaluate values of variables 그냥 프린트하면 텐서와 유형, 쉐입만 볼수 있음 End of explanation with tf.Session() as sess: sess.run(W.initializer) print (W.eval()) Explanation: eval()함수를 사용하면 값까지 볼수 있음 End of explanation W = tf.Variable(10) W.assign(100) # 100이 W에 할당되지 않음 with tf.Session() as sess: sess.run(W.initializer) print (W.eval()) # >> 10 W = tf.Variable(10) assign_op = W.assign(100) # assign이 W를 initialize시킴 with tf.Session() as sess: sess.run(assign_op) print (W.eval()) # >> 100 # 값이 2인 변수 a 생성 a = tf.Variable(2, name="scalar") # a_times_two에 a * 2 할당 a_times_two = a.assign(a * 2) init = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init) # a_times_two가 a에 따라 바뀌기 때문에 반드시 a를 initialize시켜줘야 함 sess.run(a_times_two) # >> 4 # sess.run(a_times_two) # >> 8 # sess.run(a_times_two) # >> 16 print (a_times_two.eval()) W = tf.Variable(10) with tf.Session() as sess: sess.run(W.initializer) # assign_add와 assign_sub는 assign과는 다르게 variable을 initialize시켜주지 않음 print(sess.run(W.assign_add(10))) print(sess.run(W.assign_sub(2))) Explanation: Assign values to variables tf.Variable.assign()함수를 사용함 End of explanation W = 
tf.Variable(10) sess1 = tf.Session() sess2 = tf.Session() sess1.run(W.initializer) sess2.run(W.initializer) print(sess1.run(W.assign_add(10))) # ==> 20 print(sess2.run(W.assign_sub(2))) # ==> 8 print(sess1.run(W.assign_add(100))) # ==> 120 print(sess2.run(W.assign_sub(50))) # ==> -42 sess1.close() sess2.close() Explanation: Each TensorFlow session is maintained separately Each session keeps its own current values for the variables defined in the graph End of explanation W = tf.Variable(tf.truncated_normal([700, 10])) U = tf.Variable(W * 2) Explanation: You can create a variable using another variable End of explanation sess = tf.InteractiveSession() a = tf.constant(5.0) b = tf.constant(6.0) c = a * b print(c.eval()) sess.close() Explanation: 6. InteractiveSession An InteractiveSession installs itself as the default session on construction So ops can be evaluated with eval() without an explicit sess.run() call End of explanation # your graph g has 5 ops: a, b, c, d, e with g.control_dependencies([a, b, c]): d = .... e = .... Explanation: 7. Control Dependencies When two or more operations exist, this specifies the order in which they are executed Use the tf.Graph.control_dependencies(control_inputs) function End of explanation # create a placeholder of type float 32-bit, shape is a vector of 3 elements a = tf.placeholder(tf.float32, shape=[3]) # create a constant of type float 32-bit, shape is a vector of 3 elements b = tf.constant([5, 5, 5], tf.float32) # use the placeholder as you would a constant or a variable c = a + b # Short for tf.add(a, b) with tf.Session() as sess: # feed [1, 2, 3] to placeholder a via the dict {a: [1, 2, 3]} # fetch value of c writer = tf.summary.FileWriter('./my_graph', sess.graph) # print(sess.run(c)) # ==> Error print(sess.run(c, {a: [1, 2, 3]})) Explanation: 8. Placeholders and feed_dict We have to build the graph first, before the values are known! Placeholders let us assemble the graph now and feed the data in later ### tf.placeholder(dtype, shape=None, name=None) End of explanation # create Operations, Tensors, etc (using the default graph) a = tf.add(2, 5) b = tf.multiply(a, 3) # start up a `Session` using the default graph sess = tf.Session() # define a dictionary that says to replace the value of `a` with 15 replace_dict = {a: 15} # Run the session, passing in `replace_dict` as the value to `feed_dict` sess.run(b, feed_dict=replace_dict) # returns 45 Explanation: Feeding is not limited to placeholders tf.Graph.is_feedable(tensor) The function above can be used to check whether a tensor can be fed End of explanation # Normal loading x = tf.Variable(10, name='x') y = tf.Variable(20, name='y') z = tf.add(x, y) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) writer = tf.summary.FileWriter('./my_graph/l2', sess.graph) for _ in range(10): sess.run(z) writer.close() # Lazy loading x = tf.Variable(10, name='x') y = tf.Variable(20, name='y') with tf.Session() as sess: sess.run(tf.global_variables_initializer()) writer = tf.summary.FileWriter('./my_graph/l2', sess.graph) for _ in range(10): sess.run(tf.add(x, y)) # someone decides to be clever to save one line of code writer.close() Explanation: 9. The trap of lazy loading Lazy loading means deferring the creation of an op until it is needed, in the hope of saving a little work In the lazy version above, tf.add(x, y) inside the loop adds a fresh add node to the graph on every iteration, so the graph keeps growing; define the op once, as in normal loading, and reuse it End of explanation
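A quick way to make the lazy-loading trap visible is to count graph operations before and after the loop. This is a sketch for TF 1.x that relies only on tf.get_default_graph(), which the notes above already use:

import tensorflow as tf

x = tf.Variable(10, name='x')
y = tf.Variable(20, name='y')
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    before = len(tf.get_default_graph().get_operations())
    for _ in range(10):
        sess.run(tf.add(x, y))  # lazy loading: a new 'Add' node every iteration
    after = len(tf.get_default_graph().get_operations())
    print(after - before)  # ==> 10, one extra op per call

The same count for the normal-loading version, where z = tf.add(x, y) is defined once outside the loop, would be 0.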