# Python Data Analysis

## Introduction

In this lab, we'll make use of everything we've learned about pandas, data cleaning, and simple data analysis. In order to complete this lab, you'll have to import, clean, combine, reshape, and visualize data to answer the questions provided, as well as your own questions!

## Objectives

You will be able to:

- Practice opening and inspecting the contents of CSVs using pandas dataframes
- Practice identifying and handling missing values
- Practice identifying and handling invalid values
- Practice cleaning text data by removing whitespace and fixing typos
- Practice joining multiple dataframes

## Your Task: Clean the Superheroes Dataset with Pandas

### Data Understanding

In this lab, we'll work with a version of the comprehensive Superheroes Dataset, which can be found on [Kaggle](https://www.kaggle.com/claudiodavi/superhero-set/data) and was originally scraped from [SuperHeroDb](https://www.superherodb.com/). We have modified the structure and contents of the dataset somewhat for the purposes of this lab. Note that this data was collected in June 2017, so it may not reflect the most up-to-date superhero lore.

The data is contained in two separate CSV files:

1. `heroes_information.csv`: each record represents a superhero, with attributes of that superhero (e.g. eye color). Height is measured in centimeters, and weight is measured in pounds.
2. `super_hero_powers.csv`: each record represents a superpower, then has True/False values representing whether each superhero has that power

### Business Understanding

The business questions you have been provided are:

1. What is the distribution of superheroes by publisher?
2. What is the relationship between height and number of superpowers? And does this differ based on gender?
3. What are the 5 most common superpowers in Marvel Comics vs. DC Comics?
This lab also simulates something you are likely to encounter at some point or another in your career in data science: someone has given you access to a dataset, as well as a few questions, and has told you to "find something interesting". So, in addition to completing the basic data cleaning tasks and the aggregation and reshaping tasks needed to answer the provided questions, you will also need to formulate a question of your own and perform any additional cleaning/aggregation/reshaping that is needed to answer it.

### Requirements

#### 1. Load the Data with Pandas

Create dataframes `heroes_df` and `powers_df` that represent the two CSV files. Use pandas methods to inspect the shape and other attributes of these dataframes.

#### 2. Perform Data Cleaning Required to Answer First Question

The first question is: *What is the distribution of superheroes by publisher?*

In order to answer this question, you will need to:

* Identify and handle missing values
* Identify and handle text data requiring cleaning

#### 3. Perform Data Aggregation and Cleaning Required to Answer Second Question

The second question is: *What is the relationship between height and number of superpowers? And does this differ based on gender?*

In order to answer this question, you will need to:

* Join the dataframes together
* Identify and handle invalid values

#### 4. Perform Data Aggregation Required to Answer Third Question

The third question is: *What are the 5 most common superpowers in Marvel Comics vs. DC Comics?*

This should not require any additional data cleaning or joining of tables, but it will require some additional aggregation.

#### 5. Formulate and Answer Your Own Question

This part is fairly open-ended. Think of a question that can be answered with the available data, and perform any cleaning or aggregation required to answer that question.

## 1. Load the Data with Pandas

In the cell below, we:

* Import and alias `pandas` as `pd`
* Import and alias `numpy` as `np`
* Import and alias `seaborn` as `sns`
* Import and alias `matplotlib.pyplot` as `plt`
* Set Matplotlib visualizations to display inline in the notebook

```
# Run this cell without changes
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
```

### Superheroes

In the cell below, load `heroes_information.csv` as `heroes_df`:

```
# Your code here

heroes_df.head()
```

It looks like that CSV came with an index column, resulting in an extra column called `Unnamed: 0`. We don't need that column, so write code to get rid of it below.

There are two ways to do this:

1. Re-load with `read_csv`, and specify the parameter `index_col=0`
2. Drop the column `Unnamed: 0` with `axis=1`

```
# Your code here

heroes_df.head()
```

The following code checks that the dataframe was loaded correctly.

```
# Run this cell without changes

# There should be 734 rows
assert heroes_df.shape[0] == 734

# There should be 10 columns. If this fails, make sure you got rid of
# the extra index column
assert heroes_df.shape[1] == 10

# These should be the columns
assert list(heroes_df.columns) == ['name', 'Gender', 'Eye color', 'Race',
    'Hair color', 'Height', 'Publisher', 'Skin color', 'Alignment', 'Weight']
```

Now you want to get familiar with the data. This step includes:

* Understanding the dimensionality of your dataset
* Investigating what type of data it contains, and the data types used to store it
* Discovering how missing values are encoded, and how many there are
* Getting a feel for what information it does and doesn't contain

In the cell below, inspect the overall shape of the dataframe:

```
# Your code here
```

Now let's look at the info printout:

```
# Run this cell without changes
heroes_df.info()
```

In the cell below, interpret that information. Do the data types line up with what we expect?
Are there any missing values?

```
# Replace None with appropriate text
"""
None
"""
```

### Superpowers

Now, repeat the same process with `super_hero_powers.csv`. Name the dataframe `powers_df`. This time, make sure you use `index_col=0` when opening the CSV because the index contains important information.

```
# Your code here (create more cells as needed)
```

The following code will check if it was loaded correctly:

```
# Run this cell without changes

# There should be 167 rows, 667 columns
assert powers_df.shape == (167, 667)

# The first column should be '3-D Man'
assert powers_df.columns[0] == '3-D Man'

# The last column should be 'Zoom'
assert powers_df.columns[-1] == 'Zoom'

# The first index should be 'Agility'
assert powers_df.index[0] == 'Agility'

# The last index should be 'Omniscient'
assert powers_df.index[-1] == 'Omniscient'
```

## 2. Perform Data Cleaning Required to Answer First Question

Recall that the first question is: *What is the distribution of superheroes by publisher?*

To answer this question, we will only need to use `heroes_df`, which contains the `Publisher` column.

### Identifying and Handling Missing Values

As you likely noted above, the `Publisher` column is missing some values. Let's take a look at some samples with and without missing publisher values:

```
# Run this cell without changes
has_publisher_sample = heroes_df[heroes_df["Publisher"].notna()].sample(5, random_state=1)
has_publisher_sample
```

```
# Run this cell without changes
missing_publisher_sample = heroes_df[heroes_df["Publisher"].isna()].sample(5, random_state=1)
missing_publisher_sample
```

What do we want to do about these missing values?

Recall that there are two general strategies for dealing with missing values:

1. Fill in missing values (either using another value from the column, e.g. the mean or mode, or using some other value like "Unknown")
2. Drop rows with missing values

Write your answer below, and explain how it relates to the information we have:

```
# Replace None with appropriate text
"""
None
"""
```

Now, implement your chosen strategy using code. (You can also check the solution branch for the answer to the question above if you're really not sure.)

```
# Your code here
```

Now there should be no missing values in the publisher column:

```
# Run this cell without changes
assert heroes_df["Publisher"].isna().sum() == 0
```

### Identifying and Handling Text Data Requiring Cleaning

The overall field of natural language processing (NLP) is quite broad, and we're not going to get into any advanced text processing, but it's useful to be able to clean up minor issues in text data.

Let's take a look at the counts of heroes grouped by publisher:

```
# Run this cell without changes
heroes_df["Publisher"].value_counts()
```

There are two cases where we appear to have data entry issues, and publishers that should be encoded the same have not been. In other words, there are four categories present that really should be counted as two categories (and you do not need specific comic book knowledge to be able to identify them).

Identify those two cases below:

```
# Replace None with appropriate text
"""
None
"""
```

Now, write some code to handle these cases. If you're not sure where to start, look at the pandas documentation for [replacing values](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.replace.html) and [stripping off whitespace](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.strip.html).
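If you want a concrete pattern to adapt, here is a minimal sketch on toy data. The publisher strings below are hypothetical stand-ins, not the actual values in the dataset; substitute whatever variants `value_counts()` reveals.

```python
import pandas as pd

# Hypothetical messy publisher values -- stand-ins for whatever
# variants value_counts() reveals in the real data
publishers = pd.Series(["Marvel Comics", "Marvel Comics ", "DC Comics", "dc comics"])

# First strip stray leading/trailing whitespace, then map any remaining
# variant spellings onto a canonical one with .replace()
cleaned = publishers.str.strip().replace({"dc comics": "DC Comics"})

print(cleaned.value_counts().to_dict())
```

Note that `.str.strip()` handles whole classes of whitespace issues at once, whereas `.replace()` with a dict fixes specific known variants, so doing the strip first keeps the replacement dict small.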
```
# Your code here
```

Check your work below:

```
# Run this cell without changes
heroes_df["Publisher"].value_counts()
```

### Answering the Question

Now we should be able to answer *What is the distribution of superheroes by publisher?*

If your data cleaning was done correctly, this code should work without any further changes:

```
# Run this cell without changes

# Set up plots
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(16, 5))

# Create variables for easier reuse
value_counts = heroes_df["Publisher"].value_counts()
top_5_counts = value_counts.iloc[:5]

# Plot data
ax1.bar(value_counts.index, value_counts.values)
ax2.bar(top_5_counts.index, top_5_counts.values)

# Customize appearance
ax1.tick_params(axis="x", labelrotation=90)
ax2.tick_params(axis="x", labelrotation=45)
ax1.set_ylabel("Count of Superheroes")
ax2.set_ylabel("Count of Superheroes")
ax1.set_title("Distribution of Superheroes by Publisher")
ax2.set_title("Top 5 Publishers by Count of Superheroes");
```

## 3. Perform Data Aggregation and Cleaning Required to Answer Second Question

Recall that the second question is: *What is the relationship between height and number of superpowers? And does this differ based on gender?*

Unlike the previous question, we won't be able to answer this with just `heroes_df`, since information about height is contained in `heroes_df`, while information about superpowers is contained in `powers_df`.

### Joining the Dataframes Together

First, identify the shared key between `heroes_df` and `powers_df`. (Shared key meaning, the values you want to join on.) Let's look at them again:

```
# Run this cell without changes
heroes_df
```

```
# Run this cell without changes
powers_df
```

In the cell below, identify the shared key, and your strategy for joining the data (e.g. what will one record represent after you join, will you do a left/right/inner/outer join):

```
# Replace None with appropriate text
"""
None
"""
```

In the cell below, create a new dataframe called `heroes_and_powers_df` that contains the joined data. You can look at the above answer in the solution branch if you're not sure where to start.

***Hint:*** Note that the `.join` method requires that the two dataframes share an index ([documentation here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html)) whereas the `.merge` method can join using any columns ([documentation here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html)). It is up to you which one you want to use.

```
# Your code here (create more cells as needed)
```

Run the code below to check your work:

```
# Run this cell without changes

# Confirms you have created a dataframe with the specified name
assert type(heroes_and_powers_df) == pd.DataFrame

# Confirms you have the right number of rows
assert heroes_and_powers_df.shape[0] == 647

# Confirms you have the necessary columns
# (If you modified the value of powers_df along the way, you might need to
# modify this test. We are checking that all of the powers are present as
# columns.)
assert all(power in heroes_and_powers_df.columns for power in powers_df.index)

# (If you modified the value of heroes_df along the way, you might need to
# modify this as well. We are checking that all of the attribute columns from
# heroes_df are present as columns in the joined df)
assert all(attribute in heroes_and_powers_df.columns for attribute in heroes_df.columns)
```

Now that we have created a joined dataframe, we can aggregate the number of superpowers by superhero.
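As an aside, the `.merge` hint from the join step can be illustrated on toy data. The frames below are hypothetical miniatures of `heroes_df` and `powers_df` (the real ones have far more rows and columns), not the solution itself:

```python
import pandas as pd

# Hypothetical miniatures of heroes_df and powers_df
heroes = pd.DataFrame({"name": ["A-Bomb", "Zoom"], "Height": [203.0, 185.0]})
powers = pd.DataFrame(
    {"A-Bomb": [True, False], "Zoom": [False, True]},
    index=["Agility", "Super Speed"],
)

# powers has one COLUMN per hero, so transpose it to one ROW per hero,
# then merge on the hero's name; an inner join keeps only heroes
# present in both frames
joined = heroes.merge(powers.T, left_on="name", right_index=True, how="inner")
print(joined.columns.tolist())
```

The transpose is the key move: after `powers.T`, the hero names sit in the index, which is why `right_index=True` can line them up against the `name` column.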
This code is written for you:

```
# Run this cell without changes

# Note: we can use sum() with True and False values and they will
# automatically be cast to 1s and 0s
heroes_and_powers_df["Power Count"] = sum([heroes_and_powers_df[power_name]
    for power_name in powers_df.index])
heroes_and_powers_df
```

### Answering the Question

Now we can plot the height vs. the count of powers:

```
# Run this cell without changes

fig, ax = plt.subplots(figsize=(16, 8))

ax.scatter(
    x=heroes_and_powers_df["Height"],
    y=heroes_and_powers_df["Power Count"],
    alpha=0.3
)

ax.set_xlabel("Height (cm)")
ax.set_ylabel("Number of Superpowers")
ax.set_title("Height vs. Power Count");
```

Hmm...what is that stack of values off below zero? What is a "negative" height?

### Identifying and Handling Invalid Values

One of the trickier tasks in data cleaning is identifying invalid or impossible values. In these cases, you have to apply your domain knowledge rather than any particular computational technique.

For example, if you were looking at data containing dates of past home sales, and one of those dates was 100 years in the future, pandas wouldn't flag that as an issue, but you as a data scientist should be able to identify it.

In this case, we are looking at heights, which are 1-dimensional, positive numbers. In theory we could have a very tiny height close to 0 cm because the hero is microscopic, but it does not make sense that we would have a height below zero.

Let's take a look at a sample of those negative heights:

```
# Run this cell without changes
heroes_and_powers_df[heroes_and_powers_df["Height"] < 0].sample(5, random_state=1)
```

It looks like not only are those heights negative, those weights are negative also, and all of them are set to exactly -99.0. It seems like this data source probably filled in -99.0 as the height or weight whenever it was unknown, instead of just leaving it as NaN.
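As a general pattern, sentinel values like this can be converted into real missing values so that pandas' missing-data tools (`isna`, `dropna`, `fillna`) can see them. A minimal sketch on toy data, using the same -99.0 convention:

```python
import pandas as pd
import numpy as np

# Toy frame using -99.0 as a "missing" sentinel, like heroes_df does
df = pd.DataFrame({"Height": [203.0, -99.0, 185.0],
                   "Weight": [441.0, -99.0, 181.0]})

# Replace the sentinel with NaN so isna()/dropna()/fillna() recognize it
df = df.replace(-99.0, np.nan)
print(df.isna().sum().to_dict())
```

This is one generic option; whether you convert to NaN or filter the rows out directly depends on the question at hand.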
Depending on the purpose of the analysis, maybe this would be a useful piece of information, but for our current question, let's go ahead and drop the records where the height is -99.0. We'll make a new temporary dataframe to make sure we don't accidentally delete anything that will be needed in a future question.

```
# Run this cell without changes
question_2_df = heroes_and_powers_df[heroes_and_powers_df["Height"] != -99.0].copy()
question_2_df
```

### Answering the Question, Again

Now we can redo that plot without those negative heights:

```
# Run this cell without changes

fig, ax = plt.subplots(figsize=(16, 8))

ax.scatter(
    x=question_2_df["Height"],
    y=question_2_df["Power Count"],
    alpha=0.3
)

ax.set_xlabel("Height (cm)")
ax.set_ylabel("Number of Superpowers")
ax.set_title("Height vs. Power Count");
```

Ok, that makes more sense. It looks like there is not much of a relationship between height and number of superpowers.

Now we can go on to answering the second half of question 2: *And does this differ based on gender?* To indicate multiple categories within a scatter plot, we can use color to add a third dimension:

```
# Run this cell without changes

fig, ax = plt.subplots(figsize=(16, 8))

# Select subsets
question_2_male = question_2_df[question_2_df["Gender"] == "Male"]
question_2_female = question_2_df[question_2_df["Gender"] == "Female"]
question_2_other = question_2_df[
    (question_2_df["Gender"] != "Male") & (question_2_df["Gender"] != "Female")
]

# Plot data with different colors
ax.scatter(
    x=question_2_male["Height"],
    y=question_2_male["Power Count"],
    alpha=0.5, color="cyan", label="Male"
)
ax.scatter(
    x=question_2_female["Height"],
    y=question_2_female["Power Count"],
    alpha=0.5, color="gray", label="Female"
)
ax.scatter(
    x=question_2_other["Height"],
    y=question_2_other["Power Count"],
    alpha=0.5, color="yellow", label="Other"
)

# Customize appearance
ax.set_xlabel("Height (cm)")
ax.set_ylabel("Number of Superpowers")
ax.set_title("Height vs. Power Count")
ax.legend();
```

It appears that there is still no clear relationship between count of powers and height, regardless of gender. We do however note that "Male" is the most common gender, and that male superheroes tend to be taller, on average.

## 4. Perform Data Aggregation Required to Answer Third Question

Recall that the third question is: *What are the 5 most common superpowers in Marvel Comics vs. DC Comics?*

We'll need to keep using `heroes_and_powers_df` since we require information from both `heroes_df` and `powers_df`.

Your resulting `question_3_df` should contain aggregated data, with columns `Superpower Name`, `Marvel Comics` (containing the count of occurrences in Marvel Comics), and `DC Comics` (containing the count of occurrences in DC Comics). Each row should represent a superpower.

In other words, `question_3_df` should look like this:

![question 3 df](images/question_3.png)

Don't worry if the rows or columns are in a different order, all that matters is that you have the right rows and columns with all the data.

***Hint:*** refer to the [documentation for `.groupby`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html) and treat each publisher as a group.

```
# Your code here (create more cells as needed)
```

The code below checks that you have the correct dataframe structure:

```
# Run this cell without changes

# Checking that you made a dataframe called question_3_df
assert type(question_3_df) == pd.DataFrame

# Checking the shape
assert question_3_df.shape == (167, 3)

# Checking the column names
assert sorted(list(question_3_df.columns)) == ['DC Comics', 'Marvel Comics', 'Superpower Name']
```

### Answering the Question

The code below uses the dataframe you created to find and plot the most common superpowers in Marvel Comics and DC Comics.
```
# Run this cell without changes
marvel_most_common = question_3_df.drop("DC Comics", axis=1)
marvel_most_common = marvel_most_common.sort_values(by="Marvel Comics", ascending=False)[:5]
marvel_most_common
```

```
# Run this cell without changes
dc_most_common = question_3_df.drop("Marvel Comics", axis=1)
dc_most_common = dc_most_common.sort_values(by="DC Comics", ascending=False)[:5]
dc_most_common
```

```
# Run this cell without changes

fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(15, 5))

ax1.bar(
    x=marvel_most_common["Superpower Name"],
    height=marvel_most_common["Marvel Comics"]
)
ax2.bar(
    x=dc_most_common["Superpower Name"],
    height=dc_most_common["DC Comics"]
)

ax1.set_ylabel("Count of Superheroes")
ax2.set_ylabel("Count of Superheroes")
ax1.set_title("Frequency of Top Superpowers in Marvel Comics")
ax2.set_title("Frequency of Top Superpowers in DC Comics");
```

It looks like super strength is the most popular power in both Marvel Comics and DC Comics. Overall, the top 5 powers are fairly similar: 4 out of 5 overlap, although Marvel contains agility whereas DC contains flight.

## 5. Formulate and Answer Your Own Question

For the remainder of this lab, you'll be focusing on coming up with and answering your own question, just like we did above. Your question should not be overly simple, and should require both descriptive statistics and data visualization to answer.

In case you're unsure of what questions to ask, some sample questions have been provided below. Pick one of the following questions to investigate and answer, or come up with one of your own!

* Which powers have the highest chance of co-occurring in a hero (e.g. super strength and flight)?
* What is the distribution of skin colors amongst alien heroes?
* How are eye color and hair color related in this dataset?

Explain your question below:

```
# Replace None with appropriate text:
"""
None
"""
```

Some sample cells have been provided to give you room to work. Feel free to create more cells as needed.
Be sure to include thoughtful, well-labeled visualizations to back up your analysis!

## Summary

In this lab, you demonstrated your mastery of using pandas to clean and aggregate data in order to answer several business questions. This included identifying and handling missing values, text requiring preprocessing, and invalid values. You also performed aggregation and reshaping tasks such as transposing, joining, and grouping data. Great job, there was a lot here!
,2,21101,0,1,3,21102,1,2706,4,21102,1,2659,0,1105,1,1130,21102,0,1,-3,203,-2,21208,-2,10,-1,1205,-1,2701,21207,-2,0,-1,1205,-1,2663,21207,-3,29,-1,1206,-1,2663,2101,3094,-3,2693,1201,-2,0,0,21201,-3,1,-3,1105,1,2663,109,-4,2106,0,0,109,2,2101,0,-1,2715,1102,-1,1,0,109,-2,2106,0,0,0,109,5,2101,0,-2,2721,21207,-4,0,-1,1206,-1,2739,21102,1,0,-4,22101,0,-4,1,22101,0,-3,2,21102,1,1,3,21102,2758,1,0,1105,1,2763,109,-5,2105,1,0,109,6,21207,-4,1,-1,1206,-1,2786,22207,-5,-3,-1,1206,-1,2786,21202,-5,1,-5,1105,1,2858,21202,-5,1,1,21201,-4,-1,2,21202,-3,2,3,21102,2805,1,0,1106,0,2763,22101,0,1,-5,21102,1,1,-2,22207,-5,-3,-1,1206,-1,2824,21101,0,0,-2,22202,-3,-2,-3,22107,0,-4,-1,1206,-1,2850,21201,-2,0,1,21201,-4,-1,2,21102,2850,1,0,105,1,2721,21202,-3,-1,-3,22201,-5,-3,-5,109,-6,2106,0,0,109,3,21208,-2,0,-1,1205,-1,2902,21207,-2,0,-1,1205,-1,2882,1105,1,2888,104,45,21202,-2,-1,-2,21202,-2,1,1,21102,1,2899,0,1105,1,2909,1106,0,2904,104,48,109,-3,2106,0,0,109,4,21201,-3,0,1,21101,0,10,2,21102,2926,1,0,1106,0,3010,22102,1,1,-2,21201,2,0,-1,1206,-2,2948,22102,1,-2,1,21102,2948,1,0,1105,1,2909,22101,48,-1,-1,204,-1,109,-4,2106,0,0,1,2,4,8,16,32,64,128,256,512,1024,2048,4096,8192,16384,32768,65536,131072,262144,524288,1048576,2097152,4194304,8388608,16777216,33554432,67108864,134217728,268435456,536870912,1073741824,2147483648,4294967296,8589934592,17179869184,34359738368,68719476736,137438953472,274877906944,549755813888,1099511627776,2199023255552,4398046511104,8796093022208,17592186044416,35184372088832,70368744177664,140737488355328,281474976710656,562949953421312,1125899906842624,109,8,21102,0,1,-4,21101,0,0,-3,21102,51,1,-2,21201,-2,-1,-2,1201,-2,2959,3033,21001,0,0,-1,21202,-3,2,-3,22207,-7,-1,-5,1205,-5,3059,21201,-3,1,-3,22102,-1,-1,-5,22201,-7,-5,-7,22207,-3,-6,-5,1205,-5,3078,22102,-1,-6,-5,22201,-3,-5,-3,22201,-1,-4,-4,1205,-2,3024,21201,-4,0,-7,21202,-3,1,-6,109,-8,2106,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3131,3143,0,3348,3252,3390,0,11,61,105
,95,94,17,50,97,83,78,79,83,108,-19,2,7,-79,-9,-2,2,-83,-11,-7,-86,-3,-16,-7,-11,-6,-21,-21,-94,-30,-96,-25,-19,-23,-31,-101,-29,-25,-104,-21,-34,-38,-108,-39,-34,-32,-33,-31,-114,-43,-47,-35,-49,-105,-120,-69,-43,-123,-49,-56,-57,-47,-128,-40,-51,-46,-50,-133,-51,-63,-63,-57,-138,-69,-58,-62,-65,-143,-79,-69,-63,-68,-148,-79,-68,-82,-83,-63,-81,-77,-85,-145,-158,-75,-88,-92,-162,-91,-85,-89,-97,-167,-96,-104,-87,-171,-106,-104,-105,-97,-176,-94,-109,-114,-104,-112,-114,-169,3259,3268,0,0,3609,3832,3124,8,59,102,104,103,93,87,97,99,79,5,24,20,-50,26,17,31,11,21,-56,30,7,17,16,22,-62,2,14,3,-66,17,4,0,-70,6,-3,11,-9,1,-76,-7,-2,0,-1,1,-82,-18,-2,-16,-86,-4,-12,-16,-19,-19,-8,-17,-5,-95,-28,-24,-28,-29,-31,-19,-33,-25,-20,-105,-39,-28,-32,-30,-28,-28,-98,-113,-67,-33,-116,-52,-36,-50,-120,-37,-50,-54,-35,-94,3355,3363,0,3445,0,3124,0,7,68,97,107,89,93,89,97,26,43,91,73,85,91,85,72,72,76,68,3,78,-6,63,74,60,59,79,57,0,54,67,57,52,50,-5,3397,3404,0,3124,0,0,3517,6,59,107,91,88,90,90,40,38,70,68,58,-12,66,56,-15,68,55,51,-19,47,44,44,50,54,44,58,56,-28,54,39,38,45,-33,50,44,-36,35,27,47,29,-41,38,36,43,24,36,-33,3452,3461,0,0,3882,3348,0,8,72,88,105,104,85,90,87,100,55,29,48,44,63,-20,54,40,-30,34,-32,43,39,49,48,39,31,-39,44,46,31,40,40,44,-46,18,30,19,-50,32,32,12,28,29,17,21,13,-59,24,18,-62,13,15,14,9,-67,-3,7,6,-71,-7,3,-1,0,-7,-63,3524,3532,0,0,3390,0,3698,7,65,89,99,98,108,85,108,76,8,27,27,36,-48,16,32,18,13,-53,18,10,27,-57,8,10,9,17,-62,16,16,19,7,10,5,21,-1,-3,-72,-3,5,7,-76,6,1,-2,-11,3,-10,-10,-6,-14,-59,-87,1,-10,-5,-84,-10,-24,-94,-21,-11,-14,-14,-99,-22,-22,-18,-103,-23,-20,-33,-23,-39,-109,-27,-26,-30,-44,-114,-28,-44,-52,-34,-105,3616,3625,0,0,3763,0,3252,8,75,96,89,96,20,53,83,106,72,11,44,38,37,35,37,38,36,-48,17,29,33,20,-53,-4,14,12,-44,-12,20,23,8,6,-63,-14,4,7,11,0,0,-1,11,-72,4,-5,-7,-3,-10,-5,-1,-11,-81,-17,-5,-16,-85,-4,-18,-17,-4,-14,-26,-10,-93,-12,-26,-23,-19,-30,-30,-31,-19,-102,-26,-35,-37,-33,-40,-35,-31,-41,-97,3705,3728,0,0,3517,0,0,22,
65,74,90,87,6,41,86,76,88,70,0,44,63,70,74,79,63,71,57,69,57,58,34,39,81,-4,60,74,73,61,56,72,72,-12,71,65,-15,50,52,-18,68,59,61,53,50,54,46,-26,51,51,53,47,34,44,43,55,-21,3770,3791,0,4062,0,0,3609,20,51,84,80,93,8,62,88,70,84,83,75,79,71,-1,33,66,74,79,63,75,40,32,70,77,-11,57,63,69,54,-16,51,61,-19,69,58,63,-23,63,57,39,53,-28,51,52,38,51,36,44,49,47,-37,41,39,-40,43,30,26,-44,26,33,-16,3839,3847,0,3252,3998,0,0,7,76,108,88,88,97,89,102,34,48,66,69,73,62,62,61,73,3,72,61,77,55,53,-2,-17,34,53,49,68,-15,59,45,-25,39,49,48,-29,39,46,48,51,55,-21,3889,3912,0,0,3941,0,3445,22,50,88,92,7,41,77,83,70,81,77,65,83,67,-3,34,74,79,71,76,56,63,67,28,55,82,79,70,72,78,85,9,-4,68,78,0,75,-9,73,73,61,63,62,-15,71,62,64,56,53,57,49,-9,3948,3962,0,0,0,0,3882,13,54,100,86,103,15,63,98,77,93,94,78,90,90,35,49,68,64,-6,59,61,59,73,-11,53,69,55,-15,49,59,58,-19,64,58,57,-23,59,52,39,49,48,-29,40,48,50,-33,55,44,49,-23,4005,4013,0,0,4309,0,3832,7,76,108,102,104,86,91,88,48,36,55,51,-19,46,58,66,46,59,-25,48,58,55,55,-30,36,47,45,50,30,37,41,-38,38,39,41,27,-43,22,34,42,22,35,-35,-50,-51,-2,16,13,30,26,26,15,27,9,15,27,-49,4069,4081,0,4133,0,3763,0,11,58,98,90,91,95,85,84,96,86,90,82,51,38,59,64,-22,60,45,44,-26,38,-28,58,42,42,52,36,32,44,29,45,30,-39,47,32,42,29,-44,35,30,18,30,34,-50,19,27,29,-54,-4,24,25,15,19,11,7,20,16,9,3,-66,19,-50,-55,4140,4151,0,0,4229,4062,0,10,68,86,106,92,89,82,100,88,93,91,77,6,38,18,36,36,33,-25,-52,-2,30,27,9,21,10,10,8,-47,-62,-15,12,4,-1,16,1,-69,13,14,8,7,2,14,-76,0,-9,-14,3,4,0,-14,-7,-16,-8,-3,-5,-89,-20,-9,-13,-16,-94,-25,-23,-27,-14,-10,-100,-18,-18,-38,-22,-22,-106,-23,-29,-109,-28,-42,-45,-48,-38,-42,-50,-35,-53,-35,-51,-107,4236,4248,0,4384,0,0,4133,11,68,86,102,87,99,102,80,98,92,94,100,60,24,43,39,51,37,-33,31,47,33,-37,27,-39,30,28,45,-43,40,24,30,22,35,18,29,29,17,30,-27,-55,28,15,11,30,-53,21,7,-63,1,11,10,-67,-2,10,6,13,-3,-5,-74,-7,3,10,0,-67,-80,3,-10,-4,1,-14,-14,-73,4316,4328,0,0,0,0,3998,11,72,87,92,87,95,83,84,14,57,77,77,55,34,5
5,60,-26,56,41,40,-30,38,54,40,34,34,42,30,31,-39,32,28,40,26,-44,34,24,-47,32,33,29,33,27,31,35,25,13,-57,22,20,16,28,15,6,18,-65,2,2,15,4,1,7,-72,14,5,7,-1,-63,4391,4400,0,4457,0,4229,0,8,64,102,98,100,88,88,85,92,56,27,54,51,42,51,49,39,-31,51,36,35,42,47,-37,46,40,-40,31,23,43,25,-45,30,22,22,35,-50,22,32,-53,25,23,-56,27,14,10,-60,-22,11,2,14,19,-66,-28,14,4,-2,-71,11,-4,10,9,-3,1,-7,-65,4464,4484,0,0,0,4384,4556,19,64,81,78,95,91,81,91,95,5,39,75,71,68,75,79,77,70,74,79,71,2,38,-41,42,29,25,-45,32,22,40,35,-50,31,27,26,23,-43,-56,8,-58,21,22,8,21,20,21,17,3,-54,15,0,8,12,1,11,-1,11,-7,-77,-8,-3,-1,-2,0,-83,3,-12,-10,-11,-88,-3,-21,-9,-19,-23,-5,-95,-7,-18,-13,-17,-100,-28,-34,-34,-26,-21,-33,-23,-19,-95,4563,4588,1553,0,4457,0,0,24,56,89,75,88,87,88,84,70,13,50,67,75,79,68,78,66,78,60,-10,27,64,66,65,67,12,53,97,83,93,105,105,87,91,83,25,24,23,3252,4653,2075,0,3998,4662,28,1850,3390,4674,29,1829,3698,4688,16777246,0,4309,4699,31,1872,3348,4707,32,1796,4384,4718,97,0,3609,4728,1073741858,0,3941,4737,2097187,0,3517,4742,37,0,3763,4752,32805,0,4229,4764,65574,0,3882,4777,39,1818,8,103,105,100,86,97,88,96,101,11,98,99,95,102,86,94,15,90,78,98,76,13,92,96,87,89,93,87,97,81,11,86,88,87,87,10,91,86,103,103,87,99,16,84,85,84,7,105,96,102,106,100,98,102,10,91,104,87,84,98,86,16,95,93,81,9,95,111,101,89,101,85,102,82,84,8,96,102,98,100,91,101,83,94,4,95,92,101,94,9,93,107,90,96,19,85,86,92,91,11,89,85,101,93,17,93,80,98,97,81,93,12,95,95,87,90,94,15,80,92,96,95,86,78,19,84,85,76,88,93,8,76,82,74,71,87,84,80,77,64,69,75,65,79] computer = init_computer(code, []) run(computer) mapstr = ''.join(chr(i) for i in computer['outputs'][:-1]) print(mapstr) computer['outputs'] = [] def run_command(computer, command): computer['inputs'] = [ord(c) for c in command + '\n'] run(computer) print(''.join(chr(i) for i in computer['outputs'][:-1])) computer['outputs'] = [] run_command(computer, 'north') run_command(computer, 'north') run_command(computer, 'east') run_command(computer, 
'east') run_command(computer, 'take cake') run_command(computer, 'west') run_command(computer, 'west') run_command(computer, 'south') run_command(computer, 'south') run_command(computer, 'east') run_command(computer, 'take ornament') run_command(computer, 'east') run_command(computer, 'take hologram') run_command(computer, 'east') run_command(computer, 'take dark matter') run_command(computer, 'north') run_command(computer, 'north') run_command(computer, 'east') run_command(computer, 'take klein bottle') run_command(computer, 'north') run_command(computer, 'take hypercube') run_command(computer, 'north') run_command(computer, 'west') [ord(c) for c in 'passwordPASSWORD'] ''.join(chr(i-1) for i in [103,105,100,86,97,88,96,101,11,98,99,95,102,86,94,15,90,78,98,76,13,92,96,87,89,93,87,97,81,11,86,88,87,87,10,91,86,103,103,87,99,16,84,85,84,7,105,96,102,106,100,98,102,10,91,104,87,84,98,86,16,95,93,81,9,95,111,101,89,101,85,102,82,84,8,96,102,98,100,91,101,83,94,4,95,92,101,94,9,93,107,90,96,19,85,86,92,91,11,89,85,101,93,17,93,80,98,97,81,93,12,95,95,87,90,94,15,80,92,96,95,86,78,19,84,85,76,88,93,8,76,82,74,71,87,84,80,77,64,69,75,65,79]) computer['extend_mem'] ```
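The `run_command` helper above drives the Intcode program over its ASCII protocol: each command is sent as a list of character codes terminated by a newline (code 10), and the program's output codes decode back to text with `chr`. A minimal, self-contained sketch of just those two halves (the names `encode_command` and `decode_output` are illustrative, not from the solution):

```python
def encode_command(command):
    # ASCII-mode Intcode programs read one character code per input
    # instruction; a newline (code 10) terminates the command.
    return [ord(c) for c in command + '\n']

def decode_output(codes):
    # Output values are the character codes of the printed text.
    return ''.join(chr(i) for i in codes)

codes = encode_command('north')
text = decode_output([67, 111, 109, 109, 97, 110, 100, 63])
```

The same decoding idea is what the final `''.join(chr(i-1) for i in ...)` cell uses, just with a shift of one applied to each code.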
# Gradient Class Activation Map ``` import cv2 import numpy as np import matplotlib.pyplot as plt import json import os import pandas as pd from pocovidnet.evaluate_covid19 import Evaluator from pocovidnet.grad_cam import GradCAM from pocovidnet.cam import get_class_activation_map from pocovidnet.model import get_vgg16_model def convolve_faster(img, kernel): """ Convolve a 2d img with a kernel, storing the output in the cell corresponding the the left or right upper corner :param img: 2d numpy array :param kernel: kernel (must have equal size and width) :param neg: if neg=0, store in upper left corner, if neg=1, store in upper right corner :return convolved image of same size """ k_size = len(kernel) # a = np.pad(img, ((0, k_size-1), (0, k_size-1))) padded = np.pad(img, ((k_size//2, k_size//2), (k_size//2, k_size//2))) s = kernel.shape + tuple(np.subtract(padded.shape, kernel.shape) + 1) strd = np.lib.stride_tricks.as_strided subM = strd(padded, shape=s, strides=padded.strides * 2) return np.einsum('ij,ijkl->kl', kernel, subM) path_crossval = "./image_cross_val" weights_dir = "./trained_models/model_0" gt_dict = {"Reg":2, "Pne":1, "pne":1, "Cov":0} gradcam = GradCAM() all_predictions = [] heatmap_points, predicted, gt_class, overlays, fnames = [], [], [], [], [] for fold in range(5): # load weights of the respective fold model print("NEW FOLD", fold) # make sure the variable is cleared evaluator = None # load weights evaluator = Evaluator(weights_dir="./trained_models/model_16/val", ensemble=False, split=fold, model_id="vgg_16", num_classes=3) # get all names belonging to this fold all_images_arr = [] gt, name = [], [] for mod in ["covid", "pneumonia", "regular"]: for f in os.listdir(os.path.join(path_crossval, "split"+str(fold), mod)): if f[0]!=".": img_loaded = cv2.imread(os.path.join(path_crossval, "split"+str(fold), mod, f)) img_preprocc = evaluator.preprocess(img_loaded)[0] gt.append(gt_dict[f[:3]]) all_images_arr.append(img_preprocc) name.append(f) 
all_images_arr = np.array(all_images_arr) # predicciones print("process all images in fold", fold, "with shape", all_images_arr.shape) fold_preds = evaluator.models[0].predict(all_images_arr) class_idx_per_img = np.argmax(fold_preds, axis=1) all_predictions.append(fold_preds) # heatmap for i, img in enumerate(all_images_arr): overlay, heatmap = gradcam.explain(img, evaluator.models[0], gt[i], return_map=True, image_weight=1, layer_name="block5_conv3", zeroing=0.65, heatmap_weight=0.25) overlays.append(overlay.astype(int)) # convolve with big kernel convolved_overlay = convolve_faster(heatmap, np.ones((19,19))) x_coord, y_coord = divmod(np.argmax(convolved_overlay.flatten()), len(convolved_overlay[0])) heatmap_points.append([x_coord, y_coord]) predicted.append(class_idx_per_img[i]) gt_class.append(gt[i]) fnames.append(name[i]) import numpy as np import matplotlib.pyplot as plt w=12 h=15 fig=plt.figure(figsize=(15, 12)) #fig.tight_layout(h_pad=20) columns = 4 rows = 3 item_img = [overlays[4], overlays[7], overlays[16], overlays[97], overlays[778], overlays[1041], overlays[1061], overlays[1333], overlays[819], overlays[849], overlays[843], overlays[1075]] item_img_fp = [] fig, big_axes = plt.subplots(figsize=(15, 12), nrows=3, ncols=1, sharey=True) classes_sp=['saludables', 'con neumonía', 'con COVID-19'] for idx, big_ax in enumerate(big_axes, start=1): big_ax.set_title("Grad-CAM en LUS de pacientes %s \n\n\n" % classes_sp[idx - 1], fontsize=16, pad=-30) big_ax.tick_params(labelcolor='w', top=False, bottom=False, left=False, right=False) big_ax._frameon = False for i in range(1, columns*rows +1): img = item_img[i-1] fig.add_subplot(rows, columns, i) fig.tight_layout(h_pad=1) plt.imshow(img) plt.show() fig.savefig("grad-cam-img.pdf",bbox_inches='tight') ```
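`convolve_faster` above relies on `np.lib.stride_tricks.as_strided` to build a zero-copy view holding every k×k window of the padded image, then contracts that view against the kernel with a single `einsum` call. A self-contained sketch of the same trick, with illustrative names and the same 'same'-size padding:

```python
import numpy as np

def convolve_same(img, kernel):
    # Pad by k//2 on every side so the output keeps the input's shape.
    k = kernel.shape[0]
    padded = np.pad(img, k // 2)
    # Zero-copy view with one k x k window per output pixel: (k, k, H, W).
    shape = kernel.shape + tuple(np.subtract(padded.shape, kernel.shape) + 1)
    windows = np.lib.stride_tricks.as_strided(
        padded, shape=shape, strides=padded.strides * 2)
    # Contract the kernel against every window in a single einsum call.
    return np.einsum('ij,ijkl->kl', kernel, windows)

img = np.arange(16, dtype=float).reshape(4, 4)
out = convolve_same(img, np.ones((3, 3)))
# With a ones kernel, out[i, j] is the sum of the 3x3 neighbourhood of (i, j),
# which is exactly how the heatmap peak is located in the loop above.
```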
Derived from https://arxiv.org/pdf/1711.07128.pdf ``` import warnings warnings.filterwarnings("ignore") import sys import os import tensorflow as tf # sys.path.append("../libs") sys.path.insert(1, '../') from libs import input_data from libs import models from libs import trainer from libs import freeze flags=tf.app.flags flags=tf.app.flags #Important Directories flags.DEFINE_string('data_dir','..\\..\\_inputs\\raw','Train Data Folder') flags.DEFINE_string('summaries_dir','..\\..\\summaries','Summaries Folder') flags.DEFINE_string('train_dir','..\\..\\logs&checkpoint','Directory to write event logs and checkpoint') flags.DEFINE_string('models_dir','..\\..\\models','Models Folder') #Task Specific Parameters flags.DEFINE_string('wanted_words','yes,no,up,down,left,right,on,off,stop,go','Wanted Words') flags.DEFINE_float('validation_percentage',10,'Validation Percentage') flags.DEFINE_float('testing_percentage',10,'Testing Percentage') flags.DEFINE_integer('sample_rate',16000,'Sample Rate') flags.DEFINE_integer('clip_duration_ms',1000,'Clip Duration in ms') flags.DEFINE_float('window_size_ms',40,'How long each spectogram timeslice is') flags.DEFINE_float('window_stride_ms',20.0,'How far to move in time between frequency windows.') flags.DEFINE_integer('dct_coefficient_count',40,'How many bins to use for the MFCC fingerprint') flags.DEFINE_float('time_shift_ms',100.0,'Range to randomly shift the training audio by in time.') FLAGS=flags.FLAGS model_architecture='ds_cnn' start_checkpoint=None logging_interval=10 eval_step_interval=1000 save_step_interval=1 silence_percentage=10.0 unknown_percentage=10.0 background_frequency=0.8 background_volume=0.3 learning_rate='0.0005,0.0001,0.00002' #Always seperated by comma, trains with each of the learning rate for the given number of iterations train_steps='1000,1000,1000' #Declare the training steps for which the learning rates will be used batch_size=256 model_size_info=[5, 64, 10, 4, 2, 2, 64, 3, 3, 1, 1, 64, 3, 3, 1, 1, 64, 3, 
3, 1, 1, 64, 3, 3, 1, 1] remaining_args = FLAGS([sys.argv[0]] + [flag for flag in sys.argv if flag.startswith("--")]) assert(remaining_args == [sys.argv[0]]) train_dir=os.path.join(FLAGS.data_dir,'train','audio') model_settings = models.prepare_model_settings( len(input_data.prepare_words_list(FLAGS.wanted_words.split(','))), FLAGS.sample_rate, FLAGS.clip_duration_ms, FLAGS.window_size_ms, FLAGS.window_stride_ms, FLAGS.dct_coefficient_count) audio_processor = input_data.AudioProcessor( train_dir, silence_percentage, unknown_percentage, FLAGS.wanted_words.split(','), FLAGS.validation_percentage, FLAGS.testing_percentage, model_settings,use_silence_folder=True) def get_train_data(args): sess=args time_shift_samples = int((FLAGS.time_shift_ms * FLAGS.sample_rate) / 1000) train_fingerprints, train_ground_truth = audio_processor.get_data( batch_size, 0, model_settings,background_frequency, background_volume, time_shift_samples, 'training', sess) return train_fingerprints,train_ground_truth def get_val_data(args): ''' Input: (sess,offset) ''' sess,i=args validation_fingerprints, validation_ground_truth = ( audio_processor.get_data(batch_size, i, model_settings, 0.0, 0.0, 0, 'validation', sess)) return validation_fingerprints,validation_ground_truth # def get_test_data(args): # ''' # Input: (sess,offset) # ''' # sess,i=args # test_fingerprints, test_ground_truth = audio_processor.get_data( # batch_size, i, model_settings, 0.0, 0.0, 0, 'testing', sess) # return test_fingerprints,test_ground_truth def main(_): sess=tf.InteractiveSession() # Placeholders fingerprint_size = model_settings['fingerprint_size'] label_count = model_settings['label_count'] fingerprint_input = tf.placeholder( tf.float32, [None, fingerprint_size], name='fingerprint_input') ground_truth_input = tf.placeholder( tf.float32, [None, label_count], name='groundtruth_input') set_size = audio_processor.set_size('validation') label_count = model_settings['label_count'] # Create Model logits, dropout_prob = 
models.create_model( fingerprint_input, model_settings, model_architecture, model_size_info=model_size_info, is_training=True) #Start Training extra_args=(dropout_prob,label_count,batch_size,set_size) trainer.train(sess,logits,fingerprint_input,ground_truth_input,get_train_data, get_val_data,train_steps,learning_rate,eval_step_interval, logging_interval=logging_interval, start_checkpoint=start_checkpoint,checkpoint_interval=save_step_interval, model_name=model_architecture,train_dir=FLAGS.train_dir, summaries_dir=FLAGS.summaries_dir,args=extra_args) tf.app.run(main=main) # save_checkpoint='..\\..\\logs&checkpoint\\ds_cnn\\ckpt-899' # save_path=os.path.join(FLAGS.models_dir,model_architecture,'%s.pb'%os.path.basename(save_checkpoint)) # freeze.freeze_graph(FLAGS,model_architecture,save_checkpoint,save_path,model_size_info=model_size_info) # save_path=os.path.join(FLAGS.models_dir,model_architecture,'%s-small-batched.pb'%os.path.basename(save_checkpoint)) # freeze.freeze_graph(FLAGS,model_architecture,save_checkpoint,save_path,batched=True,model_size_info=model_size_info) ```
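The flags above fix the shape of each training example. Assuming the standard speech-commands frame arithmetic that `models.prepare_model_settings` implements (a sketch of the calculation, not the library code itself), 16 kHz audio in 1000 ms clips with 40 ms windows, a 20 ms stride, and 40 MFCC coefficients gives:

```python
# Flag values from the cells above.
sample_rate = 16000          # Hz
clip_duration_ms = 1000
window_size_ms = 40
window_stride_ms = 20.0
dct_coefficient_count = 40

desired_samples = int(sample_rate * clip_duration_ms / 1000)        # 16000
window_size_samples = int(sample_rate * window_size_ms / 1000)      # 640
window_stride_samples = int(sample_rate * window_stride_ms / 1000)  # 320

# One slice per stride; the first slice needs a full window of samples.
spectrogram_length = 1 + (desired_samples - window_size_samples) // window_stride_samples
fingerprint_size = dct_coefficient_count * spectrogram_length
```

so each fingerprint fed to the placeholder is a flat vector of 49 time slices × 40 coefficients.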
# Importing the large datasets to a postgresql server and computing their metrics

It is not possible to load the larger data sets into the memory of a local machine, therefore an alternative is to import them into a psql table and query them from there. By adding the right indices this can make the queries fast enough. After this import one can extract some basic statistics using SQL and also export smaller portions of the data which can be handled by spark or pandas on a local machine.

## Helper functions

```
import timeit

def stopwatch(function):
    start_time = timeit.default_timer()
    result = function()
    print('Elapsed time: %i sec' % int(timeit.default_timer() - start_time))
    return result
```

## Unzipping the data and converting it to csv format

Unfortunately psql does not support an import of record json files, therefore we need to convert the data sets to csv. We use here the command line tool [json2csv](https://github.com/jehiah/json2csv).

**WARNING:** The following commands will run for a while. You can expect approximately **1 minute per GB** of unzipped data.

```
start_time = timeit.default_timer()
!ls ./data/large-datasets/*.gz | grep -Po '.*(?=.gz)' | xargs -I {} gunzip {}.gz
print('Elapsed time: %i sec' % int(timeit.default_timer() - start_time))

start_time = timeit.default_timer()
!ls ./data/large-datasets/*.json | xargs sed -i 's/|/?/g;s/\u0000/?/g'
print('Elapsed time: %i sec' % int(timeit.default_timer() - start_time))

start_time = timeit.default_timer()
!ls ./data/large-datasets/*.json | grep -Po '.*(?=.json)' | xargs -I {} json2csv -p -d '|' -k asin,helpful,overall,reviewText,reviewTime,reviewerID,reviewerName,summary,unixReviewTime -i {}.json -o {}.csv
!rm ./data/large-datasets/*.json
print('Elapsed time: %i sec' % int(timeit.default_timer() - start_time))
```

## Importing the data in psql

To import the data in psql we create a table with the appropriate shape and import from the csv files generated above.
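The `stopwatch` helper prints the elapsed wall-clock time while passing its callable's result straight through, so it can wrap any zero-argument expression. Repeating the helper here for a self-contained, runnable example:

```python
import timeit

def stopwatch(function):
    start_time = timeit.default_timer()
    result = function()
    print('Elapsed time: %i sec' % int(timeit.default_timer() - start_time))
    return result

# Time an arbitrary zero-argument callable; its return value is preserved.
total = stopwatch(lambda: sum(range(1_000_000)))
```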
### Some preparation to run psql transactions and queries in python ``` import psycopg2 as pg import pandas as pd db_conf = { 'user': 'mariosk', 'database': 'amazon_reviews' } connection_factory = lambda: pg.connect(user=db_conf['user'], database=db_conf['database']) def transaction(*statements): try: connection = connection_factory() cursor = connection.cursor() for statement in statements: cursor.execute(statement) connection.commit() cursor.close() except pg.DatabaseError as error: print(error) finally: if connection is not None: connection.close() def query(statement): try: connection = connection_factory() cursor = connection.cursor() cursor.execute(statement) header = [ description[0] for description in cursor.description ] rows = cursor.fetchall() cursor.close() return pd.DataFrame.from_records(rows, columns=header) except (Exception, pg.DatabaseError) as error: print(error) return None finally: if connection is not None: connection.close() ``` ### Creating tables for with indices for the large datasets ``` import re table_names = [ re.search('reviews_(.*)_5.csv', filename).group(1) for filename in sorted(os.listdir('./data/large-datasets')) if not filename.endswith('json') ] def create_table(table_name): transaction( 'create table %s (asin text, helpful text, overall double precision, reviewText text, reviewTime text, reviewerID text, reviewerName text, summary text, unixReviewTime int);' % table_name, 'create index {0}_asin ON {0} (asin);'.format(table_name), 'create index {0}_overall ON {0} (overall);'.format(table_name), 'create index {0}_reviewerID ON {0} (reviewerID);'.format(table_name), 'create index {0}_unixReviewTime ON {0} (unixReviewTime);'.format(table_name)) for table_name in table_names: create_table(table_name) ``` ### Importing the datasets to psql **WARNING:** The following command will take long time to complete. Estimate ~1 minute for each GB of csv data. 
``` start_time = timeit.default_timer() !ls ./data/large-datasets | grep -Po '(?<=reviews_).*(?=_5.csv)' | xargs -I {} psql -U mariosk -d amazon_reviews -c "\copy {} from './data/large-datasets/reviews_{}_5.csv' with (format csv, delimiter '|', header true);" print('Elapsed time: %i sec' % int(timeit.default_timer() - start_time)) ``` ## Querying the metrics ``` def average_reviews_per_product(table_name): return (query(''' with distinct_products as (select count(distinct asin) as products from {0}), reviews_count as (select cast(count(*) as double precision) as reviews from {0}) select reviews / products as reviews_per_product from distinct_products cross join reviews_count '''.format(table_name)) .rename(index={0: table_name.replace('_', ' ')})) def average_reviews_per_reviewer(table_name): return (query(''' with distinct_reviewers as (select count(distinct reviewerID) as reviewers from {0}), reviews_count as (select cast(count(*) as double precision) as reviews from {0}) select reviews / reviewers as reviews_per_reviewer from distinct_reviewers cross join reviews_count '''.format(table_name)) .rename(index={ 0: table_name.replace('_', ' ')})) def percentages_per_rating(table_name): return (query(''' with rating_counts as (select overall, count(overall) as rating_count from {0} group by overall), reviews_count as (select cast(count(*) as double precision) as reviews from {0}) select cast(overall as int) as dataset_name, rating_count / reviews as row from rating_counts cross join reviews_count '''.format(table_name)) .set_index('dataset_name') .sort_index() .transpose() .rename(index={'row': table_name.replace('_', ' ')})) def number_of_reviews(table_name): return (query(''' select count(*) as number_of_reviews from {0} '''.format(table_name)) .rename(index={ 0: table_name.replace('_', ' ') })) def all_metrics(table_name): print(table_name) return pd.concat( [ f(table_name) for f in [ percentages_per_rating, number_of_reviews, average_reviews_per_product, 
average_reviews_per_reviewer ]], axis=1) metrics = stopwatch(lambda: pd.concat([ all_metrics(table) for table in table_names ])) metrics.index.name = 'dataset_name' metrics.to_csv('./metadata/large-datasets-evaluation-metrics.csv') metrics ```
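The CTEs in `average_reviews_per_product` and `average_reviews_per_reviewer` both reduce to one total divided by one distinct count. The same metrics on a toy in-memory table (made-up rows, not the Amazon data) are just:

```python
# Toy review records: (asin, reviewerID) pairs standing in for table rows.
rows = [
    ('B001', 'u1'), ('B001', 'u2'), ('B001', 'u3'),
    ('B002', 'u1'), ('B002', 'u3'), ('B003', 'u2'),
]

# count(*) / count(distinct asin)
distinct_products = {asin for asin, _ in rows}
reviews_per_product = len(rows) / len(distinct_products)

# count(*) / count(distinct reviewerID)
distinct_reviewers = {reviewer for _, reviewer in rows}
reviews_per_reviewer = len(rows) / len(distinct_reviewers)
```

Pushing these divisions into psql matters only because the full tables do not fit in memory; the arithmetic itself is this simple.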
<a href="https://colab.research.google.com/github/Ava100rav/shark-tank-india/blob/main/shark_tank_india.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` from google.colab import drive drive.mount('/content/drive') import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import matplotlib.style pd.set_option('display.max_columns',None) pd.set_option('display.max_rows',None) df=pd.read_csv('/content/drive/MyDrive/shark-tank-india/Shark-Tank-India-Dataset.csv') df.head(2) df.tail(2) df.shape df.columns df.info() df.isnull().sum() df['deal'].value_counts() sns.countplot(df['deal']) df.drop(['episode_number','pitch_number','brand_name'],axis=1,inplace=True) df.shape df['idea'].unique df.drop('idea',axis=1,inplace=True) df.shape print(df.corr()) # Copy all the predictor variables into X dataframe X = df.drop('deal', axis=1) y=df['deal'] from sklearn.model_selection import train_test_split # Split X and y into training and test set in 65:35 ratio X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.35 , random_state=10) from sklearn.linear_model import LogisticRegression classifier = LogisticRegression(random_state = 0) classifier.fit(X_train, y_train) y_pred = classifier.predict(X_test) from sklearn.metrics import classification_report, confusion_matrix cm = confusion_matrix(y_test, y_pred) print ("Confusion Matrix : \n", cm) print (classification_report(y_test, y_pred)) from sklearn.metrics import classification_report, confusion_matrix cm = confusion_matrix(y_test, y_pred) print ("Confusion Matrix : \n", cm) from sklearn.metrics import accuracy_score print ("Accuracy : ", accuracy_score(y_test, y_pred)) # Import necessary modules from sklearn.neighbors import KNeighborsClassifier # Split into training and test set X_train, X_test, y_train, y_test = train_test_split( X, y, test_size = 0.2, random_state=42) knn = 
KNeighborsClassifier(n_neighbors=7) knn.fit(X_train, y_train) # Calculate the accuracy of the model print(knn.score(X_test, y_test)) from sklearn.tree import DecisionTreeClassifier # Splitting the dataset into train and test X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 100) # Function to perform training with giniIndex. def train_using_gini(X_train, X_test, y_train): # Creating the classifier object clf_gini = DecisionTreeClassifier(criterion = "gini", random_state = 100,max_depth=3, min_samples_leaf=5) # Performing training clf_gini.fit(X_train, y_train) return clf_gini # Function to perform training with entropy. def tarin_using_entropy(X_train, X_test, y_train): # Decision tree with entropy clf_entropy = DecisionTreeClassifier( criterion = "entropy", random_state = 100, max_depth = 3, min_samples_leaf = 5) # Performing training clf_entropy.fit(X_train, y_train) return clf_entropy # Function to make predictions def prediction(X_test, clf_object): # Predicton on test with giniIndex y_pred = clf_object.predict(X_test) print("Predicted values:") print(y_pred) return y_pred # Function to calculate accuracy def cal_accuracy(y_test, y_pred): print("Confusion Matrix: ", confusion_matrix(y_test, y_pred)) print ("Accuracy : ", accuracy_score(y_test,y_pred)*100) print("Report : ", classification_report(y_test, y_pred)) clf_gini = train_using_gini(X_train, X_test, y_train) clf_entropy = tarin_using_entropy(X_train, X_test, y_train) # Operational Phase print("Results Using Gini Index:") # Prediction using gini y_pred_gini = prediction(X_test, clf_gini) cal_accuracy(y_test, y_pred_gini) print("Results Using Entropy:") # Prediction using entropy y_pred_entropy = prediction(X_test, clf_entropy) cal_accuracy(y_test, y_pred_entropy) ```
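The notebook reports the confusion matrix and accuracy via scikit-learn; on a toy pair of label vectors (made-up, not the Shark Tank data) the same quantities reduce to four counts:

```python
# Toy label vectors.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# The four cells of the binary confusion matrix.
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

# Accuracy is the fraction of predictions on the matrix diagonal.
accuracy = (tp + tn) / len(y_true)
```

`confusion_matrix` and `accuracy_score` compute exactly these counts, generalized to any number of classes.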
# "E is for Exploratory Data Analysis: Categorical Data" > What is Exploratory Data Analysis (EDA), why is it done, and how do we do it in Python? - toc: false - badges: True - comments: true - categories: [E] - hide: False - image: images/e-is-for-eda-text/alphabet-close-up-communication-conceptual-278887.jpg ## _What is **Exploratory Data Analysis(EDA)**?_ While I answered these questions in the [last post](https://educatorsrlearners.github.io/an-a-z-of-machine-learning/e/2020/06/15/e-is-for-eda.html), since [all learning is repetition](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=224340), I'll do it again :grin: EDA is an ethos for how we scrutinize data including, but not limited to: - what we look for (i.e. shapes, trends, outliers) - the approaches we employ (i.e. [five-number summary](https://www.statisticshowto.com/how-to-find-a-five-number-summary-in-statistics/), visualizations) - and the decisions we reach{% fn 1 %} ## _Why is it done?_ Two main reasons: 1. If we collected the data ourselves, we need to know if our data suits our needs or if we need to collect more/different data. 2. If we didn't collect the data ourselves, we need to interrogate the data to answer the "5 W's" - __What__ kind of data do we have (i.e. numeric, categorical)? - __When__ was the data collected? There could be more recent data which we could collect which would better inform our model. - __How__ much data do we have? Also, how was the data collected? - __Why__ was the data collected? The original motivation could highlight potential areas of bias in the data. - __Who__ collected the data? Some of these questions can't necessarily be answered by looking at the data alone which is fine because _[nothing comes from nothing](http://parmenides.me/nothing-comes-from-nothing/)_; someone will know the answers so all we have to do is know where to look and whom to ask. 
## _How do we do it in Python?_

As always, I'll follow the steps outlined in [_Hands-on Machine Learning with Scikit-Learn, Keras & TensorFlow_](https://github.com/ageron/handson-ml/blob/master/ml-project-checklist.md)

### Step 1: Frame the Problem

"Given a set of features, can we determine how old someone needs to be to read a book?"

### Step 2: Get the Data

We'll be using the same dataset as in the [previous post](https://educatorsrlearners.github.io/an-a-z-of-machine-learning/e/2020/06/15/e-is-for-eda.html).

### Step 3: Explore the Data to Gain Insights (i.e. EDA)

As always, import the essential libraries, then load the data.

```
#hide
import warnings; warnings.simplefilter('ignore')

#For data manipulation
import pandas as pd
import numpy as np

#For visualization
import seaborn as sns
import matplotlib.pyplot as plt
import missingno as msno

url = 'https://raw.githubusercontent.com/educatorsRlearners/book-maturity/master/csv/book_info_complete.csv'
df = pd.read_csv(url)
```

To review, ***How much data do we have?***

```
df.shape
```

- 23 features
- one target
- 5,816 observations

***What type of data do we have?***

```
df.info()
```

Looks like mostly categorical with some numeric. Let's take a closer look.

```
df.head().T
```

Again, I collected the data so I know the target is `csm_rating` which is the minimum age Common Sense Media (CSM) says a reader should be for the given book. Also, we have essentially three types of features:

- Numeric
  - `par_rating` : Ratings of the book by parents
  - `kids_rating` : Ratings of the book by children
  - :dart:`csm_rating` : Ratings of the books by Common Sense Media
  - `Number of pages` : Length of the book
  - `Publisher's recommended age(s)`: Self explanatory
- Date
  - `Publication date` : When the book was published
  - `Last updated`: When the book's information was updated on the website

with the rest of the features being categorical and text; these features will be our focus for today.
#### Step 3.1 Housekeeping

Clean the feature names to make inspection easier. {% fn 4 %}

```
df.columns
df.columns = df.columns.str.strip().str.lower().str.replace(' ', '_').str.replace('(', '').str.replace(')', '')
df.columns
```

Much better.

Now let's subset the data frame so we only have the features of interest. Given there are twice as many text features compared to non-text features, and the fact that I'm ~~lazy~~ efficient, I'll create a list of the features I ***don't*** want

```
numeric = ['par_rating', 'kids_rating', 'csm_rating', 'number_of_pages',
           "publisher's_recommended_ages", "publication_date", "last_updated"]
```

and use it to keep the features I ***do*** want.

```
df_strings = df.drop(df[numeric], axis=1)
```

_Voila!_

```
df_strings.head().T
```

Clearly, the non-numeric data falls into two groups:

- text
  - `description`
  - `plot`
  - `csm_review`
  - `need_to_know`
- categories
  - `author`/`authors`
  - `genre`
  - `award`/`awards`
  - etc.

Looking at the output above, so many questions come to mind:

1. How many missing values do we have?
2. How long are the descriptions?
3. What's the difference between `csm_review` and `need_to_know`?
4. Similarly, what's the difference between `description` and `plot`?
5. How many different authors do we have in the dataset?
6. How many types of books do we have?

and I'm sure more will arise once we start.

Where to start? Let's answer the easiest questions first :grin:

## Categories

#### ***How many missing values do we have?***

A cursory glance at the output above indicates there are potentially a ton of missing values; let's inspect this hunch visually.

```
msno.bar(df_strings, sort='descending');
```

Hunch confirmed: 10 of the 17 columns are missing values with some being practically empty.
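Before reaching for any helper package, the same tallies are available in plain pandas; a minimal sketch on a toy frame (the column values here are invented):

```python
import pandas as pd

# Toy frame with some gaps (values invented)
df_toy = pd.DataFrame({
    "author": ["A", None, "C", None],
    "awards": [None, None, None, "Newbery"],
})

# Missing count and percentage per column, highest first
missing = (df_toy.isna().sum().to_frame("missing")
                 .assign(percent=lambda t: 100 * t["missing"] / len(df_toy))
                 .sort_values("missing", ascending=False))
print(missing)
```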
To get a precise count, we can use `sidetable`.{% fn 2 %}

```
import sidetable
df_strings.stb.missing(clip_0=True, style=True)
```

OK, we have lots of missing values and several columns which appear to be measuring similar features (i.e., authors, illustrators, publishers, awards) so let's inspect these features in pairs.

### `author` and `authors`

Every book has an author, even if the author is "[Anonymous](https://bookshop.org/a/9791/9781538718469)," so then why do we essentially have two columns for the same thing?

:thinking: `author` is for books with a single writer whereas `authors` is for books with multiple authors like [_Good Omens_](https://bookshop.org/a/9791/9780060853983). Let's test that theory.

```
msno.matrix(df_strings.loc[:, ['author', 'authors']]);
```

*Bazinga!*

We have a perfect correlation between missing data for `author` and `authors` but let's have a look just in case.

```
df_strings.loc[df_strings['author'].isna() & df_strings["authors"].notna(), ['title', 'author', 'authors']].head()
df_strings.loc[df_strings['author'].notna() & df_strings["authors"].isna(), ['title', 'author', 'authors']].head()
df_strings.loc[df_strings['author'].notna() & df_strings["authors"].notna(), ['title', 'author', 'authors']].head()
```

My curiosity is satiated. Now the question is how to successfully merge the two columns? We could replace the `NaN` in `author` with the:

- values in `authors`
- word `multiple`
- first author in `authors`
- more/most popular of the authors in `authors`

and I'm sure I could come up with even more if I thought about/Googled it but the key is to understand that no matter what we choose, it will have consequences when we build our model{% fn 3 %}.

Next question which comes to mind is:

:thinking: ***How many different authors are there?***

```
df_strings.loc[:, 'author'].nunique()
```

Wow!
Nearly half of our observations contain a unique name meaning this feature has [high cardinality](https://www.kdnuggets.com/2016/08/include-high-cardinality-attributes-predictive-model.html).

:thinking: ***Which authors are most represented in the data set?***

Let's create a [frequency table](https://www.mathsteacher.com.au/year8/ch17_stat/03_freq/freq.htm) to find out.

```
author_counts = df_strings.loc[:, ["title", 'author']].groupby('author').count().reset_index()
author_counts.sort_values('title', ascending=False).head(10)
```

Given that I've scraped the data from a website focusing on children, teens, and young adults, the results above only make sense; authors like [Dr. Seuss](https://bookshop.org/contributors/dr-seuss), [Eoin Colfer](https://bookshop.org/contributors/eoin-colfer-20dba4fd-138e-477e-bca5-75b9fa9bfe2f), and [Lemony Snicket](https://bookshop.org/books?keywords=lemony+snicket) are famous children's authors whereas [Rick Riordan](https://bookshop.org/books?keywords=percy+jackson) and [Walter Dean Myers](https://bookshop.org/books?keywords=Walter+Dean+Myers) occupy the teen/young adult space and [Neil Gaiman](https://bookshop.org/contributors/neil-gaiman) writes across ages.

:thinking: ***How many authors are only represented once?***

That's easy to check.

```
from matplotlib.ticker import FuncFormatter

ax = author_counts['title'].value_counts(normalize=True).nlargest(5).plot.barh()
ax.invert_yaxis();
#Set the x-axis to a percentage
ax.xaxis.set_major_formatter(FuncFormatter(lambda x, _: '{:.0%}'.format(x)))
```

Wow! So approximately 60% of the authors have one title in our data set.
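The chart reads the share off visually; it can also be computed exactly with a one-liner. A sketch on an invented counts series (the real one would be `author_counts['title']`):

```python
import pandas as pd

# Invented titles-per-author counts standing in for author_counts['title']
counts = pd.Series([1, 1, 1, 2, 5, 1], name="title")

# Share of authors with exactly one title in the data set
single_share = (counts == 1).mean()
print(f"{single_share:.0%}")
```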
**Why does that matter?**

When it comes time to build our model we'll need to either [label encode](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html), [one-hot encode](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html), or [hash](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.FeatureHasher.html) this feature and whichever we decide to do will end up affecting the model profoundly due to the high [cardinality](https://pkghosh.wordpress.com/2017/10/09/combating-high-cardinality-features-in-supervised-machine-learning/) of this feature; however, we'll deal with all this another time :grin:.

### `illustrator` and `illustrators`

Missing values can be quite informative.

:thinking: What types of books typically have illustrators?

:bulb: Children's books!

Therefore, if a book's entries for both `illustrator` and `illustrators` are blank, that *probably* means that book doesn't have illustrations which would mean it is *more likely* to be for older children.

Let's test this theory in the simplest way I can think of :smile:

```
#Has an illustrator
df.loc[df['illustrator'].notna() | df['illustrators'].notna(), ['csm_rating']].hist();

#Doesn't have an illustrator
df.loc[df['illustrators'].isna() & df["illustrator"].isna(), ['csm_rating']].hist();
```

:bulb: *Who* the illustrator is doesn't matter as much as *whether* there is an illustrator. Looks like when I do some feature engineering I'll need to create a `has_illustrator` feature.

### `book_type` and `genre`

These two features should be relatively straightforward but we'll have a quick look anyway.

`book_type` should be easy because, after a cursory inspection using `head` above, I'd expect to only see 'fiction' or 'non-fiction' but I'll double check.

```
ax_book_type = df_strings['book_type'].value_counts().plot.barh();
ax_book_type.invert_yaxis()
```

Good!
The only values I have are the ones I expected but the ratio is highly skewed.

:thinking: What impact will this have on our model?

`genre` (e.g. fantasy, romance, sci-fi) is a *far* broader topic than `book_type` but how many different genres are represented in the data set?

```
df_strings['genre'].nunique()
```

:roll_eyes: Great

What's the breakdown?

```
ax_genre = df_strings['genre'].value_counts().plot.barh();
ax_genre.invert_yaxis()
```

That's not super useful but what if I took the 10 most common genres?

```
ax_genre_10 = df_strings['genre'].value_counts(normalize=True).nlargest(10).plot.barh();
ax_genre_10.invert_yaxis()
#Set the x axis to percentage
ax_genre_10.xaxis.set_major_formatter(FuncFormatter(lambda x, _: '{:.0%}'.format(x)))
```

Hmmm. Looks like approximately half the books fall into one of three genres.

:bulb: To reduce dimensionality, recode any genre outside of the top 10 as 'other'. Will save that idea for the feature engineering stage.

### `award` and `awards`

Certain awards (e.g. [The Caldecott Medal](https://cloviscenter.libguides.com/children/caldecott#:~:text=The%20Medal%20shall%20be%20awarded,the%20illustrations%20be%20original%20work.)) are only awarded to children's books whereas others, namely [The RITA Award](https://en.wikipedia.org/wiki/RITA_Award#Winners), are only for "mature" readers.

:thinking: Will knowing if a work is an award winner provide insight?

:thinking: Which awards are represented?

```
award_ax = df_strings['award'].value_counts().plot.barh()
award_ax.invert_yaxis();

awards_ax = df_strings['awards'].str.split(",").explode().str.strip().value_counts().plot.barh()
awards_ax.invert_yaxis()
```

Hmmmmm. The Caldecott Medal is for picture books so that should mean the target readers are very young; however, we've already seen that "picture books" is the second most common value in `genre` so being a Caldecott Medal winner won't add much.
Also, to be eligible for the other awards, a book needs to be aimed at children 14 or below so that doesn't really tell us much either.

Conclusion: drop this feature.

While I could keep going and analyze `publisher`, `publishers`, and `available_on`, I'd be using the exact same techniques as above so, instead, time to move on to...

## Text

### `description`, `plot`, `csm_review`, `need_to_know`

Now for some REALLY fun stuff!

:thinking: How long are each of these observations?

Trying to be as efficient as possible, I'll:

- make a list of the features I want

```
variables = ['description', 'plot', 'csm_review', 'need_to_know']
```

- write a function to:
  - convert the text to lowercase
  - tokenize the text and remove [stop words](https://en.wikipedia.org/wiki/Stop_words)
  - identify the length of each feature

```
from nltk import word_tokenize
from nltk.corpus import stopwords
stop = stopwords.words('english')

def text_process(df, feature):
    df.loc[:, feature+'_tokens'] = df.loc[:, feature].apply(str.lower)
    df.loc[:, feature+'_tokens'] = df.loc[:, feature+'_tokens'].apply(lambda x: [item for item in x.split() if item not in stop])
    df.loc[:, feature+'_len'] = df.loc[:, feature+'_tokens'].apply(len)
    return df
```

- loop through the list of variables, saving the results to the data frame

```
for var in variables:
    df_text = text_process(df_strings, var)

df_text.iloc[:, -8:].head()
```

:thinking: `description` seems to be significantly shorter than the other three. Let's plot them to investigate.

```
len_columns = df_text.columns.str.endswith('len')
df_text.loc[:,len_columns].hist();
plt.tight_layout()
```

Yep - `description` is significantly shorter but how do the other three compare?

```
columns = ['plot_len', 'need_to_know_len', 'csm_review_len']
df_text[columns].plot.box()
plt.xticks(rotation='vertical');
```

Hmmm. Lots of outliers for `csm_review` but, in general, the three features are of similar lengths.
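Those box-plot outliers can be counted with the usual 1.5 x IQR rule; a minimal sketch on invented token lengths (the real column would be `df_text['csm_review_len']`):

```python
import numpy as np

# Invented token lengths standing in for df_text['csm_review_len']
lengths = np.array([180, 190, 200, 210, 220, 230, 650])

# Anything above Q3 + 1.5*IQR is what the box plot flags as an outlier
q1, q3 = np.percentile(lengths, [25, 75])
upper = q3 + 1.5 * (q3 - q1)
outliers = lengths[lengths > upper]
print(outliers)
```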
### Next Steps

While I could create [word clouds](https://www.datacamp.com/community/tutorials/wordcloud-python) to visualize the most frequent words for each feature, or calculate the [sentiment](https://towardsdatascience.com/a-complete-exploratory-data-analysis-and-visualization-for-text-data-29fb1b96fb6a) of each feature, my stated goal is to identify how old someone should be to read a book and not whether a review is good or bad. To that end, my curiosity about these features is satiated so I'm ready to move on to another chapter.

## Summary

- :ballot_box_with_check: numeric data
- :ballot_box_with_check: categorical data
- :black_square_button: images (book covers)

Two down; one to go! Going forward, my key points to remember are:

### What type of categorical data do I have?

There is a huge difference between ordered (i.e. "bad", "good", "great") and truly nominal data that has no order/ranking like different genres; just because ***I*** prefer science fiction to fantasy, it doesn't mean it actually ***is*** superior.

### Are missing values really missing?

Several of the features had missing values which were, in fact, not truly missing; for example, the `award` and `awards` features were mostly blank for a very good reason: the book didn't win one of the four awards recognized by Common Sense Media.

In conclusion, both of the points above can be summarized simply as "be sure to get to know your data."

Happy coding!
#### Footnotes {{ 'Adapted from [_Engineering Statistics Handbook_](https://www.itl.nist.gov/div898/handbook/eda/section1/eda11.htm)' | fndetail: 1 }} {{ 'Be sure to check out this excellent [post](https://beta.deepnote.com/article/sidetable-pandas-methods-you-didnt-know-you-needed) by Jeff Hale for more examples on how to use this package' | fndetail: 2 }} {{ 'See this post on [Smarter Ways to Encode Categorical Data](https://towardsdatascience.com/smarter-ways-to-encode-categorical-data-for-machine-learning-part-1-of-3-6dca2f71b159)' | fndetail: 3 }} {{ 'Big *Thank You* to [Chaim Gluck](https://medium.com/@chaimgluck1/working-with-pandas-fixing-messy-column-names-42a54a6659cd) for providing this tip' | fndetail: 4 }}
``` from IPython.display import Image import torch import torch.nn as nn import torch.nn.functional as F import math, random from scipy.optimize import linear_sum_assignment from utils import NestedTensor, nested_tensor_from_tensor_list, MLP Image(filename="figs/model.png", retina=True) ``` This notebook provides a Pytorch implementation for the sequential variant of PRTR (Pose Regression TRansformers) in [Pose Recognition with Cascade Transformers](https://arxiv.org/abs/2104.06976). It is intended to provide researchers interested in sequential PRTR with a concrete understanding that only code can deliver. It can also be used as a starting point for end-to-end top-down pose estimation research. ``` class PRTR_sequential(nn.Module): def __init__(self, backbone, transformer, transformer_kpt, level, x_res=10, y_res=10): super().__init__() self.backbone = backbone self.transformer = transformer hidden_dim = transformer.d_model self.class_embed = nn.Linear(hidden_dim, 2) self.bbox_embed = MLP(hidden_dim, hidden_dim, 4, 3) self.query_embed = nn.Embedding(100, hidden_dim) self.input_proj = nn.Conv2d(backbone.num_channels, hidden_dim, kernel_size=1) self.transformer_kpt = transformer_kpt x_interpolate = torch.linspace(-1.25, 1.25, x_res, requires_grad=False).unsqueeze(0) # [1, x_res], ANNOT ?(1) y_interpolate = torch.linspace(-1.25, 1.25, y_res, requires_grad=False).unsqueeze(0) # [1, y_res] self.register_buffer("x_interpolate", x_interpolate) self.register_buffer("y_interpolate", y_interpolate) self.x_res = x_res self.y_res = y_res self.level = level mask = torch.zeros(1, y_res, x_res, requires_grad=False) # [1, y_res, x_res] self.register_buffer("mask", mask) self.build_pe() ``` Class `PRTR_sequential` needs the following arguments: + backbone: a customizable CNN backbone which returns a pyramid of feature maps with different spatial size + transformer: a customizable Transformer for person detection (1st Transformer) + transformer_kpt: a customizable Transformer for 
keypoint detection (2nd Transformer)
+ level: from which layers of the pyramid we will extract features
+ x_res: the width of the STN-cropped feature map fed to the 2nd Transformer
+ y_res: the height of the STN-cropped feature map fed to the 2nd Transformer

Some annotations:
1. For `x_interpolate` and `y_interpolate`, we use an extended eyesight of 125% of the original bounding box to provide more information from the backbone to the 2nd Transformer.

```
def build_pe(self):
    # fixed sine pe
    not_mask = 1 - self.mask
    y_embed = not_mask.cumsum(1, dtype=torch.float32)
    x_embed = not_mask.cumsum(2, dtype=torch.float32)
    eps = 1e-6; scale = 2 * math.pi
    # normalize
    y_embed = y_embed / (y_embed[:, -1:, :] + eps) * scale
    x_embed = x_embed / (x_embed[:, :, -1:] + eps) * scale
    num_pos_feats = 128; temperature = 10000
    dim_t = torch.arange(num_pos_feats, dtype=torch.float32, device=self.mask.device)
    dim_t = temperature ** (2 * (dim_t // 2) / num_pos_feats)
    pos_x = x_embed[:, :, :, None] / dim_t
    pos_y = y_embed[:, :, :, None] / dim_t
    pos_x = torch.stack((pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4).flatten(3)
    pos_y = torch.stack((pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4).flatten(3)
    pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)
    self.register_buffer("pe", pos)

    # learnable pe
    self.row_embed = nn.Embedding(num_pos_feats, self.x_res)
    self.col_embed = nn.Embedding(num_pos_feats, self.y_res)
    nn.init.uniform_(self.row_embed.weight)
    nn.init.uniform_(self.col_embed.weight)

def get_leant_pe(self):
    y_embed = self.col_embed.weight.unsqueeze(-1).expand(-1, -1, self.x_res)
    x_embed = self.row_embed.weight.unsqueeze(1).expand(-1, self.y_res, -1)
    embed = torch.cat([y_embed, x_embed], dim=0).unsqueeze(0)
    return embed

PRTR_sequential.build_pe = build_pe
PRTR_sequential.get_leant_pe = get_leant_pe
```

Then we build the positional embedding for the 2nd Transformer, which combines a fixed sinusoidal embedding with a learnt embedding.
For each box containing a person cropped from the original image, we use the same positional embedding, irrespective of where the box is.

```
def forward(self, samples):
    # the 1st Transformer, to detect person
    features, pos = self.backbone(samples)
    hs = self.transformer(self.input_proj(features[-1].tensors), features[-1].mask, self.query_embed.weight, pos[-1])[0][-1]  # [B, person per image, f]
    logits = self.class_embed(hs)  # [B, person per image, 2]
    bboxes = self.bbox_embed(hs).sigmoid()  # [B, person per image, 4]
    outputs = {'pred_logits': logits, 'pred_boxes': bboxes}

    # some preparation for STN feature cropping
    person_per_image = hs.size(1)
    num_person = person_per_image * hs.size(0)
    heights, widths = samples.get_shape().unbind(-1)  # [B] * 2
    rh = heights.repeat_interleave(person_per_image)  # [person per image * B]
    rw = widths.repeat_interleave(person_per_image)  # [person per image * B]
    srcs = [features[_].decompose()[0] for _ in self.level]
    cx, cy, w, h = bboxes.flatten(end_dim=1).unbind(-1)  # [person per image * B] * 4
    cx, cy, w, h = cx * rw, cy * rh, w * rw, h * rh  # ANNOT (1)

    # STN cropping
    y_grid = (h.unsqueeze(-1) @ self.y_interpolate + cy.unsqueeze(-1) * 2 - 1).unsqueeze(-1).unsqueeze(-1)  # [person per image * B, y_res, 1, 1]
    x_grid = (w.unsqueeze(-1) @ self.x_interpolate + cx.unsqueeze(-1) * 2 - 1).unsqueeze(-1).unsqueeze(1)  # [person per image * B, 1, x_res, 1]
    grid = torch.cat([x_grid.expand(-1, self.y_res, -1, -1), y_grid.expand(-1, -1, self.x_res, -1)], dim=-1)
    cropped_feature = []
    cropped_pos = []
    for j, l in enumerate(self.level):
        cropped_feature.append(F.grid_sample(srcs[j].expand(num_person, -1, -1, -1), grid, padding_mode="border"))  # [person per image * B, C, y_res, x_res]
    cropped_feature = torch.cat(cropped_feature, dim=1)
    cropped_pos.append(self.pe.expand(num_person, -1, -1, -1))
    cropped_pos.append(self.get_leant_pe().expand(num_person, -1, -1, -1))
    cropped_pos = torch.cat(cropped_pos, dim=1)
    mask = self.mask.bool().expand(num_person, -1, -1)  # ANNOT (2)
    # 2nd Transformer
    coord, logits = self.transformer_kpt(bboxes, cropped_feature, cropped_pos, mask)  # [person per image * B, 17, 2]
    outputs["pred_kpt_coord"] = coord.reshape(hs.size(0), -1, self.transformer_kpt.num_queries, 2)
    outputs["pred_kpt_logits"] = logits.reshape(hs.size(0), -1, self.transformer_kpt.num_queries, self.transformer_kpt.num_kpts + 1)
    return outputs

PRTR_sequential.forward = forward
```

The `forward` method takes in a `NestedTensor` and returns a dictionary of predictions. Some annotations:
1. Input `samples` and `features` are `NestedTensor`s, which basically stack a list of tensors of different shapes by their top-left corner and use masks to denote valid positions. Thus, when we need to crop person bounding boxes from the whole feature map, we need to scale the boxes according to image size.
2. We always give an unmasked image to the 2nd Transformer, because all the persons are cropped to the same resolution.

```
def infer(self, samples):
    self.eval()
    outputs = self(samples)
    out_logits, out_coord = outputs['pred_kpt_logits'], outputs['pred_kpt_coord']
    C_stacked = out_logits[..., 1:].transpose(2, 3).flatten(0, 1).detach().cpu().numpy()  # [person per image * B, 17, num queries (for keypoint)]
    out_coord = out_coord.flatten(0, 1)
    coord_holder = []
    for b, C in enumerate(C_stacked):
        _, query_ind = linear_sum_assignment(-C)
        coord_holder.append(out_coord[b, query_ind.tolist()])
    matched_coord = torch.stack(coord_holder, dim=0).reshape(out_logits.size(0), out_logits.size(1), 17, -1)
    return matched_coord  # [B, num queries, num kpts, 2]

PRTR_sequential.infer = infer
```

`infer` takes the same input as `forward`, but instead of returning all keypoint queries for loss calculation, it leverages the Hungarian algorithm to select the 17 keypoints as the prediction.
The selection process can be thought of as a bipartite graph matching problem, with the graph constructed as below:

+ for each query in the 2nd Transformer a node is made, creating set Q
+ for each keypoint type, a node is made, creating set K
+ set Q and K are fully inter-connected; the edge weight between $Q_i$ and $K_j$ is the _unnormalized logit_ of query $i$ classified as keypoint type $j$
+ Q and K have no intra-connection; the Hungarian algorithm will find the matching between Q and K with the highest total edge weight, and the selected queries are returned as the prediction.

A minimal example with only 3 queries and 2 keypoint types is shown below:

![](figs/readout.png)

```
class DETR_kpts(nn.Module):
    def __init__(self, transformer, num_kpts, num_queries, input_dim):
        super().__init__()
        self.num_kpts = num_kpts
        self.num_queries = num_queries
        hidden_dim = transformer.d_model
        self.query_embed = nn.Embedding(num_queries, hidden_dim)
        self.input_proj = nn.Conv2d(input_dim, hidden_dim, kernel_size=1)
        self.transformer = transformer
        self.coord_predictor = MLP(hidden_dim, hidden_dim, 2, num_layers=3)
        self.class_predictor = nn.Linear(hidden_dim, num_kpts + 1)

    def forward(self, bboxes, features, pos, mask):
        src_proj = self.input_proj(features)
        j_embed = self.transformer(src_proj, mask, self.query_embed.weight, pos)[0][-1]  # [B, num queries, hidden dim]
        j_coord_ = self.coord_predictor(j_embed).sigmoid()
        x, y = j_coord_.unbind(-1)  # [B, Q] * 2
        x = (x * 1.25 - 0.625) * bboxes[:, 2].unsqueeze(-1) + bboxes[:, 0].unsqueeze(-1)
        y = (y * 1.25 - 0.625) * bboxes[:, 3].unsqueeze(-1) + bboxes[:, 1].unsqueeze(-1)
        x = x.clamp(0, 1)
        y = y.clamp(0, 1)
        j_coord = torch.stack([x, y], dim=-1)
        j_class = self.class_predictor(j_embed)  # [B, J, c+1], logits
        return j_coord, j_class
```

Class `DETR_kpts` is the 2nd Transformer in PRTR and needs the following arguments:
+ transformer: a customizable Transformer for keypoint detection (2nd Transformer)
+ num_kpts: number of keypoint annotations per person of this dataset, e.g.,
COCO has 17 keypoints
+ num_queries: query number, similar to DETR
+ input_dim: image feature dimension from the 1st Transformer

Its `forward` takes in `bboxes` because we need to recover per-person predictions to whole-image coordinates, plus `features`, `pos` and `mask` as Transformer input. `forward` returns predicted keypoint coordinates in [0, 1], relative to the whole image, and their probability of belonging to each keypoint class, e.g. nose, left shoulder.
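The Hungarian readout used in `infer` can be demonstrated in isolation. A minimal sketch with invented logits, mirroring the cost matrix orientation in `infer` (keypoint types by queries):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Invented logits: rows are keypoint types (nose, left shoulder),
# columns are the 2nd Transformer's queries
C = np.array([
    [0.9, 0.8, 0.2],   # "nose" logit for queries 0..2
    [0.1, 0.7, 0.6],   # "left shoulder" logit for queries 0..2
])

# Maximizing total logit = minimizing its negation, exactly as in `infer`
_, query_ind = linear_sum_assignment(-C)
print(query_ind)  # one query chosen per keypoint type
```

Greedy selection would also give query 0 for "nose" here, but the Hungarian algorithm guarantees the globally best one-to-one matching even when several keypoint types prefer the same query.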
## Baltic test case configuration Diagnostics output to close heat, salt, thickness budgets, and derive watermass transformation. This notebook is a working space to explore that output. ``` import xarray as xr import numpy as np from xhistogram.xarray import histogram ### Data loading, grabbed from MOM6-analysis cookbook # Load data on native grid rootdir = '/archive/gam/MOM6-examples/ice_ocean_SIS2/Baltic_OM4_025/tutorial_wmt/' gridname = 'natv' prefix = '19000101.ocean_' # Diagnostics were saved into different files suffixs = ['thck','heat','salt','surf','xtra'] ds = xr.Dataset() for suffix in suffixs: filename = prefix+gridname+'_'+suffix+'*.nc' dsnow = xr.open_mfdataset(rootdir+filename) ds = xr.merge([ds,dsnow]) gridname = '19000101.ocean_static.nc' grid = xr.open_dataset(rootdir+gridname).squeeze() # Specify constants for the reference density and the specific heat capacity rho0 = 1035. Cp = 3992. # Specify the diffusive tendency terms processes=['boundary forcing','vertical diffusion','neutral diffusion', 'frazil ice','internal heat'] terms = {} terms['heat'] = {'boundary forcing':'boundary_forcing_heat_tendency', 'vertical diffusion':'opottempdiff', 'neutral diffusion':'opottemppmdiff', 'frazil ice':'frazil_heat_tendency', 'internal heat':'internal_heat_heat_tendency'} terms['salt'] = {'boundary forcing':'boundary_forcing_salt_tendency', 'vertical diffusion':'osaltdiff', 'neutral diffusion':'osaltpmdiff', 'frazil ice':None, 'internal heat':None} terms['thck'] = {'boundary forcing':'boundary_forcing_h_tendency', 'vertical diffusion':None, 'neutral diffusion':None, 'frazil ice':None, 'internal heat':None} colors = {'boundary forcing':'tab:blue', 'vertical diffusion':'tab:orange', 'neutral diffusion':'tab:green', 'frazil ice':'tab:red', 'internal heat':'tab:purple'} ``` *** 11/11/20 gmac In equating the content tendency output by the model with the tendency of the materially conserved tracer (e.g. 
heat tendency and temperature), I think I am making an error by not accommodating changes in thickness. The product rule shows clearly that $h\dot{\lambda} \neq \dot{(h\lambda)}$, and it is the LHS that we wish to have in the WMT expression. Here, try applying a correction for $\lambda\dot{h}$.

*[But, look again carefully at MOM5_elements, Eq. 36.87, which equates the two. There is no thickness rate of change on the LHS. This is true due to continuity, **except** in the presence of a surface volume flux. This is what is then explored in Section 36.8.6.]*

```
G_prior = xr.Dataset()
G = xr.Dataset()

budget = 'salt'

# Specify the tracer, its range and bin widths (\delta\lambda) for the calculation
if budget == 'heat':
    tracer = ds['temp']
    delta_l = 0.2
    lmin = -2
    lmax = 10
elif budget == 'salt':
    tracer = ds['salt']
    delta_l = 0.2
    lmin = 2
    lmax = 36
bins = np.arange(lmin, lmax, delta_l)

for process in processes:
    term = terms[budget][process]
    if term is not None:
        nanmask = np.isnan(ds[term])
        tendency = ds[term]
        if budget == 'heat':
            tendency /= Cp*rho0
        # Calculate G prior to thickness correction
        G_prior[process] = histogram(tracer.where(~nanmask).squeeze(),
                                     bins=[bins],
                                     dim=['xh', 'yh', 'zl'],
                                     weights=(rho0*tendency*grid['areacello']).where(~nanmask).squeeze()
                                     )/np.diff(bins)
        # Accommodate thickness changes if nonzero
        term_thck = terms['thck'][process]
        if term_thck is not None:
            tendency -= tracer*ds[term_thck]
        G[process] = histogram(tracer.where(~nanmask).squeeze(),
                               bins=[bins],
                               dim=['xh', 'yh', 'zl'],
                               weights=(rho0*tendency*grid['areacello']).where(~nanmask).squeeze()
                               )/np.diff(bins)

for process in G.data_vars:
    G_prior[process].plot(label=process, color=colors[process], linestyle=':')
    G[process].plot(label=process, color=colors[process])
```
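The thickness correction and binning above can be sanity-checked on toy numpy arrays: subtract $\lambda\dot{h}$ from the content tendency, then bin the resulting $h\dot{\lambda}$ by tracer value. All field values below are invented; the real calculation uses `xhistogram` over the model output:

```python
import numpy as np

rho0 = 1035.

# Invented 1-D stand-ins for the model fields
area = np.array([1.0, 1.0, 2.0, 2.0])          # cell area
tracer = np.array([3.1, 3.4, 3.6, 3.9])        # lambda (e.g. salinity)
dt_hl = np.array([0.2, 0.1, -0.1, 0.3])        # content tendency, d(h*lambda)/dt
dt_h = np.array([0.05, 0.0, -0.02, 0.1])       # thickness tendency, dh/dt

# Product rule: h*dlambda/dt = d(h*lambda)/dt - lambda*dh/dt
h_dlambda = dt_hl - tracer * dt_h

# Bin the corrected tendency by tracer value, per unit bin width
bins = np.arange(3.0, 4.1, 0.5)
G, _ = np.histogram(tracer, bins=bins, weights=rho0 * h_dlambda * area)
G = G / np.diff(bins)
print(G)
```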
## 1. Import necessary packages

For this exercise we need

* pandas
* train_test_split
* LogisticRegression
* pyplot from matplotlib
* KNeighborsClassifier
* RandomForestClassifier
* DummyClassifier

```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.dummy import DummyClassifier
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, fbeta_score, classification_report
from sklearn.metrics import roc_curve, precision_recall_curve, roc_auc_score
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

%matplotlib inline
```

## 2. Load and prepare the dataset

* Load the training data into a dataframe named df_train_data (this step is done for you).
* Create a binary classification problem - rename some class labels (this step done for you).
* Create a dataframe of 9 features named X.
* Create a dataframe of labels named y.
* Split the data into a training set and a test set.

```
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/statlog/shuttle/shuttle.tst'
df = pd.read_csv(url, header=None, sep=' ')
df.head()

df.loc[df[9] != 4, 9] = 0
df.loc[df[9] == 4, 9] = 1

X = df.drop([9], axis=1)
y = df[9]

X_train, X_test, y_train, y_test = train_test_split(X, y)
print('There are {} training samples and {} test samples'.format(X_train.shape[0], X_test.shape[0]))
```

## 3. Create the model

* Instantiate a Logistic Regression classifier with a lbfgs solver.
* Fit the classifier to the data.

```
lr = LogisticRegression(solver='lbfgs', penalty='none', max_iter=1000)
lr.fit(X_train, y_train)
```

## 4.
Calculate Accuracy

```
lr.score(X_test, y_test)
```

## 5. Dummy Classifier

```
dummy = DummyClassifier(strategy='uniform')
dummy.fit(X_train, y_train)
dummy.score(X_test, y_test)
```

## 6. Confusion Matrix

```
y_pred = lr.predict(X_test)
confusion = confusion_matrix(y_test, y_pred)
print(confusion)

def plot_confusion_matrix(cm, target_names, title='Confusion matrix', cmap=None, normalize=True):
    """
    given a sklearn confusion matrix (cm), make a nice plot

    Arguments
    ---------
    cm:           confusion matrix from sklearn.metrics.confusion_matrix

    target_names: given classification classes such as [0, 1, 2]
                  the class names, for example: ['high', 'medium', 'low']

    title:        the text to display at the top of the matrix

    cmap:         the gradient of the values displayed from matplotlib.pyplot.cm
                  see http://matplotlib.org/examples/color/colormaps_reference.html
                  plt.get_cmap('jet') or plt.cm.Blues

    normalize:    If False, plot the raw numbers
                  If True, plot the proportions

    Usage
    -----
    plot_confusion_matrix(cm           = cm,             # confusion matrix created by
                                                         # sklearn.metrics.confusion_matrix
                          normalize    = True,           # show proportions
                          target_names = y_labels_vals,  # list of names of the classes
                          title        = best_estimator_name)  # title of graph

    Citation
    --------
    http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
    """
    import matplotlib.pyplot as plt
    import numpy as np
    import itertools

    accuracy = np.trace(cm) / float(np.sum(cm))
    misclass = 1 - accuracy

    if cmap is None:
        cmap = plt.get_cmap('Blues')

    plt.figure(figsize=(8, 6))
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()

    if target_names is not None:
        tick_marks = np.arange(len(target_names))
        plt.xticks(tick_marks, target_names, rotation=45)
        plt.yticks(tick_marks, target_names)

    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]

    thresh = cm.max() / 1.5 if normalize else cm.max() / 2
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        if normalize:
            plt.text(j, i,
                     "{:0.4f}".format(cm[i, j]),
                     horizontalalignment="center",
                     color="white" if cm[i, j] > thresh else "black")
        else:
            plt.text(j, i, "{:,}".format(cm[i, j]),
                     horizontalalignment="center",
                     color="white" if cm[i, j] > thresh else "black")

    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label\naccuracy={:0.4f}; misclass={:0.4f}'.format(accuracy, misclass))
    plt.show()
```

## 7. Plot a nicer confusion matrix (Use the plot_confusion_matrix function)

```
plot_confusion_matrix(cm=confusion, target_names=['Negative', 'Positive'], title='Confusion Matrix', normalize=False)
```

## 8. Calculate Metrics

```
accuracy = accuracy_score(y_test, y_pred)
precision = precision_score(y_test, y_pred)
recall = recall_score(y_test, y_pred)
f1 = f1_score(y_test, y_pred)
fbeta_precision = fbeta_score(y_test, y_pred, beta=0.5)
fbeta_recall = fbeta_score(y_test, y_pred, beta=2)

print('Accuracy score: {}'.format(accuracy))
print('Precision score: {}'.format(precision))
print('Recall score: {}'.format(recall))
print('F1 score: {}'.format(f1))
print('Fbeta score favoring precision: {}'.format(fbeta_precision))
print('FBeta score favoring recall: {}'.format(fbeta_recall))
```

## 9. Print a classification report

```
report = classification_report(y_test, y_pred, target_names=['Negative', 'Positive'])
print(report)
```

## 10. Plot ROC Curve and AUC

```
probs = lr.predict_proba(X_test)[:, 1]
fpr, tpr, thresholds = roc_curve(y_test, probs)

fig = plt.figure(figsize=(6, 6))
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve for Logistic Regression Model')
plt.show()

auc = roc_auc_score(y_test, probs)
print('Area under the ROC curve: {:.3f}'.format(auc))
```

## 11.
Plot Precision-Recall Curve ``` # precision_recall_curve expects scores or probabilities, not hard 0/1 predictions pres, rec, thresholds = precision_recall_curve(y_test, probs) fig = plt.figure(figsize = (6, 6)) plt.plot(rec, pres) plt.xlabel('Recall') plt.ylabel('Precision') plt.title('Precision-Recall Curve') plt.show() c_vals = np.arange(0.05, 1.5, 0.05) test_accuracy = [] train_accuracy = [] for c in c_vals: lr = LogisticRegression(solver='lbfgs', penalty='l2', C=c, max_iter=1000) lr.fit(X_train, y_train) test_accuracy.append(lr.score(X_test, y_test)) train_accuracy.append(lr.score(X_train, y_train)) fig = plt.figure(figsize=(8, 4)) ax1 = fig.add_subplot(1, 1, 1) ax1.plot(c_vals, test_accuracy, '-g', label='Test Accuracy') ax1.plot(c_vals, train_accuracy, '-b', label='Train Accuracy') ax1.set(xlabel='C', ylabel='Accuracy') ax1.set_title('Effect of C on Accuracy') ax1.legend() plt.show() ``` ## 12. Cross Validation ``` clf = LogisticRegression(solver='lbfgs', max_iter=1000) cv_scores = cross_val_score(clf, X_train, y_train, cv = 5) print('Accuracy scores for the 5 folds: ', cv_scores) print('Mean cross validation score: {:.3f}'.format(np.mean(cv_scores))) ``` ## 13. Is this really linear? ``` knn = KNeighborsClassifier(n_neighbors=7) # Then fit the model knn.fit(X_train, y_train) # How well did we do knn_7_score = knn.score(X_test, y_test) print('Accuracy of KNN (k = 7): {:.3f}'.format(knn_7_score)) ``` ## 14. Random Forest ``` rf = RandomForestClassifier(n_estimators = 22, random_state = 40) rf.fit(X_train, y_train) rf_score = rf.score(X_test, y_test) print('Accuracy of Random Forest: {:.3f}'.format(rf_score)) ```
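As a sanity check on what sections 8–9 report, the same metrics can be computed by hand from confusion-matrix counts. A minimal sketch with made-up counts (these numbers are for illustration only, not the output of the model above):

```python
# By-hand versions of accuracy, precision, recall, F1, and F-beta,
# computed from toy confusion-matrix counts (not from the model above).
tp, fp, fn, tn = 40, 5, 10, 45

accuracy = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

def fbeta(precision, recall, beta):
    # F-beta weights recall beta times as heavily as precision;
    # beta=1 recovers F1, beta<1 favors precision, beta>1 favors recall
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"recall={recall:.3f} f1={f1:.3f}")
print(f"F0.5={fbeta(precision, recall, 0.5):.3f} "
      f"F2={fbeta(precision, recall, 2):.3f}")
```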
``` """ @Time : 15/12/2020 19:01 @Author : Alaa Grable """ !pip install transformers==3.0.0 !pip install emoji import gc #import os import emoji import re import string import numpy as np import pandas as pd import torch import torch.nn as nn from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report, accuracy_score from transformers import AutoModel from transformers import BertModel, BertTokenizer class BERT_Arch(nn.Module): def __init__(self, bert): super(BERT_Arch, self).__init__() # use the (frozen) model passed in; do not re-load a fresh copy self.bert = bert self.conv = nn.Conv2d(in_channels=13, out_channels=13, kernel_size=(3, 768), padding=0) self.relu = nn.ReLU() # change the kernel size either to (3,1), e.g. 1D max pooling # or remove it altogether self.pool = nn.MaxPool2d(kernel_size=(3, 1), stride=1) self.dropout = nn.Dropout(0.1) # be careful here, this needs to be changed according to your max pooling # without pooling: 13*34 = 442, with 3x1 pooling: 13*32 = 416 self.fc = nn.Linear(416, 3) self.flat = nn.Flatten() self.softmax = nn.LogSoftmax(dim=1) def forward(self, sent_id, mask): _, _, all_layers = self.bert(sent_id, attention_mask=mask, output_hidden_states=True) # all_layers = [13, batch_size, seq_len, 768] x = torch.transpose(torch.cat(tuple([t.unsqueeze(0) for t in all_layers]), 0), 0, 1) del all_layers gc.collect() torch.cuda.empty_cache() x = self.pool(self.dropout(self.relu(self.conv(self.dropout(x))))) x = self.fc(self.dropout(self.flat(self.dropout(x)))) return self.softmax(x) def read_dataset(): data = pd.read_csv("/content/BERT-CNN-Fine-Tuning-For-Hate-Speech-Detection-in-Online-Social-Media/labeled_data.csv") data = data.drop(['count', 'hate_speech', 'offensive_language', 'neither'], axis=1) #data = data.loc[0:9599,:] print(len(data)) return data['tweet'].tolist(), data['class'] def pre_process_dataset(values): new_values = list() # Emoticons emoticons = [':-)', ':)', '(:', '(-:', ':))', '((:', ':-D', ':D', 'X-D', 'XD',
'xD', '<3', '</3', ':\*', ';-)', ';)', ';-D', ';D', '(;', '(-;', ':-(', ':(', '(:', '(-:', ':,(', ':\'(', ':"(', ':((', ':D', '=D', '=)', '(=', '=(', ')=', '=-O', 'O-=', ':o', 'o:', 'O:', 'O:', ':-o', 'o-:', ':P', ':p', ':S', ':s', ':@', ':>', ':<', '^_^', '^.^', '>.>', 'T_T', 'T-T', '-.-', '*.*', '~.~', ':*', ':-*', 'xP', 'XP', 'XP', 'Xp', ':-|', ':->', ':-<', '$_$', '8-)', ':-P', ':-p', '=P', '=p', ':*)', '*-*', 'B-)', 'O.o', 'X-(', ')-X'] for value in values: # Remove dots text = value.replace(".", "").lower() text = re.sub(r"[^a-zA-Z?.!,¿]+", " ", text) users = re.findall("[@]\w+", text) for user in users: text = text.replace(user, "<user>") urls = re.findall(r'(https?://[^\s]+)', text) if len(urls) != 0: for url in urls: text = text.replace(url, "<url >") for emo in text: if emo in emoji.UNICODE_EMOJI: text = text.replace(emo, "<emoticon >") for emo in emoticons: text = text.replace(emo, "<emoticon >") numbers = re.findall('[0-9]+', text) for number in numbers: text = text.replace(number, "<number >") text = text.replace('#', "<hashtag >") text = re.sub(r"([?.!,¿])", r" ", text) text = "".join(l for l in text if l not in string.punctuation) text = re.sub(r'[" "]+', " ", text) new_values.append(text) return new_values def data_process(data, labels): input_ids = [] attention_masks = [] bert_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") for sentence in data: bert_inp = bert_tokenizer.__call__(sentence, max_length=36, padding='max_length', pad_to_max_length=True, truncation=True, return_token_type_ids=False) input_ids.append(bert_inp['input_ids']) attention_masks.append(bert_inp['attention_mask']) #del bert_tokenizer #gc.collect() #torch.cuda.empty_cache() input_ids = np.asarray(input_ids) attention_masks = np.array(attention_masks) labels = np.array(labels) return input_ids, attention_masks, labels def load_and_process(): data, labels = read_dataset() num_of_labels = len(labels.unique()) input_ids, attention_masks, labels = 
data_process(pre_process_dataset(data), labels) return input_ids, attention_masks, labels # function to train the model def train(): model.train() total_loss, total_accuracy = 0, 0 # empty list to save model predictions total_preds = [] # iterate over batches total = len(train_dataloader) for i, batch in enumerate(train_dataloader): step = i+1 percent = "{0:.2f}".format(100 * (step / float(total))) lossp = "{0:.2f}".format(total_loss/(total*batch_size)) filledLength = int(100 * step // total) bar = '█' * filledLength + '>' *(filledLength < 100) + '.' * (99 - filledLength) print(f'\rBatch {step}/{total} |{bar}| {percent}% complete, loss={lossp}, accuracy={total_accuracy}', end='') # push the batch to gpu batch = [r.to(device) for r in batch] sent_id, mask, labels = batch del batch gc.collect() torch.cuda.empty_cache() # clear previously calculated gradients model.zero_grad() # get model predictions for the current batch #sent_id = torch.tensor(sent_id).to(device).long() preds = model(sent_id, mask) # compute the loss between actual and predicted values loss = cross_entropy(preds, labels) # add on to the total loss total_loss += float(loss.item()) # backward pass to calculate the gradients loss.backward() # clip the the gradients to 1.0. It helps in preventing the exploding gradient problem torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) # update parameters optimizer.step() # model predictions are stored on GPU. So, push it to CPU #preds = preds.detach().cpu().numpy() # append the model predictions #total_preds.append(preds) total_preds.append(preds.detach().cpu().numpy()) gc.collect() torch.cuda.empty_cache() # compute the training loss of the epoch avg_loss = total_loss / (len(train_dataloader)*batch_size) # predictions are in the form of (no. of batches, size of batch, no. of classes). # reshape the predictions in form of (number of samples, no. 
of classes) total_preds = np.concatenate(total_preds, axis=0) # returns the loss and predictions return avg_loss, total_preds # function for evaluating the model def evaluate(): print("\n\nEvaluating...") # deactivate dropout layers model.eval() total_loss, total_accuracy = 0, 0 # empty list to save the model predictions total_preds = [] # iterate over batches total = len(val_dataloader) for i, batch in enumerate(val_dataloader): step = i+1 percent = "{0:.2f}".format(100 * (step / float(total))) lossp = "{0:.2f}".format(total_loss/(total*batch_size)) filledLength = int(100 * step // total) bar = '█' * filledLength + '>' * (filledLength < 100) + '.' * (99 - filledLength) print(f'\rBatch {step}/{total} |{bar}| {percent}% complete, loss={lossp}, accuracy={total_accuracy}', end='') # push the batch to gpu batch = [t.to(device) for t in batch] sent_id, mask, labels = batch del batch gc.collect() torch.cuda.empty_cache() # deactivate autograd with torch.no_grad(): # model predictions preds = model(sent_id, mask) # compute the validation loss between actual and predicted values loss = cross_entropy(preds, labels) total_loss += float(loss.item()) #preds = preds.detach().cpu().numpy() #total_preds.append(preds) total_preds.append(preds.detach().cpu().numpy()) gc.collect() torch.cuda.empty_cache() # compute the validation loss of the epoch avg_loss = total_loss / (len(val_dataloader)*batch_size) # reshape the predictions in form of (number of samples, no. 
of classes) total_preds = np.concatenate(total_preds, axis=0) return avg_loss, total_preds # Specify the GPU # Setting up the device for GPU usage device = 'cuda' if torch.cuda.is_available() else 'cpu' print(device) # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Load Data-set ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~# input_ids, attention_masks, labels = load_and_process() df = pd.DataFrame(list(zip(input_ids, attention_masks)), columns=['input_ids', 'attention_masks']) # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~# # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ class distribution ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~# # class = class label for majority of CF users. 0 - hate speech 1 - offensive language 2 - neither # ~~~~~~~~~~ Split train data-set into train, validation and test sets ~~~~~~~~~~# train_text, temp_text, train_labels, temp_labels = train_test_split(df, labels, random_state=2018, test_size=0.2, stratify=labels) val_text, test_text, val_labels, test_labels = train_test_split(temp_text, temp_labels, random_state=2018, test_size=0.5, stratify=temp_labels) del temp_text gc.collect() torch.cuda.empty_cache() train_count = len(train_labels) test_count = len(test_labels) val_count = len(val_labels) # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~# # ~~~~~~~~~~~~~~~~~~~~~ Import BERT Model and BERT Tokenizer ~~~~~~~~~~~~~~~~~~~~~# # import BERT-base pretrained model bert = AutoModel.from_pretrained('bert-base-uncased') # bert = AutoModel.from_pretrained('bert-base-uncased') # Load the BERT tokenizer #tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased') # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~# # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Tokenization ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~# # for train set train_seq = torch.tensor(train_text['input_ids'].tolist()) train_mask = torch.tensor(train_text['attention_masks'].tolist()) train_y = torch.tensor(train_labels.tolist()) # for 
validation set val_seq = torch.tensor(val_text['input_ids'].tolist()) val_mask = torch.tensor(val_text['attention_masks'].tolist()) val_y = torch.tensor(val_labels.tolist()) # for test set test_seq = torch.tensor(test_text['input_ids'].tolist()) test_mask = torch.tensor(test_text['attention_masks'].tolist()) test_y = torch.tensor(test_labels.tolist()) # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~# # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Create DataLoaders ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~# from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler # define a batch size batch_size = 32 # wrap tensors train_data = TensorDataset(train_seq, train_mask, train_y) # sampler for sampling the data during training train_sampler = RandomSampler(train_data) # dataLoader for train set train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=batch_size) # wrap tensors val_data = TensorDataset(val_seq, val_mask, val_y) # sampler for sampling the data during training val_sampler = SequentialSampler(val_data) # dataLoader for validation set val_dataloader = DataLoader(val_data, sampler=val_sampler, batch_size=batch_size) # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~# # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Freeze BERT Parameters ~~~~~~~~~~~~~~~~~~~~~~~~~~~~# # freeze all the parameters for param in bert.parameters(): param.requires_grad = False # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~# # pass the pre-trained BERT to our define architecture model = BERT_Arch(bert) # push the model to GPU model = model.to(device) # optimizer from hugging face transformers from transformers import AdamW # define the optimizer optimizer = AdamW(model.parameters(), lr=2e-5) #from sklearn.utils.class_weight import compute_class_weight # compute the class weights #class_wts = compute_class_weight('balanced', np.unique(train_labels), train_labels) 
#print(class_wts) # convert class weights to tensor #weights = torch.tensor(class_wts, dtype=torch.float) #weights = weights.to(device) # loss function #cross_entropy = nn.NLLLoss(weight=weights) cross_entropy = nn.NLLLoss() # set initial loss to infinite best_valid_loss = float('inf') # empty lists to store training and validation loss of each epoch #train_losses = [] #valid_losses = [] #if os.path.isfile("/content/drive/MyDrive/saved_weights.pth") == False: #if os.path.isfile("saved_weights.pth") == False: # number of training epochs epochs = 3 current = 1 # for each epoch while current <= epochs: print(f'\nEpoch {current} / {epochs}:') # train model train_loss, _ = train() # evaluate model valid_loss, _ = evaluate() # save the best model if valid_loss < best_valid_loss: best_valid_loss = valid_loss #torch.save(model.state_dict(), 'saved_weights.pth') # append training and validation loss #train_losses.append(train_loss) #valid_losses.append(valid_loss) print(f'\n\nTraining Loss: {train_loss:.3f}') print(f'Validation Loss: {valid_loss:.3f}') current = current + 1 #else: #print("Got weights!") # load weights of best model #model.load_state_dict(torch.load("saved_weights.pth")) #model.load_state_dict(torch.load("/content/drive/MyDrive/saved_weights.pth"), strict=False) # get predictions for test data gc.collect() torch.cuda.empty_cache() with torch.no_grad(): preds = model(test_seq.to(device), test_mask.to(device)) #preds = model(test_seq, test_mask) preds = preds.detach().cpu().numpy() print("Performance:") # model's performance preds = np.argmax(preds, axis=1) print('Classification Report') print(classification_report(test_y, preds)) print("Accuracy: " + str(accuracy_score(test_y, preds))) tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model ```
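The `nn.Linear(416, 3)` input size in `BERT_Arch` follows directly from the tensor shapes. A quick pure-Python check of that arithmetic (assuming `max_length=36`, as used in `data_process`; the kernel sizes are the ones from the module above):

```python
def cnn_head_features(num_layers=13, seq_len=36, hidden=768,
                      conv_kernel=(3, 768), pool_kernel=(3, 1)):
    # Conv2d with no padding, stride 1: out = in - kernel + 1 per dimension
    h = seq_len - conv_kernel[0] + 1   # 36 -> 34
    w = hidden - conv_kernel[1] + 1    # 768 -> 1
    # MaxPool2d with stride 1: same formula
    h = h - pool_kernel[0] + 1         # 34 -> 32
    w = w - pool_kernel[1] + 1         # 1 -> 1
    # Flatten: channels * height * width
    return num_layers * h * w

print(cnn_head_features())  # 13 * 32 * 1 = 416, matching nn.Linear(416, 3)
```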
``` # Convenient jupyter setup %load_ext autoreload %autoreload 2 from src.constants import GEDI_L2A_PATH from src.utils.os import list_content from src.utils.download import download from tqdm.autonotebook import tqdm import geopandas as gpd save_dir = GEDI_L2A_PATH / "v002" / "amazon_basin" feather_files = list(save_dir.glob("*/*.feather")) print(f"There are {len(feather_files)} feather files.") ``` ## Count number of total shots Takeaway: >It takes about 2-3 seconds to read a simple feather dataframe into geopandas. This means in total it will take about 3-4 h to read in all the data. > From a rough look at the first 100 samples, there will be about 500 Mio - 1 B shots over the Amazon. > Exact number: 452'202'228 (450 Mio.) Note: if we just want to get the length, we can also read via pandas: ``` import pandas as pd n_shots = 0 for feather in tqdm(feather_files): n_shots += len(pd.read_feather(feather, columns=["quality_flag"])) print(n_shots) ``` ## Look at a sample of the dataset ``` feather_files[0].stat().st_size / 1024 / 1024 sample = gpd.read_feather(feather_files[0]) sample.head() ``` ## Upload to PostGIS database ``` from sqlalchemy import create_engine import sqlalchemy as db from src.constants import DB_CONFIG engine = create_engine(DB_CONFIG, echo=False) gedi_l2a = db.Table("level_2a", db.MetaData(), autoload=True, autoload_with=engine) for i, feather_file in enumerate(tqdm(feather_files[479:])): try: print(i + 479) sample = gpd.read_feather(feather_file) # keep only good-quality shots (the filter result must be assigned back) sample = sample[sample.quality_flag == 1] sample.to_postgis(name="level_2a", if_exists="append", con=engine, index=False, index_label="shot_number") except Exception as e: print(e) continue ``` ## Load from PostGIS ### Runtime comparison after uploading only `feather_files[0]` ``` %%time df = pd.read_sql(gedi_l2a.select(), con=engine) # reads only data, not geometry %%time df = gpd.read_postgis(gedi_l2a.select(), con=engine, geom_col="geometry") # reads geometry as well %%time sample = 
pd.read_feather(feather_files[0], columns=["granule_name"]) # read from feather format (no geometry) %%time sample = gpd.read_feather(feather_files[0], columns=["geometry"]) # read only geometry column from feather format %%time sample = gpd.read_feather(feather_files[0]) ``` ### Test out sql query ``` sql = "SELECT * FROM level_2a" # the table was created as "level_2a" above df = gpd.read_postgis(sql, con=engine) ```
``` from pathlib import Path import numpy as np import pandas as pd import swifter import cleantext pd.options.display.max_colwidth = 1000 OUT = Path('~/data/ynacc_proc/replicate/threads_last') BASE_PATH = Path('/mnt/data/datasets/ydata-ynacc-v1_0') ANN1 = BASE_PATH/'ydata-ynacc-v1_0_expert_annotations.tsv' ANN2 = BASE_PATH/'ydata-ynacc-v1_0_turk_annotations.tsv' UNL = BASE_PATH/'ydata-ynacc-v1_0_unlabeled_conversations.tsv' TRAIN_IDS = BASE_PATH/'ydata-ynacc-v1_0_train-ids.txt' trainids = pd.read_csv(TRAIN_IDS, header=None) df_an1 = pd.read_table(ANN1) df_an1 = df_an1[df_an1['sdid'].isin(list(trainids[0]))] df_an1 = df_an1[['sdid', 'text', 'commentindex']] df_an1 = df_an1.drop_duplicates() df_an1 df_an2 = pd.read_table(ANN2) df_an2 = df_an2[df_an2['sdid'].isin(list(trainids[0]))] df_an2 = df_an2[['sdid', 'text', 'commentindex']] df_an2 = df_an2.drop_duplicates() df_an2 df_notan = pd.read_csv(UNL, engine='python', sep='\t', quoting=3, error_bad_lines=False) df_notan = df_notan[['sdid', 'text', 'commentindex']] # not needed anymore # df['text'] = df.apply(lambda x: 'xx_root_comment ' + x['text'] if pd.isnull(x['parentid']) else x['text'], axis=1) # df['parentid'] = df.apply(lambda x: x['commentid'] if pd.isnull(x['parentid']) else x['parentid'], axis=1) df = pd.concat([df_an1, df_an2, df_notan]) # clean up df = df.dropna(subset=['text']) df["commentindex"] = pd.to_numeric(df["commentindex"]) df df['text'] = df['text'].swifter.apply(lambda x: cleantext.clean(x, lower=False, no_urls=True, no_emails=True, zero_digits=True)) df = df.drop_duplicates() # get list of all comments per thread res = df.sort_values(by=['commentindex']).groupby('sdid').agg({'text': lambda x: list(x)}).reset_index() res # create all possible thread combinations new_items = [] def create_threads(row): for i in range(1, len(row['text']) + 1): x = row['text'][:i] new = 'xx_thread_start ' + ' '.join([ 'xx_comment_start ' + (' xx_last ' + xx if xx == list(x)[-1] else xx) + ' xx_comment_end' for xx in 
list(x)]) + ' xx_thread_end' new_items.append({'text': new, 'sdid': row['sdid']}) for _, row in res.iterrows(): create_threads(row) final = pd.DataFrame(new_items) final # final['text'] = final['text'].swifter.apply(lambda x: clean(x, lower=False)) final.groupby('sdid').count() final.shape split_id = 130000 final["sdid"] = pd.to_numeric(final["sdid"]) train = final[final['sdid'] <= split_id][['text']] val = final[final['sdid'] > split_id][['text']] train val Path('/home/group7/data/ynacc_proc/replicate/threads_last').mkdir(exist_ok=True) ! ls /home/group7/data/ynacc_proc/replicate train.to_csv(OUT/'train.csv', index=False) val.to_csv(OUT/'val.csv', index=False) ```
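The prefix-expansion step is easier to follow outside the flattened notebook. A cleaned-up, stdlib-only equivalent of `create_threads`, applied to a toy thread (it mirrors the notebook's logic, including the double space its string concatenation produces before `xx_last`):

```python
def create_threads(comments):
    # For a thread [c1, c2, c3] emit every prefix ([c1], [c1, c2], [c1, c2, c3]),
    # wrapping each comment in xx_comment_start/xx_comment_end and flagging the
    # final comment of each prefix with xx_last.
    threads = []
    for i in range(1, len(comments) + 1):
        prefix = comments[:i]
        body = ' '.join(
            'xx_comment_start '
            + (' xx_last ' + c if c == prefix[-1] else c)
            + ' xx_comment_end'
            for c in prefix
        )
        threads.append('xx_thread_start ' + body + ' xx_thread_end')
    return threads

for t in create_threads(['first comment', 'a reply']):
    print(t)
```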
Copyright 2020 Natasha A. Sahr, Andrew M. Olney and made available under [CC BY-SA](https://creativecommons.org/licenses/by-sa/4.0) for text and [Apache-2.0](http://www.apache.org/licenses/LICENSE-2.0) for code. # Ridge and Lasso Regression: L1 and L2 penalization ## Regularization Up to this point, we've focused on relatively small numbers of predictors in our models. When we have datasets with large numbers of predictors, we need to think about new techniques to deal with the additional complexity. Part of the reason is that our highly manual methods no longer scale well. Imagine if you had to make and evaluate plots for 1000 variables! The other part is that once we have many variables, the chances of them interacting with each other in very complicated ways get increasingly larger. We talked about one "bad" kind of interaction before, multicollinearity. Multicollinearity occurs when two variables mostly measure the same thing. The problem with multicollinearity in linear models is that the variables involved will no longer have unique solutions for their estimated coefficients. What this means in practice is that multicollinearity is a small but manageable problem for small datasets, but it becomes a very serious problem for large datasets, at least for linear models, which arguably are the most important models in science. Today we will talk about two methods that address the complexity of having many variables, including multicollinearity. Both of these methods use a "big idea" in data science called **regularization**. The idea behind regularization is that you **penalize** complex models in favor of simpler ones. These simpler models use fewer variables, making them easier to understand. If the penalization is set up in the right way, the simpler models can also avoid multicollinearity problems. Today we will focus on ridge and lasso regression, but it is important to remember that many other models use similar regularization techniques.
Once you know to look for it, you will start to see it everywhere! ## What you will learn In the sections that follow, you will learn about ridge and lasso regularization and how they can help us assess a large number of variables for candidacy in regression models by penalizing variables that don't contribute a large effect on the variability of the outcome. We will study the following: - Ridge regression with the L2 penalty - Lasso regression with the L1 penalty - Assessing model accuracy - Comparing regularized models - Selection of the tuning parameter $\lambda$ ## When to use regularization/penalization in regression Regularization is a general strategy that applies a penalty in the optimization of a regression model. With the correct tuning parameter selection, it will prevent overfitting a model to a particular dataset and improve the potential for generalization to new datasets. Regularization becomes particularly important in regression where there are large numbers of predictors because it can mitigate multicollinearity and cause shrinkage (for L2) or encourage sparsity (for L1) of the coefficients for variables that contribute less to the prediction of the outcome. ## Vanilla logistic regression Let's start by applying logistic regression to some breast cancer data. This model will serve as a baseline for comparison to the ridge and lasso models that come later. We're going to get the breast cancer data from `sklearn` instead of loading a CSV. Libraries like `sklearn` frequently come with their own datasets for demonstration purposes. ### Load data The [data](https://scikit-learn.org/stable/datasets/index.html#breast-cancer-dataset) consists of the following variables as mean, standard error, and "worst" (mean of the three largest values) collected by digital imagery of a biopsy.
| Variable | Type | Description | |:-------|:-------|:-------| |radius | Ratio | mean of distances from center to points on the perimeter| |texture | Ratio | standard deviation of gray-scale values| |perimeter | Ratio | perimeter of cancer| |area | Ratio | area of cancer| |smoothness | Ratio | local variation in radius lengths| |compactness | Ratio | perimeter^2 / area - 1.0| |concavity | Ratio | severity of concave portions of the contour| |concave points | Ratio | number of concave portions of the contour| |symmetry | Ratio | symmetry of cancer| |fractal dimension | Ratio | "coastline approximation" - 1| | class | Nominal (binary) | malignant (1) or benign (0) <div style="text-align:center;font-size: smaller"> <b>Source:</b> This dataset was taken from the <a href="https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)">UCI Machine Learning Repository library </a> </div> <br> We want to predict the presence/absence of cancer, so it makes sense to use logistic regression rather than linear regression. Ridge and lasso penalties work the same way in both kinds of regression. First, import libraries for dataframes and to load the dataset: - `import pandas as pd` - `import sklearn.datasets as datasets` Next we need to do some conversion to put the `sklearn` data into a dataframe, which is the form we are most comfortable with: - Create variable `cancer_sklearn` - Set it to `with datasets do load_breast_cancer using` The next step is to put this `sklearn` data into a dataframe: - Create variable `dataframe` - Set it to `with pd create DataFrame using` a list containing - `from cancer_sklearn get data` - freestyle `columns=` followed by `from cancer_sklearn get feature_names` You can use this approach to put any `sklearn` format dataset into a dataframe. Because this data is too "easy", we need to make it more complicated to really show the benefits of ridge and lasso. Remember that multicolinearity is bad? 
Let's make it artificially multicolinear by duplicating the columns in the dataframe so that we have four side by side: - Set `dataframe` to `with pd do concat using` a list containing - A list containing - `dataframe` - `dataframe` - `dataframe` - `dataframe` - freestyle `axis=1` The axis tells it to stack horizontally rather than vertically. `sklearn` stores the predictors and the target (outcome) variable separately, so we need to `assign` it to the dataframe: - Set `dataframe` to `with dataframe do assign using` a list containing - freestyle `Target=` followed by `from cancer_sklearn get target` - `dataframe` (to display) As you can see, we're now working with 120 predictor variables instead of the original 30. It's a lot more than we'd like to have to examine manually. ### Explore data Based on the earlier discussion, you might guess that this dataset has a problem with multicolinearity. And you'd be right! Let's make a quick heatmap to show this: - `import plotly.express as px` And now a "one line" heatmap: - `with px do imshow using` as list containing - `with dataframe do corr using` - A freestyle block **with a notch on the right** containing `x=`, connected to `from dataframe get columns` - A freestyle block **with a notch on the right** containing `y=`, connected to `from dataframe get columns` Anything light orange to yellow could give us positive colinearity problems, and anything dark purple to indigo could give us negative colinearity problems. Depending on the size of your screen, `plotly` may only show every second or third variable name. You can use the Zoom tool to explore this correlation matrix more closely. ## Prepare train/test sets We need to split the dataframe into `X`, our predictors, and `Y`, our target variable (breast cancer positive). 
Do the imports for splitting: - `import sklearn.model_selection as model_selection` Create `X` by dropping the label from the dataframe: - Create variable `X` - Set it to `with dataframe do drop using` a list containing - freestyle `columns=["Target"]` - `X` (to display) Create `Y` by pulling just `Target` from the dataframe: - Create variable `Y` - Set it to `dataframe [ ] ` containing the following in a list - `"Target"` - `Y` (to display) Now do the splits: - Create variable `splits` - Set it to `with model_selection do train_test_split using` a list containing - `X` - `Y` - freestyle `random_state=2` (this will make your random split the same as mine) **Notice we did not specify a test size; `sklearn` will use .25 by default**. ### Model 1: Logistic regression Let's do something we already suspect is not a great idea: regular logistic regression. First, the imports for regression, evaluation, preprocessing, and pipelines: - `import sklearn.linear_model as linear_model` - `import sklearn.metrics as metrics` - `import numpy as np` - `import sklearn.preprocessing as pp` - `import sklearn.pipeline as pipe` #### Training We're going to make a pipeline so we can scale and train in one step: - Create variable `std_clf` - Set it to `with pipe do make_pipeline using` a list containing - `with pp create StandardScaler using` - `with linear_model create LogisticRegression using` a list containing - freestyle `penalty="none"` **The "none" penalty here is important because `sklearn` uses a ridge penalty by default.** We can treat the whole pipeline as a classifier and call `fit` on it: - `with std_clf do fit using` a list containing - `in list splits get # 1` (this is Xtrain) - `with np do ravel using` a list containing - `in list splits get # 3` (this is Ytrain) Now we can get predictions from the model for our test data: - Create variable `predictions` - Set it to `with std_clf do predict using` a list containing - `in list splits get # 2` (this is Xtest) - `predictions` 
(to display) #### Evaluation To get the accuracy: - `print create text with` - "Accuracy:" - `with metrics do accuracy_score using` a list containing - `in list splits get # 4` (this is `Ytest`) - `predictions` To get precision, recall, and F1: - `print with metrics do classification_report using` a list containing - `in list splits get # 4` (this is `Ytest`) - `predictions` ## Regression with a Ridge (L2) Penalty In ridge regression, also known as L2 penalization, the cost function is altered by adding a penalty equivalent to the square of the magnitude of the coefficients. This is equivalent to saying: for some $c > 0$, $\sum_{j=0}^p \beta_j^2 < c$ for coefficients $\beta_j, j=0,\dots,p$. The cost function for ridge regression is $$\sum_{i=1}^N (y_i-\hat{y_i})^2 + \lambda \sum_{j=0}^p \beta_j^2 = \sum_{i=1}^N \left(y_i - \sum_{j=0}^p \beta_j x_{ij}\right)^2 + \lambda \sum_{j=0}^p \beta_j^2$$ When $\lambda = 0$, we have a linear regression model. The $\lambda$ regularizes the coefficients so the optimization function is penalized if the coefficients are large. This type of penalization leads to coefficients close to, but not exactly, zero. This feature of ridge regression shrinks the coefficients, allowing for a reduction of model complexity and multicollinearity. ### Model 2: Logistic ridge regression (C=.75) Adding a ridge penalty is almost *exactly* like the model we did before. There are two differences: - penalty="l2" - C = .75 The ridge penalty is an l2 penalty (because it's squared). The `C` value is the **amount** of the penalty. In `sklearn` it is inverted, so smaller numbers mean more penalty. **Do yourself a favor and copy the blocks you've already done. You can save your notebook, right click it in the file browser, select "Duplicate", and then open that copy in 3 pane view by dragging the tab to the center right.
Make sure you change variable names as directed.** **Here's how to duplicate:** ![image.png](attachment:image.png) **Here's 3 pane view:** ![image.png](attachment:image.png) #### Training - Create variable `std_clf_ridge75` - Set it to `with pipe do make_pipeline using` a list containing - `with pp create StandardScaler using` - `with linear_model create LogisticRegression using` a list containing - freestyle `penalty="l2"` - freestyle `C=0.75` We can treat the whole pipeline as a classifier and call `fit` on it: - `with std_clf_ridge75 do fit using` a list containing - `in list splits get # 1` (this is Xtrain) - `with np do ravel using` a list containing - `in list splits get # 3` (this is Ytrain) Now we can get predictions from the model for our test data: - Create variable `predictions_ridge75` - Set it to `with std_clf_ridge75 do predict using` a list containing - `in list splits get # 2` (this is Xtest) #### Evaluation To get the accuracy: - `print create text with` - "Accuracy:" - `with metrics do accuracy_score using` a list containing - `in list splits get # 4` (this is `Ytest`) - `predictions_ridge75` To get precision, recall, and F1: - `print with metrics do classification_report using` a list containing - `in list splits get # 4` (this is `Ytest`) - `predictions_ridge75` We went from .923 accuracy and .92 weighted avg f1 to .965 accuracy and .97 weighted average f1, just by using the ridge penalty. 
### Model 3: Logistic ridge regression (C=.25)

This model is the same as model 2 but with a different ridge penalty:

- penalty="l2"
- C = .25

#### Training

- Create variable `std_clf_ridge25`
- Set it to `with pipe do make_pipeline using` a list containing
- `with pp create StandardScaler using`
- `with linear_model create LogisticRegression using` a list containing
- freestyle `penalty="l2"`
- freestyle `C=0.25`

We can treat the whole pipeline as a classifier and call `fit` on it:

- `with std_clf_ridge25 do fit using` a list containing
- `in list splits get # 1` (this is Xtrain)
- `with np do ravel using` a list containing
- `in list splits get # 3` (this is Ytrain)

Now we can get predictions from the model for our test data:

- Create variable `predictions_ridge25`
- Set it to `with std_clf_ridge25 do predict using` a list containing
- `in list splits get # 2` (this is Xtest)

#### Evaluation

To get the accuracy:

- `print create text with`
- "Accuracy:"
- `with metrics do accuracy_score using` a list containing
- `in list splits get # 4` (this is `Ytest`)
- `predictions_ridge25`

To get precision, recall, and F1:

- `print with metrics do classification_report using` a list containing
- `in list splits get # 4` (this is `Ytest`)
- `predictions_ridge25`

We went from .965 accuracy and .97 weighted average f1 to .972 accuracy and .97 weighted average f1, again just by using the ridge penalty but with a greater penalty.

### Comparing Ridge models

Let's plot the coefficients of models 1 to 3 to show the effect of the ridge penalty on the coefficients. Remember, the penalty *shrinks* coefficients.

Do the imports for plotting with layers:

- `import plotly.graph_objects as go`

and we need to create a dummy x-axis for our coefficients:

- Create variable `dummyX`
- Set it to `with np do linspace using` a list containing:
- `1`
- `length of` `from dataframe get columns`
- `1`
- `length of` `from dataframe get columns`
- `1`

**That last part (`length of` `from dataframe get columns` minus `1`) is supposed to appear twice.
Since there is one target column and the rest are predictors, we subtract 1.**

Create an empty figure:

- Create `fig`
- Set it to `with go create Figure using`

Add three scatterplots to `fig`:

- `with fig do add_scatter using`
- freestyle `x=dummyX`
- freestyle `y=np.ravel(std_clf[1].coef_)`
- freestyle `name='Logistic Regression'`
- freestyle `mode='markers'`
- freestyle `marker=dict(color='blue', opacity=0.25, size=30)`

**For the next two, copy the first and make small changes.**

- `with fig do add_scatter using`
- freestyle `x=dummyX`
- freestyle `y=np.ravel(std_clf_ridge75[1].coef_)`
- freestyle `name='Logistic Ridge Regression C=.75'`
- freestyle `mode='markers'`
- freestyle `marker=dict(color='green', opacity=0.50, size=15)`
- `with fig do add_scatter using`
- freestyle `x=dummyX`
- freestyle `y=np.ravel(std_clf_ridge25[1].coef_)`
- freestyle `name='Logistic Ridge Regression C=.25'`
- freestyle `mode='markers'`
- freestyle `marker=dict(color='red', opacity=0.75, size=8)`

To get a sense of the shrinkage, use the magnifying glass tool in `plotly` to zoom in until the y axis is about -2 to 2. Then you can see that the C=.25 penalty shrank the coefficients even more tightly than C=.75 did.

## Regression with a Lasso (L1) Penalty

In lasso regression, also known as L1 penalization, the cost function is altered by adding a penalty equivalent to the absolute value of the magnitude of the coefficients. This is equivalent to saying: for some $c > 0$, $\sum_{j=0}^p |\beta_j| < c$ for coefficients $\beta_j, j=0,\dots,p$.

The cost function for lasso regression is

$$\sum_{i=1}^N \left(y_i - \sum_{j=0}^p \beta_j x_{ij}\right)^2 + \lambda \sum_{j=0}^p |\beta_j|$$

When $\lambda = 0$, we have a plain linear regression model. The $\lambda$ regularizes the coefficients so the optimization function is penalized if the coefficients are large. This type of penalization leads to some coefficients being exactly zero.
This feature of lasso regression shrinks the coefficients, allowing for a reduction of model complexity and multicollinearity, and allows us to perform feature selection.

### Model 4: Logistic lasso regression (C=.75)

Adding a lasso penalty is almost *exactly* like the model we did before. There are three differences:

- penalty="l1"
- C = .75
- solver="liblinear"

The lasso penalty is an l1 penalty (because it's absolute value). The "solver" is the algorithm that implements the l1 penalty.

#### Training

- Create variable `std_clf_lasso75`
- Set it to `with pipe do make_pipeline using` a list containing
- `with pp create StandardScaler using`
- `with linear_model create LogisticRegression using` a list containing
- freestyle `penalty="l1"`
- freestyle `C=0.75`
- freestyle `solver="liblinear"`

We can treat the whole pipeline as a classifier and call `fit` on it:

- `with std_clf_lasso75 do fit using` a list containing
- `in list splits get # 1` (this is Xtrain)
- `with np do ravel using` a list containing
- `in list splits get # 3` (this is Ytrain)

Now we can get predictions from the model for our test data:

- Create variable `predictions_lasso75`
- Set it to `with std_clf_lasso75 do predict using` a list containing
- `in list splits get # 2` (this is Xtest)

#### Evaluation

To get the accuracy:

- `print create text with`
- "Accuracy:"
- `with metrics do accuracy_score using` a list containing
- `in list splits get # 4` (this is `Ytest`)
- `predictions_lasso75`

To get precision, recall, and F1:

- `print with metrics do classification_report using` a list containing
- `in list splits get # 4` (this is `Ytest`)
- `predictions_lasso75`

Interestingly, model 4 is the same as model 2. Both are .965 accuracy and .97 weighted average f1.
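The practical difference between the two penalties shows up in the fitted coefficients: l1 can drive some of them to exactly zero, while l2 only shrinks them toward zero. A quick sketch on synthetic data (not the notebook's dataset):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Synthetic data with only a few informative features
X, y = make_classification(n_samples=400, n_features=30, n_informative=5,
                           random_state=0)
X = StandardScaler().fit_transform(X)

ridge = LogisticRegression(penalty="l2", C=0.25).fit(X, y)
lasso = LogisticRegression(penalty="l1", C=0.25, solver="liblinear").fit(X, y)

ridge_zeros = int(np.sum(ridge.coef_ == 0))
lasso_zeros = int(np.sum(lasso.coef_ == 0))
print("zero coefficients - ridge:", ridge_zeros, "lasso:", lasso_zeros)
```

With the same C, the lasso model typically zeroes out many of the uninformative features while ridge keeps small nonzero weights on all of them.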
### Model 5: Logistic lasso regression (C=.25)

This model is the same as model 4 but with a different lasso penalty:

- penalty="l1"
- C = .25
- solver="liblinear"

#### Training

- Create variable `std_clf_lasso25`
- Set it to `with pipe do make_pipeline using` a list containing
- `with pp create StandardScaler using`
- `with linear_model create LogisticRegression using` a list containing
- freestyle `penalty="l1"`
- freestyle `C=0.25`
- freestyle `solver="liblinear"`

We can treat the whole pipeline as a classifier and call `fit` on it:

- `with std_clf_lasso25 do fit using` a list containing
- `in list splits get # 1` (this is Xtrain)
- `with np do ravel using` a list containing
- `in list splits get # 3` (this is Ytrain)

Now we can get predictions from the model for our test data:

- Create variable `predictions_lasso25`
- Set it to `with std_clf_lasso25 do predict using` a list containing
- `in list splits get # 2` (this is Xtest)

#### Evaluation

To get the accuracy:

- `print create text with`
- "Accuracy:"
- `with metrics do accuracy_score using` a list containing
- `in list splits get # 4` (this is `Ytest`)
- `predictions_lasso25`

To get precision, recall, and F1:

- `print with metrics do classification_report using` a list containing
- `in list splits get # 4` (this is `Ytest`)
- `predictions_lasso25`

Model 5 (accuracy .979 and weighted avg f1 .98) is slightly better than model 3 (.972 accuracy and .97 weighted average f1). Additionally, lasso has done something that ridge didn't do: it has shrunk many coefficients to zero. So it's actually pretty amazing that lasso is slightly better than ridge after so many variables have been removed.
To see which ones, run the cell below:

```
m5coefficients = pd.DataFrame(
    {"variable": X.columns,
     "coefficient": np.ravel(std_clf_lasso25[1].coef_)
    })
print(m5coefficients.to_string())
print(
    'Variables removed (zero coefficient):',
    len(m5coefficients[m5coefficients['coefficient'] == 0.0])
)
```

We started with 120 and zeroed out 84. Obviously many of the variables remaining are duplicates; handling these is a more complex topic!

### Comparing Lasso models

As before, let's plot the coefficients of models 1, 4, and 5 to show the effect of the lasso penalty on the coefficients.

Create an empty figure:

- Set `fig` to `with go create Figure using`

Add three scatterplots to `fig`:

- `with fig do add_scatter using`
- freestyle `x=dummyX`
- freestyle `y=np.ravel(std_clf[1].coef_)`
- freestyle `name='Logistic Regression'`
- freestyle `mode='markers'`
- freestyle `marker=dict(color='blue', opacity=0.25, size=30)`

**For the next two, copy the first and make small changes.**

- `with fig do add_scatter using`
- freestyle `x=dummyX`
- freestyle `y=np.ravel(std_clf_lasso75[1].coef_)`
- freestyle `name='Logistic Lasso Regression C=.75'`
- freestyle `mode='markers'`
- freestyle `marker=dict(color='green', opacity=0.50, size=15)`
- `with fig do add_scatter using`
- freestyle `x=dummyX`
- freestyle `y=np.ravel(std_clf_lasso25[1].coef_)`
- freestyle `name='Logistic Lasso Regression C=.25'`
- freestyle `mode='markers'`
- freestyle `marker=dict(color='red', opacity=0.75, size=8)`

Again, use the magnifying glass tool in `plotly` to zoom in until the y axis is about -1 to 1, and notice how many coefficients are zero for C=.75 and C=.25. We can count how many are zero in each case as well:

- `print create text with`
- `"C=.75:"`
- `with np do sum using` freestyle `std_clf_lasso75[1].coef_` = `0`
- `print create text with`
- `"C=.25:"`
- `with np do sum using` freestyle `std_clf_lasso25[1].coef_` = `0`

Even a small amount of penalization, in this case, removed a lot of variables.
Lasso is a great way to simplify your model! ## Choosing $\lambda$ (AKA C) As we've seen, different values of the penalty parameter $\lambda$ have different effects on our accuracy and other performance metrics. So how do we choose $\lambda$? There are a number of different methods of finding a **metaparameter** like $\lambda$, and these methods are fairly general and apply to other problems we've seen before, like choosing the optimal number of clusters or the optimal number of nearest neighbors. We will revisit this idea in the future.
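As a preview, one standard method is a cross-validated grid search over candidate C values; a minimal sketch with scikit-learn on synthetic data (the candidate grid here is illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data
X, y = make_classification(n_samples=400, n_features=20, random_state=0)

# Same scaler + logistic regression pipeline as in the models above
pipe = Pipeline([("scale", StandardScaler()),
                 ("logit", LogisticRegression(penalty="l2"))])

# Try several C values; keep the one with the best 5-fold CV accuracy
search = GridSearchCV(pipe, {"logit__C": [0.1, 0.25, 0.5, 0.75, 1.0]}, cv=5)
search.fit(X, y)
print("best C:", search.best_params_["logit__C"])
print("best CV accuracy:", search.best_score_)
```

Because the search refits the whole pipeline in every fold, the scaling is re-estimated on each training split, which keeps the cross-validation estimate honest.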
github_jupyter
```
import matplotlib.pyplot as plt
from bs4 import BeautifulSoup
from collections import defaultdict, Counter
import re
import requests
import random
```

# Word cloud

```
data = [ ("big data", 100, 15), ("Hadoop", 95, 25), ("Python", 75, 50),
         ("R", 50, 40), ("machine learning", 80, 20), ("statistics", 20, 60),
         ("data science", 60, 70), ("analytics", 90, 3),
         ("team player", 85, 85), ("dynamic", 2, 90), ("synergies", 70, 0),
         ("actionable insights", 40, 30), ("think out of the box", 45, 10),
         ("self-starter", 30, 50), ("customer focus", 65, 15),
         ("thought leadership", 35, 35)]

def text_size(total):
    """equals 8 if total is 0, 28 if total is 200"""
    return 8 + total / 200 * 20

for word, job_popularity, resume_popularity in data:
    plt.text(job_popularity, resume_popularity, word,
             ha='center', va='center',
             size=text_size(job_popularity + resume_popularity))
plt.xlabel("Popularity on Job Postings")
plt.ylabel("Popularity on Resumes")
plt.axis([0, 100, 0, 100])
plt.show()
```

# N-gram language models

```
def fix_unicode(text):
    return text.replace(u"\u2019", "'")

def get_document():
    url = "http://radar.oreilly.com/2010/06/what-is-data-science.html"
    html = requests.get(url).text
    soup = BeautifulSoup(html, 'html5lib')
    content = soup.find("div", "entry-content")  # find entry-content div
    regex = r"[\w']+|[\.]"                       # matches a word or a period

    document = []

    for paragraph in content("p"):
        words = re.findall(regex, fix_unicode(paragraph.text))
        document.extend(words)

    return document

document = get_document()

# build the bigram transition map used by generate_using_bigrams
# (reconstructed here; the cells below rely on these names)
transitions = defaultdict(list)
for prev, current in zip(document, document[1:]):
    transitions[prev].append(current)

def generate_using_bigrams(transitions):
    current = "."   # this means the next word will start a sentence
    result = []
    while True:
        next_word_candidates = transitions[current]    # bigrams (current, _)
        current = random.choice(next_word_candidates)  # choose one at random
        result.append(current)                         # append it to results
        if current == ".": return " ".join(result)     # if "."
we're done trigrams = list(zip(document, document[1:], document[2:])) trigram_transitions = defaultdict(list) starts = [] for prev, current, next in trigrams: if prev == ".": # if the previous "word" was a period starts.append(current) # then this is a start word trigram_transitions[(prev, current)].append(next) def generate_using_trigrams(starts, trigram_transitions): current = random.choice(starts) # choose a random starting word prev = "." # and precede it with a '.' result = [current] while True: next_word_candidates = trigram_transitions[(prev, current)] next = random.choice(next_word_candidates) prev, current = current, next result.append(current) if current == ".": return " ".join(result) ``` # Grammars ``` grammar = { "_S" : ["_NP _VP"], "_NP" : ["_N", "_A _NP _P _A _N"], "_VP" : ["_V", "_V _NP"], "_N" : ["data science", "Python", "regression"], "_A" : ["big", "linear", "logistic"], "_P" : ["about", "near"], "_V" : ["learns", "trains", "tests", "is"] } def is_terminal(token): return token[0] != "_" def expand(grammar, tokens): for i, token in enumerate(tokens): # ignore terminals if is_terminal(token): continue # choose a replacement at random replacement = random.choice(grammar[token]) if is_terminal(replacement): tokens[i] = replacement else: tokens = tokens[:i] + replacement.split() + tokens[(i+1):] return expand(grammar, tokens) # if we get here we had all terminals and are done return tokens def generate_sentence(grammar): return expand(grammar, ["_S"]) ``` # Remark: Gibbs sampling method ``` def roll_a_die(): return random.choice([1,2,3,4,5,6]) def direct_sample(): d1 = roll_a_die() d2 = roll_a_die() return d1, d1 + d2 def random_y_given_x(x): """equally likely to be x + 1, x + 2, ... 
, x + 6""" return x + roll_a_die() def random_x_given_y(y): if y <= 7: # if the total is 7 or less, the first die is equally likely to be # 1, 2, ..., (total - 1) return random.randrange(1, y) else: # if the total is 7 or more, the first die is equally likely to be # (total - 6), (total - 5), ..., 6 return random.randrange(y - 6, 7) def gibbs_sample(num_iters=100): x, y = 1, 2 # doesn't really matter for _ in range(num_iters): x = random_x_given_y(y) y = random_y_given_x(x) return x, y def compare_distributions(num_samples=1000): counts = defaultdict(lambda: [0, 0]) for _ in range(num_samples): counts[gibbs_sample()][0] += 1 counts[direct_sample()][1] += 1 return counts ``` # Topic modeling ``` def sample_from(weights): total = sum(weights) rnd = total * random.random() # uniform between 0 and total for i, w in enumerate(weights): rnd -= w # return the smallest i such that if rnd <= 0: return i # sum(weights[:(i+1)]) >= rnd documents = [ ["Hadoop", "Big Data", "HBase", "Java", "Spark", "Storm", "Cassandra"], ["NoSQL", "MongoDB", "Cassandra", "HBase", "Postgres"], ["Python", "scikit-learn", "scipy", "numpy", "statsmodels", "pandas"], ["R", "Python", "statistics", "regression", "probability"], ["machine learning", "regression", "decision trees", "libsvm"], ["Python", "R", "Java", "C++", "Haskell", "programming languages"], ["statistics", "probability", "mathematics", "theory"], ["machine learning", "scikit-learn", "Mahout", "neural networks"], ["neural networks", "deep learning", "Big Data", "artificial intelligence"], ["Hadoop", "Java", "MapReduce", "Big Data"], ["statistics", "R", "statsmodels"], ["C++", "deep learning", "artificial intelligence", "probability"], ["pandas", "R", "Python"], ["databases", "HBase", "Postgres", "MySQL", "MongoDB"], ["libsvm", "regression", "support vector machines"] ] K = 4 document_topic_counts = [Counter() for _ in documents] topic_word_counts = [Counter() for _ in range(K)] topic_counts = [0 for _ in range(K)] document_lengths = 
[len(d) for d in documents] distinct_words = set(word for document in documents for word in document) W = len(distinct_words) D = len(documents) document_topic_counts[3][1] def p_topic_given_document(topic, d, alpha=0.1): """the fraction of words in document _d_ that are assigned to _topic_ (plus some smoothing)""" return ((document_topic_counts[d][topic] + alpha) / (document_lengths[d] + K * alpha)) def p_word_given_topic(word, topic, beta=0.1): """the fraction of words assigned to _topic_ that equal _word_ (plus some smoothing)""" return ((topic_word_counts[topic][word] + beta) / (topic_counts[topic] + W * beta)) def topic_weight(d, word, k): """given a document and a word in that document, return the weight for the k-th topic""" return p_word_given_topic(word, k) * p_topic_given_document(k, d) def choose_new_topic(d, word): return sample_from([topic_weight(d, word, k) for k in range(K)]) random.seed(0) document_topics = [[random.randrange(K) for word in document] for document in documents] for d in range(D): for word, topic in zip(documents[d], document_topics[d]): document_topic_counts[d][topic] += 1 topic_word_counts[topic][word] += 1 topic_counts[topic] += 1 for iter in range(1000): for d in range(D): for i, (word, topic) in enumerate(zip(documents[d], document_topics[d])): # remove this word / topic from the counts # so that it doesn't influence the weights document_topic_counts[d][topic] -= 1 topic_word_counts[topic][word] -= 1 topic_counts[topic] -= 1 document_lengths[d] -= 1 # choose a new topic based on the weights new_topic = choose_new_topic(d, word) document_topics[d][i] = new_topic # and now add it back to the counts document_topic_counts[d][new_topic] += 1 topic_word_counts[new_topic][word] += 1 topic_counts[new_topic] += 1 document_lengths[d] += 1 for k, word_counts in enumerate(topic_word_counts): for word, count in word_counts.most_common(): if count > 0: print(k, word, count) topic_names = ["Big Data and programming languages", "databases", 
"machine learning", "statistics"] for document, topic_counts in zip(documents, document_topic_counts): print(document) for topic, count in topic_counts.most_common(): if count > 0: print(topic_names[topic], count) print() ```
``` import json import pickle from indra.literature.adeft_tools import universal_extract_text from indra.databases.hgnc_client import get_hgnc_name, get_hgnc_id from indra_db.util.content_scripts import get_text_content_from_pmids from indra_db.util.content_scripts import get_stmts_with_agent_text_like from indra_db.util.content_scripts import get_text_content_from_stmt_ids from adeft.discover import AdeftMiner from adeft.gui import ground_with_gui from adeft.modeling.label import AdeftLabeler from adeft.modeling.classify import AdeftClassifier from adeft.disambiguate import AdeftDisambiguator from adeft_indra.s3 import model_to_s3 from adeft_indra.ground import gilda_ground shortforms = ['FMS'] genes = ['CSF1R'] families = {} groundings = [f'HGNC:{get_hgnc_id(gene)}' for gene in genes] for family, members in families.items(): genes.extend(members) groundings.append(f'FPLX:{family}') with open('../data/entrez_all_pmids.json', 'r') as f: all_pmids = json.load(f) entrez_texts = [] entrez_refs = set() for gene, grounding in zip(genes, groundings): try: pmids = all_pmids[gene] except KeyError: continue _, content = get_text_content_from_pmids(pmids) entrez_texts.extend([(universal_extract_text(text), grounding) for text in content.values() if text]) entrez_refs.update(content.keys()) miners = dict() all_texts = set() for shortform in shortforms: stmts = get_stmts_with_agent_text_like(shortform)[shortform] _, content = get_text_content_from_stmt_ids(stmts) shortform_texts = [universal_extract_text(text, contains=shortforms) for ref, text in content.items() if text and ref not in entrez_refs] miners[shortform] = AdeftMiner(shortform) miners[shortform].process_texts(shortform_texts) all_texts |= set(shortform_texts) ``` It's then necessary to check if Acromine produced the correct results. 
We must fix errors manually ``` top = miners['FMS'].top() top longforms0 = miners['FMS'].get_longforms() list(enumerate(longforms0)) longforms0 = [(longform, score) for i, (longform, score) in enumerate(longforms0) if i not in [3, 7]] list(enumerate(top)) longforms0.extend((longform, score) for i, (longform, score) in enumerate(top) if i in [14, 15]) longforms = longforms0 longforms.sort(key=lambda x: -x[1]) longforms, scores = zip(*longforms) longforms grounding_map = {} names = {} for longform in longforms: grounding = gilda_ground(longform) if grounding[0]: grounding_map[longform] = f'{grounding[0]}:{grounding[1]}' names[grounding_map[longform]] = grounding[2] grounding_map names grounding_map, names, pos_labels = ground_with_gui(longforms, scores, grounding_map=grounding_map, names=names) result = (grounding_map, names, pos_labels) result grounding_map, names, pos_labels = ({'fibromyalgia': 'MESH:D005356', 'fibromyalgia syndrome': 'MESH:D005356', 'fimasartan': 'CHEBI:CHEBI:136044', 'fluorous mixture synthesis': 'ungrounded', 'functional magnetic stimulation': 'MESH:D055909', 'functional mesoporous silica': 'ungrounded', 'fundamental motor skills': 'ungrounded', 'fundamental movement skills': 'ungrounded'}, {'MESH:D005356': 'Fibromyalgia', 'CHEBI:CHEBI:136044': 'fimasartan', 'MESH:D055909': 'Magnetic Field Therapy'}, ['CHEBI:CHEBI:136044', 'MESH:D005356', 'MESH:D055909']) names['HGNC:2433'] = 'CSF1R' grounding_dict = {'FMS': grounding_map} classifier = AdeftClassifier('FMS', pos_labels=pos_labels) param_grid = {'C': [100.0], 'max_features': [10000]} labeler = AdeftLabeler(grounding_dict) corpus = labeler.build_from_texts(shortform_texts) corpus.extend(entrez_texts) texts, labels = zip(*corpus) classifier.cv(texts, labels, param_grid, cv=5, n_jobs=8) classifier.stats disamb = AdeftDisambiguator(classifier, grounding_dict, names) d = disamb.disambiguate(shortform_texts) a = [text for pred, text in zip(d, shortform_texts)if pred[0] == 'HGNC:2433'] a[2] 
disamb.dump('FMS', '../results') from adeft.disambiguate import load_disambiguator, load_disambiguator_directly disamb.classifier.training_set_digest model_to_s3(disamb) d.disambiguate(texts[0]) print(d.info()) a = load_disambiguator('AR') a.disambiguate('Androgen') logit = d.classifier.estimator.named_steps['logit'] logit.classes_ model_to_s3(disamb) classifier.feature_importances()['FPLX:RAC'] d = load_disambiguator('ALK', '../results') d.info() print(d.info()) model_to_s3(d) d = load_disambiguator('TAK', '../results') print(d.info()) model_to_s3(d) from adeft import available_shortforms print(d.info()) d.classifier.feature_importances() from adeft import __version__ __version__ from adeft.disambiguate import load_disambiguator_directly d = load_disambiguator_directly('../results/TEK/') print(d.info()) model_to_s3(d) d.grounding_dict !python -m adeft.download --update from adeft import available_shortforms len(available_shortforms) available_shortforms 'TEC' in available_shortforms 'TECs' in available_shortforms !python -m adeft.download --update !python -m adeft.download --update ```
# ArterialVis Morphology Embedding and Animation

## Import the ArterialVis morphology module

```
from arterialvis.download import make_output_dir
from arterialvis.morphology import *
```

## Get a list of all morphology files

```
print(get_files.__doc__)
files = get_files()
```

## Create a directory to cache analytics and store outputs

```
print(make_output_dir.__doc__)
output = make_output_dir(files[0])
```

## Render a simplified morphology, colorcoded by group if available

```
print(build_grouped_graph.__doc__)
build_grouped_graph(files[0],output=output)
print(build_compound_graph.__doc__)
build_compound_graph(files[0],output=os.path.join(output))
```

## Dashboard for comparison

The following function is not included as a code cell in-notebook because **you must pause execution of this cell in order to continue on with the notebook**. `build_comparison_dashboard()`

```
build_comparison_dashboard()
```

## 3D Rendering of morphology (with colorcoding if available)

```
print(get_edgelist.__doc__)
edgelist = get_edgelist(files[0],output=output)
print(generate_graph.__doc__)
G = generate_graph(edgelist,output=output)
print(draw_graph.__doc__)
draw_graph(get_3d_traces(G, edgelist),output=output)
```

## Simplifying a graph

It is possible to remove all interstitial nodes and only retain bifurcations and leaves by removing all nodes with a degree of 2 using the `simplifyGraph` command:

```
print(simplifyGraph.__doc__)
sparse = simplifyGraph(G,output=output)
draw_graph(
    get_3d_traces(
        G = sparse,
        edgelist = edgelist,
        nodeSize=5
    ),
    output=os.path.join(output, 'simplified'))
```

## Converting 3D morphology to a 2D graph

### For a simplified morphology

```
print(draw_graph.__doc__)
draw_graph(
    get_2d_traces(
        G=sparse,
        edgelist=edgelist,
        nodesize=5
    ),
    output=os.path.join(output, 'simplified')
)
print(build_animation.__doc__)
build_animation(
    G=sparse,
    edgelist=edgelist,
    output=os.path.join(output,'sparse_animation'))
```

### For the complete morphology (Note: SLOW)

```
print(extract_real_abstract.__doc__) real_edgelist, abstract_edgelist, extended_edgelist = extract_real_abstract( G=G, edgelist=edgelist, output=os.path.join(output, 'complete')) draw_graph( get_3d_traces( G=G, edgelist=real_edgelist ), output=os.path.join(output,'complete')) ``` ## Generating an animation between complex 3D and 2D representation ``` build_animation(G=G, edgelist=edgelist, output=os.path.join(output,'complex_animation')) ```
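The degree-2 simplification described above can be sketched without a graph library (an illustrative reimplementation, not ArterialVis's actual `simplifyGraph` code): repeatedly splice out any node with exactly two neighbors and connect those neighbors directly, so only bifurcations and leaves remain.

```python
def simplify(adj):
    """Contract degree-2 nodes in an undirected adjacency dict.

    Each spliced-out node's two neighbors are connected directly;
    repeat until only leaves (degree 1) and bifurcations (degree 3+)
    remain. Parallel edges collapse because neighbors are sets.
    """
    adj = {n: set(nbrs) for n, nbrs in adj.items()}  # work on a copy
    changed = True
    while changed:
        changed = False
        for n in list(adj):
            if len(adj[n]) == 2:
                a, b = adj[n]
                adj[a].discard(n)
                adj[b].discard(n)
                adj[a].add(b)
                adj[b].add(a)
                del adj[n]
                changed = True
    return adj

# A path 1-2-3-4 with a branch 3-5: node 2 is interstitial (degree 2)
g = {1: {2}, 2: {1, 3}, 3: {2, 4, 5}, 4: {3}, 5: {3}}
print(simplify(g))  # node 2 is spliced out; 1 connects directly to 3
```

Edge lengths are lost in this sketch; a faithful version would accumulate the spliced-out segment lengths onto the new edge.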
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import gc
import json
import math
import cv2
import PIL
from PIL import Image
import seaborn as sns
sns.set(style='darkgrid')
from sklearn.preprocessing import LabelEncoder
from keras.utils import to_categorical
from keras import layers
from keras.applications import ResNet50,MobileNet, DenseNet201, InceptionV3, NASNetLarge, InceptionResNetV2, NASNetMobile
from keras.callbacks import Callback, ModelCheckpoint, ReduceLROnPlateau, TensorBoard
from keras.preprocessing.image import ImageDataGenerator
from keras.utils.np_utils import to_categorical
from keras.models import Sequential
from keras.optimizers import Adam
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score, accuracy_score
import scipy
from tqdm import tqdm
import tensorflow as tf
from keras import backend as K
from functools import partial
from sklearn import metrics
from collections import Counter
import itertools
from sklearn.model_selection import KFold
from sklearn.preprocessing import OneHotEncoder
from sklearn.decomposition import PCA
%matplotlib inline

sub = pd.read_csv('/kaggle/input/siim-isic-melanoma-classification/sample_submission.csv')
import os
print(os.listdir("../input/siim-isic-melanoma-classification"))

#Loading Train and Test Data
train = pd.read_csv("../input/siim-isic-melanoma-classification/train.csv")
test = pd.read_csv("../input/siim-isic-melanoma-classification/test.csv")
print("{} images in train set.".format(train.shape[0]))
print("{} images in test set.".format(test.shape[0]))

train.head()
test.head()
```

Let's look at the distribution of the target:

```
np.mean(train.target)
```

So this is a binary classification problem with highly imbalanced data. Let's take a look at a few images.
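Because positives are rare here, one common mitigation is to pass class weights to `fit`. A sketch of computing "balanced" weights from a label list (the labels below are a stand-in, not the real dataset):

```python
from collections import Counter

def balanced_class_weights(labels):
    """w_c = n_samples / (n_classes * n_c), scikit-learn's 'balanced' rule."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}

labels = [0] * 98 + [1] * 2  # stand-in for a highly imbalanced binary target
weights = balanced_class_weights(labels)
print(weights)
# e.g. later: model.fit(x_train, y_train, class_weight=weights, ...)
```

With 98 negatives and 2 positives this yields roughly 0.51 for class 0 and 25.0 for class 1, so each misclassified positive costs the loss about 49 times as much as a misclassified negative.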
``` plt.figure(figsize=(10,5)) sns.countplot(x='target', data=train, order=list(train['target'].value_counts().sort_index().index) , color='cyan') train['target'].value_counts() train.columns z=train.groupby(['target','sex'])['benign_malignant'].count().to_frame().reset_index() z.style.background_gradient(cmap='Reds') sns.catplot(x='target',y='benign_malignant', hue='sex',data=z,kind='bar') from keras.models import Sequential from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Convolution2D,Conv2D from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D from keras.optimizers import SGD from keras.callbacks import TensorBoard from keras import applications ``` **TRAINING** ``` import time start=time.time() train_images = np.load('../input/rgb-3500-96/train_images_rgb_3500_96.npy') end=time.time() print(f"\nTime to load train images: {round(end-start,5)} seconds.") print('Train_images shape: ',train_images.shape) start=time.time() test_images = np.load('../input/test-images-rgb-10000-96/test_images_rbg_10000_96.npy') end=time.time() print(f"\nTime to load test images: {round(end-start,5)} seconds.") print('Test_images shape: ',test_images.shape) #target data train_labels =np.load('../input/rgb-3500-96/train_labels_rgb_3500_96.npy') print('Train_labels shape: ',train_labels.shape) #spliting train data from sklearn.model_selection import train_test_split x_train,x_val,y_train,y_val=train_test_split(train_images,train_labels,test_size=0.3) print('x_train shape: ',x_train.shape) print('x_val shape: ',x_val.shape) ``` **DATA AUGMENTATION** ``` augs = ImageDataGenerator( featurewise_center=True, featurewise_std_normalization=True, rotation_range=20, width_shift_range=0.2, height_shift_range=0.2, horizontal_flip=True) augs.fit(x_train) ``` **MODELLING** ``` #VGG-16 MODEL NO. 
1

from keras.applications.vgg16 import VGG16

model = Sequential()
model.add(ZeroPadding2D((1,1), input_shape=(96,96,3)))  # 96x96 RGB inputs, matching train_images
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))

model.add(ZeroPadding2D((1,1)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))

model.add(ZeroPadding2D((1,1)))
model.add(Conv2D(256, (3, 3), activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Conv2D(256, (3, 3), activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Conv2D(256, (3, 3), activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))

model.add(ZeroPadding2D((1,1)))
model.add(Conv2D(512, (3, 3), activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Conv2D(512, (3, 3), activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Conv2D(512, (3, 3), activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))

model.add(ZeroPadding2D((1,1)))
model.add(Conv2D(512, (3, 3), activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Conv2D(512, (3, 3), activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Conv2D(512, (3, 3), activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))

model.add(Flatten())
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))

model.summary()
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])

#XCEPTION MODEL NO.
2 from keras.layers import Dropout, DepthwiseConv2D, MaxPooling2D, concatenate from keras.models import Model inp = Input(shape = (96,96, 3)) x = inp x = Conv2D(32, (3, 3), strides = 2, padding = "same", activation = "relu")(x) x = BatchNormalization(axis = 3)(x) x = Dropout(0.4)(x) x = Conv2D(64, (3, 3), strides = 1, padding = "same", activation = "relu")(x) x = BatchNormalization(axis = 3)(x) x = Dropout(0.4)(x) x1 = DepthwiseConv2D((3, 3), (1, 1), padding = "same", activation = "relu")(x) x = BatchNormalization(axis = 3)(x) x = Dropout(0.4)(x) x1 = DepthwiseConv2D((3, 3), (1, 1), padding = "same", activation = "relu")(x1) x = BatchNormalization(axis = 3)(x) x = Dropout(0.4)(x) x1 = MaxPooling2D((2, 2), strides = 1)(x1) x = concatenate([x1, Conv2D(64, (2, 2), strides = 1)(x)]) x1 = Activation("relu")(x) x1 = Conv2D(256, (3, 3), strides = 1, padding = "same", activation = "relu")(x1) x = BatchNormalization(axis = 3)(x) x = Dropout(0.4)(x) x1 = DepthwiseConv2D((3, 3), strides = 1, padding = "same", activation = "relu")(x1) x = BatchNormalization(axis = 3)(x) x = Dropout(0.4)(x) x1 = DepthwiseConv2D((3, 3), strides = 1, padding = "same")(x1) x = BatchNormalization(axis = 3)(x) x = Dropout(0.4)(x) x1 = MaxPooling2D((2, 2), strides = 1)(x1) x = concatenate([x1, Conv2D(256, (2, 2), strides = 1)(x)]) x = Activation("relu")(x) x = Conv2D(256, (3, 3), strides = 1, padding = "same", activation = "relu")(x) x = BatchNormalization(axis = 3)(x) x = Dropout(0.4)(x) x = Conv2D(128, (3, 3), strides = 1, padding = "same", activation = "relu")(x) x = BatchNormalization(axis = 3)(x) x = Dropout(0.4)(x) x = Flatten()(x) x = Dense(1, activation = "sigmoid")(x) model2 = Model(inp, x) model2.compile(optimizer = "adam", loss = "binary_crossentropy", metrics = ["accuracy"]) model2.summary() #DENSENET MODEL NO. 
3
from tensorflow.keras.applications import DenseNet201
import tensorflow.keras.layers as L

dnet201 = DenseNet201(input_shape=(96, 96, 3), include_top=False)
dnet201.trainable = True

model3 = tf.keras.Sequential([
    dnet201,
    L.GlobalAveragePooling2D(),
    L.Dense(1, activation='sigmoid')
])
model3.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model3.summary()

# Train all three models
batch_size = 128
epochs = 30
history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs,
                    verbose=1, validation_data=(x_val, y_val))

batch_size = 128
epochs = 15
history2 = model2.fit(x_train, y_train, batch_size=batch_size, epochs=epochs,
                      verbose=1, validation_data=(x_val, y_val))

batch_size = 128
epochs = 30
history3 = model3.fit(x_train, y_train, batch_size=batch_size, epochs=epochs,
                      verbose=1, validation_data=(x_val, y_val))

model.save("vgg16.h5")
model2.save("xception.h5")
model3.save("densenet.h5")
```

**EVALUATION**

```
scores = model.evaluate(x_val, y_val, verbose=0)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])

scores = model2.evaluate(x_val, y_val, verbose=0)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])

scores = model3.evaluate(x_val, y_val, verbose=0)
print('Test loss_3:', scores[0])
print('Test accuracy_3:', scores[1])
```

**PREDICTION**

```
y_test_prob = model.predict(test_images)
pred_df = pd.DataFrame({'image_name': test['image_name'],
                        'target': np.concatenate(y_test_prob)})
pred_df.to_csv('submission_vgg.csv', header=True, index=False)
pred_df.head(10)

y_test_prob2 = model2.predict(test_images)
pred_df2 = pd.DataFrame({'image_name': test['image_name'],
                         'target': np.concatenate(y_test_prob2)})
pred_df2.to_csv('submission_xception.csv', header=True, index=False)
pred_df2.head(10)

y_test_prob3 = model3.predict(test_images)
pred_df3 = pd.DataFrame({'image_name': test['image_name'],
                         'target': np.concatenate(y_test_prob3)})
pred_df3.to_csv('submission_dense.csv', header=True, index=False)
pred_df3.head(10)
```

**ENSEMBLE**

```
# Equal-weight average of the three models' predicted probabilities;
# the weights sum to 1 so the ensembled values stay on a 0-1 probability scale
en = pd.DataFrame({'image_name': test['image_name'],
                   'target': (pred_df['target'] + pred_df2['target'] + pred_df3['target']) / 3})
en.to_csv('ensemble1.csv', header=True, index=False)
en.head(10)
```
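As a toy illustration of the equal-weight averaging used for the ensemble (made-up probabilities, not actual model outputs):

```python
import pandas as pd

# Hypothetical per-model probabilities for three test images
p1 = pd.Series([0.9, 0.2, 0.6])
p2 = pd.Series([0.8, 0.1, 0.5])
p3 = pd.Series([0.7, 0.3, 0.4])

# Equal-weight average; the weights sum to 1, so the result stays in [0, 1]
ensemble = (p1 + p2 + p3) / 3
print(ensemble.round(2).tolist())  # [0.8, 0.2, 0.5]
```

Keeping the weights summing to 1 leaves the ensembled column directly interpretable as a probability; a rank-based metric would be unchanged by rescaling, but the column stays on the same scale as the individual submissions.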
# Intro to reimbursements: overview with visualization

This notebook provides an overview of the `2017-03-15-reimbursements.xz` dataset, which contains broad data regarding CEAP usage in all terms since 2009. It aims to provide an example of basic analyses and visualization by exploring topics such as:

- Average monthly spending per congressperson along the years
- Seasonality in reimbursements
- Reimbursements by type of spending
- Which party has the most spending congressmen?
- Which state has the most spending congressmen?
- Who were the most hired suppliers by amount paid?
- Which were the most expensive individual reimbursements?

Questions are not explicitly answered. Charts and tables are provided for free interpretation, some of them with brief commentaries from the author.

**Obs.**: the original analysis was made considering data from 2009 to 2017 (mainly until 2016). One might want to filter by terms (e.g. 2010-2014) to make more realistic comparisons (expenditures by state, party, congressperson, etc.). Code cell #4 provides an example of how this could be done.
---

```
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from pylab import rcParams
%matplotlib inline

# Charts styling
plt.style.use('ggplot')
rcParams['figure.figsize'] = 15, 8
matplotlib.rcParams.update({'font.size': 14})
#rcParams['font.family'] = 'Georgia'

# Type setting for specific columns
#DTYPE = dict(cnpj=str, cnpj_cpf=str, ano=np.int16, term=str)

# Experimenting with the 'category' type to reduce the dataframe size
DTYPE = dict(cnpj_cpf=str,
             year=np.int16,
             month=np.int16,
             installment='category',
             term_id='category',
             term='category',
             document_type='category',
             subquota_group_id='category',
             subquota_group_description='category',
             #subquota_description='category',
             subquota_number='category',
             state='category',
             party='category')

reimbursements = pd.read_csv('../data/2017-03-15-reimbursements.xz',
                             dtype=DTYPE, low_memory=False, parse_dates=['issue_date'])

# Creates a DataFrame copy with fewer columns
r = reimbursements[['year', 'month', 'total_net_value', 'party', 'state', 'term', 'issue_date',
                    'congressperson_name', 'subquota_description', 'supplier', 'cnpj_cpf']]
r.head()
```

## Filters depending on the scope of analysis

Here, filters by state, party, years, etc. can be applied.

Obs.: chart commentaries provided might not remain valid depending on the filters chosen.

```
# Filters only most recent years (from 2015)
#r = r[(r.year == 2015) | (r.year == 2016) | (r.year == 2017)]
#r.head()
```

## Questions & answers

### Evolution of average monthly spending along the years

Are congressmen spending more today in relation to past years?

#### How many congressmen in each year?
```
years = r.year.unique()

# Computes unique names in each year and saves them into a pd.Series
d = dict()
for y in years:
    d[y] = r[r.year == y].congressperson_name.nunique()
s = pd.Series(d)
s

s.plot(kind='bar')
plt.title('Qty of congressmen listed per year')
```

##### Commentary

The greater number of congressmen in 2011 and 2015 is due to term transitions, which occur during those years.

---

#### How much did they spend, on average, per month in each year?

This analysis takes into consideration the following elements:

- Main data:
    - Monthly average spending per congressman during each year
- Relevant aspects for trend comparison:
    - CEAP limit for each year (i.e. the maximum allowed quota increased over the years)
    - Inflation indexes (i.e. prices of goods rose over the years)

##### Evolution of inflation (IPCA)

```
# Source: http://www.ibge.gov.br/home/estatistica/indicadores/precos/inpc_ipca/defaultseriesHist.shtm
ipca_years = [2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016]
ipca_indexes = [0.0431, 0.0590, 0.0650, 0.0583, 0.0591, 0.0641, 0.1067, 0.0629]
ipca = pd.DataFrame({'year': ipca_years, 'ipca': ipca_indexes})

# Filters only by years in the dataset
ipca = ipca[ipca['year'].isin(r.year.unique())].set_index('year')
ipca.head()
```

##### Maximum quota allowed (CEAP limits)

Information on the maximum CEAP is available for 2009 and 2017. Therefore, a compound annual growth rate (CAGR) is calculated from 2009 to 2017, and values for the years in between are assumed to grow at that constant rate.
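The CAGR formula applied below can be sanity-checked on made-up numbers: growing the start value at the computed rate for the full period must reproduce the end value.

```python
# Hypothetical sanity check of the CAGR formula (made-up values, not CEAP data)
start_value, end_value, n_years = 100.0, 200.0, 8
cagr = (end_value / start_value) ** (1.0 / n_years) - 1

value = start_value
for _ in range(n_years):
    value *= 1 + cagr

print(round(cagr, 4))   # 0.0905
print(round(value, 6))  # 200.0
```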
```
states = ['AC', 'AL', 'AM', 'AP', 'BA', 'CE', 'DF', 'ES', 'GO', 'MA', 'MG', 'MS', 'MT',
          'PA', 'PB', 'PE', 'PI', 'PR', 'RJ', 'RN', 'RO', 'RR', 'RS', 'SC', 'SE', 'SP', 'TO']

# Source: http://www2.camara.leg.br/a-camara/estruturaadm/diretorias/dirgeral/estrutura-1/deapa/portal-da-posse/ceap-1
ceap_2009 = [40711.32, 37318.73, 39734.17, 39554.50, 35540.51, 38705.50, 27977.66,
             34080.83, 32317.69, 38429.49, 32856.38, 36949.65, 35924.24, 38499.17,
             38319.91, 37992.68, 37344.18, 35412.67, 32550.32, 38963.25, 39828.33,
             41612.80, 37256.00, 36337.92, 36578.43, 33730.95, 35993.76]

# Source: http://www2.camara.leg.br/comunicacao/assessoria-de-imprensa/cota-parlamentar
ceap_2017 = [44632.46, 40944.10, 43570.12, 43374.78, 39010.85, 42451.77, 30788.66,
             37423.91, 35507.06, 42151.69, 36092.71, 40542.84, 39428.03, 42227.45,
             42032.56, 41676.80, 40971.77, 38871.86, 35759.97, 42731.99, 43672.49,
             45612.53, 40875.90, 39877.78, 40139.26, 37043.53, 39503.61]

ceap_limit_states = pd.DataFrame({'ceap_2009': ceap_2009, 'ceap_2017': ceap_2017}, index=states)
ceap_limit_states.head()

all_years = ipca_years

# Calculates the CAGR according to the data available (CEAP@2009 and CEAP@2017),
# using the CEAP average among states
cagr = ((ceap_limit_states.ceap_2017.mean() / ceap_limit_states.ceap_2009.mean())**(1./(2017-2009)) - 1)

# Computes estimated CEAP values for the years between 2009 and 2017 using the CAGR
ceap_values = []
for i in range(2017-2009):
    if i == 0:
        ceap_values.append(ceap_limit_states.ceap_2009.mean())
    elif i == (r.year.nunique() - 1):
        ceap_values.append(ceap_limit_states.ceap_2017.mean())
    else:
        ceap_values.append(ceap_values[i-1] * (1 + cagr))

# Creates a df with all years
ceap_limit_years = pd.DataFrame({'year': all_years, 'max_avg_ceap': ceap_values})

# Filters only by years in the dataset
ceap_limit_years = ceap_limit_years[ceap_limit_years['year'].isin(r.year.unique())].set_index('year')
ceap_limit_years.head()

# Groups by year, summing up spendings
a = r.groupby(['year']).sum().drop('month',
1)
a['congressmen_qty'] = s
a['avg_monthly_value_per_congressmen'] = a['total_net_value'] / a['congressmen_qty'] / 12
a = a.drop(2017, 0)  # Neglects 2017

# Adds columns for CEAP limits and IPCA indexes
a['max_avg_ceap'] = ceap_limit_years['max_avg_ceap']
a['pct_of_quota_used'] = (a['avg_monthly_value_per_congressmen'] / a['max_avg_ceap']) * 100
a['ipca'] = ipca['ipca']
a['acc_ipca'] = (a['ipca'] + 1).cumprod() - 1
a

# Procedure to handle a secondary Y axis
fig0, ax0 = plt.subplots()
ax1 = ax0.twinx()
y0 = a[['avg_monthly_value_per_congressmen', 'max_avg_ceap']].plot(kind='line', ax=ax0)
y1 = (a['acc_ipca']*100).plot(kind='line', secondary_y=False, style='g--', ax=ax1)
y0.legend(loc=2)  # line legend to the left
y1.legend(loc=1)  # line legend to the right
y0.set_ylim((0, 50000))
#y1.set_ylim((0,50000))
y0.set_ylabel('CEAP usage and limit (R$)')
y1.set_ylabel('Accumulated IPCA index (%)')
plt.title('Avg. monthly congressmen spending vs. maximum quota and inflation idx.')
plt.show()
plt.close()
```

##### Commentary

Although average spending has increased over the years, this can be due to both aspects considered: rises in prices and an expanded limit for reimbursements. The next chart shows how spending has increased with respect to quota limits.

```
a.pct_of_quota_used.plot()
plt.ylim((0, 100))
plt.title('Fluctuation of monthly CEAP spending per congressperson (% of max. quota)')
```

##### Commentary

The chart shows that average spending has increased more than quota limits were raised (from ca. 40% to 60% of quota usage). This might be due to the steep rise in inflation levels, as observed in the previous chart.

---

### Average monthly spending per congressperson along the years

This table shows the data above detailed per congressperson.
```
# Groups by name, summing up spendings
a = r.groupby(['congressperson_name', 'year'])\
     .sum()\
     .drop('month', 1)

# Computes average spending per month and unstacks
a['monthly_total_net_value'] = a['total_net_value'] / 12
a = a.drop('total_net_value', 1).unstack()

# Creates a subtotal column to the right
a['mean'] = a.mean(axis=1)
a.head()
```

### Seasonality in reimbursements

Out of curiosity, in which period of the year were more reimbursements issued?

```
r.groupby('month')\
 .sum()\
 .total_net_value\
 .sort_index()\
 .plot(kind='bar', rot=0)
plt.title('Fluctuation of reimbursements issued by month (R$)')
```

### Reimbursements by type of spending

What are congressmen most using their quota for?

```
r.groupby('subquota_description')\
 .sum()\
 .total_net_value\
 .sort_values(ascending=True)\
 .plot(kind='barh')
plt.title('Total spent by type of service (R$)')
```

##### Commentary

This chart makes it clear what is prioritized by congressmen: publicity of their activity. Voters might judge whether this choice is reasonable or not.

---

### Which party has the most spending congressmen?

##### How many congressmen in each party?

```
parties = r.party.unique()
parties

# Computes unique names in each party and saves them into a pd.Series
d = dict()
for p in parties:
    d[p] = r[r.party == p].congressperson_name.nunique()
s = pd.Series(d)
s
```

#### How much did congressmen from each party spend in the year, on average?
```
t = r.groupby('party').sum()
t = t.drop(['year', 'month'], 1)  # Removes useless columns
t['congressmen_per_party'] = s
years = r.year.nunique()
t['monthly_value_per_congressperson'] = t['total_net_value'] / t['congressmen_per_party'] / (12 * years)
t.sort_values(by='monthly_value_per_congressperson', ascending=False).head()

t.monthly_value_per_congressperson\
 .sort_values(ascending=False)\
 .plot(kind='bar')
plt.title('Average monthly reimbursements per congressperson by party (R$)')
```

##### Commentary

It is important to note that many congressmen change parties frequently. Therefore, anyone interested in drawing conclusions regarding parties might want to analyse the data in further detail than is presented here.

---

### Which state has the most spending congressmen?

##### How many congressmen in each state?

```
states = r.state.unique()
states

# Computes unique names in each state and saves them into a pd.Series
d = dict()
for s in states:
    d[s] = r[r.state == s].congressperson_name.nunique()
s = pd.Series(d)
s
```

#### How much did congressmen from each state spend in the year, on average?

##### (!) Important: CEAP maximum value differs among states

As already commented previously, CEAP max.
quota varies among states, according to: http://www2.camara.leg.br/comunicacao/assessoria-de-imprensa/cota-parlamentar

```
# CEAP maximum values from 2017
ceap_states = ceap_limit_states.drop('ceap_2009', 1)
ceap_states.columns = ['monthly_max_ceap']  # Renames the column to be compatible with the code below
ceap_states.head()

t = r.groupby('state').sum()
t = t.drop(['year', 'month'], 1)  # Removes useless columns
t['congressmen_per_state'] = s
t['monthly_max_ceap'] = ceap_states
years = r.year.nunique()
t['monthly_value_per_congressperson'] = t['total_net_value'] / t['congressmen_per_state'] / (12 * years)
t['ceap_usage'] = (t['monthly_value_per_congressperson'] / t['monthly_max_ceap']) * 100
t.sort_values(by='ceap_usage', ascending=False).head()

t.ceap_usage\
 .sort_values(ascending=False)\
 .plot(kind='bar', rot=0)
plt.title('Average monthly CEAP usage per congressperson by state (% of max. quota)')
```

#### Comparison between a given state and the country's average

```
t.head()

country_average = t.ceap_usage.mean()
country_average

# Parametrizes the single-state analysis
state = 'SP'
state_average = t.loc[state].ceap_usage
state_average

s = pd.Series(dtype=float)
s['average_all_states'] = country_average
s[state] = state_average
s

s.plot(kind='bar', rot=0)
plt.title('Average monthly CEAP usage per congressperson: ' + state + ' vs. rest of the country (% of max. quota)')
```

### Who were the top spenders of all time in absolute terms?

```
r.groupby('congressperson_name')\
 .sum()\
 .total_net_value\
 .sort_values(ascending=False)\
 .head(10)

r.groupby('congressperson_name')\
 .sum()\
 .total_net_value\
 .sort_values(ascending=False)\
 .head(30)\
 .plot(kind='bar')
plt.title('Total reimbursements issued per congressperson (all years)')
```

##### Commentary

Because the dataset comprises 2009-2017, it might not be reasonable to draw any hard conclusions by looking at this chart alone. Some congressmen might have been elected for longer periods, and that would reflect in higher reimbursement totals.
For a more detailed, and hence more coherent, analysis, one might want to make this comparison within each term (e.g. 2010-2014). That would better identify "top spenders" by comparing congressmen's spending over the same time period.

Another interesting analysis can be made by expanding the chart to all congressmen, not only the top 30. This enables a richer look at how discrepant top spenders are from the rest. To do that, just change the `.head(30)\` argument in the previous cell.

---

### Who were the most hired suppliers by amount paid?

This analysis identifies suppliers by their unique CNPJ. It is worth noting that, commonly, some telecom carriers use a different CNPJ for subsidiaries in different states (e.g. TIM SP, TIM Sul, etc.).

```
sp = r.groupby(['cnpj_cpf', 'supplier', 'subquota_description'])\
      .sum()\
      .drop(['year', 'month'], 1)\
      .sort_values(by='total_net_value', ascending=False)
sp.reset_index(inplace=True)
sp = sp.set_index('cnpj_cpf')
sp.head()

cnpj = r.groupby('cnpj_cpf')\
        .sum()\
        .drop(['year', 'month'], 1)\
        .sort_values(by='total_net_value', ascending=False)
cnpj.head()

# Adds the supplier name beside total_net_value in the cnpj df
cnpj['supplier'] = ''   # Creates an empty column
cnpj = cnpj.head(1000)  # Gets only the first 1000 for this analysis

# Looks up supplier names in the sp df and fills the cnpj df
# (it might take a while to compute...)
for i in range(len(cnpj)):
    name = sp.loc[cnpj.index[i]].supplier
    if not isinstance(name, str):  # several rows for this CNPJ: take the first name
        name = name.iloc[0]
    cnpj.at[cnpj.index[i], 'supplier'] = name
cnpj.head(10)

# Sets better indexing to plot in a copy
sp2 = cnpj.set_index('supplier')
sp2.head(30)\
   .plot(kind='bar')
plt.title('Most hired suppliers (unique CNPJ) by total amount paid (R$)')
```

##### Commentary

In general, telecom carriers were the suppliers with the highest concentration of reimbursements. It is worth noting, however, that the Telecommunication subquota accounts for only 8% of the reimbursements.
This might suggest a 'long tail' pattern for other subquota types such as publicity, which accounts for 28% of all reimbursements.

Another aspect worth noting is the fact that some individual suppliers ("pessoas físicas") appear among the top 15 suppliers (e.g. Mr. Douglas da Silva and Mrs. Joceli do Nascimento). One might wonder whether such a concentration of reimbursements for single-person suppliers is reasonable.

```
pct_telecom = r[r['subquota_description'] == 'Telecommunication'].total_net_value.sum() / r.total_net_value.sum()
pct_telecom

pct_publicity = r[r['subquota_description'] == 'Publicity of parliamentary activity'].total_net_value.sum() / r.total_net_value.sum()
pct_publicity
```

#### Congressmen that hired the top supplier and how much they paid

```
r.groupby(['cnpj_cpf', 'congressperson_name'])\
 .sum()\
 .sort_values(by='total_net_value', ascending=False)\
 .loc['02558157000162']\
 .total_net_value\
 .head(20)
```

### Which are the most expensive individual reimbursements?

```
r = r.sort_values(by='total_net_value', ascending=False)
r.head(20)
```
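The groupby → sum → sort_values pattern used throughout this notebook can be sketched on a tiny synthetic frame (toy data, not CEAP data):

```python
import pandas as pd

toy = pd.DataFrame({
    'congressperson_name': ['A', 'A', 'B', 'B', 'B'],
    'total_net_value': [100.0, 50.0, 200.0, 25.0, 25.0],
})

# Total spending per person, largest first -- the same shape of query as
# the "top spenders" cells above
totals = (toy.groupby('congressperson_name')
             .total_net_value
             .sum()
             .sort_values(ascending=False))
print(totals.tolist())     # [250.0, 150.0]
print(list(totals.index))  # ['B', 'A']
```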
**Copyright 2021 The TensorFlow Hub Authors.**

Licensed under the Apache License, Version 2.0 (the "License");

```
# Copyright 2021 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
```

<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-base/1"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/senteval_for_universal_sentence_encoder_cmlm.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/senteval_for_universal_sentence_encoder_cmlm.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
  </td>
  <td>
    <a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/senteval_for_universal_sentence_encoder_cmlm.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
  </td>
  <td>
    <a href="https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-base/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
  </td>
</table>

#Universal Sentence Encoder
SentEval demo

This colab demonstrates the [Universal Sentence Encoder CMLM model](https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-base/1) using the [SentEval](https://github.com/facebookresearch/SentEval) toolkit, a library for measuring the quality of sentence embeddings. The SentEval toolkit includes a diverse set of downstream tasks that are able to evaluate the generalization power of an embedding model and the linguistic properties it encodes.

Run the first two code blocks to set up the environment; in the third code block you can pick a SentEval task to evaluate the model. A GPU runtime is recommended to run this Colab.

To learn more about the Universal Sentence Encoder CMLM model, see https://openreview.net/forum?id=WDVD4lUCTzU.

```
#@title Install dependencies
!pip install --quiet "tensorflow-text==2.8.*"
!pip install --quiet torch==1.8.1
```

## Download SentEval and task data

This step downloads SentEval from GitHub and executes the data script to download the task data. It may take up to 5 minutes to complete.

```
#@title Install SentEval and download task data
!rm -rf ./SentEval
!git clone https://github.com/facebookresearch/SentEval.git
!cd $PWD/SentEval/data/downstream && bash get_transfer_data.bash > /dev/null 2>&1
```

#Execute a SentEval evaluation task

The following code block executes a SentEval task and outputs the results. Choose one of the following tasks to evaluate the USE CMLM model:

```
MR  CR  SUBJ  MPQA  SST  TREC  MRPC  SICK-E
```

Select a model, params and task to run. The rapid prototyping params can be used to reduce computation time for faster results. It typically takes 5-15 minutes to complete a task with the **'rapid prototyping'** params and up to an hour with the **'slower, best performance'** params.
```
params = {'task_path': PATH_TO_DATA, 'usepytorch': True, 'kfold': 5}
params['classifier'] = {'nhid': 0, 'optim': 'rmsprop', 'batch_size': 128,
                        'tenacity': 3, 'epoch_size': 2}
```

For better results, use the slower **'slower, best performance'** params; computation may take up to 1 hour:

```
params = {'task_path': PATH_TO_DATA, 'usepytorch': True, 'kfold': 10}
params['classifier'] = {'nhid': 0, 'optim': 'adam', 'batch_size': 16,
                        'tenacity': 5, 'epoch_size': 6}
```

```
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import sys
sys.path.append(f'{os.getcwd()}/SentEval')

import tensorflow as tf

# Prevent TF from claiming all GPU memory so there is some left for pytorch.
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    # Memory growth needs to be the same across GPUs.
    for gpu in gpus:
        tf.config.experimental.set_memory_growth(gpu, True)

import tensorflow_hub as hub
import tensorflow_text
import senteval
import time

PATH_TO_DATA = f'{os.getcwd()}/SentEval/data'
MODEL = 'https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-base/1' #@param ['https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-base/1', 'https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-large/1']
PARAMS = 'rapid prototyping' #@param ['slower, best performance', 'rapid prototyping']
TASK = 'CR' #@param ['CR','MR', 'MPQA', 'MRPC', 'SICKEntailment', 'SNLI', 'SST2', 'SUBJ', 'TREC']

params_prototyping = {'task_path': PATH_TO_DATA, 'usepytorch': True, 'kfold': 5}
params_prototyping['classifier'] = {'nhid': 0, 'optim': 'rmsprop', 'batch_size': 128,
                                    'tenacity': 3, 'epoch_size': 2}

params_best = {'task_path': PATH_TO_DATA, 'usepytorch': True, 'kfold': 10}
params_best['classifier'] = {'nhid': 0, 'optim': 'adam', 'batch_size': 16,
                             'tenacity': 5, 'epoch_size': 6}

params = params_best if PARAMS == 'slower, best performance' else params_prototyping

preprocessor = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
encoder = hub.KerasLayer(
"https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-base/1") inputs = tf.keras.Input(shape=tf.shape(''), dtype=tf.string) outputs = encoder(preprocessor(inputs)) model = tf.keras.Model(inputs=inputs, outputs=outputs) def prepare(params, samples): return def batcher(_, batch): batch = [' '.join(sent) if sent else '.' for sent in batch] return model.predict(tf.constant(batch))["default"] se = senteval.engine.SE(params, batcher, prepare) print("Evaluating task %s with %s parameters" % (TASK, PARAMS)) start = time.time() results = se.eval(TASK) end = time.time() print('Time took on task %s : %.1f. seconds' % (TASK, end - start)) print(results) ``` #Learn More * Find more text embedding models on [TensorFlow Hub](https://tfhub.dev) * See also the [Multilingual Universal Sentence Encoder CMLM model](https://tfhub.dev/google/universal-sentence-encoder-cmlm/multilingual-base-br/1) * Check out other [Universal Sentence Encoder models](https://tfhub.dev/google/collections/universal-sentence-encoder/1) ## Reference * Ziyi Yang, Yinfei Yang, Daniel Cer, Jax Law, Eric Darve. [Universal Sentence Representations Learning with Conditional Masked Language Model. November 2020](https://openreview.net/forum?id=WDVD4lUCTzU)
```
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image


def _if_near(point, mask, nearest_neighbor):
    nn = nearest_neighbor
    w, h = mask.shape
    x, y = point
    mask = np.pad(mask, nn, 'edge')
    x += nn
    y += nn
    if (w + nn > x and h + nn > y):
        x_i, y_i = int(x + 0.5), int(y + 0.5)
        near = mask[x_i - nn:x_i + nn, y_i - nn:y_i + nn]
        if near.max() - near.min() != 0:
            if (x < w and y < h):
                return True
    return False

# ***
# *n*    It's an example of a 1-neighbor
# ***
#
# *****
# *****
# **n**  It's an example of a 2-neighbor
# *****
# *****
#
# Did you get any of that?


def _get_edge_k_neighbor(img, k):
    '''
    The idea is identical to the original _if_near, but this implementation
    saves the intermediate results and thus speeds up the whole process by a
    massive margin when a large number of points requires calculation.
    Returns an array sized (w, h) storing the max-min value over each pixel's
    neighborhood.
    '''
    w, h = img.shape
    padded = np.pad(img, k, 'edge')
    # this is the result image array
    res = np.zeros(img.shape)
    # This is the main process; the 2k-wide window starting at padded[i, j]
    # is centered on the original pixel (i, j), so the result is written
    # to res[i, j]
    for i in range(w):
        for j in range(h):
            neighbor = padded[i:i + 2 * k, j:j + 2 * k]
            _max = neighbor.max()
            _min = neighbor.min()
            res[i, j] = (_max - _min)
    return res


def _new_if_near(point, edge_k_neighbor):
    x, y = point
    x, y = int(x), int(y)
    return edge_k_neighbor[x][y] > 0


def getpoint(mask_img, k, beta, training=True, nearest_neighbor=3, new_if_near=True):
    w, h = mask_img.shape
    N = int(beta * k * w * h)
    xy_min = [0, 0]
    xy_max = [w - 1, h - 1]
    points = np.random.uniform(low=xy_min, high=xy_max, size=(N, 2))
    #print(points)
    if (beta > 1 or beta < 0):
        print("beta should be in range [0,1]")
        return None

    # for training, the mask is a hard mask
    if training == True:
        if beta == 0:
            return points
        res = []
        if new_if_near:
            edge_k_neighbor = _get_edge_k_neighbor(mask_img, nearest_neighbor)
            for p in points:
                if _new_if_near(p, edge_k_neighbor):
                    res.append(p)
        else:
            for p in points:
                if _if_near(p, mask_img, nearest_neighbor):
                    res.append(p)
        others = int((1 - beta) * k * w * h)
        not_edge_points = \
            np.random.uniform(low=xy_min, high=xy_max, size=(others, 2))
        for p in not_edge_points:
            res.append(p)
        return res

    # for inference, the mask is a soft mask
    if training == False:
        res = []
        for i in range(w):
            for j in range(h):
                if mask_img[i, j] > 0:
                    res.append((i, j))
        return res


def _generate_mask(size, func=lambda x: x * x):
    w, h = size
    res = np.zeros((w, h))
    for x in range(w):
        for y in range(h):
            if y > func(x):
                res[x, y] = 255
    return res


my_mask = _generate_mask((14, 14))
plt.imshow(my_mask)

%%timeit
#plt.imshow(my_mask,cmap="Purples")
points = getpoint(mask_img=my_mask, k=1000, beta=0.8, nearest_neighbor=2, new_if_near=True)
# points = list(zip(*points))
# plt.scatter(points[1], points[0], c='black', s=4)

plt.imshow(my_mask, cmap="Purples")
points = getpoint(my_mask, 1, 1, nearest_neighbor=2)
points = list(zip(*points))
plt.scatter(points[1], points[0], c='black', s=4)

plt.imshow(my_mask, cmap="Purples")
points = getpoint(my_mask, 10, 1, nearest_neighbor=2)
points = list(zip(*points))
plt.scatter(points[1], points[0], c='black', s=4)

my_mask = np.asarray(Image.open("tree_mask.jpg").resize((32, 32)))
my_mask = my_mask[:, :, 0]
my_mask.shape

plt.imshow(my_mask, cmap="Purples")
points = getpoint(my_mask, 1, 1, nearest_neighbor=1)
points = list(zip(*points))
plt.scatter(points[1], points[0], c='black', s=4)

from pointGenerate import getpoint
import matplotlib.pyplot as plt
from PIL import Image
import numpy as np

resolution = 128
sz = (resolution, resolution)
my_mask = np.asarray(Image.open("tree_mask.jpg").resize(sz))
my_img = np.asarray(Image.open("tree.jpg").resize(sz))
my_mask = my_mask[:, :, 0]

points = getpoint(my_mask, 0.25, 0.95, nearest_neighbor=1)
points = list(zip(*points))

plt.subplot(121)
plt.imshow(my_mask, cmap="Purples")
plt.scatter(points[1], points[0], c='black', s=4)
plt.title('k=0.25, beta=0.95')

plt.subplot(122)
plt.imshow(my_img, cmap="Purples")
plt.scatter(points[1], points[0], c='black', s=4)
plt.title('k=0.25, beta=0.95')

plt.savefig('resolution=128.jpg', dpi=400)

points = \
getpoint(my_mask, 1, -0.95, nearest_neighbor=1)

import matplotlib.pyplot as plt

n = [196, 1960, 19600, 196000]
t1 = [6.11, 61.8, 609, 6280]
v1 = [0.122, 2.92, 14.3, 99.9]
t2 = [1.3, 3.93, 28.2, 267]
v2 = [0.0084, 0.383, 0.643, 12.7]

fig, ax2 = plt.subplots(1, 1)
ax2.set_xscale("log")
ax2.set_yscale("log")
ax2.set_adjustable("datalim")
ax2.plot(n, t1, "o-", label='original algorithm')
ax2.plot(n, t2, "go-", label='improved algorithm')
#ax2.set_xlim(1e-1, 1e2)
#ax2.set_ylim(1e-1, 1e3)
plt.ylabel('Time (ms)')
plt.xlabel('Number of points')
ax2.set_aspect(1)
ax2.set_title("Performance improvement")
plt.legend()
plt.savefig('performance improvement.png', dpi=300)
```
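The per-pixel max-min loop in `_get_edge_k_neighbor` can also be expressed without explicit Python loops by stacking shifted views of the padded image. This sketch uses a symmetric (2k+1)-wide window, a slight variant of the 2k-wide window above, so its numbers are illustrative rather than identical to the original.

```python
import numpy as np

def edge_k_neighbor_vectorized(img, k):
    # Max-min over a (2k+1)x(2k+1) edge-padded neighborhood of each pixel.
    # A pixel is "near an edge" when its neighborhood mixes mask values,
    # i.e. the result is nonzero there.
    padded = np.pad(img, k, 'edge')
    w, h = img.shape
    stacked = np.stack([padded[di:di + w, dj:dj + h]
                        for di in range(2 * k + 1)
                        for dj in range(2 * k + 1)])
    return stacked.max(axis=0) - stacked.min(axis=0)

mask = np.zeros((8, 8))
mask[:, 4:] = 255  # vertical edge between columns 3 and 4
res = edge_k_neighbor_vectorized(mask, 1)
print(res[0, 3], res[0, 0])  # 255.0 0.0
```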
# DC Python Tutorial 2: 10-19

Hint: If you are typing a function name and want to know what the options are for completing what you are typing, just hit the tab key for a menu of options.

Hint: If you want to see the source code associated with a function, you can do the following:

```
import inspect
inspect.getsource(foo)
```

Where "foo" is the function that you'd like to learn about.

Each cell in Jupyter is either code or markdown (select in the drop-down menu above). You can learn about markdown language from the help menu. Markdown allows you to create very nicely formatted text, including LaTeX equations.

$$c = \sqrt{a^2 + b^2}$$

Each cell is either in edit mode (select this cell and press the enter key) or in display mode (press shift enter). Shift Enter also executes the code in the cell. When you open a Jupyter notebook, it is convenient to go to the cell menu and select Run All so that all results are calculated and displayed.

The Python kernel remembers all definitions (functions and variables) as they are defined based on execution of the cells in the Jupyter notebook. Thus, if you fail to execute a cell, the parameters defined in that cell won't be available. Similarly, if you define a parameter and then delete that line of code, that parameter remains defined until you go to the Kernel menu and select Restart. It is good practice to select Restart & Run All from the Kernel menu after completing an assignment to make sure that everything in your notebook works correctly and that you haven't deleted an essential line of code!

```
# Here we import packages that we will need for this notebook.
# You can find out about these packages in the Help menu.

# Although math is "built in", it needs to be imported so its functions can be used.
import math
from scipy import constants, interpolate

# See the numpy cheat sheet:
# https://www.dataquest.io/blog/images/cheat-sheets/numpy-cheat-sheet.pdf
# The numpy import is needed because it is renamed here as np.
import numpy as np

# Pandas is used to import data from spreadsheets
import pandas as pd
import matplotlib.pyplot as plt

# sys and os give us access to operating system directory paths and to sys paths.
import sys, os

# If you place your GitHub directory in your documents folder and
# clone both the design challenge notebook and the AguaClara_design repo,
# then this code should all work.
# If you have your GitHub directory at a different location on your computer,
# then you will need to adjust the directory path below.

# Add the path to your GitHub directory so that python can find files in other contained folders.
path1 = '~'
path2 = 'Documents'
path3 = 'GitHub'
path4 = os.path.join(path1, path2, path3)
myGitHubdir = os.path.expanduser(path4)
if myGitHubdir not in sys.path:
    sys.path.append(myGitHubdir)

# Add imports for AguaClara code that will be needed.
# physchem has functions related to hydraulics, fractal flocs, flocculation, sedimentation, etc.
from aide_design import physchem as pc

# pipedatabase has functions related to pipe diameters
from aide_design import pipedatabase as pipe

# units allows us to include units in all of our calculations
from aide_design.units import unit_registry as u

from aide_design import utility as ut
```

---

## Resources in getting started with Python

Here are some basic [Python functions](http://docs.python.org/3/library/functions.html) that might be helpful to look through.

## Transitioning From Matlab To Python

**Indentation** - When writing functions or using statements, Python recognizes code blocks by the way they are indented. A code block is a group of statements that, together, perform a task. A block begins with a header that is followed by one or more statements that are indented with respect to the header. The indentation indicates to the Python interpreter, and to programmers reading the code, that the indented statements and the preceding header form a code block.
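A minimal sketch of the indentation rule described above: the indented statements form the `if`/`else` blocks, and dedenting returns to the function level.

```python
def classify(n):
    if n % 2 == 0:        # header line ends with a colon
        kind = "even"     # indented -> inside the if-block
    else:
        kind = "odd"      # indented -> inside the else-block
    return kind           # dedented -> back at the function level

print(classify(4))  # even
print(classify(7))  # odd
```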
**Suppressing Statements** - Unlike Matlab, you do not need a semicolon to suppress output in Python.

**Indexing** - Matlab starts at index 1 whereas Python starts at index 0.

**Functions** - In Matlab, functions are written by invoking the keyword "function", the return parameter(s), the equal to sign, the function name and the input parameters. A function is terminated with "end".

```
function y = average(x)
    if ~isvector(x)
        error('Input must be a vector')
    end
    y = sum(x)/length(x);
end
```

In Python, functions are written using the keyword "def", followed by the function name and then the input parameters in parentheses followed by a colon. A function returns a value with the keyword "return".

```
def average(x):
    if not isinstance(x, list):
        raise ValueError("Input must be a vector")
    return sum(x)/len(x)
```

**Statements** - for loops and if statements do not require the keyword "end" in Python. The loop header in Matlab varies from that of Python. Check the examples below:

Matlab code

```
s = 10;
H = zeros(s);
for c = 1:s
    for r = 1:s
        H(r,c) = 1/(r+c-1);
    end
end
```

Python code

```
s = 10
H = [[0]*s for r in range(s)]
for r in range(s):
    for c in range(s):
        H[r][c] = 1/(r+c+1)
```

(Note that because Python indices start at zero, the equivalent expression is `1/(r+c+1)`.)

**Printing** - Use "print()" in Python instead of "disp" in Matlab.

**Helpful Documents**

[Numpy for Matlab Users](https://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html)

[Stepping from Matlab to Python](http://stsievert.com/blog/2015/09/01/matlab-to-python/)

[Python for Matlab Users, UC Boulder](http://researchcomputing.github.io/meetup_fall_2014/pdfs/fall2014_meetup13_python_matlab.pdf)

---

## Arrays and Lists

Python has no native array type. Instead, it has lists, which are defined using [ ]:

```
a = [0,1,2,3]
```

Python has a number of helpful commands to modify lists, and you can read more about them [here](https://docs.python.org/2/tutorial/datastructures.html).
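A short sketch of a few of those list commands (plain Python, no extra packages needed):

```python
a = [0, 1, 2, 3]
a.append(4)        # add a single element to the end
a.extend([5, 6])   # concatenate another list onto the end
a.insert(0, -1)    # insert a value at a given index
a.remove(-1)       # remove the first occurrence of a value
print(a)           # [0, 1, 2, 3, 4, 5, 6]
```

Note that these methods modify the list in place rather than returning a new list.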
In order to use lists as arrays, numpy (numpy provides tools for working with **num**bers in **py**thon) provides an array data type that is created with `np.array()`.

```
a_array = np.array(a)
a_array
```

Pint, which adds unit capabilities to Python (see the section on units below), is compatible with NumPy, so it is possible to add units to arrays and perform certain calculations with these arrays. We recommend using NumPy arrays rather than lists because NumPy arrays can handle units. Additionally, use functions from NumPy instead of functions from the math package where possible, because the math package does not yet handle units.

Units are added by multiplying the number by the unit raised to the appropriate power. The pint unit registry was imported above as "u" and thus the units for milliliters are defined as u.mL.

```
a_array_units = a_array * u.m
a_array_units
```

In order to make a 2D array, you can use the same [NumPy array command](https://docs.scipy.org/doc/numpy/reference/generated/numpy.array.html).

```
b = np.array([[0,1,2],[3,4,5],[6,7,8]])*u.mL
b
```

Indexing is done by row and then by column. To call all of the elements in a row or column, use a colon. As you can see in the following example, indexing in python begins at zero. So `b[:,1]` calls all rows in the second column.

```
b[:,1]
```

If you want a specific range of values in an array, you can also use a colon to slice the array, with the number before the colon being the index of the first element, and the number after the colon being **one greater** than the index of the last element.

```
b[1:3,0]
```

For lists and 1D arrays, the `len()` command can be used to determine the length. Note that the length is NOT equal to the index of the last element because the indexes are zero based. The len function can be used with lists and arrays. For multiple dimension arrays the `len()` command returns the length of the first dimension.
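As a quick unit-free check of that statement (assuming only NumPy, without the pint units used elsewhere in this tutorial):

```python
import numpy as np

a = [0, 1, 2, 3]
b = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8]])

print(len(a))   # 4 -- the number of elements in the list
print(len(b))   # 3 -- only the length of the first dimension (the number of rows)
```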
```
len(a)
len(b)
```

For higher-dimensional arrays, `numpy.size()` can be used to find the total number of elements and `numpy.shape()` can be used to learn the dimensions of the array.

```
np.size(b)
np.shape(b)
```

For a listing of the commands you can use to manipulate numpy arrays, refer to the [scipy documentation](https://docs.scipy.org/doc/numpy/reference/routines.array-manipulation.html).

Sometimes, it is helpful to have an array of elements that range from zero to a specified number. This can be useful, for example, in creating a graph. To create an array of this type, use [numpy.arange](https://docs.scipy.org/doc/numpy/reference/generated/numpy.arange.html).

```
crange = np.arange(10)
crange
cdetailedrange = np.arange(5,10,0.1)
cdetailedrange
```

---

## Units

Units are essential to engineering calculations. Units provide a quick check on all of our calculations to help reduce the number of errors in our analysis. Getting the right dimensions back from a calculation doesn't prove that the answer is correct, but getting the wrong dimensions back does prove that the answer is wrong! Unit errors from incorrect conversions are common when using apps that don't calculate with units. Engineering design work should always include units in the calculations.

We use the [pint package](https://pint.readthedocs.io/) to add unit capabilities to our calculations in Python. We have imported the `pint.UnitRegistry` as 'u' and thus all of pint's units can be used by placing a 'u.' in front of the unit name. Meters are `u.m`, seconds are `u.s`, etc. Most units are simple values that can be used just like other terms in algebraic equations. The exception to this are units that have an offset. For example, in the equation PV=nRT, temperature must be given with units that have a value of zero at absolute zero. We would like to be able to enter 20 degC into that equation and have it handle the units correctly.
But you can't convert from degC to Kelvin by simply multiplying by a conversion factor. Thus for temperature the units have to be handled in a special way. Temperatures require use of the u.Quantity function to enter the value and the units of temperature separated by a ',' rather than by a multiplication symbol. This is because it doesn't make sense to multiply by a temperature unit: temperatures (that aren't absolute temperatures) have both a slope and a nonzero intercept.

You can find [constants that are defined in pint](https://github.com/hgrecco/pint/blob/master/pint/constants_en.txt) at the github page for pint.

Below is a simple calculation illustrating the use of units to calculate the flow through a vertical pipe given a velocity and an inner diameter. We will illustrate how to calculate pipe diameters further ahead in the tutorial.

```
V_up = 1*u.mm/u.s
D_reactor = 1*u.inch
A_reactor = pc.area_circle(D_reactor)
Q_reactor = V_up*A_reactor
Q_reactor
```

The result isn't formatted very nicely. We can select the units we'd like to display by using the `.to` method.

```
Q_reactor.to(u.mL/u.s)
```

We can also force the display to be in the metric base units.

```
Q_reactor.to_base_units()
```

If you need to strip units from a quantity (for example, for calculations using functions that don't support units) you can use the `.magnitude` method. It is important that you force the quantity to be in the correct units before stripping the units.

```
Q_reactor.to(u.mL/u.s).magnitude
```

### Significant digits

Python will happily display results with 17 digits of precision. We'd like to display a reasonable number of significant digits so that we don't get distracted with 14 digits of useless information. We created a [sig function in the AguaClara_design repository](https://github.com/AguaClara/AguaClara_design/blob/master/utility.py) that allows you to specify the number of significant digits to display.
You can couple this with the print function to create a well formatted solution to a calculation. The sig function also displays the accompanying units. The sig function call is `ut.sig(value, sigfig)`.

### Example problem and solution

Calculate the number of moles of methane in a 20 L container at 15 psi above atmospheric pressure with a temperature of 30 C.

```
# First assign the values given in the problem to variables.
P = 15 * u.psi + 1 * u.atm
T = u.Quantity(30,u.degC)
V = 20 * u.L

# Use the equation PV=nRT and solve for n, the number of moles.
# The universal gas constant is available in pint.
nmolesmethane = (P*V/(u.R*T.to(u.kelvin))).to_base_units()
print('There are '+ut.sig(nmolesmethane,3)+' of methane in the container.')
nmolesmethane
```

---

## Functions

When it becomes necessary to do the same calculation multiple times, it is useful to create a function to facilitate the calculation in the future.

- Function blocks begin with the keyword def followed by the function name and parentheses ( ).
- Any input parameters or arguments should be placed within these parentheses.
- The code block within every function starts with a colon (:) and is indented.
- The statement return [expression] exits a function and returns an expression to the user. A return statement with no arguments is the same as return None.
- (Optional) The first statement of a function can be the documentation string of the function, or docstring, written with triple quotes.

Below is an example of a function that takes three inputs, pressure, volume, and temperature, and returns the number of moles.

```
# Creating a function is easy in Python
def nmoles(P,V,T):
    return (P*V/(u.R*T.to(u.kelvin))).to_base_units()
```

Try using the new function to solve the same problem as above. You can reuse the variables. You can use the new function call inside the print statement.
```
print('There are '+ut.sig(nmoles(P,V,T),3)+' of methane in the container.')
```

---

## Density Function

We will create and graph functions describing density and viscosity of water as a function of temperature. We will use the [scipy 1D interpolate function](https://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html#d-interpolation-interp1d) to create smooth interpolation between the known data points to generate a smooth function.

`density_water`, defined in [`physchem`](https://github.com/AguaClara/AguaClara_design/blob/master/physchem.py), is a function that returns a fluid's density at a given temperature. It has one input parameter, temperature (in Celsius).

```
# Here is an example of how you could define the function yourself if you chose.
# Below are corresponding arrays of temperature and water density with appropriate units attached.
# The 1d interpolation function will use a cubic spline.
Tarray = u.Quantity([0,5,10,20,30,40,50,60,70,80,90,100],u.degC)
rhoarray = [999.9,1000,999.7,998.2,995.7,992.2,988.1,983.2,977.8,971.8,965.3,958.4]*u.kg/u.m**3

def DensityWater(T):
    rhointerpolated = interpolate.interp1d(Tarray, rhoarray, kind='cubic')
    rho = rhointerpolated(T.to(u.degC))
    return rho*u.kg/u.m**3

# You can get the density of water for any temperature using this function call.
print('The density of water at '+ut.sig(u.Quantity(20,u.degC),3)+' is '+ut.sig(DensityWater(u.Quantity(20,u.degC)),4)+'.')
```

---

## Pipe Database

The [`pipedatabase`](https://github.com/AguaClara/AguaClara_design/blob/master/pipedatabase.py) file in `AguaClara_design` has many useful functions concerning pipe sizing. It provides functions that calculate actual pipe inner and outer diameters given the nominal diameter of the pipe. Note that nominal diameter just means the diameter that it is called (hence the descriptor "nominal") and thus a 1 inch nominal diameter pipe might not have any dimensions that are actually 1 inch!
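Since the AguaClara packages and pint may not be installed everywhere, here is a unit-free sketch of the `DensityWater` lookup above using NumPy's `np.interp`. This is linear rather than cubic interpolation, so values between the tabulated points will differ slightly from the `interp1d` version:

```python
import numpy as np

# The same temperature (degC) and density (kg/m^3) data as above, without units.
Tarray = np.array([0, 5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100])
rhoarray = np.array([999.9, 1000, 999.7, 998.2, 995.7, 992.2,
                     988.1, 983.2, 977.8, 971.8, 965.3, 958.4])

def density_water_approx(T):
    """Return water density (kg/m^3) at temperature T (degC) by linear interpolation."""
    return np.interp(T, Tarray, rhoarray)

print(density_water_approx(20))   # a tabulated point is returned exactly: 998.2
print(density_water_approx(25))   # midway between the 20 and 30 degC values
```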
```
# The OD function in pipedatabase returns the outer diameter of a pipe given the nominal diameter, ND.
pipe.OD(6*u.inch)
```

The ND_SDR_available function returns the nominal diameter of a pipe that has an inner diameter equal to or greater than the requested inner diameter ([SDR, standard dimension ratio](http://www.engineeringtoolbox.com/sdr-standard-dimension-ratio-d_318.html)). Below we find the smallest available pipe that has an inner diameter of at least 7 cm.

```
IDmin = 7 * u.cm
SDR = 26
ND_my_pipe = pipe.ND_SDR_available(IDmin,SDR)
ND_my_pipe
```

The actual inner diameter of this pipe is

```
ID_my_pipe = pipe.ID_SDR(ND_my_pipe,SDR)
print(ut.sig(ID_my_pipe.to(u.cm),2))
```

We can display the available nominal pipe sizes that are in our database.

```
pipe.ND_all_available()
```

---

## Physchem

The `AguaClara_design` [physchem](https://github.com/AguaClara/AguaClara_design/blob/master/physchem.py) file has many useful fluids functions, including the Reynolds number, head loss equations, orifice equations, viscosity, etc.

---

## Viscosity Functions

```
# Define the temperature of the fluid so that we can calculate the kinematic viscosity.
temperature = u.Quantity(20,u.degC)
# Calculate the kinematic viscosity using the function in physchem, which we access using "pc".
nu = pc.viscosity_kinematic(temperature)
print('The kinematic viscosity of water at '+ut.sig(temperature,2)+' is '+ut.sig(nu,3))
```

---

## Our First Graph!

We will use [matplotlib](https://matplotlib.org/) to create a graph of water density as a function of temperature. [Here](https://matplotlib.org/users/pyplot_tutorial.html) is a quick tutorial on graphing.

```
# Create a list of 100 numbers between 0 and 100 and then assign the units of degC to the array.
# This array will be the x values of the graph.
GraphTarray = u.Quantity(np.arange(100),u.degC)

# Note the use of the .to method below to display the results in a particular set of units.
plt.plot(GraphTarray, pc.viscosity_kinematic(GraphTarray).to(u.mm**2/u.s), '-')
plt.xlabel('Temperature (degrees Celsius)')
plt.ylabel('Viscosity (mm^2/s)')
plt.show()
```

### Reynolds number

We will use the physchem functions to calculate the Reynolds number for flow through a pipe.

```
Q = 5*u.L/u.s
D = pipe.ID_SDR(4*u.inch,26)
Reynolds_pipe = pc.re_pipe(Q,D,nu)
Reynolds_pipe
```

Now use the sig function to display calculated values to a user specified number of significant figures.

```
print('The Reynolds number is '+ut.sig(pc.re_pipe(Q,D,nu),3))
```

Here is a table of a few of the equations describing pipe flow and their physchem function counterparts.

## Assorted Fluids Functions

| Equation Name | Equation | Physchem function |
|---------------------------------------|:---:|:---:|
| Reynolds Number | $Re= \frac{{4Q}}{{\pi D\nu }}$ | `re_pipe(FlowRate, Diam, Nu)` |
| Swamee-Jain Turbulent Friction factor | ${\rm{f}} = \frac{{0.25}}{{{{\left[ {\log \left( {\frac{\varepsilon }{{3.7D}} + \frac{{5.74}}{{{{{\mathop{\rm Re}\nolimits} }^{0.9}}}}} \right)} \right]}^2}}}$ | `fric(FlowRate, Diam, Nu, PipeRough)` |
| Laminar Friction factor | ${\rm{f}} = \frac{64}{Re}$ | |
| Hagen-Poiseuille laminar flow head loss | ${h_{\rm{f}}} = \frac{{32\mu LV}}{{\rho g{D^2}}} = \frac{{128\mu LQ}}{{\rho g\pi {D^4}}}$ | |
| Darcy-Weisbach head loss | ${h_{\rm{f}}} = {\rm{f}}\frac{8}{{g{\pi ^2}}}\frac{{L{Q^2}}}{{{D^5}}}$ | `headloss_fric(FlowRate, Diam, Length, Nu, PipeRough)` |
| Swamee-Jain equation for diameter | $0.66\left ( \varepsilon ^{1.25}\left ( \frac{LQ^{2}}{gh_{f}} \right )^{4.75}+\nu Q^{9.4}\left ( \frac{L}{gh_{f}} \right )^{5.2} \right )^{0.04}$ | `diam_swamee(FlowRate, HeadLossFric, Length, Nu, PipeRough)` |

```
# create a plot that
# shows both the original data values (plotted as points)
# and the smooth curve that shows the density function.
# Note that Tarray and rhoarray were defined much earlier in this tutorial.
# We will plot the data points using circles 'o' and the smooth function using a line '-'.
plt.plot(Tarray, rhoarray, 'o', GraphTarray, (DensityWater(GraphTarray)), '-')

# For an x axis log scale use plt.semilogx(Tarray, rhoarray, 'o', xnew, f2(xnew), '-')
# For a y axis log scale use plt.semilogy(Tarray, rhoarray, 'o', xnew, f2(xnew), '-')
# For both axis log scale use plt.loglog(Tarray, rhoarray, 'o', xnew, f2(xnew), '-')

# Below we create the legend and axis labels.
plt.legend(['data', 'cubic'], loc='best')
plt.xlabel('Temperature (degrees Celsius)', fontsize=20)
plt.ylabel('Density (kg/m^3)', fontsize=20)

# Now we show the graph and we are done!
plt.show()
```

# Design Challenge 1, learning Python, Jupyter, and some AguaClara Design Functions

### 1) Calculate the minimum inner diameter of a PVC pipe that can carry a flow of at least 10 L/s for the town of Ojojona.

The population is 4000 people. The water source is a dam with a surface elevation of 1500 m. The pipeline connects the reservoir to the discharge into a distribution tank at an elevation of 1440 m. The pipeline length is 2.5 km. The pipeline is made with PVC pipe with an SDR (standard dimension ratio) of 26.

The pipeline inlet at the dam is a square edge with a minor loss coefficient (${K_e}$) of 0.5. The discharge at the top of the distribution tank results in a loss of all of the kinetic energy and thus the exit minor loss coefficient is 1. See the minor loss equation below.

${h_e} = {K_e}\frac{{{V^2}}}{{2g}}$

The water temperature ranges from 10 to 30 Celsius. The roughness of a PVC pipe is approximately 0.1 mm. Use the fluids functions to calculate the minimum inner pipe diameter to carry this flow from the dam to the distribution tank.
Report the following:

* critical design temperature
* kinematic viscosity (maximum viscosity will occur at the lowest temperature)
* the minimum inner pipe diameter (in mm)

Use complete sentences to report the results and use 2 significant digits (use the sig function).

```
SDR = 26
Q = 10 * u.L/u.s
delta_elevation = 1500 * u.m - 1440 * u.m
L_pipe = 2.5 * u.km

# The total minor loss coefficient is the sum of the entrance (0.5) and exit (1) coefficients.
K_minor = 1.5

# The maximum viscosity will occur at the lowest temperature.
T_crit = u.Quantity(10,u.degC)
nu = pc.viscosity_kinematic(T_crit)
e = 0.1 * u.mm
pipeline_ID_min = pc.diam_pipe(Q,delta_elevation,L_pipe,nu,e,K_minor)
print('The critical water temperature for this design is '+str(T_crit)+'.')
print('The kinematic viscosity of water is '+ut.sig(nu,2)+'.')
print('The minimum pipe inner diameter is '+ut.sig(pipeline_ID_min.to(u.mm),2)+'.')
```

### 2) Find the nominal diameter of a PVC pipe that is SDR 26.

SDR means standard dimension ratio. The thickness of the pipe wall is 1/SDR of the outside diameter. The pipedatabase file has a useful function that returns nominal diameter given SDR and inner diameter.

```
pipeline_ND = pipe.ND_SDR_available(pipeline_ID_min,SDR)
print('The nominal diameter of the pipeline is '+ut.sig(pipeline_ND,2)+' ('+ut.sig(pipeline_ND.to(u.mm),2)+').')
```

### 3) What is the actual inner diameter of this pipe in mm?

Compare this with the [reported inner diameter for SDR-26 pipe](http://www.cresline.com/pdf/cresline-northwest/pvcpressupipeline_Re/CNWPVC-26.pdf) to see if our pipe database is reporting the correct value.

```
pipeline_ID = pipe.ID_SDR(pipeline_ND,SDR)
cresline_ID = 4.154*u.inch
print('The inner diameter of the pipe is '+ut.sig(pipeline_ID.to(u.mm),3)+'.')
print('Cresline reports the inner diameter is '+ut.sig(cresline_ID.to(u.mm),3)+'.')
```

### 4) What is the maximum flow rate that can be carried by this pipe at the coldest design temperature?
Display the flow rate in L/s using the `.to` method.

```
pipeline_Q_max = pc.flow_pipe(pipeline_ID,delta_elevation,L_pipe,nu,e,K_minor)
print('The maximum flow rate at '+ut.sig(T_crit,2)+' is '+ut.sig(pipeline_Q_max.to(u.L/u.s),4)+'.')
```

### 5) What is the Reynolds number and friction factor for this maximum flow?

Assign these values to variable names so you can plot them later on the Moody diagram.

```
pipeline_Re = pc.re_pipe(pipeline_Q_max,pipeline_ID,nu)
fPipe = pc.fric(pipeline_Q_max,pipeline_ID,nu,e)
print('The Reynolds number and friction factor for the pipeline flow are '+ut.sig(pipeline_Re,2)+' and '+ut.sig(fPipe,2)+' respectively.')
```

### 6) Check to see if the fluids functions are internally consistent by calculating the head loss given the flow rate that you calculated and comparing that head loss with the elevation difference.

Display enough significant digits to see the difference in the two values. Note that the Moody diagram has an accuracy of about ±5% for smooth pipes and ±10% for rough pipes ([Moody, 1944](http://user.engineering.uiowa.edu/~me_160/lecture_notes/MoodyLFpaper1944.pdf)).

```
HLCheck = pc.headloss(pipeline_Q_max,pipeline_ID,L_pipe,nu,e,K_minor)
print('The head loss is '+ut.sig(HLCheck,3)+' and that is close to the elevation difference of '+ut.sig(delta_elevation,3)+'.')
```

### 7) How much more water (both volumetric and mass rate) will flow through the pipe at the maximum water temperature of 30 C?

Take into account both the change in viscosity (changes the flow rate) and the change in density (changes the mass rate). Report the flow rates in L/s.
```
Tmax = u.Quantity(30,u.degC)
nuhot = pc.viscosity_kinematic(Tmax)
pipeline_Q_maxhot = pc.flow_pipe(pipeline_ID,delta_elevation,L_pipe,nuhot,e,K_minor)
QDelta = pipeline_Q_maxhot-pipeline_Q_max
MassFlowDelta = (pipeline_Q_maxhot*DensityWater(Tmax)-pipeline_Q_max*DensityWater(T_crit)).to_base_units()
print('The increase in flow rate at '+ut.sig(Tmax,2)+' is '+ut.sig(QDelta.to(u.L/u.s),2)+'.')
print('The increase in mass rate at '+ut.sig(Tmax,2)+' is '+ut.sig(MassFlowDelta,2)+'.')
```

### 8) Why is the flow increase due to this temperature change so small given that viscosity actually changed significantly (see the calculation below)?

```
print('The viscosity ratio for the two temperatures was '+ut.sig(pc.viscosity_kinematic(Tmax)/pc.viscosity_kinematic(T_crit),2)+'.')
```

The flow is turbulent and thus viscosity has little influence on the flow rate.

### 9) Suppose an AguaClara plant is designed to be built up the hill from the distribution tank.

The transmission line will need to be lengthened by 30 m and the elevation of the inlet to the entrance tank will be 1450 m. The rerouting will also require the addition of 3 elbows with a minor loss coefficient of 0.3 each. What is the new maximum flow from the water source?

```
delta_elevationnew = 1500*u.m - 1450*u.m
L_pipenew = 2.5*u.km + 30*u.m
Knew = 1.5+3*0.3
pipeline_Q_maxnew = pc.flow_pipe(pipeline_ID,delta_elevationnew,L_pipenew,nu,e,Knew)
print('The new maximum flow rate at '+ut.sig(T_crit,2)+' is '+ut.sig(pipeline_Q_maxnew.to(u.L/u.s),4)+'.')
```

### 10) How much less water will flow through the transmission line after the line is rerouted?

```
print('The reduction in flow is '+ut.sig((pipeline_Q_max-pipeline_Q_maxnew).to(u.L/u.s),2)+'.')
```

<div class="alert alert-block alert-danger">
We noticed that many of you are having some difficulty with naming conventions and syntax. Please refer to the GitHub [Standards Page](https://github.com/AguaClara/aide_design/wiki/Standards) for naming standards.
Additionally, here is a GitHub [Variable Naming Guide](https://github.com/AguaClara/aide_design/wiki/Variable-Naming) that will be useful for creating variable names.
</div>

### 11) There exists a function within the physchem file called `pc.fric(FlowRate, Diam, Nu, PipeRough)` that returns the friction factor for both laminar and turbulent flow.

In this problem, you will be creating a new function which you shall call `fofRe()` that takes the Reynolds number and the dimensionless pipe roughness (ε/D) as inputs. Recall that the format for defining a function is:

```
def fofRe(input1, input2):
    f = buncha stuff
    return f
```

Since the equation for calculating the friction factor is different for laminar and turbulent flow (with the transition Reynolds number being defined within the physchem file), you will need to use an `if, else` statement for the two conditions. The two friction factor equations are given in the **Assorted Fluids Functions** table.

### 12) Create a beautiful Moody diagram. Include axes labels and show a legend that clearly describes each plot. The result should look like the picture of the graph below.

![](Moody.png)

### 12a) You will be creating a Moody diagram showing Reynolds number vs friction factor for multiple dimensionless pipe roughnesses.

The first step to do this is to define the number of dimensionless pipe roughnesses you want to plot. We will plot 8 curves for the following values: 0, 0.0001, 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1. We will plot an additional curve, which will be a straight line, for laminar flow, since it is not dependent on the pipe roughness value (see the Moody diagram above).

* Create an array for the dimensionless pipe roughness values, using `np.array([])`.
* Specify the amount of data points you want to plot for each curve. We will be using 50 points.

Because the Moody diagram is a log-log plot, we need to ensure that all 50 points on the diagram we are creating are equally spaced in log-space.
Use the `np.logspace(input1, input2, input3)` function to create an array for turbulent Reynolds numbers and an array for laminar Reynolds numbers.

* `input1` is the exponent for the lower bound of the range. For example, if you want your lower bound to be 1000, your input should be `math.log10(1000)`, which is equal to 3.
* `input2` is the exponent for the upper bound of the range. Format this input as you have formatted `input1`.
* `input3` is the number of data points you are using for each curve.

**12a) Deliverables**

* Array of dimensionless pipe roughnesses. Call this array `eGraph`.
* Variable defining the amount of points on each pipe roughness curve.
* Two arrays created using `np.logspace`, one for turbulent and one for laminar Reynolds numbers, which will be the x-axis values for the Moody diagram.

Note: The bounds for the laminar Reynolds numbers array should span between 670 and the predefined transition number used in Problem 11. The bounds for the turbulent Reynolds numbers array should span between 3,500 and 100,000,000. These ranges are chosen to make the curves fit well within the graph and to intentionally omit data in the transition range between laminar and turbulent flows.

### 12b) Now you will create the y-axis values for turbulent flow (based on dimensionless pipe roughness) and laminar flow (not based on dimensionless pipe roughness).

To do this, you will use the `fofRe()` function you wrote in Problem 11 to find the friction factors. Begin by creating an empty 2-dimensional array that will be populated by the turbulent-flow friction factors for each dimensionless pipe roughness. Use `np.zeros((number of rows, number of columns))`. The number of rows should be the number of dimensionless pipe roughness values (`len(eGraph)`), while the number of columns should be the number of data points per curve as defined above.
Populating this array with friction factor values will require two `for` loops, one to iterate through rows and one to iterate through columns. Recall that `for` loop syntax is as follows:

```
example = np.zeros((40, 30))
for i in range(0, 40):
    for j in range(0, 30):
        example[i,j] = function(buncha[i], stuff[j])
```

where `buncha` and `stuff` are arrays.

You will repeat this process to find the friction factors for laminar flow. The only difference between the turbulent and laminar friction factor arrays will be that the laminar array will only have one dimension, since it is not affected by the dimensionless pipe roughness. Start by creating an empty 1-dimensional array and then use a single `for` loop.

**12b) Deliverables**

* One 2-D array containing friction factor values for each dimensionless pipe roughness for turbulent flow.
* One 1-D array containing friction factor values for laminar flow.

### 12c) Now, we are ready to start making the Moody diagram!!!!!1!!!

The plot formatting is included for you in the cell below. You will add to this cell the code that will actually plot the arrays you brought into existence in 12a) and 12b) with a legend. For the sake of your own sanity, please only add code where specified.

* First, plot your arrays. See the plots in the tutorial above for the syntax. Recall that each dimensionless pipe roughness is a separate row within the 2-D array you created. To plot these roughnesses as separate curves, use a `for` loop to iterate through the rows of your array. To plot all columns in a particular row, use the `[1,:]` call on an array, where 1 is the row you are calling.
* Plotting the laminar flow curve does not require a `for` loop because it is a 1-D array.
* Use a linewidth of 4 for all curves.
* Now plot the data point you calculated in DC Python Tutorial 1, conveniently located a few problems above this one. Use the Reynolds number and friction factor obtained in Problem 5.
Because this is a single point, it should be plotted as a circle rather than a line, since a single point cannot form a line.

* You will need to make a legend for the graph using `leg = plt.legend(stringarray, loc = 'best')`.
* The first input, `stringarray`, must be an array composed of strings instead of numbers. The array you created which contains the dimensionless pipe roughness values (`eGraph`) can be converted into a string array for your legend (`eGraph.astype('str')`). You will need to add 'Laminar' and 'Pipeline' as strings to the new `eGraph` string array. Perhaps you will find `np.append(basestring, [('string1','string2')])` to be useful ;)

```
# Set the size of the figure to make it big!
plt.figure('ax',(10,8))
#--------------------------------------------------------------------------------------
#---------------------WRITE CODE BELOW-------------------------------------------------
#--------------------------------------------------------------------------------------

#--------------------------------------------------------------------------------------
#---------------------WRITE CODE ABOVE-------------------------------------------------
#--------------------------------------------------------------------------------------
# LOOK AT ALL THIS COOL CODE!
plt.yscale('log')
plt.xscale('log')
plt.grid(b=True, which='major', color='k', linestyle='-', linewidth=0.5)
# Set the grayscale of the minor gridlines. Note that 1 is white and 0 is black.
plt.grid(b=True, which='minor', color='0.5', linestyle='-', linewidth=0.5)
# The next 2 lines of code are used to set the transparency of the legend to 1.
# The default legend setting was transparent and was cluttered.
plt.xlabel('Reynolds number', fontsize=30)
plt.ylabel('Friction factor', fontsize=30)
plt.show()
```

### 13) Researchers in the AguaClara laboratory collected the following head loss data through a 1/8" diameter tube that was 2 m long using water at 22°C.
The data is in a comma separated data (.csv) file named ['Head_loss_vs_Flow_dosing_tube_data.csv'](https://github.com/AguaClara/CEE4540_DC/blob/master/Head_loss_vs_Flow_dosing_tube_data.csv). Use the pandas read csv function (`pd.read_csv('filename.csv')`) to read the data file. Display the data so you can see how it is formatted.

### 14) Using the data table from Problem 13, assign the head loss **and flow rate** data to separate 1-D arrays. Attach the correct units.

`np.array` can extract the data by simply inputting the text string of the column header. Here is example code to create the first array:

```
HL_data = np.array(head_loss_data['Head loss (m)'])*u.m
```

In the example, `head_loss_data` is the variable name to which the csv file was assigned.

### 15) Calculate and report the maximum and minimum Reynolds number for this data set.

Use the tube and temperature parameters specified in Problem 13. Use the `min` and `max` functions, which take arrays as their inputs.

### 16) You will now create a graph of headloss vs flow for the tube mentioned in the previous problems.

This graph will have two sets of data: the real data contained within the csv file and some theoretical data. The theoretical data is what we would expect the headloss through the tube to be in an ideal world for any given flow. When calculating the theoretical headloss, assume that minor losses are negligible. Plot the data from the csv file as individual data points and the theoretical headloss as a continuous curve. Make the y-axis have units of cm and the x-axis have units of mL/s.

A few hints:

* To find the theoretical headloss, you will first need to create an array of different flow values. While you could use the values in the csv file that you extracted in Problem 14, we would instead like you to create an array of 50 equally-spaced flow values. These values shall be between the minimum and maximum flows in the csv file.
* You can use the `np.linspace(input1, input2, input3)` function to create this set of equally-spaced flows. Inputs for `np.linspace` are the same as they were for `np.logspace`, which was used in Problem 12a). Linspace does not work with units; you will need to remove the units (using `.magnitude`) from the inputs to `np.linspace` and then reattach the correct units of flow after creating the array.
* The `pc.headloss_fric` function can handle arrays as inputs, so that makes it easy to produce the theoretical headloss array once you have finished your equally-spaced flow array.
* When using `plt.plot`, make sure to convert the flow and headloss data to the desired units.

The theoretical model doesn't fit the data very well. We assumed that major losses dominated, but that assumption was wrong. So let's try a more sophisticated approach where we fit minor losses to the data. Below we demonstrate the use of the [scipy curve_fit method](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html#scipy.optimize.curve_fit) to fit the minor loss coefficient given this data set. In this example, `Q_data` is the flow rate array for the csv file from Problem 13. You should re-name this variable below to whatever you titled this variable.

```
from scipy.optimize import curve_fit

# Define a new function that calculates head loss given the flow rate
# and the parameter that we want to use curve fitting to estimate.
# Define the other known values inside the function because we won't be
# passing those parameters to the function.
def HL_curvefit(FlowRate, KMinor):
    # The tubing is smooth AND pipe roughness isn't significant for laminar flow.
    PipeRough = 0*u.mm
    L_tube = 2*u.m
    T_data = u.Quantity(22,u.degC)
    nu_data = pc.viscosity_kinematic(T_data)
    D_tube = 1/8*u.inch
    # Pass all of the parameters to the head loss function and then strip the units so
    # the curve fitting function can handle the data.
    return (pc.headloss(FlowRate, D_tube, L_tube, nu_data, PipeRough, KMinor)).magnitude

# The curve fit function will need bounds on the unknown parameters to find a real solution.
# The bounds for K minor are 0 and 20.
# The curve fit function returns a list that includes the optimal parameters and the covariance.
popt, pcov = curve_fit(HL_curvefit, Q_data, HL_data, bounds=[[0.],[20]])
K_minor_fit = popt[0]

# Plot the raw data
plt.plot(Q_data.to(u.mL/u.s), HL_data.to(u.cm), 'o', label='data')
# Plot the curve fit equation.
plt.plot(Q_data.to(u.mL/u.s), ((HL_curvefit(Q_data, *popt))*u.m).to(u.cm), 'r-', label='fit')
plt.xlabel('Flow rate (mL/s)')
plt.ylabel('Head loss (cm)')
plt.legend()
plt.show()

# Calculate the root mean square error to estimate the goodness of fit of the model to the data
RMSE_Kminor = (np.sqrt(np.var(np.subtract((HL_curvefit(Q_data, *popt)),HL_data.magnitude)))*u.m).to(u.cm)
print('The root mean square error for the model fit when adjusting the minor loss coefficient was '+ut.sig(RMSE_Kminor,2))
```

### 17) Repeat the analysis from the previous cell, but this time assume that the minor loss coefficient is zero and that diameter is the unknown parameter. The bounds specified in the line beginning with `popt, pcov` should be changed from the previous question (which had bounds from 0 to 20) to the new bounds of 0.001 to 0.01. Hint: Don't think too much about this; you only need to change the name of the defined function (perhaps "`HL_curvefit2`"?) and adjust its inputs/values. Please make use of the fantastically useful copy-paste functionality.

### 18) Changes to which of the two parameters, minor loss coefficient or tube diameter, results in a better fit to the data?

### 19) What did you find most difficult about learning to use Python? Create a brief example as an extension to this tutorial to help students learn the topic that you found most difficult.
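As a worked reference for the Reynolds-number check asked for in Problem 15, here is a minimal sketch using plain SI floats instead of `pint` units. The kinematic viscosity value for 22°C and the flow-rate array are illustrative assumptions, not the graded data set.

```python
import numpy as np

# Assumed tube and water properties (illustrative values only)
D_tube = (1 / 8) * 0.0254   # 1/8 inch tube diameter, converted to meters
nu = 9.57e-7                # approx. kinematic viscosity of water at 22 degC, m^2/s

# Hypothetical flow rates in m^3/s, standing in for the csv column
Q_data = np.array([2e-7, 5e-7, 1e-6])

# For a circular tube, Re = 4*Q / (pi * D * nu)
Re = 4 * Q_data / (np.pi * D_tube * nu)

print('Minimum Reynolds number:', Re.min())
print('Maximum Reynolds number:', Re.max())
```

Both values land well below the laminar-turbulent transition (about Re = 2100 for pipe flow), which is consistent with the laminar assumption used in the curve-fit function above.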
## Final Pointer It is good practice to select Restart & Run All from the Kernel menu after completing an assignment to make sure that everything in your notebook works correctly and that you haven't deleted an essential line of code!
# Convolutional Neural Network

This notebook was created by Camille-Amaury JUGE, in order to better understand CNN principles and how they work.

(it follows the exercises proposed by Hadelin de Ponteves on Udemy : https://www.udemy.com/course/le-deep-learning-de-a-a-z/)

## Imports

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# scikit
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

# keras
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.layers import Convolution2D, MaxPooling2D, Flatten
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing import image
```

## Model

### Basic Convolutional Network

Since our images are in RGB mode, the input array needs a third dimension (one channel per color). We will start with a small but efficient number of feature maps, 32 (this should be doubled each time another convolution layer is stacked). The feature detector will be 3*3 pixels. We always use the relu activation in order to break the linearity of the feature maps, which improves the quality of the features. We then apply max pooling in order to keep the important information while reducing the size of the inputs (even if we lose some information); a 2*2 pooling matrix reduces the feature size by a factor of 4. To conclude, the flattening layer flattens the features so they can be used by a classic fully-connected network.
```
def create_convolutional_layer(clf):
    # convolution layer
    clf.add(Convolution2D(filters=32, kernel_size=(3,3), strides=(1,1),
                          input_shape=(64,64,3), activation="relu"))
    # Pooling (here max)
    clf.add(MaxPooling2D(pool_size=(2,2)))
    # Flattening
    clf.add(Flatten())
    return clf

def hidden_layer(clf):
    clf.add(Dense(units=128, activation="relu"))
    clf.add(Dense(units=1, activation="sigmoid"))
    return clf

clf = Sequential()
clf = create_convolutional_layer(clf)
clf = hidden_layer(clf)
clf.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

### Image Creation

We are going to increase the number of images by using a keras method which applies filters and rotations to the existing images, in order to avoid overfitting and prepare for many different factors. Folders need to be well organized before using those functions.

```
_batch_size = 32
_training_size = 8000
_test_size = 200
_image_size = (64,64)

# change/create the train dataset images
train_datagen = ImageDataGenerator(
    # rescale the values of each pixel between 0 and 1
    rescale=1./255,
    # transvection (rotating in 3D but still seeing in 2D)
    shear_range=0.2,
    # zoom
    zoom_range=0.2,
    # flip the image on a horizontal plane
    horizontal_flip=True)
# same for the test set
test_datagen = ImageDataGenerator(rescale=1./255)

# generate the new images for the train dataset
train_generator = train_datagen.flow_from_directory(
    'training_set',
    target_size=_image_size,
    batch_size=_batch_size,
    class_mode='binary')
# same for the test dataset
test_generator = test_datagen.flow_from_directory(
    'test_set',
    target_size=_image_size,
    batch_size=_batch_size,
    class_mode='binary')

# do the job
clf.fit(train_generator,
        steps_per_epoch=int(_training_size/_batch_size),
        epochs=25,
        validation_data=test_generator,
        validation_steps=int(_test_size/_batch_size))
```

As we can see, the network performs moderately well on image recognition: 0.86 accuracy on the training set and 0.78 on the test set.
Nevertheless, it seems that there is overfitting, since the 0.08 gap between the two is quite large: our neural network is not able to generalize well.

### Improving the model

We will add some convolution layers.

```
def create_convolutional_layer(clf):
    # convolution layer
    clf.add(Convolution2D(filters=64, kernel_size=(3,3), strides=(1,1),
                          input_shape=(128,128,3), activation="relu"))
    # Pooling (here max)
    clf.add(MaxPooling2D(pool_size=(2,2)))
    clf.add(Dropout(0.1))

    clf.add(Convolution2D(filters=32, kernel_size=(3,3), strides=(1,1), activation="relu"))
    clf.add(MaxPooling2D(pool_size=(2,2)))
    clf.add(Dropout(0.1))

    # Flattening
    clf.add(Flatten())
    return clf

def hidden_layer(clf):
    clf.add(Dense(units=128, activation="relu"))
    clf.add(Dense(units=64, activation="relu"))
    clf.add(Dense(units=1, activation="sigmoid"))
    return clf

clf = Sequential()
clf = create_convolutional_layer(clf)
clf = hidden_layer(clf)
clf.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

_batch_size = 32
_training_size = 8000
_test_size = 200
_image_size = (128,128)

# change/create the train dataset images
train_datagen = ImageDataGenerator(
    # rescale the values of each pixel between 0 and 1
    rescale=1./255,
    # transvection (rotating in 3D but still seeing in 2D)
    shear_range=0.2,
    # zoom
    zoom_range=0.2,
    # flip the image on a horizontal plane
    horizontal_flip=True)
# same for the test set
test_datagen = ImageDataGenerator(rescale=1./255)

# generate the new images for the train dataset
train_generator = train_datagen.flow_from_directory(
    'training_set',
    target_size=_image_size,
    batch_size=_batch_size,
    class_mode='binary')
# same for the test dataset
test_generator = test_datagen.flow_from_directory(
    'test_set',
    target_size=_image_size,
    batch_size=_batch_size,
    class_mode='binary')

# do the job
clf.fit(train_generator,
        steps_per_epoch=int(_training_size/_batch_size),
        epochs=25,
        validation_data=test_generator,
        validation_steps=int(_test_size/_batch_size))
```

## Predict One image

```
test_image1_128 = image.load_img("single_prediction\\cat_or_dog_1.jpg", target_size=(128,128))
test_image2_128 = image.load_img("single_prediction\\cat_or_dog_2.jpg", target_size=(128,128))

test_image1_128 = image.img_to_array(test_image1_128)
test_image2_128 = image.img_to_array(test_image2_128)

test_image1_128 = np.expand_dims(test_image1_128, axis=0)
test_image2_128 = np.expand_dims(test_image2_128, axis=0)

test_image1_128.shape

classes = {value : key for (key, value) in train_generator.class_indices.items()}
classes

y_pred = classes[int(clf.predict(test_image1_128)[0][0])]
y = classes[1]
print("Predicted {} and is {}".format(y_pred, y))

y_pred = classes[int(clf.predict(test_image2_128)[0][0])]
y = classes[0]
print("Predicted {} and is {}".format(y_pred, y))
```
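One caveat in the prediction cell above: `int()` truncates the sigmoid output, so any probability below 1.0 maps to class 0. An explicit threshold makes the decision rule clearer. This is a hedged sketch, not part of the original notebook; the 0.5 threshold, the class mapping, and the probability values are illustrative assumptions.

```python
# Turn sigmoid probabilities into class labels with an explicit threshold
# instead of relying on int() truncation.
classes = {0: "cat", 1: "dog"}  # assumed mapping, mirroring train_generator.class_indices

def decide(probability, threshold=0.5):
    # probability is the raw sigmoid output in [0, 1]
    return classes[1] if probability >= threshold else classes[0]

print(decide(0.91))  # well above the threshold
print(decide(0.42))  # below the threshold
```

With this rule a prediction of 0.91 is labeled "dog" and 0.42 is labeled "cat", whereas `int()` would have labeled both as class 0.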
# Object Detection API Demo

<table align="left"><td> <a target="_blank" href="https://colab.sandbox.google.com/github/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab </a> </td><td> <a target="_blank" href="https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb"> <img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td></table>

Welcome to the [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection). This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image.

> **Important**: This tutorial is to help you through the first step towards using [Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection) to build models. If you just need an off-the-shelf model that does the job, see the [TFHub object detection example](https://colab.sandbox.google.com/github/tensorflow/hub/blob/master/examples/colab/object_detection.ipynb).

# Setup

Important: If you're running on a local machine, be sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md). This notebook includes only what's necessary to run in Colab.

### Install

```
!pip install -U --pre tensorflow=="2.*"
```

Make sure you have `pycocotools` installed

```
!pip install pycocotools
```

Get `tensorflow/models` or `cd` to parent directory of the repository.
```
import os
import pathlib

if "models" in pathlib.Path.cwd().parts:
  while "models" in pathlib.Path.cwd().parts:
    os.chdir('..')
elif not pathlib.Path('models').exists():
  !git clone --depth 1 https://github.com/tensorflow/models
```

Compile protobufs and install the object_detection package

```
%%bash
cd models/research/
protoc object_detection/protos/*.proto --python_out=.
```

```
%%bash
cd models/research
pip install .
```

### Imports

```
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile

from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from IPython.display import display
```

Import the object detection module.

```
from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
```

Patches:

```
# patch tf1 into `utils.ops`
utils_ops.tf = tf.compat.v1

# Patch the location of gfile
tf.gfile = tf.io.gfile
```

# Model preparation

## Variables

Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing the path.

By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies.
## Loader

```
def load_model(model_name):
  base_url = 'http://download.tensorflow.org/models/object_detection/'
  model_file = model_name + '.tar.gz'
  model_dir = tf.keras.utils.get_file(
    fname=model_name,
    origin=base_url + model_file,
    untar=True)

  model_dir = pathlib.Path(model_dir)/"saved_model"

  model = tf.saved_model.load(str(model_dir))
  model = model.signatures['serving_default']

  return model
```

## Loading label map

Label maps map indices to category names, so that when our convolution network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine

```
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = 'models/research/object_detection/data/mscoco_label_map.pbtxt'
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
```

For the sake of simplicity we will test on 2 images:

```
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = pathlib.Path('models/research/object_detection/test_images')
TEST_IMAGE_PATHS = sorted(list(PATH_TO_TEST_IMAGES_DIR.glob("*.jpg")))
TEST_IMAGE_PATHS
```

# Detection

Load an object detection model:

```
model_name = 'ssd_mobilenet_v1_coco_2017_11_17'
detection_model = load_model(model_name)
```

Check the model's input signature, it expects a batch of 3-color images of type uint8:

```
print(detection_model.inputs)
```

And returns several outputs:

```
detection_model.output_dtypes
detection_model.output_shapes
```

Add a wrapper function to call the model, and clean up the outputs:

```
def run_inference_for_single_image(model, image):
  image = np.asarray(image)
  # The input needs to be a tensor, convert it using `tf.convert_to_tensor`.
  input_tensor = tf.convert_to_tensor(image)
  # The model expects a batch of images, so add an axis with `tf.newaxis`.
  input_tensor = input_tensor[tf.newaxis,...]

  # Run inference
  output_dict = model(input_tensor)

  # All outputs are batched tensors.
  # Convert to numpy arrays, and take index [0] to remove the batch dimension.
  # We're only interested in the first num_detections.
  num_detections = int(output_dict.pop('num_detections'))
  output_dict = {key:value[0, :num_detections].numpy()
                 for key,value in output_dict.items()}
  output_dict['num_detections'] = num_detections

  # detection_classes should be ints.
  output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64)

  # Handle models with masks:
  if 'detection_masks' in output_dict:
    # Reframe the bbox mask to the image size.
    detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
        output_dict['detection_masks'], output_dict['detection_boxes'],
        image.shape[0], image.shape[1])
    detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5, tf.uint8)
    output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy()

  return output_dict
```

Run it on each test image and show the results:

```
def show_inference(model, image_path):
  # the array based representation of the image will be used later in order to prepare the
  # result image with boxes and labels on it.
  image_np = np.array(Image.open(image_path))
  # Actual detection.
  output_dict = run_inference_for_single_image(model, image_np)
  # Visualization of the results of a detection.
  vis_util.visualize_boxes_and_labels_on_image_array(
      image_np,
      output_dict['detection_boxes'],
      output_dict['detection_classes'],
      output_dict['detection_scores'],
      category_index,
      instance_masks=output_dict.get('detection_masks_reframed', None),
      use_normalized_coordinates=True,
      line_thickness=8)

  display(Image.fromarray(image_np))

for image_path in TEST_IMAGE_PATHS:
  show_inference(detection_model, image_path)
```

## Instance Segmentation

```
model_name = "mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28"
masking_model = load_model(model_name)
```

The instance segmentation model includes a `detection_masks` output:

```
masking_model.output_shapes

for image_path in TEST_IMAGE_PATHS:
  show_inference(masking_model, image_path)
```
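The wrapper's `output_dict['detection_scores']` array also includes many low-confidence boxes, and downstream code often keeps only detections above a score threshold. Here is a minimal sketch of that filtering step with plain numpy; the 0.5 threshold and the toy arrays are assumptions for illustration, not part of the tutorial.

```python
import numpy as np

def filter_detections(output_dict, min_score=0.5):
    # Keep only the detections whose score clears the threshold.
    keep = output_dict['detection_scores'] >= min_score
    return {
        'detection_boxes': output_dict['detection_boxes'][keep],
        'detection_classes': output_dict['detection_classes'][keep],
        'detection_scores': output_dict['detection_scores'][keep],
    }

# Toy output_dict mimicking the wrapper's structure (two detections, one weak)
toy = {
    'detection_boxes': np.array([[0.1, 0.1, 0.5, 0.5], [0.2, 0.2, 0.9, 0.9]]),
    'detection_classes': np.array([1, 18]),
    'detection_scores': np.array([0.93, 0.30]),
}
print(filter_detections(toy)['detection_classes'])
```

Because all three arrays share the same leading dimension, a single boolean mask keeps the boxes, classes, and scores aligned.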
# Name

Data preparation by using a template to submit a job to Cloud Dataflow

# Labels

GCP, Cloud Dataflow, Kubeflow, Pipeline

# Summary

A Kubeflow Pipeline component to prepare data by using a template to submit a job to Cloud Dataflow.

# Details

## Intended use

Use this component when you have a pre-built Cloud Dataflow template and want to launch it as a step in a Kubeflow Pipeline.

## Runtime arguments

Argument | Description | Optional | Data type | Accepted values | Default |
:--- | :---------- | :----------| :----------| :---------- | :----------|
project_id | The ID of the Google Cloud Platform (GCP) project to which the job belongs. | No | GCPProjectID | | |
gcs_path | The path to a Cloud Storage bucket containing the job creation template. It must be a valid Cloud Storage URL beginning with 'gs://'. | No | GCSPath | | |
launch_parameters | The parameters that are required to launch the template. The schema is defined in [LaunchTemplateParameters](https://cloud.google.com/dataflow/docs/reference/rest/v1b3/LaunchTemplateParameters). The parameter `jobName` is replaced by a generated name. | Yes | Dict | A JSON object which has the same structure as [LaunchTemplateParameters](https://cloud.google.com/dataflow/docs/reference/rest/v1b3/LaunchTemplateParameters) | None |
location | The regional endpoint to which the job request is directed. | Yes | GCPRegion | | None |
staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information. This is done so that you can resume the job in case of failure. | Yes | GCSPath | | None |
validate_only | If True, the request is validated but not executed. | Yes | Boolean | | False |
wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |

## Input data schema

The input `gcs_path` must contain a valid Cloud Dataflow template.
The template can be created by following the instructions in [Creating Templates](https://cloud.google.com/dataflow/docs/guides/templates/creating-templates). You can also use [Google-provided templates](https://cloud.google.com/dataflow/docs/guides/templates/provided-templates).

## Output

Name | Description
:--- | :----------
job_id | The id of the Cloud Dataflow job that is created.

## Caution & requirements

To use the component, the following requirements must be met:

- Cloud Dataflow API is enabled.
- The component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.
- The Kubeflow user service account is a member of:
    - `roles/dataflow.developer` role of the project.
    - `roles/storage.objectViewer` role of the Cloud Storage Object `gcs_path`.
    - `roles/storage.objectCreator` role of the Cloud Storage Object `staging_dir`.

## Detailed description

You can execute the template locally by following the instructions in [Executing Templates](https://cloud.google.com/dataflow/docs/guides/templates/executing-templates). See the sample code below to learn how to execute the template.

Follow these steps to use the component in a pipeline:

1. Install the Kubeflow Pipeline SDK:

```
%%capture --no-stderr

KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'
!pip3 install $KFP_PACKAGE --upgrade
```

2. Load the component using KFP SDK

```
import kfp.components as comp

dataflow_template_op = comp.load_component_from_url(
    'https://raw.githubusercontent.com/kubeflow/pipelines/1.0.0/components/gcp/dataflow/launch_template/component.yaml')
help(dataflow_template_op)
```

### Sample

Note: The following sample code works in an IPython notebook or directly in Python code.

In this sample, we run a Google-provided word count template from `gs://dataflow-templates/latest/Word_Count`.
The template takes a text file as input and outputs word counts to a Cloud Storage bucket. Here is the sample input:

```
!gsutil cat gs://dataflow-samples/shakespeare/kinglear.txt
```

#### Set sample parameters

```
# Required Parameters
PROJECT_ID = '<Please put your project ID here>'
GCS_WORKING_DIR = 'gs://<Please put your GCS path here>' # No ending slash

# Optional Parameters
EXPERIMENT_NAME = 'Dataflow - Launch Template'
OUTPUT_PATH = '{}/out/wc'.format(GCS_WORKING_DIR)
```

#### Example pipeline that uses the component

```
import kfp.dsl as dsl
import json

@dsl.pipeline(
    name='Dataflow launch template pipeline',
    description='Dataflow launch template pipeline'
)
def pipeline(
    project_id = PROJECT_ID,
    gcs_path = 'gs://dataflow-templates/latest/Word_Count',
    launch_parameters = json.dumps({
        'parameters': {
            'inputFile': 'gs://dataflow-samples/shakespeare/kinglear.txt',
            'output': OUTPUT_PATH
        }
    }),
    location = '',
    validate_only = 'False',
    staging_dir = GCS_WORKING_DIR,
    wait_interval = 30):
    dataflow_template_op(
        project_id = project_id,
        gcs_path = gcs_path,
        launch_parameters = launch_parameters,
        location = location,
        validate_only = validate_only,
        staging_dir = staging_dir,
        wait_interval = wait_interval)
```

#### Compile the pipeline

```
pipeline_func = pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
```

#### Submit the pipeline for execution

```
#Specify pipeline argument values
arguments = {}

#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)

#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
```

#### Inspect the output

```
!gsutil cat $OUTPUT_PATH*
```

## References

* [Component python code](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/component_sdk/python/kfp_component/google/dataflow/_launch_template.py)
* [Component docker file](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/Dockerfile)
* [Sample notebook](https://github.com/kubeflow/pipelines/blob/master/components/gcp/dataflow/launch_template/sample.ipynb)
* [Cloud Dataflow Templates overview](https://cloud.google.com/dataflow/docs/guides/templates/overview)

## License

By deploying or using this software you agree to comply with the [AI Hub Terms of Service](https://aihub.cloud.google.com/u/0/aihub-tos) and the [Google APIs Terms of Service](https://developers.google.com/terms/). To the extent of a direct conflict of terms, the AI Hub Terms of Service will control.
### Problem 1

__We will use a full day worth of tweets as an input (there are a total of 4.4M tweets in this file, but you only need to read 1M):__ http://rasinsrv07.cstcis.cti.depaul.edu/CSC455/OneDayOfTweets.txt

__a. Create a 3rd table incorporating the Geo table (in addition to tweet and user tables that you already have from HW4 and HW5) and extend your schema accordingly. You do not need to use ALTER TABLE, it is sufficient to just re-make your schema.__

__You will need to generate an ID for the Geo table primary key (you may use any value or reasonable combination of values as long as it is unique) for that table and link it to the Tweet table (foreign key should be in the Tweet). In addition to the primary key column, the geo table should have at least the “type”, “longitude” and “latitude” columns.__

```
#imports
import urllib.request, time, json, sqlite3

#setup
conn = sqlite3.connect('Tweets_Database_THF1.db') #db connection
c = conn.cursor()
wFD = urllib.request.urlopen('http://rasinsrv07.cstcis.cti.depaul.edu/CSC455/OneDayOfTweets.txt') #get the file

#create User Table
create_UserTable = '''CREATE TABLE User (
    ID INTEGER,
    NAME TEXT,
    SCREEN_NAME TEXT,
    DESCRIPTION TEXT,
    FRIENDS_COUNT INTEGER,
    CONSTRAINT User_pk PRIMARY KEY(ID)
);'''
c.execute('DROP TABLE IF EXISTS User')
c.execute(create_UserTable)

#create Tweets Table
create_TweetsTable = '''CREATE TABLE Tweets (
    ID INTEGER,
    Created_At DATE,
    Text TEXT,
    Source TEXT,
    In_Reply_to_User_ID INTEGER,
    In_Reply_to_Screen_Name TEXT,
    In_Reply_to_Status_ID INTEGER,
    Retweet_Count INTEGER,
    Contributors TEXT,
    User_ID INTEGER,
    Geo_ID TEXT,
    CONSTRAINT Tweet_pk PRIMARY KEY(ID),
    CONSTRAINT tweet_fk1 FOREIGN KEY (User_ID) REFERENCES User(ID),
    CONSTRAINT tweet_fk2 FOREIGN KEY (Geo_ID) REFERENCES Geo(ID)
);'''
c.execute('DROP TABLE IF EXISTS Tweets')
c.execute(create_TweetsTable)

#create Geo Table
create_GeoTable = '''CREATE TABLE Geo (
    ID TEXT,
    Type TEXT,
    Latitude INTEGER,
    Longitude INTEGER,
    CONSTRAINT Geo_pk PRIMARY KEY(ID)
);'''
c.execute('DROP TABLE IF EXISTS Geo')
c.execute(create_GeoTable)
conn.commit()
```

__b. Use python to download from the web and save to a local text file (not into database yet, just to text file) at least 1,000,000 lines worth of tweets. Test your code with fewer rows first and only time it when you know it works. Report how long it took.__

__NOTE: Do not call read() or readlines() without any parameters at any point. That command will attempt to read the entire file which is too much data.__

```
#open files
start = time.time()
db_file = open('THF_db.txt', 'w', encoding='utf-8')
db_err_file = open('THF_db_errors.txt', 'w', encoding='utf-8')

for i in range(1000000): #for lines 1 through 1,000,000
    line = wFD.readline()
    try:
        db_file.write(line.decode()) #write to the database txt file
    except ValueError:
        db_err_file.write(str(line) + '\n') #log lines that could not be decoded

#close files
db_file.close()
db_err_file.close()

end = time.time()
print("Part b file writing took ", (end-start), ' seconds.')
```

__c. Repeat what you did in part-b, but instead of saving tweets to the file, populate the 3-table schema that you created in SQLite.
Be sure to execute commit and verify that the data has been successfully loaded (report loaded row counts for each of the 3 tables).__

```
start = time.time()
fdErr = open('THF_error.txt', 'w', errors = 'replace')

# Re-open the URL stream so that we load the same 1,000,000 tweets from the beginning
wFD = urllib.request.urlopen('http://rasinsrv07.cstcis.cti.depaul.edu/CSC455/OneDayOfTweets.txt')

tweetBatch = []
userBatch = []
geoBatch = []

# There is a total of 4.4M tweets in the file, but we only read 1,000,000
for i in range(1000000):
    line = wFD.readline()
    try:
        tweetDict = json.loads(line) # This is the dictionary for tweet info

        #------------------------------------
        #Tweet Table
        newRowTweet = [] # hold individual values of the to-be-inserted row
        tweetKeys = ['id_str','created_at','text','source','in_reply_to_user_id',
                     'in_reply_to_screen_name', 'in_reply_to_status_id',
                     'retweet_count', 'contributors']
        for key in tweetKeys: # For each dictionary key we want
            if tweetDict[key] == 'null' or tweetDict[key] == '':
                newRowTweet.append(None) # proper NULL
            else:
                newRowTweet.append(tweetDict[key]) # use value as-is

        #Adds in user_id
        userDict = tweetDict['user'] # This is the dictionary for user information
        newRowTweet.append(userDict['id']) # User id / foreign key

        #Adds in geo_id
        if tweetDict['geo']:
            newRowTweet.append(str(tweetDict['geo']['coordinates'])) #geo_id is the latitude/longitude as a string
        else:
            newRowTweet.append(None) # Geo info is missing

        #batching: append first, then flush every 50 rows
        tweetBatch.append(newRowTweet)
        if len(tweetBatch) >= 50:
            c.executemany('INSERT OR IGNORE INTO Tweets VALUES(?,?,?,?,?,?,?,?,?,?,?)', tweetBatch)
            tweetBatch = [] # Reset the list of batched tweets

        #------------------------------------
        #User Table
        newRowUser = [] # hold individual values of the to-be-inserted row for the user table
        userKeys = ['id', 'name', 'screen_name', 'description', 'friends_count']
        for key in userKeys: # For each dictionary key we want
            if userDict[key] == 'null' or userDict[key] == '':
                newRowUser.append(None) # proper NULL
            else:
                newRowUser.append(userDict[key]) # use value as-is

        userBatch.append(newRowUser)
        if len(userBatch) >= 50:
            c.executemany('INSERT OR IGNORE INTO User VALUES(?,?,?,?,?)', userBatch)
            userBatch = [] # Reset the list of batched users

        #------------------------------------
        #Geo Table
        if not tweetDict['geo']: # no geo information for this tweet
            continue
        newRowGeo = [str(tweetDict['geo']['coordinates']), #id
                     tweetDict['geo']['type'],             #type
                     tweetDict['geo']['coordinates'][0],   #latitude
                     tweetDict['geo']['coordinates'][1]]   #longitude

        geoBatch.append(newRowGeo)
        if len(geoBatch) >= 50:
            c.executemany('INSERT OR IGNORE INTO Geo VALUES(?,?,?,?)', geoBatch)
            geoBatch = [] # Reset the list of batched geos

    except ValueError: # Handle the error of JSON parsing
        fdErr.write(line.decode() + '\n')

# Final batch (the remaining fewer-than-50 rows to be loaded)
c.executemany('INSERT OR IGNORE INTO Tweets VALUES(?,?,?,?,?,?,?,?,?,?,?)', tweetBatch)
c.executemany('INSERT OR IGNORE INTO User VALUES(?,?,?,?,?)', userBatch)
c.executemany('INSERT OR IGNORE INTO Geo VALUES(?,?,?,?)', geoBatch)
conn.commit()

print("Loaded ", c.execute('SELECT COUNT(*) FROM Tweets').fetchall()[0], " Tweet rows")
print("Loaded ", c.execute('SELECT COUNT(*) FROM User').fetchall()[0], " User rows")
print("Loaded ", c.execute('SELECT COUNT(*) FROM Geo').fetchall()[0], " Geo rows")

# Preview a couple of rows from each table while the connection is still open
print(c.execute('SELECT * FROM Geo LIMIT 2').fetchall())
print(c.execute('SELECT * FROM Tweets LIMIT 2').fetchall())
print(c.execute('SELECT * FROM User LIMIT 2').fetchall())

wFD.close()
fdErr.close()
c.close()
conn.close()

end = time.time()
print("Part c loading took ", (end-start), ' seconds.')
```

__How long did this step take?__

It took:

__d.
Use your locally saved tweet file (created in part-b) to repeat the database population step from part-c. That is, load 1,000,000 tweets into the 3-table database using your saved file with tweets (do not use the URL to read twitter data).__

```
start = time.time()

# Open the locally saved tweet file and an error log for unparseable lines
f = open("THF_db.txt", 'r', encoding='utf-8')
fdErr = open('THF_error.txt', 'w', errors='replace')

BATCH_SIZE = 50
tweetBatch = []
userBatch = []
geoBatch = []

# There is a total of 1,000,000 tweets in the file
for i in range(1000000):
    line = f.readline()
    try:
        tweetDict = json.loads(line)  # dictionary with the tweet info

        # ------------------------------------
        # Tweet table
        newRowTweet = []  # holds individual values of the to-be-inserted row
        tweetKeys = ['id_str', 'created_at', 'text', 'source',
                     'in_reply_to_user_id', 'in_reply_to_screen_name',
                     'in_reply_to_status_id', 'retweet_count', 'contributors']
        for key in tweetKeys:  # for each dictionary key we want
            if tweetDict[key] == 'null' or tweetDict[key] == '':
                newRowTweet.append(None)  # proper NULL
            else:
                newRowTweet.append(tweetDict[key])  # use value as-is

        # user_id foreign key
        userDict = tweetDict['user']  # dictionary with the user info
        newRowTweet.append(userDict['id'])

        # geo_id foreign key: the latitude/longitude pair as a string, or NULL
        if tweetDict['geo']:
            newRowTweet.append(str(tweetDict['geo']['coordinates']))
        else:
            newRowTweet.append(None)  # geo info is missing

        tweetBatch.append(newRowTweet)
        if len(tweetBatch) >= BATCH_SIZE:  # insert 50 rows at a time
            c.executemany('INSERT OR IGNORE INTO Tweets VALUES(?,?,?,?,?,?,?,?,?,?,?)', tweetBatch)
            tweetBatch = []  # reset the list of batched tweets

        # ------------------------------------
        # User table
        newRowUser = []  # holds individual values of the to-be-inserted User row
        userKeys = ['id', 'name', 'screen_name', 'description', 'friends_count']
        for key in userKeys:  # for each dictionary key we want
            if userDict[key] == 'null' or userDict[key] == '':
                newRowUser.append(None)  # proper NULL
            else:
                newRowUser.append(userDict[key])  # use value as-is

        userBatch.append(newRowUser)
        if len(userBatch) >= BATCH_SIZE:
            c.executemany('INSERT OR IGNORE INTO User VALUES(?,?,?,?,?)', userBatch)
            userBatch = []  # reset the list of batched users

        # ------------------------------------
        # Geo table (only when the tweet actually has geo info)
        if tweetDict['geo'] not in ('null', '', None):
            newRowGeo = [
                str(tweetDict['geo']['coordinates']),  # id
                tweetDict['geo']['type'],              # type
                tweetDict['geo']['coordinates'][0],    # latitude
                tweetDict['geo']['coordinates'][1],    # longitude
            ]
            geoBatch.append(newRowGeo)
            if len(geoBatch) >= BATCH_SIZE:
                # note: this must insert into Geo, not User
                c.executemany('INSERT OR IGNORE INTO Geo VALUES(?,?,?,?)', geoBatch)
                geoBatch = []  # reset the list of batched geos

    except ValueError:  # handle JSON parsing errors
        fdErr.write(line + '\n')

# Final batches (the remaining fewer-than-50 rows to be loaded)
c.executemany('INSERT OR IGNORE INTO Tweets VALUES(?,?,?,?,?,?,?,?,?,?,?)', tweetBatch)
c.executemany('INSERT OR IGNORE INTO User VALUES(?,?,?,?,?)', userBatch)
c.executemany('INSERT OR IGNORE INTO Geo VALUES(?,?,?,?)', geoBatch)

print("Loaded", c.execute('SELECT COUNT(*) FROM Tweets').fetchall()[0], "Tweet rows")
print("Loaded", c.execute('SELECT COUNT(*) FROM User').fetchall()[0], "User rows")
print("Loaded", c.execute('SELECT COUNT(*) FROM Geo').fetchall()[0], "Geo rows")

f.close()
fdErr.close()
conn.commit()
c.close()
conn.close()

end = time.time()
print("Part d loading took", (end - start), 'seconds.')
```

__How does the runtime compare with part-c?__

Compared to part-c it took:

__e.
Re-run the previous step with a batching size of 1000 (i.e. by inserting 1000 rows at a time with executemany).__

```
start = time.time()

# Open the locally saved tweet file and an error log for unparseable lines
f = open("THF_db.txt", 'r', encoding='utf-8')
fdErr = open('THF_error.txt', 'w', errors='replace')

BATCH_SIZE = 1000
tweetBatch = []
userBatch = []
geoBatch = []

# There is a total of 1,000,000 tweets in the file
for i in range(1000000):
    line = f.readline()
    try:
        tweetDict = json.loads(line)  # dictionary with the tweet info

        # ------------------------------------
        # Tweet table
        newRowTweet = []  # holds individual values of the to-be-inserted row
        tweetKeys = ['id_str', 'created_at', 'text', 'source',
                     'in_reply_to_user_id', 'in_reply_to_screen_name',
                     'in_reply_to_status_id', 'retweet_count', 'contributors']
        for key in tweetKeys:  # for each dictionary key we want
            if tweetDict[key] == 'null' or tweetDict[key] == '':
                newRowTweet.append(None)  # proper NULL
            else:
                newRowTweet.append(tweetDict[key])  # use value as-is

        # user_id foreign key
        userDict = tweetDict['user']  # dictionary with the user info
        newRowTweet.append(userDict['id'])

        # geo_id foreign key: the latitude/longitude pair as a string, or NULL
        if tweetDict['geo']:
            newRowTweet.append(str(tweetDict['geo']['coordinates']))
        else:
            newRowTweet.append(None)  # geo info is missing

        tweetBatch.append(newRowTweet)
        if len(tweetBatch) >= BATCH_SIZE:  # insert 1000 rows at a time
            c.executemany('INSERT OR IGNORE INTO Tweets VALUES(?,?,?,?,?,?,?,?,?,?,?)', tweetBatch)
            tweetBatch = []  # reset the list of batched tweets

        # ------------------------------------
        # User table
        newRowUser = []  # holds individual values of the to-be-inserted User row
        userKeys = ['id', 'name', 'screen_name', 'description', 'friends_count']
        for key in userKeys:  # for each dictionary key we want
            if userDict[key] == 'null' or userDict[key] == '':
                newRowUser.append(None)  # proper NULL
            else:
                newRowUser.append(userDict[key])  # use value as-is

        userBatch.append(newRowUser)
        if len(userBatch) >= BATCH_SIZE:
            c.executemany('INSERT OR IGNORE INTO User VALUES(?,?,?,?,?)', userBatch)
            userBatch = []  # reset the list of batched users

        # ------------------------------------
        # Geo table (only when the tweet actually has geo info)
        if tweetDict['geo'] not in ('null', '', None):
            newRowGeo = [
                str(tweetDict['geo']['coordinates']),  # id
                tweetDict['geo']['type'],              # type
                tweetDict['geo']['coordinates'][0],    # latitude
                tweetDict['geo']['coordinates'][1],    # longitude
            ]
            geoBatch.append(newRowGeo)
            if len(geoBatch) >= BATCH_SIZE:
                # note: this must insert into Geo, not User
                c.executemany('INSERT OR IGNORE INTO Geo VALUES(?,?,?,?)', geoBatch)
                geoBatch = []  # reset the list of batched geos

    except ValueError:  # handle JSON parsing errors
        fdErr.write(line + '\n')

# Final batches (the remaining fewer-than-1000 rows to be loaded)
c.executemany('INSERT OR IGNORE INTO Tweets VALUES(?,?,?,?,?,?,?,?,?,?,?)', tweetBatch)
c.executemany('INSERT OR IGNORE INTO User VALUES(?,?,?,?,?)', userBatch)
c.executemany('INSERT OR IGNORE INTO Geo VALUES(?,?,?,?)', geoBatch)

print("Loaded", c.execute('SELECT COUNT(*) FROM Tweets').fetchall()[0], "Tweet rows")
print("Loaded", c.execute('SELECT COUNT(*) FROM User').fetchall()[0], "User rows")
print("Loaded", c.execute('SELECT COUNT(*) FROM Geo').fetchall()[0], "Geo rows")

f.close()
fdErr.close()
conn.commit()
c.close()
conn.close()

end = time.time()
print("Part e loading took", (end - start), 'seconds.')
```

__How does the runtime compare when batching is used?__

The runtime with batching:

### Problem 2

__a. Write and execute SQL queries to do the following.
Don’t forget to report the running times in each part and the code you used.__

__i. Find tweets where tweet id_str contains “55” or “88” anywhere in the column__

```
start = time.time()
# Match on the id_str column, anywhere in the string
rows = c.execute('SELECT * FROM Tweets WHERE id_str LIKE "%55%" OR id_str LIKE "%88%"').fetchall()
end = time.time()
print("Part i query took", (end - start), 'seconds.')
rows[:5]  # show a few matching tweets
```

__ii. Find how many unique values are there in the “in_reply_to_user_id” column__

```
start = time.time()
result = c.execute('SELECT COUNT(DISTINCT in_reply_to_user_id) AS num_replies FROM Tweets').fetchall()
end = time.time()
print("Part ii query took", (end - start), 'seconds.')
result
```

__iii. Find the tweet(s) with the shortest, longest and average length text message.__

```
start = time.time()
c.execute('SELECT MIN(LENGTH(text)) AS shortest, MAX(LENGTH(text)) AS longest, '
          'AVG(LENGTH(text)) AS average FROM Tweets').fetchall()
end = time.time()
print("Part iii query took", (end - start), 'seconds.')

# Fetch the actual tweets at the minimum and maximum lengths.
# (The average length is a float, so no single tweet matches it exactly;
# using subqueries also avoids hard-coding the results of the first query.)
start = time.time()
c.execute('SELECT * FROM Tweets '
          'WHERE LENGTH(text) = (SELECT MIN(LENGTH(text)) FROM Tweets) '
          '   OR LENGTH(text) = (SELECT MAX(LENGTH(text)) FROM Tweets)').fetchall()
end = time.time()
print("Part iii query took", (end - start), 'seconds.')
```

__iv. Find the average longitude and latitude value for each user name.__

```
start = time.time()
result = c.execute('SELECT screen_name, AVG(latitude), AVG(longitude) FROM User '
                   'JOIN Tweets ON User.ID = Tweets.user_id '
                   'JOIN Geo ON Tweets.geo_id = Geo.ID '
                   'GROUP BY screen_name').fetchall()
end = time.time()
print("Part iv query took", (end - start), 'seconds.')
result[:5]
```

__v.
Find how many known/unknown locations there were in total (e.g., 50,000 known, 950,000 unknown, 5% locations are available)__

```
start = time.time()
# known = rows with a non-NULL geo_id, unknown = rows with a NULL geo_id.
# The percentage of available locations is known / total rows
# (dividing by the unknown count would overstate the share).
result = c.execute('SELECT COUNT(geo_id) AS known, '
                   'COUNT(*) - COUNT(geo_id) AS unknown, '
                   'ROUND(COUNT(geo_id) * 100.0 / COUNT(*), 1) AS pct_known '
                   'FROM Tweets').fetchall()
end = time.time()
print("Part v query took", (end - start), 'seconds.')
result
```

__vi. Re-execute the query in part iv) 10 times and 100 times and measure the total runtime (just re-run the same exact query multiple times using a for-loop). Does the runtime scale linearly? (i.e., does it take 10X and 100X as much time?)__

```
start = time.time()
for i in range(10):
    c.execute('SELECT screen_name, AVG(latitude), AVG(longitude) FROM User '
              'JOIN Tweets ON User.ID = Tweets.user_id '
              'JOIN Geo ON Tweets.geo_id = Geo.ID '
              'GROUP BY screen_name').fetchall()
end = time.time()
print("Part iv 10x query took", (end - start), 'seconds.')

start = time.time()
for i in range(100):
    c.execute('SELECT screen_name, AVG(latitude), AVG(longitude) FROM User '
              'JOIN Tweets ON User.ID = Tweets.user_id '
              'JOIN Geo ON Tweets.geo_id = Geo.ID '
              'GROUP BY screen_name').fetchall()
end = time.time()
print("Part iv 100x query took", (end - start), 'seconds.')
```

__b. Write python code that is going to read the locally saved tweet data file from 1-b and perform the equivalent computation for parts 2-i and 2-ii only.
How does the runtime compare to the SQL queries?__

```
# i - equivalent of: SELECT * FROM Tweets WHERE id_str LIKE "%55%" OR id_str LIKE "%88%"
import pandas as pd

start = time.time()
f = open("THF_db.txt", 'r', encoding='utf-8')
data = []
labels = ['id_str', 'in_reply_to_user_id']
error_tally = 0

# Loop through the 1,000,000 tweets in the text file
for i in range(1000000):
    line = f.readline()
    try:
        tweetDict = json.loads(line)  # dictionary with the tweet info
        data.append((tweetDict["id_str"], tweetDict["in_reply_to_user_id"]))
    except ValueError:  # catch JSON parsing errors
        error_tally += 1
f.close()

df = pd.DataFrame.from_records(data, columns=labels)
df_end = df[df['id_str'].astype(str).str.contains('55|88')]
end = time.time()
print("Part 2b-i loop took", (end - start), 'seconds.')
df_end.head(10)

# ii - equivalent of: SELECT COUNT(DISTINCT in_reply_to_user_id) FROM Tweets
start = time.time()
f = open("THF_db.txt", 'r', encoding='utf-8')
data = []
error_tally = 0
for i in range(1000000):
    line = f.readline()
    try:
        tweetDict = json.loads(line)
        data.append((tweetDict["id_str"], tweetDict["in_reply_to_user_id"]))
    except ValueError:
        error_tally += 1
f.close()

df = pd.DataFrame.from_records(data, columns=labels)
# nunique() counts distinct values, matching COUNT(DISTINCT ...) in SQL;
# value_counts() would instead list the frequency of each value
num_unique = df['in_reply_to_user_id'].nunique()
end = time.time()
print("Part 2b-ii loop took", (end - start), 'seconds.')
num_unique
```

### Problem 3

__a. Export the contents of the User table from a SQLite table into a sequence of INSERT statements within a file. This is very similar to what you already did in Assignment 4. However, you have to add a unique ID column which has to be a string (you cannot use numbers).
Hint: you can replace digits with letters, e.g., chr(ord('a')+1) gives you a 'b' and chr(ord('a')+2) returns a 'c'__

```
# import sqlite3

def digitsToLetters(n):
    # Build the required unique string ID by mapping each digit of the
    # numeric id to a letter: '0' -> 'a', '1' -> 'b', ..., '9' -> 'j'
    return ''.join(chr(ord('a') + int(d)) for d in str(n))

def generateInsertStatements(tblName):
    conn = sqlite3.connect('Tweets_Database_THF1.db')  # Using HW3 SQLite DB (preloaded)
    c = conn.cursor()
    fd = open(tblName + '.txt', 'w')  # open file for export
    tblRows = c.execute('SELECT * FROM %s' % tblName).fetchall()
    for row in tblRows:
        # Prepend the unique string ID derived from the numeric id in column 0
        fd.write("INSERT INTO %s VALUES %s;\n"
                 % (tblName, str((digitsToLetters(row[0]),) + tuple(row))))
    fd.close()
    c.close()
    conn.close()

start = time.time()
generateInsertStatements('User')
end = time.time()
print("Part 3a loop took", (end - start), 'seconds.')
```

Part 2b-ii loop took 314.5617868900299 seconds.

__b. Create the same collection of INSERT for the User table by reading data from the local tweet file that you have saved earlier.__

```
def generateInsertStatements_b(tblName):
    f = open("THF_db.txt", 'r', encoding='utf-8')  # the locally saved tweet file
    fd = open(tblName + '.txt', 'w')               # the file to write to
    err = 0
    for i in range(1000000):
        line = f.readline()
        try:
            tweetDict = json.loads(line)  # dictionary with the tweet info
            userDict = tweetDict['user']

            # User table row
            newRowUser = []  # holds individual values of the to-be-inserted row
            userKeys = ['id', 'name', 'screen_name', 'description', 'friends_count']
            for key in userKeys:  # for each dictionary key we want
                if userDict[key] == 'null' or userDict[key] == '':
                    newRowUser.append(None)  # proper NULL
                else:
                    newRowUser.append(userDict[key])  # use value as-is

            # Prepend the same unique string ID as in part 3a
            newRowUser.insert(0, ''.join(chr(ord('a') + int(d)) for d in str(userDict['id'])))
            fd.write("INSERT INTO %s VALUES %s;\n" % (tblName, str(tuple(newRowUser))))
        except ValueError:
            err += 1
    f.close()
    fd.close()

start = time.time()
generateInsertStatements_b('User')
end = time.time()
print("Part 3b loop took", (end - start), 'seconds.')
```

__How do these compare in runtime? Which method was faster?__

Comparing the runtime:

### Problem 4

__4. Export all three tables (Tweet, User and Geo tables) from the database into a |-separated text file (each value in a row should be separated by |).
You do not generate SQL INSERT statements, just raw |-separated text data.__

```
# import sqlite3
# import pandas as pd
conn = sqlite3.connect('Tweets_Database_THF1.db')
c = conn.cursor()

df_tweets_read = pd.read_sql_query("SELECT * FROM Tweets;", conn)  # tweets
df_user_read = pd.read_sql_query("SELECT * FROM User;", conn)      # user
df_geo_read = pd.read_sql_query("SELECT * FROM Geo;", conn)        # geo

df_tweets_read.to_csv("tweets_table.txt", sep='|')  # tweets
df_user_read.to_csv("user_table.txt", sep='|')      # user
df_geo_read.to_csv("geo_table.txt", sep='|')        # geo

c.close()
conn.close()
```

__a. For the Geo table, add a new column with relative distance from a fixed point which is the location of CDM (41.878668, -87.625555). You can simply treat it as a point-to-point Euclidean distance (although bonus points for finding a real distance in miles) and round the longitude and latitude columns to a maximum of 4 digits after the decimal.__

```
import sqlite3
import pandas as pd
import numpy as np

conn = sqlite3.connect('Tweets_Database_THF1.db')
c = conn.cursor()
df_geo_read = pd.read_sql_query("SELECT * FROM Geo;", conn)  # geo

# Round coordinates to a maximum of 4 digits after the decimal
df_geo_read['Latitude'] = df_geo_read.Latitude.round(4)
df_geo_read['Longitude'] = df_geo_read.Longitude.round(4)

# Euclidean distance from CDM at (41.878668, -87.625555)
df_geo_read['distance'] = (
    df_geo_read.Latitude.sub(41.878668).pow(2)
    .add(df_geo_read.Longitude.sub(-87.625555).pow(2))
).pow(.5).round(4)

df_geo_read.to_csv("geo_table.txt", sep='|')  # geo
c.close()
conn.close()
df_geo_read.head(10)
```

__b.
For the Tweet table, add two new columns from the User table (“name” and “screen_name”) in addition to existing columns.__

```
import sqlite3
import pandas as pd

conn = sqlite3.connect('Tweets_Database_THF1.db')
c = conn.cursor()
df_tweets_read = pd.read_sql_query("SELECT * FROM Tweets;", conn)  # Tweets
df_user_read = pd.read_sql_query("SELECT * FROM User;", conn)      # User

# Left-join the user columns onto each tweet, then drop the ones we don't need
new_df = pd.merge(df_tweets_read, df_user_read, how='left',
                  left_on='User_ID', right_on='ID')
new_df = new_df.drop(['DESCRIPTION', 'FRIENDS_COUNT', 'ID_y'], axis=1)

new_df.to_csv("tweet_table.txt", sep='|')  # Tweets written
c.close()
conn.close()
new_df.head(10)
```

__c. For the User table file add a column that specifies how many tweets by that user are currently in the database. That is, your output file should contain all of the columns from the User table, plus the new column with tweet count. You do not need to modify the User table, just create the output text file. What is the name of the user with most tweets?__

```
import sqlite3
import pandas as pd

conn = sqlite3.connect('Tweets_Database_THF1.db')
c = conn.cursor()
df_user_read = pd.read_sql_query("SELECT * FROM User;", conn)  # User (needed for the merge below)

# Tweet count per user, computed on the joined frame from the previous part
new_df['tweets_count'] = new_df.groupby('User_ID')['User_ID'].transform('count')
join_df = new_df[['User_ID', 'tweets_count']]  # dataframe with only id and tweet count

newer_df = pd.merge(df_user_read, join_df, left_on='ID', right_on='User_ID')
clean_df = newer_df.sort_values(by=['tweets_count'], ascending=False).drop_duplicates()

clean_df.to_csv("user_table.txt", sep='|')  # user
c.close()
conn.close()
clean_df.head(10)  # the top row shows the user with the most tweets
```
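As a sanity check on the tweet-count logic above, here is a minimal, self-contained sketch of the same technique on a made-up miniature Tweets table (the user IDs and counts are invented for illustration only):

```python
import pandas as pd

# Hypothetical miniature Tweets table: one row per tweet
tweets = pd.DataFrame({"User_ID": [1, 2, 2, 3, 2, 1]})

# Count tweets per user, mirroring the groupby(...).transform('count') idea above
counts = tweets.groupby("User_ID").size().rename("tweets_count").reset_index()
top_user = counts.loc[counts["tweets_count"].idxmax(), "User_ID"]
print(counts)
print("User with most tweets:", top_user)  # user 2, with 3 tweets
```

On the real data, the same `idxmax` step would answer "which user has the most tweets" directly, without sorting the whole frame.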
<font size=6><b>Understand_Tables.ipynb:</b></font> <p> <font size=4>Extract Structured Information from Tables in PDF Documents using IBM Watson Discovery and Text Extensions for Pandas </font> # Introduction Many organizations have valuable information hidden in tables inside human-readable documents like PDF files and web pages. Table identification and extraction technology can turn this human-readable information into a format that data science tools can import and use. Text Extensions for Pandas and Watson Discovery make this process much easier. In this notebook, we'll follow the journey of Allison, an analyst at an investment bank. Allison's employer has assigned her to cover several different companies, one of which is IBM. As part of her analysis, Allison wants to track IBM's revenue over time, broken down by geographical region. That detailed revenue information is all there in IBM's filings with the U.S. Securities and Exchange Commission (SEC). For example, here's IBM's 2019 annual report: ![IBM Annual Report for 2019 (146 pages)](images/IBM_Annual_Report_2019.png) Did you see the table of revenue by geography? It's here, on page 39: ![Page 39 of IBM Annual Report for 2019](images/IBM_Annual_Report_2019_page_39.png) Here's what that table looks like close up: ![Table: Geographic Revenue (from IBM 2019 annual report)](images/screenshot_table_2019.png) But this particular table only gives two years' revenue figures. Allison needs to have enough data to draw a meaningful chart of revenue over time. 10 years of annual revenue figures would be a good starting point. Allison has a collection of IBM annual reports going back to 2009. In total, these documents contain about 1500 pages of financial information. Hidden inside those 1500 pages are the detailed revenue figures that Allison wants. She needs to find those figures, extract them from the documents, and import them into her data science tools. 
Fortunately, Allison has [Watson Discovery](https://www.ibm.com/cloud/watson-discovery), IBM's suite of tools for managing and extracting value from collections of human-readable documents. The cells that follow will show how Allison uses Text Extensions for Pandas and Watson Discovery to import the detailed revenue information from her PDF documents into a Pandas DataFrame...

![Screenshot of a DataFrame from later in this notebook.](images/revenue_table.png)

...that she then uses to generate a chart of revenue over time:

![Chart of revenue over time, from later in this notebook.](images/revenue_over_time.png)

But first, let's set your environment up so that you can run Allison's code yourself. (If you're just reading through the precomputed outputs of this notebook, you can skip ahead to the section labeled ["Extract Tables with Watson Discovery"](#watson_discovery)).

# Environment Setup

This notebook requires a Python 3.7 or later environment with the following packages:

* The dependencies listed in the ["requirements.txt" file for Text Extensions for Pandas](https://github.com/CODAIT/text-extensions-for-pandas/blob/master/requirements.txt)
* `matplotlib`
* `text_extensions_for_pandas`

You can satisfy the dependency on `text_extensions_for_pandas` in either of two ways:

* Run `pip install text_extensions_for_pandas` before running this notebook. This command adds the library to your Python environment.
* Run this notebook out of your local copy of the Text Extensions for Pandas project's [source tree](https://github.com/CODAIT/text-extensions-for-pandas). In this case, the notebook will use the version of Text Extensions for Pandas in your local source tree **if the package is not installed in your Python environment**.

```
# Core Python libraries
import json
import os
import sys
from typing import *

import pandas as pd
from matplotlib import pyplot as plt

# And of course we need the text_extensions_for_pandas library itself.
try:
    import text_extensions_for_pandas as tp
except ModuleNotFoundError as e:
    # If we're running from within the project source tree and the parent Python
    # environment doesn't have the text_extensions_for_pandas package, use the
    # version in the local source tree.
    if not os.getcwd().endswith("notebooks"):
        raise e
    if ".." not in sys.path:
        sys.path.insert(0, "..")
    import text_extensions_for_pandas as tp
```

<div id="watson_discovery"/>

# Extract Tables with Watson Discovery

Allison connects to the [Watson Discovery](https://cloud.ibm.com/docs/discovery-data?topic=discovery-data-install) component of her firm's [IBM Cloud Pak for Data](https://www.ibm.com/products/cloud-pak-for-data) installation on their [OpenShift](https://www.openshift.com/) cluster. She creates a new Watson Discovery project and uploads her stack of IBM annual reports to her project. Then she uses Watson Discovery's [Table Understanding enrichment](https://cloud.ibm.com/docs/discovery-data?topic=discovery-data-understanding_tables) to identify tables in the PDF documents and to extract detailed information about the cells and headers that make up each table.

To keep this notebook short, we've captured the output of Table Understanding on Allison's documents and checked it into Github [here](https://github.com/CODAIT/text-extensions-for-pandas/tree/master/resources/tables/Financial_table_demo/IBM_10-K). We will use these JSON files as input for the rest of this scenario. If you'd like to learn more about importing and managing document collections in Watson Discovery, take a look at the [Getting Started Guide for Watson Discovery](https://cloud.ibm.com/docs/discovery-data?topic=discovery-data-getting-started).

Allison reads the JSON output from Watson Discovery's table enrichment into a Python variable, then prints out what the 2019 "Geographic Revenue" table looks like in this raw output.
```
# Location of the output from Watson Discovery's Table Understanding enrichment
# (relative to this notebook file)
FILES_DIR = "../resources/tables/financial_table_demo/IBM_10-K"

with open(f"{FILES_DIR}/2019.json", "r") as f:
    ibm_2019_json = json.load(f)

# Find the table in the "Geographic Revenue" section.
table_index = [
    i for i in range(len(ibm_2019_json["tables"]))
    if ibm_2019_json["tables"][i]["section_title"]["text"] == "Geographic Revenue"
][0]
print(json.dumps(ibm_2019_json["tables"][table_index], indent=2))
```

That raw output contains everything Allison needs to extract the revenue figures from this document, but it's in a format that's cumbersome to deal with. So Allison uses Text Extensions for Pandas to convert this JSON into a collection of Pandas DataFrames. These DataFrames encode information about the row headers, column headers, and cells that make up the table.

```
table_data = tp.io.watson.tables.parse_response(ibm_2019_json, select_table="Geographic Revenue")
table_data.keys()
table_data["body_cells"].head(5)
```

Text Extensions for Pandas can convert these DataFrames into a single Pandas DataFrame that matches the layout of the original table in the document. Allison calls the `make_table()` function to perform that conversion and inspects the output.

```
revenue_2019_df = tp.io.watson.tables.make_table(table_data)
revenue_2019_df
```

&nbsp;

The reconstructed dataframe looks good! Here's what the original table in the PDF document looked like:

![Table: Geographic Revenue (from IBM 2019 annual report)](images/screenshot_table_2019.png)

If Allison just wanted to create a DataFrame of 2018/2019 revenue figures, her task would be done. But Allison wants to reconstruct ten years of revenue by geographic region. To do that, she will need to combine information from multiple documents.
For tables like this one that have multiple levels of header information, this kind of integration is easier to perform over the "exploded" version of the table, where each cell in the table is represented by a single row containing all the corresponding header values. Allison passes the same table data from the 2019 report through the Text Extensions for Pandas function `make_exploded_df()` to produce the exploded representation of the table:

```
exploded_df, row_header_names, col_header_names = (
    tp.io.watson.tables.make_exploded_df(table_data, col_explode_by="concat"))
exploded_df
```

This exploded version of the table is the exact same data, just represented in a different way. If she wants, Allison can convert it back to the format from the original document by calling `pandas.DataFrame.pivot()`:

```
exploded_df.pivot(index="row_header_texts_0", columns="column_header_texts", values="text")
```

But because she is about to merge this DataFrame with similar data from other documents, Allison keeps the data in exploded format for now.

Allison's next task is to write some Pandas transformations that will clean and reformat the DataFrame for each source table prior to merging them all together. She uses the 2019 report's data as a test case for creating this code.

The first step is to convert the cell values in the Watson Discovery output from text to numeric values. Text Extensions for Pandas includes a more robust version of [`pandas.to_numeric()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_numeric.html) that can handle common idioms for representing currencies and percentages. Allison uses this function, called `convert_cols_to_numeric()`, to convert all the cell values to numbers. She adds a new column "value" to her DataFrame to hold these numbers.
```
exploded_df["value"] = \
    tp.io.watson.tables.convert_cols_to_numeric(exploded_df[["text"]])
exploded_df
```

Now all the cell values have been converted to floating-point numbers, but only some of these numbers represent revenue. Looking at the 2019 data, Allison can see that the revenue numbers have 4-digit years in their column headers. So she filters the DataFrame down to just those rows with 4-digit numbers in the "column_header_texts" column.

```
rows_to_retain = exploded_df[exploded_df["column_header_texts"].str.fullmatch(r"\d{4}")].copy()
rows_to_retain
```

That's looking good! Now Allison drops the unnecessary columns and gives some more friendly names to the columns that remain.

```
rows_to_retain.rename(
    columns={
        "row_header_texts_0": "Region",
        "column_header_texts": "Year",
        "value": "Revenue"
    })[["Year", "Region", "Revenue"]]
```

The code from the last few cells worked to clean up the 2019 data, so Allison copies and pastes that code into a Python function:

```
def dataframe_for_file(filename: str):
    with open(f"{FILES_DIR}/{filename}", "r") as f:
        json_output = json.load(f)
    table_data = tp.io.watson.tables.parse_response(json_output,
                                                    select_table="Geographic Revenue")
    exploded_df, _, _ = tp.io.watson.tables.make_exploded_df(
        table_data, col_explode_by="concat")
    rows_to_retain = exploded_df[exploded_df["column_header_texts"].str.fullmatch(r"\d{4}")
                                 & (exploded_df["text"].str.len() > 0)].copy()
    rows_to_retain["value"] = tp.io.watson.tables.convert_cols_to_numeric(
        rows_to_retain[["text"]])
    rows_to_retain["file"] = filename
    return (
        rows_to_retain.rename(columns={
            "row_header_texts_0": "Region",
            "column_header_texts": "Year",
            "value": "Revenue"})
        [["Year", "Region", "Revenue"]]
    )
```

Then she calls that function on the Watson Discovery output from the 2019 annual report to verify that it produces the same answer.

```
dataframe_for_file("2019.json")
```

Looks good! Time to run the same function over an entire stack of reports.
Allison puts the names of all her Watson Discovery output files into a single Python list.

```
all_files = sorted([f for f in os.listdir(FILES_DIR) if f.endswith(".json")])
all_files
```

Note that the annual reports for 2011 and 2014 aren't in the collection of files that Allison has. But that's ok; each report contains the previous year's figures, so Allison can reconstruct the missing data from adjacent years.

Allison calls her `dataframe_for_file()` function on each of the files, then concatenates all of the resulting Pandas DataFrames into a single large DataFrame.

```
revenue_df = pd.concat([dataframe_for_file(f) for f in all_files])
revenue_df
```

Allison can see that the first four lines of this DataFrame contain total worldwide revenue, and that this total occurred under different names in different documents. Allison is interested in the fine-grained revenue figures, not the totals, so she needs to filter out all these rows with worldwide revenue. What are all the names of geographic regions that IBM annual reports have used over the last ten years?

```
revenue_df[["Region"]].drop_duplicates()
```

It looks like all the worldwide revenue figures are under some variation of "Geographies" or "Total revenue". Allison uses Pandas' string matching facilities to filter out the rows whose "Region" column contains the words "geographies" or "total".

```
geo_revenue_df = (
    revenue_df[~(  # "~" operator inverts a Pandas selection condition
        (revenue_df["Region"].str.contains("geographies", case=False))
        | (revenue_df["Region"].str.contains("total", case=False))
    )]).copy()
geo_revenue_df
```

Now every row contains a regional revenue figure. What are the regions represented?

```
geo_revenue_df[["Region"]].drop_duplicates()
```

That's strange &mdash; one of the regions is "Asia Pacifi c", with a space before the last "c". It looks like the PDF conversion on the 2016 annual report added an extra space.
Allison uses the function `pandas.Series.replace()` to correct that issue.

```
geo_revenue_df["Region"] = geo_revenue_df["Region"].replace("Asia Pacifi c", "Asia Pacific")
geo_revenue_df
```

Allison inspects the time series of revenue for the "Americas" region:

```
geo_revenue_df[geo_revenue_df["Region"] == "Americas"].sort_values("Year")
```

Every year from 2008 to 2019 is present, but many of the years appear twice. That's to be expected, since each of the annual reports contains two years of geographical revenue figures. Allison drops the duplicate values using `pandas.DataFrame.drop_duplicates()`.

```
geo_revenue_df.drop_duplicates(["Region", "Year"], inplace=True)
geo_revenue_df
```

Now Allison has a clean and complete set of revenue figures by geographical region for the years 2008-2019. She uses Pandas' `pandas.DataFrame.pivot()` method to convert this data into a compact table.

```
revenue_table = geo_revenue_df.pivot(index="Region", columns="Year", values="Revenue")
revenue_table
```

Then she uses that table to produce a plot of revenue by region over the 2008-2019 period.

```
plt.rcParams.update({'font.size': 16})
revenue_table.transpose().plot(title="Revenue by Geographic Region",
                               ylabel="Revenue (Millions of US$)",
                               figsize=(12, 7), ylim=(0, 50000))
```

Now Allison has a clear picture of the detailed revenue data that was hidden inside those 1500 pages of PDF files. As she works on her analyst report, Allison can use the same process to extract DataFrames for other financial metrics too!
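The deduplicate-then-pivot step used above can be tried on a tiny synthetic long-format table. This is a minimal sketch with made-up numbers, not IBM's actual figures; it only illustrates how `drop_duplicates` and `pivot` interact:

```python
import pandas as pd

# Made-up long-format revenue rows, including one duplicate Year/Region pair
geo = pd.DataFrame({
    "Year": ["2018", "2019", "2018", "2018"],
    "Region": ["Americas", "Americas", "Europe", "Americas"],
    "Revenue": [100.0, 110.0, 90.0, 100.0],
})

# Keep one row per (Region, Year) pair, then pivot to a wide table
geo = geo.drop_duplicates(["Region", "Year"])
table = geo.pivot(index="Region", columns="Year", values="Revenue")
print(table)
```

Without the `drop_duplicates` call, `pivot` would raise an error on the duplicate (Americas, 2018) pair, which is exactly why Allison deduplicates before pivoting.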
<a href="https://colab.research.google.com/github/ManojKesani/100-Days-Of-ML-Code/blob/master/Copy_of_basics.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> <!--NAVIGATION--> <a href="https://colab.research.google.com/github/saskeli/x/blob/master/basics.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a> | - | - | - | |-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------| | [Exercise 1 (hello world)](<#Exercise-1-(hello-world&#41;>) | [Exercise 2 (compliment)](<#Exercise-2-(compliment&#41;>) | [Exercise 3 (multiplication)](<#Exercise-3-(multiplication&#41;>) | | [Exercise 4 (multiplication table)](<#Exercise-4-(multiplication-table&#41;>) | [Exercise 5 (two dice)](<#Exercise-5-(two-dice&#41;>) | [Exercise 6 (triple square)](<#Exercise-6-(triple-square&#41;>) | | [Exercise 7 (areas of shapes)](<#Exercise-7-(areas-of-shapes&#41;>) | [Exercise 8 (solve quadratic)](<#Exercise-8-(solve-quadratic&#41;>) | [Exercise 9 (merge)](<#Exercise-9-(merge&#41;>) | | [Exercise 10 (detect ranges)](<#Exercise-10-(detect-ranges&#41;>) | [Exercise 11 (interleave)](<#Exercise-11-(interleave&#41;>) | [Exercise 12 (distinct characters)](<#Exercise-12-(distinct-characters&#41;>) | | [Exercise 13 (reverse dictionary)](<#Exercise-13-(reverse-dictionary&#41;>) | [Exercise 14 (find matching)](<#Exercise-14-(find-matching&#41;>) | [Exercise 15 (two dice comprehension)](<#Exercise-15-(two-dice-comprehension&#41;>) | | [Exercise 16 (transform)](<#Exercise-16-(transform&#41;>) | [Exercise 17 (positive list)](<#Exercise-17-(positive-list&#41;>) | [Exercise 18 (acronyms)](<#Exercise-18-(acronyms&#41;>) | | [Exercise 19 
(sum equation)](<#Exercise-19-(sum-equation&#41;>) | [Exercise 20 (usemodule)](<#Exercise-20-(usemodule&#41;>) | |

# Python

## Basic concepts

### Basic input and output

The traditional "Hello, world" program is very simple in Python. You can run the program by selecting the cell with the mouse and pressing control-enter on the keyboard. Try editing the string in the quotes and rerunning the program.

```
print("Hello world!")
```

Multiple strings can be printed. By default, they are concatenated with a space:

```
print("Hello,", "John!", "How are you?")
```

In the print function, numerical expressions are first evaluated and then automatically converted to strings. Subsequently the strings are concatenated with spaces:

```
print(1, "plus", 2, "equals", 1+2)
```

Reading textual input from the user can be achieved with the input function. The input function is given a string parameter, which is printed and prompts the user to give input. In the example below, the string entered by the user is stored in the variable `name`. Try executing the program in the interactive notebook by pressing control-enter!

```
name=input("Give me your name: ")
print("Hello,", name)
```

### Indentation

Repetition is possible with the for loop. Note that the body of the for loop is indented with a tabulator or four spaces. Unlike in some other languages, braces are not needed to denote the body of the loop. When the indentation stops, the body of the loop ends.

```
for i in range(3):
    print("Hello")
print("Bye!")
```

Indentation applies to other compound statements as well, such as bodies of functions, different branches of an if statement, and while loops. We shall see examples of these later.

The `range(3)` expression above actually results in the sequence of integers 0, 1, and 2. So, the range is a half-open interval with the end point excluded from the range. In general, the expression `range(n)` gives the integers 0, 1, 2, ..., n-1.
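The half-open behaviour of `range` can be checked directly (a small sketch; it just prints each value the loop variable takes):

```python
# range(3) produces exactly 0, 1 and 2 -- the end point 3 is excluded
for i in range(3):
    print(i)
```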
Modify the above program to make it also print the value of variable i at each iteration. Rerun the code with control-enter.

#### <div class="alert alert-info">Exercise 1 (hello world)</div>

Fill in the missing piece in the solution stub file `hello_world.py` in folder `src` to make it print the following:

`Hello, world!`

Make sure you use correct indenting. You can run it with command `python3 src/hello_world.py`. If the output looks good, then you can test it with command `tmc test`. If the tests pass, submit your solution to the server with command `tmc submit`.

<hr/>

#### <div class="alert alert-info">Exercise 2 (compliment)</div>

Fill in the stub solution to make the program work as follows. The program should ask the user for an input, and then print an answer as the examples below show.

```
What country are you from? Sweden
I have heard that Sweden is a beautiful country.

What country are you from? Chile
I have heard that Chile is a beautiful country.
```

<hr/>

#### <div class="alert alert-info">Exercise 3 (multiplication)</div>

Make a program that gives the following output. You should use a for loop in your solution.

```
4 multiplied by 0 is 0
4 multiplied by 1 is 4
4 multiplied by 2 is 8
4 multiplied by 3 is 12
4 multiplied by 4 is 16
4 multiplied by 5 is 20
4 multiplied by 6 is 24
4 multiplied by 7 is 28
4 multiplied by 8 is 32
4 multiplied by 9 is 36
4 multiplied by 10 is 40
```

<hr/>

### Variables and data types

We saw already earlier that assigning a value to a variable is very simple:

```
a=1
print(a)
```

Note that we did not need to introduce the variable `a` in any way. No type was given for the variable. Python automatically detected that the type of `a` must be `int` (an integer).
We can query the type of a variable with the builtin function `type`:

```
type(a)
```

Note also that the type of a variable is not fixed:

```
a="some text"
type(a)
```

In Python the type of a variable is not attached to the name of the variable, like in C for instance, but instead with the actual value. This is called dynamic typing.

![typing.svg](https://github.com/csmastersUH/data_analysis_with_python_2020/blob/master/typing.svg?raw=1)

We say that a variable is a name that *refers* to a value or an object, and the assignment operator *binds* a variable name to a value.

The basic data types in Python are: `int`, `float`, `complex`, `str` (a string), `bool` (a boolean with values `True` and `False`), and `bytes`. Below are a few examples of their use.

```
i=5
f=1.5
b = i==4
print("Result of the comparison:", b)
c=0+2j  # Note that j denotes the imaginary unit of complex numbers.
print("Complex multiplication:", c*c)
s="conca" + "tenation"
print(s)
```

The names of the types act as conversion operators between types:

```
print(int(-2.8))
print(float(2))
print(int("123"))
print(bool(-2), bool(0))  # Zero is interpreted as False
print(str(234))
```

A *byte* is a unit of information that can represent numbers between 0 and 255. A byte consists of 8 *bits*, which can in turn represent either 0 or 1. All the data that is stored on disks or transmitted across the internet is sequences of bytes. Normally we don't have to care about bytes, since our strings and other variables are automatically converted to a sequence of bytes when needed. An example of the correspondence between the usual data types and bytes is the characters in a string. A single character is encoded as a sequence of one or more bytes. For example, in the common [UTF-8](https://en.wikipedia.org/wiki/UTF-8) encoding the character `c` corresponds to the byte with integer value 99 and the character `ä` corresponds to the sequence of bytes [195, 164].
An example conversion between characters and bytes:

```
b="ä".encode("utf-8")     # Convert character(s) to a sequence of bytes
print(b)                  # Prints bytes in hexadecimal notation
print(list(b))            # Prints bytes in decimal notation
bytes.decode(b, "utf-8")  # convert sequence of bytes to character(s)
```

During this course we don't have to care much about bytes, but in some cases, when loading data sets, we might have to specify the encoding if it deviates from the default one.

#### Creating strings

A string is a sequence of characters commonly used to store input or output data in a program. The characters of a string are specified either between single (`'`) or double (`"`) quotes. This optionality is useful if, for example, a string needs to contain a quotation mark: "I don't want to go!". You can also achieve this by *escaping* the quotation mark with the backslash: 'I don\\'t want to go'. The string can also contain other escape sequences like `\n` for newline and `\t` for a tabulator. See [literals](https://docs.python.org/3/reference/lexical_analysis.html#literals) for a list of all escape sequences.

```
print("One\tTwo\nThree\tFour")
```

A string containing newlines can be easily given within triple double or triple single quotes:

```
s="""A string
spanning over
several lines"""
```

Although we can concatenate strings using the `+` operator, for efficiency reasons one should use the `join` method to concatenate a larger number of strings:

```
a="first"
b="second"
print(a+b)
print(" ".join([a, b, b, a]))  # More about the join method later
```

Sometimes printing by concatenating pieces can be clumsy:

```
print(str(1) + " plus " + str(3) + " is equal to " + str(4))
# slightly better
print(1, "plus", 3, "is equal to", 4)
```

The multiple concatenations and quotation characters break the flow of thought. *String interpolation* offers somewhat easier syntax.
There are multiple ways to do string interpolation:

* Python format strings
* the `format` method
* f-strings

Examples of these can be seen below:

```
print("%i plus %i is equal to %i" % (1, 3, 4))      # Format syntax
print("{} plus {} is equal to {}".format(1, 3, 4))  # Format method
print(f"{1} plus {3} is equal to {4}")              # f-string
```

The `i` format specifier in the format syntax corresponds to integers and the specifier `f` corresponds to floats. When using f-strings or the `format` method, integers use `d` instead. In format strings, specifiers can usually be omitted and are generally used only when specific formatting is required. For example in f-strings `f"{4:3d}"` would specify the number 4 left-padded with spaces to 3 digits.

It is often useful to specify the number of decimals when printing floats:

```
print("%.1f %.2f %.3f" % (1.6, 1.7, 1.8))            # Old style
print("{:.1f} {:.2f} {:.3f}".format(1.6, 1.7, 1.8))  # newer style
print(f"{1.6:.1f} {1.7:.2f} {1.8:.3f}")              # f-string
```

The specifier `s` is used for strings. An example:

```
print("%s concatenated with %s produces %s" % ("water", "melon", "water"+"melon"))
print("{0} concatenated with {1} produces {0}{1}".format("water", "melon"))
print(f"{'water'} concatenated with {'melon'} produces {'water' + 'melon'}")
```

Look [here](https://pyformat.info/#number) for more details about format specifiers, and for a comparison between the old and new styles of string interpolation. Different ways of string interpolation have different strengths and weaknesses. Generally, choosing which to use is a matter of personal preference. In this course examples and model solutions will predominantly use f-strings and the `format` method.

### Expressions

An *expression* is a piece of Python code that results in a value. It consists of values combined together with *operators*. Values can be literals, such as `1`, `1.2`, `"text"`, or variables.
Operators include arithmetic operators, comparison operators, function call, indexing, attribute references, among others. Below are a few examples of expressions:

```
1+2
7/(2+0.1)
a
cos(0)
mylist[1]
c > 0 and c !=1
(1,2,3)
a<5
obj.attr
(-1)**2 == 1
```

<div class="alert alert-warning">Note that in Python the operator `//` performs integer division and operator `/` performs float division. The `**` operator denotes exponentiation. These operators might therefore behave differently than in many other common languages.</div>

As another example the following expression computes the kinetic energy of a non-rotating object: `0.5 * mass * velocity**2`

### Statements

Statements are commands that have some effect. For example, a function call (that is not part of another expression) is a statement. Also, the variable assignment is a statement:

```
i = 5
i = i+1  # This is a common idiom to increment the value of i by one
i += 1   # This is a short-hand for the above
```

Note that in Python there are no operators `++` or `--` unlike in some other languages. It turns out that the operators `+ - * / // % & | ^ >> << **` have the corresponding *augmented assignment operators* `+= -= *= /= //= %= &= |= ^= >>= <<= **=`.

Another large set of statements is the flow-control statements such as if-else, for and while loops. We will look into these in the next sections.

#### Loops for repetitive tasks

In Python we have two kinds of loops: `while` and `for`. We briefly saw the `for` loop earlier. Let's now look at the `while` loop. A `while` loop repeats a set of statements while a given condition holds. An example:

```
i=1
while i*i < 1000:
    print("Square of", i, "is", i*i)
    i = i + 1
print("Finished printing all the squares below 1000.")
```

Note again that the body of the while statement was marked with the indentation. Another way of repeating statements is with the `for` statement.
An example:

```
s=0
for i in [0,1,2,3,4,5,6,7,8,9]:
    s = s + i
print("The sum is", s)
```

The `for` loop executes the statements in the block as many times as there are elements in the given list. At each iteration the variable `i` refers to another value from the list in order. Instead of giving the list explicitly as above, we could have used the *generator* `range(10)` which returns values from the sequence 0,1,...,9 as the for loop asks for a new value. In the most general form the `for` loop goes through all the elements in an *iterable*. Besides lists and generators there are other iterables. We will talk about iterables and generators later this week.

When one wants to iterate through all the elements in an iterable, then the `for` loop is a natural choice. But sometimes `while` loops offer a cleaner solution. For instance, if we want to go through all Fibonacci numbers up to a given limit, then it is easier to do with a `while` loop.

#### <div class="alert alert-info">Exercise 4 (multiplication table)</div>

In the `main` function print a multiplication table, which is shown below:

```
   1   2   3   4   5   6   7   8   9  10
   2   4   6   8  10  12  14  16  18  20
   3   6   9  12  15  18  21  24  27  30
   4   8  12  16  20  24  28  32  36  40
   5  10  15  20  25  30  35  40  45  50
   6  12  18  24  30  36  42  48  54  60
   7  14  21  28  35  42  49  56  63  70
   8  16  24  32  40  48  56  64  72  80
   9  18  27  36  45  54  63  72  81  90
  10  20  30  40  50  60  70  80  90 100
```

For example at row 4 and column 9 we have 4*9=36. Use two nested for loops to achieve this. Note that you can use the following form to stop the `print` function from automatically starting a new line:

```
print("text", end="")
print("more text")
```

Print the numbers in a field with width four, so that the numbers are nicely aligned. For instructions on how to adjust the field width refer to [pyformat.info](https://pyformat.info/#number_padding).

<hr/>

#### Decision making with the if statement

The if-else statement works as can be expected. Try running the below cell by pressing control+enter.
```
x=input("Give an integer: ")
x=int(x)
if x >= 0:
    a=x
else:
    a=-x
print("The absolute value of %i is %i" % (x, a))
```

The general form of an if-else statement is

```
if condition1:
    statement1_1
    statement1_2
    ...
elif condition2:
    statement2_1
    statement2_2
    ...
...
else:
    statementn_1
    statementn_2
    ...
```

Another example:

```
c=float(input("Give a number: "))
if c > 0:
    print("c is positive")
elif c<0:
    print("c is negative")
else:
    print("c is zero")
```

#### Breaking and continuing loop

Breaking the loop, when the wanted element is found, with the `break` statement:

```
l=[1,3,65,3,-1,56,-10]
for x in l:
    if x < 0:
        break
print("The first negative list element was", x)
```

Stopping the current iteration and continuing to the next one with the `continue` statement:

```
from math import sqrt, log
l=[1,3,65,3,-1,56,-10]
for x in l:
    if x < 0:
        continue
    print(f"Square root of {x} is {sqrt(x):.3f}")
    print(f"Natural logarithm of {x} is {log(x):.4f}")
```

#### <div class="alert alert-info">Exercise 5 (two dice)</div>

Let us consider throwing two dice. (A die can give a value between 1 and 6.) Use two nested `for` loops in the `main` function to iterate through all possible combinations the pair of dice can give. There are 36 possible combinations. Print all those combinations as (ordered) pairs that sum to 5. For example, your printout should include the pair `(2,3)`. Print one pair per line.

<hr/>

### Functions

A function is defined with the `def` statement. Let's write a doubling function.

```
def double(x):
    "This function multiplies its argument by two."
    return x*2

print(double(4), double(1.2), double("abc"))  # It even happens to work for strings!
```

The double function takes only one parameter. Notice the *docstring* on the second line. It documents the purpose and usage of the function. Let's try to access it.
```
print("The docstring is:", double.__doc__)
help(double)  # Another way to access the docstring
```

Most of Python's builtin functions, classes, and modules should contain a docstring.

```
help(print)
```

Here's another example function:

```
def sum_of_squares(a, b):
    "Computes the sum of arguments squared"
    return a**2 + b**2

print(sum_of_squares(3, 4))
```

<div class="alert alert-warning">Note the terminology: in the function definition the names a and b are called <strong>parameters</strong> of the function; in the function call, however, 3 and 4 are called <strong>arguments</strong> to the function. </div>

It would be nice if the number of arguments could be arbitrary, not just two. We could pass a list to the function as a parameter.

```
def sum_of_squares(lst):
    "Computes the sum of squares of elements in the list given as parameter"
    s=0
    for x in lst:
        s += x**2
    return s

print(sum_of_squares([-2]))
print(sum_of_squares([-2,4,5]))
```

This works perfectly! There is however some extra typing with the brackets around the lists. Let's see if we can do better:

```
def sum_of_squares(*t):
    "Computes the sum of squares of arbitrary number of arguments"
    s=0
    for x in t:
        s += x**2
    return s

print(sum_of_squares(-2))
print(sum_of_squares(-2,4,5))
```

The strange looking argument notation (the star) is called *argument packing*. It packs all the given positional arguments into a tuple `t`. We will encounter tuples again later, but it suffices now to say that tuples are *immutable* lists. With the `for` loop we can iterate through all the elements in the tuple.

Conversely, there is also syntax for *argument unpacking*. It has, confusingly, exactly the same notation as argument packing (the star), but they are distinguished by the location where they are used.
Packing happens in the parameter list of the function's definition, and unpacking happens where the function is called:

```
lst=[1,5,8]
print("With list unpacked as arguments to the function:", sum_of_squares(*lst))
# print(sum_of_squares(lst))  # Does not work correctly
```

The second call fails because the function tries to raise the list of numbers to the second power. Inside the function body we have `t=([1,5,8],)`, a tuple whose single element is a list.

In addition to the positional arguments we have seen so far, a function call can also have *named arguments*. An example will explain this concept best:

```
def named(a, b, c):
    print("First:", a, "Second:", b, "Third:", c)

named(5, c=7, b=8)
```

Note that the named arguments didn't need to be in the same order as in the function definition. The named arguments must come after the positional arguments. For example, the following function call is illegal: `named(a=5, 7, 8)`.

One can also specify an optional parameter by giving the parameter a default value. The parameters that have default values must come after those parameters that don't. We saw that the parameters of the `print` function were of form `print(value, ..., sep=' ', end='\n', file=sys.stdout, flush=False)`. There were four parameters with default values. If some default values don't suit us, we can give them in the function call using the name of the parameter:

```
print(1, 2, 3, end=' |', sep=' -*- ')
print("first", "second", "third", end=' |', sep=' -*- ')
```

We did not need to specify all the parameters with default values, only those we wanted to change. Let's go through another example of using parameters with default values:

```
def length(*t, degree=2):
    """Computes the length of the vector given as parameter.
    By default, it computes the Euclidean distance (degree==2)"""
    s=0
    for x in t:
        s += abs(x)**degree
    return s**(1/degree)

print(length(-4,3))
print(length(-4,3, degree=3))
```

With the default parameter this is the Euclidean distance, and if $p\ne 2$ it is called the [p-norm](https://en.wikipedia.org/wiki/P-norm).

We saw that it was possible to use packing and unpacking of arguments with the `*` notation, when one wants to specify an arbitrary number of *positional arguments*. This is also possible for an arbitrary number of named arguments with the `**` notation. We will talk about this more in the data structures section.

#### Visibility of variables

A function definition creates a new *namespace* (also called local scope). Variables created inside this scope are not available from outside the function definition. Also, the function parameters are only visible inside the function definition. Variables that are not defined inside any function are called `global variables`. Global variables are readable also in local scopes, but an assignment creates a new local variable without rebinding the global variable. If we are inside a function, a local variable hides a global variable by the same name:

```
i=2           # global variable
def f():
    i=3       # this creates a new variable, it does not rebind the global i
    print(i)  # This will print 3

f()
print(i)      # This will print 2
```

If you really need to rebind a global variable from a function, use the `global` statement. Example:

```
i=2
def f():
    global i
    i=5       # rebind the global i variable
    print(i)  # This will print 5

f()
print(i)      # This will print 5
```

Unlike languages like C or C++, Python allows defining a function inside another function. This *nested* function will have a nested scope:

```
def f():             # outer function
    b=2
    def g():         # inner function
        #nonlocal b  # Without this nonlocal statement,
        b=3          # this will create a new local variable
        print(b)
    g()
    print(b)

f()
```

Try first running the above cell and see the result.
Then uncomment the nonlocal statement and run the cell again.

The `global` and `nonlocal` statements are similar. The first will force a variable to refer to a global variable, and the second will force a variable to refer to the variable in the nearest outer scope (but not the global scope).

#### <div class="alert alert-info">Exercise 6 (triple square)</div>

Write two functions: `triple` and `square`. Function `triple` multiplies its parameter by three. Function `square` raises its parameter to the power of two. For example, we have equalities `triple(5)==15` and `square(5)==25`.

Part 1. In the `main` function write a `for` loop that iterates through values 1 to 10, and for each value prints its triple and its square. The output should be as follows:

```
triple(1)==3 square(1)==1
triple(2)==6 square(2)==4
...
```

Part 2. Now modify this `for` loop so that it stops iteration when the square of a value is larger than the triple of the value, without printing anything in the last iteration. Note that the test cases check that both functions `triple` and `square` are called exactly once per iteration.

<hr/>

#### <div class="alert alert-info">Exercise 7 (areas of shapes)</div>

Create a program that can compute the areas of three shapes, triangles, rectangles and circles, when their dimensions are given. An endless loop should ask which shape you want the area to be calculated for. An empty string as input will exit the loop. If the user gives a string that is none of the given shapes, the message “Unknown shape!” should be printed. Then it will ask for the dimensions of that particular shape. When all the necessary dimensions are given, it prints the area, and starts the loop all over again. Use format specifier `f` for the area.

What happens if you give incorrect dimensions, like giving the string "aa" as radius? You don't have to check for errors in the input.
Example interaction:

```
Choose a shape (triangle, rectangle, circle): triangle
Give base of the triangle: 20
Give height of the triangle: 5
The area is 50.000000
Choose a shape (triangle, rectangle, circle): rectangel
Unknown shape!
Choose a shape (triangle, rectangle, circle): rectangle
Give width of the rectangle: 20
Give height of the rectangle: 4
The area is 80.000000
Choose a shape (triangle, rectangle, circle): circle
Give radius of the circle: 10
The area is 314.159265
Choose a shape (triangle, rectangle, circle):
```

<hr/>

### Data structures

The main data structures in Python are strings, lists, tuples, dictionaries, and sets. We saw some examples of lists when we discussed `for` loops, and we saw tuples briefly when we introduced argument packing and unpacking. Let's get into more details now.

#### Sequences

A *list* contains an arbitrary number of elements (even zero) that are stored in sequential order. The elements are separated by commas and written between brackets. The elements don't need to be of the same type. An example of a list with four values:

```
[2, 100, "hello", 1.0]
```

A *tuple* is a fixed-length, immutable, and ordered container. Elements of a tuple are separated by commas and written between parentheses. Examples of tuples:

```
(3,)               # a singleton
(1,3)              # a pair
(1, "hello", 1.0); # a triple
```

<div class="alert alert-warning">Note the difference between `(3)` and `(3,)`. Because the parentheses can also be used to group expressions, the first one defines an integer, but the second one defines a tuple with a single element.</div>

As we can see, both lists and tuples can contain values of different types.
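The practical difference between the two containers is mutability: an element of a list can be reassigned, but the same operation on a tuple fails. A small sketch (the values are just illustrative):

```python
lst=[2, 100, "hello", 1.0]
lst[0]=3     # lists are mutable: this replaces the first element
print(lst)

t=(1, "hello", 1.0)
# t[0]=2     # This would cause an error: tuples are immutable
```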
Lists, tuples, and strings are called *sequences* in Python, and they have several commonalities:

* their length can be queried with the `len` function
* the `min` and `max` functions find the minimum and maximum element of a sequence, and `sum` adds all the elements of a sequence of numbers together
* Sequences can be concatenated with the `+` operator, and repeated with the `*` operator: `"hi"*3=="hihihi"`
* Since sequences are ordered, we can refer to the elements of a sequence by integers using the *indexing* notation: `"abcd"[2] == "c"`
  * Note that the indexing begins from 0
  * Negative integers start indexing from the end: -1 refers to the last element, -2 refers to the second last, and so on

Above we saw that we can access a single element of a sequence using *indexing*. If we want a subsequence of a sequence, we can use the *slicing* syntax. A slice consists of elements of the original sequence, and it is itself a sequence as well. A simple slice is a range of elements:

```
s="abcdefg"
s[1:4]
```

Note that Python ranges exclude the last index. The generic form of a slice is `sequence[first:last:step]`. If any of the three parameters are left out, they are set to default values as follows: first=0, last=len(sequence), step=1. So, for instance "abcde"[1:]=="bcde". The step parameter selects elements that are step distance apart from each other. For example:

```
print([0,1,2,3,4,5,6,7,8,9][::3])
```

#### <div class="alert alert-info">Exercise 8 (solve quadratic)</div>

In mathematics, the quadratic equation $ax^2+bx+c=0$ can be solved with the formula $x=\frac{-b\pm \sqrt{b^2 -4ac}}{2a}$. Write a function `solve_quadratic`, that returns both solutions of a generic quadratic as a pair (2-tuple) when the coefficients are given as parameters. It should work like this:

```python
print(solve_quadratic(1,-3,2))
(2.0,1.0)
print(solve_quadratic(1,2,1))
(-1.0,-1.0)
```

You may want to use the `math.sqrt` function from the `math` module in your solution.
Test that your function works in the main function!

<hr/>

#### Modifying lists

We can assign values to elements of a list by indexing or by slicing. An example:

```
L=[11,13,22,32]
L[2]=10  # Changes the third element
print(L)
```

Or we can assign a list to a slice:

```
L[1:3]=[4]
print(L)
```

We can also modify a list by using *mutating methods* of the `list` class, namely the methods `append`, `extend`, `insert`, `remove`, `pop`, `reverse`, and `sort`. Try Python's help functionality to find out more about these methods: e.g. `help(list.extend)` or `help(list)`.

<div class="alert alert-warning">Note that we cannot perform these modifications on tuples or strings since they are *immutable*</div>

#### Generating numerical sequences

Trivial lists can be tedious to write: `[0,1,2,3,4,5,6]`. The function `range` creates numeric ranges automatically. The above sequence can be generated with the function call `range(7)`. Note again that the end value is not included in the sequence. An example of using the `range` function:

```
L=range(3)
for i in L:
    print(i)
# Note that L is not a list!
print(L)
```

So `L` is not a list, but it is a sequence. We can for instance access its last element with `L[-1]`. If really needed, it can be converted to a list with the `list` constructor:

```
L=range(10)
print(list(L))
```

<div class="alert alert-warning">Note that using a range consumes less memory than the corresponding list. This is because in a list all the elements are stored in the memory, whereas the range generates the requested elements only when needed. For example, when the for loop asks for the next element from the range at each iteration, only a single element from the range exists in memory at the same time. This makes a big difference when using large ranges, like range(1000000).</div>

The `range` function works in a similar fashion to slices.
So, for instance, the step of the sequence can be given:

```
print(list(range(0, 7, 2)))
```

#### Sorting sequences

In Python there are two ways to sort sequences. The `sort` *method* modifies the original list, whereas the `sorted` *function* returns a new sorted list and leaves the original intact. A couple of examples will demonstrate this:

```
L=[5,3,7,1]
L.sort()  # here we call the sort method of the object L
print(L)

L2=[6,1,7,3,6]
print(sorted(L2))
print(L2)
```

The parameter `reverse=True` can be given (both to `sort` and `sorted`) to get a descending order of elements:

```
L=[5,3,7,1]
print(sorted(L, reverse=True))
```

#### <div class="alert alert-info">Exercise 9 (merge)</div>

Suppose we have two lists `L1` and `L2` that contain integers which are sorted in ascending order. Create a function `merge` that gets these lists as parameters and returns a new sorted list `L` that has all the elements of `L1` and `L2`. So, `len(L)` should equal `len(L1)+len(L2)`. Do this using the fact that both lists are already sorted. You can’t use the `sorted` function or the `sort` method in implementing the `merge` function. You can however use `sorted` in the main function for creating inputs to the `merge` function. Test with a couple of examples in the `main` function that your solution works correctly.

Note: In Python argument lists are passed by reference to the function, they are not copied! Make sure you don't modify the original lists of the caller.

<hr/>

#### <div class="alert alert-info">Exercise 10 (detect ranges)</div>

Create a function named `detect_ranges` that gets a list of integers as a parameter. The function should then sort this list, and transform the list into another list where pairs are used for all the detected intervals. So `3,4,5,6` is replaced by the pair `(3,7)`. Numbers that are not part of any interval remain as single numbers. The resulting list consists of these numbers and pairs, separated by commas.
An example of how this function works:

```python
print(detect_ranges([2,5,4,8,12,6,7,10,13]))
[2,(4,9),10,(12,14)]
```

Note that the second element of the pair does not belong to the range. This is consistent with the way Python's `range` function works. You may assume that no element in the input list appears multiple times.

<hr/>

#### Zipping sequences

The `zip` function combines two (or more) sequences into one sequence. If, for example, two sequences are zipped together, the resulting sequence contains pairs. In general, if `n` sequences are zipped together, the elements of the resulting sequence are `n`-tuples. An example of this:

```
L1=[1,2,3]
L2=["first", "second", "third"]
print(zip(L1, L2))        # Note that zip does not return a list, just like range
print(list(zip(L1, L2)))  # Convert to a list
```

Here's another example of using the `zip` function:

```
days="Monday Tuesday Wednesday Thursday Friday Saturday Sunday".split()
weathers="rainy rainy sunny cloudy rainy sunny sunny".split()
temperatures=[10,12,12,9,9,11,11]
for day, weather, temperature in zip(days,weathers,temperatures):
    print(f"On {day} it was {weather} and the temperature was {temperature} degrees celsius.")

# Or equivalently:
#for t in zip(days,weathers,temperatures):
#    print("On {} it was {} and the temperature was {} degrees celsius.".format(*t))
```

If the sequences are not of equal length, then the resulting sequence will be as long as the shortest input sequence.

#### <div class="alert alert-info">Exercise 11 (interleave)</div>

Write function `interleave` that gets an arbitrary number of lists as parameters. You may assume that all the lists have equal length. The function should return one list containing all the elements from the input lists interleaved. Test your function from the `main` function of the program.

Example: `interleave([1,2,3], [20,30,40], ['a', 'b', 'c'])` should return `[1, 20, 'a', 2, 30, 'b', 3, 40, 'c']`. Use the `zip` function to implement `interleave`.
Remember the `extend` method of list objects.

<hr/>

#### Enumerating sequences

In some other programming languages one iterates through the elements using their indices (0, 1, ...) in the sequence. In Python we normally don't need to think about indices when iterating, because the `for` loop allows simpler iteration through the elements. But sometimes you really need to know the index of the current element in the sequence. In this case one uses Python's `enumerate` function. In the next example we would like to find the second occurrence of the integer 5 in a list.

```
L=[1,2,98,5,-1,2,0,5,10]
counter = 0
for i, x in enumerate(L):
    if x == 5:
        counter += 1
        if counter == 2:
            break
print(i)
```

The `enumerate(L)` function call can be thought to be equivalent to `zip(range(len(L)), L)`.

#### Dictionaries

A *dictionary* is a dynamic, unordered container. Instead of using integers to access the elements of the container, the dictionary uses *keys* to access the stored *values*. The dictionary can be created by listing the comma-separated key-value pairs in braces. Keys and values are separated by a colon. A tuple (key, value) is called an *item* of the dictionary. Let's demonstrate dictionary creation and usage:

```
d={"key1":"value1", "key2":"value2"}
print(d["key1"])
print(d["key2"])
```

Keys can have different types even in the same container. So the following code is legal: `d={1:"a", "z":1}`. The only restriction is that the keys must be *hashable*. That is, there has to be a mapping from keys to integers. Lists are *not* hashable, but tuples are!

There are alternative syntaxes for dictionary creation:

```
dict([("key1", "value1"), ("key2", "value2"), ("key3", "value3")])  # list of items
dict(key1="value1", key2="value2", key3="value3");
```

If a key is not found in a dictionary, the indexing `d[key]` results in an error (*exception* `KeyError`).
But an assignment with a non-existing key causes the key to be added in the dictionary, associated with the corresponding value:

```
d={}
d[2]="value"
print(d)
# d[1]   # This would cause an error
```

The dictionary object offers several non-mutating methods:

```
d.copy()
d.items()
d.keys()
d.values()
d.get(k[,x])
```

Some methods mutate the dictionary:

```
d.clear()
d.update(d1)
d.setdefault(k[,x])
d.pop(k[,x])
d.popitem()
```

Try out some of these in the below cell. You can find more info with `help(dict)` or `help(dict.keys)`.

```
d=dict(a=1, b=2, c=3, d=4, e=5)
d.values()
```

#### Sets

A set is a dynamic, unordered container. It works a bit like a dictionary, but only the keys are stored. And each key can be stored only once. The set requires that the keys to be stored are hashable. Below are a few ways of creating a set:

```
s={1,1,1}
print(s)
s=set([1,2,2,'a'])
print(s)
s=set()     # empty set
print(s)
s.add(7)    # add one element
print(s)
```

A more useful example:

```
s="mississippi"
print(f"There are {len(set(s))} distinct characters in {s}")
```

The `set` provides the following non-mutating methods:

```
s=set()
s1=set()
s.copy()
s.issubset(s1)
s.issuperset(s1)
s.union(s1)
s.intersection(s1)
s.difference(s1)
s.symmetric_difference(s1);
```

The last four operations can be tedious to write in a more complicated expression. The alternative is to use the corresponding operator forms: `|`, `&`, `-`, and `^`. An example of these:

```
s=set([1,2,7])
t=set([2,8,9])
print("Union:", s|t)
print("Intersection:", s&t)
print("Difference:", s-t)
print("Symmetric difference", s^t)
```

There are also the following mutating methods:

```
s.add(x)
s.clear()
s.discard(x)
s.pop()
s.remove(x)
```

And the set operators `|`, `&`, `-`, and `^` have the corresponding mutating, augmented assignment forms: `|=`, `&=`, `-=`, and `^=`.

#### <div class="alert alert-info">Exercise 12 (distinct characters)</div>

Write function `distinct_characters` that gets a list of strings as a parameter.
It should return a dictionary whose keys are the strings of the input list and the corresponding values are the numbers of distinct characters in the key. Use the `set` container to temporarily store the distinct characters in a string.

Example of usage: `distinct_characters(["check", "look", "try", "pop"])` should return `{ "check" : 4, "look" : 3, "try" : 3, "pop" : 2}`.

<hr/>

#### Miscellaneous stuff

To find out whether a container includes an element, the `in` operator can be used. The operator returns a truth value. Some examples of the usage:

```
print(1 in [1,2])
d=dict(a=1, b=3)
print("b" in d)
s=set()
print(1 in s)
print("x" in "text")
```

As a special case, for strings the `in` operator can be used to check whether a string is part of another string:

```
print("issi" in "mississippi")
print("issp" in "mississippi")
```

Elements of a container can be unpacked into variables:

```
first, second = [4,5]
a,b,c = "bye"
print(c)
d=dict(a=1, b=3)
key1, key2 = d
print(key1, key2)
```

In membership testing and unpacking only the keys of a dictionary are used, unless either the values or the items (like below) are explicitly requested.

```
for key, value in d.items():
    print(f"For key '{key}' value {value} was stored")
```

To remove the binding of a variable, use the `del` statement. For example:

```
s="hello"
del s
# print(s)   # This would cause an error
```

To delete an item from a container, the `del` statement can again be applied:

```
L=[13,23,40,100]
del L[1]
print(L)
```

In a similar fashion `del` can be used to delete a slice. Later we will see that `del` can delete attributes from an object.

#### <div class="alert alert-info">Exercise 13 (reverse dictionary)</div>

Let `d` be a dictionary that has English words as keys and a list of Finnish words as values.
So, the dictionary can be used to find out the Finnish equivalents of an English word in the following way:

```
d["move"]
["liikuttaa"]
d["hide"]
["piilottaa", "salata"]
```

Make a function `reverse_dictionary` that creates a Finnish-to-English dictionary based on an English-to-Finnish dictionary given as a parameter. The values of the created dictionary should be lists of words. It should work like this:

```
d={'move': ['liikuttaa'], 'hide': ['piilottaa', 'salata'], 'six': ['kuusi'], 'fir': ['kuusi']}
reverse_dictionary(d)
{'liikuttaa': ['move'], 'piilottaa': ['hide'], 'salata': ['hide'], 'kuusi': ['six', 'fir']}
```

Be careful with synonyms and homonyms!

<hr/>

#### <div class="alert alert-info">Exercise 14 (find matching)</div>

Write function `find_matching` that gets a list of strings and a search string as parameters. The function should return the indices of those elements in the input list that contain the search string. Use the function `enumerate`.

An example: `find_matching(["sensitive", "engine", "rubbish", "comment"], "en")` should return the list `[0, 1, 3]`.

<hr/>

### Compact way of creating data structures

We can now easily create complicated data structures using `for` loops:

```
L=[]
for i in range(10):
    L.append(i**2)
print(L)
```

Because this kind of pattern is often used, Python offers a short-hand for it. A *list comprehension* is an expression that allows creating complicated lists on one line. The notation is familiar from mathematics:

$\{a^3 : a \in \{1,2, \ldots, 10\}\}$

The same written in Python as a list comprehension:

```
L=[ a**3 for a in range(1,11)]
print(L)
```

The generic form of a list comprehension is: `[ expression for element in iterable lc-clauses ]`. Let's break this syntax into pieces. The iterable can be any sequence (or something more general). The lc-clauses consist of zero or more of the following clauses:

* for elem in iterable
* if expression

A more complicated example. How would you describe these numbers?
```
L=[ 100*a + 10*b + c
    for a in range(0,10)
    for b in range(0,10)
    for c in range(0,10)
    if a <= b <= c]
print(L)
```

If one needs only to iterate through the list once, it is more memory efficient to use a *generator expression* instead. The only thing that changes syntactically is that the surrounding brackets are replaced by parentheses:

```
G = ( 100*a + 10*b + c
      for a in range(0,10)
      for b in range(0,10)
      for c in range(0,10)
      if a <= b <= c )
print(sum(G))   # This iterates through all the elements from the generator
print(sum(G))   # It doesn't restart from the beginning, so all elements are already consumed
```

<div class="alert alert-warning">Note above that one can only iterate through the generator once.</div>

Similarly a *dictionary comprehension* creates a dictionary:

```
d={ k : k**2 for k in range(10)}
print(d)
```

And a *set comprehension* creates a set:

```
s={ i*j for i in range(10) for j in range(10)}
print(s)
```

#### <div class="alert alert-info">Exercise 15 (two dice comprehension)</div>

Redo the earlier exercise which printed all the pairs of two dice results that sum to 5. But this time use a list comprehension. Print one pair per line.

<hr/>

### Processing sequences

In this section we will go through some useful tools that may be familiar to you from a functional programming language like *Lisp* or *Haskell*. These functions rely on functions being first-class objects in Python, that is, you can

* pass a function as a parameter to another function
* return a function as a return value from some function
* store a function in a data structure or a variable

We will talk about the `map`, `filter`, and `reduce` functions. We will also cover how to create functions with no name using *lambda* expressions.

#### Map and lambda functions

The `map` function gets a function and a list as parameters, and it returns a new list whose elements are elements of the original list transformed by the parameter function.
For this to work the parameter function must take exactly one parameter and return a value. An example will clarify this concept:

```
def double(x):
    return 2*x

L=[12,4,-1]
print(map(double, L))
```

The map function returns a map object for efficiency reasons. However, since we only want to print the contents, we first convert it to a list and then print it:

```
print(list(map(double,L)))
```

When one reads numeric data from a file or from the internet, the numbers are usually in string form. Before they can be used in computations, they must first be converted to ints or floats. A simple example will showcase this.

```
s="12 43 64 6"
L=s.split()    # The split method of the string class breaks the string at whitespace
               # into a list of strings.
print(L)
print(sum(map(int, L)))   # The int function converts a string to an integer
```

Sometimes it feels unnecessary to write a function if you are only going to use it in one `map` function call. For example the function

```
def add_double_and_square(x):
    return 2*x+x**2
```

It is not likely that you will need it elsewhere in your program. The solution is to use an *expression* called *lambda* to define a function with no name. Because it is an expression, we can put it, for instance, in the argument list of a function call. The lambda expression has the form `lambda param1, param2, ... : expression`, where after the lambda keyword you list the parameters of the function, and after the colon is the expression that uses the parameters to compute the return value of the function.

Let's replace the above `add_double_and_square` function with a lambda function and apply it to a list using the `map` function.

```
L=[2,3,5]
print(list(map(lambda x : 2*x+x**2, L)))
```

#### <div class="alert alert-info">Exercise 16 (transform)</div>

Write a function `transform` that gets two strings as parameters and returns a list of integers. The function should split the strings into words and convert these words to integers.
This should give two lists of integers. Then the function should return a list whose elements are the products of the integers at the respective positions in the two lists. For example `transform("1 5 3", "2 6 -1")` should return the list of integers `[2, 30, -3]`.

You **have** to use the `split`, `map`, and `zip` functions/methods. You may assume that the two input strings are in the correct format.

<hr/>

#### Filter function

The `filter` function takes a function and a list as parameters. As with `map`, the parameter function must take exactly one parameter, but now it must return a truth value (True or False). The `filter` function then creates a new list with only those elements from the original list for which the parameter function returns True. The elements for which the parameter function returns False are filtered out. An example will demonstrate the `filter` function:

```
def is_odd(x):
    """Returns True if x is odd and False if x is even"""
    return x % 2 == 1    # The % operator returns the remainder of integer division

L=[1, 4, 5, 9, 10]
print(list(filter(is_odd, L)))
```

The even elements of the list were filtered out. Note that the `filter` function is rarely used in modern Python, since list comprehensions can do the same thing while also doing whatever we want to do with the filtered values.

```
[l**2 for l in L if is_odd(l)]   # squares of odd values
```

That said, `filter` is a useful function to know.

#### <div class="alert alert-info">Exercise 17 (positive list)</div>

Write a function `positive_list` that gets a list of numbers as a parameter, and returns a list with the negative numbers and zero filtered out using the `filter` function.

The function call `positive_list([2,-2,0,1,-7])` should return the list `[2,1]`. Test your function in the `main` function.

<hr/>

#### The reduce function

The `sum` function, which returns the sum of a numeric list, can be thought to reduce a list to a single element.
It does this reduction by repeatedly applying the `+` operator until all the list elements are consumed. For instance, the list `[1,2,3,4]` is reduced by the expression `(((0+1)+2)+3)+4` of repeated applications of the `+` operator. We could implement this with the following function:

```
def sumreduce(L):
    s=0
    for x in L:
        s = s+x
    return s
```

Because this is a common pattern, the `reduce` function is a common inclusion in functional programming languages. In Python `reduce` is included in the `functools` module. You give the operator you want to use as a parameter to reduce (addition in the above example). You may also give a starting value for the computation (the starting value 0 was used above). If no starting value is given, the first element of the iterable is used as the starting value.

We can now get rid of the separate function `sumreduce` by using the reduce function:

```
L=[1,2,3,4]
from functools import reduce   # import the reduce function from the functools module
reduce(lambda x,y:x+y, L, 0)
```

If we wanted to get a product of all numbers in a sequence, we would use

```
reduce(lambda x,y:x*y, L, 1)
```

This corresponds to the sequence `(((1*1)*2)*3)*4` of applications of the operator `*`.

<div class="alert alert-warning">Note that use of the starting value is necessary, because we want to be able to reduce lists of length 0 as well. If no starting value is specified when run on an empty list, <code>reduce</code> will raise an exception.</div>

## String handling

We have already seen how to index, slice, concatenate, and repeat strings. Let's now look into what methods the `str` class offers. In Python strings are immutable. This means that for instance the following assignment is not legal:

```
s="text"
# s[0] = "a"   # This is not legal in Python
```

Because of the immutability of strings, the string methods work by returning a value; they don't have any side-effects. In the rest of this section we briefly describe several of these methods.
The methods are here divided into five groups.

### Classification of strings

All the following methods take no parameters and return a truth value. An empty string always results in `False`.

* `s.isalnum()` True if all characters are letters or digits
* `s.isalpha()` True if all characters are letters
* `s.isdigit()` True if all characters are digits
* `s.islower()` True if the string contains letters, and all of them are lowercase
* `s.isupper()` True if the string contains letters, and all of them are uppercase
* `s.isspace()` True if all characters are whitespace
* `s.istitle()` True if each word starts with an uppercase letter and the other letters are lowercase

### String transformations

The following methods do conversions between lowercase and uppercase characters in the string. All these methods return a new string.

* `s.lower()` Change all letters to lowercase
* `s.upper()` Change all letters to uppercase
* `s.capitalize()` Capitalize the first character and change the rest to lowercase
* `s.title()` Change to titlecase
* `s.swapcase()` Change all uppercase letters to lowercase, and vice versa

### Searching for substrings

All the following methods get the wanted substring as a parameter, except the replace method, which also gets the replacement string as a parameter.

* `s.count(substr)` Counts the number of occurrences of a substring
* `s.find(substr)` Finds the index of the first occurrence of a substring, or -1
* `s.rfind(substr)` Finds the index of the last occurrence of a substring, or -1
* `s.index(substr)` Like find, except ValueError is raised if not found
* `s.rindex(substr)` Like rfind, except ValueError is raised if not found
* `s.startswith(substr)` Returns True if the string starts with the given substring
* `s.endswith(substr)` Returns True if the string ends with the given substring
* `s.replace(substr, replacement)` Returns a string where occurrences of one string are replaced by another

Keep also in mind that the expression `"issi" in "mississippi"` returns a truth value of whether the first string occurs in the second string.
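A few of the methods from the tables above in action; the example string is chosen arbitrarily:

```python
s = "mississippi"

# Classification
print(s.islower())            # True

# Transformations return a new string; s itself is unchanged
print(s.upper())              # MISSISSIPPI

# Searching
print(s.count("ss"))          # 2
print(s.find("ss"))           # 2
print(s.rfind("ss"))          # 5
print(s.replace("ss", "SS"))  # miSSiSSippi
```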
### Trimming and adjusting

* `s.strip(x)` Removes leading and trailing whitespace by default, or characters found in string x
* `s.lstrip(x)` Same as strip but only leading characters are removed
* `s.rstrip(x)` Same as strip but only trailing characters are removed
* `s.ljust(n)` Left justifies the string inside a field of length n
* `s.rjust(n)` Right justifies the string inside a field of length n
* `s.center(n)` Centers the string inside a field of length n

An example of using the `center` method and string repetition:

```
L=[1,3,5,7,9,1,1]
print("-"*11)
for i in L:
    s="*"*i
    print(f"|{s.center(9)}|")
print("-"*11)
```

### Joining and splitting

The `join(seq)` method joins the strings of the sequence `seq`. The string itself is used as a delimiter. An example:

```
"--".join(["abc", "def", "ghi"])
```

```
L=[str(x) for x in range(100)]
s=""
for x in L:
    s += " " + x    # Avoid doing this, it creates a new string at every iteration
print(s)            # Note the redundant initial space
print(" ".join(L))  # This is the correct way of building a string out of smaller strings
```

<div class="alert alert-warning">If you want to build a string out of smaller strings, then first put the small strings into a list, and then use the <code>join</code> method to concatenate the pieces together. It is much more efficient this way. Use the <code>+</code> concatenation operator only if you have very few short strings that you want to concatenate.</div>

Below we can see that for our small (100 element) list, execution is an order of magnitude faster using the `join` method.

```
%%timeit
s=""
for x in L:
    s += " " + x
```

```
%%timeit
s = " ".join(L)
```

`%%timeit` is an IPython [cell magic](https://ipython.readthedocs.io/en/stable/interactive/magics.html) command that is useful for timing execution in notebooks.

The method `split(sep=None)` divides a string into pieces that are separated by the string `sep`. The pieces are returned in a list.
For instance, the call `'abc--def--ghi'.split("--")` will result in

```
'abc--def--ghi'.split("--")
['abc', 'def', 'ghi']
```

If no parameters are given to the `split` method, then it splits at any sequence of whitespace.

#### <div class="alert alert-info">Exercise 18 (acronyms)</div>

Write function `acronyms` which takes a string as a parameter and returns a list of acronyms. A word is an acronym if it has length at least two, and all its characters are in uppercase. Before acronym detection, delete punctuation with the `strip` method. Test this function in the `main` function with the following call:

```python
print(acronyms("""For the purposes of the EU General Data Protection Regulation (GDPR), the controller of your personal information is International Business Machines Corporation (IBM Corp.), 1 New Orchard Road, Armonk, New York, United States, unless indicated otherwise. Where IBM Corp. or a subsidiary it controls (not established in the European Economic Area (EEA)) is required to appoint a legal representative in the EEA, the representative for all such cases is IBM United Kingdom Limited, PO Box 41, North Harbour, Portsmouth, Hampshire, United Kingdom PO6 3AU."""))
```

This should return

```
['EU', 'GDPR', 'IBM', 'IBM', 'EEA', 'EEA', 'IBM', 'PO', 'PO6', '3AU']
```

<hr/>

#### <div class="alert alert-info">Exercise 19 (sum equation)</div>

Write a function `sum_equation` which takes a list of positive integers as a parameter and returns a string with an equation of the sum of the elements.

Example: `sum_equation([1,5,7])` returns `"1 + 5 + 7 = 13"`.

Note that the spaces should be exactly as shown above. For an empty list the function should return the string `"0 = 0"`.

<hr/>

## Modules

To ease the management of large programs, software is divided into smaller pieces. In Python these pieces are called *modules*. A module should be a unit that is as independent from other modules as possible. Each file in Python corresponds to a module. Modules can contain classes, objects, functions, ...
For example, functions to handle regular expressions are in the module `re`. The standard library of Python consists of hundreds of modules. Some of the most common standard modules include

* `re`
* `math`
* `random`
* `os`
* `sys`

Any file with extension `.py` that contains Python source code is a module. So, no special notation is needed to create a module.

### Using modules

Let’s say that we need to use the cosine function. This function, and many other mathematical functions, are located in the `math` module. To tell Python that we want to access the features offered by this module, we can give the statement `import math`. Now the module is loaded into memory. We can now call the function like this:

```python
math.cos(0)
1.0
```

Note that we need to include the module name where the `cos` function is found. This is because other modules may have a function (or other attribute of a module) with the same name. This usage of a different namespace for each module prevents name clashes. For example, the functions `gzip.open` and `os.open` are not to be confused with the builtin `open` function.

### Breaking the namespace

If the cosine is needed a lot, then it might be tedious to always specify the namespace, especially if the name of the namespace/module is long. For these cases there is another way of importing modules. Bring a name to the current scope with the `from math import cos` statement. Now we can use it without the namespace specifier: `cos(1)`.

Several names can be imported to the current scope with `from math import name1, name2, ...`, or even all names of the module with `from math import *`. The last form is sensible only in a few cases; normally it just confuses things, since the user may have no idea what names will be imported.

### Module lookup

When we try to import a module `mod` with the import statement, the lookup proceeds in the following order:

* Check if it is a builtin module
* Check if the file `mod.py` is found in any of the folders in the list `sys.path`.
The first item in this list is the current folder. When Python is started, the `sys.path` list is initialised with the contents of the `PYTHONPATH` environment variable.

### Module hierarchy

The standard library contains hundreds of modules. Hence, it is hard to comprehend what the library includes. The modules therefore need to be organised somehow. In Python the modules can be organised into hierarchies using *packages*. A package is a module that can contain other packages and modules. For example, the `numpy` package contains the subpackages `core`, `distutils`, `f2py`, `fft`, `lib`, `linalg`, `ma`, `numarray`, `oldnumeric`, `random`, and `testing`. And the package `numpy.linalg` in turn contains the modules `linalg`, `lapack_lite` and `info`.

### Importing from packages

The statement `import numpy` imports the top-level package `numpy` and its subpackages.

* `import numpy.linalg` imports the subpackage only, and
* `import numpy.linalg.linalg` imports the module only

If we want to skip the long namespace specification, we can use the form

```python
from numpy.linalg import linalg
```

or

```python
from numpy.linalg import linalg as lin
```

if we want to use a different name for the module.

The following command imports the function `det` (computes the determinant of a matrix) from the module `linalg`, which is contained in the subpackage `linalg`, which belongs to the package `numpy`:

```python
from numpy.linalg.linalg import det
```

Had we only imported the top-level package `numpy`, we would have to refer to the `det` function with the full name `numpy.linalg.linalg.det`. Here's a recap of the module hierarchy:

```
numpy               package
.  linalg           subpackage
.  .  linalg        module
.  .  .  det        function
```

### Correspondence between folder and module hierarchies

The packages are represented by folders in the filesystem. The folder should contain a file named `__init__.py` that makes up the package body. This handles the initialisation of the package.
The folder may also contain further folders (subpackages) or Python files (normal modules).

```
a/
    __init__.py
    b.py
    c/
        __init__.py
        d.py
    e.py
```

![package.svg](https://github.com/csmastersUH/data_analysis_with_python_2020/blob/master/package.svg?raw=1)

### Contents of a module

Suppose we have a module named `mod.py`. All the assignments, class definitions with the `class` statement, and function definitions with the `def` statement will create new attributes in this module. Let’s import this module from another Python file using the `import mod` statement. After the import we can access the attributes of the module object using the normal dot notation: `mod.f()`, `mod.myclass()`, `mod.a`, etc. Note that Python doesn’t really have global variables that are visible to all modules. All variables belong to some module namespace.

One can query the attributes of an object using the `dir` function. With no parameters, it shows the attributes of the current module. Try executing `dir()` in an IPython shell or in a Jupyter notebook! After that, define the following attributes, and try running `dir()` again:

```python
a=5
def f(i):
    return i + 1
```

The above definitions created a *data attribute* called `a` and a *function attribute* called `f`. We will talk more about attributes next week when we will talk about objects. Just like other objects, the module object contains its attributes in the dictionary `modulename.__dict__`.

Usually a module contains at least the attributes `__name__` and `__file__`. Other common attributes are `__version__`, `__author__` and `__doc__`, which contains the docstring of the module. If the first statement of a file is a string, it is taken as the docstring for that module. Note that the docstring of the module really must be the first non-empty non-comment line. The attribute `__file__` is always the filename of the module.
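These attributes can be inspected on any imported module. A quick illustration using the standard `math` module (built-in modules like `math` may lack `__file__`, so we look at `__name__` and `__doc__` instead):

```python
import math

print(math.__name__)                    # math
print("cos" in dir(math))               # True
print(isinstance(math.__dict__, dict))  # True
print(math.__doc__.splitlines()[0])     # the first line of the module docstring
```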
The module attribute `__name__` has the value `"__main__"` if we are in the main program; otherwise some other module has imported us, and `__name__` equals the name of the module.

In Python it is possible to put statements at the top level of our module `mod` so that they don't belong to any function. For instance like this:

```python
for _ in range(3):
    print("Hello")
```

But if somebody imports our module with `import mod`, then all the statements at the top level will be executed. This may be surprising to the user who imported the module. The user usually wants to state explicitly when he/she wants to execute some code from the imported module. It is better style to put these statements inside some function. If they don't fit in any other function, then you can use, for example, the function named `main`, like this:

```python
def main():
    for _ in range(3):
        print("Hello")

if __name__ == "__main__":
    # We call main only when this module is not being imported, but directly executed,
    # for example with 'python3 mod.py'
    main()
```

You have probably seen this mechanism used in the exercise stubs. Note that in Python `main` has no special meaning; it is just our convention to use that name here. Now if somebody imports `mod`, the `for` loop won't be automatically executed. If we want, we can call it explicitly with `mod.main()`.

```python
for _ in range(3):
    print("Hello")
```

#### <div class="alert alert-info">Exercise 20 (usemodule)</div>

Create your own module as the file `triangle.py` in the `src` folder. The module should contain two functions:

* `hypothenuse`, which returns the length of the hypotenuse when given the lengths of the two other sides of a right-angled triangle
* `area`, which returns the area of the right-angled triangle, when two sides, perpendicular to each other, are given as parameters

Make sure both the functions and the module have descriptive docstrings. Add also the `__version__` and `__author__` attributes to the module.
Call both your functions from the `main` function (which is in the file `usemodule.py`).

## Summary

* We have learned that Python's code blocks are denoted by consistent indenting, with spaces or tabs, unlike in many other languages
* Python's `for` loop goes through all the elements of a container without the need to worry about the positions (indices) of the elements in the container
* More generally, an iterable is an object whose elements can be gone through one by one using a `for` loop, such as `range(1,7)`
* Python has dynamic typing: the type of a name is known only when we run the program. The type might not be fixed; that is, if a name is created, for example, in a loop, then its type might change at each iteration
* Visibility of a name: a name that refers to a variable can disappear in the middle of a code block, if a `del` statement is issued!
* Python is good at string handling, but remember that if you want to concatenate a large number of strings, use the `join` method. Concatenating with the `+` operator multiple times is very inefficient
* Several useful tools exist to process sequences: `map`, `reduce`, `filter`, `zip`, `enumerate`, and `range`. The unnamed lambda function can be helpful with these tools. Note that these tools (except `reduce`) don't return lists but iterables, for efficiency reasons: most often we don't want to store the result from these tools in a container (such as a list), we may only want to iterate through the result!

<!--NAVIGATION-->

<a href="https://colab.research.google.com/github/saskeli/x/blob/master/basics.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>
# Tutorial of Node Schematas - PI & TwoSymbol

Visualization of schematas for simple Boolean nodes (automata)

```
%load_ext autoreload
%autoreload 2
%matplotlib inline
from __future__ import division

import numpy as np
import pandas as pd
from IPython.display import Image, display
import cana
from cana.datasets.bools import *
from cana.drawing.canalizing_map import draw_canalizing_map_graphviz

n = OR()
print(n)
print('k_r: %.2f - %.2f' % (n.input_redundancy(mode='node',bound='upper',norm=False), n.input_redundancy(mode='node',bound='lower',norm=False)))
print('k_e: %.2f - %.2f' % (n.effective_connectivity(mode='node',bound='upper',norm=False), n.effective_connectivity(mode='node',bound='lower',norm=False)))
print('k_s: %.2f - %.2f' % (n.input_symmetry(mode='node',bound='upper',norm=False), n.input_symmetry(mode='node',bound='lower',norm=False)))
print()
print('k_r: %s (upper)' % n.input_redundancy(mode='input',bound='upper'))
print('k_e: %s (upper)' % n.effective_connectivity(mode='input',bound='upper'))
print()
dfLUT, dfPI, dfTS = n.look_up_table(), n.schemata_look_up_table(type='pi'), n.schemata_look_up_table(type='ts')
display(pd.concat({'Original LUT':dfLUT,'PI Schema':dfPI,'TS Schema':dfTS}, axis=1).fillna('-'))
draw_canalizing_map_graphviz(n.canalizing_map())

n = CONTRADICTION()
n.name = 'Con'
print(n)
print('k_r: %.2f - %.2f' % (n.input_redundancy(mode='node',bound='upper',norm=False), n.input_redundancy(mode='node',bound='lower',norm=False)))
print('k_e: %.2f - %.2f' % (n.effective_connectivity(mode='node',bound='upper',norm=False), n.effective_connectivity(mode='node',bound='lower',norm=False)))
print('k_s: %.2f - %.2f' % (n.input_symmetry(mode='node',bound='upper',norm=False), n.input_symmetry(mode='node',bound='lower',norm=False)))
print()
print('k_r: %s (upper)' % n.input_redundancy(mode='input',bound='upper'))
print('k_e: %s (upper)' % n.effective_connectivity(mode='input',bound='upper'))
print()
dfLUT, dfPI, dfTS = n.look_up_table(), n.schemata_look_up_table(type='pi'), n.schemata_look_up_table(type='ts')
display(pd.concat({'Original LUT':dfLUT,'PI Schema':dfPI,'TS Schema':dfTS}, axis=1).fillna('-'))
draw_canalizing_map_graphviz(n.canalizing_map())

n = XOR()
print(n)
print('k_r: %.2f - %.2f' % (n.input_redundancy(mode='node',bound='upper',norm=False), n.input_redundancy(mode='node',bound='lower',norm=False)))
print('k_e: %.2f - %.2f' % (n.effective_connectivity(mode='node',bound='upper',norm=False), n.effective_connectivity(mode='node',bound='lower',norm=False)))
print('k_s: %.2f - %.2f' % (n.input_symmetry(mode='node',bound='upper',norm=False), n.input_symmetry(mode='node',bound='lower',norm=False)))
print()
print('k_r: %s (upper)' % n.input_redundancy(mode='input',bound='upper'))
print('k_e: %s (upper)' % n.effective_connectivity(mode='input',bound='upper'))
print()
for input in [0,1]:
    for ts,per,sms in n._two_symbols[input]:
        print('TS: %s | PermIdx: %s | SameIdx: %s' % (ts, per, sms))
dfLUT, dfPI, dfTS = n.look_up_table(), n.schemata_look_up_table(type='pi'), n.schemata_look_up_table(type='ts')
display(pd.concat({'Original LUT':dfLUT,'PI Schema':dfPI,'TS Schema':dfTS}, axis=1).fillna('-'))
draw_canalizing_map_graphviz(n.canalizing_map())

n = AND()
print(n)
print('k_r: %.2f - %.2f' % (n.input_redundancy(mode='node',bound='upper',norm=False), n.input_redundancy(mode='node',bound='lower',norm=False)))
print('k_e: %.2f - %.2f' % (n.effective_connectivity(mode='node',bound='upper',norm=False), n.effective_connectivity(mode='node',bound='lower',norm=False)))
print('k_s: %.2f - %.2f' % (n.input_symmetry(mode='node',bound='upper',norm=False), n.input_symmetry(mode='node',bound='lower',norm=False)))
print()
print('k_r: %s (upper)' % n.input_redundancy(mode='input',bound='upper'))
print('k_e: %s (upper)' % n.effective_connectivity(mode='input',bound='upper'))
print()
dfLUT, dfPI, dfTS = n.look_up_table(), n.schemata_look_up_table(type='pi'), n.schemata_look_up_table(type='ts')
display(pd.concat({'Original LUT':dfLUT,'PI Schema':dfPI,'TS Schema':dfTS}, axis=1).fillna('-'))
draw_canalizing_map_graphviz(n.canalizing_map())

n = COPYx1()
n.name = 'CPx1'
print(n)
print('k_r: %.2f - %.2f' % (n.input_redundancy(mode='node',bound='upper',norm=False), n.input_redundancy(mode='node',bound='lower',norm=False)))
print('k_e: %.2f - %.2f' % (n.effective_connectivity(mode='node',bound='upper',norm=False), n.effective_connectivity(mode='node',bound='lower',norm=False)))
print('k_s: %.2f - %.2f' % (n.input_symmetry(mode='node',bound='upper',norm=False), n.input_symmetry(mode='node',bound='lower',norm=False)))
print()
print('k_r: %s (upper)' % n.input_redundancy(mode='input',bound='upper'))
print('k_e: %s (upper)' % n.effective_connectivity(mode='input',bound='upper'))
print()
dfLUT, dfPI, dfTS = n.look_up_table(), n.schemata_look_up_table(type='pi'), n.schemata_look_up_table(type='ts')
display(pd.concat({'Original LUT':dfLUT,'PI Schema':dfPI,'TS Schema':dfTS}, axis=1).fillna('-'))
draw_canalizing_map_graphviz(n.canalizing_map())

n = RULE90()
n.name = 'R90'
print(n)
print('k_r: %.2f - %.2f' % (n.input_redundancy(mode='node',bound='upper',norm=False), n.input_redundancy(mode='node',bound='lower',norm=False)))
print('k_e: %.2f - %.2f' % (n.effective_connectivity(mode='node',bound='upper',norm=False), n.effective_connectivity(mode='node',bound='lower',norm=False)))
print('k_s: %.2f - %.2f' % (n.input_symmetry(mode='node',bound='upper',norm=False), n.input_symmetry(mode='node',bound='lower',norm=False)))
print()
print('k_r: %s (upper)' % n.input_redundancy(mode='input',bound='upper'))
print('k_e: %s (upper)' % n.effective_connectivity(mode='input',bound='upper'))
print()
dfLUT, dfPI, dfTS = n.look_up_table(), n.schemata_look_up_table(type='pi'), n.schemata_look_up_table(type='ts')
display(pd.concat({'Original LUT':dfLUT,'PI Schema':dfPI,'TS Schema':dfTS}, axis=1).fillna('-'))
draw_canalizing_map_graphviz(n.canalizing_map())

n = RULE110()
n.name = 'R110'
print(n)
print('k_r: %.2f - %.2f' %
(n.input_redundancy(mode='node',bound='upper',norm=False), n.input_redundancy(mode='node',bound='lower',norm=False))) print( 'k_e: %.2f - %.2f' % (n.effective_connectivity(mode='node',bound='upper',norm=False), n.effective_connectivity(mode='node',bound='lower',norm=False))) print( 'k_s: %.2f - %.2f' % (n.input_symmetry(mode='node',bound='upper',norm=False), n.input_symmetry(mode='node',bound='lower',norm=False))) print() print( 'k_r: %s (upper)' % n.input_redundancy(mode='input',bound='upper')) print( 'k_e: %s (upper)' % n.input_redundancy(mode='input',bound='upper')) print() dfLUT, dfPI, dfTS = n.look_up_table(), n.schemata_look_up_table(type='pi'), n.schemata_look_up_table(type='ts') display(pd.concat({'Original LUT':dfLUT,'PI Schema':dfPI,'TS Schema':dfTS}, axis=1).fillna('-')) draw_canalizing_map_graphviz(n.canalizing_map()) ```
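As a library-independent aside (plain Python, not part of CANA), the "Original LUT" columns displayed above are ordinary truth tables, which can be enumerated directly for functions such as AND and XOR:

```python
from itertools import product

def truth_table(f, k):
    """Enumerate the truth table of a k-input Boolean function f."""
    return {bits: f(*bits) for bits in product([0, 1], repeat=k)}

and_lut = truth_table(lambda a, b: a & b, 2)
xor_lut = truth_table(lambda a, b: a ^ b, 2)

for bits, out in sorted(and_lut.items()):
    print(bits, '->', out)
```

CANA builds its PI and TS schemata by compressing exactly this kind of table, so a hand-rolled version is a useful cross-check on small functions.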
# Simulate and Generate Empirical Distributions in Python

## Mini-Lab: Simulations, Empirical Distributions, Sampling

Welcome to your next mini-lab! Go ahead and run the following cell to get started. You can do that by clicking on the cell and then clicking `Run` on the top bar. You can also just press `Shift` + `Enter` to run the cell.

```
from datascience import *
import numpy as np
import random
import otter

grader = otter.Notebook("m6_l1_tests")
```

Let's continue our analysis of COVID-19 data with the same false negative and false positive values of 10% and 5%. For the first task, let's try and create a sample population with 10,000 people. Let's say that 20% of this population has COVID-19. Replace the `...` in the function below to create this sample population. The `create_population` function takes in an input `n` and returns a table with `n` rows. These rows can have either `positive` or `negative` as their value, indicating whether or not an individual has COVID-19. For random number generation, feel free to look up the [NumPy documentation](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.random.html) or the [Python random documentation](https://docs.python.org/3.8/library/random.html).

```
def create_population(n):
    test_results = ...
    for ...:
        random_num = ...
        if ...:
            disease_result = ...
        else:
            disease_result = ...
        test_results = np.append(test_results, disease_result)
    return Table().with_column("COVID-19", test_results)

covid_population = create_population(...)
covid_population.show(5)

# There is a chance that this test may fail even with a correct solution due to randomness!
# Run the above cell again and run the grader again if you think this is the case.
grader.check("q1")
```

Given this population, let's go ahead and randomly test 1000 members. Complete `test_population` below by replacing the `...` with functional code.
This function takes in a `population`, which is a `datascience` table, and a number `n`, where `n` is the number of people that we are testing. Inside the function, we add a column to this table called `Test Results` which contains the test result for each person in the sample based on the false negative and false positive rates given earlier. There is another function called `test_individuals` that simplifies `test_population`. You will use `test_individuals` within `test_population`.

```
def test_population(population, n):
    population = ...
    test_results = population.apply(test_individuals, "COVID-19")
    population = population.with_column(...)
    return population

def test_individuals(individual):
    random_num = ...
    if individual == "positive":
        if ...:
            return ...
        else:
            return ...
    else:
        if ...:
            return ...
        else:
            return ...

covid_sample = ...
covid_sample.show(5)

# There is a chance that this test may fail even with a correct solution due to randomness!
# Run the above cell again and run the grader again if you think this is the case.
grader.check("q2")
```

Now that we've simulated a population and sampled this population, let's take a look at our results. We'll pivot first by the `COVID-19` column and then by the `Test Results` column to look at how well our COVID-19 test does using "real-life" figures.

```
covid_sample.pivot("COVID-19", "Test Results")
```

You'll see that though our test correctly identifies the disease most of the time, there are still some instances where our test gets it wrong. It is impossible for a test to have both a 0% false negative rate and a 0% false positive rate. In the case of this disease and testing, which should we prioritize: driving down the false positive rate, or driving down the false negative rate? Is there a reason why one should be prioritized over the other? There is no simple answer to these questions, and as data scientists, we'll have to grapple with these issues ourselves and navigate the complex web we call life.
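For intuition about what the pivot table should roughly show, the expected cell counts can be worked out by hand. This is a sketch assuming exactly 20% prevalence, a 10% false negative rate, and a 5% false positive rate in a sample of 1,000 (the simulated counts will fluctuate around these values):

```python
n = 1000
prevalence, fn_rate, fp_rate = 0.20, 0.10, 0.05

n_pos = n * prevalence        # truly positive people in the sample
n_neg = n * (1 - prevalence)  # truly negative people in the sample

# keys are (true status, test result)
expected = {
    ("positive", "positive"): n_pos * (1 - fn_rate),  # true positives
    ("positive", "negative"): n_pos * fn_rate,        # false negatives
    ("negative", "negative"): n_neg * (1 - fp_rate),  # true negatives
    ("negative", "positive"): n_neg * fp_rate,        # false positives
}
print(expected)  # roughly 180 / 20 / 760 / 40
```

Comparing your simulated pivot table against these expected counts is a quick sanity check on the fill-in code above.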
Congratulations on finishing! Run the next cell to make sure that you passed all of the test cases. ``` grader.check_all() ```
# Lab 11 CNN (Convolutional Neural Network)

## Lab11-0-cnn_basics

```
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

sess = tf.InteractiveSession()
image = np.array([[[[1],[2],[3]],
                   [[4],[5],[6]],
                   [[7],[8],[9]]]], dtype=np.float32)
print(image.shape)
plt.imshow(image.reshape(3,3), cmap='Greys')
```

## 1 filter (2,2,1,1) with padding: VALID

weight.shape = 1 filter (2, 2, 1, 1)

![image](https://cloud.githubusercontent.com/assets/901975/24833375/c0d9c262-1cf9-11e7-9efc-5dd6fe0fedb0.png)

```
print("image:\n", image)  # inspect the image contents
print("image.shape", image.shape)

weight = tf.constant([[[[1.]],[[1.]]],
                      [[[1.]],[[1.]]]])
print("weight.shape", weight.shape)

conv2d = tf.nn.conv2d(image, weight, strides=[1, 1, 1, 1], padding='VALID')
# no padding => each spatial dimension shrinks by 1 compared to the input
conv2d_img = conv2d.eval()
print("conv2d_img.shape", conv2d_img.shape)  # shape becomes 2x2x1
print(conv2d_img)                            # result of applying the single filter
conv2d_img = np.swapaxes(conv2d_img, 0, 3)   # swap axes 0 and 3 (just reorders for plotting; not strictly necessary)
# print("conv2_img transpose\n", conv2d_img) # check that the axes were swapped
for i, one_img in enumerate(conv2d_img):     # enumerate(list): yields each index and element together
    print(one_img.reshape(2,2))
    plt.subplot(1,2,i+1), plt.imshow(one_img.reshape(2,2), cmap='gray')
    # plt.subplot(1,2,i+1): same role as R's par(mfrow=c(1,2))
```

## 1 filter (2,2,1,1) with padding: SAME

![image](https://cloud.githubusercontent.com/assets/901975/24833381/fd01869e-1cf9-11e7-9d59-df08c7c6e5c4.png)

```
# print("image:\n", image)
print("image.shape", image.shape)

weight = tf.constant([[[[1.]],[[1.]]],
                      [[[1.]],[[1.]]]])
print("weight.shape", weight.shape)

conv2d = tf.nn.conv2d(image, weight, strides=[1, 1, 1, 1], padding='SAME')
# zero padding => the shape is the same before and after the convolution
conv2d_img = conv2d.eval()
print("conv2d_img.shape", conv2d_img.shape)  # 3x3x1
conv2d_img = np.swapaxes(conv2d_img, 0, 3)
for i, one_img in enumerate(conv2d_img):
    print(one_img.reshape(3,3))
    plt.subplot(1,2,i+1), plt.imshow(one_img.reshape(3,3), cmap='gray')
```

## 3 filters (2,2,1,3)

```
# print("image:\n", image)
print("image.shape", image.shape)

weight = tf.constant([[[[1.,10.,-1.]],[[1.,10.,-1.]]],
                      [[[1.,10.,-1.]],[[1.,10.,-1.]]]])
print("weight.shape", weight.shape)  # three 2x2 weights (3 filters)

conv2d = tf.nn.conv2d(image, weight, strides=[1, 1, 1, 1], padding='SAME')  # stride 1, zero padding
conv2d_img = conv2d.eval()
print("conv2d_img.shape", conv2d_img.shape)  # 3x3x3
conv2d_img = np.swapaxes(conv2d_img, 0, 3)
for i, one_img in enumerate(conv2d_img):
    print(one_img.reshape(3,3))
    plt.subplot(1,3,i+1), plt.imshow(one_img.reshape(3,3), cmap='gray')
```

## MAX POOLING

![image](https://cloud.githubusercontent.com/assets/901975/23337676/bd154da2-fc30-11e6-888c-d86bc2206066.png)
![image](https://cloud.githubusercontent.com/assets/901975/23340355/a4bd3c08-fc6f-11e6-8a99-1e3bbbe86733.png)

```
image = np.array([[[[4],[3]],
                   [[2],[1]]]], dtype=np.float32)
pool = tf.nn.max_pool(image, ksize=[1, 2, 2, 1], strides=[1, 1, 1, 1], padding='VALID')
print(pool.shape)
print(pool.eval())
```

## SAME: Zero paddings

![image](https://cloud.githubusercontent.com/assets/901975/23340337/71b27652-fc6f-11e6-96ef-760998755f77.png)

```
image = np.array([[[[4],[3]],
                   [[2],[1]]]], dtype=np.float32)
pool = tf.nn.max_pool(image, ksize=[1, 2, 2, 1], strides=[1, 1, 1, 1], padding='SAME')
print(pool.shape)
print(pool.eval())
```

### A simple CNN on MNIST data

```
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# Check out https://www.tensorflow.org/get_started/mnist/beginners for
# more information about the mnist dataset

img = mnist.train.images[0].reshape(28,28)
plt.imshow(img, cmap='gray')

# first convolution
sess = tf.InteractiveSession()
img = img.reshape(-1,28,28,1)
W1 = tf.Variable(tf.random_normal([3, 3, 1, 5], stddev=0.01))  # five 3x3 filters
conv2d = tf.nn.conv2d(img, W1, strides=[1, 2, 2, 1], padding='SAME')  # stride 2, zero padding
print(conv2d)
sess.run(tf.global_variables_initializer())
conv2d_img = conv2d.eval()
conv2d_img = np.swapaxes(conv2d_img, 0, 3)
for i, one_img in enumerate(conv2d_img):
    plt.subplot(1,5,i+1), plt.imshow(one_img.reshape(14,14), cmap='gray')

# pooling
pool = tf.nn.max_pool(conv2d, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# stride 2, zero padding, 2x2 max pooling
print(pool)
sess.run(tf.global_variables_initializer())
pool_img = pool.eval()
pool_img = np.swapaxes(pool_img, 0, 3)
for i, one_img in enumerate(pool_img):
    plt.subplot(1,5,i+1), plt.imshow(one_img.reshape(7, 7), cmap='gray')
```
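The output sizes printed above all follow the standard convolution shape rules: `SAME` pads with zeros so the output is `ceil(input / stride)`, while `VALID` uses only real pixels, giving `ceil((input - filter + 1) / stride)`. A small helper (plain Python, independent of TensorFlow) makes the difference explicit:

```python
import math

def conv_output_size(n, f, stride, padding):
    """Spatial output size of a conv/pool layer on an n x n input with an f x f window."""
    if padding == "SAME":
        return math.ceil(n / stride)
    elif padding == "VALID":
        return math.ceil((n - f + 1) / stride)
    raise ValueError("padding must be 'SAME' or 'VALID'")

print(conv_output_size(3, 2, 1, "VALID"))  # 3x3 image, 2x2 filter -> 2
print(conv_output_size(3, 2, 1, "SAME"))   # same image with zero padding -> 3
print(conv_output_size(28, 3, 2, "SAME"))  # the MNIST convolution above -> 14
print(conv_output_size(14, 2, 2, "SAME"))  # the MNIST max pool above -> 7
```

These last two values are exactly why the MNIST cell reshapes its feature maps to 14x14 after the convolution and 7x7 after pooling.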
# Experiment set up

1. Create dataset: a sequence of preprocessed examples ready to feed to the neural net
2. Create dataloader: define how the dataset is loaded into the neural net (batch size, order, computation optimizing ...)
3. Create model: a stack of matrix operations that transform an input tensor into an output tensor
4. Training:
    + Forward
    + Calculate batch loss and metrics
    + Backward

# Import necessary packages

```
import os
import glob
import sys
import random
import matplotlib.pylab as plt
from PIL import Image, ImageDraw
import torch
from torch.utils.data import Dataset
import torchvision.transforms.functional as TF
import numpy as np
from sklearn.model_selection import ShuffleSplit

torch.manual_seed(0)
np.random.seed(0)
random.seed(0)
%matplotlib inline

sys.path.insert(0, '..')
from src.models.utils import FaceDataset
```

# Create a transformer

```
def resize_img_label(image, label, target_size=(256,256)):
    w_orig, h_orig = image.size
    w_target, h_target = target_size
    # resize image and label
    image_new = TF.resize(image, target_size)
    return image_new, label

def transformer(image, label, params):
    image, label = resize_img_label(image, label, params["target_size"])
    image = TF.to_tensor(image)
    return image, label
```

# Create Data loader

```
trans_params_train = {
    "target_size": (112, 112),
}
trans_params_val = {
    "target_size": (112, 112),
}

path2data = "/home/Data/appa-real/processed/"

# create data set
train_ds = FaceDataset(path2data + "train.csv", transformer, trans_params_train)
val_ds = FaceDataset(path2data + "valid.csv", transformer, trans_params_val)
print(len(train_ds))
print(len(val_ds))

import matplotlib.pyplot as plt

def show(img, label=None):
    npimg = img.numpy().transpose((1,2,0))
    plt.imshow(npimg)
    if label is not None:
        label = label.view(-1,2)
        for point in label:
            x, y = point
            plt.plot(x, y, 'b+', markersize=10)

plt.figure(figsize=(10,10))
for img, label in train_ds:
    show(img, label)
    break

plt.figure(figsize=(10,10))
for img, label in val_ds:
    show(img, label)
    break

from torch.utils.data import DataLoader
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)
val_dl = DataLoader(val_ds, batch_size=256, shuffle=False)

for img_b, label_b in train_dl:
    print(img_b.shape, img_b.dtype)
    print(label_b.shape)
    break

for img, label in val_dl:
    print(label.shape)
    break
```

# Create Model

```
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self, params):
        super(Net, self).__init__()

    def forward(self, x):
        return x

def __init__(self, params):
    super(Net, self).__init__()
    C_in, H_in, W_in = params["input_shape"]
    init_f = params["initial_filters"]
    num_outputs = params["num_outputs"]
    self.conv1 = nn.Conv2d(C_in, init_f, kernel_size=3, stride=2, padding=1)
    self.conv2 = nn.Conv2d(init_f+C_in, 2*init_f, kernel_size=3, stride=1, padding=1)
    self.conv3 = nn.Conv2d(3*init_f+C_in, 4*init_f, kernel_size=3, padding=1)
    self.conv4 = nn.Conv2d(7*init_f+C_in, 8*init_f, kernel_size=3, padding=1)
    self.conv5 = nn.Conv2d(15*init_f+C_in, 16*init_f, kernel_size=3, padding=1)
    self.fc1 = nn.Linear(16*init_f, num_outputs)

def forward(self, x):
    identity = F.avg_pool2d(x, 4, 4)
    x = F.relu(self.conv1(x))
    x = F.max_pool2d(x, 2, 2)
    x = torch.cat((x, identity), dim=1)

    identity = F.avg_pool2d(x, 2, 2)
    x = F.relu(self.conv2(x))
    x = F.max_pool2d(x, 2, 2)
    x = torch.cat((x, identity), dim=1)

    identity = F.avg_pool2d(x, 2, 2)
    x = F.relu(self.conv3(x))
    x = F.max_pool2d(x, 2, 2)
    x = torch.cat((x, identity), dim=1)

    identity = F.avg_pool2d(x, 2, 2)
    x = F.relu(self.conv4(x))
    x = F.max_pool2d(x, 2, 2)
    x = torch.cat((x, identity), dim=1)

    x = F.relu(self.conv5(x))
    x = F.adaptive_avg_pool2d(x, 1)
    x = x.reshape(x.size(0), -1)
    x = self.fc1(x)
    return x

# patch the placeholder methods defined above onto Net
Net.__init__ = __init__
Net.forward = forward

params_model = {
    "input_shape": (3, 112, 112),
    "initial_filters": 64,
    "num_outputs": 1,
}
model = Net(params_model)

device = torch.device("cuda")
model = model.to(device)
```

# Create optimizer

```
from torch import optim
from torch.optim.lr_scheduler import ReduceLROnPlateau

opt = optim.Adam(model.parameters(), lr=1e-3)
lr_scheduler = ReduceLROnPlateau(opt, mode='min', factor=0.5, patience=10, verbose=1)
```

# Training

```
from src.models import experiment

performance = experiment.Performance()
path2models = "../models/weights.pt"
params = experiment.Prams(num_epochs=30, path2weights=path2models,
                          device=device, optimizer=opt, lr_scheduler=lr_scheduler,
                          sanity_check=False)
pipeline = experiment.Pipeline(model, train_dl, val_dl, performance, params)
model, performance = pipeline.train_val()

loss_hist, metric_history = performance.loss_history, performance.metrics_history

# Train-Validation Progress
num_epochs = 30

# plot loss progress
plt.title("Train-Val Loss")
plt.plot(range(1, num_epochs+1), loss_hist["train"], label="train")
plt.plot(range(1, num_epochs+1), loss_hist["val"], label="val")
plt.ylabel("Loss")
plt.xlabel("Training Epochs")
plt.legend()
plt.show()

# plot accuracy progress
plt.title("Val mae")
plt.plot(range(1, num_epochs+1), metric_history["val"], label="val")
plt.plot(range(1, num_epochs+1), metric_history["train"], label="train")
plt.ylabel("MAE")
plt.xlabel("Training Epochs")
plt.legend()
plt.show()

min(metric_history["val"])
min(loss_hist["val"])
min(loss_hist["train"])
```
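The in-channel counts hard-coded in `Net` (e.g. `3*init_f+C_in`, `7*init_f+C_in`, `15*init_f+C_in`) come from each block concatenating its output with an average-pooled copy of its own input. The bookkeeping can be checked with simple arithmetic; this is a sketch independent of PyTorch, mirroring the `torch.cat((x, identity), dim=1)` pattern above:

```python
def skip_cat_channels(c_in, init_f, num_blocks=4):
    """Channels entering each conv when every block concatenates its pooled input."""
    in_channels = [c_in]              # conv1 sees the raw image
    prev_in, prev_out = c_in, init_f  # conv1 outputs init_f channels
    for b in range(1, num_blocks + 1):
        cat = prev_out + prev_in      # torch.cat((x, identity), dim=1)
        in_channels.append(cat)       # input to the next conv
        prev_in, prev_out = cat, init_f * 2 ** b
    return in_channels

print(skip_cat_channels(c_in=3, init_f=64))  # [3, 67, 195, 451, 963]
```

The printed list matches the in-channel arguments of `conv1` through `conv5` exactly, which is a quick way to verify the layer definitions before paying for a forward pass.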
Cognizant Data Science Summit 2020 : July 1, 2020

Yogesh Deshpande [157456]

# Week 1 challenge - Python

Description

The eight queens puzzle is the problem of placing eight chess queens on an 8×8 chessboard so that no two queens threaten each other; thus, a solution requires that no two queens share the same row, column, or diagonal. The eight queens puzzle is an example of the more general n queens problem of placing n non-attacking queens on an n×n chessboard. (Source: https://en.wikipedia.org/wiki/Eight_queens_puzzle)

Challenge

The challenge is to generate one right sequence through Genetic Programming. The sequence has to be 8 numbers between 0 and 7. Each number represents the row in which the Queen of that column is placed, for example:

0 3 4 5 6 1 2 4

• 0 is the row number in column 0 where the Queen can be placed
• 3 is the row number in column 1 where the Queen can be placed

# Initialize variables, function definitions

```
import random

# Set the variables as per the problem statement
NumberofQueens = 8
InitialPopulation = 1000000  # Initial population has a number of chromozones, out of which one or more are possible solutions
NumberofIterations = 1000    # Number of generations to check for a possible solution

def create_chromozone(NumberofQueens):
    chromozone = []
    for gene in range(NumberofQueens):
        chromozone.append(random.randint(0, NumberofQueens-1))
    return chromozone
    #print(chromozone)

# Unit testing
# create_chromozone(NumberofQueens)

def create_population(NumberofQueens, InitialPopulation):
    Population = []
    for chromozone in range(InitialPopulation):
        Population.append(create_chromozone(NumberofQueens))
    #print(Population)
    return Population

# Unit testing
#create_population(NumberofQueens, InitialPopulation)

def fitness_calculation(chromosome, maxFitness):
    horizontal_collisions = sum([chromosome.count(i) - 1 for i in chromosome])/2
    diagonal_collisions = 0
    for record in range(1, len(chromosome)+1):
        column1 = record-1
        row1 = chromosome[column1]
        for i in range(column1+1, len(chromosome)):
            column2 = i
            row2 = chromosome[i]
            deltaRow = abs(row1 - row2)
            deltaCol = abs(column1 - column2)
            if (deltaRow == deltaCol):
                #print("######## Collision detected ##############")
                diagonal_collisions = diagonal_collisions + 1
    #print("Horizontal Collisions are {} and Diagonal are {} ".format(horizontal_collisions, diagonal_collisions))
    fitness_score = maxFitness - (horizontal_collisions + diagonal_collisions)
    #print("The fitness score is {}".format(fitness_score))
    return fitness_score

# Unit Test
#fitness_calculation([4, 1, 5, 8, 2, 7, 3, 6], 28)

def strength_of_chromosome(chromosome, maxFitness):
    return fitness_calculation(chromosome, maxFitness) / maxFitness

# Unit Test
#strength_of_chromosome([1, 1, 1, 1, 1, 1, 1, 1], 28)
#strength_of_chromosome([4, 1, 5, 8, 2, 7, 3, 6], 28)
```

# Main Program for solution to get an 8-Queen sequence

```
# Main Program
if __name__ == "__main__":

    # Calculate the target fitness
    TargetFitness = (NumberofQueens * (NumberofQueens - 1)) / 2
    print("Maximum score to achieve is = {}".format(TargetFitness))

    # Initial population
    Population = create_population(NumberofQueens, InitialPopulation)

    generation_counter = 0
    for iteration in range(NumberofIterations):
        MaxPopulationScore = max([fitness_calculation(chromozone, TargetFitness) for chromozone in Population])
        print("generation counter = {}, MaxPopulationScore = {}".format(generation_counter, MaxPopulationScore))
        if (MaxPopulationScore != TargetFitness):
            # If the current population has no score matching the target score, continue with the next generation
            generation_counter = generation_counter + 1
        else:
            # Target score is achieved at this stage
            break

    print("Solved in generation {}".format(generation_counter+1))
    for chromosome in Population:
        if (fitness_calculation(chromosome, TargetFitness) == TargetFitness):
            print("Solution =======> {}".format(chromosome))

create_chromozone(8)
create_chromozone(8)
```
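The fitness logic above can be sanity-checked against a known non-attacking arrangement. Below is a compact stand-alone collision counter mirroring `fitness_calculation` (same row/diagonal tests, using 0-7 row indices); a perfect board has 0 colliding pairs, so its fitness would equal the target of 28:

```python
def collisions(board):
    """Count attacking queen pairs; board[i] is the row of the queen in column i."""
    pairs = 0
    n = len(board)
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            same_row = board[c1] == board[c2]
            same_diag = abs(board[c1] - board[c2]) == abs(c1 - c2)
            pairs += same_row or same_diag
    return pairs

solution = [0, 4, 7, 5, 2, 6, 1, 3]  # one known 8-queens solution
print(collisions(solution))          # 0 -> fitness would be the full 28
print(collisions([0] * 8))           # all queens in row 0 -> all C(8,2) = 28 pairs collide
```

Feeding such boards through `fitness_calculation(board, 28)` should give 28 and 0 respectively, which is an easy regression test before running the full genetic loop.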
# Language Translation

In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.

## Get the Data

Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.

```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests

source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
```

## Explore the Data

Play around with view_sentence_range to view different parts of the data.

```
view_sentence_range = (0, 10)

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))

sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))

print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
```

## Implement Preprocessing Function

### Text to Word Ids

As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`. This will help the neural network predict when the sentence should end.
You can get the `<EOS>` word id by doing:

```python
target_vocab_to_int['<EOS>']
```

You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`.

```
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param source_text: String that contains all the source text.
    :param target_text: String that contains all the target text.
    :param source_vocab_to_int: Dictionary to go from the source words to an id
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: A tuple of lists (source_id_text, target_id_text)
    """
    source_id_text = []
    target_id_text = []
    for line in source_text.splitlines():
        source_id_text.append([source_vocab_to_int[word] for word in line.split()])
    EOS = target_vocab_to_int['<EOS>']
    for line in target_text.splitlines():
        target_id_text.append([target_vocab_to_int[word] for word in line.split()] + [EOS])
    return source_id_text, target_id_text

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
```

### Preprocess all the data and save it

Running the code cell below will preprocess all the data and save it to file.

```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
```

# Check Point

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper

(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
```

### Check the Version of TensorFlow and Access to GPU

This will check to make sure you have the correct version of TensorFlow and access to a GPU

```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
```

## Build the Neural Network

You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- `model_inputs`
- `process_decoder_input`
- `encoding_layer`
- `decoding_layer_train`
- `decoding_layer_infer`
- `decoding_layer`
- `seq2seq_model`

### Input

Implement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:

- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
- Targets placeholder with rank 2.
- Learning rate placeholder with rank 0.
- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
- Target sequence length placeholder named "target_sequence_length" with rank 1
- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
- Source sequence length placeholder named "source_sequence_length" with rank 1

Return the placeholders in the following tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)

```
def model_inputs():
    """
    Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
    :return: Tuple (input, targets, learning rate, keep probability, target sequence length,
    max target sequence length, source sequence length)
    """
    input_ = tf.placeholder(tf.int32, shape=[None, None], name="input")
    targets = tf.placeholder(tf.int32, shape=[None, None], name="targets")
    learning_rate = tf.placeholder(tf.float32, shape=None)
    keep_prob = tf.placeholder(tf.float32, shape=None, name="keep_prob")
    target_sequence_length = tf.placeholder(tf.int32, shape=[None], name="target_sequence_length")
    max_target_len = tf.reduce_max(target_sequence_length, name="max_target_len")
    source_sequence_length = tf.placeholder(tf.int32, shape=[None], name="source_sequence_length")
    return input_, targets, learning_rate, keep_prob, target_sequence_length, max_target_len, source_sequence_length

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
```

### Process Decoder Input

Implement `process_decoder_input` by removing the last word id from each batch in `target_data` and concatenating the GO ID to the beginning of each batch.
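On plain Python lists the transformation looks like this (a sketch of the idea only; the graded implementation operates on tensors with `tf.strided_slice`, `tf.fill`, and `tf.concat`, and `GO` below is a stand-in id, not the real `target_vocab_to_int['<GO>']`):

```python
GO = 1  # hypothetical stand-in for target_vocab_to_int['<GO>']

def process_decoder_input_lists(target_batch, go_id):
    """Drop the last id of each sequence and prepend the <GO> id."""
    return [[go_id] + seq[:-1] for seq in target_batch]

batch = [[12, 7, 9, 2],   # 2 playing the role of <EOS>
         [5, 5, 8, 2]]
print(process_decoder_input_lists(batch, GO))  # [[1, 12, 7, 9], [1, 5, 5, 8]]
```

The point of the shift is that at each decoder step the model is fed the previous target word, starting from `<GO>`, while the loss is still computed against the unshifted targets.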
``` def process_decoder_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for encoding :param target_data: Target Placehoder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ batches = tf.strided_slice(target_data, [0,0], [batch_size, -1], [1, 1]) padding = tf.fill(dims=[batch_size, 1], value=target_vocab_to_int['<GO>']) preprocessed_target_data = tf.concat(values=[padding, batches], axis=1) return preprocessed_target_data """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_encoding_input(process_decoder_input) ``` ### Encoding Implement `encoding_layer()` to create a Encoder RNN layer: * Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence) * Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.md#stacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper) * Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) ``` from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) 
""" def build_cell(): lstm = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) return drop embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size) cell = tf.contrib.rnn.MultiRNNCell([build_cell() for _ in range(num_layers)]) output, state = tf.nn.dynamic_rnn(cell, embed_input, source_sequence_length, dtype=tf.float32) return output, state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) ``` ### Decoding - Training Create a training decoding layer: * Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper) * Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder) * Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ``` def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id """ training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, sequence_length=target_sequence_length, time_major=False) training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer) training_decoder_output, _ = 
tf.contrib.seq2seq.dynamic_decode(training_decoder, impute_finished=True, maximum_iterations=max_summary_length) return training_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) ``` ### Decoding - Inference Create inference decoder: * Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper) * Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder) * Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode) ``` def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id """ start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens') inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id) inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer) inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length) return inference_decoder_output """ DON'T 
MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) ``` ### Build the Decoding Layer Implement `decoding_layer()` to create a Decoder RNN layer. * Embed the target sequences * Construct the decoder LSTM cell (just like you constructed the encoder cell above) * Create an output layer to map the outputs of the decoder to the elements of our vocabulary * Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits. * Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits. Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference. ``` def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): """ Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ def build_cell(): lstm = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)) drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop dec_cell = tf.contrib.rnn.MultiRNNCell([build_cell() for _ in range(num_layers)]) start_of_sequence_id = target_vocab_to_int["<GO>"] end_of_sequence_id = target_vocab_to_int['<EOS>'] vocab_size = len(target_vocab_to_int) dec_embeddings = tf.Variable(tf.random_uniform([vocab_size, decoding_embedding_size])) dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input) output_layer = Dense(target_vocab_size, kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1)) with tf.variable_scope("decode"):# as decoding_scope: training_decoder_output = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) # decoding_scope.reuse_variables with tf.variable_scope("decode", reuse=True): inference_decoder_output = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) ``` ### Build the Neural Network Apply the functions you implemented above to: - Apply embedding to the input data for the encoder. - Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`. - Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function. - Apply embedding to the target data for the decoder. - Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function. 
``` def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) """ _, encoder_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size) dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size) training_decoder_output, inference_decoder_output = decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) return training_decoder_output, inference_decoder_output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_seq2seq_model(seq2seq_model) ``` ## Neural Network Training ### Hyperparameters Tune the following parameters: - Set `epochs` to the number of epochs. - Set `batch_size` to the batch size. - Set `rnn_size` to the size of the RNNs. - Set `num_layers` to the number of layers.
- Set `encoding_embedding_size` to the size of the embedding for the encoder. - Set `decoding_embedding_size` to the size of the embedding for the decoder. - Set `learning_rate` to the learning rate. - Set `keep_probability` to the Dropout keep probability - Set `display_step` to state how many steps between each debug output statement ``` # Number of Epochs epochs = 8 # Batch Size batch_size = 128 # RNN Size rnn_size = 256 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 200 decoding_embedding_size = 200 # Learning Rate learning_rate = 0.0005 # Dropout Keep Probability keep_probability = 0.75 display_step = 20 ``` ### Build the Graph Build the graph using the neural network you implemented. ``` """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = 
tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) ``` Batch and pad the source and target sequences ``` """ DON'T MODIFY ANYTHING IN THIS CELL """ def pad_sentence_batch(sentence_batch, pad_int): """Pad sentences with <PAD> so that each sentence of a batch has the same length""" max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): """Batch targets, sources, and the lengths of their sentences together""" for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths ``` ### Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
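As a quick sanity check of the padding logic, here is a toy run of `pad_sentence_batch` (the function is repeated so the snippet stands alone; the word ids are made up, with `<PAD>` assumed to map to id 0):

```python
def pad_sentence_batch(sentence_batch, pad_int):
    """Pad sentences with <PAD> so that each sentence of a batch has the same length"""
    max_sentence = max([len(sentence) for sentence in sentence_batch])
    return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]

# Three "sentences" of word ids with different lengths
batch = [[5, 6, 7], [8, 9], [10]]
print(pad_sentence_batch(batch, 0))  # [[5, 6, 7], [8, 9, 0], [10, 0, 0]]
```

Every sentence is right-padded up to the longest sentence in the batch, which is what lets `get_batches` stack them into a rectangular NumPy array.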
``` """ DON'T MODIFY ANYTHING IN THIS CELL """ def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation 
Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') ``` ### Save Parameters Save the `batch_size` and `save_path` parameters for inference. ``` """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) ``` # Checkpoint ``` """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() ``` ## Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences. - Convert the sentence to lowercase - Convert words into ids using `vocab_to_int` - Convert words not in the vocabulary, to the `<UNK>` word id. ``` def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ index = [vocab_to_int.get(word.lower(), vocab_to_int["<UNK>"]) for word in sentence.split()] return index """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) ``` ## Translate This will translate `translate_sentence` from English to French. ``` translate_sentence = 'he saw a old yellow truck .' 
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) ``` ## Imperfect Translation You might notice that some sentences translate better than others. Since the dataset you're using only has a vocabulary of 227 English words out of the thousands that you use, you're only going to see good results using these words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data. You can train on the [WMT10 French-English corpus](http://www.statmt.org/wmt10/training-giga-fren.tar). This dataset has a larger vocabulary and is richer in the topics discussed. However, it will take days to train, so make sure you have a GPU and that the neural network is performing well on the dataset we provided.
Just make sure you play with the WMT10 corpus after you've submitted this project. ## Submitting This Project When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_language_translation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
# 🔬 Sequence Comparison of DNA using `BioPython` ### 🦠 `Covid-19`, `SARS`, `MERS`, and `Ebola` #### Analysis Techniques: * Compare their DNA sequence and Protein (Amino Acid) sequence * GC Content * Frequency of Each Amino Acid * Find similarity between them * Alignment * Hamming distance * 3D structure of each | DNA Sequence | Datasource | |:-----------------|:--------------------------------------------------------------| | Latest Sequence | https://www.ncbi.nlm.nih.gov/genbank/sars-cov-2-seqs/ | | Wuhan-Hu-1 | https://www.ncbi.nlm.nih.gov/nuccore/MN908947.3?report=fasta | | Covid19 | https://www.ncbi.nlm.nih.gov/nuccore/NC_045512.2?report=fasta | | SARS | https://www.ncbi.nlm.nih.gov/nuccore/NC_004718.3?report=fasta | | MERS | https://www.ncbi.nlm.nih.gov/nuccore/NC_019843.3?report=fasta | | EBOLA | https://www.ncbi.nlm.nih.gov/nuccore/NC_002549.1?report=fasta | ### 1. Analysis Techniques ``` import warnings warnings.filterwarnings('ignore') import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline from Bio.Seq import Seq # Create our sequence seq1 = Seq('ACTCGA') seq2 = Seq('AC') ``` #### GC Content in DNA * `GC-content` (or guanine-cytosine content) is the **percentage of nitrogenous bases** in a DNA or RNA molecule that are either guanine (`G`) or cytosine (`C`) #### Usefulness * In polymerase chain reaction (PCR) experiments, the GC-content of short oligonucleotides known as primers is often used to predict their **annealing temperature** to the template DNA. * A `high` GC-content level indicates a relatively higher melting temperature. * DNA with `low` GC-content is less stable than DNA with high GC-content > Question: which sequence is more stable when heat is applied?
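Before leaning on Biopython, it helps to see what the GC percentage actually computes. A minimal hand-rolled sketch (plain Python; equivalent in spirit to `Bio.SeqUtils.GC` for simple unambiguous sequences):

```python
def gc_content(seq):
    """Percentage of bases in seq that are guanine (G) or cytosine (C)."""
    seq = str(seq).upper()
    gc = sum(1 for base in seq if base in 'GC')
    return 100 * gc / len(seq)

print(gc_content('ACTCGA'))  # 50.0 -- seq1 above: 3 of 6 bases are G/C
print(gc_content('AC'))      # 50.0 -- seq2 above: 1 of 2 bases is C
```

Note that both toy sequences happen to have the same GC percentage, so by this measure alone they would be predicted to be comparably heat stable.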
``` from Bio.SeqUtils import GC # Check GC (guanine-cytosine) percentage in sequence print(f"{GC(seq1)}% \t({seq1})") print(f"{GC(seq2)}% \t({seq2})") ``` ### Sequence Alignment * `Global alignment` finds the best concordance/agreement between all characters in two sequences * `Local Alignment` finds just the subsequences that align the best ``` from Bio import pairwise2 from Bio.pairwise2 import format_alignment print('seq1 =', seq1, '\nseq2 =', seq2, '\n\n') # Global alignment alignments = pairwise2.align.globalxx(seq1, seq2) print(f'Alignments found: {len(alignments)}') print(*alignments) # Print nicely print(format_alignment(*alignments[0])) # 2nd alignment print(format_alignment(*alignments[1])) # To see all possible alignments for a in alignments: print(format_alignment(*a), '\n') # Get the number of possible sequence alignments alignment_score = pairwise2.align.globalxx(seq1,seq2,one_alignment_only=True,score_only=True) alignment_score ``` #### Sequence Similarity * Fraction of nucleotides that is the same/ total number of nucleotides * 100% ``` alignment_score/len(seq1)*100 ``` ### Hamming Distance: `How Many Subsitutions are Required to Match Two Sequences?` * Hamming distance between two strings of equal length is the number of positions at which the corresponding symbols are different. * In other words, it measures the minimum number of substitutions required to change one string into the other, or the minimum number of errors that could have transformed one string into the other * It is used for error detection or error correction * It is used to quantify the similarity of DNA sequences #### Edit Distance * Is a way of quantifying how dissimilar two strings (e.g., words) are to one another by counting the minimum number of operations required to transform one string into the other (e.g. 
Levenshtein distance) ``` def hamming_distance(lhs, rhs): return len([(x,y) for x,y in zip(lhs,rhs) if x != y]) hamming_distance('TT', 'ACCTA') def hammer_time(s1, s2, verbose=True): """Take two nucleotide sequences s1 and s2, and display the possible alignments and hamming distance. """ if verbose: print('s1 =', s1, '\ns2 =', s2, '\n\n') print('Hamming Distance:', hamming_distance(s1, s2), '\n(min substitutions for sequences to match)') print('\nAlignment Options:\n\n') alignments = pairwise2.align.globalxx(s1, s2) for a in alignments: print(format_alignment(*a), '\n') s1 = 'ACTCGAA' s2 = 'ACGA' hammer_time(s1, s2) ``` ### Dot Plot * A dot plot is a graphical method that allows the **comparison of two biological sequences** and identify regions of **close similarity** between them. * Simplest explanation: put a dot wherever sequences are identical #### Usefulness Dot plots can also be used to visually inspect sequences for - Direct or inverted repeats - Regions with low sequence complexity - Similar regions - Repeated sequences - Sequence rearrangements - RNA structures - Gene order Acknowledgement: https://stackoverflow.com/questions/40822400/how-to-create-a-dotplot-of-two-dna-sequence-in-python ``` def delta(x,y): return 0 if x == y else 1 def M(seq1,seq2,i,j,k): return sum(delta(x,y) for x,y in zip(seq1[i:i+k],seq2[j:j+k])) def makeMatrix(seq1,seq2,k): n = len(seq1) m = len(seq2) return [[M(seq1,seq2,i,j,k) for j in range(m-k+1)] for i in range(n-k+1)] def plotMatrix(M,t, seq1, seq2, nonblank = chr(0x25A0), blank = ' '): print(' |' + seq2) print('-'*(2 + len(seq2))) for label,row in zip(seq1,M): line = ''.join(nonblank if s < t else blank for s in row) print(label + '|' + line) def dotplot(seq1,seq2,k = 1,t = 1): M = makeMatrix(seq1,seq2,k) plotMatrix(M, t, seq1,seq2) #experiment with character choice # The dot plot: put a dot where the two sequences are identical s1 = 'ACTCGA' s2 = 'AC' dotplot(s1, s2) # Identical proteins will show a diagonal line. 
s1 = 'ACCTAG' s2 = 'ACCTAG' dotplot(s1, s2) print('\n\n') hammer_time(s1, s2, verbose=False) ``` # 🔬 2. Comparative Analysis of Virus DNA ### 🦠 `Covid-19`, `SARS`, `MERS`, `Ebola` * Covid19(`SARS-CoV2`) is a novel coronavirus identified as the cause of coronavirus disease 2019 (COVID-19) that began in Wuhan, China in late 2019 and spread worldwide. * MERS(`MERS-CoV`) was identified in 2012 as the cause of Middle East respiratory syndrome (MERS). * SARS(`SARS-CoV`) was identified in 2002 as the cause of an outbreak of severe acute respiratory syndrome (SARS). #### `fasta` DNA Sequence Files * Covid19 : https://www.rcsb.org/3d-view/6LU7 * SARS: https://www.ncbi.nlm.nih.gov/nuccore/NC_004718.3?report=fasta * MERS: https://www.ncbi.nlm.nih.gov/nuccore/NC_019843.3?report=fasta * EBOLA:https://www.rcsb.org/structure/6HS4 ``` import pandas as pd import numpy as np from Bio import SeqIO covid = SeqIO.read("../data/01_COVID_MN908947.3.fasta","fasta") mers = SeqIO.read("../data/02_MERS_NC_019843.3.fasta","fasta") sars = SeqIO.read("../data/03_SARS_rcsb_pdb_5XES.fasta","fasta") ebola = SeqIO.read("../data/04_EBOLA_rcsb_pdb_6HS4.fasta","fasta") # Convert imports to BioPython sequences covid_seq = covid.seq mers_seq = mers.seq sars_seq = sars.seq ebola_seq = ebola.seq # Create dataframe df = pd.DataFrame({'name': ['COVID19', 'MERS', 'SARS', 'EBOLA'], 'seq': [covid_seq, mers_seq, sars_seq, ebola_seq]}) df ``` #### Length of Each Genome ``` df['len'] = df.seq.apply(lambda x: len(x)) df[['name', 'len']].sort_values('len', ascending=False) \ .style.bar(color='#cde8F6', vmin=0, width=100, align='left') ``` * `MERS`, `COVID` and `SARS` all have about the same genome length (30,000 base pairs) #### Which of them is more heat stable? 
``` # Check the GC content df['gc_content'] = df.seq.apply(lambda x: GC(x)) df[['name', 'gc_content']].sort_values('gc_content', ascending=False) \ .style.bar(color='#cde8F6', vmin=0) ``` * `MERS` is the most stable with a GC of `41.24` followed by Ebola #### Translate RNA into proteins How many proteins are in each DNA sequence? ``` # Translate the RNA into Proteins df['proteins'] = df.seq.apply(lambda s: len(s.translate())) df[['name', 'proteins']].sort_values('proteins', ascending=False) \ .style.bar(color='#cde8F6', vmin=0) ``` #### How Many Amino Acids are Created? ``` from Bio.SeqUtils.ProtParam import ProteinAnalysis from collections import Counter # Translate each genome into its protein (amino acid) sequence covid_protein = covid_seq.translate() mers_protein = mers_seq.translate() sars_protein = sars_seq.translate() ebola_protein = ebola_seq.translate() # Method 1 covid_analysed = ProteinAnalysis(str(covid_protein)) mers_analysed = ProteinAnalysis(str(mers_protein)) sars_analysed = ProteinAnalysis(str(sars_protein)) ebola_analysed = ProteinAnalysis(str(ebola_protein)) # Check the frequency of each amino acid covid_analysed.count_amino_acids() # Method 2 # Find the Amino Acid Frequency df['aa_freq'] = df.seq.apply(lambda s: Counter(s.translate())) df ``` #### Most Common Amino Acid ``` # For Covid df[df.name=='COVID19'].aa_freq.values[0].most_common(10) # Plot the Amino Acids of COVID-19 aa = df[df.name=='COVID19'].aa_freq.values[0] plt.bar(aa.keys(), aa.values()) # All viruses -- same chart (not stacked) for virus in df.name: aa = df[df.name==virus].aa_freq.values[0] plt.bar(aa.keys(), aa.values()) plt.show() ``` ### Dot Plots of Opening Sequences ``` # COVID and MERS dotplot(covid_seq[0:10],mers_seq[0:10]) # COVID and SARS n = 10 dotplot(covid_seq[0:n],sars_seq[0:n]) # Plotting function to illustrate deeper matches def dotplotx(seq1, seq2, n): seq1=seq1[0:n] seq2=seq2[0:n] plt.imshow(np.array(makeMatrix(seq1,seq2,1))) # on x-axis list all sequences of seq 2 xt=plt.xticks(np.arange(len(list(seq2))),list(seq2)) # on y-axis list all sequences of seq 1 yt=plt.yticks(np.arange(len(list(seq1))),list(seq1)) plt.show() dotplotx(covid_seq,
sars_seq, n=100) ``` Notice the large diagonal line for the second half of the first 100 nucleotides - indicating these are the same for `COVID19` and `SARS` ``` dotplotx(covid_seq, ebola_seq, n=100) ``` No corresponding matches for `EBOLA` and `COVID` #### Calculate Pairwise Alignment for the First 100 Nucleotides ``` def pairwise_alignment(s1, s2, n): if n == 'full': n = min(len(s1), len(s2)) alignment = pairwise2.align.globalxx(s1[0:n], s2[0:n], one_alignment_only=True, score_only=True) print(f'Pairwise alignment: {alignment:.0f}/{n} ({(alignment/n)*100:0.1f}%)') # SARS and COVID pairwise_alignment(covid_seq, sars_seq, n=100) pairwise_alignment(covid_seq, sars_seq, n=10000) pairwise_alignment(covid_seq, sars_seq, n=len(sars_seq)) ``` * `82.9`% of the COVID19 genome is exactly the same as SARS ``` pairwise_alignment(covid_seq, mers_seq, n='full') pairwise_alignment(covid_seq, ebola_seq, n='full') ``` * `COVID19` and `SARS` have an `82.9`% similarity. Both are of the same genus and belong to `Sars_Cov`. * `COVID19` and `EBOLA` have a `65.3`% similarity since they are from a different virus family ### Example of the Opening Sequence of `COVID19` and `SARS` Sequencing found similar structure from `40:100`, so let's use our functions to visualise it. ``` s1 = covid_seq[40:100] s2 = sars_seq[40:100] print('Similarity matrix (look for diagonal)') dotplotx(s1, s2, n=100) print('Possible alignment pathways: \n\n') hammer_time(s1, s2, verbose=False) ```
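The edit (Levenshtein) distance was mentioned alongside the Hamming distance but never implemented above. A minimal dynamic-programming sketch (a hypothetical helper, not part of the original notebook) that, unlike `hamming_distance`, also handles sequences of unequal length:

```python
def levenshtein_distance(s1, s2):
    """Minimum number of insertions, deletions and substitutions
    needed to turn s1 into s2 (classic row-by-row dynamic programming)."""
    prev = list(range(len(s2) + 1))  # distances from '' to each prefix of s2
    for i, x in enumerate(s1, start=1):
        curr = [i]                   # distance from s1[:i] to ''
        for j, y in enumerate(s2, start=1):
            curr.append(min(prev[j] + 1,               # delete x
                            curr[j - 1] + 1,           # insert y
                            prev[j - 1] + (x != y)))   # substitute x -> y
        prev = curr
    return prev[-1]

# The same pair used with hammer_time() earlier: ACGA is a subsequence
# of ACTCGAA, so three deletions suffice
print(levenshtein_distance('ACTCGAA', 'ACGA'))  # 3
```

For equal-length sequences with no gaps, the edit distance can never exceed the Hamming distance, since substitutions alone are always an option.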
<span style="font-size:20pt;color:blue">Add title here</span> This is a sample file of interactive stopped-flow data analysis. You do <b>NOT</b> need to understand the Python language to use this program. By replacing file names and options with your own, you can easily produce figures and interactively adjust plotting options. It is strongly recommended to keep this file for reference, and make edits to a duplicate of this file. # import libraries and define functions <span style="color:red">Press Ctrl+Enter to run sections</span> ``` # import mpld3 # mpld3.enable_notebook() %matplotlib widget from sf_utils import * from uv_utils import * ``` # compare multiple inputs on selected lines In many cases, the same trace may be repeated several times. This section overlays the repeats for comparison. ``` rcParams['figure.figsize'] = [6, 4.5] csvfiles = [ 'average-sample-10s-1.csv', 'average-sample-10s-2.csv', 'average-sample-10s-3.csv', ] sfData = SFData.quickload_csv(csvfiles) sfData.plot_selected_kinetics() display(widgets.HBox([sfData.add_logx_button(), sfData.plot_scan_wavelength_button()])) sfData.plot_interactive_buttons_for_kinetics() ``` # save averaged files ``` csvfiles = [ 'average-sample-10s-1.csv', 'average-sample-10s-2.csv', 'average-sample-10s-3.csv', ] save_average(csvfiles) ``` # plot full spectra with more options ## plot with table ``` df = pd.DataFrame( columns=['csvfile', 'legend', 'shift', 'scale', 'color', 'timepoint'], data=[ ['average-sample-10s-ave.csv', '0.003s', 0, 1, 'black', 0.003], ['average-sample-10s-ave.csv', '0.01s', 0, 1, 'red', 0.01], ['average-sample-10s-ave.csv', '0.1s', 0, 1, 'blue', 0.1], ['average-sample-10s-ave.csv', '1s', 0, 1, 'orange', 1], ['average-sample-10s-ave.csv', '10s', 0, 1, 'green', 10], ] ) base = pd.DataFrame( columns=['csvfile', 'timepoint'], data=[ ['average-sample-10s-ave.csv', 0.002], ] ) # plot_from_df(df, valuetype='timepoint') # no subtraction plot_from_df(df, valuetype='timepoint', base=base) # subtract spectra at certain timepoint ``` ## plot with
variables (lower level API) ``` rcParams['figure.figsize'] = [6, 4.5] fig = plt.figure() axis = fig.gca() csvfiles = [ 'average-sample-10s-ave.csv', 'average-sample-10s-ave.csv', 'average-sample-10s-ave.csv', 'average-sample-10s-ave.csv', 'average-sample-10s-ave.csv', ] dfs = list(map(load_data, csvfiles)) df_base = dfs[0].iloc[1,:] dfs = [df - df_base for df in dfs] timepoints = [ [0.003], [0.01], [0.1], [1], [10] ] legends = [ ['0.003s'], ['0.01s'], ['0.1s'], ['1s'], ['10s'] ] shifts = [ [0], [0], [0], [0], [0] ] scales = [ [1], [1], [1], [1], [1], ] colors = [ ['black'], ['red'], ['blue'], ['orange'], ['green'] ] sfData = SFData( axis=axis, dfs=dfs, colors=colors, legends=legends, scales=scales, shifts=shifts, xlabel='Time (s)', ylabel='Abs', ) sfData.plot_selected_spectra(timepoints) # display(widgets.HBox([sfData.plot_scan_timepoint_button()])) sfData.plot_interactive_buttons_for_spectra() ``` # plot kinetic curves with more options ## plot with table ``` df = pd.DataFrame( columns=['csvfile', 'legend', 'shift', 'scale', 'color', 'wavelength'], data=[ ['average-sample-10s-ave.csv', '350nm', 0, 1, 'black', 350], ['average-sample-10s-ave.csv', '400nm', 0, 1, 'red', 400], ['average-sample-10s-ave.csv', '450nm', 0, 1, 'blue', 450], ] ) base = pd.DataFrame( columns=['csvfile', 'timepoint'], data=[ ['average-sample-10s-ave.csv', 0.002], ] ) # plot_from_df(df, valuetype='wavelength') plot_from_df(df, valuetype='wavelength', base=base) ``` ## plot with variables (lower level API) ``` rcParams['figure.figsize'] = [6, 4.5] fig = plt.figure() axis = fig.gca() csvfiles = [ 'average-sample-10s-ave.csv', 'average-sample-10s-ave.csv', 'average-sample-10s-ave.csv', ] dfs = list(map(load_data, csvfiles)) df_base = dfs[0].iloc[1,:] dfs = [df - df_base for df in dfs] wavelengths = [ [350], [400], [450], ] legends = [ ['350 nm'], ['400 nm'], ['450 nm'], ] shifts = [ [0], [0], [0], ] scales = [ [1], [1], [1], ] colors = [ ['black'], ['red'], ['blue'], ] sfData = SFData( axis=axis, 
dfs=dfs, colors=colors, legends=legends, scales=scales, shifts=shifts, xlabel='Time (s)', ylabel='Abs', ) sfData.plot_selected_kinetics(wavelengths) display(sfData.add_logx_button()) # display(widgets.HBox([sfData.add_logx_button(), sfData.plot_scan_wavelength_button()])) sfData.plot_interactive_buttons_for_kinetics() ``` # overview - kinetic curve and full spectra <span style="color:red">Warning: this section is slow</span> ``` rcParams['figure.figsize'] = [9, 4.5] csvfile = 'average-sample-10s-ave.csv' df = load_data(csvfile) (row, col) = (1, 2) fig, axs = plt.subplots(row, col, sharex=False, sharey=False) axis1 = axs[0] # first axis axis2 = axs[1] # second axis plot_all_kinetic(df, axis1) plot_all_spectra(df, axis2) ``` # overview - difference spectra <span style="color:red">Warning: this section is slow</span> ``` rcParams['figure.figsize'] = [9, 4.5] df = load_data(csvfile) baseCurve = df.iloc[1,:] # select the second time point as baseline df1 = df - baseCurve (row, col) = (1, 2) fig, axs = plt.subplots(row, col, sharex=False, sharey=False) axis1 = axs[0] # first axis axis2 = axs[1] # second axis plot_all_kinetic(df1, axis1) plot_all_spectra(df1, axis2) ``` # export kintek input files ``` csvfile = 'kintek-sample-1-10s.csv' wavelengths = [440] kintekFileName = 'kintek-sample-1-10s-data.txt' export_kintek(csvfile, wavelengths, kintekFileName) csvfile = 'kintek-sample-2-10s.csv' wavelengths = [440] kintekFileName = 'kintek-sample-2-10s-data.txt' export_kintek(csvfile, wavelengths, kintekFileName) ``` # plot original kinetic and kintek simulation ``` rcParams['figure.figsize'] = [6, 4.5] rcParams.update({'xtick.labelsize': 14}) rcParams.update({'ytick.labelsize': 14}) rcParams.update({'axes.labelsize':16}) rcParams.update({'legend.frameon': False}) rcParams.update({'legend.fontsize': 14}) simfiles = [ 'kintek-sample-1-10s-data.txt', 'kintek-sample-2-10s-data.txt', 'kintek-sample-1-10s-sim.txt', 'kintek-sample-2-10s-sim.txt', ] dfs =
list(map(read_kintek_simulation, simfiles)) df = pd.concat(dfs, axis=1) df = df[df.index > 0.002] # filter the value range to plot df = df[df.index < 0.8] # filter the value range to plot aPlot = AdjustablePlot.quickload_df(df) aPlot.colors = ['red', '#0080ff', 'black', 'black'] aPlot.shifts = [0.007, 0, 0.007, 0] aPlot.legends = ['1', '2', '1-sim', '2-sim'] aPlot.plot() aPlot.axis.set_xscale('log') aPlot.axis.set_xlim([0.001, 1]) # aPlot.axis.set_ylim([-0.019, 0.029]) for i in range(2): line = aPlot.axis.lines[i] line.set_marker('.') line.set_linewidth(0) line.set_markersize(5) aPlot.axis.lines[-1].set_linestyle('dashed') aPlot.plot_interactive_buttons() _ = aPlot.axis.legend().set_draggable(True) _ = aPlot.axis.set_xlabel('Time (s)') _ = aPlot.axis.set_ylabel('ΔAbs') ``` # plot UV-Vis titration data ## plot titration ``` rcParams['figure.figsize'] = [6, 4.5] uv_filenames = [ 'titration_and_UV/2.0.CSV', 'titration_and_UV/2.1.CSV', 'titration_and_UV/2.2.CSV', 'titration_and_UV/2.3.CSV', 'titration_and_UV/2.4.CSV', 'titration_and_UV/2.5.CSV', 'titration_and_UV/2.6.CSV', 'titration_and_UV/2.7.CSV', 'titration_and_UV/2.8.CSV', 'titration_and_UV/2.9.CSV', ] df = read_multiple_uv_to_df(uv_filenames) aPlot = AdjustablePlot.quickload_df(df) aPlot.colors = color_range('red', 'black', len(uv_filenames)) # calculate shift on each spectra to remove baseline floating issue # aPlot.shifts = shift_to_align_wavelength(df, wavelength=1000) aPlot.legends = ['%5.1f eq aKG' % (0.5*i) for i in range(len(uv_filenames))] aPlot.plot() aPlot.plot_interactive_buttons() aPlot.axis.set_xlim([320, 1100]) aPlot.axis.set_ylim([-0.2, 1.2]) aPlot.axis.set_title('Titration: PIsnB + 4Fe + 4TyrNC + n*0.5aKG') aPlot.axis.set_xlabel('wavelength (nm)') aPlot.axis.set_ylabel('Abs') ``` ## subtraction ``` # subtract base base = aPlot.df.iloc[:,[0]] aPlot.df = aPlot.df - base.values # plot in a new figure aPlot.axis = plt.figure().gca() aPlot.plot() aPlot.plot_interactive_buttons() 
aPlot.axis.set_xlim([350, 1100]) aPlot.axis.set_ylim([-0.02, 0.15]) aPlot.axis.set_title('Titration: PIsnB + 4Fe + 4TyrNC + n*0.5aKG') aPlot.axis.set_xlabel('wavelength (nm)') aPlot.axis.set_ylabel('ΔAbs') ``` ## remove baseline shift ``` # subtract base base = aPlot.df.iloc[:,[0]] aPlot.df = aPlot.df - base.values # remove baseline shift aPlot.shifts = shift_to_align_wavelength(aPlot.df, wavelength=800) # plot in a new figure aPlot.axis = plt.figure().gca() aPlot.plot() aPlot.plot_interactive_buttons() aPlot.axis.set_xlim([350, 1100]) aPlot.axis.set_ylim([-0.02, 0.12]) aPlot.axis.set_title('Titration: PIsnB + 4Fe + 4TyrNC + n*0.5aKG') aPlot.axis.set_xlabel('wavelength (nm)') aPlot.axis.set_ylabel('ΔAbs') ``` ## plot trend at certain x value ``` df_trace512 = aPlot.df.iloc[[get_index_of_closest_x_value(aPlot.df, 512)], :].transpose() df_trace512.index = [0.5*i for i in range(len(uv_filenames))] trace512Plot = AdjustablePlot.quickload_df(df_trace512) trace512Plot.plot() trace512Plot.plot_interactive_buttons() trace512Plot.axis.lines[0].set_marker('o') trace512Plot.axis.legend().set_draggable(True) trace512Plot.axis.set_xlabel('equivalent of aKG') trace512Plot.axis.set_ylabel('ΔAbs at 512 nm') ``` # Appendix: more options ``` # global settings # two semantics are equivalent # more options can be found at https://matplotlib.org/users/customizing.html # set figure size rcParams['figure.figsize'] = [9, 4.5] # set tick pointing inwards or outwards rcParams['xtick.direction'] = 'in' rcParams['ytick.direction'] = 'in' # set visibility of minor ticks rcParams['xtick.minor.visible'] = True rcParams['ytick.minor.visible'] = True # set ticks on top and right axes rcParams['xtick.top'] = True rcParams['ytick.right'] = True # set better layout for multiple plots in a figure # https://matplotlib.org/3.1.1/tutorials/intermediate/tight_layout_guide.html rcParams.update({'figure.autolayout': True}) # set x and y label size rcParams.update({'axes.labelsize': 20}) # set tick label 
size rcParams.update({'xtick.labelsize': 12}) rcParams.update({'ytick.labelsize': 12}) # turn on/off frame of legend box rcParams.update({'legend.frameon': False}) # set legend fontsize rcParams.update({'legend.fontsize': 10}) ```
``` # suppresses future warnings - See References #1 import warnings warnings.simplefilter(action='ignore', category=FutureWarning) # Import the pandas library for df creation import pandas as pd # Import the NumPy library to use the random package import numpy as np # Import the matplotlib library for plotting import matplotlib.pyplot as plt # set plot style plt.style.use('seaborn-whitegrid') # Use the magic function to ensure plots render in a notebook %matplotlib inline # Import the seaborn library for plotting import seaborn as sns # The NumPy package produces random numbers using Random Number Generators # These define the process by which numbers are generated for use by NumPy functions # numpy.random offers several Random Number Generators # By defining the Generator the code becomes reliable and repeatable - See References #2 # Sets the Generator type (the default BitGenerator - PCG64) with a declared seed of 100 - See References #3 rng = np.random.default_rng(100) # Set number of samples based on the number of live births in 2016 n = 63739 ``` ## Table of Contents - [Introduction](#introduction) - [Breastfeeding](#breastfeeding) - [What is breastfeeding and why is it important?](#w_breastfeeding) - [Variables](#variables) - [Age](#age) - [Civil Status](#civil_status) - [Health Insurance Status](#hel_ins_status) - [Initiate Breastfeeding](#int_breastfeeding) - [Dataset](#dataset) - [References](#references) - [Other Sources](#other_sources) ## Introduction<a class="anchor" id="introduction"></a> The project requires us to simulate a real-world phenomenon of our choosing. We have been asked to model and synthesise data related to this phenomenon using Python, particularly the numpy.random library. The output of the project should be a synthesised data set. I will be examining the rate of breastfeeding initiation in Ireland. I will create a dataset of variables associated with breastfeeding.
I will simulate the distribution of breastfeeding initiation in a random sample of an identified segment of the population. I will explore the relationships between these factors and how they may influence the rate of breastfeeding initiation. This will include: 1. The distribution of breastfeeding initiation in an identified segment of the population 2. The factors contributing to breastfeeding initiation 3. How these factors are distributed in the identified segment of the population This topic is of particular interest to me as I have been successfully breastfeeding my daughter for the past year. The publication of the Irish Maternity Indicator System National Report 2019 <sup>[4]</sup> received widespread news coverage and highlighted the low breastfeeding rates in Ireland <sup>[5]</sup>. On reflection, I was unable to identify how I had decided to breastfeed. I began to read more on the topic, including how Irish rates compare to international rates and the socio-cultural changes around breastfeeding. From this, I identified factors influencing breastfeeding initiation. While I meet some of the criteria, I do not meet all. And yet, as a mother in Ireland exclusively breastfeeding for over 12 months, I am one of only 7%. This intrigued me, and I wanted to examine what factors may have influenced my breastfeeding journey. ### Breastfeeding<a class="anchor" id="breastfeeding"></a> ![Lactation Image](https://github.com/SharonNicG/52465-Project/blob/main/Lactation%20Image%20OS.jpg) #### What is breastfeeding and why is it important?<a class="anchor" id="w_breastfeeding"></a> Breastfeeding, or nursing, is the process of providing an infant with their mother's breastmilk <sup>[6]</sup>. This is usually done directly from the breast but can also be provided indirectly using expressed breast milk <sup>[*Ibid.*]</sup>. Breastfeeding is important as it offers numerous health benefits for both mother and infant.
**Benefits to infant:** * Breast milk is naturally designed to meet the calorific and nutritional needs of an infant <sup>[7]</sup> and adapts to meet the needs of the infant as they change <sup>[8]</sup> * Breast milk provides natural antibodies that help to protect against common infections and diseases <sup>[*Ibid.*]</sup> * Breastfeeding is associated with better long-term health and wellbeing outcomes including less likelihood of developing asthma or obesity, and higher income in later life <sup>[9]</sup> **Benefits to mother:** * Breastfeeding lowers the mother's risk of breast and ovarian cancer, osteoporosis, cardiovascular disease, and obesity <sup>[10]</sup>. * Breastfeeding is associated with lower rates of post-natal depression and fewer depressive symptoms for women who do develop post-natal depression while breastfeeding <sup>[11, 12]</sup>. * Breastfeeding is a cost-effective, safe and hygienic method of infant feeding <sup>[13]</sup>. The World Health Organisation (WHO) and numerous other organisations recommend exclusive breastfeeding for the first six months of an infant's life and breastfeeding supplemented by other foods from 6 months on <sup>[14, 15, 16, 17]</sup>. However, globally nearly 2 out of 3 infants are not exclusively breastfed for the first six months <sup>[18]</sup>. Ireland has one of the lowest breastfeeding initiation rates in the world, with 63.8% of mothers breastfeeding for their child's first feed <sup>[19]</sup>. The rate of breastfeeding drops substantially within days: on average, only 37.3% of mothers are breastfeeding on discharge from hospital <sup>[*Ibid.*]</sup>. Given the physical, social and economic advantages of breastfeeding over artificial and combination feeding (a mix of breast and artificial), both the WHO and the HSE have undertaken a number of measures to increase the rates of breastfeeding initiation and exclusive breastfeeding for the first six months in Ireland <sup>[20, 21]</sup>.
Funded research is one of these measures, including national longitudinal studies to identify factors that may influence breastfeeding rates <sup>[22]</sup>. ### Variables<a class="anchor" id="variables"></a> A review of some of the completed research projects has identified common factors that have been researched and for which there is a bank of data to refer to. These are identified in the table below: | Variable Name | Description | Data Type | Distribution | |--------------------------|-------------------------------------|-----------|-------------------| | Age | Age of mother | Numeric | Normal/Triangular | | Civil Status | Civil status of mother | Boolean | Normal/Triangular | | Private Health Insurance | Whether mother has health insurance | Boolean | Normal/Triangular | These factors will be used as the variables for the development of the dataset. The fourth variable will be 'Breastfeeding Initiation'. This will be informed by data from existing research and dependent on values assigned to records under the other variables. | Variable Name | Description | Data Type | Distribution | |--------------------------|--------------------|-----------|-------------------| | Breastfeeding Initiation | Dependent Variable | Boolean | Normal/Triangular | A trawl of information sources led to the decision to use data from 2016. While the National Perinatal Reporting System (NPRS) produces annual reports <sup>[*Ibid.*]</sup>, they are published with a 12-month delay, and final versions (following feedback and review) are available after 24 months. This means the 2016 report is the latest final version available. An initial search for information on Civil Status statistics led to the 2016 census data. While this ultimately wasn't used, it guided the use of the 2016 NPRS data. Similarly, historical data on Private Health Insurance rates in Ireland varies greatly, and 2016 seemed to produce the most applicable data for use here.
Below is an outline of each variable, the data it is based on, and how it will be used in the development of the dataset. As the expected distributions for each variable are the same, different approaches will be taken with each to generate them for the dataset. #### Age<a class="anchor" id="age"></a> A review of data provided by the NPRS study is used here to determine how maternal age is distributed across the population - mothers with live births in 2016 <sup>[23]</sup>. While age is a numerical value, it is presented by NPRS as a categorical variable/discrete groups, ranging from under 20 years of age to 45 years of age and older. The NPRS study provides the frequency and percentages of births within each group. ``` # Age and feeding-type data extracted from the NPRS 2016 Report age = pd.read_csv("Data/Age_and_Feeding_Type.csv", index_col='Age Group') # Transpose index and columns age = age.T # Integer based indexing for selection by position - See References #24 age = age.iloc[0:4, 0:7] age ``` The grouping of data by age group reduces the usefulness of the `describe()` function on the data frame. However, an initial view of the NPRS data indicates that the data is somewhat normally distributed, with births increasing in the 25 - 29 age group, peaking at 30 - 34 years of age and beginning to decline in the 35 - 39 age set. Visualising the data set supports this analysis. It shows a minimum value below 20 years of age, increasing until a significant peak around 32 years of age - the midpoint of the age group with the greatest frequency of births. ``` # Creates a figure and a set of subplots.
fig, ax = plt.subplots(figsize=(5, 5), dpi=100) x_pos = np.arange(7) # Plot x versus y as markers ax.plot(x_pos, age.iloc[0, :], marker='^', label='Breast') ax.plot(x_pos, age.iloc[1, :], marker='^', label='Artificial') ax.plot(x_pos, age.iloc[2, :], marker='^', label='Combined') # Set labels for chart and axes ax.set_title('Age and Feeding Type') ax.set_xlabel('Maternal Age at Time of Birth') ax.set_ylabel('Frequency') # Create names on the x-axis ax.set_xticks(x_pos) # Rotate to make labels easier to read ax.set_xticklabels(age.columns, rotation=90) # Position legend ax.legend(loc="best") # Show plot plt.show() ``` The initial approach taken was to create a function that generated a random number using the percentages from the csv file and assigned an age to each record. ``` def age_distribution(): y = rng.integers(1,100) if y <= 23: return rng.integers(15,20) elif 23 < y <= 43: return rng.integers(20,25) elif 43 < y <= 62: return rng.integers(25,30) elif 62 < y <= 77: return rng.integers(30,35) elif 77 < y <= 88: return rng.integers(35,40) elif 88 < y <= 95: return rng.integers(40,45) else: return rng.integers(45,50) # assign an age to each record ages = np.array([age_distribution() for _ in range(n)]) ``` Then I realised that the distribution visualised above could be replicated using a Triangular Distribution. This generates a random number from a weighted range by distributing events between the maximum and minimum values provided, based on a third value that indicates what the most likely outcome will be <sup>[25, 26]</sup>. Here we are looking for 100 events (births) distributed between the ages of 16 and 50 with a known peak where the mother's age is 32.
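The chosen parameters can be checked numerically before being fixed: the mean of a triangular distribution is (left + mode + right) / 3. A minimal sketch, re-creating the seeded Generator so the snippet is self-contained:

```python
import numpy as np

# Re-create the seeded Generator so this check runs on its own
rng = np.random.default_rng(100)

# Large sample: lower limit 16, most likely value 32, upper limit 50
sample = rng.triangular(left=16, mode=32, right=50, size=100_000)

# The theoretical mean of a triangular distribution is (left + mode + right) / 3
expected_mean = (16 + 32 + 50) / 3
print(round(sample.mean(), 2), "vs expected", round(expected_mean, 2))
```

The sample mean should land close to 32.67, confirming that these parameters describe the intended age profile.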
``` # Here we are looking for a random array with a lower limit of 16 and an upper limit of 50 # and 32 being the number that appears most frequently (the midpoint of the most frequent age group) # over n number of instances where n is the total number of births # and for the output to be presented on a Triangular Distribution plot Tri_var = rng.triangular(left = 16, mode = 32, right = 50, size = 100).astype(int) print("Here is your triangular continuous random variable:\n %s" % (Tri_var)) # Triangular Distribution - See References #27 plt.hist(np.ma.round(rng.triangular(left = 16, mode = 32, right = 50, size = 100)).astype(int), range = (16, 50), bins = 34, density = True) # Set labels for chart and axes plt.title('Randomised Distribution of Age') plt.xlabel('Maternal Age at Birth of Child') plt.ylabel('Frequency') # Show plot plt.show() ``` #### Civil Status <a class="anchor" id="civil_status"></a> Research has shown that maternal civil status at the time of birth is significantly associated with breastfeeding initiation <sup>[28,29,30]</sup>. The 2016 NPRS survey does not capture relational data between breastfeeding initiation and maternal civil status at the time of birth <sup>[31]</sup>. However, it does provide percentage values for maternal civil status across all age groups: | Maternal Civil Status at Birth | Percentage of Total births | |--------------------------------|----------------------------| | Married | 62.2 | | Single | 36.4 | | Other | 1.4 | Central Statistics Office (CSO) data on civil status for 2016 does record information across all age groups <sup>[32]</sup>. However, as it only captures data for * Marriages * Civil partnerships * Divorces, Judicial Separation and Nullity applications received by the courts * Divorces, Judicial Separation and Nullity applications granted by the courts It does not capture other civil arrangements such as informal separations or co-habitants.
For the purposes of this simulation, the NPRS data will be used. This is a categorical variable that has three possible values: 1. Married 2. Single 3. Other (encompassing all other civil statuses as identified by the survey respondent) Note that same-sex marriage had only recently become legal and Civil Partnerships are recorded as Other in the NPRS Report. The first approach was to link the age of a person to a civil status by calculating how many people within each age group may fall into each civil status category, based on the NPRS data. This was excessively complicated - requiring an If statement, a dictionary and a For Loop. While it produced a good distribution, it wasn't easily amended if the figures changed, as each line needed to be changed in the dictionary. ``` def civil_status(x): def distribution(st): y = rng.integers(0,100) if y <= st[0][0]: return st[0][1] elif y <= st[0][0] + st[1][0]: return st[1][1] return st[2][1] status = { range(15,20) : [(0,'other'),(95,'Single'),(5,'Married')]} for i in status: if x in i: return distribution(status[i]) ``` Instead, based on the information in the NPRS report, `rng.choice` can be used to randomly distribute these values across the dataset population in a much simpler way.
``` # Classifying Civil Status # 'single' if single, 'married' if married and 'other' for all other categories civil_options = ['single', 'married', 'other'] # Randomisation of civil status based on the probability provided civil_status = rng.choice(civil_options, n, p=[.364, .622, .014]) # Count matching elements in the array - See References #33 print("Single: ", np.count_nonzero(civil_status == 'single')) print("Married: ", np.count_nonzero(civil_status == 'married')) print("Other: ", np.count_nonzero(civil_status == 'other')) ``` #### Health Insurance Status<a class="anchor" id="hel_ins_status"></a> Gallagher's research also highlighted a significant association between the health insurance status of a mother and breastfeeding initiation <sup>[34]</sup>. A review of other research into factors affecting breastfeeding initiation showed that access to enhanced peri- and postnatal medical care has considerable influence on breastfeeding initiation and continuance <sup>[35, 36, 37]</sup>. These were primarily completed in countries without a funded, or part-funded, national health service. While these demonstrated that mothers with private health insurance were more likely to initiate breastfeeding, they weren't comparable in an Irish context. However, a follow-on study from Gallagher's research further supported her findings that maternal access to private health insurance increased the likelihood of breastfeeding <sup>[38]</sup>. The Health Insurance Authority (HIA) in Ireland offers a comparison tool for health insurance policies, including the services available under each plan <sup>[39]</sup>. A review, carried out in December 2020, shows that of the 314 plans on offer, 237 provide outpatient maternity benefits which cover peri- and postnatal care and support systems. These include one-to-one postnatal consultation with a lactation consultant.
For mothers without health insurance, maternity care in Ireland is provided under the Maternity and Infant Care Scheme <sup>[40]</sup>. This is a combined hospital and GP service for the duration of pregnancy and six weeks postpartum. No specific resources are made available to support breastfeeding, though maternity hospitals and units may offer breastfeeding information sessions and one-to-one lactation consultations where needed. Access to these supports is limited. The Coombe Women's and Children's Hospital, for example, handles around 8,000 births per year <sup>[41]</sup> and provides breastfeeding information sessions to fewer than 1,000 mothers per year <sup>[42]</sup>. There are a number of community supports for breastfeeding, including [La Leche League](https://www.lalecheleagueireland.com/), [Friends of Breastfeeding](https://www.friendsofbreastfeeding.ie/) and [Cuidiu](https://www.cuidiu.ie/), as well as private lactation consultants. Interestingly, neither Irish study assessed whether mothers accessed these services perinatally. Gallagher's research showed that 66% of the insured mothers initiated breastfeeding <sup>[43]</sup>. As insurance status can only have two possible outcomes (True or False), a binomial distribution was initially used to evaluate distribution across `n` number of births. ``` # Here we are looking for a Binomial Distribution # with probability of 0.66 for each trial # repeated `n` times # to be presented as a Binomial Distribution plot sns.distplot(rng.binomial(n=10, p=0.66, size=n), hist=True, kde=False) # Set labels for chart and axes plt.title('Insurance Distribution Based on Percentage Insured') plt.xlabel('Age Distribution') plt.ylabel('Number Insured') # Show plot plt.show() ``` While this randomly allocated health insurance status, it didn't distribute this across the age groups in an informed way. The HIA also provides historical market statistics on insurance in Ireland <sup>[44]</sup>.
Data for 2016, based on the age groups previously used, showing the number insured within each grouping and the percentage of the total insured population these represent, was extracted from the HIA historical market statistics into a csv file. ``` # Downloaded HIA historical market statistics ins = pd.read_csv("Data/Insurance_by_Age.csv") ins ``` The extracted data shows a positive increase in the number of people insured from ages 20 to 35, peaking significantly in the 35-39 category and declining thereafter. Visualising the data supports this analysis. ``` # Downloaded HIA historical market statistics ins = pd.read_csv('Data/Insurance_by_Age.csv', index_col='Age') # Transpose index and columns ins = ins.T ins # Creates a figure and a set of subplots fig, ax = plt.subplots(figsize=(5,3), dpi=100) x_pos = np.arange(7) # Integer based indexing for selection by position - See References #24 # Scaled down y = ins.iloc[1, :] / 100 y = y*ins.iloc[0, :] # Plot x versus y as markers ax.plot(x_pos, y, marker='o', label='Insurance') # Set labels for chart and axes ax.set_title('Insurance by Age') ax.set_xlabel('Maternal Age') ax.set_ylabel('Number Insured') # Create names on the x-axis ax.set_xticks(x_pos) # Rotate to make labels easier to read ax.set_xticklabels(ins.columns, rotation=90) # Show plot plt.show() ``` While it isn't possible to get a breakdown of insurance status by gender or linked to the parity of women within the given age groups, the above analysis has provided two points to work from in the generated dataset: the percentage of people within the age groups that are likely to have health insurance, and that 66% of pregnant women have health insurance.
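These two points can be combined by turning the age-group percentages into a per-record probability lookup and making a single Bernoulli-style draw per record. A minimal sketch, where the percentage values mirror the HIA-derived figures used in `ins_by_age` below, and the uniformly drawn test ages are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(100)

# Age-group insurance percentages (mirroring the HIA-derived figures)
insurance_pct = {(15, 20): 10, (20, 25): 9, (25, 30): 9, (30, 35): 14,
                 (35, 40): 19, (40, 45): 19, (45, 50): 20}

def insurance_prob(age):
    """Look up the insurance probability (0-1) for a given maternal age."""
    for (lo, hi), pct in insurance_pct.items():
        if lo <= age < hi:
            return pct / 100
    return 0.0

# Illustrative ages; in the notebook these come from the triangular draw
ages = rng.integers(16, 50, size=1000)
probs = np.array([insurance_prob(a) for a in ages])

# One uniform draw per record decides the True/False insurance status
insured = rng.random(len(ages)) < probs
print(f"{insured.mean():.1%} of the simulated records are insured")
```

Vectorising the draw this way makes the assignment easy to re-run with updated percentages, since only the dictionary changes.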
``` # ins_by_age(x) takes one argument - the age of the person # based on this number and the percentages identified from HIA # it gives each record a value of True or False # a dict that holds the HIA statistics def ins_by_age (x): agerange_insurance = {range(15,20) : 10, range(20,25) : 9, range(25,30) : 9, range(30,35) : 14, range(35,40) : 19, range(40,45) : 19, range(45,50) : 20} # Introduce randomisation by assigning True or False based on # a randomly assigned number generated by rng.integers for i in agerange_insurance: if x in i: y = rng.integers(1,100) if y <= agerange_insurance[i]: return True return False # ins_by_age is applied to the generated ages in the Dataset section below ``` #### Initiate Breastfeeding<a class="anchor" id="int_breastfeeding"></a> This variable is based on the data used above and in Gallagher's study <sup>[45]</sup>. Using the data from the NPRS report, the percentage of women within each age group that are likely to initiate breastfeeding can be calculated. ``` # Percentage of total births that initiate breastfeeding age = pd.read_csv("Data/Age_and_Feeding_Type.csv") age = age.iloc[0:7, 0:8] age['pct']=age['Breast']/(age['Total'])*100 age ``` Additionally, this variable will be a dependent variable influenced by data from the other variables and information from Gallagher's study: 1. Women with health insurance are more likely to initiate breastfeeding * 66% of women have access to health insurance <sup>[*Ibid*]</sup> 2.
Civil Status influences the likelihood of breastfeeding * Married women are three times more likely to breastfeed ## Dataset<a class="anchor" id="dataset"></a> ``` # the number of records n = 100 ``` ### Variable 1 - Age ``` # A Triangular Distribution with a lower limit of 16 and an upper limit of 50 # and 32 being the number that appears most frequently (the midpoint of the most frequent age group) # over n number of instances where n is the total number of births age_dist = (rng.triangular(left = 16, mode = 32, right = 50, size = 500)).astype(int) # Using rng.choice to randomise the output age = rng.choice(age_dist, n) # generating the dataframe using the age distribution df = pd.DataFrame(age, columns = ['Maternal Age']) ``` ### Variable 2 - Civil Status ``` # Classifying Civil Status # 'single' if single, 'married' if married and 'other' for all other categories civil_options = ['single', 'married', 'other'] # Randomisation of civil status based on the probability provided civil_status = rng.choice(civil_options, n, p=[.364, .622, .014]) ``` ### Variable 3 - Insurance Status ``` # ins_by_age(x) takes one argument - the age of the person # based on this number and the percentages identified from HIA # it gives each record a value of True or False # a dictionary that holds the HIA statistics def ins_by_age (x): agerange_insurance = {range(15,20) : 10, range(20,25) : 9, range(25,30) : 9, range(30,35) : 14, range(35,40) : 19, range(40,45) : 19, range(45,50) : 20} # Introduce randomisation by assigning True or False based on # a randomly assigned number generated by rng.integers for i in agerange_insurance: if x in i: y = rng.integers(1,100) if y <= agerange_insurance[i]: return True return False health_ins_status = np.array([ins_by_age(i) for i in age]) df['Health Insurance Status'] = health_ins_status ``` ### Variable 4 - Initiate Breastfeeding ``` # breastfeeding(x) takes one argument - the index of the record # based on the person's age, insurance and civil status, and the percentages identified
from the NPRS data # it gives each record a value of True or False # a dictionary that holds the calculated percentage statistics def breastfeeding(x): bf_status = {range(15,20) : 23, range(20,25) : 32, range(25,30) : 43, range(30,35) : 53, range(35,40) : 55, range(40,45) : 53, range(45,50) : 50} a = age[x] # the age of the person b = health_ins_status[x] # the person's health insurance status c = civil_status[x] # the person's civil status q = 3 if c == 'married' else 2 if c == 'other' else 1 # If married they are 3 times more likely to initiate breastfeeding # Assigning 2 for people with Other as they may have an informal arrangement # Assigning 1 for people who identify as single # Introduce randomisation by assigning True or False based on # a randomly assigned number generated by rng.integers for i in bf_status: if a in i: y = rng.integers(1,1000) if y <= q*bf_status[i] or (b == True and y <= 66): # Using 66 as 66% of insured women initiate breastfeeding return True return False # generating the variable based on the function initiate_bf = np.array([breastfeeding(i) for i in range(n)]) df['Initiate Breastfeeding'] = initiate_bf ``` ### Dataset ``` # See References #46 and #47 pd.set_option("expand_frame_repr", True) df ``` ### References<a class="anchor" id="references"></a> 1. Stack Overflow (2020) How to suppress pandas future warnings, Available at: https://stackoverflow.com/questions/15777951/how-to-suppress-pandas-future-warning 2. Machine Learning Mastery, How to Generate Random Numbers in Python, Available at: https://machinelearningmastery.com/how-to-generate-random-numbers-in-python/ 3. The SciPy Community (2020) Random Generator, Available at: https://numpy.org/doc/stable/reference/random/generator.html 4.
# Monetary Economics: Chapter 5 ### Preliminaries ``` # This line configures matplotlib to show figures embedded in the notebook, # instead of opening a new window for each figure. More about that later. # If you are using an old version of IPython, try using '%pylab inline' instead. %matplotlib inline import matplotlib.pyplot as plt from pysolve.model import Model from pysolve.utils import is_close,round_solution ``` ### Model LP1 ``` def create_lp1_model(): model = Model() model.set_var_default(0) model.var('Bcb', desc='Government bills held by the Central Bank') model.var('Bd', desc='Demand for government bills') model.var('Bh', desc='Government bills held by households') model.var('Bs', desc='Government bills supplied by government') model.var('BLd', desc='Demand for government bonds') model.var('BLh', desc='Government bonds held by households') model.var('BLs', desc='Supply of government bonds') model.var('CG', desc='Capital gains on bonds') model.var('CGe', desc='Expected capital gains on bonds') model.var('C', desc='Consumption') model.var('ERrbl', desc='Expected rate of return on bonds') model.var('Hd', desc='Demand for cash') model.var('Hh', desc='Cash held by households') model.var('Hs', desc='Cash supplied by the central bank') model.var('Pbl', desc='Price of bonds') model.var('Pble', desc='Expected price of bonds') model.var('Rb', desc='Interest rate on government bills') model.var('Rbl', desc='Interest rate on government bonds') model.var('T', desc='Taxes') model.var('V', desc='Household wealth') model.var('Ve', desc='Expected household wealth') model.var('Y', desc='Income = GDP') model.var('YDr', desc='Regular disposable income of households') model.var('YDre', desc='Expected regular disposable income of households') model.set_param_default(0) model.param('alpha1', desc='Propensity to consume out of income') model.param('alpha2', desc='Propensity to consume out of wealth') model.param('chi', desc='Weight of conviction in expected bond price') 
model.param('lambda10', desc='Parameter in asset demand function') model.param('lambda12', desc='Parameter in asset demand function') model.param('lambda13', desc='Parameter in asset demand function') model.param('lambda14', desc='Parameter in asset demand function') model.param('lambda20', desc='Parameter in asset demand function') model.param('lambda22', desc='Parameter in asset demand function') model.param('lambda23', desc='Parameter in asset demand function') model.param('lambda24', desc='Parameter in asset demand function') model.param('lambda30', desc='Parameter in asset demand function') model.param('lambda32', desc='Parameter in asset demand function') model.param('lambda33', desc='Parameter in asset demand function') model.param('lambda34', desc='Parameter in asset demand function') model.param('theta', desc='Tax rate') model.param('G', desc='Government goods') model.param('Rbar', desc='Exogenously set interest rate on govt bills') model.param('Pblbar', desc='Exogenously set price of bonds') model.add('Y = C + G') # 5.1 model.add('YDr = Y - T + Rb(-1)*Bh(-1) + BLh(-1)') # 5.2 model.add('T = theta *(Y + Rb(-1)*Bh(-1) + BLh(-1))') # 5.3 model.add('V - V(-1) = (YDr - C) + CG') # 5.4 model.add('CG = (Pbl - Pbl(-1))*BLh(-1)') model.add('C = alpha1*YDre + alpha2*V(-1)') model.add('Ve = V(-1) + (YDre - C) + CG') model.add('Hh = V - Bh - Pbl*BLh') model.add('Hd = Ve - Bd - Pbl*BLd') model.add('Bd = Ve*lambda20 + Ve*lambda22*Rb' + '- Ve*lambda23*ERrbl - lambda24*YDre') model.add('BLd = (Ve*lambda30 - Ve*lambda32*Rb ' + '+ Ve*lambda33*ERrbl - lambda34*YDre)/Pbl') model.add('Bh = Bd') model.add('BLh = BLd') model.add('Bs - Bs(-1) = (G + Rb(-1)*Bs(-1) + ' + 'BLs(-1)) - (T + Rb(-1)*Bcb(-1)) - (BLs - BLs(-1))*Pbl') model.add('Hs - Hs(-1) = Bcb - Bcb(-1)') model.add('Bcb = Bs - Bh') model.add('BLs = BLh') model.add('ERrbl = Rbl + chi * (Pble - Pbl) / Pbl') model.add('Rbl = 1./Pbl') model.add('Pble = Pbl') model.add('CGe = chi * (Pble - Pbl)*BLh') model.add('YDre = 
YDr(-1)') model.add('Rb = Rbar') model.add('Pbl = Pblbar') return model lp1_parameters = {'alpha1': 0.8, 'alpha2': 0.2, 'chi': 0.1, 'lambda20': 0.44196, 'lambda22': 1.1, 'lambda23': 1, 'lambda24': 0.03, 'lambda30': 0.3997, 'lambda32': 1, 'lambda33': 1.1, 'lambda34': 0.03, 'theta': 0.1938} lp1_exogenous = {'G': 20, 'Rbar': 0.03, 'Pblbar': 20} lp1_variables = {'V': 95.803, 'Bh': 37.839, 'Bs': 57.964, 'Bcb': 57.964 - 37.839, 'BLh': 1.892, 'BLs': 1.892, 'Hs': 20.125, 'YDr': 95.803, 'Rb': 0.03, 'Pbl': 20} ``` ### Scenario: Interest rate shock ``` lp1 = create_lp1_model() lp1.set_values(lp1_parameters) lp1.set_values(lp1_exogenous) lp1.set_values(lp1_variables) for _ in range(15): lp1.solve(iterations=100, threshold=1e-6) # shock the system lp1.set_values({'Rbar': 0.04, 'Pblbar': 15}) for _ in range(45): lp1.solve(iterations=100, threshold=1e-6) ``` ###### Figure 5.2 ``` caption = ''' Figure 5.2 Evolution of the wealth to disposable income ratio, following an increase in both the short-term and long-term interest rates, with model LP1''' data = [s['V']/s['YDr'] for s in lp1.solutions[5:]] fig = plt.figure() axes = fig.add_axes([0.1, 0.1, 1.1, 1.1]) axes.tick_params(top='off', right='off') axes.spines['top'].set_visible(False) axes.spines['right'].set_visible(False) axes.set_ylim(0.89, 1.01) axes.plot(data, 'k') # add labels plt.text(20, 0.98, 'Wealth to disposable income ratio') fig.text(0.1, -.05, caption); ``` ###### Figure 5.3 ``` caption = ''' Figure 5.3 Evolution of household disposable income and consumption, following an increase in both the short-term and long-term interest rates, with model LP1''' ydrdata = [s['YDr'] for s in lp1.solutions[5:]] cdata = [s['C'] for s in lp1.solutions[5:]] fig = plt.figure() axes = fig.add_axes([0.1, 0.1, 1.1, 1.1]) axes.tick_params(top='off', right='off') axes.spines['top'].set_visible(False) axes.spines['right'].set_visible(False) axes.set_ylim(92.5, 101.5) axes.plot(ydrdata, 'k') axes.plot(cdata, linestyle='--', color='r') # add 
labels plt.text(16, 98, 'Disposable') plt.text(16, 97.6, 'income') plt.text(22, 95, 'Consumption') fig.text(0.1, -.05, caption); ``` ###### Figure 5.4 ``` caption = ''' Figure 5.4 Evolution of the bonds to wealth ratio and the bills to wealth ratio, following an increase from 3% to 4% in the short-term interest rate, while the long-term interest rate moves from 5% to 6.67%, with model LP1''' bhdata = [s['Bh']/s['V'] for s in lp1.solutions[5:]] pdata = [s['Pbl']*s['BLh']/s['V'] for s in lp1.solutions[5:]] fig = plt.figure() axes = fig.add_axes([0.1, 0.1, 1.1, 1.1]) axes.tick_params(top='off', right='off') axes.spines['top'].set_visible(False) axes.spines['right'].set_visible(False) axes.set_ylim(0.382, 0.408) axes.plot(bhdata, 'k') axes.plot(pdata, linestyle='--', color='r') # add labels plt.text(14, 0.3978, 'Bonds to wealth ratio') plt.text(17, 0.39, 'Bills to wealth ratio') fig.text(0.1, -.05, caption); ``` ### Model LP2 ``` def create_lp2_model(): model = Model() model.set_var_default(0) model.var('Bcb', desc='Government bills held by the Central Bank') model.var('Bd', desc='Demand for government bills') model.var('Bh', desc='Government bills held by households') model.var('Bs', desc='Government bills supplied by government') model.var('BLd', desc='Demand for government bonds') model.var('BLh', desc='Government bonds held by households') model.var('BLs', desc='Supply of government bonds') model.var('CG', desc='Capital gains on bonds') model.var('CGe', desc='Expected capital gains on bonds') model.var('C', desc='Consumption') model.var('ERrbl', desc='Expected rate of return on bonds') model.var('Hd', desc='Demand for cash') model.var('Hh', desc='Cash held by households') model.var('Hs', desc='Cash supplied by the central bank') model.var('Pbl', desc='Price of bonds') model.var('Pble', desc='Expected price of bonds') model.var('Rb', desc='Interest rate on government bills') model.var('Rbl', desc='Interest rate on government bonds') model.var('T', desc='Taxes') 
model.var('TP', desc='Target proportion in households portfolio') model.var('V', desc='Household wealth') model.var('Ve', desc='Expected household wealth') model.var('Y', desc='Income = GDP') model.var('YDr', desc='Regular disposable income of households') model.var('YDre', desc='Expected regular disposable income of households') model.var('z1', desc='Switch parameter') model.var('z2', desc='Switch parameter') model.set_param_default(0) model.param('add', desc='Random shock to expectations') model.param('alpha1', desc='Propensity to consume out of income') model.param('alpha2', desc='Propensity to consume out of wealth') model.param('beta', desc='Adjustment parameter in price of bills') model.param('betae', desc='Adjustment parameter in expectations') model.param('bot', desc='Bottom value for TP') model.param('chi', desc='Weight of conviction in expected bond price') model.param('lambda10', desc='Parameter in asset demand function') model.param('lambda12', desc='Parameter in asset demand function') model.param('lambda13', desc='Parameter in asset demand function') model.param('lambda14', desc='Parameter in asset demand function') model.param('lambda20', desc='Parameter in asset demand function') model.param('lambda22', desc='Parameter in asset demand function') model.param('lambda23', desc='Parameter in asset demand function') model.param('lambda24', desc='Parameter in asset demand function') model.param('lambda30', desc='Parameter in asset demand function') model.param('lambda32', desc='Parameter in asset demand function') model.param('lambda33', desc='Parameter in asset demand function') model.param('lambda34', desc='Parameter in asset demand function') model.param('theta', desc='Tax rate') model.param('top', desc='Top value for TP') model.param('G', desc='Government goods') model.param('Pblbar', desc='Exogenously set price of bonds') model.param('Rbar', desc='Exogenously set interest rate on govt bills') model.add('Y = C + G') # 5.1 model.add('YDr = Y - T + 
Rb(-1)*Bh(-1) + BLh(-1)') # 5.2 model.add('T = theta *(Y + Rb(-1)*Bh(-1) + BLh(-1))') # 5.3 model.add('V - V(-1) = (YDr - C) + CG') # 5.4 model.add('CG = (Pbl - Pbl(-1))*BLh(-1)') model.add('C = alpha1*YDre + alpha2*V(-1)') model.add('Ve = V(-1) + (YDre - C) + CG') model.add('Hh = V - Bh - Pbl*BLh') model.add('Hd = Ve - Bd - Pbl*BLd') model.add('Bd = Ve*lambda20 + Ve*lambda22*Rb' + '- Ve*lambda23*ERrbl - lambda24*YDre') model.add('BLd = (Ve*lambda30 - Ve*lambda32*Rb ' + '+ Ve*lambda33*ERrbl - lambda34*YDre)/Pbl') model.add('Bh = Bd') model.add('BLh = BLd') model.add('Bs - Bs(-1) = (G + Rb(-1)*Bs(-1) + BLs(-1))' + ' - (T + Rb(-1)*Bcb(-1)) - Pbl*(BLs - BLs(-1))') model.add('Hs - Hs(-1) = Bcb - Bcb(-1)') model.add('Bcb = Bs - Bh') model.add('BLs = BLh') model.add('ERrbl = Rbl + ((chi * (Pble - Pbl))/ Pbl)') model.add('Rbl = 1./Pbl') model.add('Pble = Pble(-1) - betae*(Pble(-1) - Pbl) + add') model.add('CGe = chi * (Pble - Pbl)*BLh') model.add('YDre = YDr(-1)') model.add('Rb = Rbar') model.add('Pbl = (1 + z1*beta - z2*beta)*Pbl(-1)') model.add('z1 = if_true(TP > top)') model.add('z2 = if_true(TP < bot)') model.add('TP = (BLh(-1)*Pbl(-1))/(BLh(-1)*Pbl(-1) + Bh(-1))') return model lp2_parameters = {'alpha1': 0.8, 'alpha2': 0.2, 'beta': 0.01, 'betae': 0.5, 'chi': 0.1, 'lambda20': 0.44196, 'lambda22': 1.1, 'lambda23': 1, 'lambda24': 0.03, 'lambda30': 0.3997, 'lambda32': 1, 'lambda33': 1.1, 'lambda34': 0.03, 'theta': 0.1938, 'bot': 0.495, 'top': 0.505 } lp2_exogenous = {'G': 20, 'Rbar': 0.03, 'Pblbar': 20, 'add': 0} lp2_variables = {'V': 95.803, 'Bh': 37.839, 'Bs': 57.964, 'Bcb': 57.964 - 37.839, 'BLh': 1.892, 'BLs': 1.892, 'Hs': 20.125, 'YDr': 95.803, 'Rb': 0.03, 'Pbl': 20, 'Pble': 20, 'TP': 1.892*20/(1.892*20+37.839), # BLh*Pbl/(BLh*Pbl+Bh) 'z1': 0, 'z2': 0} ``` ### Scenario: interest rate shock ``` lp2_bill = create_lp2_model() lp2_bill.set_values(lp2_parameters) lp2_bill.set_values(lp2_exogenous) lp2_bill.set_values(lp2_variables) lp2_bill.set_values({'z1': 
lp2_bill.evaluate('if_true(TP > top)'), 'z2': lp2_bill.evaluate('if_true(TP < bot)')}) for _ in range(10): lp2_bill.solve(iterations=100, threshold=1e-4) # shock the system lp2_bill.set_values({'Rbar': 0.035}) for _ in range(45): lp2_bill.solve(iterations=100, threshold=1e-4) ``` ###### Figure 5.5 ``` caption = ''' Figure 5.5 Evolution of the long-term interest rate (the bond yield), following an increase in the short-term interest rate (the bill rate), as a result of the response of the central bank and the Treasury, with Model LP2.''' rbdata = [s['Rb'] for s in lp2_bill.solutions[5:]] pbldata = [1./s['Pbl'] for s in lp2_bill.solutions[5:]] fig = plt.figure() axes = fig.add_axes([0.1, 0.1, 1.1, 0.9]) axes.tick_params(top='off', right='off') axes.spines['top'].set_visible(False) axes.set_ylim(0.029, 0.036) axes.plot(rbdata, linestyle='--', color='r') axes2 = axes.twinx() axes2.spines['top'].set_visible(False) axes2.set_ylim(0.0495, 0.052) axes2.plot(pbldata, 'k') # add labels plt.text(12, 0.0518, 'Short-term interest rate') plt.text(15, 0.0513, 'Long-term interest rate') fig.text(0.05, 1.05, 'Bill rate') fig.text(1.15, 1.05, 'Bond yield') fig.text(0.1, -.1, caption); ``` ###### Figure 5.6 ``` caption = ''' Figure 5.6 Evolution of the target proportion (TP), that is the share of bonds in the government debt held by households, following an increase in the short-term interest rate (the bill rate) and the response of the central bank and of the Treasury, with Model LP2''' tpdata = [s['TP'] for s in lp2_bill.solutions[5:]] topdata = [s['top'] for s in lp2_bill.solutions[5:]] botdata = [s['bot'] for s in lp2_bill.solutions[5:]] fig = plt.figure() axes = fig.add_axes([0.1, 0.1, 1.1, 1.1]) axes.tick_params(top='off', right='off') axes.spines['top'].set_visible(False) axes.set_ylim(0.490, 0.506) axes.plot(topdata, color='k') axes.plot(botdata, color='k') axes.plot(tpdata, linestyle='--', color='r') # add labels plt.text(30, 0.5055, 'Ceiling of target range') plt.text(30, 
0.494, 'Floor of target range') plt.text(10, 0.493, 'Share of bonds') plt.text(10, 0.4922, 'in government debt') plt.text(10, 0.4914, 'held by households') fig.text(0.1, -.15, caption); ``` ### Scenario: Shock to the bond price expectations ``` lp2_bond = create_lp2_model() lp2_bond.set_values(lp2_parameters) lp2_bond.set_values(lp2_exogenous) lp2_bond.set_values(lp2_variables) lp2_bond.set_values({'z1': 'if_true(TP > top)', 'z2': 'if_true(TP < bot)'}) for _ in range(10): lp2_bond.solve(iterations=100, threshold=1e-5) # shock the system lp2_bond.set_values({'add': -3}) lp2_bond.solve(iterations=100, threshold=1e-5) lp2_bond.set_values({'add': 0}) for _ in range(43): lp2_bond.solve(iterations=100, threshold=1e-4) ``` ###### Figure 5.7 ``` caption = ''' Figure 5.7 Evolution of the long-term interest rate, following an anticipated fall in the price of bonds, as a consequence of the response of the central bank and of the Treasury, with Model LP2''' pbldata = [1./s['Pbl'] for s in lp2_bond.solutions[5:]] fig = plt.figure() axes = fig.add_axes([0.1, 0.1, 0.9, 0.9]) axes.tick_params(top='off', right='off') axes.spines['top'].set_visible(False) axes.spines['right'].set_visible(False) axes.set_ylim(0.0497, 0.0512) axes.plot(pbldata, linestyle='--', color='k') # add labels plt.text(15, 0.0509, 'Long-term interest rate') fig.text(0.1, -.1, caption); ``` ###### Figure 5.8 ``` caption = ''' Figure 5.8 Evolution of the expected and actual bond prices, following an anticipated fall in the price of bonds, as a consequence of the response of the central bank and of the Treasury, with Model LP2''' pbldata = [s['Pbl'] for s in lp2_bond.solutions[5:]] pbledata = [s['Pble'] for s in lp2_bond.solutions[5:]] fig = plt.figure() axes = fig.add_axes([0.1, 0.1, 0.9, 0.9]) axes.tick_params(top='off', right='off') axes.spines['top'].set_visible(False) axes.spines['right'].set_visible(False) axes.set_ylim(16.5, 21) axes.plot(pbldata, linestyle='--', color='k') axes.plot(pbledata, 
linestyle='-', color='r') # add labels plt.text(8, 20, 'Actual price of bonds') plt.text(10, 19, 'Expected price of bonds') fig.text(0.1, -.1, caption); ``` ###### Figure 5.9 ``` caption = ''' Figure 5.9 Evolution of the target proportion (TP), that is the share of bonds in the government debt held by households, following an anticipated fall in the price of bonds, as a consequence of the response of the central bank and of the Treasury, with Model LP2''' tpdata = [s['TP'] for s in lp2_bond.solutions[5:]] botdata = [s['bot'] for s in lp2_bond.solutions[5:]] topdata = [s['top'] for s in lp2_bond.solutions[5:]] fig = plt.figure() axes = fig.add_axes([0.1, 0.1, 0.9, 0.9]) axes.tick_params(top='off', right='off') axes.spines['top'].set_visible(False) axes.spines['right'].set_visible(False) axes.set_ylim(0.47, 0.52) axes.plot(tpdata, linestyle='--', color='r') axes.plot(botdata, linestyle='-', color='k') axes.plot(topdata, linestyle='-', color='k') # add labels plt.text(30, 0.508, 'Ceiling of target range') plt.text(30, 0.491, 'Floor of target range') plt.text(10, 0.49, 'Share of bonds in') plt.text(10, 0.487, 'government debt') plt.text(10, 0.484, 'held by households') fig.text(0.1, -.15, caption); ``` ### Scenario: Model LP1, propensity to consume shock ``` lp1_alpha = create_lp1_model() lp1_alpha.set_values(lp1_parameters) lp1_alpha.set_values(lp1_exogenous) lp1_alpha.set_values(lp1_variables) for _ in range(10): lp1_alpha.solve(iterations=100, threshold=1e-6) # shock the system lp1_alpha.set_values({'alpha1': 0.7}) for _ in range(45): lp1_alpha.solve(iterations=100, threshold=1e-6) ``` ### Model LP3 ``` def create_lp3_model(): model = Model() model.set_var_default(0) model.var('Bcb', desc='Government bills held by the Central Bank') model.var('Bd', desc='Demand for government bills') model.var('Bh', desc='Government bills held by households') model.var('Bs', desc='Government bills supplied by government') model.var('BLd', desc='Demand for government bonds') 
model.var('BLh', desc='Government bonds held by households') model.var('BLs', desc='Supply of government bonds') model.var('CG', desc='Capital gains on bonds') model.var('CGe', desc='Expected capital gains on bonds') model.var('C', desc='Consumption') model.var('ERrbl', desc='Expected rate of return on bonds') model.var('Hd', desc='Demand for cash') model.var('Hh', desc='Cash held by households') model.var('Hs', desc='Cash supplied by the central bank') model.var('Pbl', desc='Price of bonds') model.var('Pble', desc='Expected price of bonds') model.var('PSBR', desc='Public sector borrowing requirement (PSBR)') model.var('Rb', desc='Interest rate on government bills') model.var('Rbl', desc='Interest rate on government bonds') model.var('T', desc='Taxes') model.var('TP', desc='Target proportion in households portfolio') model.var('V', desc='Household wealth') model.var('Ve', desc='Expected household wealth') model.var('Y', desc='Income = GDP') model.var('YDr', desc='Regular disposable income of households') model.var('YDre', desc='Expected regular disposable income of households') model.var('z1', desc='Switch parameter') model.var('z2', desc='Switch parameter') model.var('z3', desc='Switch parameter') model.var('z4', desc='Switch parameter') # no longer exogenous model.var('G', desc='Government goods') model.set_param_default(0) model.param('add', desc='Random shock to expectations') model.param('add2', desc='Addition to the government expenditure setting rule') model.param('alpha1', desc='Propensity to consume out of income') model.param('alpha2', desc='Propensity to consume out of wealth') model.param('beta', desc='Adjustment parameter in price of bills') model.param('betae', desc='Adjustment parameter in expectations') model.param('bot', desc='Bottom value for TP') model.param('chi', desc='Weight of conviction in expected bond price') model.param('lambda10', desc='Parameter in asset demand function') model.param('lambda12', desc='Parameter in asset demand 
function') model.param('lambda13', desc='Parameter in asset demand function') model.param('lambda14', desc='Parameter in asset demand function') model.param('lambda20', desc='Parameter in asset demand function') model.param('lambda22', desc='Parameter in asset demand function') model.param('lambda23', desc='Parameter in asset demand function') model.param('lambda24', desc='Parameter in asset demand function') model.param('lambda30', desc='Parameter in asset demand function') model.param('lambda32', desc='Parameter in asset demand function') model.param('lambda33', desc='Parameter in asset demand function') model.param('lambda34', desc='Parameter in asset demand function') model.param('theta', desc='Tax rate') model.param('top', desc='Top value for TP') model.param('Pblbar', desc='Exogenously set price of bonds') model.param('Rbar', desc='Exogenously set interest rate on govt bills') model.add('Y = C + G') # 5.1 model.add('YDr = Y - T + Rb(-1)*Bh(-1) + BLh(-1)') # 5.2 model.add('T = theta *(Y + Rb(-1)*Bh(-1) + BLh(-1))') # 5.3 model.add('V - V(-1) = (YDr - C) + CG') # 5.4 model.add('CG = (Pbl - Pbl(-1))*BLh(-1)') model.add('C = alpha1*YDre + alpha2*V(-1)') model.add('Ve = V(-1) + (YDre - C) + CG') model.add('Hh = V - Bh - Pbl*BLh') model.add('Hd = Ve - Bd - Pbl*BLd') model.add('Bd = Ve*lambda20 + Ve*lambda22*Rb' + '- Ve*lambda23*ERrbl - lambda24*YDre') model.add('BLd = (Ve*lambda30 - Ve*lambda32*Rb ' + '+ Ve*lambda33*ERrbl - lambda34*YDre)/Pbl') model.add('Bh = Bd') model.add('BLh = BLd') model.add('Bs - Bs(-1) = (G + Rb(-1)*Bs(-1) + BLs(-1))' + ' - (T + Rb(-1)*Bcb(-1)) - Pbl*(BLs - BLs(-1))') model.add('Hs - Hs(-1) = Bcb - Bcb(-1)') model.add('Bcb = Bs - Bh') model.add('BLs = BLh') model.add('ERrbl = Rbl + ((chi * (Pble - Pbl))/ Pbl)') model.add('Rbl = 1./Pbl') model.add('Pble = Pble(-1) - betae*(Pble(-1) - Pbl) + add') model.add('CGe = chi * (Pble - Pbl)*BLh') model.add('YDre = YDr(-1)') model.add('Rb = Rbar') model.add('Pbl = (1 + z1*beta - z2*beta)*Pbl(-1)') 
model.add('z1 = if_true(TP > top)') model.add('z2 = if_true(TP < bot)') model.add('TP = (BLh(-1)*Pbl(-1))/(BLh(-1)*Pbl(-1) + Bh(-1))') model.add('PSBR = (G + Rb*Bs(-1) + BLs(-1)) - (T + Rb*Bcb(-1))') model.add('z3 = if_true((PSBR(-1)/Y(-1)) > 0.03)') model.add('z4 = if_true((PSBR(-1)/Y(-1)) < -0.03)') model.add('G = G(-1) - (z3 + z4)*PSBR(-1) + add2') return model lp3_parameters = {'alpha1': 0.8, 'alpha2': 0.2, 'beta': 0.01, 'betae': 0.5, 'chi': 0.1, 'lambda20': 0.44196, 'lambda22': 1.1, 'lambda23': 1, 'lambda24': 0.03, 'lambda30': 0.3997, 'lambda32': 1, 'lambda33': 1.1, 'lambda34': 0.03, 'theta': 0.1938, 'bot': 0.495, 'top': 0.505 } lp3_exogenous = {'Rbar': 0.03, 'Pblbar': 20, 'add': 0, 'add2': 0} lp3_variables = {'G': 20, 'V': 95.803, 'Bh': 37.839, 'Bs': 57.964, 'Bcb': 57.964 - 37.839, 'BLh': 1.892, 'BLs': 1.892, 'Hs': 20.125, 'YDr': 95.803, 'Rb': 0.03, 'Pbl': 20, 'Pble': 20, 'PSBR': 0, 'Y': 115.8, 'TP': 1.892*20/(1.892*20+37.839), # BLh*Pbl/(BLh*Pbl+Bh) 'z1': 0, 'z2': 0, 'z3': 0, 'z4': 0} ``` ### Scenario: LP3, decrease in propensity to consume ``` lp3_alpha = create_lp3_model() lp3_alpha.set_values(lp3_parameters) lp3_alpha.set_values(lp3_exogenous) lp3_alpha.set_values(lp3_variables) for _ in range(10): lp3_alpha.solve(iterations=100, threshold=1e-6) # shock the system lp3_alpha.set_values({'alpha1': 0.7}) for _ in range(45): lp3_alpha.solve(iterations=100, threshold=1e-6) ``` ###### Figure 5.10 ``` caption = ''' Figure 5.10 Evolution of national income (GDP), following a sharp decrease in the propensity to consume out of current income, with Model LP1''' ydata = [s['Y'] for s in lp1_alpha.solutions[5:]] fig = plt.figure() axes = fig.add_axes([0.1, 0.1, 0.9, 0.9]) axes.tick_params(top='off', right='off') axes.spines['top'].set_visible(False) axes.spines['right'].set_visible(False) axes.set_ylim(90, 128) axes.plot(ydata, linestyle='--', color='k') # add labels plt.text(20, 110, 'Gross Domestic Product') fig.text(0.1, -.05, caption); ``` ###### Figure 5.11 ``` 
caption = ''' Figure 5.11 Evolution of national income (GDP), following a sharp decrease in the propensity to consume out of current income, with Model LP3''' ydata = [s['Y'] for s in lp3_alpha.solutions[5:]] fig = plt.figure() axes = fig.add_axes([0.1, 0.1, 0.9, 0.9]) axes.tick_params(top='off', right='off') axes.spines['top'].set_visible(False) axes.spines['right'].set_visible(False) axes.set_ylim(90, 128) axes.plot(ydata, linestyle='--', color='k') # add labels plt.text(20, 110, 'Gross Domestic Product') fig.text(0.1, -.05, caption); ``` ###### Figure 5.12 ``` caption = ''' Figure 5.12 Evolution of pure government expenditures and of the government deficit to national income ratio (the PSBR to GDP ratio), following a sharp decrease in the propensity to consume out of current income, with Model LP3''' gdata = [s['G'] for s in lp3_alpha.solutions[5:]] ratiodata = [s['PSBR']/s['Y'] for s in lp3_alpha.solutions[5:]] fig = plt.figure() axes = fig.add_axes([0.1, 0.1, 0.9, 0.9]) axes.tick_params(top='off') axes.spines['top'].set_visible(False) axes.set_ylim(16, 20.5) axes.plot(gdata, linestyle='--', color='r') plt.text(5, 20.4, 'Pure government') plt.text(5, 20.15, 'expenditures (LHS)') plt.text(30, 18, 'Deficit to national') plt.text(30, 17.75, 'income ratio (RHS)') axes2 = axes.twinx() axes2.tick_params(top='off') axes2.spines['top'].set_visible(False) axes2.set_ylim(-.01, 0.04) axes2.plot(ratiodata, linestyle='-', color='b') # add labels fig.text(0.1, 1.05, 'G') fig.text(0.9, 1.05, 'PSBR to Y ratio') fig.text(0.1, -.1, caption); ```
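Each `solve()` call above iterates the current period's equations to a fixed point, using the previous period's solution for the lagged `(-1)` terms. Below is an illustrative sketch of that idea in plain Python (not pysolve) on a stripped-down subset of the LP1 accounting; the simplified equations, names, and parameter values are ours, for exposition only.

```python
# Illustrative sketch (plain Python, NOT pysolve) of the per-period
# fixed-point iteration: repeat the current period's equations until
# nothing changes, holding last period's solution fixed for lags.

def solve_period(prev, params, iterations=100, threshold=1e-6):
    cur = dict(prev)                     # start from last period's solution
    for _ in range(iterations):
        old = dict(cur)
        cur['Y'] = cur['C'] + params['G']                        # eq. 5.1
        cur['T'] = params['theta'] * cur['Y']                    # simplified 5.3
        cur['YD'] = cur['Y'] - cur['T']
        cur['C'] = params['alpha1'] * cur['YD'] + params['alpha2'] * prev['V']
        cur['V'] = prev['V'] + cur['YD'] - cur['C']              # simplified 5.4
        if all(abs(cur[k] - old[k]) < threshold for k in cur):
            return cur
    raise RuntimeError('no convergence within iteration limit')

params = {'G': 20, 'theta': 0.2, 'alpha1': 0.8, 'alpha2': 0.2}
state = {'Y': 0.0, 'T': 0.0, 'YD': 0.0, 'C': 0.0, 'V': 0.0}
solutions = [state]
for _ in range(50):
    state = solve_period(state, params)
    solutions.append(state)              # analogous to lp1.solutions

# This toy economy converges toward Y = G/theta = 100, V = YD = 80.
```

Chaining `solve_period` across periods is what builds up the per-period solution history that the figure cells above slice with `solutions[5:]`.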
github_jupyter
# `Cannabis (drug)` #### `INFORMATION`: ### Everything we need to know about marijuana (cannabis) >`Cannabis, also known as marijuana among other names, is a psychoactive drug from the Cannabis plant used for medical or recreational purposes. The main psychoactive part of cannabis is tetrahydrocannabinol (THC), one of 483 known compounds in the plant, including at least 65 other cannabinoids. Cannabis can be used by smoking, vaporizing, within food, or as an extract` >[For more information](https://www.medicalnewstoday.com/articles/246392.php) [VIDEO](https://youtu.be/GhTYI3DeNgA) ![](https://cdn.shopify.com/s/files/1/0975/1130/files/GLPlantInfo_58b4c4d0-eda7-4fc3-97ed-1319ca431e62.jpg?v=1525710776) ``` import pandas as pd import missingno import matplotlib.pyplot as plt import seaborn as sns sns.set_style("dark") import warnings warnings.simplefilter("ignore") import numpy as np df = pd.read_csv("cannabis.csv");df.head() ``` `Understanding attributes` >| Attribute Name | Info | | -------------- | -------------------------------------------- | | Strain Name | Given name of strain | | Type | Type of strain (namely indica, sativa, hybrid) | | Rating | User rating | | Effects | Different effects obtained | | Description | Other background info | ### UNDERSTANDING DATA, like finding null values ``` df.shape # showing (rows, columns) df.info() # getting basic information like datatypes # we can clearly see there are some missing values in Flavor and Description missingno.matrix(df); df.isnull().sum() # see null values print(df.Type.value_counts()) # counting the occurrence of values sns.countplot(x="Type", data=df); # displaying it through a graph plt.ylabel("distribution") sns.histplot(df.Rating, kde=True); # by this we can see that all the types have rating more than 3.5 # finding max rating for each type df.groupby(["Type"])["Rating"].max() # finding min rating for each type df.groupby(["Type"])["Rating"].min() # mean rating df.groupby(["Type"])["Rating"].mean() #Now we will extract the values in 
Effects and Flavor and pass to a new column effect = pd.DataFrame(df.Effects.str.split(',',4).tolist(), columns = ['Eone','Etwo','Ethree','Efour','Efive']) flavors = pd.DataFrame(df.Flavor.str.split(',',n=2,expand=True).values.tolist(), columns = ['Fone','Ftwo','Fthree']) df = pd.concat([df, effect], axis=1) df = pd.concat([df, flavors], axis=1)#concating the two dataframes #for more information plz visit #link => http://pandas.pydata.org/pandas-docs/stable/merging.html df.columns #finding top 5 effects print(df.Eone.value_counts().head()) plt.figure(figsize=(15,10)) sns.boxplot(x = "Eone",y = "Rating",hue="Type",data=df[df.Rating > 3.5]); #finding top 5 Flavor df.Eone.value_counts().head() plt.figure(figsize=(15,10)) plt.xticks(rotation=90) sns.countplot(x = "Fone",data=df); ```
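The `str.split` step above is easier to reason about with `expand=True`, which returns a DataFrame directly and pads rows that have fewer pieces. This is a minimal sketch on a toy frame — the values here are made up, not taken from `cannabis.csv`:

```python
import pandas as pd

# Toy stand-in for the cannabis data (hypothetical values)
df = pd.DataFrame({"Effects": ["Happy,Relaxed,Uplifted", "Sleepy,Hungry"]})

# expand=True returns a DataFrame with one column per split piece;
# the second row has only 2 pieces, so its third column is left empty
effects = df["Effects"].str.split(",", expand=True)
effects.columns = ["Eone", "Etwo", "Ethree"]

df = pd.concat([df, effects], axis=1)
print(df["Eone"].tolist())  # ['Happy', 'Sleepy']
```

With `expand=True` there is no need to call `.tolist()` and rebuild a DataFrame by hand, and ragged rows are padded automatically instead of raising a column-count error.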
##### week1-Q1. What does the analogy “AI is the new electricity” refer to? 1. Through the “smart grid”, AI is delivering a new wave of electricity. 2. AI runs on computers and is thus powered by electricity, but it is letting computers do things not possible before. 3. Similar to electricity starting about 100 years ago, AI is transforming multiple industries. 4. AI is powering personal devices in our homes and offices, similar to electricity. <span style="color:blue"> Answer: 3. AI is transforming many fields from the car industry to agriculture to supply-chain. </span> ##### week1-Q2. Which of these are reasons for Deep Learning recently taking off? (Check the three options that apply.) 1. Deep learning has resulted in significant improvements in important applications such as online advertising, speech recognition, and image recognition. 2. Neural Networks are a brand new field. 3. We have access to a lot more data. 4. We have access to a lot more computational power. <span style="color:blue"> Answer: 1,3,4. The digitalization of our society has played a huge role in this. The development of hardware, perhaps especially GPU computing, has significantly improved deep learning algorithms' performance. </span> ##### week1-Q3. Recall this diagram of iterating over different ML ideas. Which of the statements below are true? (Check all that apply.) <img src="images/cycle.png" alt="cycle.png" style="width:300px"/> 1. Being able to try out ideas quickly allows deep learning engineers to iterate more quickly. 2. Faster computation can help speed up how long a team takes to iterate to a good idea. 3. It is faster to train on a big dataset than a small dataset. 4. Recent progress in deep learning algorithms has allowed us to train good models faster (even without changing the CPU/GPU hardware). <span style="color:blue"> Answer: 1,2,4. For example, we discussed how switching from sigmoid to ReLU activation functions allows faster training. </span> ##### week1-Q4. 
When an experienced deep learning engineer works on a new problem, they can usually use insight from previous problems to train a good model on the first try, without needing to iterate multiple times through different models. True/False? <span style="color:blue"> Answer: False. Finding the characteristics of a model is key to have good performance. Although experience can help, it requires multiple iterations to build a good model. </span> ##### week1-Q5. ReLU activation function? ![relu-quiz.png](images/relu-quiz.png) ##### week1-Q6. Images for cat recognition is an example of “structured” data, because it is represented as a structured array in a computer. True/False? <span style="color:blue"> Answer: False. Images for cat recognition is an example of “unstructured” data. </span> ##### week1-Q7. A demographic dataset with statistics on different cities' population, GDP per capita, economic growth is an example of “unstructured” data because it contains data coming from different sources. True/False? <span style="color:blue"> Answer: False. A demographic dataset with statistics on different cities' population, GDP per capita, economic growth is an example of “structured” data by opposition to image, audio or text datasets. </span> ##### week1-Q8. Why is an RNN (Recurrent Neural Network) used for machine translation, say translating English to French? (Check all that apply.) 1. It can be trained as a supervised learning problem. 2. It is strictly more powerful than a Convolutional Neural Network (CNN). 3. It is applicable when the input/output is a sequence (e.g., a sequence of words). 4. RNNs represent the recurrent process of Idea->Code->Experiment->Idea->.... <span style="color:blue"> Answer: 1,3. An RNN can map from a sequence of english words to a sequence of french words. </span> ##### week1-Q9. In this diagram which we hand-drew in lecture, what do the horizontal axis (x-axis) and vertical axis (y-axis) represent? 
<img src="images/networks.png" alt="networks.png" style="width:550px"/> <span style="color:blue"> Answer: x-axis is the amount of data. y-axis (vertical axis) is the performance of the algorithm. </span> ##### week1-Q10. Assuming the trends described in the previous question's figure are accurate (and hoping you got the axis labels right), which of the following are true? (Check all that apply.) 1. Decreasing the training set size generally does not hurt an algorithm’s performance, and it may help significantly. 2. Increasing the size of a neural network generally does not hurt an algorithm’s performance, and it may help significantly. 3. Increasing the training set size generally does not hurt an algorithm’s performance, and it may help significantly. 4. Decreasing the size of a neural network generally does not hurt an algorithm’s performance, and it may help significantly. <span style="color:blue"> Answer: 2,3. According to the trends in the figure above, big networks usually perform better than small networks. Bringing more data to a model is almost always beneficial. </span> ##### week2-Q3. Suppose img is a `(32,32,3)` array, representing a 32x32 image with 3 color channels red, green and blue. How do you reshape this into a column vector? <span style="color:blue"> Answer: x = img.reshape((32\*32\*3,1)). </span> ##### week2-Q4. Consider the two following random arrays "a" and "b". What will be the shape of "c"? ``` import numpy as np a = np.random.randn(2, 3) # a.shape = (2, 3) b = np.random.randn(2, 1) # b.shape = (2, 1) c = a + b print(c.shape) ``` <span style="color:blue"> Answer: This is broadcasting. b (column vector) is copied 3 times so that it can be summed to each column of a. </span> ##### week2-Q5. Consider the two following random arrays "a" and "b". What will be the shape of "c"? 
```
a = np.random.randn(4, 3) # a.shape = (4, 3)
b = np.random.randn(3, 2) # b.shape = (3, 2)
# operands could not be broadcast together with shapes (4,3) (3,2)
# print((a*b).shape)
print((np.dot(a,b)).shape)
```

<span style="color:blue"> Answer: The element-wise computation "a*b" cannot happen because the sizes don't match and cannot be broadcast — it's going to be "Error"! In numpy the "*" operator indicates element-wise multiplication; it is different from "np.dot()". If you instead try "c = np.dot(a,b)" you get c.shape = (4, 2). </span>

##### week2-Q7. Recall that "np.dot(a,b)" performs a matrix multiplication on a and b, whereas "a*b" performs an element-wise multiplication. Consider the two following random arrays "a" and "b":

```
a = np.random.randn(12288, 150) # a.shape = (12288, 150)
b = np.random.randn(150, 45) # b.shape = (150, 45)
c = np.dot(a,b)
print(c.shape)
```

<span style="color:blue"> Answer: Remember that np.dot(a, b) has shape (number of rows of a, number of columns of b). The sizes match because "number of columns of a = 150 = number of rows of b", so c.shape = (12288, 45). </span>

##### week2-Q8. Consider the following code snippet. How do you vectorize this?

```
a = np.random.randn(3,4)
b = np.random.randn(4,1)
c = np.zeros((3, 4))
for i in range(3):
    for j in range(4):
        c[i][j] = a[i][j] + b[j]
print(c.shape)
```

<span style="color:blue"> Answer: c = a + b.T. Here b.T has shape (1, 4) and is broadcast across the 3 rows of a, so c.shape = (3, 4). </span>

##### week2-Q9. Consider the following code:

```
a = np.random.randn(3, 3)
b = np.random.randn(3, 1)
c = a*b
print(c.shape)
```

What will be c? (If you’re not sure, feel free to run this in python to find out.)

<span style="color:blue"> Answer: This will invoke broadcasting, so b is copied three times to become (3,3), and ∗ is an element-wise product so c.shape will be (3, 3). </span>

##### week3-Q5. Consider the following code:

```
import numpy as np
A = np.random.randn(4,3)
B = np.sum(A, axis=1, keepdims=True)
print(B.shape)
```

<span style="color:blue"> Answer: We use (keepdims = True) to make sure that B.shape is (4,1) and not (4, ). It makes our code more rigorous.
</span> ##### week2-Q10. Consider the following computation graph. <img src="images/computation.png" alt="computation.png" style="width:600px"/> What is the output J? <span style="color:blue"> Answer: `J = (a - 1) * (b + c)` `J = u + v - w = a*b + a*c - (b + c) = a * (b + c) - (b + c) = (a - 1) * (b + c)`. </span> ##### week2-Q2. Which of these is the "Logistic Loss"? <span style="color:blue"> Answer: $\mathcal{L}^{(i)}(\hat{y}^{(i)}, y^{(i)}) = y^{(i)}\log(\hat{y}^{(i)}) + (1- y^{(i)})\log(1-\hat{y}^{(i)})$ </span> ##### week-Q6. How many layers does this network have? <img src="images/n4.png" alt="n4.png" style="width:400px" /> <span style="color:blue"> Answer: The number of layers $L$ is 4. The number of hidden layers is 3. As seen in lecture, the number of layers is counted as the number of hidden layers + 1. The input and output layers are not counted as hidden layers. </span> ##### week3-Q1. Which of the following are true? (Check all that apply.) - $a^{[2]}$ denotes the activation vector of the $2^{nd}$ layer. <span style="color:blue">(True)</span> - $a^{[2]}_4$ is the activation output by the $4^{th}$ neuron of the $2^{nd}$ layer. <span style="color:blue">(True)</span> - $a^{[2](12)}$ denotes the activation vector of the $2^{nd}$ layer for the $12^{th}$ training example.<span style="color:blue">(True)</span> - $X$ is a matrix in which each column is one training example. <span style="color:blue">(True)</span> ##### week2-Q1. What does a neuron compute? <span style="color:blue"> Answer: A neuron computes a linear function `(z = Wx + b)` followed by an activation function. The output of a neuron is `a = g(Wx + b)` where `g` is the activation function (sigmoid, tanh, ReLU, ...). </span> ##### week3-Q3. Vectorized implementation of forward propagation for layer $l$, where $1 \leq l \leq L$? <span style="color:blue"> Answer: $$Z^{[l]} = W^{[l]} A^{[l-1]}+ b^{[l]}$$ $$A^{[l]} = g^{[l]}(Z^{[l]})$$ </span> ##### week-Q4. 
Vectorization allows you to compute forward propagation in an L-layer neural network without an explicit for-loop (or any other explicit iterative loop) over the layers l=1, 2, …, L. True/False?

<span style="color:blue"> Answer: False. Forward propagation propagates the input through the layers. Although for shallow networks we may just write out all the lines ($a^{[2]} = g^{[2]}(z^{[2]})$, $z^{[2]}= W^{[2]}a^{[1]}+b^{[2]}$, ...), in a deeper network we cannot avoid a for loop iterating over the layers: ($a^{[l]} = g^{[l]}(z^{[l]})$, $z^{[l]} = W^{[l]}a^{[l-1]} + b^{[l]}$, ...). </span>

##### week-Q5. Assume we store the values for $n^{[l]}$ in an array called layer_dims, as follows: layer_dims = $[n_x,4,3,2,1]$. So layer 1 has 4 hidden units, layer 2 has 3 hidden units, and so on. Which of the following for-loops will allow you to initialize the parameters for the model?

```python
parameter = {}
for i in range(1, len(layer_dims)):
    parameter['W' + str(i)] = np.random.randn(layer_dims[i], layer_dims[i-1]) * 0.01
    parameter['b' + str(i)] = np.random.randn(layer_dims[i], 1) * 0.01
```

##### week4-Q10. Whereas the previous question used a specific network, in the general case what is the dimension of $W^{[l]}$, the weight matrix associated with layer $l$?

<span style="color:blue"> Answer: $W^{[l]}$ has shape $(n^{[l]}, n^{[l-1]})$. </span>

##### week4-Q9. Which of the following statements are True? (Check all that apply.)

<img src="./images/n2.png" alt="n2.png" style="width: 450px;"/>

1. $W^{[1]}$ will have shape (4, 4). <span style="color:blue">True, shape of $W^{[l]}$ is $(n^{[l]}, n^{[l-1]})$</span>
2. $W^{[2]}$ will have shape (3, 4). <span style="color:blue">True, shape of $W^{[l]}$ is $(n^{[l]}, n^{[l-1]})$</span>
3. $W^{[3]}$ will have shape (1, 3). <span style="color:blue">True, shape of $W^{[l]}$ is $(n^{[l]}, n^{[l-1]})$</span>
4. $b^{[1]}$ will have shape (4, 1). <span style="color:blue">True, shape of $b^{[l]}$ is $(n^{[l]}, 1)$</span>
5. $b^{[2]}$ will have shape (3, 1).
<span style="color:blue">True, shape of $b^{[l]}$ is $(n^{[l]}, 1)$</span> 4. $b^{[3]}$ will have shape (1, 1). <span style="color:blue">True, shape of $b^{[l]}$ is $(n^{[l]}, 1)$</span> ##### week3-Q9. Consider the following 1 hidden layer neural network: ![1-hidden.png](images/1-hidden.png) Which of the following statements are True? (Check all that apply). - $W^{[1]}$ will have shape (4, 2) <span style="color:blue">(True)</span> - $W^{[2]}$ will have shape (1, 4) <span style="color:blue">(True)</span> - $b^{[1]}$ will have shape (4, 1) <span style="color:blue">(True)</span> - $b^{[2]}$ will have shape (1, 1) <span style="color:blue">(True)</span> ##### week2-Q6. Suppose you have nx input features per example. Recall that $X = [x^{(1)} x^{(2)} ... x^{(m)}]$. What is the dimension of X? <span style="color:blue"> Answer: $(n_x, m)$ </span> ##### week3-Q10. In the same network as the previous question, what are the dimensions of $Z^{[1]}$ and $A^{[1]}$? <span style="color:blue"> Answer: $Z^{[1]}$ and $A^{[1]}$ are (4, m). </span> ##### week3-Q4. You are building a binary classifier for recognizing cucumbers (y=1) vs. watermelons (y=0). Which one of these activation functions would you recommend using for the output layer? <span style="color:blue"> Answer: sigmoid. Sigmoid outputs a value between 0 and 1 which makes it a very good choice for binary classification. You can classify as 0 if the output is less than 0.5 and classify as 1 if the output is more than 0.5. It can be done with tanh as well but it is less convenient as the output is between -1 and 1. </span> ##### week3-Q2. The tanh activation usually works better than sigmoid activation function for hidden units because the mean of its output is closer to zero, and so it centers the data better for the next layer. True/False? <span style="color:blue"> Answer: True, as seen in lecture the output of the tanh is between -1 and 1, it thus centers the data which makes the learning simpler for the next layer. 
</span>

##### week3-Q8. You have built a network using the tanh activation for all the hidden units. You initialize the weights to relatively large values, using `np.random.randn(..,..)*1000`. What will happen?

<span style="color:blue"> Answer: This will cause the inputs of the tanh to also be very large. `tanh` becomes flat for large values, which leads its gradients to be close to zero; the optimization algorithm will thus become slow. </span>

##### week3-Q6. Suppose you have built a neural network. You decide to initialize the weights and biases to be zero. Which of the following statements is true?

<span style="color:blue"> Answer: Each neuron in the first hidden layer will perform the same computation. So even after multiple iterations of gradient descent, each neuron in the layer will be computing the same thing as the other neurons. </span>

##### week3-Q7. Logistic regression’s weights w should be initialized randomly rather than to all zeros, because if you initialize to all zeros, then logistic regression will fail to learn a useful decision boundary because it will fail to “break symmetry”. True/False?

<span style="color:blue"> Answer: False. Logistic regression doesn't have a hidden layer. If you initialize the weights to zeros, the first example x fed into the logistic regression will output zero, but the derivatives of logistic regression depend on the input x (because there's no hidden layer), which is not zero. So at the second iteration, the weight values follow x's distribution and are different from each other if x is not a constant vector. </span>

##### week4-Q1. What is the "cache" used for in our implementation of forward propagation and backward propagation?

<span style="color:blue"> Answer: We use it to pass variables computed during forward propagation to the corresponding backward propagation step.
It contains useful values for backward propagation to compute derivatives. The "cache" records values from the forward propagation units and sends it to the backward propagation units because it is needed to compute the chain rule derivatives. </span> ##### week4-Q2. Among the following, which ones are "hyperparameters"? (Check all that apply.) <span style="color:blue"> Answer: learning rate $\alpha$, number of layers $L$ in the neural network, number of iterations, size of the hidden layers $n^{[l]}$. </span> <span style="color:red"> Not hyperparameters: bias vectors $b^{[l]}$, weight matrices $W^{[l]}$, activation values $a^{[l]}$. </span> ##### week4-Q3. Which of the following statements is true? 1. The deeper layers of a neural network are typically computing more complex features of the input than the earlier layers. 2. The earlier layers of a neural network are typically computing more complex features of the input than the deeper layers. <span style="color:blue"> Answer: 1. </span> ##### week-Q7. During forward propagation, in the forward function for a layer l you need to know what is the activation function in a layer (Sigmoid, tanh, ReLU, etc.). During backpropagation, the corresponding backward function also needs to know what is the activation function for layer l, since the gradient depends on it. True/False? <span style="color:blue"> Answer: True, as you've seen in the week 3 each activation has a different derivative. Thus, during backpropagation you need to know which activation was used in the forward propagation to be able to compute the correct derivative. </span> ##### week-Q8. There are certain functions with the following properties: (i) To compute the function using a shallow network circuit, you will need a large network (where we measure size by the number of logic gates in the network), but (ii) To compute it using a deep network circuit, you need only an exponentially smaller network. True/False? <span style="color:blue"> Answer: True. 
</span>
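The shape answers in the week 2 and week 3 quiz questions above can all be verified directly. A small NumPy check (not part of the quizzes themselves):

```python
import numpy as np

a = np.random.randn(2, 3)
b = np.random.randn(2, 1)
c = a + b                      # b is broadcast across the 3 columns of a
assert c.shape == (2, 3)

# "*" is element-wise and broadcasts; np.dot is matrix multiplication
d = np.random.randn(3, 3) * np.random.randn(3, 1)
assert d.shape == (3, 3)

e = np.dot(np.random.randn(12288, 150), np.random.randn(150, 45))
assert e.shape == (12288, 45)  # (rows of a, columns of b)

# keepdims=True preserves the reduced axis as size 1
B = np.sum(np.random.randn(4, 3), axis=1, keepdims=True)
assert B.shape == (4, 1)
```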
```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
%matplotlib inline
sns.set_style("whitegrid")
plt.style.use("fivethirtyeight")

df = pd.read_csv('diabetes.csv')
df[0:10]
pd.set_option("display.float_format", "{:.2f}".format)
df.describe()
df.info()

missing_values_count = df.isnull().sum()
total_cells = np.prod(df.shape)
total_missing = missing_values_count.sum()
percentage_missing = (total_missing/total_cells)*100
print(percentage_missing)

x = df.copy()
y = x.pop('Outcome')

from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(x, y, test_size=0.20, random_state=0)
from sklearn.metrics import accuracy_score
```

## Logistic Regression

```
from sklearn.linear_model import LogisticRegression

lr = LogisticRegression()
lr.fit(X_train, Y_train)
Y_pred_lr = lr.predict(X_test)
score_lr = round(accuracy_score(Y_pred_lr, Y_test)*100, 2)
print("The accuracy score achieved using Logistic Regression is: "+str(score_lr)+" %")
```

## Naive Bayes

```
from sklearn.naive_bayes import GaussianNB

nb = GaussianNB()
nb.fit(X_train, Y_train)
Y_pred_nb = nb.predict(X_test)
score_nb = round(accuracy_score(Y_pred_nb, Y_test)*100, 2)
print("The accuracy score achieved using Naive Bayes is: "+str(score_nb)+" %")
```

## Support Vector Machine

```
from sklearn import svm

sv = svm.SVC(kernel='linear')
sv.fit(X_train, Y_train)
Y_pred_svm = sv.predict(X_test)
score_svm = round(accuracy_score(Y_pred_svm, Y_test)*100, 2)
print("The accuracy score achieved using Linear SVM is: "+str(score_svm)+" %")
```

## KNN

```
from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors=7)
knn.fit(X_train, Y_train)
Y_pred_knn = knn.predict(X_test)
score_knn = round(accuracy_score(Y_pred_knn, Y_test)*100, 2)
print("The accuracy score achieved using KNN is: "+str(score_knn)+" %")
```

## XGBoost

```
import xgboost as xgb

xgb_model = xgb.XGBClassifier(objective="binary:logistic", random_state=42)
xgb_model.fit(X_train, Y_train)
Y_pred_xgb = xgb_model.predict(X_test)
score_xgb = round(accuracy_score(Y_pred_xgb, Y_test)*100, 2)
print("The accuracy score achieved using XGBoost is: "+str(score_xgb)+" %")
```

## Feature Scaling

```
from sklearn.preprocessing import StandardScaler

sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
```

# Neural Network

```
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(11, activation='relu', input_dim=8))
model.add(Dense(1, activation='sigmoid'))
model.summary()

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X_train, Y_train, validation_data=(X_test, Y_test), epochs=200, batch_size=10)

# Model accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()

# Model loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()

Y_pred_nn = model.predict(X_test)
rounded = [round(x[0]) for x in Y_pred_nn]
Y_pred_nn = rounded
score_nn = round(accuracy_score(Y_pred_nn, Y_test)*100, 2)
print("The accuracy score achieved using Neural Network is: "+str(score_nn)+" %")
```

# Convolutional Neural Network

```
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten, BatchNormalization
from tensorflow.keras.layers import Conv1D, MaxPool1D
from tensorflow.keras.optimizers import Adam

print(tf.__version__)
X_train.shape
X_test.shape
x_train = X_train.reshape(614, 8, 1)
x_test = X_test.reshape(154, 8, 1)
epochs = 200

model = Sequential()
model.add(Conv1D(filters=32, kernel_size=2, activation='relu', input_shape=(8, 1)))
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Conv1D(filters=32, kernel_size=2, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Conv1D(filters=32, kernel_size=2, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.summary()

model.compile(loss='binary_crossentropy', optimizer=Adam(learning_rate=0.00005), metrics=['accuracy'])
hists = model.fit(x_train, Y_train, validation_data=(x_test, Y_test), epochs=epochs, verbose=1)

# Model accuracy
plt.plot(hists.history['accuracy'])
plt.plot(hists.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()

# Model loss
plt.plot(hists.history['loss'])
plt.plot(hists.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()

# Predicting the test set results
y_pred_cnn = model.predict(x_test)
rounded = [round(x[0]) for x in y_pred_cnn]
Y_pred_cnn = rounded
score_cnn = round(accuracy_score(Y_pred_cnn, Y_test)*100, 2)
print("The accuracy score achieved using Convolutional Neural Network is: "+str(score_cnn)+" %")
```

# Artificial Neural Network

```
import keras
from keras.models import Sequential
from keras.layers import Dense

# Initialising the ANN
classifier = Sequential()
# Adding the input layer and the first hidden layer
classifier.add(Dense(activation="relu", input_dim=8, units=7, kernel_initializer="uniform"))
# Adding the output layer
classifier.add(Dense(activation="sigmoid", units=1, kernel_initializer="uniform"))
# Compiling the ANN
classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
# Fitting the ANN to the training set
hist = classifier.fit(X_train, Y_train, validation_data=(X_test, Y_test), batch_size=10, epochs=500)

# Model accuracy
plt.plot(hist.history['accuracy'])
plt.plot(hist.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()

# Model loss
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()

# Predicting the test set results
y_pred_ann = classifier.predict(X_test)
rounded = [round(x[0]) for x in y_pred_ann]
Y_pred_ann = rounded
score_ann = round(accuracy_score(Y_pred_ann, Y_test)*100, 2)
print("The accuracy score achieved using Artificial Neural Network is: "+str(score_ann)+" %")
```

## Model with best score

```
scores = [score_lr, score_nb, score_svm, score_knn, score_xgb, score_nn, score_ann, score_cnn]
algorithms = ["Logistic Regression", "Naive Bayes", "Support Vector Machine", "K-Nearest Neighbors",
              "XGBoost", "Neural Network", "Art. Neural Network", "Conv. Neural Network"]

for i in range(len(algorithms)):
    print("The accuracy score achieved using "+algorithms[i]+" is: "+str(scores[i])+" %")

sns.set(rc={'figure.figsize': (15, 7)})
plt.xlabel("Algorithms")
plt.ylabel("Accuracy score")
sns.barplot(x=algorithms, y=scores)
```
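The final comparison loop can also be written without index arithmetic by pairing names and scores with `zip` — this is an alternative style, not the notebook's own code, and the scores below are placeholders rather than results from the notebook:

```python
# Hypothetical scores standing in for score_lr, score_nb, score_svm
scores = [84.5, 79.2, 82.0]
algorithms = ["Logistic Regression", "Naive Bayes", "Support Vector Machine"]

for name, score in zip(algorithms, scores):
    print(f"The accuracy score achieved using {name} is: {score} %")

# max over (score, name) pairs compares scores first, picking the best model
best_score, best_name = max(zip(scores, algorithms))
print(best_name)  # Logistic Regression
```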
# Importing the Libraries ``` import numpy as np import pandas as pd import tensorflow as tf import seaborn as sns from scipy import interp import matplotlib.pyplot as plt from itertools import cycle # Importing the Keras libraries and packages from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras.layers import LSTM from tensorflow.keras.layers import Dropout from tensorflow.keras.layers import Activation from tensorflow.keras.optimizers import Adam from tensorflow.keras import callbacks # Importing the libraries for evaluation from sklearn.metrics import roc_curve, auc from sklearn.metrics import classification_report from sklearn.metrics import (precision_score, recall_score,f1_score) from sklearn.metrics import multilabel_confusion_matrix from sklearn.metrics import balanced_accuracy_score from sklearn.metrics import precision_recall_curve from imblearn.metrics import geometric_mean_score ``` # Loading the dataset ``` def load_dataset(): X_train_load = np.loadtxt('data\X_train_reshaped_multi.csv', delimiter=',') X_train_scaled = np.reshape(X_train_load, (X_train_load.shape[0], X_train_load.shape[1], 1)) X_test_load = np.loadtxt('data\X_test_reshaped_multi.csv', delimiter=',') X_test_scaled = np.reshape(X_test_load, (X_test_load.shape[0], X_test_load.shape[1], 1)) y_train_scaled = np.loadtxt('data\y_train_reshaped_multi.csv', delimiter=',') y_test_scaled = np.loadtxt('data\y_test_reshaped_multi.csv', delimiter=',') X_val_load = np.loadtxt('data\X_val_reshaped_multi.csv', delimiter=',') X_val_scaled = np.reshape(X_val_load, (X_val_load.shape[0], X_val_load.shape[1], 1)) y_val_scaled = np.loadtxt('data\y_val_reshaped_multi.csv', delimiter=',') return X_train_scaled, X_test_scaled, y_train_scaled, y_test_scaled, X_val_scaled, y_val_scaled ``` # Creating the LSTM model for multi-class classification ``` def create_model(X_train_scaled): model = Sequential() # Adding the first LSTM layer and Dropout 
regularization model.add(LSTM(units= 76, return_sequences= True, input_shape= ( X_train_scaled.shape[1], 1))) model.add(Dropout(0.2)) # Adding the second LSTM layer and Dropout regularization model.add(LSTM(units= 76, return_sequences= True)) model.add(Dropout(0.2)) # Adding the third LSTM layer and Dropout regularization model.add(LSTM(units= 76, return_sequences= True)) model.add(Dropout(0.2)) # Adding the fourth LSTM layer and Dropout regularization model.add(LSTM(units= 76)) model.add(Dropout(0.2)) # Adding the output layer model.add(Dense(units= 15)) model.add(Activation('softmax')) opt = Adam(lr=0.00002) # Compiling the LSTM model.compile(optimizer= opt, loss= 'categorical_crossentropy', metrics=['accuracy']) model.summary() return model ``` # Training the model ``` def train_model(model, X_train_scaled, y_train_scaled, X_val_scaled, y_val_scaled): earlystopping = callbacks.EarlyStopping(monitor ="val_loss", mode ="min", patience = 5, restore_best_weights = True) hist = model.fit(X_train_scaled, y_train_scaled, batch_size = 1024, epochs = 40, validation_data =(X_val_scaled, y_val_scaled), callbacks = earlystopping) fin_epoch = earlystopping.stopped_epoch return(hist, fin_epoch) ``` # Evaluating the model ``` def evaluate_model(X_test_scaled, y_test_scaled, model, hist): # Predicting values y_pred = model.predict_classes(X_test_scaled) n_values = np.max(y_pred) + 1 y_prednew = np.eye(n_values)[y_pred] y_prednew = np.reshape(y_prednew, (y_prednew.shape[0], -1)) y_testnew = np.where(y_test_scaled==1)[1] y_prednew2 = model.predict(X_test_scaled) # Calculating the performance metrics training_loss = hist.history['loss'] training_acc = hist.history['accuracy'] loss, accuracy = model.evaluate(X_test_scaled, y_test_scaled) balanced_accuracy = balanced_accuracy_score(y_testnew, y_pred) gmean_score = geometric_mean_score(y_testnew, y_pred) recall = recall_score(y_test_scaled, y_prednew , average="weighted") precision = precision_score(y_test_scaled, y_prednew , 
average="weighted") f1 = f1_score(y_test_scaled, y_prednew, average="weighted") print("Training Loss:", training_loss) print("Training Accuracy:", training_acc) print("Overall Accuracy:", accuracy) print("Overall Loss:", loss) print("Balanced Accuracy:", balanced_accuracy) print("Geometric Mean:", gmean_score) print("Recall:", recall) print("Precision:", precision) print("F1 Score:", f1) # Multiclass Confusion Matrix multi_cm = multilabel_confusion_matrix(y_test_scaled, y_prednew) return(y_pred, y_prednew, y_prednew2, multi_cm, training_loss, training_acc) ``` # Plotting the results ``` # Plot Training Accuracy & Loss vs. Epochs def plot_acc_loss(fin_epoch, training_loss, training_acc): if fin_epoch > 0: epoch = fin_epoch else: epoch = 40 xc = range(epoch) plt.figure(1,figsize=(15,epoch)) plt.plot(xc,training_loss) plt.xlabel('No. of Epochs') plt.ylabel('loss') plt.title('Training Loss') plt.grid(True) plt.legend(['Train']) plt.figure(2,figsize=(15,epoch)) plt.plot(xc,training_acc) plt.xlabel('No. 
of Epochs') plt.ylabel('Accuracy') plt.title('Training Accuracy') plt.grid(True) plt.legend(['Train'],loc=4) # Plot the confusion matrix wrt one-vs-rest def calc_cm(multi_cm, axes, label, class_names, fontsize=25): df_cm = pd.DataFrame( multi_cm, index=class_names, columns=class_names) try: sns.set(font_scale=2.2) heatmap = sns.heatmap(df_cm, annot=True, fmt="d", cbar=False, ax=axes, cmap="Blues") except ValueError: raise ValueError("CM values must be integers.") heatmap.yaxis.set_ticklabels(heatmap.yaxis.get_ticklabels(), rotation=0, ha='right', fontsize=fontsize) heatmap.xaxis.set_ticklabels(heatmap.xaxis.get_ticklabels(), rotation=45, ha='right', fontsize=fontsize) axes.set_ylabel('True label') axes.set_xlabel('Predicted label') axes.set_title("CM for the class - " + label) # Plot ROC Curve def plot_roc_auc(y_test_scaled, y_prednew2, class_labels): # Compute ROC curve and ROC area for each class fpr = dict() tpr = dict() roc_auc = dict() n_classes = len(class_labels) for i in range(n_classes): fpr[i], tpr[i], _ = roc_curve(y_test_scaled[:, i], y_prednew2[:, i]) roc_auc[i] = auc(fpr[i], tpr[i]) # Compute micro-average ROC curve and ROC area fpr["micro"], tpr["micro"], _ = roc_curve(y_test_scaled.ravel(), y_prednew2.ravel()) roc_auc["micro"] = auc(fpr["micro"], tpr["micro"]) # Compute macro-average value for ROC curve and ROC area all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)])) mean_tpr = np.zeros_like(all_fpr) for i in range(n_classes): mean_tpr += interp(all_fpr, fpr[i], tpr[i]) mean_tpr /= n_classes fpr["macro"] = all_fpr tpr["macro"] = mean_tpr roc_auc["macro"] = auc(fpr["macro"], tpr["macro"]) # Plot all ROC curves fig = plt.figure(figsize=(8,6)) plt.tick_params(axis='both', which='major', labelsize=13) prop_cycle = plt.rcParams['axes.prop_cycle'] colors = plt.cm.jet(np.linspace(0, 1, 15)) for i,color in zip(range(n_classes), colors): plt.plot(fpr[i], tpr[i], color=color, lw=1.5, label="{}, AUC={:.3f}".format(class_labels[i], 
roc_auc[i])) plt.plot([0,1], [0,1], color='orange', linestyle='--') plt.xticks(np.arange(0.0, 1.1, step=0.1)) plt.xlabel("False Positive Rate", fontsize=15) plt.yticks(np.arange(0.0, 1.1, step=0.1)) plt.ylabel("True Positive Rate", fontsize=15) plt.title('ROC Curve Analysis', fontweight='bold', fontsize=15) plt.legend(prop={'size':13}, loc='center left', bbox_to_anchor=(1, 0.5)) plt.show() # Plot PR curve def plot_pr_auc(y_test_scaled, y_prednew, class_labels): precision = dict() recall = dict() pr_auc = dict() n_classes = len(class_labels) colors = plt.cm.jet(np.linspace(0, 1, 15)) for i in range(n_classes): precision[i], recall[i], _ = precision_recall_curve(y_test_scaled[:, i], y_prednew[:, i]) pr_auc[i] = auc(recall[i], precision[i]) fig = plt.figure(figsize=(8,6)) plt.tick_params(axis='both', which='major', labelsize=13) for i,color in zip(range(n_classes), colors): plt.plot(recall[i], precision[i], color=color, lw=1.5, label="{}, AUC={:.3f}".format(class_labels[i], pr_auc[i])) plt.plot([0,1], [0.5,0.5], color='orange', linestyle='--') plt.xticks(np.arange(0.0, 1.1, step=0.1)) plt.xlabel("Recall Rate", fontsize=15) plt.yticks(np.arange(0.0, 1.1, step=0.1)) plt.ylabel("Precision Rate", fontsize=15) plt.title('Precision Recall Curve', fontweight='bold', fontsize=15) plt.legend(prop={'size':13}, loc='center left', bbox_to_anchor=(1, 0.5)) plt.show() def main(): class_labels = ["Benign", "Bot", "Brute Force -Web", "Brute Force -XSS", "DDOS attack-HOIC", "DDOS attack-LOIC-UDP", "DDoS attacks-LOIC-HTTP", "DoS attacks-GoldenEye", "DoS attacks-Hulk", "DoS attacks-SlowHTTPTest", "DoS attacks-Slowloris", "FTP-BruteForce", "Infiltration", "SQL Injection", "SSH-Bruteforce"] X_train_scaled, X_test_scaled, y_train_scaled, y_test_scaled, X_val_scaled, y_val_scaled = load_dataset() model = create_model(X_train_scaled) hist = train_model(model, X_train_scaled, y_train_scaled, X_val_scaled, y_val_scaled) # evaluate_model returns the training history, not a classification report, so unpack accordingly y_pred, y_prednew, y_prednew2, multi_cm, training_loss, training_acc = evaluate_model(X_test_scaled, y_test_scaled, model, hist) # Plot Training Accuracy & Loss plot_acc_loss(len(training_loss), training_loss, training_acc) # Plot Classification Report (assumes classification_report is imported from sklearn.metrics) class_report = classification_report(y_test_scaled, y_prednew, target_names=class_labels, output_dict=True) sns.set(font_scale=0.8) sns.heatmap(pd.DataFrame(class_report).iloc[:-1, :].T, annot=True) # Plot Confusion Matrix fig, ax = plt.subplots(2, 2, figsize=(20, 20)) for axes, cfs_matrix, label in zip(ax.flatten(), multi_cm, class_labels): calc_cm(cfs_matrix, axes, label, ["N", "Y"]) fig.tight_layout() plt.show() # Plot ROC Curve plot_roc_auc(y_test_scaled, y_prednew2, class_labels) # Plot PR Curve plot_pr_auc(y_test_scaled, y_prednew, class_labels) ```
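For reference, `multilabel_confusion_matrix` builds one 2x2 matrix per class in one-vs-rest fashion, laid out as [[TN, FP], [FN, TP]]. A minimal pure-Python sketch of that computation (the helper name `one_vs_rest_cm` is made up for illustration):

```python
def one_vs_rest_cm(y_true, y_pred, classes):
    # For each class c, build a 2x2 matrix [[TN, FP], [FN, TP]],
    # treating c as "positive" and every other label as "negative".
    cms = {}
    for c in classes:
        tn = fp = fn = tp = 0
        for t, p in zip(y_true, y_pred):
            if t == c and p == c:
                tp += 1
            elif t == c:
                fn += 1
            elif p == c:
                fp += 1
            else:
                tn += 1
        cms[c] = [[tn, fp], [fn, tp]]
    return cms

cms = one_vs_rest_cm(["a", "a", "b", "c"], ["a", "b", "b", "b"], ["a", "b", "c"])
print(cms["a"])  # → [[2, 0], [1, 1]]
```

This is why the per-class heatmaps above only need the two tick labels "N" and "Y": each matrix is a binary problem for its class.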
``` import sys sys.path.append('../scripts/') from mcl import * from kf import * class EstimatedLandmark(Landmark): def __init__(self): super().__init__(0,0) self.cov = None def draw(self, ax, elems): if self.cov is None: return ## draw a blue star at the estimated position ## c = ax.scatter(self.pos[0], self.pos[1], s=100, marker="*", label="landmarks", color="blue") elems.append(c) elems.append(ax.text(self.pos[0], self.pos[1], "id:" + str(self.id), fontsize=10)) ## draw the error ellipse ## e = sigma_ellipse(self.pos, self.cov, 3) elems.append(ax.add_patch(e)) class MapParticle(Particle): def __init__(self, init_pose, weight, landmark_num): super().__init__(init_pose, weight) self.map = Map() for i in range(landmark_num): self.map.append_landmark(EstimatedLandmark()) def init_landmark_estimation(self, landmark, z, distance_dev_rate, direction_dev): landmark.pos = z[0]*np.array([np.cos(self.pose[2] + z[1]), np.sin(self.pose[2] + z[1])]).T + self.pose[0:2] H = matH(self.pose, landmark.pos)[0:2,0:2] # take the upper-left 2x2 block of the Kalman filter's H Q = matQ(distance_dev_rate*z[0], direction_dev) landmark.cov = np.linalg.inv(H.T.dot( np.linalg.inv(Q) ).dot(H)) def observation_update_landmark(self, landmark, z, distance_dev_rate, direction_dev): ###fastslam4landestm estm_z = IdealCamera.observation_function(self.pose, landmark.pos) # measurement expected from the landmark's estimated position if estm_z[0] < 0.01: # skip when the estimated position is too close, which breaks the computation return H = - matH(self.pose, landmark.pos)[0:2,0:2] # the sign must be consistent here Q = matQ(distance_dev_rate*estm_z[0], direction_dev) K = landmark.cov.dot(H.T).dot( np.linalg.inv(Q + H.dot(landmark.cov).dot(H.T)) ) landmark.pos = K.dot(z - estm_z) + landmark.pos landmark.cov = (np.eye(2) - K.dot(H)).dot(landmark.cov) def observation_update(self, observation, distance_dev_rate, direction_dev): ###fastslam4obsupdate for d in observation: z = d[0] landmark = self.map.landmarks[d[1]] if landmark.cov is None: self.init_landmark_estimation(landmark, z, distance_dev_rate, direction_dev) else: # added self.observation_update_landmark(landmark, z, distance_dev_rate, 
direction_dev) class FastSlam(Mcl): def __init__(self, init_pose, particle_num, landmark_num, motion_noise_stds={"nn":0.19, "no":0.001, "on":0.13, "oo":0.2},\ distance_dev_rate=0.14, direction_dev=0.05): super().__init__(None, init_pose, particle_num, motion_noise_stds, distance_dev_rate, direction_dev) self.particles = [MapParticle(init_pose, 1.0/particle_num, landmark_num) for i in range(particle_num)] self.ml = self.particles[0] def observation_update(self, observation): for p in self.particles: p.observation_update(observation, self.distance_dev_rate, self.direction_dev) # self.map removed self.set_ml() self.resampling() def draw(self, ax, elems): super().draw(ax, elems) self.ml.map.draw(ax, elems) def trial(): time_interval = 0.1 world = World(30, time_interval, debug=False) ### create the true map ### m = Map() for ln in [(-4,2), (2,-3), (3,3)]: m.append_landmark(Landmark(*ln)) world.append(m) ### create the robot ### init_pose = np.array([0,0,0]).T pf = FastSlam(init_pose,100, len(m.landmarks)) a = EstimationAgent(time_interval, 0.2, 10.0/180*math.pi, pf) r = Robot(init_pose, sensor=Camera(m), agent=a, color="red") world.append(r) world.draw() trial() #a.estimator.particles[10].map.landmarks[2].cov #math.sqrt(0.0025) ```
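In `observation_update_landmark` above, each landmark estimate is refined with a small Kalman measurement update: the gain K pulls the estimate toward the measurement and shrinks the covariance. A one-dimensional sketch of the same mechanics (illustrative only, not the book's code):

```python
def kalman_update(mu, var, z, meas_var):
    # Scalar Kalman measurement update: the gain k trades off the current
    # uncertainty `var` against the measurement noise `meas_var`.
    k = var / (var + meas_var)
    mu = mu + k * (z - mu)    # pull the estimate toward the measurement
    var = (1.0 - k) * var     # the variance can only shrink
    return mu, var

mu, var = 0.0, 4.0            # vague initial landmark estimate
for z in [1.0, 1.2, 0.9]:     # three noisy range observations
    mu, var = kalman_update(mu, var, z, meas_var=1.0)
print(round(mu, 3), round(var, 3))  # → 0.954 0.308
```

The 2D update in the particle code is the same formula with H mapping landmark position to expected measurement and Q playing the role of `meas_var`.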
# Welcome to Enterprise-Scale! ## Verify Pre-req Powershell Version > 7.0 ``` $psversiontable ``` Git Version > 2.24 ``` git --version ``` ## Login to Azure Clear Azure Context ``` Clear-AzContext -Force ``` Login to Azure with SPN or User Account that has permission at '/' scope ``` $user = "" $password = "" $tenantid = "" $secureStringPwd = $password | ConvertTo-SecureString -AsPlainText -Force $cred = New-Object System.Management.Automation.PSCredential -ArgumentList $user, $secureStringPwd Connect-AzAccount -TenantId $tenantid -ServicePrincipal -Credential $cred ``` Verify SPN/user account is logged in for the Tenant ``` get-azcontext | fl ``` ## Bootstrap new Tenant Set GitHub token to access raw content ``` $GitHubToken = 'AD4QREEEQ7XNHXIAN4IHMSK62YTRG' Write-Output $GitHubToken ``` View Template File ``` echo "https://raw.githubusercontent.com/Azure/CET-NorthStar/master/examples/Enterprise-Scale-Template-Deployment.json?token=$GitHubToken" (Invoke-WebRequest -Uri "https://raw.githubusercontent.com/Azure/CET-NorthStar/master/examples/Enterprise-Scale-Template-Deployment.json?token=$GitHubToken").Content | ConvertFrom-Json ``` Set Management Group Prefix ``` $TopLevelManagementGroupPrefix = 'ES' $TemplateParameterObject = @{'TopLevelManagementGroupPrefix'='ES'} ``` Initialize Tenant Deployment Parameter ``` $parameters = @{ 'Name' = 'Enterprise-Scale-Template' 'Location' = 'North Europe' 'TemplateUri' = "https://raw.githubusercontent.com/Azure/CET-NorthStar/master/examples/Enterprise-Scale-Template-Deployment.json?token=$GitHubToken" 'TemplateParameterObject' = $TemplateParameterObject 'Verbose' = $true } ``` Invoke Tenant Level Deployment ``` New-AzTenantDeployment @parameters ``` View Tenant Level Deployment ``` Get-AzTenantDeployment | select DeploymentName, ProvisioningState, Timestamp,location |sort-object Timestamp -Descending ``` View Management Group Level Deployment ``` Get-AzManagementGroupDeployment -ManagementGroupId $TopLevelManagementGroupPrefix 
| select DeploymentName, ProvisioningState, Timestamp |sort-object Timestamp -Descending ``` ## Setting up Git Ephemeral space for Git ``` jupyter --runtime-dir ``` Git clone your repo (skip this step if you have already cloned). Please ensure your Git credentials are available for PowerShell to use in your session. ``` git clone https://github.com/uday31in/NorthStar.git ``` Change Path to Git Root ``` Write-Host "Changing Current Directory to: $(jupyter --runtime-dir)\NorthStar" cd "$(jupyter --runtime-dir)\NorthStar" ``` Add upstream repo ``` git remote add upstream https://github.com/Azure/CET-NorthStar.git ``` Verify Remote ``` git remote -v ``` Pull latest upstream/master into your local master branch ``` git pull upstream master -X theirs -f ``` ## Initialize Environment Ensure Current Path is set to Git Root of your repo ``` Write-Host "Changing Current Directory to: $(jupyter --runtime-dir)\NorthStar" cd "$(jupyter --runtime-dir)\NorthStar" ``` Ensure Azure Login ``` Get-AzContext | fl ``` Import PowerShell Module ``` Import-Module .\src\AzOps.psd1 -force Get-ChildItem -Path .\src -Include *.ps1 -Recurse | ForEach-Object { .$_.FullName } ``` Initialize the Git repo for your Azure environment. 
Please note: this will take a few minutes to complete, depending on the size of the environment ``` Initialize-AzOpsRepository -Verbose -SkipResourceGroup ``` Commit Change to Feature Branch "initial-discovery" ``` git checkout -b initial-discovery ``` Commit Changes to AzOps ``` git add .\azops ``` View Git Status ``` git status ``` Git commit ``` git commit -m "Initializing Azure Environment" ``` Push your changes to your Git repo ``` git push origin initial-discovery ``` Submit PR in Git Portal and merge to master before proceeding to the next step ``` git remote get-url --all origin ``` ## Enable Git Action Ensure Current Path is set to Git Root of your repo ``` Write-Host "Changing Current Directory to: $(jupyter --runtime-dir)\NorthStar" cd "$(jupyter --runtime-dir)\NorthStar" ``` Commit Change to Feature Branch "enable-git-action" ``` git checkout -b enable-git-action ``` Enable Action by copying ".github\workflows\azops-push.yml.disabled" to ".github\workflows\azops-push.yml" ``` copy "$(jupyter --runtime-dir)\NorthStar\.github\workflows\azops-push.yml.disabled" "$(jupyter --runtime-dir)\NorthStar\.github\workflows\azops-push.yml" ``` Add File to Git ``` git add .github\workflows\azops-push.yml ``` View Git Status ``` git status ``` Git commit ``` git commit -m "Enable Git Action" ``` Push your changes to your Git repo ``` git push origin enable-git-action ``` Submit PR in Git Portal and merge to master ``` git remote get-url --all origin ``` ## Deploying New Policy Assignment using pipeline Ensure Current Path is set to Git Root of your repo ``` Write-Host "Changing Current Directory to: $(jupyter --runtime-dir)\NorthStar" cd "$(jupyter --runtime-dir)\NorthStar" ``` Create Branch deploy-loganalytics ``` git checkout -b deploy-loganalytics ``` View Policy Assignment ``` echo 
'https://github.com/Azure/CET-NorthStar/raw/master/azopsreference/3fc1081d-6105-4e19-b60c-1ec1252cf560/contoso/platform/management/.AzState/Microsoft.Authorization_policyAssignments-Deploy-Log-Analytics.parameters.json' ``` Create Policy Assignment Parameter file ``` @" { "`$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#", "contentVersion": "1.0.0.0", "parameters": { "input": { "value": { "Name": "Deploy-Log-Analytics", "ResourceType": "Microsoft.Authorization/policyAssignments", "Location": "northeurope", "Identity": { "type": "SystemAssigned" }, "Properties": { "displayName": "Deploy-LogAnalytics", "policyDefinitionId": "/providers/Microsoft.Management/managementGroups/$($TopLevelManagementGroupPrefix)/providers/Microsoft.Authorization/policyDefinitions/Deploy-Log-Analytics", "scope": "/providers/Microsoft.Management/managementGroups/$($TopLevelManagementGroupPrefix)-management", "notScopes": [], "parameters": { "workspaceName": { "value": "$($TopLevelManagementGroupPrefix)-weu-la" }, "automationAccountName": { "value": "$($TopLevelManagementGroupPrefix)-weu-aa" }, "workspaceRegion": { "value": "westeurope" }, "automationRegion": { "value": "westeurope" }, "rgName": { "value": "$($TopLevelManagementGroupPrefix)-weu-mgmt" } }, "enforcementMode": "Default" } } } } } "@ > ".\azops\Tenant Root Group\ES\ES-platform\ES-management\.AzState\Microsoft.Authorization_policyAssignments-Deploy-Log-Analytics.parameters.json" ``` Add File to Git ``` git add ".\azops\Tenant Root Group\ES\ES-platform\ES-management\.AzState\Microsoft.Authorization_policyAssignments-Deploy-Log-Analytics.parameters.json" ``` View Git Status ``` git status ``` Git commit ``` git commit -m "Deploy Log Analytics Policy" ``` Push your changes to your Git repo ``` git push origin deploy-loganalytics ``` Submit PR in Git Portal and wait for the GitHub action to complete. DO NOT merge the pull request to the master branch before the GitHub action completes. 
Go to the Portal and verify the Policy Assignment is created. Pull the master branch locally ``` git checkout master && git pull ``` ## Demo Drift Detection (Manual) Use the Portal to make changes, e.g. add a new management group or update an existing policy definition or assignment. To simulate OOB changes, we make an imperative change via PowerShell. ``` $GroupName = "$TopLevelManagementGroupPrefix-IAB" $ParentId = "/providers/Microsoft.Management/managementGroups/$TopLevelManagementGroupPrefix" New-AzManagementGroup -GroupName $GroupName -DisplayName $GroupName -ParentId $ParentId ``` Create Branch deploy-vWan ``` git checkout -b deploy-vWan ``` View Policy Assignment ``` echo 'https://github.com/Azure/CET-NorthStar/blob/master/azopsreference/3fc1081d-6105-4e19-b60c-1ec1252cf560/contoso/platform/connectivity/.AzState/Microsoft.Authorization_policyAssignments-Deploy-vWAN.parameters.json' ``` Create Policy Assignment Parameter file ``` @" { "`$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#", "contentVersion": "1.0.0.0", "parameters": { "input": { "value": { "Name": "Deploy-VWAN", "ResourceType": "Microsoft.Authorization/policyAssignments", "Location": "northeurope", "Identity": { "type": "SystemAssigned" }, "Properties": { "displayName": "Deploy-vWAN", "policyDefinitionId": "/providers/Microsoft.Management/managementGroups/$($TopLevelManagementGroupPrefix)/providers/Microsoft.Authorization/policyDefinitions/Deploy-vWAN", "scope": "/providers/Microsoft.Management/managementGroups/$($TopLevelManagementGroupPrefix)-connectivity", "notScopes": [], "parameters": { "vwanname": { "value": "$($TopLevelManagementGroupPrefix)-vwan" }, "vwanRegion": { "value": "northeurope" }, "rgName": { "value": "$($TopLevelManagementGroupPrefix)-global-vwan" } }, "description": "", "enforcementMode": "Default" } } } } } "@ > ".\azops\Tenant Root Group\ES\ES-platform\ES-connectivity\.AzState\Microsoft.Authorization_policyAssignments-Deploy-vWAN.parameters.json" 
``` Add File to Git ``` git add ".\azops\Tenant Root Group\ES\ES-platform\ES-connectivity\.AzState\Microsoft.Authorization_policyAssignments-Deploy-vWAN.parameters.json" ``` View Git Status ``` git status ``` Git commit ``` git commit -m "Deploy vWAN Policy" ``` Push your changes to your Git repo ``` git push origin deploy-vWan ``` Submit PR in Git Portal and wait for the GitHub action to complete. DO NOT merge the pull request to the master branch before the GitHub action completes. When the Git Action runs, it should detect the out-of-band change made above. ## Clean-up Previous Install Import-Module ``` Import-Module .\src\AzOps.psd1 -force Get-ChildItem -Path .\src -Include *.ps1 -Recurse | ForEach-Object { .$_.FullName } ``` Management Group To Clean-up ``` $ManagementGroupPrefix = "ES" ``` Clean-up Management Group ``` if (Get-AzManagementGroup -GroupName $ManagementGroupPrefix -ErrorAction SilentlyContinue) { Write-Verbose "Cleaning up Tailspin Management Group" Remove-AzOpsManagementGroup -groupName $ManagementGroupPrefix -Verbose } ``` Clean-up Tenant Deployment If you see an error "Your Azure credentials have not been set up or have expired", please re-run the command. It might take several retries. 
``` # Clean up Tenant Level Deployments Get-AzTenantDeployment | Foreach-Object -Parallel { Remove-AzTenantDeployment -Name $_.DeploymentName -Confirm:$false} ``` Delete initial-discovery remote branch ``` git branch -D initial-discovery git push origin --delete initial-discovery ``` Delete enable-git-action remote branch ``` git branch -D enable-git-action git push origin --delete enable-git-action ``` Delete deploy-loganalytics remote branch ``` git branch -D deploy-loganalytics git push origin --delete deploy-loganalytics ``` Delete deploy-vWan remote branch ``` git branch -D deploy-vWan git push origin --delete deploy-vWan ``` Reset upstream master branch ``` git checkout master -f git pull upstream master git reset --hard upstream/master git push -f ``` Remove Local Git Folder ``` rm -recurse -force "$(jupyter --runtime-dir)\NorthStar" ```
# **Data Science from Scratch** (밑바닥부터 시작하는 데이터과학) - https://github.com/joelgrus/data-science-from-scratch/blob/master/first-edition/code-python3/network_analysis.py ## **Linear algebra helper functions** We re-define the functions built in earlier chapters ``` # linear_algebra.py import math def dot(v, w): return sum(v_i * w_i for v_i, w_i in zip(v, w)) def vector_subtract(v, w): return [v_i - w_i for v_i, w_i in zip(v, w)] def sum_of_squares(v): return dot(v, v) def squared_distance(v, w): return sum_of_squares(vector_subtract(v, w)) def magnitude(v): return math.sqrt(sum_of_squares(v)) def distance(v, w): return math.sqrt(squared_distance(v, w)) def scalar_multiply(c, v): return [c * v_i for v_i in v] def get_row(A, i): return A[i] def get_column(A, j): return [A_i[j] for A_i in A] def make_matrix(num_rows, num_cols, entry_fn): return [[entry_fn(i, j) for j in range(num_cols)] for i in range(num_rows)] def shape(A): num_rows = len(A) num_cols = len(A[0]) if A else 0 return num_rows, num_cols ``` <br></br> # **Chapter 21: Network Analysis** Many data problems can be viewed as a **network** made up of **nodes** and the **edges** that connect them 1. Individual entities are the **nodes**, and the relationships between them are the **edges** 1. These relationships can be distinguished by whether or not they are mutual, in which case we also speak of a **directed network** 1. One way to analyze a network is to compute the **shortest path between any two people**. 
<img src="https://miro.medium.com/max/912/1*EKWy0bQjzoJ4RJ-iTTLa4w.png" align="left" width="400"> ## **1 Creating the edge and node data** We load the **friendship network** data used in Chapter 1 ``` # the friendship network from Chapter 1 users = [ { "id": 0, "name": "Hero" }, { "id": 1, "name": "Dunn" }, { "id": 2, "name": "Sue" }, { "id": 3, "name": "Chi" }, { "id": 4, "name": "Thor" }, { "id": 5, "name": "Clive" }, { "id": 6, "name": "Hicks" }, { "id": 7, "name": "Devin" }, { "id": 8, "name": "Kate" }, { "id": 9, "name": "Klein" } ] friendships = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (5, 6), (5, 7), (6, 8), (7, 8), (8, 9)] # add a friends list to each user's dict for user in users: user["friends"] = [] for i, j in friendships: users[i]["friends"].append(users[j]) # add j as a friend of i users[j]["friends"].append(users[i]) # and i as a friend of j ``` ## **2 Betweenness centrality** 1. When analyzing relationships, **nodes that appear frequently on shortest paths** carry more importance 1. How centrally located a **node** is in this sense is measured by **betweenness centrality** ``` from collections import deque # {dict} of shortest paths from a given user to every other user def shortest_paths_from(from_user): shortest_paths_to = { from_user["id"] : [[]] } # queue of (prev_user, next_user) pairs, seeded with (from_user, friend) frontier = deque((from_user, friend) for friend in from_user["friends"]) # repeat until the queue is empty while frontier: prev_user, user = frontier.popleft() # first pair in the queue user_id = user["id"] # extend the paths to prev_user by one step to user paths_to_prev = shortest_paths_to[prev_user["id"]] paths_via_prev = [path + [user_id] for path in paths_to_prev] old_paths_to_here = shortest_paths_to.get(user_id, []) # if we already know a shortest path if old_paths_to_here: # what's the shortest path so far? 
min_path_length = len(old_paths_to_here[0]) else: min_path_length = float('inf') # keep only genuinely new paths that aren't too long new_paths_to_here = [path_via_prev for path_via_prev in paths_via_prev if len(path_via_prev) <= min_path_length and path_via_prev not in old_paths_to_here] shortest_paths_to[user_id] = old_paths_to_here + new_paths_to_here frontier.extend((user, friend) # add never-seen neighbors to the frontier for friend in user["friends"] if friend["id"] not in shortest_paths_to) return shortest_paths_to # compute betweenness centrality for user in users: user["shortest_paths"] = shortest_paths_from(user) for user in users: user["betweenness_centrality"] = 0.0 for source in users: source_id = source["id"] for target_id, paths in source["shortest_paths"].items(): if source_id < target_id: # don't double count num_paths = len(paths) # how many shortest paths? contrib = 1 / num_paths # contribution to centrality for path in paths: for id in path: if id not in [source_id, target_id]: users[id]["betweenness_centrality"] += contrib print("Betweenness Centrality >>") for user in users: print(user["id"], user["betweenness_centrality"]) ``` ## **3 Closeness centrality** First we measure each user's **farness** (the sum of the lengths of the shortest paths from that user to every other user). 1. It is **complicated to measure**, and **the spread of closeness-centrality values is small**, so it is not used very often ``` # sum of the shortest-path lengths to every other user def farness(user): return sum(len(paths[0]) for paths in user["shortest_paths"].values()) for user in users: user["closeness_centrality"] = 1 / farness(user) print("Closeness Centrality >>") for user in users: print(user["id"], user["closeness_centrality"]) ``` ## **4 Eigenvector centrality** Although less intuitive, eigenvector centrality is **easy to compute** and therefore often used 1. Users with many connections, and connections to high-centrality users, have a **high eigenvector centrality** 1. 
**Users 1 and 2** have high centrality because **they are each connected three times to users who themselves have high centrality** ``` # matrix multiplication from functools import partial def matrix_product_entry(A, B, i, j): return dot(get_row(A, i), get_column(B, j)) def matrix_multiply(A, B): n1, k1 = shape(A) n2, k2 = shape(B) if k1 != n2: raise ArithmeticError("incompatible shapes!") return make_matrix(n1, k2, partial(matrix_product_entry, A, B)) # convert a vector ([list]) into an n x 1 matrix def vector_as_matrix(v): return [[v_i] for v_i in v] # convert an n x 1 matrix back into a [list] def vector_from_matrix(v_as_matrix): return [row[0] for row in v_as_matrix] def matrix_operate(A, v): v_as_matrix = vector_as_matrix(v) product = matrix_multiply(A, v_as_matrix) return vector_from_matrix(product) # compute an eigenvector by power iteration def find_eigenvector(A, tolerance=0.00001): guess = [1 for __ in A] while True: result = matrix_operate(A, guess) length = magnitude(result) next_guess = scalar_multiply(1/length, result) if distance(guess, next_guess) < tolerance: return next_guess, length # eigenvector, eigenvalue guess = next_guess # compute the centralities def entry_fn(i, j): return 1 if (i, j) in friendships or (j, i) in friendships else 0 n = len(users) adjacency_matrix = make_matrix(n, n, entry_fn) eigenvector_centralities, _ = find_eigenvector(adjacency_matrix) print("Eigenvector Centrality >>") for user_id, centrality in enumerate(eigenvector_centralities): print(user_id, centrality) ``` ## **5 Directed graphs and PageRank** We count **how many endorsements** each data scientist has received from the others. 1. The data is a [list] of (endorser, endorsee) id pairs 1. Raw counts can be gamed, e.g. by creating **several fake accounts** or by having **friends endorse each other** 1. A way around this is to also account for **who is doing the endorsing**. 
``` # directed graphs endorsements = [(0, 1), (1, 0), (0, 2), (2, 0), (1, 2), (2, 1), (1, 3), (2, 3), (3, 4), (5, 4), (5, 6), (7, 5), (6, 8), (8, 7), (8, 9)] for user in users: user["endorses"] = [] # [list] of users this user endorses user["endorsed_by"] = [] # [list] of users who endorse this user for source_id, target_id in endorsements: users[source_id]["endorses"].append(users[target_id]) users[target_id]["endorsed_by"].append(users[source_id]) endorsements_by_id = [(user["id"], len(user["endorsed_by"])) for user in users] sorted(endorsements_by_id, key=lambda x: x[1], reverse=True) ``` Because such endorsements are easy to manipulate, we apply the **PageRank** algorithm instead. 1. Every node is given the **same initial value**. 1. At **each step**, every node distributes **the PageRank assigned to it evenly** 1. across its **outgoing links** ``` def page_rank(users, damping = 0.85, num_iters = 100): num_users = len(users) pr = { user["id"] : 1 / num_users for user in users } # same initial value # at each step, every node also receives a small base share of PageRank base_pr = (1 - damping) / num_users for __ in range(num_iters): next_pr = { user["id"] : base_pr for user in users } # PageRank is distributed across the outgoing links for user in users: links_pr = pr[user["id"]] * damping for endorsee in user["endorses"]: next_pr[endorsee["id"]] += links_pr / len(user["endorses"]) pr = next_pr return pr print("PageRank >>") for user_id, pr in page_rank(users).items(): print(user_id, pr) ```
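Two useful sanity checks on the damping scheme above: the ranks always sum to 1, and a symmetric cycle gives every node the same rank. A minimal standalone version over a plain adjacency dict (the `simple_page_rank` helper name is made up for illustration):

```python
def simple_page_rank(out_links, damping=0.85, num_iters=100):
    # out_links: {node: [nodes it links to]} -- same scheme as page_rank
    # above, but over a plain dict instead of the user records.
    n = len(out_links)
    pr = {node: 1 / n for node in out_links}   # same initial value
    base = (1 - damping) / n                   # base share per step
    for _ in range(num_iters):
        nxt = {node: base for node in out_links}
        for node, targets in out_links.items():
            share = pr[node] * damping
            for target in targets:             # distribute across out-links
                nxt[target] += share / len(targets)
        pr = nxt
    return pr

pr = simple_page_rank({"a": ["b"], "b": ["c"], "c": ["a"]})
print(pr)  # a symmetric 3-cycle gives every node rank ≈ 1/3
```

Each iteration preserves the total mass because every node redistributes exactly `damping` of its rank and the base shares add up to `1 - damping`.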
Put the `coveval` folder into our path for easy import of modules: ``` import sys sys.path.append('../') ``` # Load data ``` from coveval import utils from coveval.connectors import generic ``` Let's load some data corresponding to the state of New York and look at the number of daily fatalities: ``` df_reported = utils.get_outbreak_data_usa(prefix='reported_').loc['US-NY'] df_predicted = generic.load_predictions('../data/demo/predictions.json', prefix='predicted_') data = utils.add_outbreak_data(df_predicted, df_reported)[['reported_deathIncrease', 'predicted_incDeath']] _ = utils.show_data(df=data, cols=['reported_deathIncrease', 'predicted_incDeath'], t_min='2020-02', t_max='2021-01', colors={'cols':{'reported_deathIncrease': '#85C5FF', 'predicted_incDeath': '#FFAD66'}}, linewidths={'reported_deathIncrease': 3, 'predicted_incDeath': 3}, show_leg={'reported_deathIncrease': 'reported', 'predicted_incDeath': 'predicted'}, y_label='daily fatalities', x_label='date', figsize=(11,5)) ``` # Smooth reported values ``` from coveval.core import smoothing ``` We want to smooth out noise in the reported data due to reporting errors and delays. To do so we can use, for instance, the "missed case" smoother. It's also useful to smooth out high-frequency noise in the predictions made by stochastic models, as it does not correspond to a useful signal and pollutes trend comparisons. A simple low-pass filter can be appropriate here. 
``` # define smoothers smoothers = {'missed': smoothing.missed_cases(cost_missing=.1, cost_der1=10, cost_der2=1), 'gaussian': smoothing.gaussian(sigma=2)} # smooth reported data col = 'reported_deathIncrease' s_name = 'missed' smoothers[s_name].smooth_df(data, col, inplace=True) data.rename(columns={col + '_smoothed' : col + '_smoothed_' + s_name}, inplace=True) # smooth predictions col = 'predicted_incDeath' s_name = 'gaussian' smoothers[s_name].smooth_df(data, col, inplace=True) data.rename(columns={col + '_smoothed' : col + '_smoothed_' + s_name}, inplace=True) _ = utils.show_data(data, cols=['reported_deathIncrease_smoothed_missed','predicted_incDeath_smoothed_gaussian'], scatter=['reported_deathIncrease'], colors={'cols':{'reported_deathIncrease_smoothed_missed': '#85C5FF', 'predicted_incDeath_smoothed_gaussian': '#FFAD66'}}, date_auto=False, t_min='2020-03', t_max='2020-06-20', show_leg={'reported_deathIncrease': 'reported', 'reported_deathIncrease_smoothed_missed': 'reported smoothed "missed"', 'predicted_incDeath_smoothed_gaussian': 'predicted smoothed "gaussian"'}, y_label='daily fatalities', x_label='date', figsize=(11,5)) ``` # Normalise predicted values ``` from coveval.core import normalising ``` The goal here is to avoid repeatedly punishing predictions made by a model due to e.g. the model getting the start of the outbreak wrong. 
``` normaliser_scaling = normalising.dynamic_scaling() normaliser_scaling.normalise_df(df=data, col_truth='reported_deathIncrease_smoothed_missed', col_pred='predicted_incDeath_smoothed_gaussian', inplace=True) # let's store the difference between the truth and the normalised predictions data['predicted_incDeath_smoothed_gaussian_norm_match'] = data['reported_deathIncrease_smoothed_missed'] - data['predicted_incDeath_smoothed_gaussian_norm'] fig = utils.show_normalisation(data, truth='reported_deathIncrease', pred_raw='predicted_incDeath_smoothed_gaussian', pred_norm='predicted_incDeath_smoothed_gaussian_norm', pred_match='predicted_incDeath_smoothed_gaussian_norm_match', truth_smoothed='reported_deathIncrease_smoothed_missed') ``` # Compare to truth ``` from coveval.core import losses ``` Now we can use a simple Poisson loss to judge how well each normalised prediction compares to the reported data, and compute an overall score that can be compared with that of other models. ``` poisson_loss = losses.poisson() poisson_loss.compute_df(df=data, col_truth='reported_deathIncrease_smoothed_missed', col_pred='predicted_incDeath_smoothed_gaussian_norm', inplace=True) data['predicted_incDeath_smoothed_gaussian_norm_loss'].mean() ``` # All in one: scorer ``` from coveval.scoring import scorer ``` The scorer object performs the above operations in a single call, with the exception of smoothing the predictions, as for some models this is not necessary. ``` default_scorer = scorer(smoother=smoothers['missed'], normaliser=normaliser_scaling, loss=poisson_loss) results = default_scorer.score_df(df=data, col_truth='reported_deathIncrease', col_pred='predicted_incDeath_smoothed_gaussian') # the average loss is the score (so the closer to 0 the better) results['score'] ```
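The exact formula inside `losses.poisson` is defined by the library, but the underlying idea is the Poisson negative log-likelihood: a predicted rate `mu` for an observed count `k` is penalised by `mu - k*log(mu)` (dropping the constant `log(k!)` term), which is minimised exactly when `mu == k`. A standalone sketch with a hypothetical helper, not coveval's API:

```python
import math

def poisson_nll(truth, pred, eps=1e-9):
    # Per-day Poisson negative log-likelihood, up to the log(k!) constant.
    return [mu - k * math.log(mu + eps) for k, mu in zip(truth, pred)]

truth = [10, 12, 9]
good = [10.0, 12.0, 9.0]   # predictions equal to the observations
bad = [20.0, 2.0, 15.0]    # predictions far from the observations
mean_good = sum(poisson_nll(truth, good)) / len(truth)
mean_bad = sum(poisson_nll(truth, bad)) / len(truth)
print(mean_good < mean_bad)  # → True: the loss favours pred == truth
```

Averaging this per-day penalty over the horizon gives a single comparable number per model, which is the role `results['score']` plays above.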
``` # default_exp dl_101 ``` # Deep learning 101 with Pytorch and fastai > Some code and text snippets have been extracted from the book [\"Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD\"](https://course.fast.ai/), and from these blog posts [[ref1](https://muellerzr.github.io/fastblog/2021/02/14/Pytorchtofastai.html)]. ``` #hide from nbdev.showdoc import * from fastcore.all import * # export import torch from torch.utils.data import TensorDataset import matplotlib.pyplot as plt import torch.nn as nn from wwdl.utils import * use_cuda = torch.cuda.is_available() device = torch.device("cuda" if use_cuda else "cpu") device ``` ## Linear regression model in Pytorch ### Datasets and Dataloaders We'll create a dataset that contains $(x,y)$ pairs sampled from the linear function $y = ax + b + \epsilon$. To do this, we'll create a PyTorch `TensorDataset`. A PyTorch tensor is nearly the same thing as a NumPy array. The vast majority of methods and operators supported by NumPy on these structures are also supported by PyTorch, but PyTorch tensors have additional capabilities. One major capability is that these structures can live on the GPU, in which case their computation will be optimized for the GPU and can run much faster (given lots of values to work on). In addition, PyTorch can automatically calculate derivatives of these operations, including combinations of operations. These two things are critical for deep learning. ``` # export def linear_function_dataset(a, b, n=100, show_plot=False): r""" Creates a PyTorch `TensorDataset` with `n` random samples of the linear function y = `a`*x + `b`. 
`show_plot` decides whether or not to plot the dataset """ x = torch.randn(n, 1) y = a*x + b + 0.1*torch.randn(n, 1) if show_plot: show_TensorFunction1D(x, y, marker='.') return TensorDataset(x, y) a = 2 b = 3 n = 100 data = linear_function_dataset(a, b, n, show_plot=True) test_eq(type(data), TensorDataset) ``` In every machine/deep learning experiment, we need to have at least two datasets: - training: used to train the model - validation: used to validate the model after each training step. It allows us to detect overfitting and adjust the hyperparameters of the model properly ``` train_ds = linear_function_dataset(a, b, n=100, show_plot=True) valid_ds = linear_function_dataset(a, b, n=20, show_plot=True) ``` A dataloader combines a dataset and a sampler that samples data into **batches**, and provides an iterable over the given dataset. ``` bs = 10 train_dl = torch.utils.data.DataLoader(train_ds, batch_size=bs, shuffle=True) valid_dl = torch.utils.data.DataLoader(valid_ds, batch_size=bs, shuffle=False) for i, data in enumerate(train_dl, 1): x, y = data print(f'batch {i}: x={x.shape} ({x.device}), y={y.shape} ({y.device})') ``` ### Defining a linear regression model in Pytorch The class `torch.nn.Module` is the base structure for all models in Pytorch. It mostly helps to register all the trainable parameters. A module is an object of a class that inherits from the PyTorch `nn.Module` class. To implement an `nn.Module` you just need to: - Make sure the superclass __init__ is called first when you initialize it. - Define any parameters of the model as attributes with nn.Parameter. To tell `Module` that we want to treat a tensor as a parameter, we have to wrap it in the `nn.Parameter` class. All PyTorch modules use `nn.Parameter` for any trainable parameters. This class doesn't actually add any functionality (other than automatically calling `requires_grad`). 
It's only used as a "marker" to show what to include in parameters:

- Define a forward function that returns the output of your model.

```
#export
class LinRegModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.a = nn.Parameter(torch.randn(1))
        self.b = nn.Parameter(torch.randn(1))

    def forward(self, x):
        return self.a*x + self.b

model = LinRegModel()
pa, pb = model.parameters()
pa, pa.shape, pb, pb.shape
```

Objects of this class behave identically to standard Python functions, in that you can call them using parentheses and they will return the activations of a model.

```
x = torch.randn(10, 1)
out = model(x)
x, x.shape, out, out.shape
```

### Loss function and optimizer

The loss is the thing the machine is using as the measure of performance to decide how to update model parameters. For a regression problem the loss function is simple enough: we'll just use the Mean Squared Error (MSE).

```
loss_func = nn.MSELoss()
loss_func(x, out)
```

We have data, a model, and a loss function; we only need one more thing before we can fit a model, and that's an optimizer.

```
opt_func = torch.optim.SGD(model.parameters(), lr=1e-3)
```

### Training loop

During training, we need to push our model and our batches to the GPU. Calling `to(device)` (or `cuda()`) on a model or a tensor puts all of its parameters on the GPU:

```
model = model.to(device)
```

To train a model, we will need to compute all the gradients of a given loss with respect to its parameters, which is known as the *backward pass*. The *forward pass* is where we compute the output of the model on a given input, based on the matrix products. PyTorch computes all the gradients we need with a magic call to `loss.backward`. The backward pass is the chain rule applied multiple times, computing the gradients from the output of our model and going back, one layer at a time. In Pytorch, each basic function we need to differentiate is written as a `torch.autograd.Function` object that has a `forward` and a `backward` method.
PyTorch will then keep track of any computation we do to be able to properly run the backward pass, unless we set the `requires_grad` attribute of our tensors to `False`.

For minibatch gradient descent (the usual way of training in deep learning), we calculate gradients on batches. Before moving onto the next batch, we modify our model's parameters based on the gradients. For each iteration through our dataset (which would be called an **epoch**), the optimizer would perform as many updates as we have batches.

There are two important methods in a Pytorch optimizer:

- `zero_grad`: In PyTorch, we need to set the gradients to zero before starting to do backpropagation because PyTorch accumulates the gradients on subsequent backward passes. `zero_grad` just loops through the parameters of the model and sets the gradients to zero. It also calls `detach_`, which removes any history of gradient computation, since it won't be needed after `zero_grad`.
- `step`: updates the model's parameters based on the gradients computed in the backward pass (for plain SGD, each parameter moves by `-lr * grad`).

```
n_epochs = 10

# export
def train(model, device, train_dl, loss_func, opt_func, epoch_idx):
    r"""
    Train `model` for one epoch, whose index is given in `epoch_idx`. The
    training loop will iterate through all the batches of `train_dl`, using
    the loss function given in `loss_func` and the optimizer given in `opt_func`
    """
    running_loss = 0.0
    batches_processed = 0
    for batch_idx, (x, y) in enumerate(train_dl, 1):
        x, y = x.to(device), y.to(device)  # Push data to GPU
        opt_func.zero_grad()  # Reset gradients
        # Forward pass
        output = model(x)
        loss = loss_func(output, y)
        # Backward pass
        loss.backward()
        # Optimizer step
        opt_func.step()
        # print statistics
        running_loss += loss.item()
        batches_processed += 1
    print(f'Train loss [Epoch {epoch_idx}]: {running_loss/batches_processed:.2f}')

for epoch in range(1, n_epochs+1):
    train(model, device, train_dl, loss_func, opt_func, epoch)
```

We can see how the parameters of the regression model are getting closer to the truth values `a` and `b` from the linear function.
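To make the forward/backward/step cycle concrete, here is a pure-Python sketch (no PyTorch; all names are illustrative) of the same loop for the linear model, with the MSE gradients derived by hand. `loss.backward()` and `opt_func.step()` automate exactly the two gradient lines and the two update lines below:

```python
# Full-batch gradient descent for y = a*x + b with MSE loss, gradients by hand.
# For L = mean((a*x + b - y)^2):  dL/da = 2*mean((a*x + b - y)*x)
#                                 dL/db = 2*mean(a*x + b - y)
import random

random.seed(0)
true_a, true_b = 2.0, 3.0
data = [(x, true_a * x + true_b) for x in (random.uniform(-1, 1) for _ in range(100))]

a, b, lr = 0.0, 0.0, 0.1
for epoch in range(200):
    # "backward pass": compute gradients of the loss w.r.t. a and b
    grad_a = 2 * sum((a * x + b - y) * x for x, y in data) / len(data)
    grad_b = 2 * sum((a * x + b - y) for x, y in data) / len(data)
    # "optimizer step": move each parameter against its gradient
    a -= lr * grad_a
    b -= lr * grad_b

print(round(a, 2), round(b, 2))  # the estimates approach the true a and b
```

With autograd, swapping in a different loss or model requires no re-derivation of these formulas, which is the main point of `loss.backward`.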
```
L(model.named_parameters())
```

### Validating the model

Validating the model requires only a forward pass; it's just inference. Disabling gradient calculation with the context manager `torch.no_grad()` is useful for inference, when you are sure that you will not call `Tensor.backward()`.

```
#export
def validate(model, device, dl):
    running_loss = 0.
    total_batches = 0
    with torch.no_grad():
        for x, y in dl:
            x, y = x.to(device), y.to(device)
            output = model(x)
            loss = loss_func(output, y)
            running_loss += loss.item()
            total_batches += 1
    print(f'Valid loss: {running_loss/total_batches:.2f}')

validate(model, device, valid_dl)
```

In order to spot overfitting, it is useful to validate the model after each training epoch.

```
for epoch in range(1, n_epochs+1):
    train(model, device, train_dl, loss_func, opt_func, epoch)
    validate(model, device, valid_dl)
```

## Abstracting the manual training loop: moving from Pytorch to fastai

```
from fastai.basics import *
from fastai.callback.progress import ProgressCallback
```

We can entirely replace the custom training loop with fastai's. That means you can get rid of `train()`, `validate()`, and the epoch loop in the original code, and replace it all with a couple of lines.

fastai's training loop lives in a `Learner`. The Learner is the glue that merges everything together (Datasets, Dataloaders, model and optimizer) and lets us train by just calling a `fit` function.

fastai's `Learner` expects DataLoaders to be used, rather than simply one DataLoader, so let's make that. We could just do `dls = DataLoaders(train_dl, valid_dl)` to keep the PyTorch Dataloaders. However, by using a fastai `DataLoader` instead, created directly from the `TensorDataset` objects, we get some automations, such as automatic pushing of the data to the GPU.
```
dls = DataLoaders.from_dsets(train_ds, valid_ds, bs=10)
learn = Learner(dls, model=LinRegModel(), loss_func=nn.MSELoss(), opt_func=SGD)
```

Now we have everything needed to do a basic `fit`:

```
learn.fit(10, lr=1e-3)
```

Having a Learner allows us to easily gather the model predictions for the validation set, which we can use for visualisation and analysis.

```
inputs, preds, outputs = learn.get_preds(with_input=True)
inputs.shape, preds.shape, outputs.shape
show_TensorFunction1D(inputs, outputs, y_hat=preds, marker='.')
```

## Building a simple neural network

For the next example, we will create the dataset by sampling values from the nonlinear function $y(x) = -\frac{1}{100}x^7 - x^4 - 2x^2 - 4x + 1$.

```
# export
def nonlinear_function_dataset(n=100, show_plot=False):
    r"""
    Creates a PyTorch `TensorDataset` with `n` random samples of the nonlinear
    function y = (-1/100)*x**7 -x**4 -2*x**2 -4*x + 1, with a bit of noise.
    `show_plot` decides whether or not to plot the dataset
    """
    x = torch.rand(n, 1)*20 - 10  # Random values in [-10, 10]
    y = (-1/100)*x**7 - x**4 - 2*x**2 - 4*x + 1 + 0.1*torch.randn(n, 1)
    if show_plot:
        show_TensorFunction1D(x, y, marker='.')
    return TensorDataset(x, y)

n = 100
ds = nonlinear_function_dataset(n, show_plot=True)
x, y = ds.tensors
test_eq(x.shape, y.shape)
```

We will create the training and validation datasets, and build the Dataloaders with them, this time directly in fastai, using the `DataLoaders.from_dsets` method.

```
train_ds = nonlinear_function_dataset(n=1000)
valid_ds = nonlinear_function_dataset(n=200)
```

Normalization in deep learning is used to make optimization easier by smoothing the loss surface of the network. We will normalize the data based on the mean and std of the train dataset.

```
norm_mean = train_ds.tensors[1].mean()
norm_std = train_ds.tensors[1].std()
train_ds_norm = TensorDataset(train_ds.tensors[0],
                              (train_ds.tensors[1] - norm_mean)/norm_std)
valid_ds_norm = TensorDataset(valid_ds.tensors[0],
                              (valid_ds.tensors[1] - norm_mean)/norm_std)
dls = DataLoaders.from_dsets(train_ds_norm, valid_ds_norm, bs=32)
```

We will build a Multi Layer Perceptron with 3 hidden layers. These networks are also known as Feed-Forward Neural Networks. The layers of this type of network are known as Fully Connected Layers because, between every subsequent pair of layers, all the neurons are connected to each other.

<img alt="Neural network architecture" caption="Neural network" src="https://i.imgur.com/5ZWPtRS.png">

The easiest way of wrapping several layers in Pytorch is using the `nn.Sequential` module. It creates a module with a `forward` method that will call each of the listed layers or functions in turn, without us having to do the loop manually in the forward pass.

```
#export
class MLP3(nn.Module):
    r"""
    Multilayer perceptron with 3 hidden layers, with sizes `nh1`, `nh2` and `nh3` respectively.
""" def __init__(self, n_in=1, nh1=200, nh2=100, nh3=50, n_out=1): super().__init__() self.layers = nn.Sequential( nn.Linear(n_in, nh1), nn.ReLU(), nn.Linear(nh1, nh2), nn.ReLU(), nn.Linear(nh2, nh3), nn.ReLU(), nn.Linear(nh3, n_out) ) def forward(self, x): return self.layers(x) x, y = dls.one_batch() model = MLP3() output = model(x) output.shape learn = Learner(dls, MLP3(), loss_func=nn.MSELoss(), opt_func=Adam) learn.fit(10, lr=1e-3) inputs, preds, outputs = learn.get_preds(with_input = True) show_TensorFunction1D(inputs, outputs, y_hat=preds, marker='.') ``` Let's compare these results with the ones by our previous linear regression model ``` learn_lin = Learner(dls, LinRegModel(), loss_func=nn.MSELoss(), opt_func=Adam) learn_lin.fit(20, lr=1e-3) inputs, preds, outputs = learn_lin.get_preds(with_input = True) show_TensorFunction1D(inputs, outputs, y_hat=preds, marker='.') ``` ## Export ``` #hide from nbdev.export import notebook2script notebook2script() ```
## Program that searches for Bars nearby starting coordinates: Home - Romero y Cordero 209

```
import requests # library to handle requests
import pandas as pd # library for data analysis
import numpy as np # library to handle data in a vectorized manner
import random # library for random number generation

from geopy.geocoders import Nominatim # module to convert an address into latitude and longitude values

# libraries for displaying images
from IPython.display import Image
from IPython.core.display import HTML

# transforming json file into a pandas dataframe library
from pandas.io.json import json_normalize

import folium # plotting library
print('Folium imported')
print('Libraries imported.')

CLIENT_ID = '2BOJBTIFPAIFAECFCKJT4H5W404CWPNYYY0SVHFFOFKPWNZV' # your Foursquare ID
CLIENT_SECRET = '5VNGWKFYYVIEHTGME0IBVR5AQ5LCP5WPBVLSSQFUMOZMSFKM' # your Foursquare Secret
VERSION = '20180604'
LIMIT = 30
print('Your credentials:')
print('CLIENT_ID: ' + CLIENT_ID)
print('CLIENT_SECRET:' + CLIENT_SECRET)

lati = -0.141189
longi = -78.479422
search_query = 'bar' # what type of venue to search for
radius = 500 # search radius from initial location
print(search_query + ' .... OK!')

url = 'https://api.foursquare.com/v2/venues/search?client_id={}&client_secret={}&ll={},{}&v={}&query={}&radius={}&limit={}'.format(CLIENT_ID, CLIENT_SECRET, lati, longi, VERSION, search_query, radius, LIMIT)
url

results = requests.get(url).json()
results

# assign relevant part of JSON to venues
venues = results['response']['venues']

# transform venues into a dataframe
dataframe = json_normalize(venues)
dataframe.head()

# keep only columns that include venue name, and anything that is associated with location
filtered_columns = ['name', 'categories'] + [col for col in dataframe.columns if col.startswith('location.')] + ['id']
dataframe_filtered = dataframe.loc[:, filtered_columns]

# function that extracts the category of the venue
def get_category_type(row):
    try:
        categories_list = row['categories']
    except:
        categories_list = row['venue.categories']
    if len(categories_list) == 0:
        return None
    else:
        return categories_list[0]['name']

# filter the category for each row
dataframe_filtered['categories'] = dataframe_filtered.apply(get_category_type, axis=1)

# clean column names by keeping only last term
dataframe_filtered.columns = [column.split('.')[-1] for column in dataframe_filtered.columns]
dataframe_filtered
#dataframe_filtered.name #to visualize the venues nearby

dataframe_filtered = dataframe_filtered[(dataframe_filtered.categories == 'Bar')]#|(dataframe_filtered.categories == 'Seafood Restaurant')]
dataframe_filtered.head()

venues_map = folium.Map(location=[lati, longi], zoom_start=15) # generate map centred around the starting point (Home)

# add a red circle marker to represent the starting point (Home)
folium.CircleMarker(
    [lati, longi],
    radius=10,
    color='red',
    popup='You Are Here!',
    fill = True,
    fill_color = 'red',
    fill_opacity = 0.6
).add_to(venues_map)

# add the bars as blue circle markers
for lat, lng, label in zip(dataframe_filtered.lat, dataframe_filtered.lng, dataframe_filtered.name):
    folium.CircleMarker(
        [lat, lng],
        radius=5,
        color='blue',
        popup=label,
        fill=True,
        fill_color='blue',
        fill_opacity=0.6
    ).add_to(venues_map)

# display map
venues_map
```
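Since the Foursquare query above uses a 500 m radius, it can be handy to double-check each returned venue's straight-line distance from home. A sketch using the haversine formula (the helper name and the sample venue coordinates are made up; only the home coordinates come from the notebook):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

home = (-0.141189, -78.479422)   # starting coordinates used in the notebook
venue = (-0.140000, -78.482000)  # hypothetical bar location for illustration
d = haversine_m(home[0], home[1], venue[0], venue[1])
print(round(d), "m")  # this sample point falls inside the 500 m search radius
```

The same function could be applied row-by-row to `dataframe_filtered` to drop any venue whose distance exceeds `radius`.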
# CNN - Example 01

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```

### Load Keras Dataset

```
from tensorflow.keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
```

#### Visualize data

```
print(x_train.shape)
single_image = x_train[0]
print(single_image.shape)
plt.imshow(single_image)
```

### Pre-Process data

#### One Hot encode

```
# One-hot encode the labels; otherwise the model would treat them as
# continuous values on a single axis (a regression problem)
from tensorflow.keras.utils import to_categorical

print("Shape before one hot encoding: " + str(y_train.shape))
y_example = to_categorical(y_train)
print(y_example)
print("Shape after one hot encoding: " + str(y_example.shape))
y_example[0]

y_cat_test = to_categorical(y_test, 10)
y_cat_train = to_categorical(y_train, 10)
```

#### Normalize the images

```
x_train = x_train/255
x_test = x_test/255
scaled_single = x_train[0]
plt.imshow(scaled_single)
```

#### Reshape the images

```
# Reshape to include channel dimension (in this case, 1 channel)
# x_train.shape
x_train = x_train.reshape(60000, 28, 28, 1)
x_test = x_test.reshape(10000, 28, 28, 1)
```

### Image data augmentation

```
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# help(ImageDataGenerator)

datagen = ImageDataGenerator(
    featurewise_center=True,
    featurewise_std_normalization=True,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True)

datagen.fit(x_train)
it = datagen.flow(x_train, y_cat_train, batch_size=32)

# Preparing the Samples and Plot for displaying output
for i in range(9):
    # preparing the subplot
    plt.subplot(330 + 1 + i)
    # generating images in batches
    batch = it.next()
    # Remember to convert these images to unsigned integers for viewing
    image = batch[0][0].astype('uint8')
    # Plotting the data
    plt.imshow(image)

# Displaying the figure
plt.show()
```

### Model # 1

```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, MaxPool2D, Flatten

model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(4,4), input_shape=(28, 28, 1), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(10, activation='softmax'))
```

Note: if `y` is not one-hot encoded, use `loss='sparse_categorical_crossentropy'` instead.

```
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy', 'categorical_accuracy']) # we can add in additional metrics https://keras.io/metrics/

model.summary()
```

#### Add Early Stopping

```
from tensorflow.keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor='val_loss', patience=2)
```

##### Training using one hot encoding

```
# fits the model on batches with real-time data augmentation:
history = model.fit(datagen.flow(x_train, y_cat_train, batch_size=32),
                    epochs=10,
                    steps_per_epoch=len(x_train) / 32,
                    validation_data=(x_test, y_cat_test),
                    callbacks=[early_stop])
```

#### Save model

```
from tensorflow.keras.models import load_model
model_file = 'D:\\Sandbox\\Github\\MODELS\\' + '01_mnist.h5'
model.save(model_file)
```

#### Retrieve model

```
model = load_model(model_file)
```

#### Evaluate

Rule of thumb:

1. High Bias: accuracy = 80%, val-accuracy = 78% (2% gap)
2. High Variance: accuracy = 98%, val-accuracy = 80% (18% gap)
3. High Bias and High Variance: accuracy = 80%, val-accuracy = 60% (20% gap)
4.
Low Bias and Low Variance: accuracy = 98%, val-accuracy = 96% (2% gap)

#### Eval - Train

```
model.metrics_names
pd.DataFrame(history.history).head()
#pd.DataFrame(model.history.history).head()
```

```
pd.DataFrame(history.history).plot()

losses = pd.DataFrame(history.history)
losses[['loss','val_loss']].plot()
losses[['accuracy','val_accuracy']].plot()

# Plot loss per iteration
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.legend()

# Plot accuracy per iteration
plt.plot(history.history['accuracy'], label='acc')
plt.plot(history.history['val_accuracy'], label='val_acc')
plt.legend()
```

#### Eval - Test

```
test_metrics = model.evaluate(x_test, y_cat_test, verbose=1)
print('Loss on test dataset:', test_metrics[0])
print('Accuracy on test dataset:', test_metrics[1])

print("Loss and Accuracy on Train dataset:")
pd.DataFrame(history.history).tail(1)
```

As it turns out, the accuracy on the test dataset is smaller than the accuracy on the training dataset. This is completely normal, since the model was trained on the `train_dataset`. When the model sees images it has never seen during training (that is, from the `test_dataset`), we can expect performance to go down.

#### Prediction

```
y_prediction = np.argmax(model.predict(x_test), axis=-1)
```

#### Reports

```
from sklearn.metrics import classification_report, confusion_matrix
print(classification_report(y_test, y_prediction))
print(confusion_matrix(y_test, y_prediction))
```

Recall (sensitivity): matters when false negatives are costly, e.g. fraud detection, where you want to catch the real fraud cases.
Precision (positive predictive value): matters when false positives are costly, e.g. sentiment analysis, where you don't want false alarms.
F1 score: harmonic mean of precision and recall; higher is better when comparing two or more models.
Accuracy: higher is better. Error: 1 - accuracy.

Ideally, we want both Precision and Recall to be 1, but there is a trade-off between them: you can't always have both.

```
import seaborn as sns
plt.figure(figsize=(10,6))
sns.heatmap(confusion_matrix(y_test, y_prediction), annot=True)
```

#### Predictions go wrong!

```
# Show some misclassified examples
misclassified_idx = np.where(y_prediction != y_test)[0]
i = np.random.choice(misclassified_idx)
plt.imshow(x_test[i].reshape(28,28), cmap='gray')
plt.title("True label: %s Predicted: %s" % (y_test[i], y_prediction[i]));
```

#### Final thoughts

Rule of thumb:

1. High Bias: accuracy = 80%, val-accuracy = 78% (2% gap)
2. High Variance: accuracy = 98%, val-accuracy = 80% (18% gap)
3. High Bias and High Variance: accuracy = 80%, val-accuracy = 60% (20% gap)
4. Low Bias and Low Variance: accuracy = 98%, val-accuracy = 96% (2% gap)

```
print("Percentage of wrong predictions : " + str(len(misclassified_idx)/len(y_prediction)*100) + " %")
print("Model's maximum accuracy : " + str(np.max(history.history['accuracy'])*100) + " %")
print("Model's maximum validation accuracy : " + str(np.max(history.history['val_accuracy'])*100) + " %")
```

The model has Low Bias and High Variance, with more than a 29% gap. The recall is also bad. Image augmentation doesn't help here: augmentation with rotation and tilting doesn't help because each digit has a unique shape.
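The precision/recall discussion in the Reports section can be pinned down with a tiny computation (the counts below are illustrative, not the notebook's actual MNIST results):

```python
# Metrics from raw confusion counts: tp = true positives, fp = false positives,
# fn = false negatives (made-up numbers for illustration).
tp, fp, fn = 90, 10, 30

precision = tp / (tp + fp)  # of everything flagged positive, how much was right
recall = tp / (tp + fn)     # of all real positives, how many were caught
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(precision, recall, round(f1, 3))  # prints 0.9 0.75 0.818
```

Here the classifier is precise (few false alarms) but has mediocre recall (many missed positives), and the F1 score sits between the two, closer to the weaker one.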
### Note * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps. ``` # Dependencies and Setup import pandas as pd # File to Load (Remember to Change These) file_to_load = "Resources/purchase_data.csv" # Read Purchasing File and store into Pandas data frame purchase_data = pd.read_csv(file_to_load) ``` ## Player Count * Display the total number of players ``` #number of players: do SN to count the names, w/o counting duplicate use unique #when using unique use len players= len(purchase_data["SN"].unique()) # Create a data frame with total players named player count count = pd.DataFrame({"players count":[players]}) count ``` ## Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc. * Create a summary data frame to hold the results * Optional: give the displayed data cleaner formatting * Display the summary data frame ``` #number of unique items # with len for unique. unique item use unique (not sure but it sounds like it make sense) unique_items = len(purchase_data["Item ID"].unique()) #average price #name=df[].mean() Average_Price = purchase_data["Price"].mean() #Number of Purchases #same formula^^ Number_of_Purchases = purchase_data["Purchase ID"].count() #sum #same same total_revenue = purchase_data["Price"].sum() #summary data frame # make dictionary? #df.add: is it simplier or more complicated? #purchase_data["Number of Unique Items"]= len(purchase_data["Item ID"].unique())? #^no! #make new dataframe, do method 1: list of dictionaries #words not in quotes belong in brackets #assign names to bracket values in a dictionary #use pd.DataFrame to make a table from the dict!!! 
otherwise it won't be organized summ_df = pd.DataFrame({"unique items": [unique_items], "Average Price": [Average_Price], "Number of Purchases" : [Number_of_Purchases], "total revenue" :[total_revenue]}) summ_df # add dollars and 2 decimals for price and revenue #summ_df.style.format= ({"Average Price":"${:.2f}"}) #summ_df.style.format= ({"total revenue" : "${:.2f}"}) #summ_df.format #^^^doesn't work #combine! summ_df.style.format({"Average Price": "${:.2f}", "total revenue": "${:.2f}"}) ``` ## Gender Demographics * Percentage and Count of Male Players * Percentage and Count of Female Players * Percentage and Count of Other / Non-Disclosed ``` #Total count #make a variable that'll hold the total to use for calculations #use nunique here so the percentages are out of unique players, not purchase rows allGender= purchase_data["SN"].nunique() #match gender to usernames. "SN" # count for sum malecount = purchase_data[purchase_data["Gender"] == "Male"]["SN"].nunique() femalecount = purchase_data[purchase_data["Gender"] == "Female"]["SN"].nunique() other = purchase_data[purchase_data["Gender"] == "Other / Non-Disclosed"]["SN"].nunique() # calc for percent malecalc= ((malecount/allGender)*100) femalecalc= ((femalecount/allGender)*100) othercalc= ((other/allGender)*100) #make new dataframe to compile results genders = pd.DataFrame({"Gender": ["Male", "Female", "Other / Non-Disclosed"], "Total Count": [malecount, femalecount, other],#no quotations, they hold the count "Percentage of Players": [malecalc, femalecalc, othercalc] }) genders.style.format({"Percentage of Players": "{:.2f}%" }) ``` ## Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender * Create a summary data frame to hold the results * Optional: give the displayed data cleaner formatting * Display the summary data frame ``` #Purchase analysis for all genders #same formats just change names and .formulas. 
use count, mean, sum respectively #Purchase Count #use price and gender #use .count() for the number of purchases (nunique would give unique players instead) malepurchases = purchase_data[purchase_data["Gender"] == "Male"]["Purchase ID"].count() femalepurchases = purchase_data[purchase_data["Gender"] == "Female"]["Purchase ID"].count() otherpurchases = purchase_data[purchase_data["Gender"] == "Other / Non-Disclosed"]["Purchase ID"].count() #unique player counts, for the per-person average malecount = purchase_data[purchase_data["Gender"] == "Male"]["SN"].nunique() femalecount = purchase_data[purchase_data["Gender"] == "Female"]["SN"].nunique() othercount = purchase_data[purchase_data["Gender"] == "Other / Non-Disclosed"]["SN"].nunique() # avg purchase price # use .mean to get mean malemean = purchase_data[purchase_data["Gender"] == "Male"]["Price"].mean() femalemean = purchase_data[purchase_data["Gender"] == "Female"]["Price"].mean() othermean = purchase_data[purchase_data["Gender"] == "Other / Non-Disclosed"]["Price"].mean() #purchase total per gender #use .sum to sum total purchase malesum = purchase_data[purchase_data["Gender"] == "Male"]["Price"].sum() femalesum = purchase_data[purchase_data["Gender"] == "Female"]["Price"].sum() othersum = purchase_data[purchase_data["Gender"] == "Other / Non-Disclosed"]["Price"].sum() #avg purchase by the total people in a gender #use variables ^^ to calculate #use the sum of total purchase per gender and divide by the count per gender so... # use malesum/malecount avgpergender_male= (malesum/malecount) avgpergender_female= (femalesum/femalecount) avgpergender_other= (othersum/othercount) #make dataframe to table purchase= pd.DataFrame ({"Gender" : ["Male", "Female", "Other / Non-Disclosed"], "Purchase Count": [malepurchases, femalepurchases, otherpurchases], "Average Purchase Price" : [malemean, femalemean, othermean], "Total Purchase Value" : [malesum, femalesum, othersum], "Avg Total Purchase per Person" : [avgpergender_male, avgpergender_female, avgpergender_other] }) purchase.style.format({"Average Purchase Price": "${:.2f}","Avg Total Purchase per Person": "${:.2f}", "Total Purchase Value":"${:.2f}"}) ``` ## Age Demographics * Establish bins for ages * Categorize the existing players using the age bins. 
Hint: use pd.cut() * Calculate the numbers and percentages by age group * Create a summary data frame to hold the results * Optional: round the percentage column to two decimal points * Display Age Demographics Table ``` #age data analysis #print(purchase_data["Age"].max()) #print(purchase_data["Age"].min()) #pd.cut bins are right-inclusive by default, so end each bin at its top age; 200 catches everyone 40+ bins = [0, 9, 14, 19, 24, 29, 34, 39, 200] group_labels= ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"] pd.cut(purchase_data["Age"], bins, labels= group_labels) purchase_data["Age Group"] = pd.cut(purchase_data["Age"], bins, labels=group_labels) #purchase_data.head() purchase_group = purchase_data.groupby("Age Group") Total_count= purchase_group["Age Group"].count() #purchase_group[["Total Count", "Percentage of Players"]] #build the frame straight from the grouped counts so each age group gets its own row age_group = pd.DataFrame({"Total Count": Total_count}) age_group #age_group[["Age Groups", "Total Count"]].count() #age_group #output= ( # f"Age Groups: {group_labels}"+ # f"Total Count: {Total_count}") #print (output) ``` ## Purchasing Analysis (Age) * Bin the purchase_data data frame by age * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below * Create a summary data frame to hold the results * Optional: give the displayed data cleaner formatting * Display the summary data frame ``` #purchase by age only ``` ## Top Spenders * Run basic calculations to obtain the results in the table below * Create a summary data frame to hold the results * Sort the total purchase value column in descending order * Optional: give the displayed data cleaner formatting * Display a preview of the summary data frame ## Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns * Group by Item ID and Item Name. 
Perform calculations to obtain purchase count, item price, and total purchase value * Create a summary data frame to hold the results * Sort the purchase count column in descending order * Optional: give the displayed data cleaner formatting * Display a preview of the summary data frame ## Most Profitable Items * Sort the above table by total purchase value in descending order * Optional: give the displayed data cleaner formatting * Display a preview of the data frame
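For the unfinished "Most Popular Items" section, the pattern is a groupby on the two item columns followed by count/first/sum. A sketch on a toy frame (the rows below are made up; in the real notebook `purchase_data` would be used instead):

```python
import pandas as pd

# Toy stand-in for purchase_data (illustrative rows only)
df = pd.DataFrame({
    "Item ID":   [1, 1, 2, 2, 2, 3],
    "Item Name": ["Axe", "Axe", "Bow", "Bow", "Bow", "Orb"],
    "Price":     [2.0, 2.0, 3.5, 3.5, 3.5, 1.25],
})

grouped = df.groupby(["Item ID", "Item Name"])["Price"]
popular = pd.DataFrame({
    "Purchase Count":       grouped.count(),   # one row per purchase
    "Item Price":           grouped.first(),   # price is constant per item
    "Total Purchase Value": grouped.sum(),
}).sort_values("Purchase Count", ascending=False)

print(popular.head())
```

Re-sorting the same frame by `"Total Purchase Value"` gives the "Most Profitable Items" table asked for in the last section.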
## Data Distillation In this notebook we train models using data distillation. ``` from google.colab import drive drive.mount('/content/drive') from google.colab import files uploaded = files.upload() !unzip dataset.zip -d dataset import warnings import os import shutil import glob import random import random import cv2 from fastai.vision import * from fastai.utils.mem import * warnings.filterwarnings("ignore", category=UserWarning, module="torch.nn.functional") dataset="dataset" classesPaths=sorted(glob.glob(dataset+'/*')) classes=[pt.split(os.sep)[-1] for pt in classesPaths if os.path.isdir(pt)] images=[pt for pt in classesPaths if not os.path.isdir(pt)] os.makedirs(dataset+'/train') os.makedirs(dataset+'/valid') os.makedirs(dataset+'/images') for im in images: shutil.move(im,dataset+'/images/') for cl in classes: os.mkdir(dataset+'/train/'+cl) images=sorted(glob.glob(dataset+'/'+cl+'/*')) for i in range(int(len(images)*0.75)): images=sorted(glob.glob(dataset+'/'+cl+'/*')) j=random.randint(0,len(images)-1) shutil.move(images[j],dataset+'/train/'+cl) os.mkdir(dataset+'/valid/'+cl) images=sorted(glob.glob(dataset+'/'+cl+'/*')) for i in range(len(images)): shutil.move(images[i],dataset+'/valid/'+cl) def learn_with_model(dataset,model): data=ImageDataBunch.from_folder(dataset, ds_tfms=get_transforms(), size=224,bs=32).normalize(imagenet_stats) learn = cnn_learner(data, model, metrics=accuracy) learn.fit_one_cycle(2) learn.unfreeze() learn.lr_find() lr=learn.recorder.lrs[np.argmin(learn.recorder.losses)] if lr<1e-05: lr=1e-03 learn.fit_one_cycle(8,max_lr=slice(lr/100,lr)) return learn,data def moda(lista): tam=len(lista[0][2]) x=np.zeros(tam) for l in lista: x=x+l[2].numpy() x=x/len(lista) maximo=x.argmax() return maximo, x[maximo] def omniData(dataset,learn,th): images=sorted(glob.glob(dataset+"/images/*")) for image in images: im=cv2.imread(image,1) im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB) lista=[] n=Image(pil2tensor(im, dtype=np.float32).div_(255)) 
pn=learn.predict(n) lista.append(pn) h_im=cv2.flip(im,0) h=Image(pil2tensor(h_im, dtype=np.float32).div_(255)) ph=learn.predict(h) lista.append(ph) v_im=cv2.flip(im,1) v=Image(pil2tensor(v_im, dtype=np.float32).div_(255)) pv=learn.predict(v) lista.append(pv) b_im=cv2.flip(im,-1) b=Image(pil2tensor(b_im, dtype=np.float32).div_(255)) pb=learn.predict(b) lista.append(pb) blur_im=cv2.blur(im,(5,5)) blur=Image(pil2tensor(blur_im, dtype=np.float32).div_(255)) pblur=learn.predict(blur) lista.append(pblur) invGamma=1.0 table=np.array([((i/255.0)**invGamma)*255 for i in np.arange(0,256)]).astype('uint8') gamma_im=cv2.LUT(im,table) gamma=Image(pil2tensor(gamma_im, dtype=np.float32).div_(255)) pgamma=learn.predict(gamma) lista.append(pgamma) gblur_im=cv2.GaussianBlur(im,(5,5),cv2.BORDER_DEFAULT) gblur=Image(pil2tensor(gblur_im, dtype=np.float32).div_(255)) pgblur=learn.predict(gblur) lista.append(pgblur) mod, predMax=moda(lista) if predMax>th: shutil.copyfile(image,dataset+'/train/'+data.classes[mod]+'/'+data.classes[mod]+'_'+image.split('/')[-1]) os.remove(image) print(image+" --> "+dataset+'/train/'+data.classes[mod]+'/'+data.classes[mod]+'_'+image.split('/')[-1]) learner_resnet50,data=learn_with_model(dataset,models.resnet50) shutil.copytree(dataset, 'dataset_resnet50') omniData('dataset_resnet50',learner_resnet50,0) learnerDD_resnet50,data=learn_with_model('dataset_resnet50',models.resnet50) learnerDD_resnet50.export('/content/drive/My Drive/learnerDD_resnet50.pkl') learner_resnet34,data=learn_with_model(dataset,models.resnet34) shutil.copytree(dataset, 'dataset_resnet34') omniData('dataset_resnet34',learner_resnet34,0) learnerDD_resnet34,data=learn_with_model('dataset_resnet34',models.resnet34) learnerDD_resnet34.export('/content/drive/My Drive/learnerDD_resnet34.pkl') learner_resnet101,data=learn_with_model(dataset,models.resnet101) shutil.copytree(dataset, 'dataset_resnet101') omniData('dataset_resnet101',learner_resnet101,0) 
learnerDD_resnet101,data=learn_with_model('dataset_resnet101',models.resnet101) learnerDD_resnet101.export('/content/drive/My Drive/learnerDD_resnet101.pkl') ```
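The `moda` helper above implements a simple form of test-time augmentation: average the per-class probability vectors over all augmented views, then take the argmax. A torch-free sketch of that aggregation, with made-up probability vectors:

```python
# Each list is a class-probability vector predicted for one augmented view
# (illustrative numbers, not real model outputs).
preds = [
    [0.6, 0.3, 0.1],  # original image
    [0.5, 0.4, 0.1],  # horizontal flip
    [0.7, 0.2, 0.1],  # blurred
]

n_classes = len(preds[0])
avg = [sum(p[c] for p in preds) / len(preds) for c in range(n_classes)]
best = max(range(n_classes), key=lambda c: avg[c])

# `best` plays the role of moda's returned class index and avg[best] its
# confidence, which omniData then compares against the threshold `th`.
print(best, round(avg[best], 2))
```

With `th=0`, as in the calls above, every unlabelled image is pseudo-labelled; raising `th` keeps only images whose averaged confidence clears the bar.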
```
# default_exp utils_blitz
```

# utils_blitz

> API details.

```
#export
#hide
from blitz.modules import BayesianLinear
from blitz.modules import BayesianEmbedding, BayesianConv1d, BayesianConv2d, BayesianConv3d
from blitz.modules.base_bayesian_module import BayesianModule
from torch import nn
import torch
import warnings
from fastcore.basics import patch


@patch
def extra_repr(self: BayesianLinear):
    return f"Shape: {list(self.weight_sampler.mu.shape)}"


@patch
def extra_repr(self: BayesianConv1d):
    return f"in_channels={self.in_channels}, out_channels={self.out_channels}, kernel_size={self.kernel_size}, stride={self.stride}"


@patch
def extra_repr(self: BayesianConv2d):
    return f"in_channels={self.in_channels}, out_channels={self.out_channels}, kernel_size={self.kernel_size}, stride={self.stride}"


@patch
def extra_repr(self: BayesianConv3d):
    return f"in_channels={self.in_channels}, out_channels={self.out_channels}, kernel_size={self.kernel_size}, stride={self.stride}"
```

```
#export
def convert_layer_to_bayesian(layer, config: dict):
    if isinstance(layer, torch.nn.Linear):
        new_layer = BayesianLinear(
            layer.in_features,
            layer.out_features,
            prior_sigma_1=config["prior_sigma_1"],
            prior_sigma_2=config["prior_sigma_2"],
            prior_pi=config["prior_pi"],
            posterior_mu_init=config["posterior_mu_init"],
            posterior_rho_init=config["posterior_rho_init"],
        )
    elif isinstance(layer, nn.Embedding):
        new_layer = BayesianEmbedding(
            layer.num_embeddings,
            layer.embedding_dim,
            prior_sigma_1=config["prior_sigma_1"],
            prior_sigma_2=config["prior_sigma_2"],
            prior_pi=config["prior_pi"],
            posterior_mu_init=config["posterior_mu_init"],
            posterior_rho_init=config["posterior_rho_init"],
        )
    elif isinstance(layer, (nn.Conv1d, nn.Conv2d, nn.Conv3d)):
        matching_class = BayesianConv1d
        kernel_size = layer.kernel_size[0]
        if type(layer) == nn.Conv2d:
            kernel_size = layer.kernel_size
            matching_class = BayesianConv2d
        elif type(layer) == nn.Conv3d:
            kernel_size = layer.kernel_size
            matching_class = BayesianConv3d
        new_layer = matching_class(
            layer.in_channels,
            layer.out_channels,
            kernel_size=kernel_size,
            groups=layer.groups,
            padding=layer.padding,
            dilation=layer.dilation,
            prior_sigma_1=config["prior_sigma_1"],
            prior_sigma_2=config["prior_sigma_2"],
            prior_pi=config["prior_pi"],
            posterior_mu_init=config["posterior_mu_init"],
            posterior_rho_init=config["posterior_rho_init"],
        )
    else:
        warnings.warn(
            f"Could not find correct type for conversion of layer {layer} with type {type(layer)}"
        )
        new_layer = layer
    return new_layer


config = {"prior_sigma_1": 0.1, "prior_sigma_2": 0.4, "prior_pi": 1,
          "posterior_mu_init": 0, "posterior_rho_init": -7}
convert_layer_to_bayesian(nn.Linear(10, 2), config)
convert_layer_to_bayesian(nn.Embedding(10, 2), config)
convert_layer_to_bayesian(nn.Conv1d(1, 2, 3), config)
convert_layer_to_bayesian(nn.Conv2d(1, 2, (3, 3)), config)
convert_layer_to_bayesian(nn.Conv3d(1, 2, (3, 3, 3)), config)
```

```
#export
def convert_to_bayesian_model(model, config: dict):
    for cur_layer_name, cur_layer in model.named_children():
        if len(list(cur_layer.named_children())) > 0:
            convert_to_bayesian_model(cur_layer, config)
        elif not isinstance(cur_layer, BayesianModule):
            new_layer = convert_layer_to_bayesian(cur_layer, config)
            setattr(model, cur_layer_name, new_layer)
    return model


convert_to_bayesian_model(nn.Sequential(nn.Linear(3, 4), nn.Linear(4, 1)), config)
```

```
#export
def set_train_mode(model, mode):
    if isinstance(model, BayesianModule):
        model.freeze = not mode
    for module in model.children():
        set_train_mode(module, mode)
```
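The recursive replacement pattern used by `convert_to_bayesian_model` (walk `named_children`, recurse into sub-containers, and `setattr` converted leaves) can be illustrated without torch or blitz. Below is a minimal sketch using only the standard library; `Plain`, `Bayes`, and `Container` are hypothetical stand-ins, not part of either library:

```python
# Hypothetical stand-in classes (not from torch or blitz), used only to
# show the traversal pattern: recurse into containers, replace leaves in place.
class Plain:
    """A leaf layer that has not been converted yet."""

class Bayes:
    """Stands in for a converted Bayesian layer; wraps the original."""
    def __init__(self, src):
        self.src = src

class Container:
    """Stands in for a module with children."""
    def __init__(self, **children):
        self.__dict__.update(children)

    def named_children(self):
        return list(self.__dict__.items())

def convert(model):
    for name, child in model.named_children():
        if isinstance(child, Container):
            convert(child)                      # recurse into sub-containers
        elif not isinstance(child, Bayes):      # leave already-converted leaves alone
            setattr(model, name, Bayes(child))  # swap the leaf in place
    return model

m = convert(Container(a=Plain(), inner=Container(b=Plain())))
assert isinstance(m.a, Bayes) and isinstance(m.inner.b, Bayes)
```

The in-place `setattr` is what lets the conversion preserve the original module tree shape, which is why the real function can hand back the same `model` object it received.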
``` ########### # PRELUDE # ########### # auto-reload changed python files %load_ext autoreload %autoreload 2 # Format cells with %%black %load_ext blackcellmagic # nice interactive plots %matplotlib inline # add repository directory to include path from pathlib import Path import sys PROJECT_DIR = Path('../..').resolve() sys.path.append(str(PROJECT_DIR)) from IPython.display import display, Markdown def markdown(s): return display(Markdown(s)) markdown("Surround markdown cells with `<div class=\"alert alert-block alert-info\">\\n\\n ... \\n\\n</div>` to mark professor-provided assignment content") ``` <div class="alert alert-block alert-info"> # Part 1: The Power of Two Choices </div> ``` from collections import Counter import matplotlib.pyplot as plt from random import randrange, choice as randchoice from tqdm import trange ``` <div class="alert alert-block alert-info"> ## Goal The goal of this part of the assignment is to gain an appreciation for the unreasonable effectiveness of simple randomized load balancing, and measure the benefits of some lightweight optimizations. </div> <div class="alert alert-block alert-info"> ## Description We consider random processes of the following type: there are N bins, and we throw N balls into them, one by one. \[This is an abstraction of the sort of allocation problem that arises throughout computing—e.g. allocating tasks on servers, routing packets within parallel networks, etc..] We’ll compare four different strategies for choosing the bin in which to place a given ball. </div> <div class="alert alert-block alert-info"> 1. Select one of the N bins uniformly at random, and place the current ball in it. </div> ``` def choose_bin_1(N, bins): return randrange(N) ``` <div class="alert alert-block alert-info"> 2. Select two of the N bins uniformly at random (either with or without replacement), and look at how many balls are already in each. If one bin has strictly fewer balls than the other, place the current ball in that bin. 
If both bins have the same number of balls, pick one of the two at random and place the current ball in it. </div> ``` def choose_bin_2(N, bins): bin_1 = choose_bin_1(N, bins) bin_2 = choose_bin_1(N, bins) bin_1_size = bins[bin_1] bin_2_size = bins[bin_2] if bin_1_size == bin_2_size: return randchoice([bin_1, bin_2]) elif bin_1_size < bin_2_size: return bin_1 else: return bin_2 ``` <div class="alert alert-block alert-info"> 3. Same as the previous strategy, except choosing three bins at random rather than two. </div> ``` def choose_bin_3(N, bins): bin_1 = choose_bin_1(N, bins) bin_2 = choose_bin_1(N, bins) bin_3 = choose_bin_1(N, bins) bin_1_size = bins[bin_1] bin_2_size = bins[bin_2] bin_3_size = bins[bin_3] if bin_1_size == bin_2_size == bin_3_size: # TODO: is this really necessary? Can't we just pick bin_1? return randchoice([bin_1, bin_2, bin_3]) min_size = bin_1_size min_bin = bin_1 if bin_2_size < min_size: min_size = bin_2_size min_bin = bin_2 if bin_3_size < min_size: min_bin = bin_3 return min_bin ``` <div class="alert alert-block alert-info"> 4. Select two bins as follows: the first bin is selected uniformly from the first N/2 bins, and the second uniformly from the last N/2 bins. (You can assume that N is even.) If one bin has strictly fewer balls than the other, place the current ball in that bin. If both bins have the same number of balls, place the current ball (deterministically) in the first of the two bins. </div> ``` def choose_bin_4(N, bins): halfN = N // 2 bin_1 = choose_bin_1(halfN, bins) bin_2 = choose_bin_1(halfN, bins) + halfN bin_1_size = bins[bin_1] bin_2_size = bins[bin_2] if bin_1_size < bin_2_size: return bin_1 elif bin_2_size < bin_1_size: return bin_2 else: return bin_1 # return randchoice([bin_1, bin_2]) ``` <div class="alert alert-block alert-info"> (a) (5 points) Write code to simulate strategies 1–4. 
For each strategy, there should be a function that takes the number N of balls and bins as input, simulates a run of the corresponding random process, and outputs the number of balls in the most populated bin (denoted by X below). Before running your code, try to guess how the above schemes will compare to each other. </div> ``` def ball_toss(N, bin_chooser): """Place N balls into N bins, choosing the bin using the bin_chooser function. Return the maximum number of balls in any bin.""" bins = [0] * N max_size = 0 for _ in range(N): landed_bin = bin_chooser(N, bins) bins[landed_bin] += 1 if bins[landed_bin] > max_size: max_size = bins[landed_bin] return max_size # test each bin chooser function ball_toss(10, choose_bin_1) ball_toss(10, choose_bin_2) ball_toss(10, choose_bin_3) ball_toss(10, choose_bin_4) "OK" ``` ### Hypothesis I think 1 should do the worst job of load balancing; the nature of randomness is such that some bins will happen to be hit many times. 2 should be better, 3 even better than that. I think 4 should be equivalent to 2, since, unless our random function is not very good, there should not be any structure in the array of bins, and thus it should not help to choose them specifically from the first and second half of the array. <div class="alert alert-block alert-info"> (b) (10 points) Let N = 200,000 and simulate each of the four strategies 30 times. For each strategy, plot the histogram of the 30 values of X. Discuss the pros and cons of the different strategies. Does one of them stand out as a “sweet spot”? \[As with many of the mini-projects, there is no single “right answer” to this question. Instead, the idea is to have you think about the processes and your experiments, and draw reasonable conclusions from this analysis.]
</div> ``` def simulate(bin_chooser): max_values = [] N = 200_000 for _ in trange(100): max_values.append(ball_toss(N, bin_chooser)) return max_values max_values_1 = simulate(choose_bin_1) max_values_2 = simulate(choose_bin_2) max_values_3 = simulate(choose_bin_3) max_values_4 = simulate(choose_bin_4) data = [ ("Method 1", max_values_1), ("Method 2", max_values_2), ("Method 3", max_values_3), ("Method 4", max_values_4)] # output raw data, since it's hard to see in the histogram plots markdown("### Number of occurrences of maximum bin values") for d in data: markdown(f'#### {d[0]}') sorted_occurences = sorted(Counter(d[1]).items()) print(', '.join(f'{key}: {val}' for key, val in sorted_occurences)) # print(sorted(Counter(d[1]).items())) # fig, axes = plt.subplots(1, 4, sharex=True, sharey=True, figsize=(10,3)) fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(6,5)) for ax, d in zip(axes.flatten(), data): ax.hist(d[1], bins=range(2,12), rwidth=.5, align='left') ax.set_title(d[0]) fig.subplots_adjust(hspace=2.0, wspace=1.0) fig.suptitle("Simulated Performance of Bin-Choosing Functions") fig.supxlabel("Maximum Bin Value", size=14) fig.supylabel("Simulated Frequency") fig.tight_layout() ``` ### Analysis Looks like we can rank the performance as follows: 3 > 4 > 2 > 1; 4 performs only slightly worse than 3. I did not expect 4 to perform better than 2, but in retrospect it makes sense: random numbers tend to cluster, and could even be the same twice in a row! Method 4 guarantees that the 2 choices of bin are not the same, and probably also alleviates the clustering issue more generally. The run time of 4 was better than 2 and 3, too, since the implementation was also simpler, so it does stand out as a possible sweet spot, assuming that its improved load balancing relative to method 1 outweighs the cost of a more complex algorithm. I do have an open question regarding the resolution of ties.
Methods 2 and 3 resolve ties via random choice; I tried deterministically choosing the first bin in method 2, but there was no change in final outcome (suggesting this is probably fine to do for time optimization). However, I also tried changing method 4 to use a random choice instead of always picking the first bin, and the final outcomes worsened, matching method 2 almost perfectly. Why would this be? <div class="alert alert-block alert-info"> (c) (5 points) Propose an analogy between the first of the random processes above and the standard implementation of hashing N elements into a hash table with N buckets, and resolving collisions via chaining (i.e., one linked list per bucket). Discuss in particular any relationships between X and search times in the hash table. </div> The hypothetical hash table performance can be compared to strategy 1 above: a pseudo-random hash function picks the bucket to put a value in, just as the function placed balls in random bins. The maximum number of balls in a bin is analogous to the number of values placed in a single bucket (in Java these are all placed in `TreeMap` when possible, otherwise a `LinkedList`). The maximum numbers in the ball-tossing simulation indicate the maximum number of iterate-and-compare steps required to find a value in the hash table. <div class="alert alert-block alert-info"> (d) (5 points) Do the other random processes suggest alternative implementations of hash tables with chaining? Discuss the trade-offs between the different hash table implementations that you propose (e.g., in terms of insertion time vs. search time). </div> One could use multiple hash functions to place a value into one of multiple buckets, choosing the bucket with the fewest entries. Then the query method would search through all of the relevant buckets. 
The total number of iterations should still be the same on average, but the worst case performance should occur less often because we will do a better job of distributing the values. It's possible that the CPU cache behavior would be worse, since we would access more disparate memory locations more often. Not sure about that, though. <div class="alert alert-block alert-info"> # Part 2: Conservative Updates in a Count-Min Sketch </div> ``` from hashlib import md5 from random import shuffle from statistics import mean ``` <div class="alert alert-block alert-info"> ## Goal The goal of this part is to understand the count-min sketch (from Lecture #2) via an implementation, and to explore the benefits of a “conservative updates” optimization. </div> <div class="alert alert-block alert-info"> ## Description You’ll use a count-min sketch with 4 independent hash tables, each with 256 counters. You will run 10 independent trials. This lets you measure not only the accuracy of the sketch, but the distribution of the accuracy over multiple datasets with the same frequency distribution. Your sketch should take a “trial” as input, and the hash value of an element x during trial i (i = 1, 2, . . . , 10) for table j (j = 1, 2, 3, 4) is calculated as follows: * Consider the input x as a string, and append i − 1 as a string to the end of the string. * Calculate the MD5 score of the resulting string. Do not implement the MD5 algorithm yourself; most modern programming languages have packages that calculate MD5 scores for you. For example, in Python 3, you can use the hashlib library and `hashlib.md5(foo.encode('utf-8')).hexdigest()` to compute the MD5 score of the string foo (returning a hexadecimal string). * The hash value is the j-th byte of the score.
As an example, to compute the hash value of 100 in the 4th table of the 9th trial, we calculate the MD5 score of the string "1008", which is (in hexadecimal): 15 87 96 5f b4 d4 b5 af e8 42 8a 4a 02 4f eb 0d The 4th byte is 5f in hexadecimal, which is 95 in decimal. In Python, you can parse the hexadecimal string 5f with `int("5f", 16)`. (a) (5 points) Implement the count-min sketch, as above. </div> ``` # returns the MD5 digest whose bytes are used for assigning buckets in the count-min sketch def count_min_hashes(x, trial): return md5(f"{x}{trial - 1}".encode()) assert count_min_hashes(100, 9).hexdigest() == "1587965fb4d4b5afe8428a4a024feb0d" "OK" # Note: assignment says to use digest indices j=1..4, but it was easier to work with 0..3 class CountMinSketch: def __init__(self, trial: int, conservative: bool=False): """Create a new count min sketch. - trial: used to seed the hash function for experiments in this notebook - conservative: use conservative update optimization""" self.table = [[0] * 256 for i in range(4)] self.trial = trial self.conservative = conservative self.total = 0 def increment(self, x): self.total += 1 digest = count_min_hashes(x, self.trial).digest() if self.conservative: min_val = min( self.table[0][digest[0]], self.table[1][digest[1]], self.table[2][digest[2]], self.table[3][digest[3]] ) for index, table in enumerate(self.table): if table[digest[index]] == min_val: table[digest[index]] += 1 else: self.table[0][digest[0]] += 1 self.table[1][digest[1]] += 1 self.table[2][digest[2]] += 1 self.table[3][digest[3]] += 1 def count(self, x): digest = count_min_hashes(x, self.trial).digest() return min([ self.table[0][digest[0]], self.table[1][digest[1]], self.table[2][digest[2]], self.table[3][digest[3]]]) def run_trials(stream, conservative=False): sketches = [] for trial in range(1, 11): sketch = CountMinSketch(trial, conservative) for el in stream: sketch.increment(el) sketches.append(sketch) return sketches ``` <div class="alert alert-block
alert-info"> You will be feeding data streams (i.e., sequences of elements) into count-min sketches. Every element of each stream is an integer between 1 and 9050 (inclusive). The frequencies are given by: * Integers $1000 \times (i − 1) + 1$ to $1000 \times i$, for $1 ≤ i ≤ 9$, appear i times in the stream. That is, the integers 1 to 1000 appear once in the stream; 1001 to 2000 appear twice; and so on. * An integer $9000 + i$, for $1 ≤ i ≤ 50$, appears $i^2$ times in the stream. For example, the integer 9050 appears 2500 times. (Each time an integer appears in the stream, it has a count of 1 associated with it.) ``` def create_stream(): stream = [] for i in range(1, 10): sub_stream = range(1000 * (i-1) + 1, 1000 * i + 1) for j in range(i): stream.extend(sub_stream) for i in range(1, 51): stream.extend([9000 + i] * (i**2)) return stream # Confirming distribution of values in created stream stream = create_stream() fig, ax1 = plt.subplots() ax1.hist(create_stream(), rwidth=.5) ax1.set_ylabel("Occurrences") ax1.set_xlabel("Element Values") None ``` <div class="alert alert-block alert-info"> (b) (2 points) Call an integer a heavy hitter if the number of times it appears is at least 1% of the total number of stream elements. How many heavy hitters are there in a stream with the above frequencies? </div> ``` def heavy_hitters(stream): total = len(stream) freqs = Counter(stream) threshold = total / 100 heavies = [] for (k, v) in freqs.items(): if v >= threshold: heavies.append(k) return heavies hh = heavy_hitters(create_stream()) print(f"The heavy hitters are the values {hh[0]} through {hh[-1]}") ``` <div class="alert alert-block alert-info"> Next, you will consider 3 different data streams, each corresponding to the elements above in a different order. 1. Forward: the elements appear in non-decreasing order. 2. Reverse: the elements appear in non-increasing order. 3. Random: the elements appear in a random order. 
</div> ``` def forward_stream(): stream = create_stream() return sorted(stream) def reverse_stream(): stream = create_stream() return sorted(stream, reverse=True) def random_stream(): stream = create_stream() shuffle(stream) return stream assert forward_stream()[:10] == list(range(1, 11)) assert reverse_stream()[:10] == [9050] * 10 # too difficult to check this automatically print('Confirm that this stream looks shuffled:') print(random_stream()[:10]) ``` <div class="alert alert-block alert-info"> (c) (6 points) For each of the three data streams, feed it into a count-min sketch (i.e., successively insert its elements), and compute the values of the following quantities, averaged over the 10 trials, for each order of the stream: * The sketch’s estimate for the frequency of element 9050. * The sketch’s estimate for the number of heavy hitters (elements with estimated frequency at least 1% of the stream length). Record the mean estimate for each of the three orders. Does the order of the stream affect the estimated counts? Explain your answer. </div> ``` forward_stream_sketches = run_trials(forward_stream()) reverse_stream_sketches = run_trials(reverse_stream()) random_stream_sketches = run_trials(random_stream()) ``` The order of the stream passed into a count-min sketch does not matter at all; count-min sketches only store frequencies, completely ignoring ordering of any kind. Therefore, the accumulated data will be exactly the same, and thus the estimated counts will also be exactly the same. Verification below: ``` for forward_sketch, reverse_sketch, random_sketch in zip( forward_stream_sketches, reverse_stream_sketches, random_stream_sketches ): assert forward_sketch.table == reverse_sketch.table == random_sketch.table markdown(f"* Trial {forward_sketch.trial} sketches are identical") print("Sketches for all trials are identical") ``` Therefore, we don't need to report separate numbers for each data stream. 
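The order-invariance argument can also be checked in miniature, independently of the `CountMinSketch` class above. A minimal single-row sketch (`toy_sketch` is an illustrative name, not assignment code) produces the identical counter state for any permutation of a stream:

```python
import hashlib

def toy_sketch(stream, width=16):
    # a single row of counters; the bucket is the first MD5 byte mod the width
    row = [0] * width
    for x in stream:
        row[hashlib.md5(str(x).encode()).digest()[0] % width] += 1
    return row

data = [1, 2, 2, 3, 3, 3]
# any reordering of the stream yields the identical counter state
assert toy_sketch(data) == toy_sketch(sorted(data, reverse=True))
```

Since a plain update is just counter addition, which is commutative, the final table depends only on the multiset of elements, never on their order.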
``` def sketch_statistics(sketches): threshold = sketches[0].total / 100 heavy_hitter_count = [] for sketch in sketches: count = 0 for i in range(1,9051): if sketch.count(i) >= threshold: count += 1 heavy_hitter_count.append(count) estimated_highest_count = mean([sketch.count(9050) for sketch in sketches]) return heavy_hitter_count, mean(heavy_hitter_count), estimated_highest_count heavy_hitter_count, avg_heavy_hitters, estimated_highest_count = sketch_statistics(forward_stream_sketches) markdown(f'* The average estimated count of element 9050 is {estimated_highest_count}') markdown(f'* The estimated number of heavy hitters in each trial was {heavy_hitter_count}') markdown(f'* The average estimate was {avg_heavy_hitters}') ``` <div class="alert alert-block alert-info"> (d) (3 points) Implement the conservative updates optimization, as follows. When updating the counters during an insert, instead of incrementing all 4 counters, we only increment the subset of these 4 counters that have the lowest current count (if two or more of them are tied for the minimum current count, then we increment each of these). </div> #### Implementation Notes The `CountMinSketch` class above was refactored to take a `conservative` flag in the constructor which turns on this optimization. The implementation was straightforward, but one structural difference I needed to account for was that it was no longer possible to get the total number of elements added to the sketch using `sum(sketch.table[0])` as before; since not all of the tables are updated on each `increment` call, the tables can no longer answer the question "how many items have we seen?" This was easy to make up for with a separate `total` field. <div class="alert alert-block alert-info"> (e) (3 points) Explain why, even with conservative updates, the count-min sketch never underestimates the count of a value.
</div> The minimum value of the four tables constitutes a count-min sketch's best guess of the frequency of an element. Even with the conservative optimization, we always update this minimum value for each element encountered, so it is still equal to or greater than the actual number of occurrences of an element. Note also that it's important that we update all tables when there's a tie for the minimum value, since skipping the update for any of them would cause the sketch to underestimate the frequency of an item. <div class="alert alert-block alert-info"> (f) (6 points) Repeat part (c) with conservative updates. </div> ``` forward_stream_sketches_2 = run_trials(forward_stream(), True) reverse_stream_sketches_2 = run_trials(reverse_stream(), True) random_stream_sketches_2 = run_trials(random_stream(), True) ``` As shown below, when using the conservative update optimization the order of inputs *does* change the final state of the count-min sketches: ``` all_identical = True for forward_sketch, reverse_sketch, random_sketch in zip( forward_stream_sketches_2, reverse_stream_sketches_2, random_stream_sketches_2 ): all_identical = all_identical and ( forward_sketch.table == reverse_sketch.table == random_sketch.table ) if all_identical: markdown(f"* Trial {forward_sketch.trial} sketches are identical") else: markdown(f"* Trial {forward_sketch.trial} sketches are not identical; breaking now") break if all_identical: print("Through some miracle (or more likely a bug), sketches for all trials are identical") else: print("The sketches are not all identical (the expected outcome)") data = [ ("sorted stream", forward_stream_sketches_2), ("reverse sorted stream", reverse_stream_sketches_2), ("shuffled stream", random_stream_sketches_2)] for name, stream in data: heavy_hitter_count, avg_heavy_hitters, estimated_highest_count = sketch_statistics(stream) markdown(f'#### Results for {name}') markdown(f'* The average estimated count of element 9050 is {estimated_highest_count}')
markdown(f'* The estimated number of heavy hitters in each trial was {heavy_hitter_count}') markdown(f'* The average estimate was {avg_heavy_hitters}') ``` The conservative update optimization improved the count estimations for all stream types. Performance was worse with the forward-sorted stream than for the other two orderings, but it was still better than the estimation without the optimization.
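As a closing sanity check on the part (e) argument, the no-underestimate guarantee survives conservative updates even in a deliberately tiny sketch where collisions are constant. This is a self-contained miniature, separate from the `CountMinSketch` class above; the names and the 2-row, 8-bucket geometry are chosen for illustration only:

```python
import hashlib
from collections import Counter

WIDTH, ROWS = 8, 2  # tiny on purpose, so hash collisions are frequent

def buckets(x):
    digest = hashlib.md5(str(x).encode()).digest()
    return [digest[j] % WIDTH for j in range(ROWS)]

def insert_conservative(table, x):
    bs = buckets(x)
    lowest = min(table[j][b] for j, b in enumerate(bs))
    for j, b in enumerate(bs):
        if table[j][b] == lowest:  # bump only the counters tied for the minimum
            table[j][b] += 1

def estimate(table, x):
    return min(table[j][b] for j, b in enumerate(buckets(x)))

table = [[0] * WIDTH for _ in range(ROWS)]
stream = [i % 5 for i in range(200)]
for x in stream:
    insert_conservative(table, x)

truth = Counter(stream)
# every estimate is at least the true frequency, never below it
assert all(estimate(table, x) >= truth[x] for x in truth)
```

The assertion holds by the induction argument from part (e): inserting x raises x's minimum counter by exactly one, and no insert ever decreases a counter.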
``` import pandas as pd import matplotlib.pyplot as plt import numpy as np import statistics rep5_04_002_data = pd.read_csv('proc_rep5_04_002.csv') del rep5_04_002_data['Unnamed: 0'] rep5_04_002_data rgg_rgg_data = rep5_04_002_data.copy() rgg_rand_data = rep5_04_002_data.copy() rand_rgg_data = rep5_04_002_data.copy() rand_rand_data = rep5_04_002_data.copy() rgg_rgg_drop_list = [] rgg_rand_drop_list = [] rand_rgg_drop_list = [] rand_rand_drop_list = [] for i in range(400): if i % 4 == 0: rgg_rand_drop_list.append(i) rand_rgg_drop_list.append(i) rand_rand_drop_list.append(i) elif i % 4 == 1: rgg_rgg_drop_list.append(i) rand_rgg_drop_list.append(i) rand_rand_drop_list.append(i) elif i % 4 == 2: rgg_rgg_drop_list.append(i) rgg_rand_drop_list.append(i) rand_rand_drop_list.append(i) elif i % 4 == 3: rgg_rgg_drop_list.append(i) rgg_rand_drop_list.append(i) rand_rgg_drop_list.append(i) rgg_rgg_data = rgg_rgg_data.drop(rgg_rgg_drop_list) rgg_rand_data = rgg_rand_data.drop(rgg_rand_drop_list) rand_rgg_data = rand_rgg_data.drop(rand_rgg_drop_list) rand_rand_data = rand_rand_data.drop(rand_rand_drop_list) rgg_rgg_data = rgg_rgg_data.reset_index(drop=True) rgg_rand_data = rgg_rand_data.reset_index(drop=True) rand_rgg_data = rand_rgg_data.reset_index(drop=True) rand_rand_data = rand_rand_data.reset_index(drop=True) rgg_rgg_data rgg_rgg_for_rand_data rgg_rgg_dict = {} rgg_rand_dict = {} rand_rgg_dict = {} rand_rand_dict = {} for i in range(49): target = [i*5 + 0, i*5 + 1, i*5 + 2, i*5 + 3, i*5 + 4] temp_rgg_rgg = rgg_rgg_data[i*5 + 0 : i*5 + 5] temp_rgg_rand = rgg_rand_data[i*5 + 0 : i*5 + 5] temp_rand_rgg = rand_rgg_data[i*5 + 0 : i*5 + 5] temp_rand_rand = rand_rand_data[i*5 + 0 : i*5 + 5] if i == 0: rgg_rgg_dict['intra_thres'] = [statistics.mean(temp_rgg_rgg['intra_thres'].values.tolist())] rgg_rgg_dict['alive_nodes'] = [statistics.mean(temp_rgg_rgg['alive_nodes'].values.tolist())] rgg_rand_dict['intra_thres'] = [statistics.mean(temp_rgg_rand['intra_thres'].values.tolist())] 
rgg_rand_dict['alive_nodes'] = [statistics.mean(temp_rgg_rand['alive_nodes'].values.tolist())] rand_rgg_dict['intra_thres'] = [statistics.mean(temp_rand_rgg['intra_thres'].values.tolist())] rand_rgg_dict['alive_nodes'] = [statistics.mean(temp_rand_rgg['alive_nodes'].values.tolist())] rand_rand_dict['intra_thres'] = [statistics.mean(temp_rand_rand['intra_thres'].values.tolist())] rand_rand_dict['alive_nodes'] = [statistics.mean(temp_rand_rand['alive_nodes'].values.tolist())] else: rgg_rgg_dict['intra_thres'].append(statistics.mean(temp_rgg_rgg['intra_thres'].values.tolist())) rgg_rgg_dict['alive_nodes'].append(statistics.mean(temp_rgg_rgg['alive_nodes'].values.tolist())) rgg_rand_dict['intra_thres'].append(statistics.mean(temp_rgg_rand['intra_thres'].values.tolist())) rgg_rand_dict['alive_nodes'].append(statistics.mean(temp_rgg_rand['alive_nodes'].values.tolist())) rand_rgg_dict['intra_thres'].append(statistics.mean(temp_rand_rgg['intra_thres'].values.tolist())) rand_rgg_dict['alive_nodes'].append(statistics.mean(temp_rand_rgg['alive_nodes'].values.tolist())) rand_rand_dict['intra_thres'].append(statistics.mean(temp_rand_rand['intra_thres'].values.tolist())) rand_rand_dict['alive_nodes'].append(statistics.mean(temp_rand_rand['alive_nodes'].values.tolist())) rgg_rgg_for_rand_dict = {} rgg_rand_for_rand_dict = {} rand_rgg_for_rand_dict = {} rand_rand_for_rand_dict = {} for i in range(21): target = [i*5 + 0, i*5 + 1, i*5 + 2, i*5 + 3, i*5 + 4] temp_rgg_rgg = rgg_rgg_for_rand_data[i*5 + 0 : i*5 + 5] temp_rgg_rand = rgg_rand_for_rand_data[i*5 + 0 : i*5 + 5] temp_rand_rgg = rand_rgg_for_rand_data[i*5 + 0 : i*5 + 5] temp_rand_rand = rand_rand_for_rand_data[i*5 + 0 : i*5 + 5] if i == 0: rgg_rgg_for_rand_dict['intra_thres'] = [statistics.mean(temp_rgg_rgg['intra_thres'].values.tolist())] rgg_rgg_for_rand_dict['alive_nodes'] = [statistics.mean(temp_rgg_rgg['alive_nodes'].values.tolist())] rgg_rand_for_rand_dict['intra_thres'] = 
[statistics.mean(temp_rgg_rand['intra_thres'].values.tolist())] rgg_rand_for_rand_dict['alive_nodes'] = [statistics.mean(temp_rgg_rand['alive_nodes'].values.tolist())] rand_rgg_for_rand_dict['intra_thres'] = [statistics.mean(temp_rand_rgg['intra_thres'].values.tolist())] rand_rgg_for_rand_dict['alive_nodes'] = [statistics.mean(temp_rand_rgg['alive_nodes'].values.tolist())] rand_rand_for_rand_dict['intra_thres'] = [statistics.mean(temp_rand_rand['intra_thres'].values.tolist())] rand_rand_for_rand_dict['alive_nodes'] = [statistics.mean(temp_rand_rand['alive_nodes'].values.tolist())] else: rgg_rgg_for_rand_dict['intra_thres'].append(statistics.mean(temp_rgg_rgg['intra_thres'].values.tolist())) rgg_rgg_for_rand_dict['alive_nodes'].append(statistics.mean(temp_rgg_rgg['alive_nodes'].values.tolist())) rgg_rand_for_rand_dict['intra_thres'].append(statistics.mean(temp_rgg_rand['intra_thres'].values.tolist())) rgg_rand_for_rand_dict['alive_nodes'].append(statistics.mean(temp_rgg_rand['alive_nodes'].values.tolist())) rand_rgg_for_rand_dict['intra_thres'].append(statistics.mean(temp_rand_rgg['intra_thres'].values.tolist())) rand_rgg_for_rand_dict['alive_nodes'].append(statistics.mean(temp_rand_rgg['alive_nodes'].values.tolist())) rand_rand_for_rand_dict['intra_thres'].append(statistics.mean(temp_rand_rand['intra_thres'].values.tolist())) rand_rand_for_rand_dict['alive_nodes'].append(statistics.mean(temp_rand_rand['alive_nodes'].values.tolist())) plt.plot(rgg_rgg_dict['intra_thres'], rgg_rgg_dict['alive_nodes']) plt.plot(rgg_rgg_dict['intra_thres'], rgg_rand_dict['alive_nodes']) plt.plot(rgg_rgg_dict['intra_thres'], rand_rgg_dict['alive_nodes']) plt.plot(rgg_rgg_dict['intra_thres'], rand_rand_dict['alive_nodes']) plt.legend(['RGG-RGG', 'RGG-Rand', 'Rand-RGG', 'Rand-Rand']) plt.title('Mean Alive nodes') plt.show() plt.plot(rgg_rgg_for_rand_dict['intra_thres'], rgg_rgg_for_rand_dict['alive_nodes']) plt.plot(rgg_rgg_for_rand_dict['intra_thres'], 
rgg_rand_for_rand_dict['alive_nodes']) plt.plot(rgg_rgg_for_rand_dict['intra_thres'], rand_rgg_for_rand_dict['alive_nodes']) plt.plot(rgg_rgg_for_rand_dict['intra_thres'], rand_rand_for_rand_dict['alive_nodes']) plt.legend(['RGG-RGG', 'RGG-Rand', 'Rand-RGG', 'Rand-Rand']) plt.title('Mean Alive nodes') plt.show() step_nums = [] step_nums.append(statistics.mean(rgg_rgg_data['cas_steps'].values.tolist())) step_nums.append(statistics.mean(rgg_rand_data['cas_steps'].values.tolist())) step_nums.append(statistics.mean(rand_rgg_data['cas_steps'].values.tolist())) step_nums.append(statistics.mean(rand_rand_data['cas_steps'].values.tolist())) index = np.arange(4) graph_types = ['RGG-RGG', 'RGG-Rand', 'Rand-RGG', 'Rand-Rand'] plt.bar(index, step_nums, width=0.3, color='gray') plt.xticks(index, graph_types) plt.title('Number of steps') plt.savefig('The number of steps.png') plt.show() rgg_rgg_isol = [] rgg_rgg_unsupp = [] rgg_rand_isol = [] rgg_rand_unsupp = [] rand_rgg_isol = [] rand_rgg_unsupp = [] rand_rand_isol = [] rand_rand_unsupp =[] index = 1 for col_name in rgg_rgg_data: if col_name == ('step%d_isol' % index): rgg_rgg_isol.append(statistics.mean(rgg_rgg_data[col_name].values.tolist())) if col_name == ('step%d_unsupp' % index): rgg_rgg_unsupp.append(statistics.mean(rgg_rgg_data[col_name].values.tolist())) index += 1 index = 1 for col_name in rgg_rand_data: if col_name == ('step%d_isol' % index): rgg_rand_isol.append(statistics.mean(rgg_rand_data[col_name].values.tolist())) if col_name == ('step%d_unsupp' % index): rgg_rand_unsupp.append(statistics.mean(rgg_rand_data[col_name].values.tolist())) index += 1 index = 1 for col_name in rand_rgg_data: if col_name == ('step%d_isol' % index): rand_rgg_isol.append(statistics.mean(rand_rgg_data[col_name].values.tolist())) if col_name == ('step%d_unsupp' % index): rand_rgg_unsupp.append(statistics.mean(rand_rgg_data[col_name].values.tolist())) index += 1 index = 1 for col_name in rand_rand_data: if col_name == ('step%d_isol' % 
index): rand_rand_isol.append(statistics.mean(rand_rand_data[col_name].values.tolist())) if col_name == ('step%d_unsupp' % index): rand_rand_unsupp.append(statistics.mean(rand_rand_data[col_name].values.tolist())) index += 1 print(len(rgg_rgg_isol)) print(len(rgg_rgg_unsupp)) print(len(rgg_rand_isol)) print(len(rgg_rand_unsupp)) print(len(rand_rgg_isol)) print(len(rand_rgg_unsupp)) print(len(rand_rand_isol)) print(len(rand_rand_unsupp)) cum_rgg_rgg_isol = [] cum_rgg_rgg_unsupp = [] cum_rgg_rand_isol = [] cum_rgg_rand_unsupp = [] cum_rand_rgg_isol = [] cum_rand_rgg_unsupp = [] cum_rand_rand_isol = [] cum_rand_rand_unsupp = [] total = [] for i in range(len(rgg_rgg_isol)): if i == 0: total.append(rgg_rgg_isol[i]) total.append(rgg_rgg_unsupp[i]) else: total[0] += rgg_rgg_isol[i] total[1] += rgg_rgg_unsupp[i] cum_rgg_rgg_isol.append(total[0]) cum_rgg_rgg_unsupp.append(total[1]) total = [] for i in range(len(rgg_rand_isol)): if i == 0: total.append(rgg_rand_isol[i]) total.append(rgg_rand_unsupp[i]) else: total[0] += rgg_rand_isol[i] total[1] += rgg_rand_unsupp[i] cum_rgg_rand_isol.append(total[0]) cum_rgg_rand_unsupp.append(total[1]) total = [] for i in range(len(rand_rgg_isol)): if i == 0: total.append(rand_rgg_isol[i]) total.append(rand_rgg_unsupp[i]) else: total[0] += rand_rgg_isol[i] total[1] += rand_rgg_unsupp[i] cum_rand_rgg_isol.append(total[0]) cum_rand_rgg_unsupp.append(total[1]) total = [] for i in range(len(rand_rand_isol)): if i == 0: total.append(rand_rand_isol[i]) total.append(rand_rand_unsupp[i]) else: total[0] += rand_rand_isol[i] total[1] += rand_rand_unsupp[i] cum_rand_rand_isol.append(total[0]) cum_rand_rand_unsupp.append(total[1]) ``` ## Isolation vs Unsupport ``` plt.plot(range(len(cum_rgg_rgg_isol)), cum_rgg_rgg_isol) plt.plot(range(len(cum_rgg_rgg_isol)), cum_rgg_rgg_unsupp) plt.legend(['rgg_rgg_isol','rgg_rgg_unsupp']) plt.title('Isolation vs Unsupport: RGG-RGG') plt.savefig('Isolation vs Unsupport_RGG-RGG.png') plt.show() 
plt.plot(range(len(cum_rgg_rand_isol)), cum_rgg_rand_isol)
plt.plot(range(len(cum_rgg_rand_isol)), cum_rgg_rand_unsupp)
plt.legend(['rgg_rand_isol', 'rgg_rand_unsupp'])
plt.title('Isolation vs Unsupport: RGG-Rand')
plt.savefig('Isolation vs Unsupport_RGG-Rand.png')
plt.show()

plt.plot(range(len(cum_rand_rgg_isol)), cum_rand_rgg_isol)
plt.plot(range(len(cum_rand_rgg_isol)), cum_rand_rgg_unsupp)
plt.legend(['rand_rgg_isol', 'rand_rgg_unsupp'])
plt.title('Isolation vs Unsupport: Rand-RGG')
plt.savefig('Isolation vs Unsupport_Rand-RGG.png')
plt.show()

plt.plot(range(len(cum_rand_rand_isol)), cum_rand_rand_isol)
plt.plot(range(len(cum_rand_rand_isol)), cum_rand_rand_unsupp)
plt.legend(['rand_rand_isol', 'rand_rand_unsupp'])
plt.title('Isolation vs Unsupport: Rand-Rand')
plt.savefig('Isolation vs Unsupport_Rand-Rand.png')
plt.show()

df_len = []
df_len.append(list(rgg_rgg_isol))
df_len.append(list(rgg_rand_isol))
df_len.append(list(rand_rgg_isol))
df_len.append(list(rand_rand_isol))
max_df_len = max(df_len, key=len)
x_val = list(range(len(max_df_len)))

proc_isol = []
proc_unsupp = []
proc_isol.append(cum_rgg_rgg_isol)
proc_isol.append(cum_rgg_rand_isol)
proc_isol.append(cum_rand_rgg_isol)
proc_isol.append(cum_rand_rand_isol)
proc_unsupp.append(cum_rgg_rgg_unsupp)
proc_unsupp.append(cum_rgg_rand_unsupp)
proc_unsupp.append(cum_rand_rgg_unsupp)
proc_unsupp.append(cum_rand_rand_unsupp)

for x in x_val:
    if len(rgg_rgg_isol) <= x:
        proc_isol[0].append(cum_rgg_rgg_isol[len(rgg_rgg_isol) - 1])
        proc_unsupp[0].append(cum_rgg_rgg_unsupp[len(rgg_rgg_isol) - 1])
    if len(rgg_rand_isol) <= x:
        proc_isol[1].append(cum_rgg_rand_isol[len(rgg_rand_isol) - 1])
        proc_unsupp[1].append(cum_rgg_rand_unsupp[len(rgg_rand_isol) - 1])
    if len(rand_rgg_isol) <= x:
        proc_isol[2].append(cum_rand_rgg_isol[len(rand_rgg_isol) - 1])
        proc_unsupp[2].append(cum_rand_rgg_unsupp[len(rand_rgg_isol) - 1])
    if len(rand_rand_isol) <= x:
        proc_isol[3].append(cum_rand_rand_isol[len(rand_rand_isol) - 1])
        proc_unsupp[3].append(cum_rand_rand_unsupp[len(rand_rand_isol) - 1])

plt.plot(x_val, proc_isol[0])
plt.plot(x_val, proc_isol[1])
plt.plot(x_val, proc_isol[2])
plt.plot(x_val, proc_isol[3])
plt.legend(['rgg_rgg_isol', 'rgg_rand_isol', 'rand_rgg_isol', 'rand_rand_isol'])
plt.title('Isolation trend')
plt.show()

plt.plot(x_val, proc_unsupp[0])
plt.plot(x_val, proc_unsupp[1])
plt.plot(x_val, proc_unsupp[2])
plt.plot(x_val, proc_unsupp[3])
plt.legend(['rgg_rgg_unsupp', 'rgg_rand_unsupp', 'rand_rgg_unsupp', 'rand_rand_unsupp'])
plt.title('Unsupport trend')
plt.show()
```

## Pie Chart

```
init_death = 150
labels = ['Alive nodes', 'Initial death', 'Dead nodes from isolation', 'Dead nodes from unsupport']

alive = []
alive.append(statistics.mean(rgg_rgg_data['alive_nodes']))
alive.append(statistics.mean(rgg_rand_data['alive_nodes']))
alive.append(statistics.mean(rand_rgg_data['alive_nodes']))
alive.append(statistics.mean(rand_rand_data['alive_nodes']))

tot_isol = []
tot_isol.append(statistics.mean(rgg_rgg_data['tot_isol_node']))
tot_isol.append(statistics.mean(rgg_rand_data['tot_isol_node']))
tot_isol.append(statistics.mean(rand_rgg_data['tot_isol_node']))
tot_isol.append(statistics.mean(rand_rand_data['tot_isol_node']))

tot_unsupp = []
tot_unsupp.append(statistics.mean(rgg_rgg_data['tot_unsupp_node']))
tot_unsupp.append(statistics.mean(rgg_rand_data['tot_unsupp_node']))
tot_unsupp.append(statistics.mean(rand_rgg_data['tot_unsupp_node']))
tot_unsupp.append(statistics.mean(rand_rand_data['tot_unsupp_node']))

deaths = [alive[0], init_death, tot_isol[0], tot_unsupp[0]]
plt.pie(deaths, labels=labels, autopct='%.1f%%')
plt.title('RGG-RGG death trend')
plt.show()

deaths = [alive[1], init_death, tot_isol[1], tot_unsupp[1]]
plt.pie(deaths, labels=labels, autopct='%.1f%%')
plt.title('RGG-Rand death trend')
plt.show()

deaths = [alive[2], init_death, tot_isol[2], tot_unsupp[2]]
plt.pie(deaths, labels=labels, autopct='%.1f%%')
plt.title('Rand-RGG death trend')
plt.show()

deaths = [alive[3], init_death, tot_isol[3], tot_unsupp[3]]
plt.pie(deaths, labels=labels, autopct='%.1f%%')
plt.title('Rand-Rand death trend')
plt.show()
```

## Compute the number of nodes

```
x_val = np.arange(4)
labels = ['initial', 'final']
plt.bar(x_val, alive)
plt.xticks(x_val, graph_types)
plt.title('Alive nodes')
plt.savefig('alive nodes.png')
plt.show()
```

## Compare the number of edges

```
init_intra = []
init_intra.append(statistics.mean(rgg_rgg_data['init_intra_edge']))
init_intra.append(statistics.mean(rgg_rand_data['init_intra_edge']))
init_intra.append(statistics.mean(rand_rgg_data['init_intra_edge']))
init_intra.append(statistics.mean(rand_rand_data['init_intra_edge']))

init_inter = []
init_inter.append(statistics.mean(rgg_rgg_data['init_inter_edge']))
init_inter.append(statistics.mean(rgg_rand_data['init_inter_edge']))
init_inter.append(statistics.mean(rand_rgg_data['init_inter_edge']))
init_inter.append(statistics.mean(rand_rand_data['init_inter_edge']))

init_supp = []
init_supp.append(statistics.mean(rgg_rgg_data['init_supp_edge']))
init_supp.append(statistics.mean(rgg_rand_data['init_supp_edge']))
init_supp.append(statistics.mean(rand_rgg_data['init_supp_edge']))
init_supp.append(statistics.mean(rand_rand_data['init_supp_edge']))

fin_intra = []
fin_intra.append(statistics.mean(rgg_rgg_data['fin_intra_edge']))
fin_intra.append(statistics.mean(rgg_rand_data['fin_intra_edge']))
fin_intra.append(statistics.mean(rand_rgg_data['fin_intra_edge']))
fin_intra.append(statistics.mean(rand_rand_data['fin_intra_edge']))

fin_inter = []
fin_inter.append(statistics.mean(rgg_rgg_data['fin_inter_edge']))
fin_inter.append(statistics.mean(rgg_rand_data['fin_inter_edge']))
fin_inter.append(statistics.mean(rand_rgg_data['fin_inter_edge']))
fin_inter.append(statistics.mean(rand_rand_data['fin_inter_edge']))

fin_supp = []
fin_supp.append(statistics.mean(rgg_rgg_data['fin_supp_edge']))
fin_supp.append(statistics.mean(rgg_rand_data['fin_supp_edge']))
fin_supp.append(statistics.mean(rand_rgg_data['fin_supp_edge']))
fin_supp.append(statistics.mean(rand_rand_data['fin_supp_edge']))

plt.bar(x_val-0.1, init_intra, width=0.2)
plt.bar(x_val+0.1, fin_intra, width=0.2)
plt.legend(labels)
plt.xticks(x_val, graph_types)
plt.title('Initial_intra_edge vs Final_intra_edge')
plt.show()

plt.bar(x_val-0.1, init_inter, width=0.2)
plt.bar(x_val+0.1, fin_inter, width=0.2)
plt.legend(labels)
plt.xticks(x_val, graph_types)
plt.title('Initial_inter_edge vs Final_inter_edge')
plt.show()

plt.bar(x_val-0.1, init_supp, width=0.2)
plt.bar(x_val+0.1, fin_supp, width=0.2)
plt.legend(labels)
plt.xticks(x_val, graph_types)
plt.title('Initial_support_edge vs Final_support_edge')
plt.show()
```

## Network Analysis

```
init_far = []
init_far.append(statistics.mean(rgg_rgg_data['init_far_node']))
init_far.append(statistics.mean(rgg_rand_data['init_far_node']))
init_far.append(statistics.mean(rand_rgg_data['init_far_node']))
init_far.append(statistics.mean(rand_rand_data['init_far_node']))

fin_far = []
fin_far.append(statistics.mean(rgg_rgg_data['fin_far_node']))
fin_far.append(statistics.mean(rgg_rand_data['fin_far_node']))
fin_far.append(statistics.mean(rand_rgg_data['fin_far_node']))
fin_far.append(statistics.mean(rand_rand_data['fin_far_node']))

plt.bar(x_val-0.1, init_far, width=0.2)
plt.bar(x_val+0.1, fin_far, width=0.2)
plt.legend(labels)
plt.xticks(x_val, graph_types)
plt.title('Initial_far_node vs Final_far_node')
plt.show()

init_clust = []
init_clust.append(statistics.mean(rgg_rgg_data['init_clust']))
init_clust.append(statistics.mean(rgg_rand_data['init_clust']))
init_clust.append(statistics.mean(rand_rgg_data['init_clust']))
init_clust.append(statistics.mean(rand_rand_data['init_clust']))

fin_clust = []
fin_clust.append(statistics.mean(rgg_rgg_data['fin_clust']))
fin_clust.append(statistics.mean(rgg_rand_data['fin_clust']))
fin_clust.append(statistics.mean(rand_rgg_data['fin_clust']))
fin_clust.append(statistics.mean(rand_rand_data['fin_clust']))

plt.bar(x_val-0.1, init_clust, width=0.2)
plt.bar(x_val+0.1, fin_clust, width=0.2)
plt.legend(labels)
plt.xticks(x_val, graph_types)
plt.title('Initial_clustering_coefficient vs Final_clustering_coefficient')
plt.show()

init_mean_deg = []
init_mean_deg.append(statistics.mean(rgg_rgg_data['init_mean_deg']))
init_mean_deg.append(statistics.mean(rgg_rand_data['init_mean_deg']))
init_mean_deg.append(statistics.mean(rand_rgg_data['init_mean_deg']))
init_mean_deg.append(statistics.mean(rand_rand_data['init_mean_deg']))

fin_mean_deg = []
fin_mean_deg.append(statistics.mean(rgg_rgg_data['fin_mean_deg']))
fin_mean_deg.append(statistics.mean(rgg_rand_data['fin_mean_deg']))
fin_mean_deg.append(statistics.mean(rand_rgg_data['fin_mean_deg']))
fin_mean_deg.append(statistics.mean(rand_rand_data['fin_mean_deg']))

plt.bar(x_val-0.1, init_mean_deg, width=0.2)
plt.bar(x_val+0.1, fin_mean_deg, width=0.2)
plt.legend(labels)
plt.xticks(x_val, graph_types)
plt.title('Initial_mean_degree vs Final_mean_degree')
plt.show()

init_larg_comp = []
init_larg_comp.append(statistics.mean(rgg_rgg_data['init_larg_comp']))
init_larg_comp.append(statistics.mean(rgg_rand_data['init_larg_comp']))
init_larg_comp.append(statistics.mean(rand_rgg_data['init_larg_comp']))
init_larg_comp.append(statistics.mean(rand_rand_data['init_larg_comp']))

fin_larg_comp = []
fin_larg_comp.append(statistics.mean(rgg_rgg_data['fin_larg_comp']))
fin_larg_comp.append(statistics.mean(rgg_rand_data['fin_larg_comp']))
fin_larg_comp.append(statistics.mean(rand_rgg_data['fin_larg_comp']))
fin_larg_comp.append(statistics.mean(rand_rand_data['fin_larg_comp']))

plt.bar(x_val-0.1, init_larg_comp, width=0.2)
plt.bar(x_val+0.1, fin_larg_comp, width=0.2)
plt.legend(labels)
plt.xticks(x_val, graph_types)
plt.title('Initial_largest_component_size vs Final_largest_component_size')
plt.show()

deg_assort = []
a = rgg_rgg_data['deg_assort'].fillna(0)
b = rgg_rand_data['deg_assort'].fillna(0)
c = rand_rgg_data['deg_assort'].fillna(0)
d = rand_rand_data['deg_assort'].fillna(0)
deg_assort.append(statistics.mean(a))
deg_assort.append(statistics.mean(b))
deg_assort.append(statistics.mean(c))
deg_assort.append(statistics.mean(d))

plt.bar(x_val, deg_assort)
plt.xticks(x_val, graph_types)
plt.title('Degree Assortativity')
plt.show()

dist_deg_cent = []
dist_deg_cent.append(statistics.mean(rgg_rgg_data['dist_deg_cent']))
dist_deg_cent.append(statistics.mean(rgg_rand_data['dist_deg_cent']))
dist_deg_cent.append(statistics.mean(rand_rgg_data['dist_deg_cent']))
dist_deg_cent.append(statistics.mean(rand_rand_data['dist_deg_cent']))

plt.bar(x_val, dist_deg_cent)
plt.xticks(x_val, graph_types)
plt.title('Distance to degree centre from the attack point')
plt.show()

dist_bet_cent = []
dist_bet_cent.append(statistics.mean(rgg_rgg_data['dist_bet_cent']))
dist_bet_cent.append(statistics.mean(rgg_rand_data['dist_bet_cent']))
dist_bet_cent.append(statistics.mean(rand_rgg_data['dist_bet_cent']))
dist_bet_cent.append(statistics.mean(rand_rand_data['dist_bet_cent']))

plt.bar(x_val, dist_bet_cent)
plt.xticks(x_val, graph_types)
plt.title('Distance to betweenness centre from the attack point')
plt.show()
```
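The four hand-rolled running-total loops above (the `total = []` blocks that build `cum_rgg_rgg_isol` and friends) can each be replaced by `itertools.accumulate` from the standard library. A minimal sketch on made-up per-step counts (the variable names mirror the notebook; the data here is dummy):

```python
from itertools import accumulate

# Hypothetical per-step mean counts, standing in for rgg_rgg_isol / rgg_rgg_unsupp
rgg_rgg_isol = [3.0, 2.0, 1.0]
rgg_rgg_unsupp = [5.0, 1.0, 0.5]

# Running totals, equivalent to the manual total[0] / total[1] bookkeeping
cum_rgg_rgg_isol = list(accumulate(rgg_rgg_isol))
cum_rgg_rgg_unsupp = list(accumulate(rgg_rgg_unsupp))

print(cum_rgg_rgg_isol)    # [3.0, 5.0, 6.0]
print(cum_rgg_rgg_unsupp)  # [5.0, 6.0, 6.5]
```

One `accumulate` call per series removes the repeated if/else blocks without changing the plotted values.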
# Convolutional Neural Network Example with Per-Layer Visualization

```
import os
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
%matplotlib inline
print("Current TensorFlow version is [%s]" % (tf.__version__))
print("All packages loaded")
```

## Load MNIST

```
mnist = input_data.read_data_sets('data/', one_hot=True)
trainimg = mnist.train.images
trainlabel = mnist.train.labels
testimg = mnist.test.images
testlabel = mnist.test.labels
print("MNIST ready")
```

## Define the Model

```
# NETWORK TOPOLOGIES
n_input = 784
n_channel = 64
n_classes = 10

# INPUTS AND OUTPUTS
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])

# NETWORK PARAMETERS
stddev = 0.1
weights = {
    'c1': tf.Variable(tf.random_normal([7, 7, 1, n_channel], stddev=stddev)),
    'd1': tf.Variable(tf.random_normal([14*14*64, n_classes], stddev=stddev))
}
biases = {
    'c1': tf.Variable(tf.random_normal([n_channel], stddev=stddev)),
    'd1': tf.Variable(tf.random_normal([n_classes], stddev=stddev))
}
print("NETWORK READY")
```

## Define the Graph

```
# MODEL
def CNN(_x, _w, _b):
    # RESHAPE
    _x_r = tf.reshape(_x, shape=[-1, 28, 28, 1])
    # CONVOLUTION
    _conv1 = tf.nn.conv2d(_x_r, _w['c1'], strides=[1, 1, 1, 1], padding='SAME')
    # ADD BIAS
    _conv2 = tf.nn.bias_add(_conv1, _b['c1'])
    # RELU
    _conv3 = tf.nn.relu(_conv2)
    # MAX-POOL
    _pool = tf.nn.max_pool(_conv3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    # VECTORIZE
    _dense = tf.reshape(_pool, [-1, _w['d1'].get_shape().as_list()[0]])
    # DENSE
    _logit = tf.add(tf.matmul(_dense, _w['d1']), _b['d1'])
    _out = {
        'x_r': _x_r, 'conv1': _conv1, 'conv2': _conv2, 'conv3': _conv3,
        'pool': _pool, 'dense': _dense, 'logit': _logit
    }
    return _out

# PREDICTION
cnnout = CNN(x, weights, biases)

# LOSS AND OPTIMIZER
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    labels=y, logits=cnnout['logit']))
optm = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
corr = tf.equal(tf.argmax(cnnout['logit'], 1), tf.argmax(y, 1))
accr = tf.reduce_mean(tf.cast(corr, "float"))

# INITIALIZER
init = tf.global_variables_initializer()
print("FUNCTIONS READY")
```

## Save

```
savedir = "nets/cnn_mnist_simple/"
saver = tf.train.Saver(max_to_keep=3)
save_step = 4
if not os.path.exists(savedir):
    os.makedirs(savedir)
print("SAVER READY")
```

## Run

```
# PARAMETERS
training_epochs = 20
batch_size = 100
display_step = 4

# LAUNCH THE GRAPH
sess = tf.Session()
sess.run(init)

# OPTIMIZE
for epoch in range(training_epochs):
    avg_cost = 0.
    total_batch = int(mnist.train.num_examples/batch_size)
    # ITERATION
    for i in range(total_batch):
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        feeds = {x: batch_xs, y: batch_ys}
        sess.run(optm, feed_dict=feeds)
        avg_cost += sess.run(cost, feed_dict=feeds)
    avg_cost = avg_cost / total_batch
    # DISPLAY
    if (epoch+1) % display_step == 0:
        print("Epoch: %03d/%03d cost: %.9f" % (epoch+1, training_epochs, avg_cost))
        feeds = {x: batch_xs, y: batch_ys}
        train_acc = sess.run(accr, feed_dict=feeds)
        print("TRAIN ACCURACY: %.3f" % (train_acc))
        feeds = {x: mnist.test.images, y: mnist.test.labels}
        test_acc = sess.run(accr, feed_dict=feeds)
        print("TEST ACCURACY: %.3f" % (test_acc))
    # SAVE
    if (epoch+1) % save_step == 0:
        savename = savedir+"net-"+str(epoch+1)+".ckpt"
        saver.save(sess, savename)
        print("[%s] SAVED." % (savename))
print("OPTIMIZATION FINISHED")
```

## Restore

```
do_restore = 0
if do_restore == 1:
    sess = tf.Session()
    epoch = 20
    savename = savedir+"net-"+str(epoch)+".ckpt"
    saver.restore(sess, savename)
    print("NETWORK RESTORED")
else:
    print("DO NOTHING")
```

## How the CNN Works

```
input_r = sess.run(cnnout['x_r'], feed_dict={x: trainimg[0:1, :]})
conv1 = sess.run(cnnout['conv1'], feed_dict={x: trainimg[0:1, :]})
conv2 = sess.run(cnnout['conv2'], feed_dict={x: trainimg[0:1, :]})
conv3 = sess.run(cnnout['conv3'], feed_dict={x: trainimg[0:1, :]})
pool = sess.run(cnnout['pool'], feed_dict={x: trainimg[0:1, :]})
dense = sess.run(cnnout['dense'], feed_dict={x: trainimg[0:1, :]})
out = sess.run(cnnout['logit'], feed_dict={x: trainimg[0:1, :]})
```

## Input

```
print("Size of 'input_r' is %s" % (input_r.shape,))
label = np.argmax(trainlabel[0, :])
print("Label is %d" % (label))

# PLOT
plt.matshow(input_r[0, :, :, 0], cmap=plt.get_cmap('gray'))
plt.title("Label of this image is " + str(label) + "")
plt.colorbar()
plt.show()
```

# CONV (Convolution Layer)

```
print("SIZE OF 'CONV1' IS %s" % (conv1.shape,))
for i in range(3):
    plt.matshow(conv1[0, :, :, i], cmap=plt.get_cmap('gray'))
    plt.title(str(i) + "th conv1")
    plt.colorbar()
    plt.show()
```

## CONV + BIAS

```
print("SIZE OF 'CONV2' IS %s" % (conv2.shape,))
for i in range(3):
    plt.matshow(conv2[0, :, :, i], cmap=plt.get_cmap('gray'))
    plt.title(str(i) + "th conv2")
    plt.colorbar()
    plt.show()
```

## CONV + BIAS + RELU

```
print("SIZE OF 'CONV3' IS %s" % (conv3.shape,))
for i in range(3):
    plt.matshow(conv3[0, :, :, i], cmap=plt.get_cmap('gray'))
    plt.title(str(i) + "th conv3")
    plt.colorbar()
    plt.show()
```

## POOL

```
print("SIZE OF 'POOL' IS %s" % (pool.shape,))
for i in range(3):
    plt.matshow(pool[0, :, :, i], cmap=plt.get_cmap('gray'))
    plt.title(str(i) + "th pool")
    plt.colorbar()
    plt.show()
```

## DENSE

```
print("SIZE OF 'DENSE' IS %s" % (dense.shape,))
print("SIZE OF 'OUT' IS %s" % (out.shape,))
plt.matshow(out, cmap=plt.get_cmap('gray'))
plt.title("OUT")
plt.colorbar()
plt.show()
```

## CONVOLUTION FILTER

```
wc1 = sess.run(weights['c1'])
print("SIZE OF 'WC1' IS %s" % (wc1.shape,))
for i in range(3):
    plt.matshow(wc1[:, :, 0, i], cmap=plt.get_cmap('gray'))
    plt.title(str(i) + "th conv filter")
    plt.colorbar()
    plt.show()
```
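To make concrete what the CONV cell computes per output channel, here is a tiny pure-NumPy sketch of 2-D cross-correlation with 'SAME' zero padding. `conv2d_same` is a hypothetical helper for illustration only: it assumes an odd-sized kernel and ignores strides, bias, and multiple channels, unlike `tf.nn.conv2d` above.

```python
import numpy as np

def conv2d_same(image, kernel):
    """Single-channel 2-D cross-correlation with 'SAME' zero padding
    (hypothetical helper, not part of the notebook)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    # Zero-pad so the output has the same spatial size as the input
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode='constant')
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            # Slide the kernel over the padded image and take the dot product
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.zeros((3, 3))
kernel[1, 1] = 1.0            # identity kernel
result = conv2d_same(image, kernel)
print(np.allclose(result, image))  # True: the identity kernel reproduces the input
```

The 7x7 filters visualized in the CONVOLUTION FILTER cell play the role of `kernel` here, one per output channel.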
Advanced Python

```
"""
Searching - sequential search and binary search

An algorithm is a method (a sequence of steps) for solving a problem.
The quality of an algorithm is judged mainly by two measures: asymptotic time complexity and
asymptotic space complexity. It is usually hard for a single algorithm to achieve both low
time complexity and low space complexity, because time and space are conflicting resources.
Asymptotic time complexity is usually written in big-O notation:
O(c): constant time - hash storage / Bloom filter
O(log_2 n): logarithmic time - binary search
O(n): linear time - sequential search
O(n * log_2 n): linearithmic time - advanced sorting algorithms (merge sort, quicksort)
O(n ** 2): quadratic time - simple sorting algorithms (bubble sort, selection sort, insertion sort)
O(n ** 3): cubic time - Floyd's algorithm / matrix multiplication
(the classes up to here are also called polynomial time complexities)
O(2 ** n): geometric-progression time - Tower of Hanoi
O(3 ** n): geometric-progression time (also called exponential time)
O(n!): factorial time - travelling salesman problem - NP
"""
from math import log2, factorial

from matplotlib import pyplot
import numpy


def seq_search(items: list, elem) -> int:
    """Sequential search"""
    for index, item in enumerate(items):
        if elem == item:
            return index
    return -1


def bin_search(items, elem):
    """Binary search"""
    start, end = 0, len(items) - 1
    while start <= end:
        mid = (start + end) // 2
        if elem > items[mid]:
            start = mid + 1
        elif elem < items[mid]:
            end = mid - 1
        else:
            return mid
    return -1


def main():
    """Main function (program entry point)"""
    num = 6
    styles = ['r-.', 'g-*', 'b-o', 'y-x', 'c-^', 'm-+', 'k-d']
    legends = ['logarithmic', 'linear', 'linearithmic', 'quadratic',
               'cubic', 'geometric', 'factorial']
    x_data = [x for x in range(1, num + 1)]
    y_data1 = [log2(y) for y in range(1, num + 1)]
    y_data2 = [y for y in range(1, num + 1)]
    y_data3 = [y * log2(y) for y in range(1, num + 1)]
    y_data4 = [y ** 2 for y in range(1, num + 1)]
    y_data5 = [y ** 3 for y in range(1, num + 1)]
    y_data6 = [3 ** y for y in range(1, num + 1)]
    y_data7 = [factorial(y) for y in range(1, num + 1)]
    y_datas = [y_data1, y_data2, y_data3, y_data4, y_data5, y_data6, y_data7]
    for index, y_data in enumerate(y_datas):
        pyplot.plot(x_data, y_data, styles[index])
    pyplot.legend(legends)
    pyplot.xticks(numpy.arange(1, 7, step=1))
    pyplot.yticks(numpy.arange(0, 751, step=50))
    pyplot.show()


if __name__ == '__main__':
    main()
```
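A quick sanity check of the two search functions on a small sorted list: both should return the same index on a hit, and -1 on a miss. The definitions are repeated here (without type hints) so the check is self-contained; the data is dummy.

```python
def seq_search(items, elem):
    """Sequential search (as defined above)."""
    for index, item in enumerate(items):
        if elem == item:
            return index
    return -1

def bin_search(items, elem):
    """Binary search (as defined above); requires a sorted list."""
    start, end = 0, len(items) - 1
    while start <= end:
        mid = (start + end) // 2
        if elem > items[mid]:
            start = mid + 1
        elif elem < items[mid]:
            end = mid - 1
        else:
            return mid
    return -1

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
assert seq_search(primes, 13) == bin_search(primes, 13) == 5   # hit
assert seq_search(primes, 4) == bin_search(primes, 4) == -1    # miss
print('both searches agree')
```

Note that binary search's O(log_2 n) advantage only holds because the input is sorted; on unsorted data only the O(n) sequential search is correct.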
# TV Script Generation In this project, you'll generate your own [Simpsons](https://en.wikipedia.org/wiki/The_Simpsons) TV scripts using RNNs. You'll be using part of the [Simpsons dataset](https://www.kaggle.com/wcukierski/the-simpsons-by-the-data) of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at [Moe's Tavern](https://simpsonswiki.com/wiki/Moe's_Tavern). ## Get the Data The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc.. ``` """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper data_dir = './data/simpsons/moes_tavern_lines.txt' text = helper.load_data(data_dir) # Ignore notice, since we don't use it for analysing the data text = text[81:] ``` ## Explore the Data Play around with `view_sentence_range` to view different parts of the data. 
``` view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()}))) scenes = text.split('\n\n') print('Number of scenes: {}'.format(len(scenes))) sentence_count_scene = [scene.count('\n') for scene in scenes] print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene))) sentences = [sentence for scene in scenes for sentence in scene.split('\n')] print('Number of lines: {}'.format(len(sentences))) word_count_sentence = [len(sentence.split()) for sentence in sentences] print('Average number of words in each line: {}'.format(np.average(word_count_sentence))) print() print('The sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) ``` ## Implement Preprocessing Functions The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below: - Lookup Table - Tokenize Punctuation ### Lookup Table To create a word embedding, you first need to transform the words to ids. 
In this function, create two dictionaries:
- Dictionary to go from the words to an id, which we'll call `vocab_to_int`
- Dictionary to go from the id to word, which we'll call `int_to_vocab`

Return these dictionaries in the following tuple `(vocab_to_int, int_to_vocab)`

```
import numpy as np
import problem_unittests as tests

def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    # Build the vocabulary once so both dicts see the same ordering
    vocab = set(text)
    vocab_to_int = {w: i for i, w in enumerate(vocab)}
    int_to_vocab = {i: w for i, w in enumerate(vocab)}
    return vocab_to_int, int_to_vocab

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
```

### Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".

Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )

This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".

```
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
    return {
        '.': '||Period||',
        ',': '||Comma||',
        '"': '||Quotation_Mark||',
        ';': '||Semicolon||',
        '!': '||Exclamation_mark||',
        '?': '||Question_mark||',
        '(': '||Left_Parentheses||',
        ')': '||Right_Parentheses||',
        '--': '||Dash||',
        "\n": '||Return||'
    }

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
```

## Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.

```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
```

# Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.

```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests

int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
```

## Build the Neural Network
You'll build the components necessary to build an RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches

### Check the Version of TensorFlow and Access to GPU

```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
```

### Input
Implement the `get_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder) `name` parameter.
- Targets placeholder
- Learning Rate placeholder

Return the placeholders in the following tuple `(Input, Targets, LearningRate)`

```
def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
    inputs = tf.placeholder(dtype=tf.int32, shape=[None, None], name='input')
    targets = tf.placeholder(dtype=tf.int32, shape=[None, None], name='targets')
    learning_rate = tf.placeholder(dtype=tf.float32, name='learning_rate')
    return inputs, targets, learning_rate

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
```

### Build RNN Cell and Initialize
Stack one or more [`BasicLSTMCells`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/BasicLSTMCell) in a [`MultiRNNCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/MultiRNNCell).
- The RNN size should be set using `rnn_size`
- Initialize Cell State using the MultiRNNCell's [`zero_state()`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/MultiRNNCell#zero_state) function
- Apply the name "initial_state" to the initial state using [`tf.identity()`](https://www.tensorflow.org/api_docs/python/tf/identity)

Return the cell and initial state in the following tuple `(Cell, InitialState)`

```
def get_init_cell(batch_size, rnn_size, keep_prob=0.8, layers=3):
    """
    Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :return: Tuple (cell, initialize state)
    """
    # Give each layer its own cell instance; reusing a single cell object
    # across layers would make the layers share weights.
    cells = []
    for _ in range(layers):
        cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
        cells.append(tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob))
    multi = tf.contrib.rnn.MultiRNNCell(cells)
    init_state = multi.zero_state(batch_size, tf.float32)
    init_state = tf.identity(init_state, 'initial_state')
    return multi, init_state

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
```

### Word Embedding
Apply embedding to `input_data` using TensorFlow. Return the embedded sequence.

```
def get_embed(input_data, vocab_size, embed_dim):
    """
    Create embedding for <input_data>.
    :param input_data: TF placeholder for text input.
    :param vocab_size: Number of words in vocabulary.
    :param embed_dim: Number of embedding dimensions
    :return: Embedded input.
    """
    embeddings = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
    embed = tf.nn.embedding_lookup(embeddings, input_data)
    return embed

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
```

### Build RNN
You created an RNN Cell in the `get_init_cell()` function. Time to use the cell to create an RNN.
- Build the RNN using the [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn)
- Apply the name "final_state" to the final state using [`tf.identity()`](https://www.tensorflow.org/api_docs/python/tf/identity)

Return the outputs and final state in the following tuple `(Outputs, FinalState)`

```
def build_rnn(cell, inputs):
    """
    Create an RNN using an RNN Cell
    :param cell: RNN Cell
    :param inputs: Input text data
    :return: Tuple (Outputs, Final State)
    """
    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    final_state = tf.identity(final_state, 'final_state')
    return outputs, final_state

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
```

### Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to `input_data` using your `get_embed(input_data, vocab_size, embed_dim)` function.
- Build RNN using `cell` and your `build_rnn(cell, inputs)` function.
- Apply a fully connected layer with a linear activation and `vocab_size` as the number of outputs.

Return the logits and final state in the following tuple `(Logits, FinalState)`

```
def build_nn(cell, rnn_size, input_data, vocab_size):
    """
    Build part of the neural network
    :param cell: RNN cell
    :param rnn_size: Size of rnns
    :param input_data: Input data
    :param vocab_size: Vocabulary size
    :return: Tuple (Logits, FinalState)
    """
    embed = get_embed(input_data, vocab_size, rnn_size)
    outputs, final_state = build_rnn(cell, embed)
    logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
    return logits, final_state

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
```

### Batches
Implement `get_batches` to create batches of input and targets using `int_text`. The batches should be a Numpy array with the shape `(number of batches, 2, batch size, sequence length)`. Each batch contains two elements:
- The first element is a single batch of **input** with the shape `[batch size, sequence length]`
- The second element is a single batch of **targets** with the shape `[batch size, sequence length]`

If you can't fill the last batch with enough data, drop the last batch.

For example, `get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3)` would return a Numpy array of the following:

```
[
  # First Batch
  [
    # Batch of Input
    [[ 1  2  3], [ 7  8  9]],
    # Batch of targets
    [[ 2  3  4], [ 8  9 10]]
  ],

  # Second Batch
  [
    # Batch of Input
    [[ 4  5  6], [10 11 12]],
    # Batch of targets
    [[ 5  6  7], [11 12 13]]
  ]
]
```

```
def get_batches(int_text, batch_size, seq_length):
    """
    Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array with shape (number of batches, 2, batch size, sequence length)
    """
    n_batches = len(int_text) // (batch_size * seq_length)
    result = []
    for i in range(n_batches):
        inputs = []
        targets = []
        for j in range(batch_size):
            # Row j continues across batches, so batch i of row j starts at
            # j * n_batches * seq_length + i * seq_length.
            idx = j * n_batches * seq_length + i * seq_length
            inputs.append(int_text[idx:idx + seq_length])
            target = int_text[idx + 1:idx + seq_length + 1]
            # The very last target has no following word; wrap around to the start
            if len(target) < seq_length:
                target = target + [int_text[0]]
            targets.append(target)
        result.append([inputs, targets])
    return np.array(result)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
```

## Neural Network Training
### Hyperparameters
Tune the following parameters:
- Set `num_epochs` to the number of epochs.
- Set `batch_size` to the batch size.
- Set `rnn_size` to the size of the RNNs.
- Set `seq_length` to the length of sequence.
- Set `learning_rate` to the learning rate.
- Set `show_every_n_batches` to how often (in batches) the neural network should print progress.
```
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 256
# Sequence Length
seq_length = 25
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 50

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
```

### Build the Graph

Build the graph using the neural network you implemented.

```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq

train_graph = tf.Graph()
with train_graph.as_default():
    vocab_size = len(int_to_vocab)
    input_text, targets, lr = get_inputs()
    input_data_shape = tf.shape(input_text)
    cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
    logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)

    # Probabilities for generating words
    probs = tf.nn.softmax(logits, name='probs')

    # Loss function
    cost = seq2seq.sequence_loss(
        logits,
        targets,
        tf.ones([input_data_shape[0], input_data_shape[1]]))

    # Optimizer
    optimizer = tf.train.AdamOptimizer(lr)

    # Gradient Clipping
    gradients = optimizer.compute_gradients(cost)
    capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
    train_op = optimizer.apply_gradients(capped_gradients)
```

## Train

Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the [forums](https://discussions.udacity.com/) to see if anyone is having the same problem.
``` """ DON'T MODIFY ANYTHING IN THIS CELL """ batches = get_batches(int_text, batch_size, seq_length) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(num_epochs): state = sess.run(initial_state, {input_text: batches[0][0]}) for batch_i, (x, y) in enumerate(batches): feed = { input_text: x, targets: y, initial_state: state, lr: learning_rate} train_loss, state, _ = sess.run([cost, final_state, train_op], feed) # Show every <show_every_n_batches> batches if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0: print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format( epoch_i, batch_i, len(batches), train_loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_dir) print('Model Trained and Saved') ``` ## Save Parameters Save `seq_length` and `save_dir` for generating a new TV script. ``` """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params((seq_length, save_dir)) ``` # Checkpoint ``` """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() seq_length, load_dir = helper.load_params() ``` ## Implement Generate Functions ### Get Tensors Get tensors from `loaded_graph` using the function [`get_tensor_by_name()`](https://www.tensorflow.org/api_docs/python/tf/Graph#get_tensor_by_name). 
Get the tensors using the following names:

- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"

Return the tensors in the following tuple `(InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)`

```
def get_tensors(loaded_graph):
    """
    Get input, initial state, final state, and probabilities tensor from <loaded_graph>
    :param loaded_graph: TensorFlow graph loaded from file
    :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
    """
    inputs = loaded_graph.get_tensor_by_name('input:0')
    init_state = loaded_graph.get_tensor_by_name('initial_state:0')
    final_state = loaded_graph.get_tensor_by_name('final_state:0')
    probs = loaded_graph.get_tensor_by_name('probs:0')
    return inputs, init_state, final_state, probs


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
```

### Choose Word

Implement the `pick_word()` function to select the next word using `probabilities`.

```
def pick_word(probabilities, int_to_vocab):
    """
    Pick the next word in the generated text
    :param probabilities: Probabilities of the next word
    :param int_to_vocab: Dictionary of word ids as the keys and words as the values
    :return: String of the predicted word
    """
    # Greedy choice: always take the most probable word
    return int_to_vocab[np.argmax(probabilities)]


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
```

## Generate TV Script

This will generate the TV script for you. Set `gen_length` to the length of TV script you want to generate.
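A side note before generating: the `pick_word()` implemented above is greedy, always emitting the single most probable word, which can make the generated script repetitive. A common alternative, shown here as a sketch (not required by the project's unit tests, and with an assumed toy vocabulary), is to sample the next word in proportion to its probability:

```python
import numpy as np

def pick_word_sampled(probabilities, int_to_vocab):
    """Sample the next word id according to the probability distribution."""
    word_id = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[word_id]

# Toy example with an assumed three-word vocabulary
toy_vocab = {0: 'homer_simpson', 1: 'moe_szyslak', 2: 'barney_gumble'}
toy_probs = np.array([0.1, 0.8, 0.1])
word = pick_word_sampled(toy_probs, toy_vocab)
```

Sampling keeps occasional low-probability words in the output, which usually reads more naturally than greedy decoding.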
```
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_dir + '.meta')
    loader.restore(sess, load_dir)

    # Get Tensors from loaded model
    input_text, initial_state, final_state, probs = get_tensors(loaded_graph)

    # Sentences generation setup
    gen_sentences = [prime_word + ':']
    prev_state = sess.run(initial_state, {input_text: np.array([[1]])})

    # Generate sentences
    for n in range(gen_length):
        # Dynamic Input
        dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
        dyn_seq_length = len(dyn_input[0])

        # Get Prediction
        probabilities, prev_state = sess.run(
            [probs, final_state],
            {input_text: dyn_input, initial_state: prev_state})

        pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)

        gen_sentences.append(pred_word)

    # Remove tokens
    tv_script = ' '.join(gen_sentences)
    for key, token in token_dict.items():
        ending = ' ' if key in ['\n', '(', '"'] else ''
        tv_script = tv_script.replace(' ' + token.lower(), key)
    tv_script = tv_script.replace('\n ', '\n')
    tv_script = tv_script.replace('( ', '(')

    print(tv_script)
```

# The TV Script is Nonsensical

It's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckily there's more data! As we mentioned at the beginning of this project, this is a subset of [another dataset](https://www.kaggle.com/wcukierski/the-simpsons-by-the-data). We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.

# Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook.
Save the notebook file as "dlnd_tv_script_generation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
### How to Find Your Neighbor?

In neighborhood based collaborative filtering, it is incredibly important to be able to identify an individual's neighbors. Let's look at a small dataset in order to understand how we can use different metrics to identify close neighbors.

```
%load_ext lab_black
from itertools import product

import numpy as np
import pandas as pd
from scipy.stats import spearmanr, kendalltau
import matplotlib.pyplot as plt

import tests as t
import helper as h

%matplotlib inline

play_data = pd.DataFrame(
    {
        "x1": [-3, -2, -1, 0, 1, 2, 3],
        "x2": [9, 4, 1, 0, 1, 4, 9],
        "x3": [1, 2, 3, 4, 5, 6, 7],
        "x4": [2, 5, 15, 27, 28, 30, 31],
    }
)

# create play data dataframe
play_data = play_data[["x1", "x2", "x3", "x4"]]
```

### Measures of Similarity

The first metrics we will look at have similar characteristics:

1. Pearson's Correlation Coefficient
2. Spearman's Correlation Coefficient
3. Kendall's Tau

Let's take a look at each of these individually.

### Pearson's Correlation

First, **Pearson's correlation coefficient** is a measure related to the strength and direction of a **linear** relationship. If we have two vectors **x** and **y**, we can compare their individual elements in the following way to calculate Pearson's correlation coefficient:

$$CORR(\textbf{x}, \textbf{y}) = \frac{\sum\limits_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum\limits_{i=1}^{n}(x_i-\bar{x})^2}\sqrt{\sum\limits_{i=1}^{n}(y_i-\bar{y})^2}} $$

where

$$\bar{x} = \frac{1}{n}\sum\limits_{i=1}^{n}x_i$$

`1.` Write a function that takes in two vectors and returns the Pearson correlation coefficient. You can then compare your answer to the built in function in numpy by using the assert statements in the following cell.
```
def pearson_corr(x, y):
    """
    INPUT
    x - an array of matching length to array y
    y - an array of matching length to array x
    OUTPUT
    corr - the pearson correlation coefficient for comparing x and y
    """
    numerator = (x - np.mean(x)) @ (y - np.mean(y))
    denominator = np.sqrt(np.sum((x - np.mean(x)) ** 2)) * np.sqrt(
        np.sum((y - np.mean(y)) ** 2)
    )
    corr = numerator / denominator
    return corr


# This cell will test your function against the built in numpy function
assert (
    pearson_corr(play_data["x1"], play_data["x2"])
    == np.corrcoef(play_data["x1"], play_data["x2"])[0][1]
), "Oops! The correlation between the first two columns should be 0, but your function returned {}.".format(
    pearson_corr(play_data["x1"], play_data["x2"])
)

assert (
    round(pearson_corr(play_data["x1"], play_data["x3"]), 2)
    == np.corrcoef(play_data["x1"], play_data["x3"])[0][1]
), "Oops! The correlation between the first and third columns should be {}, but your function returned {}.".format(
    np.corrcoef(play_data["x1"], play_data["x3"])[0][1],
    pearson_corr(play_data["x1"], play_data["x3"]),
)

assert round(pearson_corr(play_data["x3"], play_data["x4"]), 2) == round(
    np.corrcoef(play_data["x3"], play_data["x4"])[0][1], 2
), "Oops! The correlation between the third and fourth columns should be {}, but your function returned {}.".format(
    np.corrcoef(play_data["x3"], play_data["x4"])[0][1],
    pearson_corr(play_data["x3"], play_data["x4"]),
)

print(
    "If this is all you see, it looks like you are all set! Nice job coding up Pearson's correlation coefficient!"
)
```

`2.` Now that you have computed **Pearson's correlation coefficient**, use the below dictionary to identify statements that are true about **this** measure.

```
a = True
b = False
c = "We can't be sure."
pearson_dct = {
    "If when x increases, y always increases, Pearson's correlation will be always be 1.": b,
    "If when x increases by 1, y always increases by 3, Pearson's correlation will always be 1.": a,
    "If when x increases by 1, y always decreases by 5, Pearson's correlation will always be -1.": a,
    "If when x increases by 1, y increases by 3 times x, Pearson's correlation will always be 1.": b,
}

t.sim_2_sol(pearson_dct)
```

### Spearman's Correlation

Now, let's look at **Spearman's correlation coefficient**. Spearman's correlation is what is known as a [non-parametric](https://en.wikipedia.org/wiki/Nonparametric_statistics) statistic, which is a statistic whose distribution doesn't depend on parameters (statistics that follow normal distributions or binomial distributions are examples of parametric statistics).

Frequently non-parametric statistics are based on the ranks of data rather than the original values collected. This happens to be the case with Spearman's correlation coefficient, which is calculated similarly to Pearson's correlation. However, instead of using the raw data, we use the rank of each value.

You can quickly change from the raw data to the ranks using the **.rank()** method as shown here:

```
print(
    "The ranked values for the variable x1 are: {}".format(
        np.array(play_data["x1"].rank())
    )
)
print(
    "The raw data values for the variable x1 are: {}".format(np.array(play_data["x1"]))
)
```

If we map each of our data to ranked data values as shown above:

$$\textbf{x} \rightarrow \textbf{x}^{r}$$
$$\textbf{y} \rightarrow \textbf{y}^{r}$$

Here, we let the **r** indicate these are ranked values (this is not raising any value to the power of r).
Then we compute Spearman's correlation coefficient as:

$$SCORR(\textbf{x}, \textbf{y}) = \frac{\sum\limits_{i=1}^{n}(x^{r}_i - \bar{x}^{r})(y^{r}_i - \bar{y}^{r})}{\sqrt{\sum\limits_{i=1}^{n}(x^{r}_i-\bar{x}^{r})^2}\sqrt{\sum\limits_{i=1}^{n}(y^{r}_i-\bar{y}^{r})^2}} $$

where

$$\bar{x}^r = \frac{1}{n}\sum\limits_{i=1}^{n}x^r_i$$

`3.` Write a function that takes in two vectors and returns the Spearman correlation coefficient. You can then compare your answer to the built in function in scipy stats by using the assert statements in the following cell.

```
def corr_spearman(x, y):
    """
    INPUT
    x - an array of matching length to array y
    y - an array of matching length to array x
    OUTPUT
    corr - the spearman correlation coefficient for comparing x and y
    """
    x = x.rank()
    y = y.rank()
    numerator = (x - np.mean(x)) @ (y - np.mean(y))
    denominator = np.sqrt(np.sum((x - np.mean(x)) ** 2)) * np.sqrt(
        np.sum((y - np.mean(y)) ** 2)
    )
    corr = numerator / denominator
    return corr


# This cell will test your function against the built in scipy function
assert (
    corr_spearman(play_data["x1"], play_data["x2"])
    == spearmanr(play_data["x1"], play_data["x2"])[0]
), "Oops! The correlation between the first two columns should be 0, but your function returned {}.".format(
    corr_spearman(play_data["x1"], play_data["x2"])
)

assert (
    round(corr_spearman(play_data["x1"], play_data["x3"]), 2)
    == spearmanr(play_data["x1"], play_data["x3"])[0]
), "Oops! The correlation between the first and third columns should be {}, but your function returned {}.".format(
    spearmanr(play_data["x1"], play_data["x3"])[0],
    corr_spearman(play_data["x1"], play_data["x3"]),
)

assert round(corr_spearman(play_data["x3"], play_data["x4"]), 2) == round(
    spearmanr(play_data["x3"], play_data["x4"])[0], 2
), "Oops!
The correlation between the third and fourth columns should be {}, but your function returned {}.".format(
    spearmanr(play_data["x3"], play_data["x4"])[0],
    corr_spearman(play_data["x3"], play_data["x4"]),
)

print(
    "If this is all you see, it looks like you are all set! Nice job coding up Spearman's correlation coefficient!"
)

a = True
b = False
c = "We can't be sure."

spearman_dct = {
    "If when x increases, y always increases, Spearman's correlation will always be 1.": a,
    "If when x increases by 1, y always increases by 3, Spearman's correlation will always be 1.": a,
    "If when x increases by 1, y always decreases by 5, Spearman's correlation will always be -1.": a,
    "If when x increases by 1, y increases by 3 times x, Spearman's correlation will always be 1.": a,
}

t.sim_4_sol(spearman_dct)
```

### Kendall's Tau

Kendall's tau is quite similar to Spearman's correlation coefficient. Both of these measures are nonparametric measures of a relationship. Specifically, both Spearman's and Kendall's coefficients are calculated based on ranked data rather than the raw data.

Similar to both of the previous measures, Kendall's Tau is always between -1 and 1, where -1 suggests a strong, negative relationship between two variables and 1 suggests a strong, positive relationship between two variables.

Though Spearman's and Kendall's measures are very similar, there are statistical advantages to choosing Kendall's measure: Kendall's Tau has smaller variability when using larger sample sizes. However, Spearman's measure is more computationally efficient, as Kendall's Tau is O(n^2) and Spearman's correlation is O(n log(n)). You can find more on this topic in [this thread](https://www.researchgate.net/post/Does_Spearmans_rho_have_any_advantage_over_Kendalls_tau).

Let's take a closer look at exactly how this measure is calculated.
Again, we want to map our data to ranks:

$$\textbf{x} \rightarrow \textbf{x}^{r}$$
$$\textbf{y} \rightarrow \textbf{y}^{r}$$

Then we calculate Kendall's Tau as:

$$TAU(\textbf{x}, \textbf{y}) = \frac{2}{n(n -1)}\sum_{i < j}sgn(x^r_i - x^r_j)sgn(y^r_i - y^r_j)$$

Where $sgn$ takes the sign associated with the difference in the ranked values. An alternative way to write

$$sgn(x^r_i - x^r_j)$$

is in the following way:

$$
\begin{cases}
-1  & x^r_i < x^r_j \\
0  & x^r_i = x^r_j \\
1  & x^r_i > x^r_j
\end{cases}
$$

Therefore the possible results of

$$sgn(x^r_i - x^r_j)sgn(y^r_i - y^r_j)$$

are only 1, -1, or 0, which are summed to give an idea of the proportion of times the ranks of **x** and **y** move in the same direction.

`5.` Write a function that takes in two vectors and returns Kendall's Tau. You can then compare your answer to the built in function in scipy stats by using the assert statements in the following cell.

```
def kendalls_tau(x, y):
    """
    INPUT
    x - an array of matching length to array y
    y - an array of matching length to array x
    OUTPUT
    tau - the kendall's tau for comparing x and y
    """
    x = x.rank()
    y = y.rank()
    normalization = 2 / (len(x) * (len(x) - 1))
    # All index pairs (i, j) with i < j
    indices = filter(lambda pair: pair[0] < pair[1], product(range(len(x)), range(len(y))))
    indices_i, indices_j = map(list, zip(*indices))
    tau = np.sum(
        np.sign(x.iloc[indices_i].values - x.iloc[indices_j].values)
        * np.sign(y.iloc[indices_i].values - y.iloc[indices_j].values)
    )
    return tau * normalization

%%timeit
kendalls_tau(play_data["x1"], play_data["x2"])

# This cell will test your function against the built in scipy function
assert (
    kendalls_tau(play_data["x1"], play_data["x2"])
    == kendalltau(play_data["x1"], play_data["x2"])[0]
), "Oops!
The correlation between the first two columns should be 0, but your function returned {}.".format(
    kendalls_tau(play_data["x1"], play_data["x2"])
)

assert (
    round(kendalls_tau(play_data["x1"], play_data["x3"]), 2)
    == kendalltau(play_data["x1"], play_data["x3"])[0]
), "Oops! The correlation between the first and third columns should be {}, but your function returned {}.".format(
    kendalltau(play_data["x1"], play_data["x3"])[0],
    kendalls_tau(play_data["x1"], play_data["x3"]),
)

assert round(kendalls_tau(play_data["x3"], play_data["x4"]), 2) == round(
    kendalltau(play_data["x3"], play_data["x4"])[0], 2
), "Oops! The correlation between the third and fourth columns should be {}, but your function returned {}.".format(
    kendalltau(play_data["x3"], play_data["x4"])[0],
    kendalls_tau(play_data["x3"], play_data["x4"]),
)

print(
    "If this is all you see, it looks like you are all set! Nice job coding up Kendall's Tau!"
)
```

`6.` Use your functions (and/or your knowledge of each of the above coefficients) to accurately identify each of the below statements as True or False.

**Note:** There may be some rounding differences due to the way numbers are stored, so it is recommended that you consider comparisons to 4 or fewer decimal places.

```
a = True
b = False
c = "We can't be sure."

corr_comp_dct = {
    "For all columns of play_data, Spearman and Kendall's measures match.": a,
    "For all columns of play_data, Spearman and Pearson's measures match.": b,
    "For all columns of play_data, Pearson and Kendall's measures match.": b,
}

t.sim_6_sol(corr_comp_dct)
```

### Distance Measures

Each of the above measures is a measure of correlation. Similarly, there are distance measures (of which there are many). [This is a great article](http://dataaspirant.com/2015/04/11/five-most-popular-similarity-measures-implementation-in-python/) on some popular distance metrics.
In this notebook, we will be looking specifically at two of these measures:

1. Euclidean Distance
2. Manhattan Distance

Unlike the three measures you built functions for above, these two measures take on values between 0 and potentially infinity. Values closer to 0 imply that two vectors are more similar to one another; the larger these values become, the more dissimilar the two vectors are.

Choosing one of these two `distance` metrics vs. one of the three `similarity` metrics above is often a matter of personal preference, audience, and data specificities. You will see in a bit a case where one of these measures (euclidean or manhattan distance) works better than Pearson's correlation coefficient.

### Euclidean Distance

Euclidean distance is the straight-line distance between two vectors. For two vectors **x** and **y**, we can compute this as:

$$ EUC(\textbf{x}, \textbf{y}) = \sqrt{\sum\limits_{i=1}^{n}(x_i - y_i)^2}$$

### Manhattan Distance

Different from euclidean distance, Manhattan distance is a 'city block' distance from one vector to another. Therefore, you can imagine this distance as a way to compute the distance between two points when you are not able to go through buildings. Note that there is no square root here; the distance is simply the sum of absolute differences:

$$ MANHATTAN(\textbf{x}, \textbf{y}) = \sum\limits_{i=1}^{n}|x_i - y_i|$$

Using each of the above, write a function for each to take two vectors and compute the euclidean and manhattan distances.

<img src="images/distances.png">

You can see in the above image that the **blue** line gives the **Manhattan** distance, while the **green** line gives the **Euclidean** distance between two points.

`7.` Use the below cell to complete a function for each distance metric. Then test your functions against the built in values using the cell below.
```
def eucl_dist(x, y):
    """
    INPUT
    x - an array of matching length to array y
    y - an array of matching length to array x
    OUTPUT
    euc - the euclidean distance between x and y
    """
    return np.sqrt(np.sum((x - y) ** 2))


def manhat_dist(x, y):
    """
    INPUT
    x - an array of matching length to array y
    y - an array of matching length to array x
    OUTPUT
    manhat - the manhattan distance between x and y
    """
    return np.sum(np.abs(x - y))


# Test your functions
assert h.test_eucl(play_data["x1"], play_data["x2"]) == eucl_dist(
    play_data["x1"], play_data["x2"]
)
assert h.test_eucl(play_data["x2"], play_data["x3"]) == eucl_dist(
    play_data["x2"], play_data["x3"]
)
assert h.test_manhat(play_data["x1"], play_data["x2"]) == manhat_dist(
    play_data["x1"], play_data["x2"]
)
assert h.test_manhat(play_data["x2"], play_data["x3"]) == manhat_dist(
    play_data["x2"], play_data["x3"]
)
```

### Final Note

It is worth noting that two vectors could be similar by metrics like the three at the top of the notebook, while being incredibly different by measures like these final two. Again, understanding your specific situation will assist in understanding whether your metric is appropriate.
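As a concrete illustration of that final note, the `x3` and `x4` columns of `play_data` increase together, so the rank-based measures call them perfectly related, yet the vectors sit far apart by both distance measures. A standalone sketch (re-declaring the two columns so it runs on its own):

```python
import numpy as np
from scipy.stats import spearmanr

x3 = np.array([1, 2, 3, 4, 5, 6, 7])
x4 = np.array([2, 5, 15, 27, 28, 30, 31])

# Both columns strictly increase, so Spearman's correlation is 1
rank_corr = spearmanr(x3, x4)[0]

# ...yet the vectors are far apart by both distance measures
euclidean = np.sqrt(np.sum((x3 - x4) ** 2))
manhattan = np.sum(np.abs(x3 - x4))
```

So a neighbor search based on correlation would call these two vectors identical twins, while a distance-based search would call them distant; which behavior you want depends on whether the *scale* of the values matters for your application.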
```
%run startup.py

%%javascript
$.getScript('./assets/js/ipython_notebook_toc.js')
```

# A Decision Tree of Observable Operators

## Part 1: NEW Observables.

> source: http://reactivex.io/documentation/operators.html#tree.
> (transcribed to RxPY 1.5.7, Py2.7 / 2016-12, Gunther Klessinger, [axiros](http://www.axiros.com))

**This tree can help you find the ReactiveX Observable operator you’re looking for.**

<h2 id="tocheading">Table of Contents</h2>
<div id="toc"></div>

## Usage

There are no behind-the-scenes imports or code except [`startup.py`](./edit/startup.py), which defines output helper functions, mainly:

- `rst, reset_start_time`: resets a global timer, in order to have use cases starting from 0.
- `subs(observable)`: subscribes to an observable, printing notifications with time, thread, value

All other code is explicitly given in the notebook. Since all initialisation of tools is in the first cell, you always have to run the first cell after ipython kernel restarts. **All other cells are autonomous.**

In the use case functions, in contrast to the official examples we simply use **`rand`** quite often (mapped to `randint(0, 100)`), to demonstrate when/how often observable sequences are generated and when their result is buffered for various subscribers. *When in doubt then run the cell again, you might have been "lucky" and got the same random.*

### RxJS

The (bold printed) operator functions are linked to the [official documentation](http://reactivex.io/documentation/operators.html#tree) and created roughly analogous to the **RxJS** examples. The rest of the TOC lines links to anchors within the notebooks.

### Output

When the output is not in marble format we display it like so:

```
new subscription on stream 276507289
3.4 M [next] 1.4: {'answer': 42}
3.5 T1 [cmpl] 1.6: fin
```

where the lines are synchronously `print`ed as they happen. "M" and "T1" would be thread names ("M" is main thread).
For each use case, `reset_start_time()` (alias `rst`) resets a global timer to 0; we then show each event's offset to that timer in *milliseconds* (one decimal place), along with its offset to the start of the stream subscription. In the example above, 3.4 and 3.5 are milliseconds since the global counter reset, while 1.4 and 1.6 are offsets to the start of the subscription.

# I want to create a **NEW** Observable...

## ... that emits a particular item: **[just](http://reactivex.io/documentation/operators/just.html)**

```
reset_start_time(O.just)
stream = O.just({'answer': rand()})
disposable = subs(stream)
sleep(0.5)
disposable = subs(stream)  # same answer
# all stream ops work, its a real stream:
disposable = subs(stream.map(lambda x: x.get('answer', 0) * 2))
```

## ..that was returned from a function *called at subscribe-time*: **[start](http://reactivex.io/documentation/operators/start.html)**

```
print('There is a little API difference to RxJS, see Remarks:\n')
rst(O.start)

def f():
    log('function called')
    return rand()

stream = O.start(func=f)
d = subs(stream)
d = subs(stream)

header("Exceptions are handled correctly (an observable should never except):")

def breaking_f():
    return 1 / 0

stream = O.start(func=breaking_f)
d = subs(stream)
d = subs(stream)

# startasync: only in python3 and possibly here(?) http://www.tornadoweb.org/en/stable/concurrent.html#tornado.concurrent.Future
#stream = O.start_async(f)
#d = subs(stream)
```

## ..that was returned from an Action, Callable, Runnable, or something of that sort, called at subscribe-time: **[from](http://reactivex.io/documentation/operators/from.html)**

```
rst(O.from_iterable)

def f():
    log('function called')
    return rand()

# aliases: O.from_, O.from_list
# 1.: From a tuple:
stream = O.from_iterable((1, 2, rand()))
d = subs(stream)
# d = subs(stream)  # same result

# 2.
from a generator gen = (rand() for j in range(3)) stream = O.from_iterable(gen) d = subs(stream) rst(O.from_callback) # in my words: In the on_next of the subscriber you'll have the original arguments, # potentially objects, e.g. user original http requests. # i.e. you could merge those with the result stream of a backend call to # a webservice or db and send the request.response back to the user then. def g(f, a, b): f(a, b) log('called f') stream = O.from_callback(lambda a, b, f: g(f, a, b))('fu', 'bar') d = subs(stream.delay(200)) # d = subs(stream.delay(200)) # does NOT work ``` ## ...after a specified delay: **[timer](http://reactivex.io/documentation/operators/timer.html)** ``` rst() # start a stream of 0, 1, 2, .. after 200 ms, with a delay of 100 ms: stream = O.timer(200, 100).time_interval()\ .map(lambda x: 'val:%s dt:%s' % (x.value, x.interval))\ .take(3) d = subs(stream, name='observer1') # intermix directly with another one d = subs(stream, name='observer2') ``` ## ...that emits a sequence of items repeatedly: **[repeat](http://reactivex.io/documentation/operators/repeat.html) ** ``` rst(O.repeat) # repeat is over *values*, not function calls. Use generate or create for function calls! 
subs(O.repeat({'rand': time.time()}, 3))

header('do while:')
l = []

def condition(x):
    l.append(1)
    return True if len(l) < 2 else False

stream = O.just(42).do_while(condition)
d = subs(stream)
```

## ...from scratch, with custom logic and cleanup (calling a function again and again): **[create](http://reactivex.io/documentation/operators/create.html)**

```
rx = O.create
rst(rx)

def f(obs):
    # this function is called for every observer
    obs.on_next(rand())
    obs.on_next(rand())
    obs.on_completed()
    def cleanup():
        log('cleaning up...')
    return cleanup

stream = O.create(f).delay(200)  # the delay causes the cleanup called before the subs gets the vals
d = subs(stream)
d = subs(stream)

sleep(0.5)
rst(title='Exceptions are handled nicely')
l = []

def excepting_f(obs):
    for i in range(3):
        l.append(1)
        obs.on_next('%s %s (observer hash: %s)' % (i, 1. / (3 - len(l)), hash(obs)))
    obs.on_completed()

stream = O.create(excepting_f)
d = subs(stream)
d = subs(stream)

rst(title='Feature or Bug?')
print('(where are the first two values?)')
l = []

def excepting_f(obs):
    for i in range(3):
        l.append(1)
        obs.on_next('%s %s (observer hash: %s)' % (i, 1. / (3 - len(l)), hash(obs)))
    obs.on_completed()

stream = O.create(excepting_f).delay(100)
d = subs(stream)
d = subs(stream)
# I think it's an (amazing) feature, preventing processing of the results of later(!)
failing functions

rx = O.generate
rst(rx)
"""The basic form of generate takes four parameters:

- the first item to emit
- a function to test an item to determine whether to emit it (true) or terminate the Observable (false)
- a function to generate the next item to test and emit based on the value of the previous item
- a function to transform items before emitting them
"""

def generator_based_on_previous(x):
    return x + 1.1

def doubler(x):
    return 2 * x

d = subs(rx(0, lambda x: x < 4, generator_based_on_previous, doubler))

rx = O.generate_with_relative_time
rst(rx)
stream = rx(1, lambda x: x < 4, lambda x: x + 1, lambda x: x, lambda t: 100)
d = subs(stream)
```

## ...for each observer that subscribes OR according to a condition at subscription time: **[defer / if_then](http://reactivex.io/documentation/operators/defer.html)**

```
rst(O.defer)
# plural! (unique per subscription)
streams = O.defer(lambda: O.just(rand()))
d = subs(streams)
d = subs(streams)  # gets other values - created by subscription!

# evaluating a condition at subscription time in order to decide which of two streams to take.
rst(O.if_then) cond = True def should_run(): return cond streams = O.if_then(should_run, O.return_value(43), O.return_value(56)) d = subs(streams) log('condition will now evaluate falsy:') cond = False streams = O.if_then(should_run, O.return_value(43), O.return_value(rand())) d = subs(streams) d = subs(streams) ``` ## ...that emits a sequence of integers: **[range](http://reactivex.io/documentation/operators/range.html) ** ``` rst(O.range) d = subs(O.range(0, 3)) ``` ### ...at particular intervals of time: **[interval](http://reactivex.io/documentation/operators/interval.html) ** (you can `.publish()` it to get an easy "hot" observable) ``` rst(O.interval) d = subs(O.interval(100).time_interval()\ .map(lambda x, v: '%(interval)s %(value)s' \ % ItemGetter(x)).take(3)) ``` ### ...after a specified delay (see timer) ## ...that completes without emitting items: **[empty](http://reactivex.io/documentation/operators/empty-never-throw.html) ** ``` rst(O.empty) d = subs(O.empty()) ``` ## ...that does nothing at all: **[never](http://reactivex.io/documentation/operators/empty-never-throw.html) ** ``` rst(O.never) d = subs(O.never()) ``` ## ...that excepts: **[throw](http://reactivex.io/documentation/operators/empty-never-throw.html) ** ``` rst(O.on_error) d = subs(O.on_error(ZeroDivisionError)) ```
#### Copyright 2017 Google LLC. ``` # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. %matplotlib inline ``` # Quick Introduction to pandas **Learning Objectives:** * Gain an introduction to the `DataFrame` and `Series` data structures of the *pandas* library * Access and manipulate data within a `DataFrame` and `Series` * Import CSV data into a *pandas* `DataFrame` * Reindex a `DataFrame` to shuffle data [*pandas*](http://pandas.pydata.org/) is a column-oriented data analysis API. It's a great tool for handling and analyzing input data, and many ML frameworks support *pandas* data structures as inputs. Although a comprehensive introduction to the *pandas* API would span many pages, the core concepts are fairly straightforward, and we'll present them below. For a more complete reference, the [*pandas* docs site](http://pandas.pydata.org/pandas-docs/stable/index.html) contains extensive documentation and many tutorials. ## Basic Concepts The following line imports the *pandas* API and prints the API version: ``` import pandas as pd pd.__version__ ``` The primary data structures in *pandas* are implemented as two classes: * **`DataFrame`**, which you can imagine as a relational data table, with rows and named columns. * **`Series`**, which is a single column. A `DataFrame` contains one or more `Series` and a name for each `Series`. The data frame is a commonly used abstraction for data manipulation. 
Similar implementations exist in [Spark](https://spark.apache.org/) and [R](https://www.r-project.org/about.html). One way to create a `Series` is to construct a `Series` object. For example: ``` pd.Series(['San Francisco', 'San Jose', 'Sacramento']) ``` `DataFrame` objects can be created by passing a `dict` mapping `string` column names to their respective `Series`. If the `Series` don't match in length, missing values are filled with special [NA/NaN](http://pandas.pydata.org/pandas-docs/stable/missing_data.html) values. Example: ``` city_names = pd.Series(['San Francisco', 'San Jose', 'Sacramento']) population = pd.Series([852469, 1015785, 485199]) pd.DataFrame({ 'City name': city_names, 'Population': population }) ``` But most of the time, you load an entire file into a `DataFrame`. The following example loads a file with California housing data. Run the following cell to load the data and create feature definitions: ``` california_housing_dataframe = pd.read_csv("https://storage.googleapis.com/mledu-datasets/california_housing_train.csv", sep=",") california_housing_dataframe.describe() ``` The example above used `DataFrame.describe` to show interesting statistics about a `DataFrame`. Another useful function is `DataFrame.head`, which displays the first few records of a `DataFrame`: ``` california_housing_dataframe.head() ``` Another powerful feature of *pandas* is graphing. 
For example, `DataFrame.hist` lets you quickly study the distribution of values in a column: ``` california_housing_dataframe.hist('housing_median_age') ``` ## Accessing Data You can access `DataFrame` data using familiar Python dict/list operations: ``` cities = pd.DataFrame({ 'City name': city_names, 'Population': population }) print(type(cities['City name'])) cities['City name'] print(type(cities['City name'][1])) cities['City name'][1] print(type(cities[0:2])) cities[0:2] ``` In addition, *pandas* provides an extremely rich API for advanced [indexing and selection](http://pandas.pydata.org/pandas-docs/stable/indexing.html) that is too extensive to be covered here. ## Manipulating Data You may apply Python's basic arithmetic operations to `Series`. For example: ``` population / 1000. ``` [NumPy](http://www.numpy.org/) is a popular toolkit for scientific computing. *pandas* `Series` can be used as arguments to most NumPy functions: ``` import numpy as np np.log(population) ``` For more complex single-column transformations, you can use `Series.apply`. Like the Python [map function](https://docs.python.org/2/library/functions.html#map), `Series.apply` accepts as an argument a [lambda function](https://docs.python.org/2/tutorial/controlflow.html#lambda-expressions), which is applied to each value. The example below creates a new `Series` that indicates whether `population` is over one million: ``` population.apply(lambda val: val > 1000000) ``` Modifying `DataFrames` is also straightforward. For example, the following code adds two `Series` to an existing `DataFrame`: ``` cities['Area square miles'] = pd.Series([46.87, 176.53, 97.92]) cities['Population density'] = cities['Population'] / cities['Area square miles'] cities ``` ## Exercise #1 Modify the `cities` table by adding a new boolean column that is True if and only if *both* of the following are True: * The city is named after a saint. * The city has an area greater than 50 square miles. 
**Note:** Boolean `Series` are combined using the bitwise, rather than the traditional boolean, operators. For example, when performing *logical and*, use `&` instead of `and`. **Hint:** "San" in Spanish means "saint." ``` cities['check'] = cities['City name'].apply(lambda x: x[:3]=='San') & \ cities['Area square miles'].apply(lambda x: x > 50) cities ``` ### Solution Click below for a solution. ``` cities['Is wide and has saint name'] = (cities['Area square miles'] > 50) & cities['City name'].apply(lambda name: name.startswith('San')) cities ``` ## Indexes Both `Series` and `DataFrame` objects also define an `index` property that assigns an identifier value to each `Series` item or `DataFrame` row. By default, at construction, *pandas* assigns index values that reflect the ordering of the source data. Once created, the index values are stable; that is, they do not change when data is reordered. ``` city_names.index cities.index ``` Call `DataFrame.reindex` to manually reorder the rows. For example, the following has the same effect as sorting by city name: ``` cities.reindex([2, 0, 1]) ``` Reindexing is a great way to shuffle (randomize) a `DataFrame`. In the example below, we take the index, which is array-like, and pass it to NumPy's `random.permutation` function, which shuffles its values in place. Calling `reindex` with this shuffled array causes the `DataFrame` rows to be shuffled in the same way. Try running the following cell multiple times! ``` cities.reindex(np.random.permutation(cities.index)) ``` For more information, see the [Index documentation](http://pandas.pydata.org/pandas-docs/stable/indexing.html#index-objects). ## Exercise #2 The `reindex` method allows index values that are not in the original `DataFrame`'s index values. Try it and see what happens if you use such values! Why do you think this is allowed? ``` cities.reindex([2, -1, 8]) ``` ### Solution Click below for the solution. 
If your `reindex` input array includes values not in the original `DataFrame` index values, `reindex` will add new rows for these "missing" indices and populate all corresponding columns with `NaN` values: ``` cities.reindex([0, 4, 5, 2]) ``` This behavior is desirable because indexes are often strings pulled from the actual data (see the [*pandas* reindex documentation](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html) for an example in which the index values are browser names). In this case, allowing "missing" indices makes it easy to reindex using an external list, as you don't have to worry about sanitizing the input.
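To make the "missing" index behavior concrete, a small sketch using the same `cities` data constructed earlier in this notebook — any label not present in the original index comes back as a row of `NaN` values:

```python
import numpy as np
import pandas as pd

cities = pd.DataFrame({'City name': ['San Francisco', 'San Jose', 'Sacramento'],
                       'Population': [852469, 1015785, 485199]})
# Index 4 does not exist in cities, so its row is filled with NaN
out = cities.reindex([2, 0, 4])
print(out)
```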
github_jupyter
## Special Topics - Introduction to Deep Learning #### Prof. Thomas da Silva Paula ### Feature extraction example * Using Keras * Using VGG-16 ## Imports ``` import os import numpy as np import matplotlib.pyplot as plt from keras.preprocessing import image from keras.applications.vgg16 import VGG16 from keras.applications.vgg16 import preprocess_input plt.rcParams['figure.figsize'] = [15, 5] ``` ## Creating the model ``` model = VGG16(weights='imagenet', include_top=False, pooling='avg', input_shape=(224, 224, 3)) model.summary() ``` ### Feature extraction example ``` img_path = '../../sample_images/sneakers.png' img = image.load_img(img_path, target_size=(224, 224)) plt.imshow(img) ``` We need to prepare the image using the same preprocessing steps used to train the model. Fortunately, Keras has methods to help us out. ``` x = image.img_to_array(img) x = np.expand_dims(x, axis=0) x = preprocess_input(x) features = model.predict(x) ``` Checking shape and type ``` print(features.shape, features.dtype) ``` Printing features ``` from pprint import pprint pprint(features) print(features) ``` ### Features can be used for comparison ``` def load_and_extract_features(img_path): # Loading rgba to show the image properly img = image.load_img(img_path, color_mode='rgba') plt.imshow(img) # Loading rgb with expected input size img = image.load_img(img_path, target_size=(224, 224)) x = image.img_to_array(img) x = np.expand_dims(x, axis=0) x = preprocess_input(x) features = model.predict(x) return features features_tshirt1 = load_and_extract_features('../../sample_images/tshirt.png') features_tshirt2 = load_and_extract_features('../../sample_images/tshirt2.png') features_pug = load_and_extract_features('../../sample_images/pug.png') features_pug2 = load_and_extract_features('../../sample_images/pug2.png') features_sneakers = load_and_extract_features('../../sample_images/sneakers.png') ``` ### Computing distance between features We can then compute the distance between these
features and see whether given images are more similar to each other #### T-shirt 1 vs Pug 1 ``` from scipy.spatial.distance import cosine distance = cosine(features_tshirt1, features_pug) print(distance) ``` #### T-shirt 2 vs Pug 2 ``` distance = cosine(features_tshirt2, features_pug2) print(distance) ``` #### Pug 1 vs Sneakers ``` distance = cosine(features_pug, features_sneakers) print(distance) ``` #### T-shirt 1 vs T-shirt 2 ``` distance = cosine(features_tshirt1, features_tshirt2) print(distance) ``` #### Pug 1 vs Pug 2 ``` distance = cosine(features_pug, features_pug2) print(distance) distance = cosine(features_pug, features_pug) print(distance) ``` ### We can also use features to train classifiers We'll see how it works in the assignment :)
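Under the hood, `scipy.spatial.distance.cosine` returns one minus the cosine similarity of the (flattened) feature vectors, so identical vectors give 0 and opposite vectors give 2. A NumPy-only sketch of the same computation (illustrative, not the SciPy implementation):

```python
import numpy as np

def cosine_distance(a, b):
    # cosine distance = 1 - cosine similarity of the flattened vectors
    a, b = np.ravel(a), np.ravel(b)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

v = np.array([1.0, 2.0, 3.0])
print(cosine_distance(v, v))    # identical vectors -> ~0.0
print(cosine_distance(v, -v))   # opposite vectors  -> ~2.0
```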
github_jupyter
``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import warnings sns.set_style('whitegrid') warnings.filterwarnings('ignore') df = pd.read_csv('../input/bigmarket/bigmarket.csv') df.head() df.shape df.info() ``` **# checking for missing values** ``` df.isnull().sum() ``` Handling Missing Values: MEAN -> AVERAGE, MODE -> MOST FREQUENT VALUE ``` df['Item_Weight'].mean() ``` # filling missing values in "Item_Weight" with MEAN VALUE ``` df['Item_Weight'].fillna(df['Item_Weight'].mean(), inplace=True) # Mode of "Outlet_Size" column df['Outlet_Size'].mode() # filling the missing values in "Outlet_Size" column with Mode mode_of_Outlet_size = df.pivot_table(values='Outlet_Size', columns='Outlet_Type', aggfunc=(lambda x: x.mode()[0])) print(mode_of_Outlet_size) miss_values = df['Outlet_Size'].isnull() print(miss_values) df.loc[miss_values, 'Outlet_Size'] = df.loc[miss_values, 'Outlet_Type'].apply(lambda x:mode_of_Outlet_size[x]) #checking for missing values df.isnull().sum() ``` **Data Analysis** ``` df.describe() ``` **# Item_Weight distribution** ``` plt.figure(figsize=(6,6)) sns.distplot(df['Item_Weight']) plt.show() ``` **# Item Visibility distribution** ``` plt.figure(figsize=(6,6)) sns.distplot(df['Item_Visibility']) plt.show() ``` **# Item MRP distribution** ``` plt.figure(figsize=(6,6)) sns.distplot(df['Item_MRP']) plt.show() ``` **# Item_Outlet_Sales distribution** ``` plt.figure(figsize=(6,6)) sns.distplot(df['Item_Outlet_Sales']) plt.show() ``` **# Outlet_Establishment_Year column** ``` plt.figure(figsize=(6,6)) sns.countplot(x='Outlet_Establishment_Year', data=df) plt.show() ``` **# Item_Fat_Content column** ``` plt.figure(figsize=(6,6)) sns.countplot(x='Item_Fat_Content', data=df) plt.show() ``` **# Item_Type column** ``` plt.figure(figsize=(30,6)) sns.countplot(x='Item_Type', data=df) plt.show() ``` **# Outlet_Size column** ``` plt.figure(figsize=(6,6)) sns.countplot(x='Outlet_Size', data=df) plt.show() ``` **Data
Pre-Processing** ``` df.head() df['Item_Fat_Content'].value_counts() df.replace({'Item_Fat_Content':{'low fat':'Low Fat', 'LF':'Low Fat', 'reg':'Regular'}},inplace=True) df['Item_Fat_Content'].value_counts() ``` **Label Encoding** ``` from sklearn.preprocessing import LabelEncoder encoder = LabelEncoder() df['Item_Identifier'] = encoder.fit_transform(df['Item_Identifier']) df['Item_Fat_Content'] = encoder.fit_transform(df['Item_Fat_Content']) df['Item_Type'] = encoder.fit_transform(df['Item_Type']) df['Outlet_Identifier'] = encoder.fit_transform(df['Outlet_Identifier']) df['Outlet_Size'] = encoder.fit_transform(df['Outlet_Size']) df['Outlet_Location_Type'] = encoder.fit_transform(df['Outlet_Location_Type']) df['Outlet_Type'] = encoder.fit_transform(df['Outlet_Type']) df.head() ``` **Splitting features and Target** ``` X = df.iloc[:,:-1].values y = df.iloc[:,-1].values print(X) print(y) ``` **Splitting into Train and Test Sets** ``` from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.2, random_state=0) print(X.shape, X_train.shape, X_test.shape) ``` **Training Algorithm** ``` from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense model = Sequential() model.add(Dense(20, activation='relu')) model.add(Dense(10, activation='relu')) model.add(Dense(5, activation='relu')) model.add(Dense(1)) # output layer model.compile(optimizer='rmsprop', loss='mse') model.fit(x = X_train, y = y_train, epochs=50) model.summary() loss_df = pd.DataFrame(model.history.history) loss_df loss_df.plot() ``` **Model Evaluation** ``` test_eval = model.evaluate(X_test, y_test, verbose=0) print(test_eval) ``` **# model evaluation on train set** ``` train_eval = model.evaluate(X_train, y_train, verbose=0) print(train_eval) ``` **# Checking difference between train_eval and test_eval** ``` model_diff = train_eval - test_eval print(model_diff) ``` **# Prediction on train data** ```
train_prediction = model.predict(X_train) ``` **# Prediction on test data** ``` test_prediction = model.predict(X_test) print(train_prediction) print(test_prediction) ``` **# R Squared : R-squared measures the strength of the relationship between your model and the dependent variable on a convenient 0–100% scale** ``` from sklearn.metrics import r2_score r2_train = r2_score(y_train, train_prediction) print('R squared value of train data:', r2_train) r2_test = r2_score(y_test, test_prediction) print('R squared value of test data:',r2_test) ```
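As a sanity check on what `r2_score` reports, R² can also be computed by hand as one minus the ratio of the residual sum of squares to the total sum of squares. A NumPy-only sketch with small hypothetical data (not the notebook's sales data):

```python
import numpy as np

def r_squared(y_true, y_pred):
    # R^2 = 1 - (residual sum of squares / total sum of squares)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

y = np.array([3.0, 5.0, 7.0, 9.0])
print(r_squared(y, y))                           # perfect predictions -> 1.0
print(r_squared(y, np.full_like(y, y.mean())))   # mean-only predictor -> 0.0
```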
github_jupyter
``` import phys import phys.newton import phys.light import numpy as np import time import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D class ScatterDeleteStep2(phys.Step): def __init__(self, n, A): self.n = n self.A = A self.built = False def run(self, sim): if self.built != True: skip = phys.CLInput(name="photon_check", type="obj_action", code="if type(obj) != phys.light.PhotonObject:\n \t\t continue") d0, d1, d2 = tuple([phys.CLInput(name="d" + str(x), type="obj", obj_attr="dr[" + str(x) + "]") for x in range(0, 3)]) rand = phys.CLInput(name="rand", type="obj_def", obj_def="np.random.random()") A_, n_ = phys.CLInput(name="A", type="const", const_value=str(self.A)), phys.CLInput(name="n", type="const", const_value=str(self.n)) pht = phys.CLInput(name="pht", type="obj_track", obj_track="obj") res = phys.CLOutput(name="res", ctype="int") kernel = """ int gid = get_global_id(0); double norm = sqrt(pow(d0[gid], 2) + pow(d1[gid], 2) + pow(d2[gid], 2)); double pcoll = A * n * norm; if (pcoll >= rand[gid]){ // Mark for removal.
res[gid] = 1; } else { res[gid] = 0; } """ self.prog = phys.CLProgram(sim, "test", kernel) self.prog.prep_metadata = [skip, d0, d1, d2, rand, pht, A_, n_] self.prog.output_metadata = [res] self.prog.build_kernel() self.built = True out = self.prog.run() for idx, x in enumerate(out["res"]): if x == 1: sim.remove_obj(self.prog.pht[idx]) def new_sim(step, n): sim = phys.Simulation({"cl_on": True}) sim.add_objs(phys.light.generate_photons_from_E(np.linspace(phys.Measurement(5e-19, "J**1"), phys.Measurement(1e-18, "J**1"), 1000))) sim.exit = lambda cond: len(cond.objects) == 0 sim.add_step(0, phys.UpdateTimeStep(lambda s: phys.Measurement(np.double(0.001), "s**1"))) sim.add_step(1, phys.newton.NewtonianKinematicsStep()) A = np.double(0.001) n = np.double(0.001) sim.add_step(2, step(n, A)) sim.add_step(3, phys.light.ScatterMeasureStep(None, True)) return sim orig, new = [], [] ns = np.floor(10 ** np.linspace(2, 5, 9)) for i in ns: print("Testing old " + str(i)) o = new_sim(phys.light.ScatterDeleteStepReference, int(i)) o.start() o.join() orig.append(o.run_time) plt.plot(o.ts, [x[1] for x in o.steps[3].data], label="n") plt.ylabel("Photons") plt.xlabel("Time (s)") plt.title("Photon Count vs. Time (s) w/ old, N = " + str(i)) plt.show() print("Testing new " + str(i)) n = new_sim(ScatterDeleteStep2, int(i)) n.start() n.join() new.append(n.run_time) plt.plot(ns, new, label="New") plt.plot(ns, orig, label="Original") plt.legend() plt.xlabel("$N_\gamma$") plt.ylabel("Time (s)") plt.title("$N_\gamma$ vs. Time") plt.show() o.ts[0].size ```
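The kernel above marks a photon for removal when `A * n * |dr|` meets or exceeds a uniform random draw. The same per-photon test can be sketched in vectorized NumPy (illustrative constants and array shapes; this is not the `phys` library, just the collision logic):

```python
import numpy as np

rng = np.random.default_rng(0)
A, n = 1e-3, 1e-3                  # cross-section and number density (illustrative)
dr = rng.normal(size=(1000, 3))    # per-step displacement of each photon
norm = np.linalg.norm(dr, axis=1)  # step length, as in the kernel's sqrt(d0^2 + d1^2 + d2^2)
p_coll = A * n * norm              # collision probability for this step
remove = p_coll >= rng.random(1000)  # same test as the kernel's "pcoll >= rand"
survivors = np.count_nonzero(~remove)
print(survivors, "photons survive this step")
```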
github_jupyter
<table style="float:left; border:none"> <tr style="border:none"> <td style="border:none"> <a href="https://bokeh.org/"> <img src="assets/bokeh-transparent.png" style="width:50px" > </a> </td> <td style="border:none"> <h1>Bokeh Tutorial</h1> </td> </tr> </table> <div style="float:right;"><h2>A4. Additional resources</h2></div> # Additional resources There are lots of things we haven't had time to tell you about. In general, to learn more about Bokeh, the following resources will hopefully be helpful: ## Documentation ##### Main Page - https://bokeh.org The main front page, with links to many other resources --- ##### Documentation - https://docs.bokeh.org/en/latest The documentation toplevel page --- ##### User's Guide - https://docs.bokeh.org/en/latest/docs/user_guide.html The user's guide has many topic-oriented subsections, for example "Plotting with Basic Glyphs", "Configuring Plot Tools", or "Adding Interactions". Each user's guide section typically has example code and corresponding live plots that demonstrate how to accomplish various tasks. --- ##### Gallery - https://docs.bokeh.org/en/latest/docs/gallery.html One of the best ways to learn is to find an existing example similar to what you want, and to study it and then use it as a starting place. Starting from a known working example can often save time and effort when getting started by allowing you to make small, incremental changes and observing the outcome. The Bokeh docs have a large thumbnail gallery that links to live plots and apps with corresponding code. --- ##### Reference Guide - https://docs.bokeh.org/en/latest/docs/reference.html If you are already familiar with Bokeh and have questions about specific details of the objects you are already using, the reference guide is a good resource for finding information. The reference guide is automatically generated from the project source code and is a complete resource for all Bokeh models and their properties.
--- ##### Issue tracker - https://github.com/bokeh/bokeh/issues The GitHub issue tracker is the place to go to submit ***bug reports*** and ***feature requests***. It is NOT the right place for general support questions (see the *General Community Support* links below). ## Example Apps and Scripts In addition to all the live gallery examples, Bokeh has many additional scripts and apps that can be instructive to study and emulate. ##### Examples folder - https://github.com/bokeh/bokeh/tree/master/examples/ The `examples` directory has many subfolders dedicated to different kinds of topics. Some of the highlights are: * `app` - example Bokeh apps, run with "`bokeh serve`" * `howto` - some examples arranged around specific topics such as layout or notebook comms * `models` - examples that demonstrate the low-level `bokeh.models` API * `plotting` - a large collection of examples using the `bokeh.plotting` interface * `webgl` - some examples demonstrating WebGL usage --- ## General Community Support Bokeh has a large and growing community. The best place to go for general support questions (either to ask, or to answer!) is https://discourse.bokeh.org ## Contributor Resources Bokeh has a small but growing developer community. We are always looking to have new contributors. Below are some resources for people involved in working on Bokeh itself. ##### Source code - https://github.com/bokeh/bokeh Go here to clone the GitHub repo (in order to contribute or get the examples), or to submit issues to the issue tracker --- ##### Issue tracker - https://github.com/bokeh/bokeh/issues The GitHub issue tracker is the place to go to submit ***bug reports*** and ***feature requests***. For general support questions, see the *General Community Support* links above. --- #### Developer's Guide - https://bokeh.pydata.org/en/latest/docs/dev_guide.html If you are interested in becoming a contributor to Bokeh, the developer's guide is the place to start.
It has information about getting a development environment set up, the library architecture, writing and running tests, and more. --- #### Dev Channel in Discourse - https://discourse.bokeh.org/c/development Come here for assistance with any questions about developing Bokeh itself. # Next Section This is the last section of the appendices. To go back to the overview, click [here](00%20-%20Introduction%20and%20Setup.ipynb).
github_jupyter
``` # code by Tae Hwan Jung @graykode import numpy as np import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F class TextCNN(nn.Module): def __init__(self): super(TextCNN, self).__init__() self.num_filters_total = num_filters * len(filter_sizes) self.W = nn.Embedding(vocab_size, embedding_size) self.Weight = nn.Linear(self.num_filters_total, num_classes, bias=False) self.Bias = nn.Parameter(torch.ones([num_classes])) self.filter_list = nn.ModuleList([nn.Conv2d(1, num_filters, (size, embedding_size)) for size in filter_sizes]) def forward(self, X): embedded_chars = self.W(X) # [batch_size, sequence_length, embedding_size] embedded_chars = embedded_chars.unsqueeze(1) # add channel(=1) [batch, channel(=1), sequence_length, embedding_size] pooled_outputs = [] for i, conv in enumerate(self.filter_list): # conv : [input_channel(=1), output_channel(=3), (filter_height, filter_width), bias_option] h = F.relu(conv(embedded_chars)) # mp : ((filter_height, filter_width)) mp = nn.MaxPool2d((sequence_length - filter_sizes[i] + 1, 1)) # pooled : [batch_size(=6), output_height(=1), output_width(=1), output_channel(=3)] pooled = mp(h).permute(0, 3, 2, 1) pooled_outputs.append(pooled) h_pool = torch.cat(pooled_outputs, len(filter_sizes)) # [batch_size(=6), output_height(=1), output_width(=1), output_channel(=3) * 3] h_pool_flat = torch.reshape(h_pool, [-1, self.num_filters_total]) # [batch_size(=6), output_height * output_width * (output_channel * 3)] model = self.Weight(h_pool_flat) + self.Bias # [batch_size, num_classes] return model if __name__ == '__main__': embedding_size = 2 # embedding size sequence_length = 3 # sequence length num_classes = 2 # number of classes filter_sizes = [2, 2, 2] # n-gram windows num_filters = 3 # number of filters # 3 words sentences (=sequence_length is 3) sentences = ["i love you", "he loves me", "she likes baseball", "i hate you", "sorry for that", "this is awful"] labels = [1, 1, 1, 0, 0, 0] # 1 is good,
0 is not good. word_list = " ".join(sentences).split() word_list = list(set(word_list)) word_dict = {w: i for i, w in enumerate(word_list)} vocab_size = len(word_dict) model = TextCNN() criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=0.001) inputs = torch.LongTensor([np.asarray([word_dict[n] for n in sen.split()]) for sen in sentences]) targets = torch.LongTensor([out for out in labels]) # To use the Torch softmax loss function # Training for epoch in range(5000): optimizer.zero_grad() output = model(inputs) # output : [batch_size, num_classes], target_batch : [batch_size] (LongTensor, not one-hot) loss = criterion(output, targets) if (epoch + 1) % 1000 == 0: print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.6f}'.format(loss)) loss.backward() optimizer.step() # Test test_text = 'sorry hate you' tests = [np.asarray([word_dict[n] for n in test_text.split()])] test_batch = torch.LongTensor(tests) # Predict predict = model(test_batch).data.max(1, keepdim=True)[1] if predict[0][0] == 0: print(test_text,"is Bad Mean...") else: print(test_text,"is Good Mean!!") ```
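The shape bookkeeping in `forward` is the part that usually trips people up: each height-`h` filter slides over a length-`L` sentence to produce `L - h + 1` activations, 1-max pooling keeps a single value per filter, and the pooled values are concatenated into the input of the final linear layer. A NumPy sketch of just that bookkeeping (random stand-in feature maps, not the trained network):

```python
import numpy as np

sequence_length, num_filters = 3, 3
filter_sizes = [2, 2, 2]
rng = np.random.default_rng(0)

pooled = []
for h in filter_sizes:
    # one feature map per filter: length = sequence_length - h + 1
    fmap = rng.normal(size=(num_filters, sequence_length - h + 1))
    pooled.append(fmap.max(axis=1))   # 1-max pooling over time
h_pool_flat = np.concatenate(pooled)  # size = num_filters * len(filter_sizes)

print(h_pool_flat.shape)  # (9,)
```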
github_jupyter
# Testing cosmogan Aug 25, 2020 Borrowing pieces of code from : - https://github.com/pytorch/tutorials/blob/11569e0db3599ac214b03e01956c2971b02c64ce/beginner_source/dcgan_faces_tutorial.py - https://github.com/exalearn/epiCorvid/tree/master/cGAN ``` import os import random import logging import sys import torch import torch.nn as nn import torch.nn.parallel import torch.backends.cudnn as cudnn import torch.optim as optim import torch.utils.data from torchsummary import summary from torch.utils.data import DataLoader, TensorDataset import numpy as np import pandas as pd import matplotlib.pyplot as plt import matplotlib.animation as animation from IPython.display import HTML import argparse import time from datetime import datetime import glob import pickle import yaml import collections import torch.distributed as dist import socket %matplotlib widget ``` ## Modules ``` def try_barrier(): """Attempt a barrier but ignore any exceptions""" print('BAR %d'%rank) try: dist.barrier() except: pass def _get_sync_file(): """Logic for naming sync file using slurm env variables""" #sync_file_dir = '%s/pytorch-sync-files' % os.environ['SCRATCH'] sync_file_dir = '/global/homes/b/balewski/prjs/tmp/local-sync-files' os.makedirs(sync_file_dir, exist_ok=True) sync_file = 'file://%s/pytorch_sync.%s.%s' % ( sync_file_dir, os.environ['SLURM_JOB_ID'], os.environ['SLURM_STEP_ID']) return sync_file def f_load_config(config_file): with open(config_file) as f: config = yaml.load(f, Loader=yaml.SafeLoader) return config ### Transformation functions for image pixel values def f_transform(x): return 2.*x/(x + 4.) - 1. def f_invtransform(s): return 4.*(1. + s)/(1. 
- s) # custom weights initialization called on netG and netD def weights_init(m): classname = m.__class__.__name__ if classname.find('Conv') != -1: nn.init.normal_(m.weight.data, 0.0, 0.02) elif classname.find('BatchNorm') != -1: nn.init.normal_(m.weight.data, 1.0, 0.02) nn.init.constant_(m.bias.data, 0) # Generator Code class View(nn.Module): def __init__(self, shape): super(View, self).__init__() self.shape = shape def forward(self, x): return x.view(*self.shape) class Generator(nn.Module): def __init__(self, gdict): super(Generator, self).__init__() ## Define new variables from dict keys=['ngpu','nz','nc','ngf','kernel_size','stride','g_padding'] ngpu, nz,nc,ngf,kernel_size,stride,g_padding=list(collections.OrderedDict({key:gdict[key] for key in keys}).values()) self.main = nn.Sequential( # nn.ConvTranspose2d(in_channels, out_channels, kernel_size,stride,padding,output_padding,groups,bias, Dilation,padding_mode) nn.Linear(nz,nc*ngf*8*8*8),# 32768 nn.BatchNorm2d(nc,eps=1e-05, momentum=0.9, affine=True), nn.ReLU(inplace=True), View(shape=[-1,ngf*8,8,8]), nn.ConvTranspose2d(ngf * 8, ngf * 4, kernel_size, stride, g_padding, output_padding=1, bias=False), nn.BatchNorm2d(ngf*4,eps=1e-05, momentum=0.9, affine=True), nn.ReLU(inplace=True), # state size. (ngf*4) x 8 x 8 nn.ConvTranspose2d( ngf * 4, ngf * 2, kernel_size, stride, g_padding, 1, bias=False), nn.BatchNorm2d(ngf*2,eps=1e-05, momentum=0.9, affine=True), nn.ReLU(inplace=True), # state size. (ngf*2) x 16 x 16 nn.ConvTranspose2d( ngf * 2, ngf, kernel_size, stride, g_padding, 1, bias=False), nn.BatchNorm2d(ngf,eps=1e-05, momentum=0.9, affine=True), nn.ReLU(inplace=True), # state size. 
(ngf) x 32 x 32 nn.ConvTranspose2d( ngf, nc, kernel_size, stride,g_padding, 1, bias=False), nn.Tanh() ) def forward(self, ip): return self.main(ip) class Discriminator(nn.Module): def __init__(self, gdict): super(Discriminator, self).__init__() ## Define new variables from dict keys=['ngpu','nz','nc','ndf','kernel_size','stride','d_padding'] ngpu, nz,nc,ndf,kernel_size,stride,d_padding=list(collections.OrderedDict({key:gdict[key] for key in keys}).values()) self.main = nn.Sequential( # input is (nc) x 64 x 64 # nn.Conv2d(in_channels, out_channels, kernel_size,stride,padding,output_padding,groups,bias, Dilation,padding_mode) nn.Conv2d(nc, ndf,kernel_size, stride, d_padding, bias=True), nn.BatchNorm2d(ndf,eps=1e-05, momentum=0.9, affine=True), nn.LeakyReLU(0.2, inplace=True), # state size. (ndf) x 32 x 32 nn.Conv2d(ndf, ndf * 2, kernel_size, stride, d_padding, bias=True), nn.BatchNorm2d(ndf * 2,eps=1e-05, momentum=0.9, affine=True), nn.LeakyReLU(0.2, inplace=True), # state size. (ndf*2) x 16 x 16 nn.Conv2d(ndf * 2, ndf * 4, kernel_size, stride, d_padding, bias=True), nn.BatchNorm2d(ndf * 4,eps=1e-05, momentum=0.9, affine=True), nn.LeakyReLU(0.2, inplace=True), # state size. (ndf*4) x 8 x 8 nn.Conv2d(ndf * 4, ndf * 8, kernel_size, stride, d_padding, bias=True), nn.BatchNorm2d(ndf * 8,eps=1e-05, momentum=0.9, affine=True), nn.LeakyReLU(0.2, inplace=True), # state size. 
(ndf*8) x 4 x 4 nn.Flatten(), nn.Linear(nc*ndf*8*8*8, 1) # nn.Sigmoid() ) def forward(self, ip): return self.main(ip) def f_gen_images(gdict,netG,optimizerG,ip_fname,op_loc,op_strg='inf_img_',op_size=500): '''Generate images for best saved models Arguments: gdict, netG, optimizerG, ip_fname: name of input file op_strg: [string name for output file] op_size: Number of images to generate ''' nz,device=gdict['nz'],gdict['device'] try: if torch.cuda.is_available(): checkpoint=torch.load(ip_fname) else: checkpoint=torch.load(ip_fname,map_location=torch.device('cpu')) except Exception as e: print(e) print("skipping generation of images for ",ip_fname) return ## Load checkpoint if gdict['multi-gpu']: netG.module.load_state_dict(checkpoint['G_state']) else: netG.load_state_dict(checkpoint['G_state']) ## Load other stuff iters=checkpoint['iters'] epoch=checkpoint['epoch'] optimizerG.load_state_dict(checkpoint['optimizerG_state_dict']) # Generate batch of latent vectors noise = torch.randn(op_size, 1, 1, nz, device=device) # Generate fake image batch with G netG.eval() ## This is required before running inference gen = netG(noise) gen_images=gen.detach().cpu().numpy()[:,:,:,:] print(gen_images.shape) op_fname='%s_epoch-%s_step-%s.npy'%(op_strg,epoch,iters) np.save(op_loc+op_fname,gen_images) print("Image saved in ",op_fname) def f_save_checkpoint(gdict,epoch,iters,best_chi1,best_chi2,netG,netD,optimizerG,optimizerD,save_loc): ''' Checkpoint model ''' if gdict['multi-gpu']: ## Dataparallel torch.save({'epoch':epoch,'iters':iters,'best_chi1':best_chi1,'best_chi2':best_chi2, 'G_state':netG.module.state_dict(),'D_state':netD.module.state_dict(),'optimizerG_state_dict':optimizerG.state_dict(), 'optimizerD_state_dict':optimizerD.state_dict()}, save_loc) else : torch.save({'epoch':epoch,'iters':iters,'best_chi1':best_chi1,'best_chi2':best_chi2, 'G_state':netG.state_dict(),'D_state':netD.state_dict(),'optimizerG_state_dict':optimizerG.state_dict(), 
'optimizerD_state_dict':optimizerD.state_dict()}, save_loc) def f_load_checkpoint(ip_fname,netG,netD,optimizerG,optimizerD,gdict): ''' Load saved checkpoint Also loads step, epoch, best_chi1, best_chi2''' try: checkpoint=torch.load(ip_fname) except Exception as e: print(e) print("skipping generation of images for ",ip_fname) raise SystemError ## Load checkpoint if gdict['multi-gpu']: netG.module.load_state_dict(checkpoint['G_state']) netD.module.load_state_dict(checkpoint['D_state']) else: netG.load_state_dict(checkpoint['G_state']) netD.load_state_dict(checkpoint['D_state']) optimizerD.load_state_dict(checkpoint['optimizerD_state_dict']) optimizerG.load_state_dict(checkpoint['optimizerG_state_dict']) iters=checkpoint['iters'] epoch=checkpoint['epoch'] best_chi1=checkpoint['best_chi1'] best_chi2=checkpoint['best_chi2'] netG.train() netD.train() return iters,epoch,best_chi1,best_chi2 #################### ### Pytorch code ### #################### def f_torch_radial_profile(img, center=(None,None)): ''' Module to compute radial profile of a 2D image Bincount causes issues with backprop, so not using this code ''' y,x=torch.meshgrid(torch.arange(0,img.shape[0]),torch.arange(0,img.shape[1])) # Get a grid of x and y values if center[0]==None and center[1]==None: center = torch.Tensor([(x.max()-x.min())/2.0, (y.max()-y.min())/2.0]) # compute centers # get radial values of every pair of points r = torch.sqrt((x - center[0])**2 + (y - center[1])**2) r= r.int() # print(r.shape,img.shape) # Compute histogram of r values tbin=torch.bincount(torch.reshape(r,(-1,)),weights=torch.reshape(img,(-1,)).type(torch.DoubleTensor)) nr = torch.bincount(torch.reshape(r,(-1,))) radialprofile = tbin / nr return radialprofile[1:-1] def f_torch_get_azimuthalAverage_with_batch(image, center=None): ### Not used in this code. """ Calculate the azimuthally averaged radial profile. Only use if you need to combine batches image - The 2D image center - The [x,y] pixel coordinates used as the center. 
The default is None, which then uses the center of the image (including fractional pixels). source: https://www.astrobetter.com/blog/2010/03/03/fourier-transforms-of-images-in-python/ """ batch, channel, height, width = image.shape # Create a grid of points with x and y coordinates y, x = np.indices([height,width]) if not center: center = np.array([(x.max()-x.min())/2.0, (y.max()-y.min())/2.0]) # Get the radial coordinate for every grid point. Array has the shape of image r = torch.tensor(np.hypot(x - center[0], y - center[1])) # Get sorted radii ind = torch.argsort(torch.reshape(r, (batch, channel,-1))) r_sorted = torch.gather(torch.reshape(r, (batch, channel, -1,)),2, ind) i_sorted = torch.gather(torch.reshape(image, (batch, channel, -1,)),2, ind) # Get the integer part of the radii (bin size = 1) r_int=r_sorted.to(torch.int32) # Find all pixels that fall within each radial bin. deltar = r_int[:,:,1:] - r_int[:,:,:-1] # Assumes all radii represented rind = torch.reshape(torch.where(deltar)[2], (batch, -1)) # location of changes in radius rind=torch.unsqueeze(rind,1) nr = (rind[:,:,1:] - rind[:,:,:-1]).type(torch.float) # number of radius bin # Cumulative sum to figure out sums for each radius bin csum = torch.cumsum(i_sorted, axis=-1) # print(csum.shape,rind.shape,nr.shape) tbin = torch.gather(csum, 2, rind[:,:,1:]) - torch.gather(csum, 2, rind[:,:,:-1]) radial_prof = tbin / nr return radial_prof def f_get_rad(img): ''' Get the radial tensor for use in f_torch_get_azimuthalAverage ''' height,width=img.shape[-2:] # Create a grid of points with x and y coordinates y, x = np.indices([height,width]) center=[] if not center: center = np.array([(x.max()-x.min())/2.0, (y.max()-y.min())/2.0]) # Get the radial coordinate for every grid point.
Array has the shape of image r = torch.tensor(np.hypot(x - center[0], y - center[1])) # Get sorted radii ind = torch.argsort(torch.reshape(r, (-1,))) return r.detach(),ind.detach() def f_torch_get_azimuthalAverage(image,r,ind): """ Calculate the azimuthally averaged radial profile. image - The 2D image center - The [x,y] pixel coordinates used as the center. The default is None, which then uses the center of the image (including fractional pixels). source: https://www.astrobetter.com/blog/2010/03/03/fourier-transforms-of-images-in-python/ """ # height, width = image.shape # # Create a grid of points with x and y coordinates # y, x = np.indices([height,width]) # if not center: # center = np.array([(x.max()-x.min())/2.0, (y.max()-y.min())/2.0]) # # Get the radial coordinate for every grid point. Array has the shape of image # r = torch.tensor(np.hypot(x - center[0], y - center[1])) # # Get sorted radii # ind = torch.argsort(torch.reshape(r, (-1,))) r_sorted = torch.gather(torch.reshape(r, ( -1,)),0, ind) i_sorted = torch.gather(torch.reshape(image, ( -1,)),0, ind) # Get the integer part of the radii (bin size = 1) r_int=r_sorted.to(torch.int32) # Find all pixels that fall within each radial bin.
deltar = r_int[1:] - r_int[:-1] # Assumes all radii represented rind = torch.reshape(torch.where(deltar)[0], (-1,)) # location of changes in radius nr = (rind[1:] - rind[:-1]).type(torch.float) # number of radius bin # Cumulative sum to figure out sums for each radius bin csum = torch.cumsum(i_sorted, axis=-1) tbin = torch.gather(csum, 0, rind[1:]) - torch.gather(csum, 0, rind[:-1]) radial_prof = tbin / nr return radial_prof def f_torch_fftshift(real, imag): for dim in range(0, len(real.size())): real = torch.roll(real, dims=dim, shifts=real.size(dim)//2) imag = torch.roll(imag, dims=dim, shifts=imag.size(dim)//2) return real, imag def f_torch_compute_spectrum(arr,r,ind): GLOBAL_MEAN=1.0 arr=(arr-GLOBAL_MEAN)/(GLOBAL_MEAN) y1=torch.rfft(arr,signal_ndim=2,onesided=False) real,imag=f_torch_fftshift(y1[:,:,0],y1[:,:,1]) ## last index is real/imag part y2=real**2+imag**2 ## Absolute value of each complex number # print(y2.shape) z1=f_torch_get_azimuthalAverage(y2,r,ind) ## Compute radial profile return z1 def f_torch_compute_batch_spectrum(arr,r,ind): batch_pk=torch.stack([f_torch_compute_spectrum(i,r,ind) for i in arr]) return batch_pk def f_torch_image_spectrum(x,num_channels,r,ind): ''' Data has to be in the form (batch,channel,x,y) ''' mean=[[] for i in range(num_channels)] sdev=[[] for i in range(num_channels)] for i in range(num_channels): arr=x[:,i,:,:] batch_pk=f_torch_compute_batch_spectrum(arr,r,ind) mean[i]=torch.mean(batch_pk,axis=0) # sdev[i]=torch.std(batch_pk,axis=0)/np.sqrt(batch_pk.shape[0]) # sdev[i]=torch.std(batch_pk,axis=0) sdev[i]=torch.var(batch_pk,axis=0) mean=torch.stack(mean) sdev=torch.stack(sdev) return mean,sdev def f_compute_hist(data,bins): try: hist_data=torch.histc(data,bins=bins) ## A kind of normalization of histograms: divide by total sum hist_data=(hist_data*bins)/torch.sum(hist_data) except Exception as e: print(e) hist_data=torch.zeros(bins) return hist_data ### Losses def 
loss_spectrum(spec_mean,spec_mean_ref,spec_std,spec_std_ref,image_size,lambda1): ''' Loss function for the spectrum : mean + variance Log(sum( batch value - expect value) ^ 2 )) ''' idx=int(image_size/2) ### For the spectrum, use only N/2 indices for loss calc. ### Warning: the first index is the channel number.For multiple channels, you are averaging over them, which is fine. spec_mean=torch.log(torch.mean(torch.pow(spec_mean[:,:idx]-spec_mean_ref[:,:idx],2))) spec_sdev=torch.log(torch.mean(torch.pow(spec_std[:,:idx]-spec_std_ref[:,:idx],2))) lambda1=lambda1; lambda2=lambda1; ans=lambda1*spec_mean+lambda2*spec_sdev if torch.isnan(spec_sdev).any(): print("spec loss with nan",ans) return ans def loss_hist(hist_sample,hist_ref): lambda1=1.0 return lambda1*torch.log(torch.mean(torch.pow(hist_sample-hist_ref,2))) # def f_size(ip): # p=2;s=2 # # return (ip + 2 * 0 - 1 * (p-1) -1 )/ s + 1 # return (ip-1)*s - 2 * p + 1 *(5-1)+ 1 + 1 # f_size(128) # logging.basicConfig(filename=save_dir+'/log.log',filemode='w',format='%(name)s - %(levelname)s - %(message)s') ``` ## Main code ``` def f_train_loop(dataloader,metrics_df,gdict): ''' Train single epoch ''' ## Define new variables from dict keys=['image_size','start_epoch','epochs','iters','best_chi1','best_chi2','save_dir','device','flip_prob','nz','batchsize','bns'] image_size,start_epoch,epochs,iters,best_chi1,best_chi2,save_dir,device,flip_prob,nz,batchsize,bns=list(collections.OrderedDict({key:gdict[key] for key in keys}).values()) for epoch in range(start_epoch,epochs): t_epoch_start=time.time() for count, data in enumerate(dataloader, 0): ####### Train GAN ######## netG.train(); netD.train(); ### Need to add these after inference and before training tme1=time.time() ### Update D network: maximize log(D(x)) + log(1 - D(G(z))) netD.zero_grad() real_cpu = data[0].to(device) b_size = real_cpu.size(0) real_label = torch.full((b_size,), 1, device=device) fake_label = torch.full((b_size,), 0, device=device) g_label = 
torch.full((b_size,), 1, device=device) ## No flipping for Generator labels # Flip labels with probability flip_prob for idx in np.random.choice(np.arange(b_size),size=int(np.ceil(b_size*flip_prob))): real_label[idx]=0; fake_label[idx]=1 # Generate fake image batch with G noise = torch.randn(b_size, 1, 1, nz, device=device) fake = netG(noise) # Forward pass real batch through D output = netD(real_cpu).view(-1) errD_real = criterion(output, real_label) errD_real.backward() D_x = output.mean().item() # Forward pass fake batch through D output = netD(fake.detach()).view(-1) errD_fake = criterion(output, fake_label) errD_fake.backward() D_G_z1 = output.mean().item() errD = errD_real + errD_fake optimizerD.step() ### Update G network: maximize log(D(G(z))) netG.zero_grad() output = netD(fake).view(-1) errG_adv = criterion(output, g_label) # Histogram pixel intensity loss hist_gen=f_compute_hist(fake,bins=bns) hist_loss=loss_hist(hist_gen,hist_val.to(device)) # Add spectral loss mean,sdev=f_torch_image_spectrum(f_invtransform(fake),1,r.to(device),ind.to(device)) spec_loss=loss_spectrum(mean,mean_spec_val.to(device),sdev,sdev_spec_val.to(device),image_size,gdict['lambda1']) if gdict['spec_loss_flag']: errG=errG_adv+spec_loss else: errG=errG_adv if torch.isnan(errG).any(): logging.info(errG) raise SystemError # Calculate gradients for G errG.backward() D_G_z2 = output.mean().item() optimizerG.step() tme2=time.time() ####### Store metrics ######## # Output training stats if count % gdict['checkpoint_size'] == 0: logging.info('[%d/%d][%d/%d]\tLoss_D: %.4f\tLoss_adv: %.4f\tLoss_G: %.4f\tD(x): %.4f\tD(G(z)): %.4f / %.4f' % (epoch, epochs, count, len(dataloader), errD.item(), errG_adv.item(),errG.item(), D_x, D_G_z1, D_G_z2)), logging.info("Spec loss: %s,\t hist loss: %s"%(spec_loss.item(),hist_loss.item())), logging.info("Training time for step %s : %s"%(iters, tme2-tme1)) # Save metrics
cols=['step','epoch','Dreal','Dfake','Dfull','G_adv','G_full','spec_loss','hist_loss','D(x)','D_G_z1','D_G_z2','time'] vals=[iters,epoch,errD_real.item(),errD_fake.item(),errD.item(),errG_adv.item(),errG.item(),spec_loss.item(),hist_loss.item(),D_x,D_G_z1,D_G_z2,tme2-tme1] for col,val in zip(cols,vals): metrics_df.loc[iters,col]=val ### Checkpoint the best model checkpoint=True iters += 1 ### Model has been updated, so update iters before saving metrics and model. ### Compute validation metrics for updated model netG.eval() with torch.no_grad(): #fake = netG(fixed_noise).detach().cpu() fake = netG(fixed_noise) hist_gen=f_compute_hist(fake,bins=bns) hist_chi=loss_hist(hist_gen,hist_val.to(device)) mean,sdev=f_torch_image_spectrum(f_invtransform(fake),1,r.to(device),ind.to(device)) spec_chi=loss_spectrum(mean,mean_spec_val.to(device),sdev,sdev_spec_val.to(device),image_size,gdict['lambda1']) # Storing chi for next step for col,val in zip(['spec_chi','hist_chi'],[spec_chi.item(),hist_chi.item()]): metrics_df.loc[iters,col]=val # Checkpoint model for continuing run if count == len(dataloader)-1: ## Check point at last step of epoch f_save_checkpoint(gdict,epoch,iters,best_chi1,best_chi2,netG,netD,optimizerG,optimizerD,save_loc=save_dir+'/models/checkpoint_last.tar') if (checkpoint and (epoch > 1)): # Choose best models by metric if hist_chi< best_chi1: f_save_checkpoint(gdict,epoch,iters,best_chi1,best_chi2,netG,netD,optimizerG,optimizerD,save_loc=save_dir+'/models/checkpoint_best_hist.tar') best_chi1=hist_chi.item() logging.info("Saving best hist model at epoch %s, step %s."%(epoch,iters)) if spec_chi< best_chi2: f_save_checkpoint(gdict,epoch,iters,best_chi1,best_chi2,netG,netD,optimizerG,optimizerD,save_loc=save_dir+'/models/checkpoint_best_spec.tar') best_chi2=spec_chi.item() logging.info("Saving best spec model at epoch %s, step %s"%(epoch,iters)) if iters in gdict['save_steps_list']: 
f_save_checkpoint(gdict,epoch,iters,best_chi1,best_chi2,netG,netD,optimizerG,optimizerD,save_loc=save_dir+'/models/checkpoint_{0}.tar'.format(iters)) logging.info("Saving given-step at epoch %s, step %s."%(epoch,iters)) # Save G's output on fixed_noise if ((iters % gdict['checkpoint_size'] == 0) or ((epoch == epochs-1) and (count == len(dataloader)-1))): netG.eval() with torch.no_grad(): fake = netG(fixed_noise).detach().cpu() img_arr=np.array(fake[:,:,:,:]) fname='gen_img_epoch-%s_step-%s'%(epoch,iters) np.save(save_dir+'/images/'+fname,img_arr) t_epoch_end=time.time() logging.info("Time taken for epoch %s: %s"%(epoch,t_epoch_end-t_epoch_start)) # Save Metrics to file after each epoch metrics_df.to_pickle(save_dir+'/df_metrics.pkle') logging.info("best chis: {0}, {1}".format(best_chi1,best_chi2)) def f_init_gdict(gdict,config_dict): ''' Initialize the global dictionary gdict with values in config file''' keys1=['workers','nc','nz','ngf','ndf','beta1','kernel_size','stride','g_padding','d_padding','flip_prob'] keys2=['image_size','checkpoint_size','num_imgs','ip_fname','op_loc'] for key in keys1: gdict[key]=config_dict['training'][key] for key in keys2: gdict[key]=config_dict['data'][key] device='cuda' if device=='cuda': rank = int(os.environ['SLURM_PROCID']) world_size = int(os.environ['SLURM_NTASKS']) locRank=int(os.environ['SLURM_LOCALID']) else: rank=0; world_size = 1; locRank=0 host=socket.gethostname() verb=rank==0 print('M:myRank=',rank,'world_size =',world_size,'verb=',verb,host,'locRank=',locRank ) masterIP=os.getenv('MASTER_ADDR') if masterIP==None: assert device=='cuda' # must specify MASTER_ADDR sync_file = _get_sync_file() if verb: print('use sync_file =',sync_file) else: sync_file='env://' masterPort=os.getenv('MASTER_PORT') if verb: print('use masterIP',masterIP,masterPort) assert masterPort!=None if verb: print('imported PyTorch ver:',torch.__version__) dist.init_process_group(backend='nccl', init_method=sync_file, world_size=world_size, rank=rank)
print("M:after dist.init_process_group") inp_dim=280 fc_dim=20 out_dim=10 epochs=15 batch_size=16*1024//world_size # local batch size steps=16 num_eve=steps*batch_size learning_rate = 0.02 num_cpus=5 # to load the data in parallel; -c10 locks 5 phys cores # Initialize model torch.manual_seed(0) model = JanModel(inp_dim,fc_dim,out_dim) if device=='cuda': torch.cuda.set_device(locRank) model.cuda(locRank) # define loss function loss_fn = nn.MSELoss().cuda(locRank) # Initialize optimizer optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) # Wrap the model # This replicates the model onto the GPU for the process. if device=='cuda': model = nn.parallel.DistributedDataParallel(model, device_ids=[locRank]) # - - - - DATA PREP - - - - - if verb: print('\nM: generate data and train, num_eve=',num_eve) X,Y=create_dataset(num_eve,inp_dim,out_dim) if verb: print('\nCreate torch-Dataset instance') trainDst=JanDataset(X,Y) if verb: print('\nCreate torch-DataLoader instance & test it, num_cpus=',num_cpus) trainLdr = DataLoader(trainDst, batch_size=batch_size, shuffle=True, num_workers=num_cpus,pin_memory=True) # - - - - DATA READY - - - - - # Note, intentionally I do not use torch.utils.data.distributed.DistributedSampler(..)
# because I want to control manually what data will be sent to which GPU - here data are generated on CPU matched to GPU if verb: print('\n print one batch of training data ') xx, yy = next(iter(trainLdr)) print('test_dataLoader: X:',xx.shape,'Y:',yy.shape) print('Y[:,]',yy[:,0]) print('\n= = = = Prepare for the training = = =\n') print('\n\nM: torchsummary.summary(model):'); print(model) inp_size=(inp_dim,) # input_size=(channels, H, W)) if CNN is first if device=='cuda': model=model.to(device) # re-cast model on device, data will be cast later ``` ## Start ``` if __name__=="__main__": torch.backends.cudnn.benchmark=True # torch.autograd.set_detect_anomaly(True) t0=time.time() ################################# # args=f_parse_args() # Manually add args ( different for jupyter notebook) args=argparse.Namespace() args.config='1_main_code/config_128.yaml' args.ngpu=1 args.batchsize=128 args.spec_loss_flag=True args.checkpoint_size=50 args.epochs=10 args.learn_rate=0.0002 args.mode='fresh' # args.mode='continue' # args.ip_fldr='/global/cfs/cdirs/m3363/vayyar/cosmogan_data/results_from_other_code/pytorch/results/128sq/20201211_093818_nb_test/' args.run_suffix='nb_test' args.deterministic=False args.seed='234373' args.lambda1=0.1 args.save_steps_list=[5,10] ### Set up ### config_file=args.config config_dict=f_load_config(config_file) # Initialize variables gdict={} f_init_gdict(gdict,config_dict) ## Add args variables to gdict for key in ['ngpu','batchsize','mode','spec_loss_flag','epochs','learn_rate','lambda1','save_steps_list']: gdict[key]=vars(args)[key] ###### Set up directories ####### if gdict['mode']=='fresh': # Create prefix for foldername fldr_name=datetime.now().strftime('%Y%m%d_%H%M%S') ## time format gdict['save_dir']=gdict['op_loc']+fldr_name+'_'+args.run_suffix if not os.path.exists(gdict['save_dir']): os.makedirs(gdict['save_dir']+'/models') os.makedirs(gdict['save_dir']+'/images') elif gdict['mode']=='continue': ## For checkpointed runs
gdict['save_dir']=args.ip_fldr ### Read loss data with open (gdict['save_dir']+'df_metrics.pkle','rb') as f: metrics_dict=pickle.load(f) # ### Write all logging.info statements to stdout and log file (different for jpt notebooks) # logfile=gdict['save_dir']+'/log.log' # logging.basicConfig(level=logging.DEBUG, filename=logfile, filemode="a+", format="%(asctime)-15s %(levelname)-8s %(message)s") # Lg = logging.getLogger() # Lg.setLevel(logging.DEBUG) # lg_handler_file = logging.FileHandler(logfile) # lg_handler_stdout = logging.StreamHandler(sys.stdout) # Lg.addHandler(lg_handler_file) # Lg.addHandler(lg_handler_stdout) # logging.info('Args: {0}'.format(args)) # logging.info(config_dict) # logging.info('Start: %s'%(datetime.now().strftime('%Y-%m-%d %H:%M:%S'))) # if gdict['spec_loss_flag']: logging.info("Using Spectral loss") ### Override (different for jpt notebooks) gdict['num_imgs']=2000 ## Special declarations gdict['bns']=50 gdict['device']=torch.device("cuda" if (torch.cuda.is_available() and gdict['ngpu'] > 0) else "cpu") gdict['ngpu']=torch.cuda.device_count() gdict['multi-gpu']=True if (gdict['device'].type == 'cuda') and (gdict['ngpu'] > 1) else False print(gdict) ### Initialize random seed if args.seed=='random': manualSeed = np.random.randint(1, 10000) else: manualSeed=int(args.seed) logging.info("Seed:{0}".format(manualSeed)) random.seed(manualSeed) np.random.seed(manualSeed) torch.manual_seed(manualSeed) torch.cuda.manual_seed_all(manualSeed) logging.info('Device:{0}'.format(gdict['device'])) if args.deterministic: logging.info("Running with deterministic sequence. 
Performance will be slower") torch.backends.cudnn.deterministic=True # torch.backends.cudnn.enabled = False torch.backends.cudnn.benchmark = False ################################# ####### Read data and precompute ###### img=np.load(gdict['ip_fname'],mmap_mode='r')[:gdict['num_imgs']].transpose(0,1,2,3) t_img=torch.from_numpy(img) print("%s, %s"%(img.shape,t_img.shape)) dataset=TensorDataset(t_img) dataloader=DataLoader(dataset,batch_size=gdict['batchsize'],shuffle=True,num_workers=0,drop_last=True) # Precompute metrics with validation data for computing losses with torch.no_grad(): val_img=np.load(gdict['ip_fname'])[-3000:].transpose(0,1,2,3) t_val_img=torch.from_numpy(val_img).to(gdict['device']) # Precompute radial coordinates r,ind=f_get_rad(img) r=r.to(gdict['device']); ind=ind.to(gdict['device']) # Stored mean and std of spectrum for full input data once mean_spec_val,sdev_spec_val=f_torch_image_spectrum(f_invtransform(t_val_img),1,r,ind) hist_val=f_compute_hist(t_val_img,bins=gdict['bns']) del val_img; del t_val_img; del img; del t_img ################################# ###### Build Networks ### # Define Models print("Building GAN networks") # Create Generator netG = Generator(gdict).to(gdict['device']) netG.apply(weights_init) # print(netG) summary(netG,(1,1,64)) # Create Discriminator netD = Discriminator(gdict).to(gdict['device']) netD.apply(weights_init) # print(netD) summary(netD,(1,128,128)) print("Number of GPUs used %s"%(gdict['ngpu'])) if (gdict['multi-gpu']): netG = nn.DataParallel(netG, list(range(gdict['ngpu']))) netD = nn.DataParallel(netD, list(range(gdict['ngpu']))) #### Initialize networks #### # criterion = nn.BCELoss() criterion = nn.BCEWithLogitsLoss() if gdict['mode']=='fresh': optimizerD = optim.Adam(netD.parameters(), lr=gdict['learn_rate'], betas=(gdict['beta1'], 0.999),eps=1e-7) optimizerG = optim.Adam(netG.parameters(), lr=gdict['learn_rate'], betas=(gdict['beta1'], 0.999),eps=1e-7) ### Initialize variables 
iters,start_epoch,best_chi1,best_chi2=0,0,1e10,1e10 ### Load network weights for continuing run elif gdict['mode']=='continue': iters,start_epoch,best_chi1,best_chi2=f_load_checkpoint(gdict['save_dir']+'/models/checkpoint_last.tar',netG,netD,optimizerG,optimizerD,gdict) logging.info("Continuing existing run. Loading checkpoint with epoch {0} and step {1}".format(start_epoch,iters)) start_epoch+=1 ## Start with the next epoch ## Add to gdict for key,val in zip(['best_chi1','best_chi2','iters','start_epoch'],[best_chi1,best_chi2,iters,start_epoch]): gdict[key]=val print(gdict) fixed_noise = torch.randn(gdict['batchsize'], 1, 1, gdict['nz'], device=gdict['device']) #Latent vectors to view G progress if __name__=="__main__": ################################# ### Set up metrics dataframe cols=['step','epoch','Dreal','Dfake','Dfull','G_adv','G_full','spec_loss','hist_loss','spec_chi','hist_chi','D(x)','D_G_z1','D_G_z2','time'] # size=int(len(dataloader) * epochs)+1 metrics_df=pd.DataFrame(columns=cols) ################################# ########## Train loop and save metrics and images ###### print("Starting Training Loop...") f_train_loop(dataloader,metrics_df,gdict) ## Generate images for best saved models ###### op_loc=gdict['save_dir']+'/images/' ip_fname=gdict['save_dir']+'/models/checkpoint_best_spec.tar' f_gen_images(gdict,netG,optimizerG,ip_fname,op_loc,op_strg='best_spec',op_size=200) ip_fname=gdict['save_dir']+'/models/checkpoint_best_hist.tar' f_gen_images(gdict,netG,optimizerG,ip_fname,op_loc,op_strg='best_hist',op_size=200) tf=time.time() print("Total time %s"%(tf-t0)) print('End: %s'%(datetime.now().strftime('%Y-%m-%d %H:%M:%S'))) # metrics_df.plot('step','time') metrics_df gdict ```
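The bincount trick used in `f_torch_radial_profile` above can be checked offline in plain NumPy. A minimal sketch; the 8×8 constant image is purely illustrative:

```python
import numpy as np

def radial_profile(img, center=None):
    """Azimuthally average a 2D array into integer radial bins."""
    h, w = img.shape
    y, x = np.indices((h, w))
    if center is None:
        center = ((x.max() - x.min()) / 2.0, (y.max() - y.min()) / 2.0)
    r = np.hypot(x - center[0], y - center[1]).astype(int)
    tbin = np.bincount(r.ravel(), weights=img.ravel())  # per-radius sum
    nr = np.bincount(r.ravel())                         # per-radius pixel count
    return tbin / nr

# A constant image should average to that constant in every radial bin
prof = radial_profile(np.full((8, 8), 3.0))
```

Since a constant image is flat in every direction, every bin of `prof` equals 3.0, which makes this a quick sanity check for the torch implementation.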
## Programming Exercise 1 - Linear Regression - [warmUpExercise](#warmUpExercise) - [Linear regression with one variable](#Linear-regression-with-one-variable) - [Gradient Descent](#Gradient-Descent) ``` # %load ../../standard_import.txt import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.linear_model import LinearRegression from mpl_toolkits.mplot3d import axes3d pd.set_option('display.notebook_repr_html', False) pd.set_option('display.max_columns', None) pd.set_option('display.max_rows', 150) pd.set_option('display.max_seq_items', None) #%config InlineBackend.figure_formats = {'pdf',} %matplotlib inline import seaborn as sns sns.set_context('notebook') sns.set_style('white') ``` #### warmUpExercise ``` def warmUpExercise(): return(np.identity(5)) warmUpExercise() ``` ### Linear regression with one variable ``` data = np.loadtxt('data/ex1data1.txt', delimiter=',') X = np.c_[np.ones(data.shape[0]),data[:,0]] y = np.c_[data[:,1]] plt.scatter(X[:,1], y, s=30, c='r', marker='x', linewidths=1) plt.xlim(4,24) plt.xlabel('Population of City in 10,000s') plt.ylabel('Profit in $10,000s'); ``` #### Gradient Descent ``` def computeCost(X, y, theta=[[0],[0]]): m = y.size J = 0 h = X.dot(theta) J = 1/(2*m)*np.sum(np.square(h-y)) return(J) computeCost(X,y) def gradientDescent(X, y, theta=[[0],[0]], alpha=0.01, num_iters=1500): m = y.size J_history = np.zeros(num_iters) for iter in np.arange(num_iters): h = X.dot(theta) theta = theta - alpha*(1/m)*(X.T.dot(h-y)) J_history[iter] = computeCost(X, y, theta) return(theta, J_history) # theta for minimized cost J theta , Cost_J = gradientDescent(X, y) print('theta: ',theta.ravel()) plt.plot(Cost_J) plt.ylabel('Cost J') plt.xlabel('Iterations'); xx = np.arange(5,23) yy = theta[0]+theta[1]*xx # Plot gradient descent plt.scatter(X[:,1], y, s=30, c='r', marker='x', linewidths=1) plt.plot(xx,yy, label='Linear regression (Gradient descent)') # Compare with Scikit-learn Linear regression regr = 
LinearRegression() regr.fit(X[:,1].reshape(-1,1), y.ravel()) plt.plot(xx, regr.intercept_+regr.coef_*xx, label='Linear regression (Scikit-learn GLM)') plt.xlim(4,24) plt.xlabel('Population of City in 10,000s') plt.ylabel('Profit in $10,000s') plt.legend(loc=4); # Predict profit for a city with population of 35000 and 70000 print(theta.T.dot([1, 3.5])*10000) print(theta.T.dot([1, 7])*10000) # Create grid coordinates for plotting B0 = np.linspace(-10, 10, 50) B1 = np.linspace(-1, 4, 50) xx, yy = np.meshgrid(B0, B1, indexing='xy') Z = np.zeros((B0.size,B1.size)) # Calculate Z-values (Cost) based on grid of coefficients for (i,j),v in np.ndenumerate(Z): Z[i,j] = computeCost(X,y, theta=[[xx[i,j]], [yy[i,j]]]) fig = plt.figure(figsize=(15,6)) ax1 = fig.add_subplot(121) ax2 = fig.add_subplot(122, projection='3d') # Left plot CS = ax1.contour(xx, yy, Z, np.logspace(-2, 3, 20), cmap=plt.cm.jet) ax1.scatter(theta[0],theta[1], c='r') # Right plot ax2.plot_surface(xx, yy, Z, rstride=1, cstride=1, alpha=0.6, cmap=plt.cm.jet) ax2.set_zlabel('Cost') ax2.set_zlim(Z.min(),Z.max()) ax2.view_init(elev=15, azim=230) # settings common to both plots for ax in fig.axes: ax.set_xlabel(r'$\theta_0$', fontsize=17) ax.set_ylabel(r'$\theta_1$', fontsize=17) ```
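Besides scikit-learn, gradient descent can be cross-checked against the closed-form normal equation, theta = (X^T X)^{-1} X^T y. A minimal sketch on synthetic noise-free data (the intercept 2 and slope 3 are illustrative values, not from the exercise data):

```python
import numpy as np

# Noise-free line y = 2 + 3x, so both solvers should agree
x = np.linspace(0, 10, 50)
X = np.c_[np.ones(x.size), x]
y = (2 + 3 * x).reshape(-1, 1)

# Closed-form normal equation: solve (X^T X) theta = X^T y
theta_ne = np.linalg.solve(X.T @ X, X.T @ y)

# Batch gradient descent, same update rule as gradientDescent above
theta_gd = np.zeros((2, 1))
alpha, m = 0.01, y.size
for _ in range(20000):
    theta_gd = theta_gd - alpha * (1 / m) * (X.T @ (X @ theta_gd - y))
```

With enough iterations the two estimates match to within about 1e-3, which is a useful convergence check when tuning `alpha` and `num_iters`.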
# BEL to Natural Language **Author:** [Charles Tapley Hoyt](https://github.com/cthoyt/) **Estimated Run Time:** 5 seconds This notebook shows how the PyBEL-INDRA integration can be used to turn a BEL graph into natural language. Special thanks to John Bachman and Ben Gyori for all of their efforts in making this possible. To view the interactive Javascript output in this notebook, open in the [Jupyter NBViewer](http://nbviewer.jupyter.org/github/pybel/pybel-notebooks/blob/master/BEL%20to%20Natural%20Language.ipynb). ## Imports ``` import sys import time import indra import indra.util.get_version import ndex2 import pybel from indra.assemblers.english_assembler import EnglishAssembler from indra.sources.bel.bel_api import process_pybel_graph from pybel.examples import sialic_acid_graph from pybel_tools.visualization import to_jupyter ``` ## Environment ``` print(sys.version) print(time.asctime()) ``` ## Dependencies ``` pybel.utils.get_version() indra.util.get_version.get_version() ``` # Data The [Sialic Acid graph](http://pybel.readthedocs.io/en/latest/examples.html#pybel.examples.sialic_acid_example.pybel.examples.sialic_acid_graph) is used as an example. ``` to_jupyter(sialic_acid_graph) ``` # Conversion The PyBEL BELGraph instance is converted to INDRA statements with the function [`process_pybel_graph`](http://indra.readthedocs.io/en/latest/modules/sources/bel/index.html#indra.sources.bel.bel_api.process_pybel_graph). It returns an instance of [`PybelProcessor`](`http://indra.readthedocs.io/en/latest/modules/sources/bel/index.html#module-indra.sources.bel.pybel_processor`), which stores the INDRA statements. ``` pbp = process_pybel_graph(sialic_acid_graph) ``` A list of INDRA statements is extracted from the BEL graph and stored in the field [`PybelProcessor.statements`](http://indra.readthedocs.io/en/latest/modules/sources/bel/index.html#indra.sources.bel.pybel_processor.PybelProcessor.statements).
Note that INDRA is built to consider mechanistic information, and therefore excludes most associative relationships. ``` stmts = pbp.statements stmts ``` The list of INDRA statements is converted to plain English using the [`EnglishAssembler`](http://indra.readthedocs.io/en/latest/modules/assemblers/english_assembler.html#indra.assemblers.english_assembler.EnglishAssembler). ``` asm = EnglishAssembler(stmts) print(asm.make_model(), sep='\n') ``` # Conclusion While knowledge assembly is indeed difficult and precarious, the true scientific task is to use such assemblies to generate mechanistic hypotheses. By far, the most common way is for a scientist to use their intuition and choose an explanatory subgraph or pathway. This notebook has demonstrated that after this has been done, the results can be serialized to English prose in a precise manner.
# Basic Examples with Different Protocols ## Prerequisites * A kubernetes cluster with kubectl configured * curl * grpcurl * pygmentize ## Examples * [Seldon Protocol](#Seldon-Protocol-Model) * [Tensorflow Protocol](#Tensorflow-Protocol-Model) * [KFServing V2 Protocol](#KFServing-V2-Protocol-Model) ## Setup Seldon Core Use the setup notebook to [Setup Cluster](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html) to setup Seldon Core with an ingress - either Ambassador or Istio. Then port-forward to that ingress on localhost:8003 in a separate terminal either with: * Ambassador: `kubectl port-forward $(kubectl get pods -n seldon -l app.kubernetes.io/name=ambassador -o jsonpath='{.items[0].metadata.name}') -n seldon 8003:8080` * Istio: `kubectl port-forward $(kubectl get pods -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}') -n istio-system 8003:80` ``` !kubectl create namespace seldon !kubectl config set-context $(kubectl config current-context) --namespace=seldon import json import time from IPython.core.magic import register_line_cell_magic @register_line_cell_magic def writetemplate(line, cell): with open(line, 'w') as f: f.write(cell.format(**globals())) VERSION=!cat ../version.txt VERSION=VERSION[0] VERSION ``` ## Seldon Protocol Model We will deploy a REST model that uses the SELDON Protocol namely by specifying the attribute `protocol: seldon` ``` %%writetemplate resources/model_seldon.yaml apiVersion: machinelearning.seldon.io/v1 kind: SeldonDeployment metadata: name: example-seldon spec: protocol: seldon predictors: - componentSpecs: - spec: containers: - image: seldonio/mock_classifier:{VERSION} name: classifier graph: name: classifier type: MODEL name: model replicas: 1 !kubectl apply -f resources/model_seldon.yaml !kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=example-seldon -o jsonpath='{.items[0].metadata.name}') for i in range(60): state=!kubectl get sdep 
example-seldon -o jsonpath='{.status.state}' state=state[0] print(state) if state=="Available": break time.sleep(1) assert(state=="Available") X=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \ -X POST http://localhost:8003/seldon/seldon/example-seldon/api/v1.0/predictions \ -H "Content-Type: application/json" d=json.loads(X[0]) print(d) assert(d["data"]["ndarray"][0][0] > 0.4) X=!cd ../executor/proto && grpcurl -d '{"data":{"ndarray":[[1.0,2.0,5.0]]}}' \ -rpc-header seldon:example-seldon -rpc-header namespace:seldon \ -plaintext \ -proto ./prediction.proto 0.0.0.0:8003 seldon.protos.Seldon/Predict d=json.loads("".join(X)) print(d) assert(d["data"]["ndarray"][0][0] > 0.4) !kubectl delete -f resources/model_seldon.yaml ``` ## Tensorflow Protocol Model We will deploy a model that uses the TENSORFLOW Protocol, namely by specifying the attribute `protocol: tensorflow` ``` %%writefile resources/model_tfserving.yaml apiVersion: machinelearning.seldon.io/v1 kind: SeldonDeployment metadata: name: example-tfserving spec: protocol: tensorflow predictors: - componentSpecs: - spec: containers: - args: - --port=8500 - --rest_api_port=8501 - --model_name=halfplustwo - --model_base_path=gs://seldon-models/tfserving/half_plus_two image: tensorflow/serving name: halfplustwo ports: - containerPort: 8501 name: http protocol: TCP - containerPort: 8500 name: grpc protocol: TCP graph: name: halfplustwo type: MODEL endpoint: httpPort: 8501 grpcPort: 8500 name: model replicas: 1 !kubectl apply -f resources/model_tfserving.yaml !kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=example-tfserving \ -o jsonpath='{.items[0].metadata.name}') for i in range(60): state=!kubectl get sdep example-tfserving -o jsonpath='{.status.state}' state=state[0] print(state) if state=="Available": break time.sleep(1) assert(state=="Available") X=!curl -s -d '{"instances": [1.0, 2.0, 5.0]}' \ -X POST
http://localhost:8003/seldon/seldon/example-tfserving/v1/models/halfplustwo/:predict \ -H "Content-Type: application/json" d=json.loads("".join(X)) print(d) assert(d["predictions"][0] == 2.5) X=!cd ../executor/proto && grpcurl \ -d '{"model_spec":{"name":"halfplustwo"},"inputs":{"x":{"dtype": 1, "tensor_shape": {"dim":[{"size": 3}]}, "floatVal" : [1.0, 2.0, 3.0]}}}' \ -rpc-header seldon:example-tfserving -rpc-header namespace:seldon \ -plaintext -proto ./prediction_service.proto \ 0.0.0.0:8003 tensorflow.serving.PredictionService/Predict d=json.loads("".join(X)) print(d) assert(d["outputs"]["x"]["floatVal"][0] == 2.5) !kubectl delete -f resources/model_tfserving.yaml ``` ## KFServing V2 Protocol Model We will deploy a REST model that uses the KFServing V2 Protocol namely by specifying the attribute `protocol: kfserving` ``` %%writefile resources/model_v2.yaml apiVersion: machinelearning.seldon.io/v1alpha2 kind: SeldonDeployment metadata: name: triton spec: protocol: kfserving predictors: - graph: children: [] implementation: TRITON_SERVER modelUri: gs://seldon-models/trtis/simple-model name: simple name: simple replicas: 1 !kubectl apply -f resources/model_v2.yaml !kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=triton -o jsonpath='{.items[0].metadata.name}') for i in range(60): state=!kubectl get sdep triton -o jsonpath='{.status.state}' state=state[0] print(state) if state=="Available": break time.sleep(1) assert(state=="Available") X=!curl -s -d '{"inputs":[{"name":"INPUT0","data":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"datatype":"INT32","shape":[1,16]},{"name":"INPUT1","data":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"datatype":"INT32","shape":[1,16]}]}' \ -X POST http://0.0.0.0:8003/seldon/seldon/triton/v2/models/simple/infer \ -H "Content-Type: application/json" d=json.loads(X[0]) print(d) assert(d["outputs"][0]["data"][0]==2) X=!cd ../executor/api/grpc/kfserving/inference && \ grpcurl -d 
'{"model_name":"simple","inputs":[{"name":"INPUT0","contents":{"int_contents":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]},"datatype":"INT32","shape":[1,16]},{"name":"INPUT1","contents":{"int_contents":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]},"datatype":"INT32","shape":[1,16]}]}' \ -plaintext -proto ./grpc_service.proto \ -rpc-header seldon:triton -rpc-header namespace:seldon \ 0.0.0.0:8003 inference.GRPCInferenceService/ModelInfer X="".join(X) print(X) !kubectl delete -f resources/model_v2.yaml ```
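For reference, the JSON body sent to the V2 `infer` endpoint above can be rebuilt and sanity-checked in plain Python without a running cluster. This is a local sketch only: the element-wise addition is an assumption inferred from the notebook's own assertion (`d["outputs"][0]["data"][0]==2`), not behaviour verified against the server here.

```python
# Sketch: rebuild the KFServing V2 inference payload used above and derive the
# expected first output value locally, assuming the "simple" Triton model adds
# its two INT32 inputs element-wise (as the notebook's assertion implies).
import json

data = list(range(1, 17))  # [1, 2, ..., 16], as in the curl command

payload = {
    "inputs": [
        {"name": "INPUT0", "data": data, "datatype": "INT32", "shape": [1, 16]},
        {"name": "INPUT1", "data": data, "datatype": "INT32", "shape": [1, 16]},
    ]
}

# Element-wise sum of the two inputs; the first value matches the
# notebook's check d["outputs"][0]["data"][0] == 2.
expected_output0 = [a + b for a, b in zip(data, data)]

print(json.dumps(payload, separators=(",", ":"))[:50] + "...")
print(expected_output0[0])  # 2
```

Building the payload as a dict and serializing with `json.dumps` avoids the quoting pitfalls of hand-written curl strings.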
``` from tensorflow import keras from tensorflow.keras import * from tensorflow.keras.models import * from tensorflow.keras.layers import * from tensorflow.keras.regularizers import l2 # L2 regularization import tensorflow as tf import numpy as np import pandas as pd # 12-0.2 # 13-2.4 # 18-12.14 import pandas as pd import numpy as np normal = np.loadtxt(r'F:\张老师课题学习内容\code\数据集\试验数据(包括压力脉动和振动)\2013.9.12-未发生缠绕前\2013-9.12振动\2013-9-12振动-1250rmin-mat\1250rnormalviby.txt', delimiter=',') chanrao = np.loadtxt(r'F:\张老师课题学习内容\code\数据集\试验数据(包括压力脉动和振动)\2013.9.17-发生缠绕后\振动\9-18上午振动1250rmin-mat\1250r_chanraoviby.txt', delimiter=',') print(normal.shape,chanrao.shape,"***************************************************") data_normal=normal[0:2] # take the first two rows data_chanrao=chanrao[0:2] # take the first two rows print(data_normal.shape,data_chanrao.shape) print(data_normal,"\r\n",data_chanrao,"***************************************************") data_normal=data_normal.reshape(1,-1) data_chanrao=data_chanrao.reshape(1,-1) print(data_normal.shape,data_chanrao.shape) print(data_normal,"\r\n",data_chanrao,"***************************************************") # two pump condition signals: normal (healthy) and chanrao (entanglement fault) data_normal=data_normal.reshape(-1, 512) # (65536,) -> (128, 512) data_chanrao=data_chanrao.reshape(-1,512) print(data_normal.shape,data_chanrao.shape) import numpy as np def yuchuli(data,label): # preprocessing: 4:1 train/test split (102 train, 26 test samples) # shuffle the sample order np.random.shuffle(data) train = data[0:102,:] test = data[102:128,:] label_train = np.array([label for i in range(0,102)]) label_test =np.array([label for i in range(0,26)]) return train,test ,label_train ,label_test def stackkk(a,b,c,d,e,f,g,h): aa = np.vstack((a, e)) bb = np.vstack((b, f)) cc = np.hstack((c, g)) dd = np.hstack((d, h)) return aa,bb,cc,dd x_tra0,x_tes0,y_tra0,y_tes0 = yuchuli(data_normal,0) x_tra1,x_tes1,y_tra1,y_tes1 = yuchuli(data_chanrao,1) tr1,te1,yr1,ye1=stackkk(x_tra0,x_tes0,y_tra0,y_tes0 ,x_tra1,x_tes1,y_tra1,y_tes1) x_train=tr1 x_test=te1 y_train = yr1 y_test = ye1 # shuffle features and labels together by reusing the same RNG state state = np.random.get_state() np.random.shuffle(x_train) np.random.set_state(state) np.random.shuffle(y_train) state = np.random.get_state() np.random.shuffle(x_test) np.random.set_state(state) np.random.shuffle(y_test) # standardize the training and test sets def ZscoreNormalization(x): """Z-score normalization""" x = (x - np.mean(x)) / np.std(x) return x x_train=ZscoreNormalization(x_train) x_test=ZscoreNormalization(x_test) # print(x_test[0]) # add trailing axes so each sample matches the (512,1,1) input expected by the Conv2D layers below x_train = x_train.reshape(-1,512,1,1) x_test = x_test.reshape(-1,512,1,1) print(x_train.shape,x_test.shape) def to_one_hot(labels,dimension=2): results = np.zeros((len(labels),dimension)) for i,label in enumerate(labels): results[i,label] = 1 return results one_hot_train_labels = to_one_hot(y_train) one_hot_test_labels = to_one_hot(y_test) x = layers.Input(shape=[512,1,1]) # convolutional layer conv1 = layers.Conv2D(filters=16, kernel_size=(2, 1), activation='relu',padding='valid',name='conv1')(x) # pooling layer POOL1 = MaxPooling2D((2,1))(conv1) # convolutional layer conv2 = layers.Conv2D(filters=32, kernel_size=(2, 1), activation='relu',padding='valid',name='conv2')(POOL1) # pooling layer POOL2 = MaxPooling2D((2,1))(conv2) # dropout layer Dropout=layers.Dropout(0.1)(POOL2) Flatten=layers.Flatten()(Dropout) # fully connected layers Dense1=layers.Dense(50, activation='relu')(Flatten) Dense2=layers.Dense(2, activation='softmax')(Dense1) model = keras.Model(x, Dense2) model.summary() # compile with optimizer, loss and metric model.compile(loss='categorical_crossentropy', optimizer='adam',metrics=['accuracy']) import time time_begin = time.time() history = model.fit(x_train,one_hot_train_labels, validation_split=0.1, epochs=50,batch_size=10, shuffle=True) time_end = time.time() time = time_end - time_begin print('time:', time) import time time_begin = time.time() score = model.evaluate(x_test,one_hot_test_labels, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) time_end = time.time() time = time_end - time_begin print('time:', time) # plot accuracy/loss curves import matplotlib.pyplot as plt plt.plot(history.history['loss'],color='r') plt.plot(history.history['val_loss'],color='g') plt.plot(history.history['accuracy'],color='b') plt.plot(history.history['val_accuracy'],color='k') plt.title('model loss and acc') plt.ylabel('Accuracy') plt.xlabel('epoch') plt.legend(['train_loss', 'test_loss','train_acc', 'test_acc'], loc='center right') # plt.legend(['train_loss','train_acc'], loc='upper left') #plt.savefig('1.png') plt.show() import matplotlib.pyplot as plt plt.plot(history.history['loss'],color='r') plt.plot(history.history['accuracy'],color='b') plt.title('model loss and accuracy') plt.ylabel('loss/accuracy') plt.xlabel('epoch') plt.legend(['train_loss', 'train_accuracy'], loc='center right') plt.show() ```
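The Z-score step above standardizes the whole array with a single global mean and standard deviation (rather than per sample). A quick standalone check on synthetic data, independent of the vibration files:

```python
import numpy as np

def zscore(x):
    """Z-score normalization: subtract the global mean, divide by the global std."""
    return (x - np.mean(x)) / np.std(x)

rng = np.random.default_rng(0)
# Synthetic stand-in for the (128, 512) vibration windows.
x = rng.normal(loc=5.0, scale=3.0, size=(128, 512))

z = zscore(x)
print(z.mean(), z.std())  # should be ~0 and ~1
```

Standardizing test data with its own statistics (as the notebook does) is a common shortcut; reusing the training-set mean/std on the test set is the stricter convention when the two must stay comparable.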
# OpenEO Use Case 2: Multi–source phenology toolbox Use case implemented by VITO. ## Official description This use case concentrates on data fusion tools, time-series generation and phenological metrics using Sentinel-2 data. It will be tested on several back-end platforms by pilot users from the Action against Hunger and the International Centre for Integrated Mountain Development. The processes tested here depend on the availability of orthorectified Sentinel-2 surface reflectance data, including per-pixel quality masks. ## Overview In this use case, the goal is to derive phenology information from Sentinel-2 time series data. Here, phenology is defined by: - Start of season: a date and the corresponding value of the biophysical indicator - The maximum value of the growing curve for the indicator - End of season: a date and the corresponding value of the biophysical indicator Multiple biophysical indicators exist, but in this use case the enhanced vegetation index (EVI) is used. We start by importing the necessary packages and defining an area of interest. During the algorithm development phase, we work on a limited study field, so that we can use the direct execution capabilities of OpenEO to receive feedback on the implemented changes.
``` %matplotlib inline import matplotlib.pyplot as plt from rasterio.plot import show, show_hist import rasterio from shapely.geometry import Polygon from openeo import ImageCollection import openeo import logging import os from pathlib import Path import json import numpy as np import pandas as pd import geopandas as gpd import scipy.signal #enable logging in requests library from openeo.rest.imagecollectionclient import ImageCollectionClient start = "2018-05-01" end = "2018-10-01" date = "2018-08-17" parcels = gpd.read_file('potato_field.geojson') parcels.plot() polygon = parcels.geometry[0] minx,miny,maxx,maxy = polygon.bounds #enlarge bounds, to also have some data outside of our parcel #minx -= 0.001 #miny -= 0.001 #maxx+=0.001 #maxy+=0.001 polygon.bounds ``` Connect to the OpenEO backend, and create a Sentinel-2 datacube containing the 10 m resolution reflectance bands. We do not yet specify a time range; this allows us to play around with different time ranges later on. ``` session = openeo.session("nobody", "http://openeo.vgt.vito.be/openeo/0.4.0") #retrieve the list of available collections collections = session.list_collections() s2_radiometry = session.imagecollection("CGS_SENTINEL2_RADIOMETRY_V102_001") \ .filter_bbox(west=minx,east=maxx,north=maxy,south=miny,crs="EPSG:4326") ``` ## Preprocessing step 1: EVI computation Create an EVI data cube, based on the reflectance bands. The formula for the EVI index can be expressed using plain Python. The bands retrieved from the backend are unscaled reflectance values with a valid range between 0 and 10000.
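Before building the data cube, the EVI band math can be previewed with plain NumPy on a couple of hypothetical pixel values (the reflectances below are illustrative, not taken from the dataset). Note that the constant 1 in the standard EVI formula becomes 10000 here, because the bands are unscaled:

```python
import numpy as np

# Illustrative unscaled Sentinel-2 reflectances (valid range 0-10000).
B02 = np.array([400.0, 600.0])    # blue
B04 = np.array([500.0, 800.0])    # red
B08 = np.array([3000.0, 4500.0])  # near-infrared

# EVI with the +1 constant scaled to +10000 to match the reflectance scaling,
# mirroring the data-cube expression used in the next cell.
evi = (2.5 * (B08 - B04)) / (B08 + 6.0 * B04 - 7.5 * B02 + 10000.0)
print(evi)  # roughly 0.48 and 0.63 for these two pixels
```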
``` B02 = s2_radiometry.band('2') B04 = s2_radiometry.band('4') B08 = s2_radiometry.band('8') evi_cube_nodate = (2.5 * (B08 - B04)) / ((B08 + 6.0 * B04 - 7.5 * B02) + 10000.0*1.0) evi_cube = evi_cube_nodate.filter_temporal(start,end) #write graph to json, as example def write_graph(graph, filename): with open(filename, 'w') as outfile: json.dump(graph, outfile,indent=4) write_graph(evi_cube.graph,"evi_cube.json") ``` <div class="alert alert-block alert-success"> No actual processing has occurred until now; we have just been building a workflow consisting of multiple steps. In OpenEO, this workflow representation is called a process graph. This allows your workflow to be exchanged between multiple systems. The figure below shows this in a graphical representation. ![EVI Process graph](https://open-eo.github.io/openeo-api/img/pg-example.png "Process graph") </div> ``` def show_image(cube,cmap='RdYlGn'): %time cube.filter_temporal(date,date).download("temp%s.tiff"%date,format='GTIFF') with rasterio.open("temp%s.tiff"%date) as src: band_temp = src.read(1) fig, (ax) = plt.subplots(1,1, figsize=(7,7)) show(band_temp,ax=ax,cmap=cmap,vmin=0,vmax=1) show_image(evi_cube_nodate) ``` ### Preprocessing step 2: Cloud masking In the Sen2cor scene classification, these values are relevant for phenology: - 4: vegetated - 5: not-vegetated Everything else is cloud, snow, water, shadow ... In OpenEO, the mask function will mask every value that is set to True. ``` s2_sceneclassification = session.imagecollection("S2_FAPAR_SCENECLASSIFICATION_V102_PYRAMID") \ .filter_bbox(west=minx,east=maxx,north=maxy,south=miny,crs="EPSG:4326") mask = s2_sceneclassification.band('classification') mask = (mask != 4) & (mask !=5) mask ``` Masks produced by sen2cor still include a lot of unwanted clouds and shadow. This problem usually occurs in the proximity of detected clouds, so we try to extend our mask.
To do that, we use a bit of fuzzy logic: blur the binary mask using a gaussian so that our mask gives us an indication of how close to a cloud we are. By adjusting the window size, we can play around with how far from the detected clouds we want to extend our mask. A 30 pixel kernel applied to a 10M resolution image will cover a 300m area. ``` def makekernel(iwindowsize): kernel_vect = scipy.signal.windows.gaussian(iwindowsize, std = iwindowsize/4.0, sym=True) kernel = np.outer(kernel_vect, kernel_vect) kernel = kernel / kernel.sum() return kernel plt.imshow(makekernel(31)) ``` Use the apply_kernel OpenEO process: https://open-eo.github.io/openeo-api/v/0.4.0/processreference/#apply_kernel ``` fuzzy_mask = mask.apply_kernel(makekernel(29)) mask_extended = fuzzy_mask > 0.1 write_graph(mask_extended.graph,"mask.json") ``` To evaluate our masking code, we download some reference images: ``` mask_for_date = mask_extended.filter_temporal(date,date) %time fuzzy_mask.filter_temporal(date,date).download("mask%s.tiff"%date,format='GTIFF') #s2_sceneclassification.filter_temporal(date,date).download("scf%s.tiff"%date,format='GTIFF') %time evi_cube_nodate.filter_temporal(date,date).download("unmasked%s.tiff"%date,format='GTIFF') %time evi_cube_nodate.filter_temporal(date,date).mask(rastermask=mask_for_date,replacement=np.nan).download("masked%s.tiff"%date,format='GTIFF') with rasterio.open("unmasked%s.tiff"%date) as src: band_unmasked = src.read(1) with rasterio.open("masked%s.tiff"%date) as src: band_masked = src.read(1) with rasterio.open("mask%s.tiff"%date) as src: band_mask = src.read(1) fig, (axr, axg,axb) = plt.subplots(1,3, figsize=(14,14)) show(band_unmasked,ax=axr,cmap='RdYlGn',vmin=0,vmax=1) show(band_masked,ax=axg,cmap='RdYlGn',vmin=0,vmax=1) show(band_mask,ax=axb,cmap='coolwarm',vmin=0.0,vmax=0.8) ``` We can look under the hood of OpenEO, to look at the process graph that is used to encode our workflow: ``` evi_cube_masked = 
evi_cube.mask(rastermask=mask_extended.filter_temporal(start,end),replacement=np.nan) ``` #### Creating a viewing service OpenEO allows us to turn a datacube into a WMTS viewing service: ``` service = evi_cube_masked.tiled_viewing_service(type='WMTS',style={'colormap':'RdYlGn'}) print(service) ``` Extract an unsmoothed timeseries; this allows us to evaluate the intermediate result. For further analysis, smoothing will be needed. ``` %time timeseries_raw_dc = evi_cube.polygonal_mean_timeseries(polygon) timeseries_raw = pd.Series(timeseries_raw_dc.execute(),name="evi_raw") #timeseries are provided as an array, because of bands, so unpack timeseries_raw = timeseries_raw.apply(pd.Series) timeseries_raw.columns = ["evi_raw"] timeseries_raw.head(15) timeseries_masked_dc = evi_cube_masked.polygonal_mean_timeseries(polygon) %time timeseries_masked = pd.Series(timeseries_masked_dc.execute()) timeseries_masked = timeseries_masked.apply(pd.Series) timeseries_masked.columns = ["evi_masked"] timeseries_masked.head(15) ``` Now we can plot both the cloud-masked and unmasked values. Do note that the 'unmasked' layer already has some basic cloud filtering in place, based on medium- and high-probability clouds. ``` all_timeseries = timeseries_raw.join(timeseries_masked).dropna(how='all') all_timeseries.index = pd.to_datetime(all_timeseries.index) all_timeseries.plot(figsize=(14,7)) all_timeseries.head(15) ``` In the plot, we can see that cloud masking seems to reduce some of the variation found in the original raw timeseries. ## Preprocessing step 3: Time series smoothing Cloud masking has reduced the noise in our signal, but it is clearly not perfect. This is due to the limitations of the pixel-based cloud masking algorithm, which still leaves a lot of undetected bad pixels in our data. A commonly used approach is to apply smoothing to the timeseries.
Here we suggest using a 'Savitzky-Golay' filter, which we first try out locally on the aggregated timeseries before applying it to the pixels through the OpenEO API. ``` timeseries_masked.index = pd.to_datetime(timeseries_masked.index) timeseries_masked.interpolate(axis=0).plot(figsize=(14,7)) ``` Run the filter with different parameters to assess the effect. ``` from scipy.signal import savgol_filter smooth_ts = pd.DataFrame(timeseries_masked.dropna()) #smooth_ts['smooth_5'] = savgol_filter(smooth_ts.evi_masked, 5, 1) smooth_ts['smooth_5_poly'] = savgol_filter(smooth_ts.evi_masked, 5, 2) #smooth_ts['smooth_9'] = savgol_filter(smooth_ts.evi_masked, 9, 1) smooth_ts['smooth_9_poly'] = savgol_filter(smooth_ts.evi_masked, 9, 2) smooth_ts.plot(figsize=(14,7)) ``` ### Using a UDF for pixel based smoothing The end result should be a phenology map, so we need to apply our smoothing method to the pixel values. We use a 'user defined function' (UDF) to apply custom Python code to a datacube containing time series per pixel. The code for our UDF function is contained in a separate file, and shown below: ``` def get_resource(relative_path): return str(Path( relative_path)) def load_udf(relative_path): import json with open(get_resource(relative_path), 'r+') as f: return f.read() smoothing_udf = load_udf('udf/smooth_savitzky_golay.py') print(smoothing_udf) ``` Now we apply our UDF to the temporal dimension of the datacube. Use the code block below to display the API documentation.
``` ?evi_cube_masked.apply_dimension smoothed_evi = evi_cube_masked.apply_dimension(smoothing_udf,runtime='Python') timeseries_smooth = smoothed_evi.polygonal_mean_timeseries(polygon) write_graph(timeseries_smooth.graph,"timeseries_udf.json") ts_savgol = pd.Series(timeseries_smooth.execute()).apply(pd.Series) ts_savgol.head(10) ts_savgol.dropna(inplace=True) ts_savgol.index = pd.to_datetime(ts_savgol.index) ts_savgol.head(10) all_timeseries['savgol_udf'] =ts_savgol all_timeseries.plot(figsize=(14,7)) all_timeseries.head() ``` This plot shows the result of applying smoothing per pixel. The noise in the timeseries seems to be reduced, but we do still need to validate if this is correct! ### To be continued... ``` #smoothed_evi.filter_temporal(date,date).download("smoothed%s.tiff"%date,format='GTIFF') show_image(smoothed_evi) ```
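The effect of the per-pixel smoothing can be illustrated locally with `scipy.signal.savgol_filter` on a synthetic noisy EVI-like series. This is a sketch only; the actual UDF code lives in `udf/smooth_savitzky_golay.py` and is not reproduced here:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(42)
t = np.linspace(0, np.pi, 40)
clean = 0.2 + 0.5 * np.sin(t) ** 2           # idealized seasonal EVI-like curve
noisy = clean + rng.normal(0, 0.05, t.size)  # cloud-induced noise on the observations

# Window length 9, polynomial order 2 -- one of the parameter pairs explored above.
smooth = savgol_filter(noisy, window_length=9, polyorder=2)

# The smoothed series should be much less jagged than the noisy one;
# compare the total variation of the two signals.
print(np.abs(np.diff(noisy)).sum(), np.abs(np.diff(smooth)).sum())
```

A wider window or lower polynomial order smooths more aggressively but risks flattening the seasonal peak that the phenology metrics depend on.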
# Capstone Project ## Image classifier for the SVHN dataset ### Instructions In this notebook, you will create a neural network that classifies real-world images of digits. You will use concepts from throughout this course in building, training, testing, validating and saving your TensorFlow classifier model. This project is peer-assessed. Within this notebook you will find instructions in each section for how to complete the project. Pay close attention to the instructions, as the peer review will be carried out according to a grading rubric that checks key parts of the project instructions. Feel free to add extra cells into the notebook as required. ### How to submit When you have completed the Capstone project notebook, you will submit a pdf of the notebook for peer review. First ensure that the notebook has been fully executed from beginning to end, and all of the cell outputs are visible. This is important, as the grading rubric depends on the reviewer being able to view the outputs of your notebook. Save the notebook as a pdf (File -> Download as -> PDF via LaTeX). You should then submit this pdf for review. ### Let's get started! We'll start by running some imports, and loading the dataset. For this project you are free to make further imports throughout the notebook as you wish. ``` import tensorflow as tf from scipy.io import loadmat import matplotlib.pyplot as plt %matplotlib inline import numpy as np ``` ![SVHN overview image](data/svhn_examples.jpg) For the capstone project, you will use the [SVHN dataset](http://ufldl.stanford.edu/housenumbers/). This is an image dataset of over 600,000 digit images in all, and is a harder dataset than MNIST as the numbers appear in the context of natural scene images. SVHN is obtained from house numbers in Google Street View images. * Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu and A. Y. Ng. "Reading Digits in Natural Images with Unsupervised Feature Learning".
NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011. Your goal is to develop an end-to-end workflow for building, training, validating, evaluating and saving a neural network that classifies a real-world image into one of ten classes. ``` # Run this cell to load the dataset train = loadmat('data/train_32x32.mat') test = loadmat('data/test_32x32.mat') ``` Both `train` and `test` are dictionaries with keys `X` and `y` for the input images and labels respectively. ## 1. Inspect and preprocess the dataset * Extract the training and testing images and labels separately from the train and test dictionaries loaded for you. * Select a random sample of images and corresponding labels from the dataset (at least 10), and display them in a figure. * Convert the training and test images to grayscale by taking the average across all colour channels for each pixel. _Hint: retain the channel dimension, which will now have size 1._ * Select a random sample of the grayscale images and corresponding labels from the dataset (at least 10), and display them in a figure. 
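The grayscale step in the list above (averaging across colour channels while retaining the channel dimension) can be sketched on dummy data with `keepdims`. This is a sketch; the notebook's own solution instead averages over axis 3 and reshapes afterwards, which is equivalent:

```python
import numpy as np

# Fake batch of 4 RGB images, 32x32 -- stands in for the SVHN arrays.
rgb = np.random.randint(0, 256, size=(4, 32, 32, 3)).astype(np.float64)

# Average over the colour axis; keepdims retains a channel dimension of size 1,
# and dividing by 255 scales pixel values into [0, 1].
gray = rgb.mean(axis=-1, keepdims=True) / 255.0

print(gray.shape)  # (4, 32, 32, 1)
```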
``` x_train = train['X'] y_train = train['y'] x_test = test['X'] y_test = test['y'] x_train.shape, y_train.shape, x_test.shape, y_test.shape x_train = np.transpose(x_train, (3, 0, 1, 2)) x_test = np.transpose(x_test, (3, 0, 1, 2)) x_train.shape, x_test.shape fig=plt.figure(figsize=(12,6)) columns = 5 rows = 2 for id in range(1, columns*rows +1): train_set = True if np.random.randint(2) == 1 else False if train_set: n = np.random.randint(x_train.shape[0]) ax = fig.add_subplot(rows, columns, id) ax.title.set_text(f"Img-{id}, label={y_train[n][0]}") ax.imshow(x_train[n]) else: n = np.random.randint(x_test.shape[0]) ax = fig.add_subplot(rows, columns, id) ax.title.set_text(f"Img-{id}, label={y_test[n][0]}") ax.imshow(x_test[n]) plt.show() x_train = np.mean(x_train, axis=3) / 255 x_test = np.mean(x_test, axis=3) / 255 fig=plt.figure(figsize=(12,6)) columns = 5 rows = 2 for id in range(1, columns*rows +1): train_set = True if np.random.randint(2) == 1 else False if train_set: n = np.random.randint(x_train.shape[0]) ax = fig.add_subplot(rows, columns, id) ax.title.set_text(f"Img-{id}, label={y_train[n][0]}") ax.imshow(x_train[n], cmap='gray') else: n = np.random.randint(x_test.shape[0]) ax = fig.add_subplot(rows, columns, id) ax.title.set_text(f"Img-{id}, label={y_test[n][0]}") ax.imshow(x_test[n], cmap='gray') plt.show() x_train = x_train.reshape(x_train.shape + (1,)) x_test = x_test.reshape(x_test.shape + (1,)) x_train.shape, x_test.shape y_train= y_train.reshape(y_train.shape[0]) y_train= y_train-1 y_train[0:10] y_test= y_test.reshape(y_test.shape[0]) y_test= y_test-1 y_test[0:10] y_train = tf.keras.utils.to_categorical(y_train) y_test = tf.keras.utils.to_categorical(y_test) y_train.shape, y_test.shape ``` ## 2. MLP neural network classifier * Build an MLP classifier model using the Sequential API. Your model should use only Flatten and Dense layers, with the final layer having a 10-way softmax output. * You should design and build the model yourself. 
Feel free to experiment with different MLP architectures. _Hint: to achieve a reasonable accuracy you won't need to use more than 4 or 5 layers._ * Print out the model summary (using the summary() method) * Compile and train the model (we recommend a maximum of 30 epochs), making use of both training and validation sets during the training run. * Your model should track at least one appropriate metric, and use at least two callbacks during training, one of which should be a ModelCheckpoint callback. * As a guide, you should aim to achieve a final categorical cross entropy training loss of less than 1.0 (the validation loss might be higher). * Plot the learning curves for loss vs epoch and accuracy vs epoch for both training and validation sets. * Compute and display the loss and accuracy of the trained model on the test set. ``` from tensorflow import keras from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Flatten, Activation def get_model(input_shape): """ This function should build a Sequential model according to the above specification. Ensure the weights are initialised by providing the input_shape argument in the first layer, given by the function argument. Your function should return the model. 
""" model = Sequential() model.add(keras.Input(shape=input_shape)) model.add(Flatten()) model.add(Dense(units=1024,activation='relu')) model.add(Dense(units=256,activation='relu')) model.add(Dense(units=128,activation='relu')) model.add(Dense(units=64,activation='relu')) model.add(Dense(units=32,activation='relu')) model.add(Dense(10, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), metrics=['accuracy']) return model model = get_model(x_train[0].shape) model.summary() early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=5) checkpoint_path = "checkpoints_best_only/checkpoint" checkpoint_best_only = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path, save_freq='epoch', save_weights_only=True, save_best_only=True, monitor='val_accuracy', verbose=1) callbacks = [checkpoint_best_only, early_stopping] history = model.fit(x_train,y_train, validation_split=0.15, epochs=60, verbose=1, callbacks=callbacks) try: plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) except KeyError: plt.plot(history.history['acc']) plt.plot(history.history['val_acc']) plt.title('Accuracy vs. epochs') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.legend(['Training', 'Validation'], loc='lower right') plt.show() plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('Loss vs. epochs') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Training', 'Validation'], loc='upper right') plt.show() test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0) print("Test loss: {:.3f}\nTest accuracy: {:.2f}%".format(test_loss, 100 * test_acc)) ``` ## 3. CNN neural network classifier * Build a CNN classifier model using the Sequential API. Your model should use the Conv2D, MaxPool2D, BatchNormalization, Flatten, Dense and Dropout layers. The final layer should again have a 10-way softmax output. 
* You should design and build the model yourself. Feel free to experiment with different CNN architectures. _Hint: to achieve a reasonable accuracy you won't need to use more than 2 or 3 convolutional layers and 2 fully connected layers._ * The CNN model should use fewer trainable parameters than your MLP model. * Compile and train the model (we recommend a maximum of 30 epochs), making use of both training and validation sets during the training run. * Your model should track at least one appropriate metric, and use at least two callbacks during training, one of which should be a ModelCheckpoint callback. * You should aim to beat the MLP model performance with fewer parameters! * Plot the learning curves for loss vs epoch and accuracy vs epoch for both training and validation sets. * Compute and display the loss and accuracy of the trained model on the test set. ``` from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, BatchNormalization def get_cnn_model(input_shape): """ This function should build a Sequential model according to the above specification. Ensure the weights are initialised by providing the input_shape argument in the first layer, given by the function argument. Your function should return the model.
""" model = Sequential([ Conv2D(name="conv_1", filters=32, kernel_size=(3,3), activation='relu', padding='SAME', input_shape=input_shape), MaxPooling2D(name="pool_1", pool_size=(2,2)), Conv2D(name="conv_2", filters=16, kernel_size=(3,3), activation='relu', padding='SAME'), MaxPooling2D(name="pool_2", pool_size=(4,4)), Flatten(name="flatten"), Dense(name="dense_1", units=32, activation='relu'), Dense(name="dense_2", units=10, activation='softmax') ]) model.compile(loss='categorical_crossentropy', optimizer="adam", metrics=['accuracy']) return model cnn_model = get_cnn_model(x_train[0].shape) cnn_model.summary() early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=5) cnn_checkpoint_path = "cnn_checkpoints_best_only/checkpoint" cnn_checkpoint_best_only = tf.keras.callbacks.ModelCheckpoint(filepath=cnn_checkpoint_path, save_freq='epoch', save_weights_only=True, save_best_only=True, monitor='val_accuracy', verbose=1) callbacks = [cnn_checkpoint_best_only, early_stopping] cnn_history = cnn_model.fit(x_train, y_train, epochs=15, validation_split=0.15, callbacks=callbacks, verbose=1) try: plt.plot(cnn_history.history['accuracy']) plt.plot(cnn_history.history['val_accuracy']) except KeyError: plt.plot(cnn_.history['acc']) plt.plot(cnn_.history['val_acc']) plt.title('Accuracy vs. epochs') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.legend(['Training', 'Validation'], loc='lower right') plt.show() plt.plot(cnn_history.history['loss']) plt.plot(cnn_history.history['val_loss']) plt.title('Loss vs. epochs') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Training', 'Validation'], loc='upper right') plt.show() cnn_test_loss, cnn_test_acc = cnn_model.evaluate(x_test, y_test, verbose=0) print("Test loss: {:.3f}\nTest accuracy: {:.2f}%".format(cnn_test_loss, 100 * cnn_test_acc)) ``` ## 4. Get model predictions * Load the best weights for the MLP and CNN models that you saved during the training run. 
* Randomly select 5 images and corresponding labels from the test set and display the images with their labels. * Alongside the image and label, show each model’s predictive distribution as a bar chart, and the final model prediction given by the label with maximum probability. ``` model.load_weights(checkpoint_path) cnn_model.load_weights(cnn_checkpoint_path) num_test_images = x_test.shape[0] random_inx = np.random.choice(num_test_images, 5) random_test_images = x_test[random_inx, ...] random_test_labels = y_test[random_inx, ...] predictions = model.predict(random_test_images) cnn_predictions = cnn_model.predict(random_test_images) fig, axes = plt.subplots(5, 2, figsize=(16, 12)) fig.subplots_adjust(hspace=0.4, wspace=-0.2) for i, (cnn_prediction, prediction, image, label) in enumerate(zip(cnn_predictions, predictions, random_test_images, random_test_labels)): axes[i, 0].imshow(np.squeeze(image)) axes[i, 0].get_xaxis().set_visible(False) axes[i, 0].get_yaxis().set_visible(False) axes[i, 0].text(10., -1.5, f'Digit {label}') axes[i, 1].bar(np.arange(len(cnn_prediction))+1, cnn_prediction, color="green") axes[i, 1].bar(np.arange(len(prediction))+1, prediction) axes[i, 1].set_xticks(np.arange(len(prediction))+1) axes[i, 1].set_title(f"Model prediction: {np.argmax(prediction)+1}, CNN Model prediction: {np.argmax(cnn_prediction)+1}") plt.show() ```
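A quick way to see why the CNN above gets away with far fewer trainable parameters than the MLP is to count them by hand with the standard formulas. This is a back-of-the-envelope sketch using the layer sizes from the two models defined above; the exact totals depend on those architecture choices:

```python
def conv2d_params(kh, kw, in_ch, filters):
    # Each filter has a kh x kw x in_ch kernel plus one bias.
    return (kh * kw * in_ch + 1) * filters

def dense_params(n_in, n_out):
    # Weight matrix plus one bias per output unit.
    return n_in * n_out + n_out

# MLP above: flatten 32*32*1 = 1024 inputs -> 1024 -> 256 -> 128 -> 64 -> 32 -> 10
mlp = (dense_params(1024, 1024) + dense_params(1024, 256) + dense_params(256, 128)
       + dense_params(128, 64) + dense_params(64, 32) + dense_params(32, 10))

# CNN above: 3x3 conv (32 filters) -> 2x2 pool -> 3x3 conv (16 filters) -> 4x4 pool.
# Feature map after both pools: 32/2/4 = 4, so flatten yields 4*4*16 = 256 inputs.
cnn = (conv2d_params(3, 3, 1, 32) + conv2d_params(3, 3, 32, 16)
       + dense_params(256, 32) + dense_params(32, 10))

print(mlp, cnn)  # the CNN is roughly two orders of magnitude smaller
```

The first Dense layer dominates the MLP's count because it connects every input pixel to every unit, whereas convolutions share a small kernel across all spatial positions.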
# Keras exercise In this exercise you will create a Keras model by loading a data set, preprocessing input data, building a Sequential Keras model and compiling the model with a training configuration. Afterwards, you train your model on the training data and evaluate it on the test set. To finish this exercise, you will pass the accuracy of your model to the Coursera grader. This notebook is tested in IBM Watson Studio under Python 3.6. ## Data For this exercise we will use the Reuters newswire dataset. This dataset consists of 11,228 newswires from the Reuters news agency. Each wire is encoded as a sequence of word indexes, just as in the IMDB data we encountered in lecture 5 of this series. Moreover, each wire is categorised into one of 46 topics, which will serve as our label. This dataset is available through the Keras API. ## Goal We want to create a multi-layer perceptron (MLP) using Keras which we can train to classify news items into the specified 46 topics. ## Instructions We start by installing and importing everything we need for this exercise: ``` !pip install tensorflow==2.2.0rc0 !pip install --upgrade tensorflow import tensorflow as tf if not tf.__version__ == '2.2.0-rc0': print(tf.__version__) raise ValueError('please upgrade to TensorFlow 2.2.0-rc0, or restart your Kernel (Kernel->Restart & Clear Output)') ``` IMPORTANT! => Please restart the kernel by clicking on "Kernel"->"Restart and Clear Output" and wait until all output disappears. Then your changes will be picked up. As you can see, we use Keras' Sequential model with only two types of layers: Dense and Dropout. We also specify a random seed to make our results reproducible.
Next, we load the Reuters data set:

```
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.utils import to_categorical

seed = 1337
np.random.seed(seed)

from tensorflow.keras.datasets import reuters

max_words = 1000
(x_train, y_train), (x_test, y_test) = reuters.load_data(num_words=max_words,
                                                         test_split=0.2,
                                                         seed=seed)
num_classes = np.max(y_train) + 1  # 46 topics
```

Note that we cap the maximum number of words in a news item to 1000 by specifying the *num_words* keyword. Also, 20% of the data will be test data and we ensure reproducibility by setting our random seed.

Our training features are still simply sequences of indexes and we need to further preprocess them, so that we can plug them into a *Dense* layer. For this we use a *Tokenizer* from Keras' text preprocessing module. This tokenizer will take an index sequence and map it to a vector of length *max_words=1000*. Each of the 1000 vector positions corresponds to one of the words in our newswire corpus. The output of the tokenizer has a 1 at the i-th position of the vector, if the word corresponding to i is in the description of the newswire, and 0 otherwise. Even if this word appears multiple times, we still just put a 1 into our vector, i.e. our tokenizer is binary.

We use this tokenizer to transform both train and test features:

```
from tensorflow.keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer(num_words=max_words)
x_train = tokenizer.sequences_to_matrix(x_train, mode='binary')
x_test = tokenizer.sequences_to_matrix(x_test, mode='binary')
```

## 1. Exercise part: label encoding

Use to_categorical, as we did in the lectures, to transform both *y_train* and *y_test* into one-hot encoded vectors of length *num_classes*:

```
y_train = to_categorical(y_train, num_classes=num_classes)
y_test = to_categorical(y_test, num_classes=num_classes)
```

## 2. Exercise part: model definition

Next, initialise a Keras *Sequential* model and add three layers to it:

Layer: Add a *Dense* layer with input_shape=(max_words,), 512 output units and "relu" activation.
Layer: Add a *Dropout* layer with a dropout rate of 50%.
Layer: Add a *Dense* layer with num_classes output units and "softmax" activation.

```
model = Sequential()  # Instantiate sequential model
model.add(Dense(512, activation='relu', input_shape=(max_words,)))  # Add first layer. Make sure to specify input shape
model.add(Dropout(0.5))  # Add second layer
model.add(Dense(num_classes, activation='softmax'))  # Add third layer
```

## 3. Exercise part: model compilation

As the next step, we need to compile our Keras model with a training configuration. Compile your model with "categorical_crossentropy" as loss function, "adam" as optimizer and specify "accuracy" as evaluation metric.

NOTE: In case you get an error regarding h5py, just restart the kernel and start from scratch.

```
model.compile(loss="categorical_crossentropy", optimizer='adam', metrics=['accuracy'])
```

## 4. Exercise part: model training and evaluation

Next, define the batch_size for training as 32 and train the model for 5 epochs on *x_train* and *y_train* by using the *fit* method of your model. Then calculate the score for your trained model by running *evaluate* on *x_test* and *y_test* with the same batch size as used in *fit*.

```
batch_size = 32
model.fit(x_train, y_train, batch_size=batch_size, epochs=5, validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=1)
```

If you have done everything as specified, in particular set the random seed as we did above, your test accuracy should be around 80%:

```
score[1]
```

Congratulations, now it's time to submit your result to the Coursera grader by executing the following cells (Programming Assignment, Week 2).
We have to install a little library in order to submit to Coursera:

```
!rm -f rklib.py
!wget https://raw.githubusercontent.com/IBM/coursera/master/rklib.py
```

Please provide your email address and obtain a submission token (secret) on the grader's submission page in Coursera, then execute the cell:

```
from rklib import submit
import json

key = "XbAMqtjdEeepUgo7OOVwng"
part = "HCvcp"
email = "rapidhunter250@gmail.com"
token = "dZdkRAY3MAalkTjx"

submit(email, token, key, part, [part], json.dumps(score[1]*100))
```
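As an aside, the binary bag-of-words encoding that `sequences_to_matrix(mode='binary')` produced earlier can be sketched without Keras. This is a minimal re-implementation for illustration, not the library's code: each newswire becomes a vector of length `num_words` with a 1 wherever a word index occurs, no matter how often, and indexes beyond the cap are dropped.

```python
import numpy as np

def to_binary_matrix(sequences, num_words):
    """Sketch of Tokenizer.sequences_to_matrix(mode='binary')."""
    out = np.zeros((len(sequences), num_words))
    for row, seq in enumerate(sequences):
        for idx in seq:
            if idx < num_words:   # out-of-vocabulary indexes are skipped
                out[row, idx] = 1.0
    return out

# Two toy "newswires": the repeated index 4 still yields a single 1,
# and the out-of-range index 999 is ignored.
wires = [[1, 4, 4, 7], [2, 7, 999]]
mat = to_binary_matrix(wires, num_words=10)
```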
# NumPy Array Basics - Multi-dimensional Arrays

```
import sys
print(sys.version)
import numpy as np
print(np.__version__)

npa = np.arange(25)
npa
```

We learned in the last video how to generate arrays; now let's generate multidimensional arrays. These are, as you might guess, arrays with multiple dimensions. We can create these by reshaping arrays. One of the simplest ways is to just reshape an array with the reshape command. That gives us a 5 by 5 array.

```
npa.reshape((5,5))
```

We can also use the zeros command.

```
npa2 = np.zeros((5,5))
npa2
```

To get the size of the array we can use the size attribute.

```
npa2.size
```

To get the shape of the array we can use the shape attribute.

```
npa2.shape
```

To get the number of dimensions we use the ndim attribute.

```
npa2.ndim
```

We can create as many dimensions as we need to; here's 3 dimensions.

```
np.arange(8).reshape(2,2,2)
```

Here's 4 dimensions.

```
np.zeros((4,4,4,4))
np.arange(16).reshape(2,2,2,2)
```

For the most part we'll be working with 2 dimensions.

```
npa2
npa
```

Now we can really see the power of vectorization. Let's create two random 2-dimensional arrays. First I'm going to set the random seed; this basically makes your random number generation reproducible.

```
np.random.seed(10)
```

Let's try some random number generation and then we can perform some matrix comparisons.

```
# note: random_integers is deprecated in newer NumPy releases;
# np.random.randint(1, 11, 25) is the modern equivalent
npa2 = np.random.random_integers(1,10,25).reshape(5,5)
npa2
npa3 = np.random.random_integers(1,10,25).reshape(5,5)
npa3
```

We can compare them element-wise, for example with greater than.

```
npa2 > npa3
```

We can also sum up the values where they are equal.

```
(npa2 == npa3).sum()
```

Or we can count, per column, where one is greater than or equal to the other. We can do that with sum, or we could get the total by summing that array.

```
sum(npa2 >= npa3)
sum(npa2 >= npa3).sum()
```

We can also get the minimums and maximums like we got with single dimensional arrays, or for specific dimensions.
```
npa2.min()
npa2.min(axis=1)
npa2.max(axis=0)
```

There are plenty of other functions that NumPy has. We can transpose with the .T attribute or the transpose method.

```
npa2.T
npa2.transpose()
npa2.T == npa2.transpose()
```

We can also multiply this transposition by itself, for example. This will be an item-by-item multiplication.

```
npa2.T * npa2
```

We can flatten these arrays in several different ways. We can flatten it, which returns a new array that we can change without affecting the original,

```
np2 = npa2.flatten()
np2
np2[0] = 25
npa2
```

or we can ravel it, which ends up returning a view of the original array in a flattened format.

```
r = npa2.ravel()
r
```

With ravel, if we change a value in the raveled array, that will change it in the original n-dimensional array as well.

```
r[0] = 25
npa2
```

Now we can use some other helpful functions like cumsum and cumprod to get the cumulative sums and products. This works for any dimensional array.

```
npa2.cumsum()
npa2.cumprod()
```

That really covers a lot of the basic functions you're going to use or need when working with pandas, but it is worth being aware that NumPy is a very deep library that does a lot more than I've covered here. I wanted to cover these basics because they're going to come up when we're working with pandas. I'm sure this has felt fairly academic at this point, but I can promise you that it provides a valuable foundation for pandas. If there's anything you have questions about, feel free to ask along the side and I can create some appendix videos to help you along.
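The flatten-versus-ravel behaviour walked through above can be condensed into one self-contained check (plain NumPy, separate from the `npa2` examples):

```python
import numpy as np

arr = np.arange(9).reshape(3, 3)

flat = arr.flatten()   # independent copy of the data
rav = arr.ravel()      # view onto the same memory (when the array is contiguous)

flat[0] = 100          # does not touch arr
rav[0] = -1            # writes through to arr

print(arr[0, 0])       # -1: only the ravel write is visible in the original
```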
<a href="https://colab.research.google.com/github/RSid8/SMM4H21/blob/main/Task1a.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

# Importing the Libraries and Models

```
from google.colab import drive
drive.mount('/content/drive')

!pip install fairseq
!git clone https://github.com/pytorch/fairseq
%cd fairseq

%%shell
wget https://dl.fbaipublicfiles.com/fairseq/models/roberta.large.tar.gz
tar -xzvf roberta.large.tar.gz

import torch
from tqdm import tqdm
roberta = torch.hub.load('pytorch/fairseq', 'roberta.large')
!ls

tokens = roberta.encode('Hello world!')
assert tokens.tolist() == [0, 31414, 232, 328, 2]
roberta.decode(tokens)  # 'Hello world!'
```

# Preprocessing the data for training

```
import pandas as pd

df = pd.read_csv("/content/drive/MyDrive/UPENN/Task1a/train.tsv", sep='\t')
df_tweets = pd.read_csv('/content/drive/MyDrive/UPENN/Task1a/tweets.tsv', sep='\t')
df_class = pd.read_csv('/content/drive/MyDrive/UPENN/Task1a/class.tsv', sep='\t')

df.columns = ["tweet_id", "tweet", "label"]
df_valid = pd.merge(df_tweets, df_class, on='tweet_id')
df = pd.concat([df, df_valid], axis=0)
df.head()

df.label.value_counts()
df.tweet_id.nunique()

import numpy as np
# count = df['tweet'].str.split().apply(len).value_counts()
# count.index = count.index.astype(str) + ' words:'
# count.sort_index(inplace=True)
# count
a = np.array(df['tweet'].str.split().apply(len))
print(f'Longest sentence {a.max()}, smallest sentence {a.min()}, average sentence length {a.mean()}')

# something is wrong in example - 11986
index_names = df[df['tweet'].str.split().apply(len) > 35].index
df.drop(index_names, inplace=True)
df.tweet_id.nunique()

df['label'].replace({"NoADE": 0, "ADE": 1}, inplace=True)
df.head()

import os
import random
from glob import glob
import sklearn
from sklearn.model_selection import train_test_split
from collections import Counter
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler

X_train, X_val, Y_train, Y_val = train_test_split(df['tweet'], df['label'], test_size=0.1, random_state=21)
X_train.reset_index(drop=True, inplace=True)
X_val.reset_index(drop=True, inplace=True)
Y_train.reset_index(drop=True, inplace=True)
Y_val.reset_index(drop=True, inplace=True)

# define oversampling strategy
over = RandomOverSampler(sampling_strategy=0.1)
# define undersampling strategy
under = RandomUnderSampler(sampling_strategy=0.5)

X_train = X_train.values.reshape(-1, 1)
X_train, Y_train = over.fit_resample(X_train, Y_train)
X_train, Y_train = under.fit_resample(X_train, Y_train)
print(Counter(Y_train))
print(X_train[0][0])

for split in ['train', 'val']:
    out_fname = 'train' if split == 'train' else 'val'
    f1 = open(os.path.join("/content/drive/MyDrive/UPENN/Task1a", out_fname + '.input0'), 'w')
    f2 = open(os.path.join("/content/drive/MyDrive/UPENN/Task1a", out_fname + '.label'), 'w')
    if split == 'train':
        for i in range(len(X_train)):
            f1.write(str(X_train[i][0]) + '\n')
            f2.write(str(Y_train[i]) + '\n')
    else:
        for i in range(len(X_val)):
            f1.write(X_val[i] + '\n')
            f2.write(str(Y_val[i]) + '\n')
    f1.close()
    f2.close()
```

# Tokenize the data and Finetune Roberta

```
%%shell
wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json'
wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe'

for SPLIT in train val; do
    python -m examples.roberta.multiprocessing_bpe_encoder \
        --encoder-json encoder.json \
        --vocab-bpe vocab.bpe \
        --inputs "/content/drive/MyDrive/UPENN/Task1a/$SPLIT.input0" \
        --outputs "/content/drive/MyDrive/UPENN/Task1a/$SPLIT.input0.bpe" \
        --workers 60 \
        --keep-empty
done

%%shell
wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt'

%%bash
fairseq-preprocess \
    --only-source \
    --trainpref "/content/drive/MyDrive/UPENN/Task1a/train.input0.bpe" \
    --validpref "/content/drive/MyDrive/UPENN/Task1a/val.input0.bpe" \
    --destdir "/content/drive/MyDrive/UPENN/Task1a-bin/input0" \
    --workers 60 \
    --srcdict dict.txt

%%bash
fairseq-preprocess \
    --only-source \
    --trainpref "/content/drive/MyDrive/UPENN/Task1a/train.label" \
    --validpref "/content/drive/MyDrive/UPENN/Task1a/val.label" \
    --destdir "/content/drive/MyDrive/UPENN/Task1a-bin/label" \
    --workers 60

%%shell
TOTAL_NUM_UPDATES=3614   # 10 epochs through UPENN for bsz 32
WARMUP_UPDATES=217       # 6 percent of the number of updates
LR=1e-05                 # Peak LR for polynomial LR scheduler.
HEAD_NAME=task1a_head    # Custom name for the classification head.
NUM_CLASSES=2            # Number of classes for the classification task.
MAX_SENTENCES=8          # Batch size.
ROBERTA_PATH=/content/fairseq/roberta.large/model.pt   #/content/fairseq/checkpoint/checkpoint_best.pt

CUDA_VISIBLE_DEVICES=0 fairseq-train /content/drive/MyDrive/UPENN/Task1a-bin/ \
    --restore-file $ROBERTA_PATH \
    --max-positions 512 \
    --batch-size $MAX_SENTENCES \
    --max-tokens 4400 \
    --task sentence_prediction \
    --reset-optimizer --reset-dataloader --reset-meters \
    --required-batch-size-multiple 1 \
    --init-token 0 --separator-token 2 \
    --arch roberta_large \
    --criterion sentence_prediction \
    --classification-head-name $HEAD_NAME \
    --num-classes $NUM_CLASSES \
    --dropout 0.1 --attention-dropout 0.1 \
    --weight-decay 0.1 --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-06 \
    --clip-norm 0.0 \
    --lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_NUM_UPDATES --warmup-updates $WARMUP_UPDATES \
    --fp16 --fp16-init-scale 4 --threshold-loss-scale 1 --fp16-scale-window 128 \
    --max-epoch 6 \
    --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric \
    --shorten-method "truncate" \
    --find-unused-parameters \
    --update-freq 4

!cp checkpoints/checkpoint_best.pt /content/drive/MyDrive/UPENN/checkpoints/ckpt_6_fin_rob.pt
%ls /content/drive/MyDrive/UPENN/checkpoints/
```

# Testing the Validation Split

```
from fairseq.models.roberta import RobertaModel

roberta = RobertaModel.from_pretrained(
    'checkpoints',
    checkpoint_file='checkpoint_best.pt',
    data_name_or_path='/content/drive/MyDrive/UPENN/Task1a-bin'
)
roberta.eval()  # disable dropout

label_fn = lambda label: roberta.task.label_dictionary.string(
    [label + roberta.task.label_dictionary.nspecial]
)

preds, labels = [], []
for i in tqdm(range(len(X_val)), total=len(X_val)):
    tokens = roberta.encode(X_val[i])
    pred = label_fn(roberta.predict('task1a_head', tokens).argmax().item())
    preds.append(pred)
    labels.append(Y_val[i])

import pandas as pd
from sklearn.metrics import classification_report

df_preds = pd.read_csv("/content/val_final.tsv", sep='\t')
df_label = pd.read_csv("/content/class.tsv", sep='\t')
df = df_preds.merge(df_label, on="tweet_id")
df.columns = ["tweet_id", "preds", "label"]
df['label'].replace({"NoADE": 0, "ADE": 1}, inplace=True)
df['preds'].replace({"NoADE": 0, "ADE": 1}, inplace=True)
df.head()

preds = df["preds"]
labels = df["label"]
report = classification_report(labels, list(map(int, preds)))
print(report)

!rm checkpoints/checkpoint1.pt checkpoints/checkpoint2.pt checkpoints/checkpoint3.pt checkpoints/checkpoint4.pt
```

# Running on the Validation Set

```
df_tweets = pd.read_csv('/content/tweets.tsv', sep='\t')
df_class = pd.read_csv('/content/class.tsv', sep='\t')
df_valid = pd.merge(df_tweets, df_class, on='tweet_id')
df_valid['label'].replace({"NoADE": 0, "ADE": 1}, inplace=True)
df_valid.head()

df_test = pd.read_csv('/content/drive/MyDrive/UPENN/test_tweets.tsv', sep='\t')
index_names = df_test[df_test['tweet'].str.split().apply(len) > 35].index
df_test.drop(index_names, inplace=True)
df_test.tweet_id.nunique()

label_fn = lambda label: roberta.task.label_dictionary.string(
    [label + roberta.task.label_dictionary.nspecial]
)

preds, id = [], []
for index, row in tqdm(df_test.iterrows(), total=len(df_test)):
    tokens = roberta.encode(row["tweet"])
    pred = label_fn(roberta.predict('task1a_head', tokens).argmax().item())
    preds.append(pred)
    id.append(row["tweet_id"])

df_1a = pd.DataFrame(list(zip(id, preds)), columns=['tweet_id', 'label'])
df_1a['label'] = df_1a['label'].replace({0: "NoADE", 1: "ADE"})
df_1a.reset_index(drop=True, inplace=True)
df_1a.head()
df_1a.to_csv("/content/drive/MyDrive/UPENN/1a_sub2.tsv", sep='\t')

from sklearn.metrics import classification_report
report = classification_report(labels, list(map(int, preds)))
print(report)
print(Counter(preds))

df_preds = pd.DataFrame(preds, columns=['Predictions'])
df_id = pd.DataFrame(df_valid['tweet_id'], columns=['tweet_id'])
df_results = pd.concat([df_id, df_preds], join='outer', axis=1)
df_results.head()
df_results.to_csv('/content/val.tsv', sep='\t')
len(df_id)

import pandas as pd
df = pd.read_csv('/content/drive/MyDrive/UPENN/1a_sub2.tsv', sep='\t')
df.drop(["Unnamed: 0"], axis=1, inplace=True)
df.columns = ["tweet_id", "label"]
df['label'].replace({0: "NoADE", 1: "ADE"}, inplace=True)
df.reset_index(drop=True, inplace=True)
df = df[df.label == "ADE"]
df.head()
df.to_csv('/content/test_sub2.tsv', sep='\t', index=False)

import pandas as pd
df_1a = pd.read_csv('/content/test_sub2.tsv', sep='\t')
df_1b = pd.read_csv('/content/1b_new.tsv', sep='\t')
df_1b.drop(["Unnamed: 0"], axis=1, inplace=True)
df_1b.head()

df_1 = df_1a.merge(df_1b, on='tweet_id')
df_1.columns = ["tweet_id", "label", "start", "end", "span"]
df_1["start"] = df_1["start"].astype(int)
df_1["end"] = df_1["end"].astype(int)
df_1.dropna(axis=0, inplace=True)
df_1 = df_1[df_1.label == "ADE"]
df_1.head()
df_1.to_csv('/content/testb_final.tsv', sep='\t', index=False)
```
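The two-stage rebalancing used in the preprocessing above (oversample the minority class to 10% of the majority, then undersample the majority so the minority is half its size) can be sketched with plain NumPy, independent of imblearn. This is an illustrative re-implementation of that sampling-ratio logic, not the library's code, and the data sizes below are made up:

```python
import numpy as np

rng = np.random.default_rng(21)

def rebalance(X, y, over_ratio=0.1, under_ratio=0.5):
    """Sketch of RandomOverSampler(0.1) followed by RandomUnderSampler(0.5):
    grow the minority class to over_ratio * majority (with replacement),
    then shrink the majority so minority/majority == under_ratio."""
    min_idx = np.flatnonzero(y == 1)
    maj_idx = np.flatnonzero(y == 0)

    # Step 1: oversample the minority class if it is below the target ratio.
    target_min = int(over_ratio * len(maj_idx))
    if len(min_idx) < target_min:
        extra = rng.choice(min_idx, target_min - len(min_idx), replace=True)
        min_idx = np.concatenate([min_idx, extra])

    # Step 2: undersample the majority class (without replacement).
    target_maj = int(len(min_idx) / under_ratio)
    maj_idx = rng.choice(maj_idx, min(target_maj, len(maj_idx)), replace=False)

    keep = np.concatenate([min_idx, maj_idx])
    return X[keep], y[keep]

# Heavily imbalanced toy labels, similar in spirit to the ADE classes.
X = np.arange(1000).reshape(-1, 1)
y = np.array([1] * 50 + [0] * 950)
Xb, yb = rebalance(X, y)
```

With 50 positives and 950 negatives, step 1 grows the positives to 95 and step 2 keeps 190 negatives, giving the 1:2 ratio the notebook trains on.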
``` # imports import warnings warnings. filterwarnings('ignore') import numpy from qpsolvers import solve_qp from qiskit.chemistry import FermionicOperator from qiskit.aqua.operators.legacy.op_converter import to_weighted_pauli_operator from qiskit.chemistry.components.variational_forms import UCCSD from qiskit.aqua.components.optimizers import L_BFGS_B from qiskit import Aer from qiskit.quantum_info import Pauli from qiskit.aqua.operators import WeightedPauliOperator from qiskit.aqua.operators.legacy import op_converter from qiskit.aqua.algorithms import VQE from qiskit.aqua import QuantumInstance from tqdm import tqdm from joblib import Parallel, delayed import itertools from qiskit import QuantumRegister, QuantumCircuit, execute, ClassicalRegister from qiskit.circuit.library import U3Gate from qiskit.aqua.components.initial_states import Custom from qiskit.chemistry.components.initial_states import HartreeFock import scipy import matplotlib.pyplot as plt from qiskit.quantum_info import partial_trace, Statevector # 2 site Hubbard model parameters t = 1 #hopping factor U = 2 #coulomb repulsion factor mu = U/2 #chemical potential factor # 2x1 Hubbard Hamiltonian def HubbardHamiltonian(U,t,num_spin_orbitals,num_particles): h1=numpy.zeros((4,4)) h2=numpy.zeros((4,4,4,4)) num_sites=int(num_spin_orbitals // 2) for i in range(num_sites - 1): h1[i, i + 1] = h1[i + 1, i] = -t h1[i + num_sites, i + 1 + num_sites] = h1[i + 1 + num_sites, i + num_sites] = -t h1[i][i] = -mu h1[i + num_sites][i + num_sites] = -mu h1[num_sites - 1][num_sites - 1] = -mu h1[2 * num_sites - 1][2 * num_sites - 1] = -mu h1[0, num_sites - 1] = h1[num_sites - 1, 0] = -t h1[num_sites, 2 * num_sites - 1] = h1[2 * num_sites - 1, num_sites] = -t for i in range(num_sites): h2[i, i , i + num_sites, i + num_sites] = U fermion_op = FermionicOperator(h1 = h1, h2 = h2) # Fermionic Hamiltonian qubit_op = fermion_op.mapping('jordan_wigner') #Qubit Hamiltonian return qubit_op # construct the qubit operator rep. 
of the 2x1 Hubbard model and then the matrix representation qubit_H = HubbardHamiltonian(U = U, t = 1, num_spin_orbitals = 4, num_particles = 2) #constructing matrix rep. in the Fock space H_mat=op_converter.to_matrix_operator(qubit_H).dense_matrix # compute exact ground state energy and wavefunction through diagonalization w,v = numpy.linalg.eigh(H_mat) Eg = w[0] # print("ground state energy-", w[0]) state_g = v[:,0] # print("ground state wvfn.", state_g) def rotated_state(labels,params,state0): U=WeightedPauliOperator([[1,Pauli.from_label('IIII')]]) for i in range(len(labels)): U=WeightedPauliOperator([[numpy.cos(params[i]),Pauli.from_label('IIII')],[-1j*numpy.sin(params[i]),Pauli.from_label(labels[i])]]).multiply(U) U_mat=op_converter.to_matrix_operator(U).dense_matrix rot_state=numpy.dot(U_mat,state0) return rot_state def TimeEvolutionOperator(T): return numpy.dot(numpy.dot(v,numpy.diag(numpy.exp(-1j*w*T))),numpy.conjugate(v.T)) ``` ### $G_{1,2}^{\uparrow,\uparrow}(t>0)=\langle G|e^{iHT}c_{1\uparrow}(0)e^{-iHT}c^{\dagger}_{2\uparrow}(0)|G\rangle$, $c^{\dagger}_{2\uparrow}(0)=IIXZ+iIIYZ$, <br> ### $|\mathcal{E}\rangle = IIXZ|G\rangle= e^{i\frac{\pi}{2}IIXZ}e^{i\frac{2\pi}{27}IZXY}e^{i\frac{\pi}{4}XYII}e^{i\frac{\pi}{4}IIXY}e^{-i\frac{\pi}{2}}|G\rangle$, <br> ### also constructing $IIIY|G\rangle$ ``` # excited state 1 exc_labels = ['IIII','IIXZ'] exc_params = numpy.array([-numpy.pi/2,numpy.pi/2]) exc_state = rotated_state(exc_labels,exc_params,state_g) # excited state 2 exc_labels2 = ['IIII','IIIY'] exc_params2 = [-numpy.pi/2,numpy.pi/2] exc_state2 = rotated_state(exc_labels2,exc_params2,state_g) exc_state2[numpy.abs(exc_state)<1e-5] = 0 # greens function evolution def greens_function(T, dT, T0): steps = int((T-T0)/dT) T_arr = numpy.linspace(T0, T, steps) GF_exact = [] for i in tqdm(range(len(T_arr))): U_T = TimeEvolutionOperator(T_arr[i]) exact_evolved_state = numpy.dot(U_T, exc_state) G1 = numpy.exp(1j*(U-rho)/2.)*numpy.dot(numpy.conjugate(exc_state2), 
exact_evolved_state) GF_exact.append(G1) return GF_exact # parameters for greens function T = 30 dT = 0.1 T0 = 0 steps = int((T-T0)/dT) T_arr = numpy.linspace(T0,T,steps) rho = numpy.sqrt(U**2+16*t*t) G = greens_function(T,dT,T0) # graphing greens function and spectral function # fig, ax = plt.subplots(1,2) # plt.rcParams["figure.figsize"] = (40, 20) # ax[0].tick_params(labelsize=30) # ax[0].plot(T_arr, numpy.real(G), color='black') # """SPECTRAL FUNCTION""" # # Number of sample points # # num_samp1=len(G) # # sample spacing # # ImgGf = numpy.fft.fft(numpy.imag(G)) # # Tf1 = numpy.linspace(0, 40, num_samp1//2) # # ax[1].set_yscale('log') # # ax[1].tick_params(labelsize=20) # # ax[1].plot(Tf1, 2.0/num_samp1 * numpy.abs(ImgGf[:num_samp1//2])/numpy.pi, color='black', linestyle='-') # # ax[1].plot(-Tf1, 2.0/num_samp1 * numpy.abs(ImgGf[:num_samp1//2])/numpy.pi, color='black', linestyle='-') # # ax[1].plot(Tf1, 2.0/num_samp1 * numpy.abs(ImgGf[:num_samp1//2])/numpy.pi, color='black', linestyle='-') # # ax[1].plot(-Tf1, 2.0/num_samp1 * numpy.abs(ImgGf[:num_samp1//2])/numpy.pi, color='black', linestyle='-') # # ax[1].plot(T_arr,numpy.imag(G),linestyle='-') # plt.show() # generators and angles for constructing adaptive ansatz for the 2x1 Hubbard model at U=2 t=1 labels=['IIXY', 'XYII', 'IZXY'] # U = 2 params = [-0.7853980948120887, -0.7853983093282092, 0.23182381954801887] # U = 3 # params = [0.7853959259806095, 0.7853996767775284, -1.2490064682759752] #circuit initialization init_circ = QuantumCircuit(2*2) init_circ.x(0) init_circ.x(2) init_state_circuit=Custom(4, circuit = init_circ) init_state = init_state_circuit #HartreeFock(num_spin_orbitals,num_particles=4,qubit_mapping='jordan_wigner',two_qubit_reduction=False) var_form_base = UCCSD(4,num_particles=2, initial_state=init_state,qubit_mapping='jordan_wigner',two_qubit_reduction=False) backend = Aer.get_backend('statevector_simulator') optimizer = L_BFGS_B() #adaptive circuit construction 
var_form_base.manage_hopping_operators() circ0 = var_form_base.construct_circuit(parameters = []) state0 = execute(circ0,backend).result().get_statevector() state0[numpy.abs(state0)<1e-5] = 0 adapt_state = rotated_state(labels, params, state0) # checking inner product between numerical and exact ground state print("overlap between analytic and numerical ground state is-",numpy.dot(state_g,adapt_state)) # confirming exact energy # check expectation value of the Hamiltonian with respect to adaptive ansatz def expectation_op(Op,state): return numpy.dot(numpy.dot(state,Op),numpy.conjugate(state)) E_adapt = expectation_op(H_mat,adapt_state) # print("exact energy-",Eg) # print("Energy from adaptive ansatz-",E_adapt) # print("convergence error", E_adapt-Eg) # constructing the excited state ansatz exc_labels = ['IIII','IIXZ'] exc_params = numpy.array([-numpy.pi/2,numpy.pi/2]) exc_state = rotated_state(exc_labels,exc_params,adapt_state) exc_state[numpy.abs(exc_state)<1e-5] = 0 # exact excited state exact_exc_state=rotated_state(exc_labels,exc_params,state_g) #checking inner product between numerical and analytic state print("overlap between analytic and numerical exc. 
state is-",numpy.dot(numpy.conjugate(exact_exc_state),exc_state)) def M(p,q,vqs_params,ref_state): thetas=numpy.array(vqs_params) shift_1=numpy.array([0]*(p)+[numpy.pi/2]+[0]*(len(vqs_params)-p-1)) shift_2=numpy.array([0]*(q)+[numpy.pi/2]+[0]*(len(vqs_params)-q-1)) state_1=rotated_state(vqs_generators,vqs_params+shift_1,ref_state) state_2=rotated_state(vqs_generators,vqs_params+shift_2,ref_state) M_arr=numpy.real(numpy.dot(numpy.conjugate(state_1),state_2)) return M_arr def V(p,vqs_params,ref_state): thetas=numpy.array(vqs_params) shift_1=numpy.array([0]*(p)+[numpy.pi/2]+[0]*(len(vqs_params)-p-1)) state_1=rotated_state(vqs_generators,vqs_params+shift_1,ref_state) state=rotated_state(vqs_generators,vqs_params,ref_state) V_arr=numpy.imag(numpy.dot(numpy.dot(numpy.conjugate(state_1),H_mat),state)) return V_arr ``` # Alex stuff ``` # basic setup import numpy as np import copy PAULI_X = np.array([[0,1],[1,0]], dtype='complex128') PAULI_Y = np.array([[0,-1j],[1j,0]], dtype='complex128') PAULI_Z = np.array([[1,0],[0,-1]], dtype='complex128') IDENTITY = np.eye(2, dtype='complex128') def pauli_string_to_matrix(pauli_string): return Pauli(pauli_string).to_matrix() def pauli_string_exp_to_matrix(pauli_string, param): return expm(-1j * param * Pauli(pauli_string).to_matrix()) backend = Aer.get_backend('statevector_simulator') qasm_backend = Aer.get_backend('qasm_simulator') # circuit creation def rotate_state(pauli_string, param, circuit): ancilla_boolean = (1 if circuit.num_qubits == 5 else 0) if pauli_string == 'IIII': gate = 1 for j in range(len(pauli_string)): gate = np.kron(gate, IDENTITY) gate *= -1j * np.sin(param) gate += np.cos(param) * np.eye(16) qubits_to_act_on = [1,2,3,4] if ancilla_boolean else [0,1,2,3] circuit.unitary(gate, qubits_to_act_on, label=pauli_string) else: qubits_to_act_on = [] gate = 1 for j in range(len(pauli_string)): if pauli_string[j] == 'X': gate = np.kron(gate, PAULI_X) elif pauli_string[j] == 'Y': gate = np.kron(gate, PAULI_Y) elif 
pauli_string[j] == 'Z': gate = np.kron(gate, PAULI_Z) if pauli_string[j] != 'I': qubits_to_act_on.append(np.abs(j - 3) + (0,1)[ancilla_boolean]) gate *= (-1j * np.sin(param)) gate += np.cos(param) * np.eye(2**len(qubits_to_act_on)) qubits_to_act_on.reverse() circuit.unitary(gate, qubits_to_act_on, label = pauli_string) circuit.barrier() def create_initial_state(): circuit = QuantumCircuit(4) circuit.x(0) circuit.x(2) circuit.barrier() return circuit def create_adapt_ground_state(): labels = ['IIXY', 'XYII', 'IZXY'] params = [-0.7853980948120887, -0.7853983093282092, 0.23182381954801887] circuit = create_initial_state() for i in range(len(labels)): rotate_state(labels[i], params[i], circuit) return circuit def create_excited_state(): labels=['IIXY', 'XYII', 'IZXY', 'IIII', 'IIXZ'] params=[-0.7853980948120887, -0.7853983093282092, 0.23182381954801887,numpy.pi/2,-numpy.pi/2.] circuit = create_initial_state() for i in range(len(labels)): rotate_state(labels[i], params[i], circuit) circuit.barrier() return circuit def create_excited_state2(): labels = ['IIXY', 'XYII', 'IZXY', 'IIII', 'IIIY'] params = [-0.7853980948120887, -0.7853983093282092, 0.23182381954801887, -numpy.pi/2, numpy.pi/2] circuit = create_initial_state() for i in range(len(labels)): rotate_state(labels[i], params[i], circuit) return circuit excited_state = execute(create_excited_state(), backend).result().get_statevector() excited_state2 = execute(create_excited_state2(), backend).result().get_statevector() def create_circuit_ancilla(ancilla_boolean, state): circuit = QuantumCircuit(4 + (0,1)[ancilla_boolean]) circuit.x(0 + (0,1)[ancilla_boolean]) circuit.x(2 + (0,1)[ancilla_boolean]) labels = ['IIXY', 'XYII', 'IZXY'] params = [-0.7853980948120887, -0.7853983093282092, 0.23182381954801887] if state == 'state2': labels.extend(['IIII', 'IIXZ']) params.extend([numpy.pi/2,-numpy.pi/2.]) for i in range(len(labels)): rotate_state(labels[i], params[i], circuit) circuit.barrier() return circuit def 
controlled_rotate_state(pauli_string, param, circuit): if pauli_string == 'IIII': return num_qubits = 4 #the ancilla does not count qubits_to_act_on = [] gate = 1 for j in range(len(pauli_string)): if pauli_string[j] == 'X': gate = np.kron(gate, PAULI_X) elif pauli_string[j] == 'Y': gate = np.kron(gate, PAULI_Y) elif pauli_string[j] == 'Z': gate = np.kron(gate, PAULI_Z) if pauli_string[j] != 'I': qubits_to_act_on.append(np.abs(j - num_qubits + 1) + 1) qubits_to_act_on.reverse() #convert unitary to gate through a temporary circuit temp_circuit = QuantumCircuit(2) temp_circuit.unitary(gate, [0, 1]) #we only have controlled 2-qubit unitaries: IIXX, XXII, IIYY, YYII, ZIZI, IZIZ controlled_gate = temp_circuit.to_gate(label = 'Controlled ' + pauli_string).control(1) qubits_to_act_on.insert(0, 0) #insert ancilla bit to front of list circuit.append(controlled_gate, qubits_to_act_on) def measure_ancilla(circuit, shots): classical_register = ClassicalRegister(1, 'classical_reg') circuit.add_register(classical_register) circuit.measure(0, classical_register[0]) result = execute(circuit, qasm_backend, shots = shots).result() counts = result.get_counts(circuit) if counts.get('0') != None: return 2 * (result.get_counts(circuit)['0'] / shots) - 1 else: return -1 def measure_ancilla_statevector(circuit): full_statevector = Statevector(circuit) partial_density_matrix = partial_trace(full_statevector, [1, 2, 3, 4]) partial_statevector = np.diagonal(partial_density_matrix) return ((2 * partial_statevector[0]) - 1).real def calculate_m_statevector(p, q, vqs_generators, vqs_params, state): circuit = create_circuit_ancilla(True, state) circuit.h(0) circuit.x(0) circuit.barrier() for i in range(0, p): rotate_state(vqs_generators[i], vqs_params[i], circuit) circuit.barrier() controlled_rotate_state(vqs_generators[p], vqs_params[p], circuit) circuit.barrier() for i in range(p, q): rotate_state(vqs_generators[i], vqs_params[i], circuit) circuit.barrier() circuit.x(0) 
controlled_rotate_state(vqs_generators[q], vqs_params[q], circuit) circuit.h(0) circuit.barrier() return measure_ancilla_statevector(circuit) def calculate_v_statevector(p, vqs_generators, vqs_params, state): n_theta = len(vqs_params) circuit = create_circuit_ancilla(True, state) circuit.h(0) circuit.x(0) for i in range(0, p): rotate_state(vqs_generators[i], vqs_params[i], circuit) circuit.barrier() controlled_rotate_state(vqs_generators[p], vqs_params[p], circuit) circuit.barrier() for i in range(p, n_theta): rotate_state(vqs_generators[i], vqs_params[i], circuit) circuit.barrier() circuit.x(0) coeffs = [0.5, 0.5, -0.5, -0.5, -0.5, -0.5, -1.0] measurements = [] for i in range(len(coeffs)): single_h_circuit = copy.deepcopy(circuit) controlled_rotate_state(vqs_generators[i], coeffs[i], single_h_circuit) single_h_circuit.h(0) measurements.append(measure_ancilla_statevector(single_h_circuit)) results = 0 for i in range(len(coeffs)): results += measurements[i] * coeffs[i] return results def calculate_m_shots(p, q, vqs_generators, vqs_params, shots, state): circuit = create_circuit_ancilla(True, state) #Creates |E> circuit.h(0) circuit.x(0) circuit.barrier() for i in range(0, p): rotate_state(vqs_generators[i], vqs_params[i], circuit) circuit.barrier() controlled_rotate_state(vqs_generators[p], vqs_params[p], circuit) circuit.barrier() for i in range(p, q): rotate_state(vqs_generators[i], vqs_params[i], circuit) circuit.barrier() circuit.x(0) controlled_rotate_state(vqs_generators[q], vqs_params[q], circuit) circuit.h(0) circuit.barrier() return measure_ancilla(circuit, shots) def calculate_v_shots(p, vqs_generators, vqs_params, shots, state): n_theta = len(vqs_params) circuit = create_circuit_ancilla(True, state) circuit.h(0) circuit.x(0) for i in range(0, p): rotate_state(vqs_generators[i], vqs_params[i], circuit) circuit.barrier() controlled_rotate_state(vqs_generators[p], vqs_params[p], circuit) circuit.barrier() for i in range(p, n_theta): 
rotate_state(vqs_generators[i], vqs_params[i], circuit) circuit.barrier() circuit.x(0) coeffs = [0.5, 0.5, -0.5, -0.5, -0.5, -0.5, -1.0] measurements = [] for i in range(len(coeffs)): single_h_circuit = copy.deepcopy(circuit) controlled_rotate_state(vqs_generators[i], coeffs[i], single_h_circuit) single_h_circuit.h(0) measurements.append(measure_ancilla(single_h_circuit, shots)) results = 0 for i in range(len(coeffs)): results += measurements[i] * coeffs[i] return results def Cost(M,V): #f=1/2x^TPx+q^{T}x #Gx<=h #Ax=b # alpha = 0 alpha=1e-3 P=M.T@M+alpha*numpy.eye(len(V)) q=M.T@V thetaDot=solve_qp(P,-q) return thetaDot def McEvolve(vqs_params_init,T,dT,T0,exc_state, way): steps=int((T-T0)/dT) T_arr=numpy.linspace(T0,T,steps) vqs_params=vqs_params_init vqs_dot_hist=[] vqs_hist=[vqs_params] FidelityArr=[] GF_exact=[] GF_sim=[] U = 2 for i in tqdm(range(len(T_arr))): #evaluations at time step t_i U_T=TimeEvolutionOperator(T_arr[i]) #exact state exact_evolved_state=numpy.dot(U_T,exc_state) vqs_state=rotated_state(vqs_generators,vqs_hist[-1], exc_state) G1=np.exp(1j*(U-rho)/2)*numpy.dot(np.conj(exc_state2), exact_evolved_state).real GF_exact.append(G1) G2=np.exp(1j*(U-rho)/2)*numpy.dot(np.conj(exc_state2), vqs_state).real GF_sim.append(G2) # print("Green functions",G1,G2) FidelityArr.append(numpy.abs(numpy.dot(vqs_state,numpy.conjugate(exact_evolved_state)))**2) print("Fidelity",FidelityArr[-1]) arr = [(j,k) for j in range(len(vqs_params)) for k in range(len(vqs_params)) if j<=k] M_mat = numpy.zeros((len(vqs_params),len(vqs_params))) #constructing McLachlan if way == 'Anirban': M_elems = Parallel(n_jobs=-1,verbose=0)(delayed(M)(arr[i][0],arr[i][1],vqs_params,exc_state) for i in range(len(arr))) V_vec=numpy.array([V(p,vqs_params,exc_state) for p in range(len(vqs_params))]) # Statevector way elif way == 'statevector': M_elems = Parallel(n_jobs=-1)(delayed(calculate_m_statevector)(arr[i][0], arr[i][1], vqs_generators, vqs_params, 'state2') for i in range(len(arr))) V_vec = 
Parallel(n_jobs=-1)(delayed(calculate_v_statevector)(p, vqs_generators, vqs_params, 'state2') for p in range(len(vqs_params))) # Shots way elif way == 'shots': shots = 2**15 M_elems = Parallel(n_jobs=-1)(delayed(calculate_m_shots)(arr[i][0], arr[i][1], vqs_generators, vqs_params, shots, 'state2') for i in range(len(arr))) V_vec = Parallel(n_jobs=-1)(delayed(calculate_v_shots)(p, vqs_generators, vqs_params, shots, 'state2') for p in range(len(vqs_params))) for p in range(len(arr)): M_mat[arr[p][0]][arr[p][1]] = M_mat[arr[p][1]][arr[p][0]] = M_elems[p] vqs_params_dot=Cost(M_mat,V_vec)#numpy.linalg.lstsq(M_mat,V_vec,rcond=None)[0] vqs_dot_hist.append(vqs_params_dot) # def Error(vqs_params_dot): # quant=numpy.sum((M_mat@vqs_params_dot-V_vec)@(M_mat@vqs_params_dot-V_vec).T) # print(quant) # return quant # error=Error(vqs_params_dot) # print("Initial Error after least squares-", error) #Euler #vqs_params=vqs_params+vqs_dot_hist[-1]*dT #Adams-Bashforth if i>0: vqs_params=vqs_params+1.5*dT*vqs_dot_hist[-1]-0.5*dT*vqs_dot_hist[-2] else: vqs_params=vqs_params+vqs_dot_hist[-1]*dT vqs_hist.append(vqs_params) return vqs_hist,FidelityArr,GF_sim,GF_exact # Single optimization T=5 dT=0.1 nd=2 vqs_generators=['ZIZI','IZIZ','IIXX','IIYY','XXII','YYII','IIII']*nd vqs_params=numpy.zeros(len(vqs_generators)) # vqs_params_history,FidelityArr,GF_sim,GF_exact=McEvolve(vqs_params,T,dT,0,exc_state, 'statevector') # fig, ax = plt.subplots(dpi=160) # ax.set_title('t=30, dt=0.1, U=2') # T_arr # ax.plot(GF_sim, label = 'VQS - statevector', color = 'blue') # ax.plot(GF_exact, label = 'Exact', color = 'red') # plt.legend() # plt.show() # # Spectral function plot # G_sim = GF_sim # G_exact = GF_exact # # Number of sample points # num_samp=len(G_sim) # # sample spacing # ImgG_1f = numpy.fft.fft(numpy.real(G_sim)) # ImgG_2f = numpy.fft.fft(numpy.real(G_exact)) # plt.rcParams["figure.figsize"] = (20,10) # Tf = numpy.linspace(0.0, 1//(2.0*dT), num_samp//2) # fig, ax = plt.subplots() # 
ax.set_xlabel(r'$\omega$',fontsize=20) # ax.tick_params(labelsize=20) # ax.set_yscale('log') # ax.plot(Tf, 2.0/num_samp * numpy.abs(ImgG_1f[:num_samp//2])/numpy.pi,marker='s',color='b',linestyle='',label=r'$Im G_{VHS - statevector}^{1,2}(1,2,\omega)$') # ax.plot(-Tf, 2.0/num_samp * numpy.abs(ImgG_1f[:num_samp//2])/numpy.pi,marker='s',color='b',linestyle='') # ax.plot(Tf, 2.0/num_samp * numpy.abs(ImgG_2f[:num_samp//2])/numpy.pi,color='r',linestyle='-',label=r'$Im G_{exact}^{1,2}(1,2,\omega)$') # ax.plot(-Tf, 2.0/num_samp * numpy.abs(ImgG_2f[:num_samp//2])/numpy.pi,color='r',linestyle='-') # plt.legend(fontsize=15) # plt.show() ``` Find a circuit rep of U(theta) such that $U(\theta)|G\rangle \approx e^{-iHT}|G\rangle$ and $U(\theta)|E\rangle \approx e^{-iHT}|E\rangle$.<br> $U(\theta)|G\rangle \approx e^{-iHT}|G\rangle\to M_{1}\dot{\theta}=V_{1}$, $U(\theta)|G\rangle \approx e^{-iHT}|E\rangle\to M_{2}\dot{\theta}=V_{2}$<br> Map this to a quadratic optimization problem<br> $(\dot{\boldsymbol{\theta}}^{T}M_{1}^{T}-V_{1}^{T})(M_{1}\dot{\boldsymbol{\theta}}-V_{1})=\dot{\boldsymbol{\theta}}^{T}M_{1}^{T}M_{1}\dot{\boldsymbol{\theta}}-\dot{\boldsymbol{\theta}}^{T}M_{1}^{T}V_{1}-V^{T}_{1}M_{1}\dot{\boldsymbol{\theta}}+V_{1}^{T}V_{1}\propto \frac{1}{2}\dot{\boldsymbol{\theta}}^{T}M_{1}^{T}M_{1}\dot{\boldsymbol{\theta}}-(M_{1}^{T}V_{1})^{T}\dot{\boldsymbol{\theta}}$<br> $(\dot{\boldsymbol{\theta}}^{T}M_{1}^{T}-V_{1}^{T})(M_{1}\dot{\boldsymbol{\theta}}-V_{1})+(\dot{\boldsymbol{\theta}}^{T}M_{2}^{T}-V_{2}^{T})(M_{2}\dot{\boldsymbol{\theta}}-V_{2})\propto\frac{1}{2}\dot{\boldsymbol{\theta}}^{T}(M_{1}^{T}M_{1}+M_{2}^{T}M_{2})\dot{\boldsymbol{\theta}}-\left[(M_{1}^{T}V_{1})^{T}+(M_{2}^{T}V_{2})^{T}\right]\dot{\boldsymbol{\theta}}$<br> Cost Function<br> $Cost=\frac{1}{2}\dot{\boldsymbol{\theta}}^{T}(M_{1}^{T}M_{1}+M_{2}^{T}M_{2})\dot{\boldsymbol{\theta}}-\left[(M_{1}^{T}V_{1})^{T}+(M_{2}^{T}V_{2})^{T}\right]\dot{\boldsymbol{\theta}}$ <br> $P=(M_{1}^{T}M_{1}+M_{2}^{T}M_{2})+\alpha$, 
$\alpha= $Tikhonov Regularization<br> $q=M^{T}V$<br> $f=1/2x^TPx+q^{T}x$, $x=\dot{\theta}$ ``` def JointCost(M1, V1, M2, V2, alpha): #f=1/2 {x^T} Px + q^{T}x #Gx<=h #Ax=b P = M1.T@M1 + M2.T@M2 + alpha * np.eye(len(V1)) q = M1.T@V1 + M2.T@V2 # thetaDot = numpy.linalg.lstsq(M1, V1, rcond=None)[0] thetaDot = solve_qp(P, -q) return thetaDot error_list = [] residual_list = [] def McEvolveJointOptimization(vqs_params_init, T, dT, T0, state1,state2, way, alpha): steps = int((T-T0)/dT) + 1 T_arr = numpy.linspace(T0, T, steps) vqs_params = vqs_params_init vqs_dot_hist = [] vqs_hist = [vqs_params] FidelityArr = [] for i in tqdm(range(len(T_arr))): # compute exact state U_T = TimeEvolutionOperator(T_arr[i]) exact_evolved_state1 = U_T@state1 exact_evolved_state2 = U_T@state2 # compute simulated state vqs_state1 = rotated_state(vqs_generators,vqs_hist[-1], state1) vqs_state2 = rotated_state(vqs_generators,vqs_hist[-1], state2) # compute fidelity FidelityArr.append([np.abs(vqs_state1@numpy.conjugate(exact_evolved_state1))**2, np.abs(vqs_state2@numpy.conjugate(exact_evolved_state2))**2]) print("Fidelity",FidelityArr[-1]) #constructing McLachlan arr = [(j,k) for j in range(len(vqs_params)) for k in range(len(vqs_params)) if j <= k] M1 = numpy.zeros((len(vqs_params),len(vqs_params))) M2 = numpy.zeros((len(vqs_params),len(vqs_params))) # Anirban's way if way == 'Anirban': M_elems1 = Parallel(n_jobs=-1,verbose=0)(delayed(M)(arr[i][0],arr[i][1],vqs_params,state1) for i in range(len(arr))) M_elems2 = Parallel(n_jobs=-1,verbose=0)(delayed(M)(arr[i][0],arr[i][1],vqs_params,state2) for i in range(len(arr))) V1 = numpy.array([V(p,vqs_params,state1) for p in range(len(vqs_params))]) V2 = numpy.array([V(p,vqs_params,state2) for p in range(len(vqs_params))]) # Statevector way if way == 'statevector': M_elems1 = Parallel(n_jobs=-1)(delayed(calculate_m_statevector)(arr[i][0], arr[i][1], vqs_generators, vqs_params, 'state1') for i in range(len(arr))) M_elems2 = 
Parallel(n_jobs=-1)(delayed(calculate_m_statevector)(arr[i][0], arr[i][1], vqs_generators, vqs_params, 'state2') for i in range(len(arr))) V1 = Parallel(n_jobs=-1)(delayed(calculate_v_statevector)(p, vqs_generators, vqs_params, 'state1') for p in range(len(vqs_params))) V2 = Parallel(n_jobs=-1)(delayed(calculate_v_statevector)(p, vqs_generators, vqs_params, 'state2') for p in range(len(vqs_params))) # Shots way if way == 'shots': shots = 2**17 M_elems1 = Parallel(n_jobs=-1)(delayed(calculate_m_shots)(arr[i][0], arr[i][1], vqs_generators, vqs_params, shots, 'state1') for i in range(len(arr))) M_elems2 = Parallel(n_jobs=-1)(delayed(calculate_m_shots)(arr[i][0], arr[i][1], vqs_generators, vqs_params, shots, 'state2') for i in range(len(arr))) V1 = Parallel(n_jobs=-1)(delayed(calculate_v_shots)(p, vqs_generators, vqs_params, shots, 'state1') for p in range(len(vqs_params))) V2 = Parallel(n_jobs=-1)(delayed(calculate_v_shots)(p, vqs_generators, vqs_params, shots, 'state2') for p in range(len(vqs_params))) for p in range(len(arr)): M1[arr[p][0]][arr[p][1]] = M1[arr[p][1]][arr[p][0]] = M_elems1[p] M2[arr[p][0]][arr[p][1]] = M2[arr[p][1]][arr[p][0]] = M_elems2[p] vqs_params_dot = JointCost(np.array(M1), np.array(V1), np.array(M2), np.array(V2), alpha) vqs_dot_hist.append(vqs_params_dot) #Euler # vqs_params += vqs_dot_hist[-1]*dT #Complete Adams-Bashforth if i == 0: vqs_params = vqs_params + vqs_dot_hist[-1]*dT else: vqs_params = vqs_params + (3/2)*dT*vqs_dot_hist[-1]-(1/2)*dT*vqs_dot_hist[-2] vqs_hist.append(vqs_params) return vqs_hist,FidelityArr fidelities = [] alpha = 0.001 depth = 2 vqs_generators=['ZIZI','IZIZ','IIXX','IIYY','XXII','YYII','IIII'] * depth vqs_params=numpy.zeros(len(vqs_generators)) T = 5 dT = 0.1 outside_vqs_hist, fidelity_list = McEvolveJointOptimization(vqs_params, T, dT, 0, adapt_state, exc_state, 'statevector', alpha) fidelities.append(fidelity_list) colors = plt.cm.cividis(np.linspace(0, 1, len(fidelities))) colors = np.flip(colors, axis=0) fig, 
ax = plt.subplots(dpi=160)
ax.set_xlabel('Time')
ax.set_ylabel('Fidelity')
ax.set_title("T=5, dt=0.1, depth = 2, averaged fidelity")
for i in range(len(fidelities)):
    ax.plot(np.linspace(0, T, len(fidelities[i])), np.mean(fidelities[i], axis=1), label='alpha = ' + str(alpha), color=colors[i])
plt.legend()
plt.show()
```
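The joint optimization above minimizes $\frac{1}{2}\dot{\theta}^{T}P\dot{\theta}+q^{T}\dot{\theta}$ with $P=M_{1}^{T}M_{1}+M_{2}^{T}M_{2}+\alpha I$ and $q=-(M_{1}^{T}V_{1}+M_{2}^{T}V_{2})$. Since no inequality or equality constraints are imposed, the `solve_qp(P, -q)` call reduces to a plain linear solve, which gives a cheap way to sanity-check it. A minimal sketch with synthetic matrices (sizes and data here are illustrative, not taken from the simulation):

```python
import numpy as np

def joint_theta_dot(M1, V1, M2, V2, alpha=1e-3):
    """Closed-form minimizer of 1/2 x^T P x + q^T x with
    P = M1^T M1 + M2^T M2 + alpha*I and q = -(M1^T V1 + M2^T V2).
    Equivalent to the unconstrained solve_qp(P, -q) call in the notebook."""
    n = M1.shape[1]
    P = M1.T @ M1 + M2.T @ M2 + alpha * np.eye(n)
    rhs = M1.T @ V1 + M2.T @ V2
    return np.linalg.solve(P, rhs)

# Synthetic check: for a consistent stacked system and alpha -> 0,
# the joint least-squares solution is recovered.
rng = np.random.default_rng(0)
M1 = rng.normal(size=(6, 3))
M2 = rng.normal(size=(6, 3))
x_true = np.array([0.5, -1.0, 2.0])
V1 = M1 @ x_true
V2 = M2 @ x_true
x = joint_theta_dot(M1, V1, M2, V2, alpha=1e-10)
print(np.allclose(x, x_true, atol=1e-6))
```

The Tikhonov term `alpha` only matters when `P` is near-singular; with well-conditioned `M1`, `M2` it adds a small bias toward smaller parameter velocities.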
# Malware Analysis & Triage Kit

This notebook performs the initial stages of immediate malware triage.

## How To

Take your malware specimen and drop it into the `dropbox` directory. The notebook will walk you through the stages of initial analysis. At the end of this process, you will have a triage report in the `saved-specimens` directory. This report includes findings from initial triage, including the defanged specimen in a password-protected Zip file and static analysis artifacts.

# Imports and Setup

```
# Imports
from hashlib import *
import sys
import os
from getpass import getpass
from virus_total_apis import PublicApi as VirusTotalPublicApi
import json
from MalwareSample import *
from pprint import pprint
import os.path
from time import sleep
```

### Check Dropbox and Saved-Specimens

```
MalwareSample.check_dir("dropbox")
MalwareSample.check_dir("saved-specimens")

empty = MalwareSample.is_dir_empty("dropbox")
if empty:
    print(r" \\--> " + recc + "Put some samples in the dropbox!")
```

### Enumerate Samples in the Dropbox

```
samples = !ls dropbox/*
for s in samples:
    print(info + "Sample: " + s)
sample_obj = [MalwareSample(s) for s in samples]
```

### Create a Saved Specimen directory for the specimen(s)

```
for obj in sample_obj:
    saved_sample_name = MalwareSample.create_specimen_dirs(obj.sample_name)
    obj.saved_sample_name = saved_sample_name
```

### Defang Sample

```
for obj in sample_obj:
    sample_path = MalwareSample.move_and_defang(obj.sample_name, obj.saved_sample_name)
    obj.sample_path = sample_path
```

---

## File Hashes

### SHA256 Sum

```
for obj in sample_obj:
    hash = MalwareSample.get_sha256sum(obj.sample_path, obj.saved_sample_name)
    obj.sha256sum = hash
    print(info + obj.sample_name + ": " + obj.sha256sum)
```

---

## String Analysis

### StringSifter

StringSifter is a FLARE-developed tool that uses an ML model to rank a binary's strings by relevance to malware analysis.

```
length = int(input(recc + "Input your desired minimum string length [default is 4, 6-8 is recommended] > "))
for obj in sample_obj:
    MalwareSample.pull_strings(length, obj.saved_sample_name, obj.sample_path)
```

## VT Analysis

Submit samples to VirusTotal and generate a malicious confidence level.

```
VT_API_KEY = getpass("Enter VirusTotal API Key (blank if none): ")
if VT_API_KEY:
    vt = VirusTotalPublicApi(VT_API_KEY)
else:
    print(info + "No VT API Key. Skipping...")
```

Note: If there are more than 4 samples in the dropbox, hashes are submitted with a sleep of 16 seconds to remain under the public API rate limit. So hit go, grab a beverage of choice, stretch out and relax. This could be a while depending on how many samples you're submitting.

```
if VT_API_KEY:
    for obj in sample_obj:
        print(info + obj.sample_name + ":")
        print(r" \\--> " + info + "SHA256sum: " + obj.sha256sum)
        res = vt.get_file_report(obj.sha256sum)
        conf = malicious_confidence(res)
        print(r" \\--> " + info + "Confidence level: " + str(conf))
        crit_level = determine_criticality(conf)
        obj.criticality = crit_level
        if len(sample_obj) >= 5:
            sleep(16)
else:
    print(info + "No VT API Key. Skipping...")
```

## Zip and Password Protect

```
for obj in sample_obj:
    zip_file = MalwareSample.zip_and_password_protect(obj.sample_path, obj.saved_sample_name)
    MalwareSample.delete_unzipped_sample(obj.sample_path, zip_file)
```

---

### Debug Object Vars

```
for obj in sample_obj:
    pprint(vars(obj))
```
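`MalwareSample.get_sha256sum` is defined elsewhere in the kit. As a sketch of what such a helper typically does — the function name and chunk size here are assumptions, not the kit's actual implementation — a specimen can be hashed in fixed-size chunks so a large file never has to sit in memory:

```python
import hashlib
import os
import tempfile

def sha256sum(path, chunk_size=1 << 16):
    """Hypothetical stand-in for get_sha256sum: hash a file in 64 KiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Quick self-check against hashing the same bytes directly.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"placeholder specimen content")
    tmp_path = tmp.name
try:
    assert sha256sum(tmp_path) == hashlib.sha256(b"placeholder specimen content").hexdigest()
finally:
    os.unlink(tmp_path)
```

Chunked reading matters here because triage samples can be arbitrarily large, and the hash is identical regardless of chunk size.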
```
import intake
import xarray as xr
import os
import pandas as pd
import numpy as np
import zarr
import rhg_compute_tools.kubernetes as rhgk
import warnings
warnings.filterwarnings("ignore")

write_direc = '/gcs/rhg-data/climate/downscaled/workdir'

client, cluster = rhgk.get_standard_cluster()
cluster
```

get some CMIP6 data from GCS

here we're going to get daily `tmax` from `IPSL` for historical and SSP370 runs. The ensemble member `r1i1p1f1` isn't available in GCS so we're using `r4i1p1f1` instead. Note that the `activity_id` for historical runs is `CMIP`, not `ScenarioMIP` as it is for the ssp-rcp scenarios.

```
activity_id = 'ScenarioMIP'
experiment_id = 'ssp370'
table_id = 'day'
variable_id = 'tasmax'
source_id = 'IPSL-CM6A-LR'
institution_id = 'IPSL'
member_id = 'r4i1p1f1'
```

first we'll take a look at what our options are

```
df_cmip6 = pd.read_csv('https://cmip6.storage.googleapis.com/cmip6-zarr-consolidated-stores-noQC.csv', dtype={'version': 'unicode'})
len(df_cmip6)

df_subset_future = df_cmip6.loc[(df_cmip6['activity_id'] == activity_id) &
                                (df_cmip6['experiment_id'] == experiment_id) &
                                (df_cmip6['table_id'] == table_id) &
                                (df_cmip6['variable_id'] == variable_id) &
                                (df_cmip6['source_id'] == source_id) &
                                (df_cmip6['member_id'] == member_id)]
df_subset_future

df_subset_hist = df_cmip6.loc[(df_cmip6['experiment_id'] == 'historical') &
                              (df_cmip6['table_id'] == table_id) &
                              (df_cmip6['variable_id'] == variable_id) &
                              (df_cmip6['source_id'] == source_id) &
                              (df_cmip6['member_id'] == member_id)]
df_subset_hist
```

now let's actually pull the data

```
# search the cmip6 catalog
col = intake.open_esm_datastore("https://storage.googleapis.com/cmip6/pangeo-cmip6.json")
cat = col.search(activity_id=['CMIP', activity_id],
                 experiment_id=['historical', experiment_id],
                 table_id=table_id,
                 variable_id=variable_id,
                 source_id=source_id,
                 member_id=member_id)

ds_model = {}
ds_model['historical'] = cat['CMIP.IPSL.IPSL-CM6A-LR.historical.day.gr'].to_dask().isel(member_id=0).squeeze(drop=True).drop(['member_id', 'height', 'time_bounds'])
ds_model['ssp370'] = cat['ScenarioMIP.IPSL.IPSL-CM6A-LR.ssp370.day.gr'].to_dask().isel(member_id=0).squeeze(drop=True).drop(['member_id', 'height', 'time_bounds'])
ds_model['historical']
```

rechunk in space for global bias correction

```
chunks = {'lat': 10, 'lon': 10, 'time': -1}

ds_model['historical'] = ds_model['historical'].chunk(chunks)
ds_model['historical'] = ds_model['historical'].persist()
ds_model['historical'] = ds_model['historical'].load()

ds_model['ssp370'] = ds_model['ssp370'].chunk(chunks)
ds_model['ssp370'] = ds_model['ssp370'].persist()

ds_model['historical'].to_zarr(os.path.join(write_direc, 'cmip6_test_model_historical.zarr'), consolidated=True, mode='w')

ds_test = xr.open_zarr(os.path.join(write_direc, 'cmip6_test_model_historical.zarr'))
ds_test
ds_test.info

ds_model['ssp370'].to_netcdf(os.path.join(write_direc, 'cmip6_test_model_ssp370.nc'))
```

read in the zarr stores and see how hard it is to rechunk them in time instead of space for computing weights

```
ds_hist = zarr.open(os.path.join(write_direc, 'cmip6_test_model_historical.zarr'), mode='r')
ds_hist
ds_hist.info
```
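The chunk spec `{'lat': 10, 'lon': 10, 'time': -1}` trades many small spatial tiles for full time series per tile, which is what pointwise bias correction wants; computing weights wants the opposite layout. A back-of-envelope helper (not part of xarray or dask; the dimension sizes below are illustrative, not the actual IPSL grid) for counting how many chunks a dask-style spec produces:

```python
import math

def chunk_grid(sizes, chunks):
    """Number of chunks along each dimension for a dask-style chunk spec,
    where -1 means 'the whole axis in one chunk'."""
    grid = {}
    for dim, n in sizes.items():
        c = chunks.get(dim, -1)
        grid[dim] = 1 if c == -1 else math.ceil(n / c)
    return grid

# Illustrative sizes only: a ~143 x 144 grid with some number of daily steps.
sizes = {"lat": 143, "lon": 144, "time": 23741}
space_chunks = chunk_grid(sizes, {"lat": 10, "lon": 10, "time": -1})  # bias-correction layout
time_chunks = chunk_grid(sizes, {"lat": -1, "lon": -1, "time": 365})  # weights layout
print(space_chunks)  # {'lat': 15, 'lon': 15, 'time': 1}
```

Rechunking between these two layouts is expensive precisely because every output chunk touches many input chunks, which is why the notebook round-trips through a zarr store.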
## Observations and Insights

```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st

# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"

# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)

# Combine the data into a single dataset

# Display the data table for preview

# Checking the number of mice.

# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.

# Optional: Get all the data for the duplicate mouse ID.

# Create a clean DataFrame by dropping the duplicate mouse by its ID.

# Checking the number of mice in the clean DataFrame.
```

## Summary Statistics

```
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen

# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.

# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
```

## Bar and Pie Charts

```
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.

# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.

# Generate a pie plot showing the distribution of female versus male mice using pandas

# Generate a pie plot showing the distribution of female versus male mice using pyplot
```

## Quartiles, Outliers and Boxplots

```
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin

# Start by getting the last (greatest) timepoint for each mouse

# Merge this group df with the original dataframe to get the tumor volume at the last timepoint

# Put treatments into a list for for loop (and later for plot labels)

# Create empty list to fill with tumor vol data (for plotting)

# Calculate the IQR and quantitatively determine if there are any potential outliers.

# Locate the rows which contain mice on each drug and get the tumor volumes

# add subset

# Determine outliers using upper and lower bounds

# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
```

## Line and Scatter Plots

```
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin

# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
```

## Correlation and Regression

```
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
```
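The combine/dedup steps at the top of the scaffold can be sketched with toy frames. The column names (`Mouse ID`, `Timepoint`) are assumed from the study CSVs, and the data here is made up for illustration:

```python
import pandas as pd

# Toy stand-ins for the two CSVs.
mouse_metadata = pd.DataFrame({"Mouse ID": ["a1", "b2"],
                               "Drug Regimen": ["Capomulin", "Placebo"]})
study_results = pd.DataFrame({"Mouse ID": ["a1", "a1", "b2", "b2"],
                              "Timepoint": [0, 0, 0, 5],
                              "Tumor Volume (mm3)": [45.0, 45.1, 45.0, 46.2]})

# Combine the data into a single dataset.
combined = pd.merge(study_results, mouse_metadata, on="Mouse ID", how="left")

# A mouse is "duplicated" when the same (Mouse ID, Timepoint) pair appears more than once.
dup_ids = combined.loc[combined.duplicated(subset=["Mouse ID", "Timepoint"]), "Mouse ID"].unique()

# Drop every record for the duplicated mouse, not just the duplicate rows.
clean = combined[~combined["Mouse ID"].isin(dup_ids)]
print(sorted(clean["Mouse ID"].unique()))  # ['b2']
```

Dropping the whole mouse (rather than one of its conflicting rows) is the conservative choice, since there is no way to tell which of the two measurements is correct.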
# Gymnasion Data Processing Here I'm going to mine some chunk of Project Gutenberg texts for `(adj,noun)` and `(noun,verb,object)` relations using mostly SpaCy and textacy. Extracting them is easy. Filtering out the chaff is not so easy. ``` #!/usr/bin/env python # -*- coding: utf-8 -*- from tqdm import tqdm import json from collections import defaultdict from nltk import ngrams from textacy import extract import spacy nlp = spacy.load('en') ``` Load in some randomly chosen Gutenberg texts. ``` import os gb_files = [f for f in os.listdir("/Users/kyle/Documents/gutenberg_samples/") if f.startswith('gb_')] ``` Define a function to extract `(adj,noun)` relations. ``` def extract_adj2nouns(tempspacy): """ For a sentence like "I ate the small frog." returns [(small, frog)]. lemmatizes the noun, lowers the adjective """ nouns = ["NN","NNS"] adj_noun_tuples = [] for token in tempspacy: if token.dep_=="amod": if token.head.tag_ in nouns: adj_noun_tuples.append((token.text.lower(),token.head.lemma_)) return adj_noun_tuples extract_adj2nouns(nlp(u"The small frogs were not the only ones there. The dog walked itself through the house.")) ``` Textacy extracts `(s,v,o)` triples. ``` for triple in extract.subject_verb_object_triples(nlp(u"The husband ignored his wife.")): print triple ``` I want to loop through a bunch of Gutenberg texts that I've randomly downloaded with the Gutenberg python package. 
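The `amod` rule in `extract_adj2nouns` can be exercised without loading a spaCy model by feeding it pre-parsed token tuples — handy for unit tests. The `Tok` structure below is a hypothetical stand-in for just the token attributes the function reads:

```python
from collections import namedtuple

# Minimal stand-in for the spaCy token attributes the notebook's function uses.
Tok = namedtuple("Tok", "text tag dep head_tag head_lemma")

def extract_adj2nouns_from_tokens(tokens, noun_tags=("NN", "NNS")):
    """Same rule as the notebook: keep (adjective, noun) pairs joined by an
    'amod' arc whose head is a common noun; lower the adjective, lemmatize the noun."""
    return [(t.text.lower(), t.head_lemma)
            for t in tokens
            if t.dep == "amod" and t.head_tag in noun_tags]

toks = [
    Tok("The", "DT", "det", "NNS", "frog"),
    Tok("small", "JJ", "amod", "NNS", "frog"),
    Tok("frogs", "NNS", "nsubj", "VBD", "be"),
    Tok("Green", "JJ", "amod", "NNP", "Gables"),  # proper-noun head: filtered out
]
print(extract_adj2nouns_from_tokens(toks))  # [('small', 'frog')]
```

Restricting heads to `NN`/`NNS` is what keeps proper nouns like "Green Gables" out of the relation list.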
``` from langdetect import detect ## to make sure texts are english from unidecode import unidecode ## to crudely deal with text encoding issues noun2adj = defaultdict(list) noun2object = defaultdict(list) noun2adj_tuples = [] svo_triples = [] errors = 0 for fy in tqdm(gb_files[:1000]): with open("/Users/kyle/Documents/gutenberg_samples/"+fy,'r') as f: tempdata = f.read() try: if detect(tempdata)=="en": ## check english tempspacy = nlp(tempdata.decode('utf-8')) ### adjectives try: for pair in extract_adj2nouns(tempspacy): noun2adj_tuples.append(pair) except: pass ### svo triples try: gutenberg_svo_triples = extract.subject_verb_object_triples(tempspacy) for trip in gutenberg_svo_triples: svo_triples.append(trip) except: pass except: errors+=1 ``` How many pairs (not unique) do I have of `(adj,noun)` relations? ``` len(noun2adj_tuples) ``` Of `(s,v,o)` relations? ``` len(svo_triples) ``` ## Inspecting the data so far... ### `(adj, noun)` relations ``` import random random.sample(noun2adj_tuples,20) ``` Another way to inspect data: frequency distributions. ``` from nltk import FreqDist as fd ADJ_noun_fd = fd([a for a,n in noun2adj_tuples]) adj_NOUN_fd = fd([n for a,n in noun2adj_tuples]) ADJ_noun_fd.most_common(40) adj_NOUN_fd.most_common(40) ``` #### Ideas... So there are really two problems. Looking at the frequency distribution tells me that some of the most common adjectives (e.g. "few", "other")) are undesirable, because they aren't closely tied to a noun. That leaves are `green` is better to know than that leaves can be `other`. (Also, certain common nouns are probably not as interesting, especially ones like `other`). I have two intuitions: 1) really common relationships between adjectives and nouns are less interesting/desirable than less common ones, and 2) at the same time, I really don't want `(adj,noun)` pairs that are totally aberrant. 
Regarding the second point, I could filter out any adjective that doesn't occur at least `n` in modification of a certain noun, but that really penalizes uncommon nouns (which won't have many adjectives modifying them). My plan: 1. Filter out relations containing the most adjectives as well as a handful of annoying nouns 2. Filter out those relations between words that are not strongly related according to a word2vec model A handmade list of nouns to exclude. ``` ADJS_nouns_to_exclude = [word for word,count in ADJ_noun_fd.most_common(40)] print ADJS_nouns_to_exclude from nltk.corpus import stopwords stops = stopwords.words('english') stops = stops + ["whose","less","thee","thine","thy","thou","one"] ##adjectives nouns stops = stops + ["time","thing","one","way","part","something"] ##annoying nouns noun2adj = defaultdict(list) for a,n in noun2adj_tuples: if n not in stops: if (a not in ADJS_nouns_to_exclude) and (a not in stops): noun2adj[n].append(a) import gensim word2vec_path = "/Users/kyle/Desktop/GoogleNews-vectors-negative300.bin" model = gensim.models.KeyedVectors.load_word2vec_format(word2vec_path, binary=True) ``` Below I define a pretty confusing loop to go through the dictionary I just made to filter out those words that `(adj,noun)` pairs that are unrelated according to a word2vec model. (Here I'm using just cosine similarity but I could probably maybe just measure the probability of the text according to the model.) ``` new_noun2adj = defaultdict(list) for k in tqdm(noun2adj.keys()): adjs = [] for adj in noun2adj[k]: try: adjs.append((adj,model.similarity(adj,k))) except: pass adjs.sort(key = lambda x: x[1], reverse=True) adjs_ = [] for a,score in adjs: if a not in adjs_: adjs_.append(a) ## this is some weird hand-crafted logic to filter adjectives belonging to rare and common nouns differently... ## the idea is to only take the cream of the crop when there are a lot of options --- i.e. 
when the noun is common if len(adjs_)>20: for adj in adjs_[:10]: new_noun2adj[k].append(adj) elif len(adjs_)>10: for adj in adjs_[:10]: new_noun2adj[k].append(adj) elif len(adjs_)>2: adj = adjs_[0] new_noun2adj[k].append(adj) else: pass new_noun2adj['hat'] with open("data/noun2adj.json","w") as f: json.dump(new_noun2adj,f) ``` ### `(s,v,o)` triples... ``` svo_triples_reformatted = [(s.lemma_,v.lemma_,o.text) for s,v,o, in svo_triples] ``` Inspect data. ``` random.sample(svo_triples_reformatted,20) Svo_fd = fd([s for s,v,o in svo_triples_reformatted]) sVo_fd = fd([v for s,v,o in svo_triples_reformatted]) svO_fd = fd([o for s,v,o, in svo_triples_reformatted]) topS = [word for word,count in Svo_fd.most_common(40)] print topS topV = [word for word,count in sVo_fd.most_common(40)] print topV topO = [word for word,count in svO_fd.most_common(40)] print topO ``` The loop below filters out an `(s,v,o)` triple if any one of its elements meets certain exclusionary conditions. ``` svo_triples_filtered = [] for s,v,o, in svo_triples_reformatted: Sval,Vval,Oval=False,False,False if len(s.split())==1: ## make sure it's not a complicated noun chunk if s.lower() not in stops: ## make sure it's not a stopword Sval=True if v not in topV: ## make sure it's not really common if len(v.split())==1: ## make sure it's not a complicated verb chunk if v.lower() not in stops: ## make sure it's not a stopwords Vval=True if len(o.split())==1: ### make sure it's not a complicated noun chunk if o.lower() not in stops: ### make sure it's not a stopword if o.lower()==o: ## this is kind of a hack to exclude proper nouns if o.endswith("ing")==False: ## filter out annoying present participles Oval=True if (Sval,Vval,Oval)==(True,True,True): svo_triples_filtered.append((s,v,o)) noun2v_o = defaultdict(list) for s,v,o in svo_triples_filtered: noun2v_o[s].append((v,o)) noun2v_o["king"] ``` Again, filter out those `(s,v,o)` combinations in which the `v` and `o` are not similar according to word2vec model. 
``` new_noun2v_o = defaultdict(list) for k in tqdm(noun2v_o.keys()): vos = [] for verb,obj in noun2v_o[k]: try: vos.append((verb,obj,model.similarity(obj,verb))) except: pass vos.sort(key = lambda x: x[2], reverse=True) ##again, logic to handle rare and common nouns differently if len(vos)>20: for verb,obj,value in vos[:10]: new_noun2v_o[k].append((verb,obj)) elif len(vos)>10: for verb,obj,value in vos[:5]: new_noun2v_o[k].append((verb,obj)) elif len(vos)>2: verb,obj,value = vos[0] new_noun2v_o[k].append((verb,obj)) else: pass new_noun2v_o["seed"] with open("data/noun2v_o.json","w") as f: json.dump(new_noun2v_o,f) ```
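The two-stage filter used for both relation types — drop the most frequent (and stopword) items, then keep only the top-scoring pairs per noun — can be isolated from gensim by passing the similarity function in. All names and scores below are toy values, and the fixed `keep` cutoff is a simplification of the notebook's tiered rare/common logic:

```python
from collections import Counter, defaultdict

def filter_pairs(pairs, stopwords, top_n=40, keep=10, similarity=None):
    """Stage 1: drop pairs whose adjective is a stopword or among the top_n
    most frequent adjectives. Stage 2 (optional): per noun, keep the `keep`
    adjectives ranked highest by similarity(adj, noun)."""
    adj_freq = Counter(a for a, _ in pairs)
    too_common = {a for a, _ in adj_freq.most_common(top_n)}
    by_noun = defaultdict(list)
    for a, n in pairs:
        if a not in too_common and a not in stopwords and n not in stopwords:
            by_noun[n].append(a)
    if similarity is not None:
        for n, adjs in by_noun.items():
            by_noun[n] = sorted(set(adjs), key=lambda a: similarity(a, n), reverse=True)[:keep]
    return dict(by_noun)

pairs = [("other", "thing")] * 5 + [("green", "leaf"), ("dry", "leaf"), ("green", "leaf")]
scores = {("green", "leaf"): 0.9, ("dry", "leaf"): 0.2}
out = filter_pairs(pairs, stopwords={"thing"}, top_n=1, keep=1,
                   similarity=lambda a, n: scores[(a, n)])
print(out)  # {'leaf': ['green']}
```

Passing the scorer in also makes it trivial to swap cosine similarity for a language-model probability later, as the notebook speculates.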
# T1129 - Shared Modules

Adversaries may abuse shared modules to execute malicious payloads. The Windows module loader can be instructed to load DLLs from arbitrary local paths and arbitrary Universal Naming Convention (UNC) network paths. This functionality resides in NTDLL.dll and is part of the Windows [Native API](https://attack.mitre.org/techniques/T1106) which is called from functions like <code>CreateProcess</code>, <code>LoadLibrary</code>, etc. of the Win32 API. (Citation: Wikipedia Windows Library Files)

The module loader can load DLLs:

* via specification of the (fully-qualified or relative) DLL pathname in the IMPORT directory;
* via EXPORT forwarded to another DLL, specified with (fully-qualified or relative) pathname (but without extension);
* via an NTFS junction or symlink program.exe.local with the fully-qualified or relative pathname of a directory containing the DLLs specified in the IMPORT directory or forwarded EXPORTs;
* via <code>&#x3c;file name="filename.extension" loadFrom="fully-qualified or relative pathname"&#x3e;</code> in an embedded or external "application manifest". The file name refers to an entry in the IMPORT directory or a forwarded EXPORT.

Adversaries may use this functionality as a way to execute arbitrary code on a victim system. For example, malware may execute shared modules to load additional components or features.

## Atomic Tests:

Currently, no tests are available for this technique.

## Detection

Monitoring DLL module loads may generate a significant amount of data and may not be directly useful for defense unless collected under specific circumstances, since benign use of Windows module load functions is common and may be difficult to distinguish from malicious behavior. Legitimate software will likely only need to load routine, bundled DLL modules or Windows system DLLs, such that deviation from known module loads may be suspicious.

Limiting DLL module loads to <code>%SystemRoot%</code> and <code>%ProgramFiles%</code> directories will protect against module loads from unsafe paths.

Correlation of other events with behavior surrounding module loads using API monitoring and suspicious DLLs written to disk will provide additional context to an event that may assist in determining if it is due to malicious behavior.

## Shield Active Defense

### Software Manipulation

Make changes to a system's software properties and functions to achieve a desired effect.

Software Manipulation allows a defender to alter or replace elements of the operating system, file system, or any other software installed and executed on a system.

#### Opportunity

There is an opportunity for the defender to observe the adversary and control what they can see, what effects they can have, and/or what data they can access.

#### Use Case

A defender can modify system calls to break communications, route things to decoy systems, prevent full execution, etc.

#### Procedures

Hook the Win32 Sleep() function so that it always performs a Sleep(1) instead of the intended duration. This can increase the speed at which dynamic analysis can be performed when a normal malicious file sleeps for long periods before attempting additional capabilities.

Hook the Win32 NetUserChangePassword() and modify it such that the new password is different from the one provided. The data passed into the function is encrypted along with the modified new password, then logged so a defender can get alerted about the change as well as decrypt the new password for use.

Alter the output of an adversary's profiling commands to make newly-built systems look like the operating system was installed months earlier.

Alter the output of adversary recon commands to not show important assets, such as a file server containing sensitive data.
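The detection guidance above amounts to an allow-list check on module-load paths. A platform-neutral sketch of that check — the root list and function name are illustrative, real telemetry would come from a source such as Sysmon's image-load events, and a production check would first canonicalize the reported path:

```python
from pathlib import PureWindowsPath

# Illustrative allow-list matching %SystemRoot% and %ProgramFiles% defaults.
TRUSTED_ROOTS = (r"C:\Windows", r"C:\Program Files", r"C:\Program Files (x86)")

def is_trusted_module_path(dll_path, roots=TRUSTED_ROOTS):
    """Return True if the DLL path sits under one of the allow-listed
    directories. PureWindowsPath comparison is case-insensitive; the
    caller is responsible for normalizing '..' segments first."""
    p = PureWindowsPath(dll_path)
    return any(PureWindowsPath(r) in p.parents for r in roots)

print(is_trusted_module_path(r"C:\Windows\System32\ntdll.dll"))  # True
print(is_trusted_module_path(r"\\attacker\share\payload.dll"))   # False
print(is_trusted_module_path(r"C:\Users\Public\loader.dll"))     # False
```

Note the UNC case: loads from network paths fall outside every local root, which is exactly the arbitrary-UNC-path abuse the technique describes.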
```
from tc_python import *
import itertools as itertool
import time
import numpy as np
import matplotlib.pyplot as plt

def singlePoint(database="tcfe9", T=273.0, P=1e5, components=["fe", "c"], phases=["bcc"], mole_fractions=[99.02, 0.08]):
    """
    Single point equilibrium calculation.

    ## input:
    database: database name [string]
    T: temperature [float]
    P: pressure [float]
    components: elements [string]
    phases: phases [string]; if empty, all phases from the database are included
    mole_fractions: mole fractions [float]

    ## output: dictionary {"stable_phases", "npms", "vpvs", "ws", "xiphs", "ys", "acs", "mus", "binaries"}
    stable_phases: [string]
    npms: phase fractions [float]
    vpvs: volume fractions of phases [float]
    ws: weight fractions of elements [float]
    xiphs: mole fractions of elements in phases [float]
    ys: y fractions of elements in phases [float]
    binaries: binary list of all elements and stable phases [tuple (component, phase)]
    acs: activities of elements with respect to all phases
    mus: chemical potentials of all components
    """
    with TCPython() as start:
        if not phases:
            system_int = start.select_database_and_elements(database, components)
        else:
            system_int = start.select_database_and_elements(database, components).without_default_phases()
            for phase in phases:
                system_int.select_phase(phase)
        system = system_int.get_system()
        ticc = time.time()
        calc = system.with_single_equilibrium_calculation()
        for i in range(len(components) - 1):
            calc.set_condition(ThermodynamicQuantity.mole_fraction_of_a_component(components[i]), mole_fractions[i])
        calc.set_condition(ThermodynamicQuantity.temperature(), T)
        calc.set_condition(ThermodynamicQuantity.pressure(), P)
        calc_res = calc.calculate()

        stable_phases = calc_res.get_stable_phases()
        volume_fractions, phase_fractions, weight_fractions, xs_in_phases, ys_in_phases, activities, chemical_potentials = \
            [], [], [], [], [], [], []
        for phase in stable_phases:
            volume_fractions.append(calc_res.get_value_of('vpv({})'.format(phase)))
            phase_fractions.append(calc_res.get_value_of('npm({})'.format(phase)))
        for element in components:
            weight_fractions.append(calc_res.get_value_of('w({})'.format(element)))
            chemical_potentials.append(calc_res.get_value_of('mu({})'.format(element)))
        binarys = list(itertool.product(stable_phases, components))
        for binary in binarys:
            xs_in_phases.append(calc_res.get_value_of('x({},{})'.format(binary[0], binary[1])))
            try:
                ys_in_phases.append(calc_res.get_value_of('y({},{})'.format(binary[0], binary[1])))
            except Exception:
                pass  # not every phase exposes site-fraction (y) variables
            try:
                activities.append(calc_res.get_value_of('ac({},{})'.format(binary[1], binary[0])))
            except Exception:
                pass
        tocc = time.time()
        print(tocc - ticc)

    return {"stable_phases": stable_phases, "npms": phase_fractions,
            "vpvs": volume_fractions, "ws": weight_fractions, "xiphs": xs_in_phases,
            "ys": ys_in_phases, "acs": activities, "mus": chemical_potentials, "binaries": binarys}

help(singlePoint)

database = "TCFE8"
elements = ["C", "Co", "N", "Ti", "W"]
phases = ["liquid", "fcc", "mc_shp", "graphite"]
mole_fractions = [0.42943834995313096, 0.1, 0.019999999999999993, 0.01999999999999999, 0.4305616500468691]

tic = time.time()
a = singlePoint(database, 1723, 1e5, elements, phases, mole_fractions)
toc = time.time()
print(toc - tic)
```
# <center>Master M2 MVA 2017/2018 - Graphical models - HWK 3</center>

### <center>WANG Yifan && CHU Xiao</center>

```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
from scipy.stats import multivariate_normal as norm
import warnings
warnings.filterwarnings("ignore")

# Data loading
data_path = 'classification_data_HWK3/'
train = np.loadtxt(data_path + 'EMGaussian.data')
test = np.loadtxt(data_path + 'EMGaussian.test')
print(train.shape, test.shape)

plt.scatter(train[0:100, 0], train[0:100, 1])
plt.show()
```

## Question 1

The code is implemented in the class `HMM`: the function `gamma_` computes $p(q_t|u_1,\dots,u_T)$ and `ksi_` computes $p(q_t, q_{t+1}|u_1,\dots,u_T)$.

```
class HMM(object):
    def __init__(self, K, A, means, covs, pi):
        """
        Args:
            K (int): number of states
            A: transition matrix, A[i, j] = p(q_{t+1} = i | q_t = j)
            pi (K,1): prior p(q_1)
            means: means of the Gaussian emission distributions
            covs: covariances of the Gaussian emission distributions
        """
        self.K = K
        self.A = A
        self.pi = pi
        self.means = means
        self.covs = covs

    def p_(self, z, u):
        """
        Gaussian emission probability ~ N(means, covs)
        Args:
            z: latent variable, 0...K-1
            u: observation
        """
        return norm.pdf(u, self.means[z], self.covs[z])

    def emission_(self, u):
        """
        Compute p(u|q=0...K-1)
            u: observation
            q: latent variable
        Return:
            proba (K, 1)
        """
        eps = 1e-30
        proba = np.asarray([self.p_(z, u) for z in range(self.K)]).reshape(-1, 1) + eps
        return proba

    def alpha(self, data):
        """
        p(u_1...u_t, q_t)
        Return:
            alpha (K, T)
            logalpha (K, T)
        """
        T = len(data)
        eps = 1e-30
        logalpha = np.zeros((self.K, T))
        logalpha[:, 0] = (np.log(self.emission_(data[0]) + eps) + np.log(self.pi + eps)).reshape(-1)
        for t in range(1, T):
            # log-sum-exp: factor out the max before exponentiating
            logalpha_max = logalpha[:, t-1].max()
            p = np.exp(logalpha[:, t-1] - logalpha_max).reshape(-1, 1)
            logalpha[:, t] = (np.log(self.emission_(data[t]) + eps)
                              + np.log(self.A.dot(p) + eps) + logalpha_max).reshape(-1)
        alpha = np.exp(logalpha)
        return alpha, logalpha

    def beta(self, data):
        """
        p(u_{t+1}...u_T|q_t)
        Return:
            beta (K, T)
            logbeta (K, T)
        """
        T = len(data)
        eps = 1e-30
        logbeta = np.zeros((self.K, T))
        # initialization: beta_{T-1} = 1, i.e. log beta = 0 (already set by np.zeros)
        for t in range(1, T):
            t = T - t - 1  # T-2 ... 0
            logbeta_max = logbeta[:, t+1].max()
            p = np.exp((logbeta[:, t+1] - logbeta_max).reshape(-1, 1)
                       + np.log(self.emission_(data[t+1]) + eps)).reshape(-1, 1)
            logbeta[:, t] = (np.log(self.A.T.dot(p) + eps) + logbeta_max).reshape(-1)
        beta = np.exp(logbeta)
        return beta, logbeta

    def gamma_(self, data):
        """
        Marginal posterior distribution of each latent variable q_t, t=0...T-1: p(q_t|U)
        Return:
            gamma (K, T)
        """
        T = len(data)
        _, logalpha = self.alpha(data)
        _, logbeta = self.beta(data)
        gamma = np.zeros((self.K, T))
        for t in range(T):
            log_alpha_beta = logalpha[:, t] + logbeta[:, t]
            log_alpha_beta_max = np.max(log_alpha_beta)
            # p(q_t, U)
            p = np.exp(log_alpha_beta - log_alpha_beta_max)
            gamma[:, t] = p / np.sum(p)
        return gamma

    def ksi_(self, data):
        """
        Joint posterior distribution of two successive latent variables:
        ksi[i, j] = p(q_t=i, q_{t+1}=j|U)
        Return:
            ksi (K, K, T-1)
        """
        T = len(data)
        _, logalpha = self.alpha(data)
        _, logbeta = self.beta(data)
        ksi = np.zeros((self.K, self.K, T-1))
        log_ksi = np.zeros((self.K, self.K, T-1))
        for t in range(T-1):
            # log p(U), computed once per t with the log-sum-exp trick
            log_alpha_beta = logalpha[:, t] + logbeta[:, t]
            log_alpha_beta_max = log_alpha_beta.max()
            log_p = log_alpha_beta_max + np.log(np.sum(np.exp(log_alpha_beta - log_alpha_beta_max)))
            for i in range(self.K):
                for j in range(self.K):
                    log_ksi[i, j, t] = -log_p + logalpha[i, t] + logbeta[j, t+1] \
                                       + np.log(self.A[j, i]) + np.log(self.p_(j, data[t+1]))
                    ksi[i, j, t] = np.exp(log_ksi[i, j, t])
        return ksi, log_ksi

    def smoothing(self, data):
        """
        p(q_t|U)
        Return:
            gamma (K, T)
        """
        return self.gamma_(data)

    def lower_bound(self, data):
        """Compute the lower bound of the complete log likelihood"""
        ll = 0
        eps = 1e-30
        T = len(data)
        gamma = self.gamma_(data)
        ksi, _ = self.ksi_(data)
        ll += np.sum(gamma[:, 0].reshape(-1, 1) * np.log(self.pi + eps))
        for t in range(T-1):
            ll += np.sum(ksi[:, :, t].reshape(self.K, self.K).T * np.log(self.A + eps))
        for t in range(T):  # emission term for every observation, including t = 0
            ll += np.sum(gamma[:, t].reshape(-1, 1) * np.log(self.emission_(data[t]) + eps))
        return ll

    def log_likelihood(self, data):
        """Compute the log likelihood of the observations"""
        _, logalpha = self.alpha(data)
        _, logbeta = self.beta(data)
        # p(U) = sum_q alpha_t(q) * beta_t(q) for any t; use t = 0 with log-sum-exp
        v = logalpha[:, 0] + logbeta[:, 0]
        mx = v.max()
        ll = np.log(np.sum(np.exp(v - mx))) + mx
        return ll

    def train(self, data, max_iter=100, verbal=True, validation=None):
        """
        Args:
            data: (T, D), training data, D is the feature dimension
            max_iter: int, maximal number of iterations
            verbal: boolean, if True, print the log likelihood
            validation: None or (T, D); if provided, its log likelihood is also computed and returned
        Return:
            lls: list, log likelihoods of the training data
            lls_valid: list, log likelihoods of the validation dataset
        """
        i = 0
        eps = 1e-4
        lls = [self.log_likelihood(data)]
        lls_valid = [] if validation is None else [self.log_likelihood(validation)]
        if verbal:
            print("\tTrain log likelihood: {0}".format(lls[0]))
            if validation is not None:
                print("\tValid log likelihood: {0}".format(lls_valid[0]))
        while i < max_iter:
            i += 1
            self.train_step(data)
            ll = self.log_likelihood(data)
            if len(lls) > 2 and (ll - lls[-1]) < eps:
                break
            lls.append(ll)
            if verbal:
                print("Iteration {0}:\n\tTrain log likelihood: {1}".format(i, ll))
            if validation is not None:
                ll_valid = self.log_likelihood(validation)
                lls_valid.append(ll_valid)
                if verbal:
                    print("\tValid log likelihood: {0}".format(ll_valid))
        return lls, lls_valid

    def train_step(self, data):
        """
        Perform one step of the EM algorithm.
        Args:
            data: (T, D), training data, D is the feature dimension
        """
        T = len(data)
        # E-step
        gamma = self.gamma_(data)
        ksi, _ = self.ksi_(data)
        # M-step
        self.pi = (gamma[:, 0] / gamma[:, 0].sum()).reshape(-1, 1)
        for j in range(self.K):
            for k in range(self.K):
                self.A[k, j] = ksi[j, k, :].sum() / np.sum(ksi[j, :, :])
        for k in range(self.K):
            self.means[k] = gamma[k, :].dot(data) / gamma[k, :].sum()  # (1,T)*(T,D) -> (1,D)
            self.covs[k] = np.sum([gamma[k, n] * (data[n] - self.means[k]).reshape(-1, 1).dot(
                (data[n] - self.means[k]).reshape(1, -1)) for n in range(T)], 0) / gamma[k, :].sum()

    def decode(self, data):
        """
        Viterbi algorithm (forward pass + backtracking)
        Args:
            data: (T, D), observations, D is the feature dimension
        """
        # Initialization
        T = len(data)
        eps = 1e-30
        maxProb = np.zeros((self.K, T))  # log-probability of the best partial path ending in each state
        prev_state = np.zeros((self.K, T), dtype=int)
        # Find the index which maximises tmp_proba
        for t in range(T):
            if t == 0:
                maxProb[:, 0] = np.log(self.pi).reshape(-1)
            else:
                for i in range(self.K):
                    # use self.A, not the global A
                    tmp_proba = maxProb[:, t-1] + np.log(self.A[i, :].T + eps) \
                                + np.log(self.emission_(data[t-1]) + eps).reshape(-1)
                    maxProb[i, t] = np.max(tmp_proba)
                    prev_state[i, t] = np.argmax(tmp_proba)
        # maxProb is already in the log domain, so no second np.log here
        tmp_proba = maxProb[:, T-1] + np.log(self.emission_(data[T-1]) + eps).reshape(-1)
        maxIndex = np.argmax(tmp_proba)
        # Backtrack the best path
        state_index_path = np.zeros(T, dtype=int)
        state_index_path[T-1] = maxIndex
        for t in range(T-2, -1, -1):
            state_index_path[t] = prev_state[state_index_path[t+1], t+1]
        return state_index_path


# GMM classifier
class GMM(object):
    def __init__(self, k, covariance_type='full'):
        self.k = k
        self.mus = None
        self.alpha2 = None
        self.sigmas = None
        self.resp = None
        self.pis = None
        self.clusters = {}
        self.labels = None
        self.label_history = []
        self.covariance_type = covariance_type

    def train(self, X, init="kmeans"):
        n, d = X.shape
        # initialize with K-means
        if init == "kmeans":
            clf = KMeans(self.k)
            clf.train(X)
            self.mus = clf.centers
            self.labels = clf.labels
            self.pis = np.array([len(clf.clusters[i]) / n for i in range(self.k)])
            if self.covariance_type == 'spherical':
                self.alpha2 = np.array([np.sum((np.array(clf.clusters[i]) - self.mus[i]) ** 2)
                                        / len(clf.clusters[i]) / 2. for i in range(self.k)])
                self.sigmas = np.array([self.alpha2[i] * np.eye(d) for i in range(self.k)])
            elif self.covariance_type == 'full':
                self.sigmas = np.array([np.cov(np.array(clf.clusters[i]).T) for i in range(self.k)])

        self.resp = np.zeros((self.k, n))
        for i in range(self.k):
            self.resp[i] = np.array(gamma(X, i, self.k, self.pis, self.mus, self.sigmas))

        t = 0
        resp = self.resp.copy()
        pis = self.pis.copy()
        mus = self.mus.copy()
        if self.covariance_type == 'spherical':
            alpha2 = self.alpha2.copy()
        sigmas = self.sigmas.copy()
        while t < 30:
            t += 1
            # update
            for i in range(self.k):
                pis[i] = np.mean(self.resp[i])
                mus[i] = np.sum(X * self.resp[i][:, np.newaxis], 0) / np.sum(self.resp[i])
                if self.covariance_type == 'spherical':
                    alpha2[i] = np.sum([(X[j] - self.mus[i]) ** 2 * self.resp[i, j]
                                        for j in range(n)]) / np.sum(self.resp[i]) / 2.
                    sigmas[i] = alpha2[i] * np.eye(d)
                elif self.covariance_type == 'full':
                    sigmas[i] = np.sum([(X[j] - self.mus[i]).reshape(-1, 1).dot(
                        (X[j] - self.mus[i]).reshape(1, -1)) * self.resp[i, j]
                        for j in range(n)], 0) / np.sum(self.resp[i])
            for i in range(self.k):
                resp[i] = np.array(gamma(X, i, self.k, pis, mus, sigmas))
            self.resp = resp.copy()
            self.pis = pis.copy()
            self.mus = mus.copy()
            if self.covariance_type == 'spherical':
                self.alpha2 = alpha2.copy()
            self.sigmas = sigmas.copy()
        for i in range(n):
            self.labels[i] = np.argmax(self.resp[:, i])

    def test(self, X):
        n, d = X.shape
        resp = np.zeros((self.k, n))
        for i in range(self.k):
            resp[i] = np.array(gamma(X, i, self.k, self.pis, self.mus, self.sigmas))
        labels = np.zeros(n)
        for i in range(n):
            labels[i] = np.argmax(resp[:, i])
        return labels.astype(np.int32), resp

    def log_likelihood(self, X):
        n, d = X.shape
        _, resp = self.test(X)
        return np.sum([[resp[k, i] * np.log(self.pis[k] * norm.pdf(X[i], self.mus[k], self.sigmas[k]))
                        for k in range(self.k)] for i in range(n)])


# K-means classifier
class KMeans(object):
    def __init__(self, k):
        self.k = k
        self.centers = None
        self.clusters = {}
        self.labels = None
        self.inertia = None
        self.label_history = []

    def train(self, X, init="random"):
        n = X.shape[0]
        centers = None
        # initialize
        if init == "random":
            self.centers = X[np.random.choice(n, self.k, replace=False)]
        elif init == 'kmeans++':
            # TODO: implement K-means++
            pass
        while centers is None or np.abs(centers - self.centers).max() > 1e-5:
            # old centers
            centers = self.centers.copy()
            for i in range(self.k):
                self.clusters[i] = []
            labels = []
            for x in X:
                dis = np.sum((centers - x) ** 2, 1)
                label = np.argmin(dis)
                self.clusters[label].append(x)
                labels.append(label)
            self.labels = np.array(labels)
            self.label_history.append(self.labels)
            # new centers
            for i in range(self.k):
                self.centers[i] = np.mean(np.array(self.clusters[i]), 0)


def gamma(X, k, K, pis, mus, sigmas):
    """Responsibilities"""
    return (pis[k] * norm.pdf(X, mus[k], sigmas[k])) \
           / (np.sum([pis[i] * norm.pdf(X, mus[i], sigmas[i]) for i in range(K)], 0))
```

## Question 2

Represent $p(q_t|u_1,\dots,u_T)$ for each of the 4 states as a function of time for the first 100 data points in `EMGaussienne.test`.

```
A = np.diag([1./2 - 1./6] * 4) + np.ones((4, 4)) * 1./6
pi = np.ones((4, 1)) / 4.

# pre-train GMM
clf = GMM(4, covariance_type='full')
clf.train(test)

# train HMM
hmm = HMM(K=4, A=A, pi=pi, means=clf.mus, covs=clf.sigmas)
smoothing = hmm.smoothing(test)
print(smoothing.shape)

for i in range(4):
    plt.scatter(range(100), smoothing[i, :100])
plt.legend(['state 1', 'state 2', 'state 3', 'state 4'])
plt.show()
```

## Question 3

Derive the estimation equations of the EM algorithm.

## Question 4

Implement the EM algorithm to learn the parameters of the model ($\pi$, $A$, $\mu_k$, $\Sigma_k$, $k = 1...4$). The means and covariances could be initialized with the ones obtained in the previous homework. Learn the model from the training data in `EMGaussienne.data`.

```
A = np.diag([1./2 - 1./6] * 4) + np.ones((4, 4)) * 1./6
pi = np.ones((4, 1)) / 4.

clf = GMM(4, covariance_type='full')
clf.train(train)

# train HMM
hmm = HMM(K=4, A=A, pi=pi, means=clf.mus, covs=clf.sigmas)
ll, ll_valid = hmm.train(train, max_iter=20, verbal=True, validation=test)
```

## Question 5

Plot the log-likelihood on the train data `EMGaussienne.data` and on the test data `EMGaussienne.test` as a function of the iterations of the algorithm. Comment.

```
plt.plot(ll)
plt.plot(ll_valid)
plt.legend(['EMGaussienne.data', 'EMGaussienne.test'])
plt.title("Log-likelihood on train and test data")
plt.xlabel("iteration")
plt.ylabel("log-likelihood")
plt.show()
```

## Question 6

Return in a table the values of the log-likelihoods of the Gaussian mixture models and of the HMM on the train and on the test data.
```
# GMM
print("GMM-train:", clf.log_likelihood(train))
print("GMM-test:", clf.log_likelihood(test))
# HMM
print("HMM-train:", hmm.log_likelihood(train))
print("HMM-test:", hmm.log_likelihood(test))
```

## Question 8

Implement Viterbi decoding.

```
viterbi_path = hmm.decode(train)

plt.figure()
plt.title("Most likely sequence of states (Viterbi algorithm)")
plt.scatter(train[:, 0], train[:, 1], c=viterbi_path)
plt.scatter(hmm.means[:, 0], hmm.means[:, 1], color="red")
plt.show()

plt.figure()
plt.title("Most likely sequence of states (Viterbi algorithm)")
plt.scatter(train[0:100, 0], train[0:100, 1], c=viterbi_path[0:100])
plt.scatter(hmm.means[:, 0], hmm.means[:, 1], color="red")
plt.show()
```

## Question 9

For the data points in the test file `EMGaussienne.test`, compute the marginal probability $p(q_t|u_1,\dots,u_T)$ for each point to be in state $\{1,2,3,4\}$, for the parameters learned on the training set.

```
gamma_test = hmm.smoothing(test)

plt.figure(figsize=(15, 5))
plt.title("The smoothing distribution (test file)")
plt.imshow(1 - gamma_test[:, 0:100], cmap="gray", origin="lower")
plt.xlabel("T")
plt.ylabel("States")
plt.show()
```

## Question 10

For each of these same 100 points, compute their most likely state according to the marginal probability computed in the previous question.

```
state_smoothing = np.argmax(gamma_test, axis=0)

plt.figure(figsize=(12, 3))
plt.title("Most likely states (Smoothing distribution)")
plt.scatter(np.arange(100), state_smoothing[0:100] + 1)
plt.xlabel("T")
plt.ylabel("States")
plt.show()
```

## Question 11

Run Viterbi on the test data. Compare the most likely sequence of states obtained for the first 100 data points with the sequence of states obtained in the previous question.

```
viterbi_test = hmm.decode(test)

plt.figure(figsize=(12, 3))
plt.title("Most likely states (Viterbi algorithm)")
plt.scatter(np.arange(100), viterbi_test[0:100] + 1)
plt.xlabel("T")
plt.ylabel("States")
plt.show()
```
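The `alpha`, `beta`, `gamma_` and `decode` implementations above all lean on the same numerical trick: never sum raw probabilities, but factor the largest exponent out of a log-sum-exp. A minimal, dependency-free sketch of that trick in isolation (the function name `logsumexp` is ours, not from the notebook):

```python
import math

def logsumexp(xs):
    # log(sum(exp(x) for x in xs)) computed without underflow:
    # factor out the max so every remaining exponent is <= 0
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# the naive evaluation underflows: exp(-1000) is 0.0 in float64
naive = sum(math.exp(x) for x in [-1000.0, -1000.0])
stable = logsumexp([-1000.0, -1000.0])  # -1000 + log(2)
print(naive, stable)
```

This is exactly why `logalpha_max` / `logbeta_max` are subtracted before `np.exp` in the forward-backward recursions: with T = 500 observations, the raw joint probabilities are far below the smallest representable float.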
# Self-Driving Car Engineer Nanodegree

## Project: **Finding Lane Lines on the Road**

***

In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.

Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.

In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.

---

Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.

**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**

---

**The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**

---

<figure>
 <img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
 <figcaption>
 <p></p>
 <p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
 </figcaption>
</figure>
 <p></p>
<figure>
 <img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
 <figcaption>
 <p></p>
 <p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
 </figcaption>
</figure>

## Build a Lane Finding Pipeline

Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.

Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.

```
# Build a pipeline that draws the detected lane lines on the original images
def process_image(image):
    # Mask the image with the color selection filter
    color_filtered_image = color_selection(image)
    # Convert the RGB image to grayscale
    gray_image = grayscale(color_filtered_image)
    # Blur the image with a Gaussian filter
    blurred_image = gaussian_blur(gray_image, kernel_size=15)
    # Extract the edges using the Canny edge filter
    edge_image = canny(blurred_image, low_threshold=50, high_threshold=150)
    # Return the coordinates of the vertices of the ROI
    vertices = select_vertices(edge_image)
    # Mask the edge_image with the ROI
    roi_image = region_of_interest(edge_image, vertices)
    # Detect straight lines using the Hough transform
    lines = hough_lines(roi_image, rho=1, theta=np.pi/180, threshold=20,
                        min_line_len=20, max_line_gap=300)
    # Merge the detected lines into two lines: left line and right line
    left_lane, right_lane = average_line(lines)
    # Compute the (x1, y1), (x2, y2) points for each line and plot them on a blank image
    lane_image = lane_points(image, left_lane, right_lane)
    # Overlay the lane lines on the original RGB image
    lane_image_scene = cv2.addWeighted(image, 1.0, lane_image, 0.50, 0.0)
    # return the overlay image
    return lane_image_scene
```

## Helper Functions

```
# Mask the pavement images with lane colors: yellow and white
def color_selection(img):
    # white color selection
    lower = np.uint8([200, 200, 200])
    upper = np.uint8([255, 255, 255])
    white_pixels = cv2.inRange(img, lower, upper)
    # yellow color selection
    lower = np.uint8([190, 190, 0])
    upper = np.uint8([255, 255, 255])
    yellow_pixels = cv2.inRange(img, lower, upper)
    # mask the image
    masked_pixels = cv2.bitwise_or(white_pixels, yellow_pixels)
    masked_image = cv2.bitwise_and(img, img, mask=masked_pixels)
    return masked_image

# Convert the RGB image into a gray image
def grayscale(img):
    return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)

# Apply a Gaussian blur filter
def gaussian_blur(img, kernel_size):
    return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)

# Apply edge detection
def canny(img, low_threshold, high_threshold):
    return cv2.Canny(img, low_threshold, high_threshold)

# Mask the region of interest
def region_of_interest(img, vertices):
    # defining a blank mask to start with
    mask = np.zeros_like(img)
    # defining a 3 channel or 1 channel color to fill the mask with depending on the input image
    if len(img.shape) > 2:
        channel_count = img.shape[2]  # i.e. 3 or 4 depending on your image
        ignore_mask_color = (255,) * channel_count
    else:
        ignore_mask_color = 255
    # filling pixels inside the polygon defined by "vertices" with the fill color
    cv2.fillPoly(mask, vertices, ignore_mask_color)
    # returning the image only where mask pixels are nonzero
    masked_image = cv2.bitwise_and(img, mask)
    return masked_image

# Select vertices for the mask
def select_vertices(img):
    # first, define the polygon by vertices
    rows, cols = img.shape[:2]
    bottom_left = [cols*0.1, rows*0.95]
    top_left = [cols*0.4, rows*0.6]
    bottom_right = [cols*0.9, rows*0.95]
    top_right = [cols*0.6, rows*0.6]
    # the vertices are an array of polygons (i.e. array of arrays) and the data type must be integer
    vertices = np.array([[bottom_left, top_left, top_right, bottom_right]], dtype=np.int32)
    return vertices

# Apply the Hough transform to detect lines
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
    lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]),
                            minLineLength=min_line_len, maxLineGap=max_line_gap)
    return lines

# Draw lines on the image
def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
    for line in lines:
        for x1, y1, x2, y2 in line:
            cv2.line(img, (x1, y1), (x2, y2), color, thickness)

# Merge the detected lines into two lines: left and right
def average_line(lines):
    left_lines = []     # (slope, intercept)
    left_weights = []   # (length,)
    right_lines = []    # (slope, intercept)
    right_weights = []  # (length,)
    for line in lines:
        for x1, y1, x2, y2 in line:
            if x2 == x1:
                continue  # ignore a vertical line
            slope = (float(y2) - float(y1)) / (float(x2) - float(x1))
            intercept = y1 - slope * x1
            length = np.sqrt((y2-y1)**2 + (x2-x1)**2)
            if slope < 0:  # y is reversed in image
                left_lines.append((slope, intercept))
                left_weights.append(length)
            else:
                right_lines.append((slope, intercept))
                right_weights.append(length)
    # add more weight to longer lines
    left_lane = np.dot(left_weights, left_lines) / np.sum(left_weights) if len(left_weights) > 0 else None
    right_lane = np.dot(right_weights, right_lines) / np.sum(right_weights) if len(right_weights) > 0 else None
    # return the two lanes in the format of slope and intercept
    return left_lane, right_lane

# Convert y = mx + b to (x1, y1), (x2, y2)
def make_line_points(y1, y2, line):
    if line is None:
        return None
    slope, intercept = line
    # make sure everything is integer as cv2.line requires it
    x1 = int((y1 - intercept) / slope)
    x2 = int((y2 - intercept) / slope)
    y1 = int(y1)
    y2 = int(y2)
    return ((x1, y1), (x2, y2))

# Plot the lane line points
def lane_points(img, left_lane, right_lane):
    lane_image = np.zeros_like(img)  # use the function argument, not the global `image`
    y1 = img.shape[0]  # bottom of the image
    y2 = y1 * 0.6      # slightly lower than the middle
    left_lane_points = make_line_points(y1, y2, left_lane)
    right_lane_points = make_line_points(y1, y2, right_lane)
    # guard against frames where one of the lanes was not detected
    if left_lane_points is not None:
        cv2.line(lane_image, left_lane_points[0], left_lane_points[1], color=[0, 255, 0], thickness=10)
    if right_lane_points is not None:
        cv2.line(lane_image, right_lane_points[0], right_lane_points[1], color=[0, 255, 0], thickness=10)
    return lane_image
```

## Test on Videos

You know what's cooler than drawing lanes over images? Drawing lanes over video!

We can test our solution on two provided videos:

`solidWhiteRight.mp4`

`solidYellowLeft.mp4`

**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**

**If you get an error that looks like this:**
```
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
```
**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**

```
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.image as mpimg
import os
import cv2
import math

fourcc = cv2.VideoWriter_fourcc(*'XVID')
# VideoWriter expects the frame size as (width, height); the project videos are 960x540
out = cv2.VideoWriter('output.avi', fourcc, 15.0, (960, 540))
cap = cv2.VideoCapture('test_videos/solidYellowLeft.mp4')

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break  # end of the video stream
    RGB_img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    processed_image = process_image(RGB_img)
    processed_image_RGB = cv2.cvtColor(processed_image, cv2.COLOR_BGR2RGB)
    out.write(processed_image_RGB)
    cv2.imshow('frame', processed_image_RGB)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
out.release()
cv2.destroyAllWindows()
```

Let's try the one with the solid white lane on the right first ...

## Writeup and Submission

If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.
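The length-weighted averaging done in `average_line` can be checked in isolation, without OpenCV; a small pure-Python sketch of the same math (the function name and test segments are ours, for illustration):

```python
import math

def weighted_average_line(segments):
    """segments: list of (x1, y1, x2, y2) tuples.
    Returns (slope, intercept) averaged with each segment weighted by its
    length, mirroring the logic of average_line above; None if no usable segment."""
    num_slope = num_intercept = total_length = 0.0
    for x1, y1, x2, y2 in segments:
        if x2 == x1:
            continue  # skip vertical segments, as the pipeline does
        slope = (y2 - y1) / (x2 - x1)
        intercept = y1 - slope * x1
        length = math.hypot(x2 - x1, y2 - y1)
        num_slope += length * slope
        num_intercept += length * intercept
        total_length += length
    if total_length == 0.0:
        return None
    return num_slope / total_length, num_intercept / total_length

# two collinear segments on y = 2x + 1 must average back to (2, 1)
segs = [(0, 1, 1, 3), (2, 5, 4, 9)]
print(weighted_average_line(segs))
```

Weighting by segment length is the design choice that makes short, noisy Hough fragments matter less than the long, confident ones.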
Adapted from [https://github.com/PacktPublishing/Bioinformatics-with-Python-Cookbook-Second-Edition](https://github.com/PacktPublishing/Bioinformatics-with-Python-Cookbook-Second-Edition), Chapter 2.

# Getting the necessary packages

```
conda config --add channels bioconda
conda install samtools pysam
```

# Getting the necessary data

You only need to do this once.

```
!rm -f data/NA18489.chrom20.ILLUMINA.bwa.YRI.exome.20121211.bam 2>/dev/null
!rm -f data/NA18489.chrom20.ILLUMINA.bwa.YRI.exome.20121211.bam.bai 2>/dev/null

# BAM is the alignment file and BAI is the index, generated by samtools
!wget ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/NA18489/exome_alignment/NA18489.chrom20.ILLUMINA.bwa.YRI.exome.20121211.bam
!wget ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/NA18489/exome_alignment/NA18489.chrom20.ILLUMINA.bwa.YRI.exome.20121211.bam.bai
!mv *bam *bai data
```

# The recipe

```
#pip install pysam
from collections import defaultdict

import numpy as np
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
import pysam

bam = pysam.AlignmentFile('data/NA18489.chrom20.ILLUMINA.bwa.YRI.exome.20121211.bam', 'rb')

headers = bam.header
for record_type, records in headers.items():
    print(record_type)
    for i, record in enumerate(records):
        if type(record) == dict:
            print('\t%d' % (i + 1))
            for field, value in record.items():
                print('\t\t%s\t%s' % (field, value))
        else:
            print('\t\t%s' % record)
```

Check the meaning of [paired-end reads](https://www.illumina.com/science/technology/next-generation-sequencing/plan-experiments/paired-end-vs-single-read.html).

```
# pysam is 0-based
for rec in bam:
    if rec.cigarstring.find('M') > -1 and rec.cigarstring.find('S') > -1 \
       and not rec.is_unmapped and not rec.mate_is_unmapped:
        break

print('query template name and ID')
print(rec.query_name, rec.reference_id, bam.getrname(rec.reference_id),
      rec.reference_start, rec.reference_end)
print('CIGAR string (indicates how many mapped bases. How many bases per read here?)')
print(rec.cigarstring)
print('position and length of the alignment')
print(rec.query_alignment_start, rec.query_alignment_end, rec.query_alignment_length)
print('info for the paired end')
print(rec.next_reference_id, rec.next_reference_start, rec.template_length)
print(rec.is_paired, rec.is_proper_pair, rec.is_unmapped, rec.mapping_quality)
print('Phred score for mapped and complete sequence')
print(rec.query_qualities)
print(rec.query_alignment_qualities)
print('Finally, the complete sequence of this read')
print(rec.query_sequence)
```

Now, let us plot the distribution of the successfully mapped positions in a subset of sequences in the BAM file. We will use only the positions between 0 and 10 Mbp of chromosome 20. Mappability is not homogeneous!

```
counts = [0] * 76
for n, rec in enumerate(bam.fetch('20', 0, 10000000)):
    for i in range(rec.query_alignment_start, rec.query_alignment_end):
        counts[i] += 1
freqs = [x / (n + 1.) for x in counts]

fig, ax = plt.subplots(figsize=(16, 9))
ax.plot(range(1, 77), freqs)

phreds = defaultdict(list)
for rec in bam.fetch('20', 0, None):
    for i in range(rec.query_alignment_start, rec.query_alignment_end):
        phreds[i].append(rec.query_qualities[i])

maxs = [max(phreds[i]) for i in range(76)]
tops = [np.percentile(phreds[i], 95) for i in range(76)]
medians = [np.percentile(phreds[i], 50) for i in range(76)]
bottoms = [np.percentile(phreds[i], 5) for i in range(76)]

medians_fig = [x - y for x, y in zip(medians, bottoms)]
tops_fig = [x - y for x, y in zip(tops, medians)]
maxs_fig = [x - y for x, y in zip(maxs, tops)]

fig, ax = plt.subplots(figsize=(16, 9))
ax.stackplot(range(1, 77), (bottoms, medians_fig, tops_fig, maxs_fig))
ax.plot(range(1, 77), maxs, 'k-')
```
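The Phred scores plotted above relate to the base-calling error probability by $Q = -10 \log_{10} P$; a tiny sketch of the conversion in both directions (the helper names are ours, not part of pysam):

```python
import math

def phred_to_error_prob(q):
    # Phred: Q = -10 * log10(P)  =>  P = 10 ** (-Q / 10)
    return 10 ** (-q / 10)

def error_prob_to_phred(p):
    return -10 * math.log10(p)

# Q30 means roughly a 1-in-1000 chance that the base call is wrong
print(phred_to_error_prob(30))    # ≈ 0.001
print(error_prob_to_phred(0.01))  # ≈ 20
```

So in the stackplot above, positions whose 5th-percentile score drops below ~20 are positions where a non-trivial fraction of base calls has more than a 1% chance of being wrong.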
# CNN study case: train on Cifar10. On this notebook we will cover the training of a simple model on the CIFAR 10 dataset, and we will cover the next topics: - Cifar10 dataset - Model architecture: - 2D Convolutional layers - MaxPooling - Relu activation - Batch normalization - Image generator data augmentation - TTA (Test time augmentation) ### The dataset The dataset is composed by 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. These are the image classes: - airplane - automobile - bird - cat - deer - dog - frog - horse - ship - truck ``` import os os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' import tensorflow as tf tf.get_logger().setLevel('ERROR') physical_devices = tf.config.experimental.list_physical_devices('GPU') tf.config.experimental.set_memory_growth(physical_devices[0], True) tf.keras.backend.clear_session() # For easy reset of notebook state. from tensorflow.keras.datasets.cifar10 import load_data from tensorflow.keras.utils import to_categorical (trainX, trainY), (testX, testY) = load_data() # normalize pixel values trainX = trainX.astype('float32') / 255 testX = testX.astype('float32') / 255 # one hot encode target values trainY = to_categorical(trainY) testY = to_categorical(testY) trainX[0].shape ``` <font color=red><b>Plot some examples of the dataset. 
<br>Hint: use the imshow function of the pyplot package</b> </font> ``` from matplotlib import pyplot as plt %matplotlib inline for j in range (10): plt.imshow((trainX[j]*255).astype('uint8')) plt.show() from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Conv2D from tensorflow.keras.layers import MaxPooling2D from tensorflow.keras.layers import Dense from tensorflow.keras.layers import Flatten from tensorflow.keras.layers import BatchNormalization from numpy import mean from numpy import std ``` ## Model Architecture Build the CNN model to be trained train on the data, on this config: - Conv2d layer, with 32 units and 3x3 filter, with relu activation and padding of "same" type. Use the "he_uniform" initializer. - batchnorm - max pooling (2x2) - Conv2d layer, with 64 units and 3x3 filter, with relu activation and padding of "same" type. Use the "he_uniform" initializer. - batchnorm - max pooling (2x2) - Dense layer, with 128 units - Dense softmax layer - On compilation, use adam as the optimizer and categorical_crossentropy as the loss function. Add 'accuracy' as a metric - Print the summary <font color=red><b>Remember to initialize it propperly and to include input_shape on the first layer. 
<br> Hint: - Use the imported libraries</b></font>

```
def define_model():
    # define model
    model = Sequential()
    model.add(Conv2D(32, (3, 3), activation='relu', padding='same', kernel_initializer='he_uniform', input_shape=(32, 32, 3)))
    model.add(BatchNormalization())
    model.add(MaxPooling2D((2, 2)))
    model.add(Conv2D(64, (3, 3), activation='relu', padding='same', kernel_initializer='he_uniform'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D((2, 2)))
    model.add(Flatten())
    model.add(Dense(128, activation='relu', kernel_initializer='he_uniform'))
    model.add(BatchNormalization())
    model.add(Dense(10, activation='softmax'))
    # compile model
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return model
```

We are going to train our model in small steps in order to evaluate the hyperparameters and the strategy. To do that, we will define a step epoch count and will train and evaluate the model for that number of epochs. After a number of repeats we will reduce the effect of the random initialization of certain parameters.
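As a sanity check on the architecture above, the number of trainable parameters in each Conv2D layer can be computed by hand (kernel weights plus one bias per filter); a minimal sketch:

```python
def conv2d_params(kernel_h, kernel_w, in_channels, filters):
    """Trainable parameters of a Conv2D layer: one kernel per filter plus a bias per filter."""
    return (kernel_h * kernel_w * in_channels) * filters + filters

# first layer above: 3x3 kernels over the RGB input, 32 filters
print(conv2d_params(3, 3, 3, 32))   # 896
# second layer: 3x3 kernels over the 32 feature maps, 64 filters
print(conv2d_params(3, 3, 32, 64))  # 18496
```

These numbers should match the corresponding Conv2D rows of `model.summary()`.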
<font color=red>Evaluate the built model by training 10 times on different initializations<b> Hint: we would like to have some parameters of the score distribution, like the ones imported </b></font>

```
step_epochs = 3
batch_size = 128

def evaluate_model(model, trainX, trainY, testX, testY):
    # fit model
    model.fit(trainX, trainY, epochs=step_epochs, batch_size=batch_size, verbose=0)
    # evaluate model
    _, acc = model.evaluate(testX, testY, verbose=0)
    return acc

def evaluate(trainX, trainY, testX, testY, repeats=10):
    scores = list()
    for _ in range(repeats):
        # define model
        model = define_model()
        # fit and evaluate model
        accuracy = evaluate_model(model, trainX, trainY, testX, testY)
        # store score
        scores.append(accuracy)
        print('> %.3f' % accuracy)
    return scores

# evaluate model
scores = evaluate(trainX, trainY, testX, testY)
# summarize result
print('Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))
```

### Keras ImageDataGenerator

In order to perform some data augmentation, Keras includes the ImageDataGenerator, which can be used to improve performance and reduce generalization error when training neural network models for computer vision problems. A range of techniques is supported, as well as pixel scaling methods. Some of the most common are:

- Image shifts via the width_shift_range and height_shift_range arguments.
- Image flips via the horizontal_flip and vertical_flip arguments.
- Image rotations via the rotation_range argument.
- Image brightness via the brightness_range argument.
- Image zoom via the zoom_range argument.
Let's see it with an example:

```
# expand dimension to one sample
from numpy import expand_dims
from tensorflow.keras.preprocessing.image import ImageDataGenerator

data = trainX[0]*255
samples = expand_dims(data, 0)

# create image data augmentation generator
datagen = ImageDataGenerator(horizontal_flip=True, featurewise_center=True, featurewise_std_normalization=True,
                             rotation_range=20, width_shift_range=0.2, height_shift_range=0.2)

# prepare iterator
it = datagen.flow(samples, batch_size=1)

# generate samples and plot
for i in range(9):
    # define subplot
    plt.subplot(330 + 1 + i)
    # generate batch of images
    batch = it.next()
    # convert to unsigned integers for viewing
    image = batch[0].astype('uint8')
    # plot raw pixel data
    plt.imshow(image)

# show the figure
plt.show()

batch.shape
```

<font color=red>Evaluate the model with data augmentation
<br> Hint: Use the `?model.fit_generator` command and take into account the parameters of model.fit_generator: it needs to include epochs, steps_per_epoch and a generator (i.e. a flow of images). </font>

```
# fit and evaluate a defined model
def evaluate_model_increased(model, trainX, trainY, testX, testY):
    datagen = ImageDataGenerator(horizontal_flip=True)
    # in case there is mean/std to normalize
    datagen.fit(trainX)
    # Fit the model on the batches generated by datagen.flow().
    model.fit_generator(datagen.flow(trainX, trainY, batch_size=batch_size),
                        epochs=step_epochs,
                        steps_per_epoch=len(trainX) // batch_size,
                        verbose=0)
    # evaluate model
    _, acc = model.evaluate(testX, testY, verbose=0)
    return acc

# repeatedly evaluate model, return distribution of scores
def repeated_evaluation_increased(trainX, trainY, testX, testY, repeats=10):
    scores = list()
    for _ in range(repeats):
        # define model
        model = define_model()
        # fit and evaluate model
        accuracy = evaluate_model_increased(model, trainX, trainY, testX, testY)
        # store score
        scores.append(accuracy)
        print('> %.3f' % accuracy)
    return scores

# evaluate model
scores = repeated_evaluation_increased(trainX, trainY, testX, testY)
# summarize result
print('Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))
```

### Test-time augmentation (TTA)

The image data augmentation technique can also be applied when making predictions with a fit model, in order to allow the model to make predictions for multiple different versions of each image in the test dataset. Specifically, it involves creating multiple augmented copies of each image in the test set, having the model make a prediction for each, then returning an ensemble of those predictions (e.g. majority voting in the case of classification).

Augmentations are chosen to give the model the best opportunity for correctly classifying a given image, and the number of copies of an image for which a model must make a prediction is often small, such as less than 10 or 20. Often, a single simple test-time augmentation is performed, such as a shift, crop, or image flip.

<font color=red>Evaluate the model with data augmentation.
<b>Please note in this case we are not going to use the generator on training, but on testing.</b>
<br> Hint: Use the model.predict_generator function </font>

```
from sklearn.metrics import accuracy_score
import numpy as np

n_examples_per_image = 3

# make a prediction using test-time augmentation
def prediction_augmented_on_test(datagen, model, image, n_examples):
    # convert image into dataset
    samples = expand_dims(image, 0)
    # prepare iterator
    it = datagen.flow(samples, batch_size=n_examples)
    # make predictions for each augmented image
    yhats = model.predict_generator(it, steps=n_examples, verbose=0)
    # sum across predictions
    summed = np.sum(yhats, axis=0)
    # argmax across classes
    return np.argmax(summed)

# evaluate a model on a dataset using test-time augmentation
def evaluate_model_test_time_augmented(model, testX, testY):
    # configure image data augmentation
    datagen = ImageDataGenerator(horizontal_flip=True)
    # generate the defined number of augmented images per test set image
    yhats = list()
    for i in range(len(testX)):
        # make augmented prediction
        yhat = prediction_augmented_on_test(datagen, model, testX[i], n_examples_per_image)
        # store for evaluation
        yhats.append(yhat)
    # calculate accuracy
    testY_labels = np.argmax(testY, axis=1)
    acc = accuracy_score(testY_labels, yhats)
    return acc

def evaluate_model_test_augmented(model, trainX, trainY, testX, testY):
    # fit model
    model.fit(trainX, trainY, epochs=step_epochs, batch_size=batch_size, verbose=0)
    # evaluate model
    acc = evaluate_model_test_time_augmented(model, testX, testY)
    return acc

def evaluate_test_augmented(trainX, trainY, testX, testY, repeats=10):
    scores = list()
    for _ in range(repeats):
        # define model
        model = define_model()
        # fit and evaluate model
        accuracy = evaluate_model_test_augmented(model, trainX, trainY, testX, testY)
        # store score
        scores.append(accuracy)
        print('> %.3f' % accuracy)
    return scores

# evaluate model
scores = evaluate_test_augmented(trainX, trainY, testX, testY)
# summarize result
print('Accuracy: %.3f (%.3f)' % (mean(scores), std(scores))) ```
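The per-image loop above is simple but slow. For flip-only TTA, the same ensemble idea can be sketched with plain NumPy by averaging class probabilities over the original and flipped copies; `fake_predict` below is a hypothetical stand-in for the trained model's predict call, used only to keep the sketch self-contained:

```python
import numpy as np

def tta_predict(predict_fn, images):
    """Average class probabilities over the original and horizontally flipped copies."""
    probs = predict_fn(images)                      # (n, classes)
    probs_flipped = predict_fn(images[:, :, ::-1])  # flip the width axis of (n, h, w, c) images
    return np.argmax((probs + probs_flipped) / 2, axis=1)

# hypothetical model: "predicts" class 1 strongly for every image
def fake_predict(batch):
    return np.tile(np.array([0.1, 0.8, 0.1]), (len(batch), 1))

images = np.zeros((4, 32, 32, 3))
print(tta_predict(fake_predict, images))  # [1 1 1 1]
```

With a real model, `model.predict` would take the place of `fake_predict`; averaging probabilities rather than voting on labels is a common variant of the ensemble step.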
# Predicting Heart Disease using Machine Learning

This notebook uses various Python based machine learning and data science libraries in an attempt to build a machine learning model capable of predicting whether or not someone has heart disease based on their medical attributes.

We're going to take the following approach:
1. [Problem Definition](#definition)
2. [Data](#data)
3. [Evaluation](#evaluation)
4. [Features](#features)
5. [Modelling](#modelling)
6. [Experimentation](#experimentation)

## <a name="definition">1. Problem Definition</a>

In a statement,
> Given clinical parameters about a patient, can we predict whether or not they have heart disease?

## <a name="data">2. Data</a>

[Heart Disease UCI - Original Version](https://archive.ics.uci.edu/ml/datasets/heart+disease)

[Heart Disease UCI - Kaggle Version](https://www.kaggle.com/ronitf/heart-disease-uci)

## <a name="evaluation">3. Evaluation</a>

> If we can reach 95% accuracy at predicting whether or not a patient has heart disease during the proof of concept, we'll pursue the project.

## <a name="features">4. Features</a>

The following are the features we'll use to predict our target variable (heart disease or no heart disease).

1. age - age in years
2. sex - (1 = male; 0 = female)
3. cp - chest pain type
    * 0: Typical angina: chest pain related to decreased blood supply to the heart
    * 1: Atypical angina: chest pain not related to heart
    * 2: Non-anginal pain: typically esophageal spasms (non heart related)
    * 3: Asymptomatic: chest pain not showing signs of disease
4. trestbps - resting blood pressure (in mm Hg on admission to the hospital)
    * anything above 130-140 is typically cause for concern
5. chol - serum cholesterol in mg/dl
    * serum = LDL + HDL + .2 * triglycerides
    * above 200 is cause for concern
6. fbs - (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false)
    * '>126' mg/dL signals diabetes
7. restecg - resting electrocardiographic results
    * 0: Nothing to note
    * 1: ST-T Wave abnormality - can range from mild symptoms to severe problems - signals non-normal heart beat
    * 2: Possible or definite left ventricular hypertrophy - enlarged heart's main pumping chamber
8. thalach - maximum heart rate achieved
9. exang - exercise induced angina (1 = yes; 0 = no)
10. oldpeak - ST depression induced by exercise relative to rest
    * looks at stress of heart during exercise
    * unhealthy heart will stress more
11. slope - the slope of the peak exercise ST segment
    * 0: Upsloping: better heart rate with exercise (uncommon)
    * 1: Flatsloping: minimal change (typical healthy heart)
    * 2: Downsloping: signs of unhealthy heart
12. ca - number of major vessels (0-3) colored by fluoroscopy
    * colored vessel means the doctor can see the blood passing through
    * the more blood movement the better (no clots)
13. thal - thallium stress result
    * 1,3: normal
    * 6: fixed defect: used to be a defect but ok now
    * 7: reversible defect: no proper blood movement when exercising
14. target - have disease or not (1=yes, 0=no) (= the predicted attribute)

**Note:** No personally identifiable information (PII) can be found in the dataset.
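The serum formula in feature 5 can be checked with a quick worked example (the LDL/HDL/triglyceride numbers below are hypothetical, chosen only to illustrate the arithmetic):

```python
def serum_cholesterol(ldl, hdl, triglycerides):
    """serum = LDL + HDL + 0.2 * triglycerides (feature 5 above)."""
    return ldl + hdl + 0.2 * triglycerides

# hypothetical patient: LDL 130, HDL 50, triglycerides 150
chol = serum_cholesterol(130, 50, 150)
print(chol)        # 210.0
print(chol > 200)  # True -> cause for concern by the rule of thumb above
```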
```
# Regular EDA and plotting libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Models from scikit-learn
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

# Model Evaluations
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.metrics import precision_score, recall_score, f1_score
from sklearn.metrics import plot_roc_curve, plot_confusion_matrix
```

---------

## Load data

```
df = pd.read_csv('data/heart-disease.csv')
df.head()
```

--------

## Exploratory Data Analysis (EDA)

1. What question(s) are we trying to solve?
2. What kind of data do we have and how do we treat different types?
3. What is the missing data and how are we going to handle it?
4. What are the outliers, why do we care about them and how are we going to handle them?
5. How can we add, change or remove features to get more out of the data?

```
df.tail()

df.info()

# check if there is any missing data
df.isnull().sum()

# how many classes are in the target variable?
df['target'].value_counts()

# visualization of classes
# sns.countplot(x=df['target']);
df['target'].value_counts().plot.bar(color=['salmon', 'lightblue']);
plt.xlabel('0: No Disease, 1: Heart Disease')
plt.ylabel('Count');
```

It seems there are 2 classes and the dataset is fairly balanced.

-------

## Finding Patterns in data

```
df.describe().transpose()
```

### Heart disease frequency according to Sex

```
df['sex'].value_counts()
```

There are about 207 males and 96 females.
So we have a larger male population, and we need to keep that in the back of our minds (1 = male; 0 = female).

```
pd.crosstab(df['sex'], df['target'])

72/(24+72), 93/(114+93)
```

We can see that, based on the existing data, about 75% of females have heart disease, versus about 45% of males.

```
# visualize the data
# pd.crosstab(df['sex'], df['target']).plot(kind='bar', color=['salmon', 'lightblue']);
pd.crosstab(df['sex'], df['target']).plot(kind='bar');
plt.title('Heart disease frequency by Sex')
plt.xlabel('0: No Disease, 1: Heart Disease')
plt.ylabel('Count')
plt.legend(['Female', 'Male']);
plt.xticks(rotation=0);
```

### Age vs. Max Heart Rate for people who have Heart Disease

```
df.columns

plt.figure(figsize=(10, 7))

# positive cases
sns.scatterplot(data=df, x=df.age[df.target==1], y=df.thalach[df.target==1], color='salmon', s=50, alpha=0.8);
# negative cases
sns.scatterplot(data=df, x=df.age[df.target==0], y=df.thalach[df.target==0], color='lightblue', s=50, alpha=0.8)

plt.title('Heart Disease in function of Age and Max Heart Rate')
plt.xlabel('Age')
plt.ylabel('Max Heart Rate');
plt.legend(['Heart Disease', 'No Disease']);
```

### Distribution of Age

```
sns.histplot(data=df, x=df['age'], bins=30);
```

### Heart Disease Frequency per Chest Pain level

cp - chest pain type
* 0: Typical angina: chest pain related to decreased blood supply to the heart
* 1: Atypical angina: chest pain not related to heart
* 2: Non-anginal pain: typically esophageal spasms (non heart related)
* 3: Asymptomatic: chest pain not showing signs of disease

```
pd.crosstab(df['target'], df['cp'])

pd.crosstab(df['cp'], df['target']).plot(kind='bar', color=['lightblue', 'salmon']);
plt.title('Heart Disease Frequency per Chest Pain level')
plt.xlabel('Chest Pain Level')
plt.ylabel('Count')
plt.legend(['No Disease', 'Heart Disease'])
plt.xticks(rotation=0);
```

### Correlation between independent variables

```
df.corr()['target'][:-1]

# visualization
corr_matrix = df.corr()
plt.figure(figsize=(12, 8))
sns.heatmap(corr_matrix, annot=True, linewidth=0.5, fmt='.2f', cmap='viridis_r');
```

As per the heatmap above, `Chest pain (cp)` has the highest positive correlation with the target variable among the features, followed by the `thalach (Maximum Heart Rate)` variable. On the other hand, `exang - exercise induced angina` and `oldpeak - ST depression induced by exercise relative to rest` have the lowest correlation with the target variable.

--------

## <a name="modelling">5. Modelling</a>

```
df.head(2)

# split features and labels
X = df.drop('target', axis=1)
y = df['target']

X.head(2)
y.head(2)

# split into training, testing datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

X_train.shape, X_test.shape, y_train.shape, y_test.shape
```

As there is no missing data and no values to convert from categorical to numerical, we will continue to build models and train them.

### Model Training

We will try 3 different models.
1. Logistic Regression
2. K-Nearest Neighbours Classifier
3. Random Forest Classifier

```
# put models in dictionary
models = {
    'LogisticRegression': LogisticRegression(max_iter=1000),
    'KNN': KNeighborsClassifier(),
    'RandomForestClassifier': RandomForestClassifier()
}

# create function to fit and score models
def fit_and_score(models, X_train, X_test, y_train, y_test):
    """
    Fits and evaluates given machine learning models.
    models: a dictionary of different scikit-learn machine learning models
    X_train: training data (no labels)
    X_test: testing data (no labels)
    y_train: training labels
    y_test: testing labels
    returns model scores dictionary.
""" # set random seed np.random.seed(42) # make dictonary to keep scores model_scores = {} # loop through models to fit and score for model_name, model in models.items(): model.fit(X_train, y_train) # fit model score = model.score(X_test, y_test) # get score model_scores[model_name] = score # put score for each model return model_scores # fit and score model_scores = fit_and_score(models, X_train, X_test, y_train, y_test) model_scores ``` ### Model Comparison ``` model_compare = pd.DataFrame(model_scores, index=['accuracy']) model_compare.head() model_compare.T.plot(kind='bar'); ``` --------- ## <a name="experimentation">6.Experimentation</a> ### Tuning or Improving our models Now we've got baseline models. and we might want to experiment to improve the results. We will be doing: * Hyperparameter tuning * Feature Importance * Confusion Matrix * Cross Validation * Precision * Recall * F1 Score * Classification Report * ROC curve * Area Under the Curve (AUC) ### Hyperparameter Tuning 1. [Hyperparameter Tuning - Manually](#manually) 2. 
[Hyperparameter Tuning - using RandomizedSearchCV](#randomized) 3.[Hyperparameter Tuning - using GridSearchCV](#grid) ### <a name='manually'>Hyperparameter Tuning - Manually</a> ``` train_scores = [] test_scores = [] ``` ### KNN ``` # create a different values of parameters neighbours = range(1, 21) # instantiate instance knn = KNeighborsClassifier() # loop through different n_neighbors for i in neighbours: # set param knn.set_params(n_neighbors=i) # fit model knn.fit(X_train, y_train) # get score train_scores.append(knn.score(X_train, y_train)) test_scores.append(knn.score(X_test, y_test)) plt.plot(neighbours, train_scores, label='Train Score') plt.plot(neighbours, test_scores, label='Test Score'); plt.xticks(np.arange(1,21,1)) plt.legend(); plt.xlabel('n_neighbor') plt.ylabel('score'); print(f"Maximum KNN score on the test data: {max(test_scores) * 100:.2f}%") ``` ----- ## <a name='randomized'>Hyperparameter Tuning - using RandomizedSearchCV</a> We are going to tune the following models using RandomizedSearchCV. 
* Logistic Regression
* RandomForest Classifier

```
# help(LogisticRegression)

np.logspace(-4, 4, 20)

# help(RandomForestClassifier)
```

#### Create Hyperparameter Grid

```
# create hyperparameter grid for Logistic Regression
log_reg_grid = {
    'C': np.logspace(-4, 4, 20),
    'solver': ['liblinear']
}

# create hyperparameter grid for Random Forest Classifier
rf_grid = {
    'n_estimators': np.arange(10, 1000, 50),
    'max_depth': [None, 3, 5, 10],
    'min_samples_split': np.arange(2, 20, 2),
    'min_samples_leaf': np.arange(1, 20, 2)
}
```

#### Create RandomizedSearchCV with created Hyperparameter Grid (Logistic Regression)

```
np.random.seed(42)

# set up random hyperparameter search for Logistic Regression
rs_log_reg = RandomizedSearchCV(LogisticRegression(), log_reg_grid, cv=5, n_iter=20, verbose=True)

# fit random hyperparameter search model for Logistic Regression
rs_log_reg.fit(X_train, y_train)

# check best parameters
rs_log_reg.best_params_

# check the score
rs_log_reg.score(X_test, y_test)

# compare with baseline scores
model_scores
```

#### Create RandomizedSearchCV with created Hyperparameter Grid (Random Forest Classifier)

```
np.random.seed(42)

# set up random hyperparameter search for RandomForestClassifier
rs_rf = RandomizedSearchCV(RandomForestClassifier(), rf_grid, cv=5, n_iter=20, verbose=True)

# fit random hyperparameter search model
rs_rf.fit(X_train, y_train)

# check best parameters
rs_rf.best_params_

# check the score
rs_rf.score(X_test, y_test)

# compare with baseline scores
model_scores
```

**We can see that between LogisticRegression and RandomForestClassifier using RandomizedSearchCV, the LogisticRegression score is better.**

**So we will explore using LogisticRegression with GridSearchCV to further improve the performance.**

---------

## <a name='grid'>Hyperparameter Tuning - using GridSearchCV</a>

We are going to tune the following models using GridSearchCV.
* Logistic Regression

```
# create hyperparameter grid for Logistic Regression
log_reg_grid = {
    'C': np.logspace(-4, 4, 20),
    'solver': ['liblinear']
}

# set up grid hyperparameter search for Logistic Regression
gs_log_reg = GridSearchCV(LogisticRegression(), log_reg_grid, cv=5, verbose=True)

# train the model
gs_log_reg.fit(X_train, y_train)

# get best parameters
gs_log_reg.best_params_

# get the score
gs_log_reg.score(X_test, y_test)
```

---------

### Evaluating Models

Evaluating our tuned machine learning classifiers, beyond accuracy:
* ROC and AUC score
* Confusion Matrix, Plot Confusion Matrix
* Classification Report
* Precision
* Recall
* F1

```
# make predictions
y_preds = gs_log_reg.predict(X_test)

# ROC curve and AUC
plot_roc_curve(gs_log_reg, X_test, y_test);

confusion_matrix(y_test, y_preds)

plot_confusion_matrix(gs_log_reg, X_test, y_test);

print(classification_report(y_test, y_preds))
```

**NOTE: The above `classification report` only covers ONE train/test split of the data.**

**So we may want to use cross-validated precision, recall and F1 score to get the whole picture.**

--------

## Calculate evaluation metrics using Cross Validated Precision, Recall and F1 score

- we will use `cross_val_score` for this with different `scoring` parameters.
- we will create a new model and validate it on the whole dataset.
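As a reminder of what those `scoring` names compute, precision, recall and F1 can be derived by hand from confusion-matrix counts. A minimal sketch with made-up counts, independent of scikit-learn:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall and F1 from raw confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# hypothetical counts: 20 true positives, 5 false positives, 10 false negatives
p, r, f1 = precision_recall_f1(20, 5, 10)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.8 0.667 0.727
```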
```
# check current best parameters
gs_log_reg.best_params_

# create a new classifier with current best parameters
clf = LogisticRegression(C=0.23357214690901212, solver='liblinear')

# Cross Validated Accuracy
cv_accuracy = cross_val_score(clf, X, y, scoring='accuracy', cv=5)
cv_accuracy

# mean of cross validated accuracy
cv_accuracy = np.mean(cv_accuracy)
cv_accuracy

# Cross Validated Precision
cv_precision = cross_val_score(clf, X, y, scoring='precision', cv=5)
cv_precision = np.mean(cv_precision)
cv_precision

# Cross Validated Recall
cv_recall = cross_val_score(clf, X, y, scoring='recall', cv=5)
cv_recall = np.mean(cv_recall)
cv_recall

# Cross Validated F1
cv_f1 = cross_val_score(clf, X, y, scoring='f1', cv=5)
cv_f1 = np.mean(cv_f1)
cv_f1

# Visualize cross-validated metrics
cv_metrics = pd.DataFrame({'Accuracy': cv_accuracy,
                           'Precision': cv_precision,
                           'Recall': cv_recall,
                           'F1': cv_f1},
                          index=[0])
cv_metrics.T.plot.bar(legend=False);
plt.title('Cross Validated Classification Metrics')
plt.xticks(rotation=30);
```

-----------

## Feature Importance

Feature Importance is another way of asking, "which features contributed most to the outcomes of the model, and how did they contribute?"

Finding Feature Importance is different for each machine learning model.

### Finding Feature Importance for Logistic Regression

```
model = LogisticRegression(C=0.23357214690901212, solver='liblinear')
model.fit(X_train, y_train)

# check coefficients of features
model.coef_

df.head(2)

# Match coefficients of features to column names
feature_dict = dict(zip(df.columns, list(model.coef_[0])))
feature_dict
```

**NOTE: Unlike correlation, which is computed during EDA, the coefficient is model driven.** We got those coef_ values after we have the model.
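Logistic-regression coefficients are log-odds, so exponentiating them gives odds ratios, which are often easier to read. A minimal NumPy sketch (the coefficient values below are made up for illustration, not taken from the fitted model):

```python
import numpy as np

# hypothetical coefficients for a few features
coefs = {'slope': 0.47, 'sex': -0.90, 'cp': 0.66}

# odds ratio per feature: e**coef; >1 pushes towards the positive class, <1 away from it
odds_ratios = {name: float(np.exp(c)) for name, c in coefs.items()}
for name, ratio in odds_ratios.items():
    print(name, round(ratio, 2))
```

The same transformation could be applied to `feature_dict` above to read each fitted coefficient as a multiplicative effect on the odds of heart disease.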
```
# Visualize Feature Importance
feature_df = pd.DataFrame(feature_dict, index=[0])
feature_df.T.plot.bar(title='Feature Importance of Logistic Regression', legend=False);

pd.crosstab(df['slope'], df['target'])
```

Based on the coef_ values, the higher the value of slope, the more the model tends to predict a higher target value (0 → 1, i.e. more likely to have heart disease).

-------

```
pd.crosstab(df['sex'], df['target'])

72/24

93/114
```

Based on the coef_ values, the higher the value of sex (0 → 1), the more the model tends to predict a lower target value.

Example: the ratio of target 1 to target 0 cases
- for sex 0 (female) is 72/24 = 3.0
- for sex 1 (male) is 93/114 ≈ 0.816

So the ratio decreases from 3.0 to about 0.816 as sex goes from 0 to 1.

-------

## Additional Experimentation

To improve our evaluation metrics, we can:
* collect more data.
* try different models like XGBoost or CatBoost.
* improve the current model with additional hyperparameter tuning.

```
# save the model
from joblib import dump

dump(clf, 'model/mdl_logistic_regression')
```
# Keras Callbacks and Functional API

```
from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout
from keras.optimizers import SGD, RMSprop
from keras.callbacks import EarlyStopping, TensorBoard, ModelCheckpoint
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt

(X_train_t, y_train), (X_test_t, y_test) = cifar10.load_data()

X_train_t = X_train_t.astype('float32') / 255.
X_test_t = X_test_t.astype('float32') / 255.

X_train = X_train_t.reshape(len(X_train_t), 32*32*3)
X_test = X_test_t.reshape(len(X_test_t), 32*32*3)

print("Training set:")
print("Tensor images shape:\t", X_train_t.shape)
print("Flat images shape:\t", X_train.shape)
print("Labels shape:\t\t", y_train.shape)

plt.figure(figsize=(15, 4))
for i in range(0, 8):
    plt.subplot(1, 8, i+1)
    plt.imshow(X_train[i].reshape(32, 32, 3))
    plt.title(y_train[i])
```

## Callbacks on a simple model

```
outpath='/tmp/tensorflow_logs/cifar/'

early_stopper = EarlyStopping(monitor='val_acc', patience=10)
tensorboard = TensorBoard(outpath, histogram_freq=1)
checkpointer = ModelCheckpoint(outpath+'weights_epoch_{epoch:02d}_val_acc_{val_acc:.2f}.hdf5',
                               monitor='val_acc')

model = Sequential()
model.add(Dense(1024, activation='relu', input_dim=3072))
model.add(Dense(512, activation='relu'))
model.add(Dense(10, activation='softmax'))

model.compile(loss='sparse_categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

model.fit(X_train, y_train, batch_size=128, epochs=5, verbose=1, validation_split=0.1,
          callbacks=[early_stopper, tensorboard, checkpointer])

import os
sorted(os.listdir(outpath))
```

Now check the tensorboard.
- If using the provided instance, just browse to: `http://<your-ip>:6006`
- If using local, open a terminal, activate the environment and run:
```
tensorboard --logdir=/tmp/tensorflow_logs/cifar/
```
then open a browser at `localhost:6006`

You should see something like this:

![tensorboard.png](../assets/tensorboard.png)

## Exercise 1: Keras functional API

We've built a model using the `Sequential API` from Keras. Keras also offers a [functional API](https://keras.io/getting-started/functional-api-guide/). This API is the way to go for defining complex models, such as multi-output models, directed acyclic graphs, or models with shared layers.

Can you rewrite the model above using the functional API?

```
from keras.layers import Input
from keras.models import Model

# define `model` here using the functional API before compiling it

model.compile(loss='sparse_categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

model.fit(X_train, y_train, batch_size=128, epochs=10, verbose=1, validation_split=0.1)

# Final test evaluation
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
```

## Exercise 2: Convolutional Model with Functional API

The above model is a very simple fully connected deep neural network. As we have seen, Convolutional Neural Networks are much more powerful when dealing with images.

The original data has shape: (N_images, Height, Width, Channels)

Can you write a convolutional model using the functional API?

```
from keras.layers.core import Dense, Dropout, Activation
from keras.layers import Conv2D, MaxPool2D, AveragePooling2D, Flatten
```

## Exercise 3: Discuss with the person next to you

1. What are the pros/cons of the sequential API?
- What are the pros/cons of the functional API?
- What are the key differences between a fully connected and a convolutional neural network?
- What is a dropout layer? How does it work? Why does it help?

*Copyright &copy; 2017 Francesco Mosconi & CATALIT LLC. All rights reserved.*
<pre>
Torch      : Manipulating vectors (dot product, addition etc.) and using the GPU
Numpy      : Manipulating vectors
Pandas     : Reading the CSV file
Matplotlib : Plotting figures
</pre>

```
import numpy as np
import torch
import pandas as pd
from matplotlib import pyplot as plt
```

<pre>
 O O O O O O O O          O O O O O
 O O O O O O O O          O O O O O
 O O O O O O O O          O O O O O
 O O O O O O O O          O O O O O

   Visible              Hidden/Feature
    Layer                   Layer
    (n_v)                   (n_h)

RBM : A class that initializes an RBM with default values
</pre>

<pre>
Parameters
n_v             : Number of visible inputs
                  Initialized by 0 but then takes the value of the number of inputs
n_h             : Number of features to extract
                  Must be set by user
k               : Sampling steps for contrastive divergence
                  Default value is 2 steps
epochs          : Number of epochs for training RBM
                  Must be set by user
mini_batch_size : Size of mini batch for training
                  Must be set by user
alpha           : Learning rate for updating parameters of RBM
                  Default value is 0.001
momentum        : Reduces large jumps when updating parameters
weight_decay    : Reduces the value of the weights after every step of contrastive divergence
data            : Data to be fitted for RBM
                  Must be given by user or else, that's all useless
</pre>

```
class RBM():
    # Parameters
    # n_v             : Number of visible inputs
    #                   Initialized by 0 but then takes the value of the number of inputs
    # n_h             : Number of features to extract
    #                   Must be set by user
    # k               : Sampling steps for contrastive divergence
    #                   Default value is 2 steps
    # epochs          : Number of epochs for training RBM
    #                   Must be set by user
    # mini_batch_size : Size of mini batch for training
    #                   Must be set by user
    # alpha           : Learning rate for updating parameters of RBM
    #                   Default value is 0.001
    # momentum        : Reduces large jumps when updating parameters
    # weight_decay    : Reduces the value of the weights after every step of contrastive divergence
    # data            : Data to be fitted for RBM
    #                   Must be given by user or else, that's all useless

    def __init__(self, n_v=0, n_h=0, k=2, epochs=15, mini_batch_size=64, alpha=0.001, momentum=0.9, weight_decay=0.001):
        self.number_features = 0
        self.n_v = n_v
        self.n_h = self.number_features
        self.k = k
        self.alpha = alpha
        self.momentum = momentum
        self.weight_decay = weight_decay
        self.mini_batch_size = mini_batch_size
        self.epochs = epochs
        self.data = torch.randn(1, device="cuda")

    # fit is called to fit the RBM to the provided data.
    # First, the data is converted to CUDA float tensors in the 0-1 range by dividing by its maximum value.
    # After calling this method, n_v is reinitialized to the number of input features present in data;
    # number_features must be set by the user before calling this method.
    # w         Weight tensor of the RBM          (n_v x n_h)  randomly initialized (Gaussian, scaled by 0.1)
    # a         Bias tensor for visible units     (n_v x 1)    initialized to 0.5's
    # b         Bias tensor for hidden units      (n_h x 1)    initialized to 1's
    # w_moment  Momentum values for weights       (n_v x n_h)  initialized to zeros
    # a_moment  Momentum values for visible bias  (n_v x 1)    initialized to zeros
    # b_moment  Momentum values for hidden bias   (n_h x 1)    initialized to zeros
    def fit(self):
        self.data /= self.data.max()
        self.data = self.data.type(torch.cuda.FloatTensor)
        self.n_v = len(self.data[0])
        self.n_h = self.number_features

        self.w = torch.randn(self.n_v, self.n_h, device="cuda") * 0.1
        self.a = torch.ones(self.n_v, device="cuda") * 0.5
        self.b = torch.ones(self.n_h, device="cuda")

        self.w_moment = torch.zeros(self.n_v, self.n_h, device="cuda")
        self.a_moment = torch.zeros(self.n_v, device="cuda")
        self.b_moment = torch.zeros(self.n_h, device="cuda")

        self.train()

    # train splits the dataset into mini-batches and runs for the given number of epochs
    def train(self):
        for epoch_no in range(self.epochs):
            ep_error = 0
            for i in range(0, len(self.data), self.mini_batch_size):
                mini_batch = self.data[i:i + self.mini_batch_size]
                ep_error += self.contrastive_divergence(mini_batch)
            print("Epoch Number : ", epoch_no, " Error : ", ep_error.item())

    # contrastive_divergence performs contrastive divergence using the Gibbs sampling algorithm
    # p_h_0   Probabilities of the hidden units given the input visible units
    # h_0     Hidden units sampled from those probabilities (0 or 1)
    # g_0     Positive associations of the RBM
    # wv_a    Sampled hidden units fed into the Gibbs chain
    # p_v_h   Probability of each visible neuron being active given the hidden neurons
    # p_h_v   Probability of each hidden neuron being active given the visible neurons
    # p_v_k   Visible unit probabilities after k steps of Gibbs sampling
    # p_h_k   Hidden unit probabilities after k steps of Gibbs sampling
    # g_k     Negative associations of the RBM
    # error   Reconstruction error for the given mini_batch
    def contrastive_divergence(self, v):
        p_h_0 = self.sample_hidden(v)
        h_0 = (p_h_0 >= torch.rand(self.n_h, device="cuda")).float()
        g_0 = v.transpose(0, 1).mm(h_0)
        wv_a = h_0

        # Gibbs sampling steps
        for step in range(self.k):
            p_v_h = self.sample_visible(wv_a)
            p_h_v = self.sample_hidden(p_v_h)
            wv_a = (p_h_v >= torch.rand(self.n_h, device="cuda")).float()

        p_v_k = p_v_h
        p_h_k = p_h_v

        g_k = p_v_k.transpose(0, 1).mm(p_h_k)
        self.update_parameters(g_0, g_k, v, p_v_k, p_h_0, p_h_k)
        error = torch.sum((v - p_v_k) ** 2)
        return error

    # p_h_v : probability of hidden neurons being activated given the visible neurons
    # p_v_h : probability of visible neurons being activated given the hidden neurons
    # ----------------------------- Bernoulli-Bernoulli RBM -----------------------------
    # p_h_v = sigmoid( visible x weight   + hidden_bias )
    # p_v_h = sigmoid( hidden  x weight.T + visible_bias )
    # -----------------------------------------------------------------------------------
    def sample_hidden(self, v):  # Bernoulli-Bernoulli RBM
        wv = v.mm(self.w)
        wv_a = wv + self.b
        p_h_v = torch.sigmoid(wv_a)
        return p_h_v

    def sample_visible(self, h):  # Bernoulli-Bernoulli RBM
        wh = h.mm(self.w.transpose(0, 1))
        wh_b = wh + self.a
        p_v_h = torch.sigmoid(wh_b)
        return p_v_h

    # del_w = (positive_associations - negative_associations)         + momentum * previous_del_w
    # del_a = sum( input - visible_probs_after_k_steps )              + momentum * previous_del_a
    # del_b = sum( initial_hidden_probs - hidden_probs_after_k_steps) + momentum * previous_del_b
    def update_parameters(self, g_0, g_k, v, p_v_k, p_h_0, p_h_k):
        self.w_moment *= self.momentum
        del_w = (g_0 - g_k) + self.w_moment

        self.a_moment *= self.momentum
        del_a = torch.sum(v - p_v_k, dim=0) + self.a_moment

        self.b_moment *= self.momentum
        del_b = torch.sum(p_h_0 - p_h_k, dim=0) + self.b_moment

        batch_size = v.size(0)
        self.w += del_w * self.alpha / batch_size
        self.a += del_a * self.alpha / batch_size
        self.b += del_b * self.alpha / batch_size

        self.w -= self.w * self.weight_decay

        self.w_moment = del_w
        self.a_moment = del_a
        self.b_moment = del_b


dataset = pd.read_csv("/home/pushpull/mount/intHdd/dataset/mnist/mnist_train.csv", header=None)
data = torch.tensor(np.array(dataset)[:, 1:], device="cuda")

mnist = RBM()
mnist.data = data
mnist.number_features = 300
error = mnist.fit()
```
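The Bernoulli-Bernoulli conditionals used by `sample_hidden`/`sample_visible` above can be checked on CPU with a small NumPy sketch (toy sizes; the names here are illustrative and not part of the class above):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_v, n_h = 6, 4
w = rng.normal(scale=0.1, size=(n_v, n_h))  # weights, as in the class: randn * 0.1
a = np.full(n_v, 0.5)                       # visible bias
b = np.ones(n_h)                            # hidden bias

# a mini-batch of 3 binary visible vectors
v = rng.integers(0, 2, size=(3, n_v)).astype(float)

p_h_v = sigmoid(v @ w + b)                     # P(h = 1 | v)
h = (p_h_v >= rng.random(n_h)).astype(float)   # sampled binary hidden states
p_v_h = sigmoid(h @ w.T + a)                   # P(v = 1 | h)

print(p_h_v.shape, p_v_h.shape)  # (3, 4) (3, 6)
```

Both conditionals are plain sigmoids of an affine map, which is what makes block Gibbs sampling between the two layers cheap.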
github_jupyter
# Step 0.0. Install LightAutoML

Uncomment the cell below if the repository was not cloned via git (e.g. on Colab or a Kaggle kernel):

```
#! pip install -U lightautoml
```

# Step 0.1. Import necessary libraries

```
# Standard python libraries
import logging
import os
import time
import requests

logging.basicConfig(format='[%(asctime)s] (%(levelname)s): %(message)s', level=logging.INFO)

# Installed libraries
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
import torch

# Imports from our package
from lightautoml.automl.presets.tabular_presets import TabularAutoML, TabularUtilizedAutoML
from lightautoml.dataset.roles import DatetimeRole
from lightautoml.tasks import Task
from lightautoml.utils.profiler import Profiler
```

# Step 0.2. Parameters

```
N_THREADS = 8           # threads cnt for lgbm and linear models
N_FOLDS = 5             # folds cnt for AutoML
RANDOM_STATE = 42       # fixed random state for various reasons
TEST_SIZE = 0.2         # Test size for metric check
TIMEOUT = 300           # Time in seconds for automl run
TARGET_NAME = 'TARGET'  # Target column name
```

# Step 0.3. Fix torch number of threads and numpy seed

```
np.random.seed(RANDOM_STATE)
torch.set_num_threads(N_THREADS)
```

# Step 0.4. Change profiling decorators settings

By default, profiling decorators are turned off to reduce runtime and memory usage. If you want to see a profiling report after using LAMA, turn the decorators on with the command below:

```
p = Profiler()
p.change_deco_settings({'enabled': True})
```

# Step 0.5. Example data load

Download the example dataset if the repository was not cloned via git.
```
DATASET_DIR = './example_data/test_data_files'
DATASET_NAME = 'sampled_app_train.csv'
DATASET_FULLNAME = os.path.join(DATASET_DIR, DATASET_NAME)
DATASET_URL = 'https://raw.githubusercontent.com/sberbank-ai-lab/LightAutoML/master/example_data/test_data_files/sampled_app_train.csv'

%%time

if not os.path.exists(DATASET_FULLNAME):
    os.makedirs(DATASET_DIR, exist_ok=True)

    dataset = requests.get(DATASET_URL).text
    with open(DATASET_FULLNAME, 'w') as output:
        output.write(dataset)

%%time

data = pd.read_csv('./example_data/test_data_files/sampled_app_train.csv')
data.head()
```

# Step 0.6. (Optional) Some user feature preparation

The cell below shows some user feature preparation that makes the task more difficult (this block can be omitted if you don't want to change the initial data):

```
%%time

data['BIRTH_DATE'] = (np.datetime64('2018-01-01') + data['DAYS_BIRTH'].astype(np.dtype('timedelta64[D]'))).astype(str)
data['EMP_DATE'] = (np.datetime64('2018-01-01') + np.clip(data['DAYS_EMPLOYED'], None, 0).astype(np.dtype('timedelta64[D]'))).astype(str)

data['constant'] = 1
data['allnan'] = np.nan

data['report_dt'] = np.datetime64('2018-01-01')

data.drop(['DAYS_BIRTH', 'DAYS_EMPLOYED'], axis=1, inplace=True)
```

# Step 0.7. (Optional) Data splitting for train-test

The block below can be omitted if you are going to train the model only, or if you have specific train and test files:

```
%%time

train_data, test_data = train_test_split(data,
                                         test_size=TEST_SIZE,
                                         stratify=data[TARGET_NAME],
                                         random_state=RANDOM_STATE)
logging.info('Data split. Parts sizes: train_data = {}, test_data = {}'
             .format(train_data.shape, test_data.shape))

train_data.head()
```

# Step 0.8. (Optional) Reading data from SqlDataSource

### Preparing datasets as SQLite data bases

```
import sqlite3 as sql

for _fname in ('train.db', 'test.db'):
    if os.path.exists(_fname):
        os.remove(_fname)

train_db = sql.connect('train.db')
train_data.to_sql('data', train_db)

test_db = sql.connect('test.db')
test_data.to_sql('data', test_db)
```

### Using dataset wrapper for a connection

```
from lightautoml.reader.tabular_batch_generator import SqlDataSource

# train_data is replaced with a wrapper for an SQLAlchemy connection
# Wrapper requires SQLAlchemy connection string and query to obtain data from
train_data = SqlDataSource('sqlite:///train.db', 'select * from data', index='index')
test_data = SqlDataSource('sqlite:///test.db', 'select * from data', index='index')
```

# ========= AutoML preset usage =========

## Step 1. Create Task

```
%%time

task = Task('binary', )
```

## Step 2. Setup columns roles

The roles setup here sets the target column and the base date, which is used to calculate date differences:

```
%%time

roles = {'target': TARGET_NAME,
         DatetimeRole(base_date=True, seasonality=(), base_feats=False): 'report_dt',
         }
```

## Step 3. Create AutoML from preset

To create the AutoML model here we use the `TabularAutoML` preset, which looks like:

![TabularAutoML preset pipeline](imgs/tutorial_2_pipeline.png)

All the params we set above can be sent into the preset to change its configuration:

```
%%time

automl = TabularAutoML(task = task,
                       timeout = TIMEOUT,
                       general_params = {'nested_cv': False, 'use_algos': [['linear_l2', 'lgb', 'lgb_tuned']]},
                       reader_params = {'cv': N_FOLDS, 'random_state': RANDOM_STATE},
                       tuning_params = {'max_tuning_iter': 20, 'max_tuning_time': 30},
                       lgb_params = {'default_params': {'num_threads': N_THREADS}})
oof_pred = automl.fit_predict(train_data, roles = roles)
logging.info('oof_pred:\n{}\nShape = {}'.format(oof_pred, oof_pred.shape))
```

## Step 4. Predict to test data and check scores

```
%%time

test_pred = automl.predict(test_data)
logging.info('Prediction for test data:\n{}\nShape = {}'
             .format(test_pred, test_pred.shape))

logging.info('Check scores...')
logging.info('OOF score: {}'.format(roc_auc_score(train_data.data[TARGET_NAME].values, oof_pred.data[:, 0])))
logging.info('TEST score: {}'.format(roc_auc_score(test_data.data[TARGET_NAME].values, test_pred.data[:, 0])))
```

## Step 5. Profiling AutoML

To build the report here, we **must** turn on the decorators in step 0.4. The report is interactive and you can go as deep into the function call stack as you want:

```
%%time

p.profile('my_report_profile.html')
assert os.path.exists('my_report_profile.html'), 'Profile report failed to build'
```

## Step 6. Create AutoML with time utilization

Below we are going to create a specific AutoML preset that utilizes the TIMEOUT (it tries to spend as much of it as possible):

```
%%time

automl = TabularUtilizedAutoML(task = task,
                               timeout = TIMEOUT,
                               general_params = {'nested_cv': False, 'use_algos': [['linear_l2', 'lgb', 'lgb_tuned']]},
                               reader_params = {'cv': N_FOLDS, 'random_state': RANDOM_STATE},
                               tuning_params = {'max_tuning_iter': 20, 'max_tuning_time': 30},
                               lgb_params = {'default_params': {'num_threads': N_THREADS}})
oof_pred = automl.fit_predict(train_data, roles = roles)
logging.info('oof_pred:\n{}\nShape = {}'.format(oof_pred, oof_pred.shape))
```

## Step 7. Predict to test data and check scores for utilized automl

```
%%time

test_pred = automl.predict(test_data)
logging.info('Prediction for test data:\n{}\nShape = {}'
             .format(test_pred, test_pred.shape))

logging.info('Check scores...')
logging.info('OOF score: {}'.format(roc_auc_score(train_data.data[TARGET_NAME].values, oof_pred.data[:, 0])))
logging.info('TEST score: {}'.format(roc_auc_score(test_data.data[TARGET_NAME].values, test_pred.data[:, 0])))
```

## Step 8. Profiling utilized AutoML

To build the report here, we **must** turn on the decorators in step 0.4.
The report is interactive and you can go as deep into the function call stack as you want:

```
%%time

p.profile('my_report_profile.html')
assert os.path.exists('my_report_profile.html'), 'Profile report failed to build'
```

# Appendix. Profiling report screenshots

After loading the HTML profiling report, you will see a fully folded report (wait for the green LOAD OK text, which signals the load has finished). If you click the triangle on the left, it unfolds and looks like this:

<img src="imgs/tutorial_2_initial_report.png" alt="Initial profiling report" style="width: 500px;"/>

If we go even deeper, we reach a situation like this:

<img src="imgs/tutorial_2_unfolded_report.png" alt="Profiling report after several unfoldings on different levels" style="width: 600px;"/>
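As a side note on Step 0.6 above: the `BIRTH_DATE`/`EMP_DATE` construction just offsets a reference date by a day count. A minimal standalone pandas check of that idea (toy values, hypothetical frame; `pd.to_timedelta` is used here instead of the `astype('timedelta64[D]')` cast, which newer pandas versions restrict):

```python
import numpy as np
import pandas as pd

# Toy frame standing in for the application data (values hypothetical)
df = pd.DataFrame({'DAYS_BIRTH': [-12000, -365], 'DAYS_EMPLOYED': [-200, 365243]})

ref = pd.Timestamp('2018-01-01')
df['BIRTH_DATE'] = (ref + pd.to_timedelta(df['DAYS_BIRTH'], unit='D')).dt.strftime('%Y-%m-%d')
# Positive DAYS_EMPLOYED values are sentinel codes, so clip them to 0 first
df['EMP_DATE'] = (ref + pd.to_timedelta(np.clip(df['DAYS_EMPLOYED'], None, 0), unit='D')).dt.strftime('%Y-%m-%d')

print(df['EMP_DATE'].tolist())  # ['2017-06-15', '2018-01-01']
```

The clipped sentinel row lands exactly on the reference date, which is what the preset's `DatetimeRole(base_date=True)` then uses for date differences.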
# Data Space Report

<img src="images/polito_logo.png" alt="Polito Logo" style="width: 200px;"/>

## Pittsburgh Bridges Data Set

<img src="images/andy_warhol_bridge.jpg" alt="Andy Warhol Bridge" style="width: 200px;"/>

Andy Warhol Bridge - Pittsburgh.

Report created by Student Francesco Maria Chiarlo s253666, for A.A 2019/2020.

**Abstract**: The aim of this report is to evaluate the effectiveness of several distinct statistical learning approaches, focusing in particular on their characteristics as well as on their advantages and drawbacks when applied to a relatively small dataset such as the one employed within this report, namely the Pittsburgh Bridges dataset.

**Key words**: Statistical Learning, Machine Learning, Bridge Design.

### Imports Section <a class="anchor" id="imports-section"></a>

```
# =========================================================================== #
# STANDARD IMPORTS
# =========================================================================== #
print(__doc__)

# Critical Imports
# --------------------------------------------------------------------------- #
import warnings; warnings.filterwarnings("ignore")

# Imports through 'from' syntax
# --------------------------------------------------------------------------- #
from pprint import pprint
from itertools import islice
from os import listdir
from os.path import isfile, join

# Standard Imports
# --------------------------------------------------------------------------- #
import copy
import os
import sys
import time
import itertools
import sklearn

# Imports through 'as' syntax
# --------------------------------------------------------------------------- #
import numpy as np
import pandas as pd

# Imports for handling graphics
# --------------------------------------------------------------------------- #
%matplotlib inline

# Matplotlib pyplot provides plotting API
import matplotlib as mpl
from matplotlib import pyplot as plt
import chart_studio.plotly.plotly as py
import seaborn as sns; sns.set(style="ticks", color_codes=True)  # sns.set()

# =========================================================================== #
# UTILS IMPORTS (Done by myself)
# =========================================================================== #
from utils.load_dataset_pittsburg_utils import load_brdiges_dataset
from utils.display_utils import *
from utils.preprocessing_utils import *
from utils.training_utils import *
from utils.sklearn_functions_custom import *
from utils.learning_curves_custom import *
from utils.training_utils_v2 import fit_by_n_components, fit_all_by_n_components, grid_search_all_by_n_components

# =========================================================================== #
# sklearn IMPORT
# =========================================================================== #
from sklearn.decomposition import PCA, KernelPCA

# Import scikit-learn classes: models (Estimators).
from sklearn.naive_bayes import GaussianNB, MultinomialNB            # Non-parametric Generative Models
from sklearn.linear_model import LogisticRegression, SGDClassifier   # Parametric Linear Discriminative Models
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC                                          # Parametric Linear Discriminative "Support Vector Classifier"
from sklearn.tree import DecisionTreeClassifier                      # Non-parametric Model
from sklearn.ensemble import RandomForestClassifier                  # Non-parametric Model (Meta-Estimator, that is, an Ensemble Method)

# =========================================================================== #
# READ INPUT DATASET
# =========================================================================== #
dataset_path = 'C:\\Users\\Francesco\\Documents\\datasets\\pittsburgh_dataset'
dataset_name = 'bridges.data.csv'
TARGET_COL = 'T-OR-D'  # Target variable name

dataset, feature_vs_values = load_brdiges_dataset(dataset_path, dataset_name)
columns_2_avoid = ['ERECTED', 'LENGTH', 'LOCATION']

# Make distinction between Target Variable and Predictors
# --------------------------------------------------------------------------- #
columns = dataset.columns  # List of all attribute names

# Get Target values and map to 0s and 1s
y = np.array(list(map(lambda x: 0 if x == 1 else 1, dataset[TARGET_COL].values)))

print(f'Summary about Target Variable {TARGET_COL}')
print('-' * 50)
print(dataset[TARGET_COL].value_counts())

# Get Predictors
X = dataset.loc[:, dataset.columns != TARGET_COL].values

# Standardizing the features
# --------------------------------------------------------------------------- #
scaler_methods = ['minmax', 'standard', 'norm']
scaler_method = 'standard'
rescaledX = preprocessing_data_rescaling(scaler_method, X)
```

## Principal Component Analysis

```
n_components = rescaledX.shape[1]
pca = PCA(n_components=n_components)
# pca = PCA(n_components=2)
# X_pca = pca.fit_transform(X)

pca = pca.fit(rescaledX)
X_pca = pca.transform(rescaledX)

print("Cumulative variation explained (percentage) up to given number of pcs:")

tmp_data = []
principal_components = [pc for pc in '2,5,6,7,8,9,10'.split(',')]
for _, pc in enumerate(principal_components):
    n_components = int(pc)
    cum_var_exp_up_to_n_pcs = np.cumsum(pca.explained_variance_ratio_)[n_components-1]
    # print(f"Cumulative variation explained up to {n_components} pcs = {cum_var_exp_up_to_n_pcs}")
    # print(f"# pcs {n_components}: {cum_var_exp_up_to_n_pcs*100:.2f}%")
    tmp_data.append([n_components, cum_var_exp_up_to_n_pcs * 100])

tmp_df = pd.DataFrame(data=tmp_data, columns=['# PCS', 'Cumulative Variation Explained (percentage)'])
tmp_df.head(len(tmp_data))
```

#### Major Pros & Cons of PCA

## Learning Models <a class="anchor" id="learning-models"></a>

```
# Parameters to be tested for Cross-Validation Approach
estimators_list = [GaussianNB(), LogisticRegression(), KNeighborsClassifier(), SGDClassifier(), SVC(), DecisionTreeClassifier(), RandomForestClassifier()]
estimators_names = ['GaussianNB', 'LogisticRegression', 'KNeighborsClassifier', 'SGDClassifier', 'SVC', 'DecisionTreeClassifier', 'RandomForestClassifier']
plots_names = list(map(lambda xi: f"{xi}_learning_curve.png", estimators_names))

pca_kernels_list = ['linear', 'poly', 'rbf', 'cosine', 'sigmoid']
cv_list = [10, 9, 8, 7, 6, 5, 4, 3, 2]

parmas_logistic_regression = {
    'penalty': ('l1', 'l2', 'elasticnet'),
    'solver': ('newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'),
    'fit_intercept': (True, False),
    'tol': (1e-4, 1e-3, 1e-2),
    'C': (1.0, .1, .01, .001),
}

parmas_knn_forest = {
    'n_neighbors': (2, 3, 4, 5, 6, 7, 8, 9, 10),
    'weights': ('uniform', 'distance'),
    'algorithm': ('ball_tree', 'kd_tree', 'brute'),
}

parameters_sgd_classifier = {
    'loss': ('log', 'modified_huber'),  # ('hinge', 'log', 'modified_huber', 'squared_hinge', 'perceptron')
    'penalty': ('l2', 'l1', 'elasticnet'),
    'alpha': (1e-1, 1e-2, 1e-3, 1e-4),
    'max_iter': (50, 100, 150, 200, 500, 1000, 1500, 2000, 2500),
    'learning_rate': ('optimal',),
    'tol': (None, 1e-2, 1e-4, 1e-5, 1e-6)
}

kernel_type = 'svm-rbf-kernel'
parameters_svm = {
    'gamma': (0.003, 0.03, 0.05, 0.5, 0.7, 1.0, 1.5),
    'max_iter': (1e+2, 1e+3, 2 * 1e+3, 5 * 1e+3, 1e+4, 1.5 * 1e+3),
    # 'penalty': ('l2', 'l1'),
    'kernel': ['linear', 'poly', 'rbf', 'sigmoid'],
    'C': (1e-4, 1e-3, 1e-2, 0.1, 1.0, 10, 1e+2, 1e+3),
    'probability': (True,),
}

parmas_decision_tree = {
    'splitter': ('random', 'best'),
    'criterion': ('gini', 'entropy'),
    'max_features': (None, 'auto', 'sqrt', 'log2')
}

parmas_random_forest = {
    'n_estimators': (3, 5, 7, 10, 30, 50, 70, 100, 150, 200),
    'criterion': ('gini', 'entropy'),
    'bootstrap': (True, False)
}

param_grids = [parmas_logistic_regression, parmas_knn_forest, parameters_sgd_classifier, parameters_svm, parmas_decision_tree, parmas_random_forest]
N_CV, N_KERNEL, N_GS = 9, 4, 6

n_components = 9
learning_curves_by_kernels(  # learning_curves_by_components(
    estimators_list[:], estimators_names[:],
    rescaledX, y,
    train_sizes=np.linspace(.1, 1.0, 10),
    n_components=9,
    pca_kernels_list=pca_kernels_list[0],
    verbose=0, by_pairs=True, savefigs=True,
    scoring='accuracy',
    figs_dest=os.path.join('figures', 'learning_curve', f"Pcs_{n_components}")
    # figsize=(20,5)
)
```

### Improvements and Conclusions <a class="anchor" id="Improvements-and-conclusions"></a>

### References <a class="anchor" id="references"></a>

- Data Domain Information part:
  - (Deck) https://en.wikipedia.org/wiki/Deck_(bridge)
  - (Cantilever bridge) https://en.wikipedia.org/wiki/Cantilever_bridge
  - (Arch bridge) https://en.wikipedia.org/wiki/Arch_bridge
- Machine Learning part:
  - (Theory Book) https://jakevdp.github.io/PythonDataScienceHandbook/
  - (Decision Trees) https://scikit-learn.org/stable/modules/tree.html#tree
  - (SVM) https://scikit-learn.org/stable/modules/svm.html
  - (PCA) https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html
- Chart part:
  - (Seaborn Charts) https://acadgild.com/blog/data-visualization-using-matplotlib-and-seaborn
- Markdown Math part:
  - https://share.cocalc.com/share/b4a30ed038ee41d868dad094193ac462ccd228e2/Homework%20/HW%201.2%20-%20Markdown%20and%20LaTeX%20Cheatsheet.ipynb?viewer=share
  - https://jupyter-notebook.readthedocs.io/en/stable/examples/Notebook/Typesetting%20Equations.html

#### Others

- Plots:
  - (Python Plot) https://www.datacamp.com/community/tutorials/matplotlib-tutorial-python
- Third Party Library:
  - (statsmodels) https://www.statsmodels.org/stable/index.html#
- KDE:
  - (Tutorial) https://jakevdp.github.io/blog/2013/12/01/kernel-density-estimation/
- Metrics:
  - (F1-Accuracy-Precision-Recall) https://towardsdatascience.com/beyond-accuracy-precision-and-recall-3da06bea9f6c
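As a short appendix to the PCA section above: the cumulative explained-variance table is just `np.cumsum(pca.explained_variance_ratio_)`, and the computation can be sanity-checked on synthetic data (toy shapes, not the bridges dataset):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 9))
X[:, 3] = X[:, 0] * 2 + rng.normal(scale=0.1, size=100)  # make one column nearly redundant

Xs = StandardScaler().fit_transform(X)
pca = PCA(n_components=Xs.shape[1]).fit(Xs)
cum = np.cumsum(pca.explained_variance_ratio_)

for n_pcs in (2, 5, 8):
    print(f"# pcs {n_pcs}: {cum[n_pcs - 1] * 100:.2f}%")
```

With all components retained, the cumulative ratio is non-decreasing and ends at 100%, which is a quick check that the indexing (`[n_components - 1]`) in the report's loop is right.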
# Load Dataset and Important Libraries

```
import pandas as pd
import numpy as np
from scipy import stats

%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)

np.random.seed(42)  # some functions like scipy.stats rely on this being consistent between runs
```

Load the COVID data set.

```
!wget https://github.com/beoutbreakprepared/nCoV2019/blob/master/latest_data/latestdata.tar.gz?raw=true
```

Unzip the file.

```
!tar -xvf latestdata.tar.gz?raw=true
```

When reading in the .csv dataset, set `low_memory` to `False` so pandas does not guess a data type and raise errors.

```
df = pd.read_csv('latestdata.csv', low_memory=False)
original_df = df.copy()
df.head()
df.info()
df.count()
```

Features worth noting in terms of relevance to outcome prediction and availability in the data set:

1. `longitude`, `latitude`, `geo_resolution`
2. `date onset` and `admission`
3. `date_confirmation`
4. `travel_history_binary`
5. `chronic_disease_binary`
6. `outcome`

0 rows are completely filled.

# Data Cleaning

Begin by fixing the data types of the features we intend to use. This involves dropping rows without the necessary features filled in.

```
df.dropna(subset=['age', 'sex', 'latitude', 'longitude', 'chronic_disease_binary', 'travel_history_binary', 'outcome'], inplace=True)

# convert age to floats first
# coerce to force any entries that are age ranges to NaN
df['age'] = pd.to_numeric(df['age'], errors='coerce')

# remove all NaNs
df.dropna(subset=['age'], inplace=True)
df['age'] = df['age'].astype(int)

df.dropna(subset=['travel_history_binary'], inplace=True)
thb_dict = {
    True: True,
    False: False
}
df['travel_history_binary'] = df['travel_history_binary'].map(thb_dict)
```

Convert `sex` to binary for correlation purposes.

```
sexdict = {
    'male': True,
    'female': False
}
df['sex'] = df['sex'].map(sexdict)
```

Convert `outcome` label to boolean.
```
df['outcome'].unique()

outdict = {'death': False, 'discharge': True, 'discharged': True, 'Discharged': True,
           'recovered': True, 'released from quarantine': True, 'stable': True,
           'Death': False, 'died': False, 'Alive': True, 'Dead': False,
           'Recovered': True, 'Stable': True, 'Died': False, 'Deceased': False,
           'stable condition': True, 'Under treatment': True, 'Receiving Treatment': True,
           'severe illness': True, 'dead': False, 'critical condition': True, 'Hospitalized': True}

df['outcome'] = df['outcome'].map(outdict)
```

Explore `date_confirmation`. This feature did not end up in the final model.

```
df['date_confirmation'] = pd.to_datetime(df['date_confirmation'], format='%d.%m.%Y', errors='coerce')
print(df['date_confirmation'].dtype)
df['date_confirmation']

print(df['date_confirmation'].min())
print(df['date_confirmation'].max())
df['date_confirmation'].dt.month.unique()
```

`date_confirmation` did not have a full year of data.

# Data Insights and Statistics

```
from pandas.plotting import scatter_matrix

attributes = ['age', 'latitude', 'longitude']
scatter_matrix(df[attributes], figsize=(12, 8))
```

Analyse Pearson's correlation coefficient.

```
df.corr(method='pearson')

import seaborn as sns

corr = df.corr(method='pearson')
plt.figure(figsize=(10, 10))
ax = sns.heatmap(
    corr,
    vmin=-1, vmax=1, center=0,
    cmap=sns.diverging_palette(20, 220, n=200),
    square=True
)
ax.set_xticklabels(
    ax.get_xticklabels(),
    rotation=45,
    horizontalalignment='right'
)
```

Point biserial correlation between age (continuous) and outcome (binary):

```
stats.pointbiserialr(df['age'], df['outcome'])
```

Check if `age` is normally distributed.

```
stats.shapiro(df['age'])
```

The sample size is too big for the Shapiro-Wilk test, so instead try the Kolmogorov-Smirnov test.

```
stats.kstest(df.loc[df['outcome']==True]['age'], 'norm')
stats.kstest(df.loc[df['outcome']==False]['age'], 'norm')
```

A very high test statistic (with a near-zero p-value) is found for both groups, so the null hypothesis of normality is rejected: the ages of surviving and deceased cases are not normally distributed, which motivates using a non-parametric test below.
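One caveat about the test above: `stats.kstest(x, 'norm')` compares `x` against the *standard* normal N(0, 1), so un-standardized ages produce a huge statistic regardless of the distribution's shape; standardizing first gives a fairer normality check. A toy illustration on synthetic, genuinely normal "ages" (values hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ages = rng.normal(loc=45, scale=15, size=2000)  # synthetic, truly Gaussian sample

raw = stats.kstest(ages, 'norm')                               # vs N(0, 1): rejects even for normal data
std = stats.kstest((ages - ages.mean()) / ages.std(), 'norm')  # standardize first, then compare

print(raw.statistic, std.statistic)
```

The raw comparison yields a statistic near 1 even though the sample is exactly normal, while the standardized comparison yields a small statistic, so conclusions about normality should be drawn from the standardized version.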
Now, check whether the mean ages differ by outcome using a Wilcoxon test on a random sample of each subset, since the subsets are of different sizes.

```
df_alive = df.loc[df['outcome']==True].sample(n=1000, random_state=4)
df_dead = df.loc[df['outcome']==False].sample(n=1000, random_state=4)

stats.wilcoxon(df_alive['age'], df_dead['age'])
```

# Data Preparation

```
len(df)
```

Todo: need to fix the sex column — use one-hot encoding on it!

```
ml_df = df.copy()

# sex included only to check gini importance of RF
ml_df = ml_df[['age', 'sex', 'latitude', 'longitude', 'chronic_disease_binary', 'travel_history_binary', 'outcome']]
ml_df.head()
```

Check if there are any nulls.

```
ml_df.isnull().sum().sort_values()
```

Create the feature vector and label.

```
# Feature Vector
X = ml_df.drop(columns=['outcome'])

# Label
y = ml_df['outcome']
```

Perform the train-test split.

```
from sklearn.model_selection import StratifiedShuffleSplit

stratified_split = StratifiedShuffleSplit(n_splits=1, test_size=0.3, random_state=42)
for train_index, test_index in stratified_split.split(ml_df, ml_df['outcome']):
    strat_train_set = ml_df.iloc[train_index]
    strat_test_set = ml_df.iloc[test_index]

print(len(strat_train_set))
print(len(strat_test_set))

# Feature Vectors
X_train = strat_train_set.drop(columns=['outcome'])
X_test = strat_test_set.drop(columns=['outcome'])

# Labels
y_train = strat_train_set['outcome']
y_test = strat_test_set['outcome']

numeric_train_df = X_train.select_dtypes(exclude=['object'])
numeric_test_df = X_test.select_dtypes(exclude=['object'])

# to deal with sex
categorical_train_df = X_train.select_dtypes(['object'])
categorical_test_df = X_test.select_dtypes(['object'])

from sklearn.base import BaseEstimator, TransformerMixin

class DFSelector(BaseEstimator, TransformerMixin):
    def __init__(self, attribute_names):
        self.attribute_names = attribute_names
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return X[self.attribute_names].values

from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer

num_attributes = list(X_train.select_dtypes(exclude=['object']))  # removed country, so there is only a numerical pipeline
cat_attributes = list(X_train.select_dtypes(['object']))

num_pipeline = Pipeline([
    ('select_numeric', DFSelector(num_attributes)),
    ('std_sclr', StandardScaler()),
])

cat_pipeline = Pipeline([
    ('select_categoric', DFSelector(cat_attributes)),
    ('one_hot', OneHotEncoder()),
])

full_pipeline = ColumnTransformer([
    ('num', num_pipeline, num_attributes),
    ('cat', cat_pipeline, cat_attributes),
])

X_train_scaled = full_pipeline.fit_transform(X_train)
# use transform (not fit_transform) on the test set so it reuses the statistics learned from the training set
X_test_scaled = full_pipeline.transform(X_test)

# The O'Reilly book fits the pipeline on the entire dataframe instead, but that is not what we are after here;
# it also requires num_attributes and cat_attributes to be based on the entire svm_df:
# svm_df_prepared = full_pipeline.fit_transform(svm_df)
```

Label Scaling

```
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
y_train_scaled = le.fit_transform(y_train)
y_test_scaled = le.fit_transform(y_test)

from sklearn.metrics import accuracy_score, mean_squared_error, balanced_accuracy_score, jaccard_score, f1_score, confusion_matrix, plot_roc_curve

def performance_metrics(model, X_test, y_test, y_hat):
    print('Accuracy: ', accuracy_score(y_test, y_hat)*100, '%')
    print('Root Mean Squared Error: ', np.sqrt(mean_squared_error(y_test, y_hat)))
    print('Balanced Accuracy: ', balanced_accuracy_score(y_test, y_hat)*100, '%')
    print('Jaccard Score: ', jaccard_score(y_test, y_hat, average='weighted')*100, '%')
    print('F1 Score: ', f1_score(y_test, y_hat))

    cm = confusion_matrix(y_test, y_hat)
    print('True Negatives (Correctly Predicted Death): ', cm[0,0])
    print('False Negatives (Incorrectly Predicted as Death): ', cm[1,0])
    print('True Positives (Correctly Predicted as Alive): ', cm[1,1])
    print('False Positives (Incorrectly Predicted as Alive): ', cm[0,1])
    print('Sensitivity (Recall/True Positive Rate): ', cm[1,1] / (cm[1,1]+cm[1,0]))  # TP/(TP+FN), proportion of actual positives labelled as positive
    print('False Positive Rate: ', cm[0,1] / (cm[0,1]+cm[0,0]))  # FP/(FP+TN), proportion of actual negatives labelled as positive
    print('Specificity: ', cm[0,0] / (cm[0,0]+cm[0,1]))  # TN/(TN+FP)
    print('Positive Predictive Value (Precision): ', cm[1,1] / (cm[1,1]+cm[0,1]))  # TP/(TP+FP)
    print('Negative Predictive Value: ', cm[0,0] / (cm[0,0]+cm[1,0]))  # TN/(TN+FN)

    plot_roc_curve(model, X_test, y_test)
    plt.show()
```

## Support Vector Machine

Create an SVM classifier and perform 5-fold cross validation to get an idea of ideal parameters (no fitting to the training set yet).

```
from sklearn.svm import SVC
from sklearn.model_selection import cross_validate

SVM_basemodel = SVC(random_state=42)
scoring = ['accuracy', 'f1', 'precision', 'recall']
scores = cross_validate(SVM_basemodel, X_train_scaled, y_train_scaled, cv=5, scoring=scoring, return_estimator=True)
sorted(scores.keys())

print('Average Accuracy:', scores['test_accuracy'].mean())
print('Average F1:', scores['test_f1'].mean())
print('Average Precision:', scores['test_precision'].mean())
print('Average Recall:', scores['test_recall'].mean())

scores['estimator']

SVM_basemodel.fit(X_train_scaled, y_train_scaled)
```

Results of the default base model on the test set:

```
y_hat_SVM_base = SVM_basemodel.predict(X_test_scaled)
performance_metrics(SVM_basemodel, X_test_scaled, y_test_scaled, y_hat_SVM_base)
```

Now use random search hyperparameter tuning. This takes approximately 6 to 10 minutes.
``` from sklearn.model_selection import RandomizedSearchCV parameter_space = { 'C': [0.1, 1, 10, 100], 'gamma': [ 0.1, 1, 10], } SVM_tuned =SVC(random_state=42) SVM_randsearch = RandomizedSearchCV(estimator=SVM_tuned, param_distributions=parameter_space, scoring=scoring, verbose=1, n_jobs=-1, n_iter=1000, refit = 'accuracy') # set refit to false for multi key scoring SVM_rand_result = SVM_randsearch.fit(X_train_scaled, y_train_scaled) results = SVM_rand_result.cv_results_ dict(results).keys() print('Accuracy: ', dict(results)['mean_test_accuracy'].max()) print('Precision: ', dict(results)['mean_test_precision'].max()) print('Recall: ', dict(results)['mean_test_recall'].max()) print('F1: ', dict(results)['mean_test_f1'].max()) SVM_clf = SVM_rand_result.best_estimator_ SVM_clf.fit(X_train_scaled, y_train_scaled) y_hat_SVM_tuned = SVM_clf.predict(X_test_scaled) ``` Results of final tuned model with tests et ``` performance_metrics(SVM_clf, X_test_scaled, y_test_scaled, y_hat_SVM_tuned) ``` --- # Random Forests ``` from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import cross_validate RF_basemodel = RandomForestClassifier(n_estimators=1000, random_state=42) scoring = ['accuracy', 'f1', 'precision', 'recall'] scores = cross_validate(RF_basemodel, X_train_scaled, y_train_scaled, cv = 5, scoring=scoring, return_estimator=True) # print(f'{scores.mean()*100}% accuracy with a standard deviation of {scores.std()}') sorted(scores.keys()) ``` metrics from validaiton set for default base model: ``` print('Average Accuracy:', scores['test_accuracy'].mean()) print('Average F1:', scores['test_f1'].mean()) print('Average Precision:', scores['test_precision'].mean()) print('Average Recall:', scores['test_recall'].mean()) scores['estimator'] RF_basemodel.fit(X_train_scaled, y_train_scaled) y_hat_RF_base = RF_basemodel.predict(X_test_scaled) performance_metrics(RF_basemodel, X_test_scaled, y_test_scaled, y_hat_RF_base) ``` Look at feature importances of 
base model. ``` importances = RF_basemodel.feature_importances_ std = np.std([ tree.feature_importances_ for tree in RF_basemodel.estimators_], axis=0) forest_importances = pd.Series(importances, index=list(X_train.select_dtypes(exclude=['object']))) fig, ax = plt.subplots() forest_importances.plot.bar(yerr=std, ax=ax) ax.set_ylabel("Mean decrease in impurity") ax.set_xticklabels( ax.get_xticklabels(), rotation=45, horizontalalignment='right' ) fig.tight_layout() ``` Ranges for parameters may appear,narrow, trail and error was used to condfirm that the optimal values lie within this range. this stiol takes a whilke, so need to cut down the numbrer of fits to try. ``` from sklearn.model_selection import RandomizedSearchCV parameter_space = { 'max_depth': [100, 110], 'min_samples_leaf': [3, 4], 'min_samples_split': [6, 8, 10], 'criterion': ['gini', 'entropy'] } RF_tuned = RandomForestClassifier(random_state=42) RF_randsearch = RandomizedSearchCV(estimator=RF_tuned, param_distributions=parameter_space, scoring=scoring, verbose=1, n_jobs=-1, n_iter=500, refit = 'accuracy') # set refit to false for multi key scoring RF_rand_result = RF_randsearch.fit(X_train_scaled, y_train_scaled) results = RF_rand_result.cv_results_ dict(results).keys() print('Accuracy: ', dict(results)['mean_test_accuracy'].max()) print('Precision: ', dict(results)['mean_test_precision'].max()) print('Recall: ', dict(results)['mean_test_recall'].max()) print('F1: ', dict(results)['mean_test_f1'].max()) RF_clf = RF_rand_result.best_estimator_ RF_clf.fit(X_train_scaled, y_train_scaled) y_hat_RF_tuned = RF_clf.predict(X_test_scaled) performance_metrics(RF_clf, X_test_scaled, y_test_scaled, y_hat_RF_tuned) ``` --- # Stochastic Gradient Descent ``` from sklearn.linear_model import SGDClassifier from sklearn.model_selection import cross_val_score, cross_validate scoring = ['accuracy', 'f1', 'precision', 'recall'] SGD_basemodel = SGDClassifier(random_state=42) # scores = cross_val_score(SGD_basemodel, 
X_train_scaled, y_train_scaled, cv = 5) # print(f'{scores.mean()*100}% accuracy with a standard deviation of {scores.std()}') scores = cross_validate(SGD_basemodel, X_train_scaled, y_train_scaled,scoring=scoring) sorted(scores.keys()) print('Average Accuracy:', scores['test_accuracy'].mean()) print('Average F1:', scores['test_f1'].mean()) print('Average Precision:', scores['test_precision'].mean()) print('Average Recall:', scores['test_recall'].mean()) SGD_basemodel.fit(X_train_scaled, y_train_scaled) # SGD_basemodel = scores.best_estimator_ y_hat_SGD_scaled = SGD_basemodel.predict(X_test_scaled) performance_metrics(SGD_basemodel, X_test_scaled, y_test_scaled, y_hat_SGD_scaled) from sklearn.model_selection import RandomizedSearchCV parameter_space = { 'loss': ['hinge', 'log', 'modified_huber', 'squared_hinge', 'perceptron'], 'penalty': ['l1', 'l2', 'elasticnet'], 'alpha': [0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000], 'learning_rate': ['constant', 'optimal', 'invscaling', 'adaptive'], 'class_weight': [{1:0.5, 0:0.5}, {1:0.4, 0:0.6}, {1:0.6, 0:0.4}, {1:0.7, 0:0.3}], 'eta0': [1, 10, 100], } SGD_tuned = SGDClassifier(random_state=42) SGD_randsearch = RandomizedSearchCV(estimator=SGD_tuned, param_distributions=parameter_space, scoring=scoring, verbose=1, n_jobs=-1, n_iter=1000, refit = 'accuracy') # set refit to false for multi key scoring SGD_Xtrain = X_train.drop(columns=['sex']) SGD_rand_result = SGD_randsearch.fit(X_train_scaled, y_train_scaled) results = SGD_rand_result.cv_results_ dict(results).keys() print('Accuracy: ', dict(results)['mean_test_accuracy'].max()) print('Precision: ', dict(results)['mean_test_precision'].max()) print('Recall: ', dict(results)['mean_test_recall'].max()) print('F1: ', dict(results)['mean_test_f1'].max()) SGD_rand_result.best_params_ SGD_clf = SGD_rand_result.best_estimator_ # SGD_clf = SGDClassifier(**SGD_rand_result.best_params_) SGD_clf.fit(X_train_scaled, y_train_scaled) y_hat_SGD_tuned = SGD_clf.predict(X_test_scaled) 
performance_metrics(SGD_clf, X_test_scaled, y_test_scaled, y_hat_SGD_tuned) ```
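The same randomized-search pattern recurs for each of the three models above. A compact, self-contained version of it — using `LogisticRegression` and synthetic data purely for illustration, not the notebook's actual features — might look like:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

# Synthetic stand-in for X_train_scaled / y_train_scaled
X, y = make_classification(n_samples=200, n_features=5, random_state=42)

scoring = ['accuracy', 'f1', 'precision', 'recall']
param_space = {'C': [0.01, 0.1, 1, 10]}

search = RandomizedSearchCV(
    LogisticRegression(max_iter=1000, random_state=42),
    param_distributions=param_space,
    n_iter=4,            # the space only has 4 candidates
    scoring=scoring,     # multi-metric scoring...
    refit='accuracy',    # ...requires naming one metric to refit on
    random_state=42,
)
result = search.fit(X, y)

# Every metric is still available in cv_results_
for metric in scoring:
    print(metric, result.cv_results_[f'mean_test_{metric}'].max())

best_clf = result.best_estimator_  # already refit on all of X, y
```

With multi-metric `scoring`, `refit` must name one of the metrics (or be `False`, in which case `best_estimator_` is unavailable), which is why each search above refits on `'accuracy'`.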
##### Copyright 2018 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #@title MIT License # # Copyright (c) 2017 François Chollet # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the "Software"), # to deal in the Software without restriction, including without limitation # the rights to use, copy, modify, merge, publish, distribute, sublicense, # and/or sell copies of the Software, and to permit persons to whom the # Software is furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER # DEALINGS IN THE SOFTWARE. 
``` # Train your first neural network: basic classification <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/keras/basic_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/keras/basic_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/keras/basic_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table> This guide trains a neural network model to classify images of clothing, like sneakers and shirts. It's okay if you don't understand all the details, this is a fast-paced overview of a complete TensorFlow program with the details explained as we go. This guide uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow. ``` !pip install tf-nightly-2.0-preview from __future__ import absolute_import, division, print_function # TensorFlow and tf.keras import tensorflow as tf from tensorflow import keras # Helper libraries import numpy as np import matplotlib.pyplot as plt print(tf.__version__) ``` ## Import the Fashion MNIST dataset This guide uses the [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset which contains 70,000 grayscale images in 10 categories. 
The images show individual articles of clothing at low resolution (28 by 28 pixels), as seen here: <table> <tr><td> <img src="https://tensorflow.org/images/fashion-mnist-sprite.png" alt="Fashion MNIST sprite" width="600"> </td></tr> <tr><td align="center"> <b>Figure 1.</b> <a href="https://github.com/zalandoresearch/fashion-mnist">Fashion-MNIST samples</a> (by Zalando, MIT License).<br/>&nbsp; </td></tr> </table> Fashion MNIST is intended as a drop-in replacement for the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset—often used as the "Hello, World" of machine learning programs for computer vision. The MNIST dataset contains images of handwritten digits (0, 1, 2, etc) in an identical format to the articles of clothing we'll use here. This guide uses Fashion MNIST for variety, and because it's a slightly more challenging problem than regular MNIST. Both datasets are relatively small and are used to verify that an algorithm works as expected. They're good starting points to test and debug code. We will use 60,000 images to train the network and 10,000 images to evaluate how accurately the network learned to classify images. You can access the Fashion MNIST directly from TensorFlow, just import and load the data: ``` fashion_mnist = keras.datasets.fashion_mnist (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() ``` Loading the dataset returns four NumPy arrays: * The `train_images` and `train_labels` arrays are the *training set*—the data the model uses to learn. * The model is tested against the *test set*, the `test_images`, and `test_labels` arrays. The images are 28x28 NumPy arrays, with pixel values ranging between 0 and 255. The *labels* are an array of integers, ranging from 0 to 9. 
These correspond to the *class* of clothing the image represents:

<table>
  <tr>
    <th>Label</th>
    <th>Class</th>
  </tr>
  <tr><td>0</td><td>T-shirt/top</td></tr>
  <tr><td>1</td><td>Trouser</td></tr>
  <tr><td>2</td><td>Pullover</td></tr>
  <tr><td>3</td><td>Dress</td></tr>
  <tr><td>4</td><td>Coat</td></tr>
  <tr><td>5</td><td>Sandal</td></tr>
  <tr><td>6</td><td>Shirt</td></tr>
  <tr><td>7</td><td>Sneaker</td></tr>
  <tr><td>8</td><td>Bag</td></tr>
  <tr><td>9</td><td>Ankle boot</td></tr>
</table>

Each image is mapped to a single label. Since the *class names* are not included with the dataset, store them here to use later when plotting the images:

```
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
```

## Explore the data

Let's explore the format of the dataset before training the model. The following shows there are 60,000 images in the training set, with each image represented as 28 x 28 pixels:

```
train_images.shape
```

Likewise, there are 60,000 labels in the training set:

```
len(train_labels)
```

Each label is an integer between 0 and 9:

```
train_labels
```

There are 10,000 images in the test set. Again, each image is represented as 28 x 28 pixels:

```
test_images.shape
```

And the test set contains 10,000 image labels:

```
len(test_labels)
```

## Preprocess the data

The data must be preprocessed before training the network. If you inspect the first image in the training set, you will see that the pixel values fall in the range of 0 to 255:

```
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
```

We scale these values to a range of 0 to 1 before feeding them to the neural network model. For this, we divide the values by 255.
It's important that the *training set* and the *testing set* are preprocessed in the same way: ``` train_images = train_images / 255.0 test_images = test_images / 255.0 ``` Display the first 25 images from the *training set* and display the class name below each image. Verify that the data is in the correct format and we're ready to build and train the network. ``` plt.figure(figsize=(10,10)) for i in range(25): plt.subplot(5,5,i+1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(train_images[i], cmap=plt.cm.binary) plt.xlabel(class_names[train_labels[i]]) plt.show() ``` ## Build the model Building the neural network requires configuring the layers of the model, then compiling the model. ### Setup the layers The basic building block of a neural network is the *layer*. Layers extract representations from the data fed into them. And, hopefully, these representations are more meaningful for the problem at hand. Most of deep learning consists of chaining together simple layers. Most layers, like `tf.keras.layers.Dense`, have parameters that are learned during training. ``` model = keras.Sequential([ keras.layers.Flatten(input_shape=(28, 28)), keras.layers.Dense(128, activation='relu'), keras.layers.Dense(10, activation='softmax') ]) ``` The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (of 28 by 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; it only reformats the data. After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are densely-connected, or fully-connected, neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer is a 10-node *softmax* layer—this returns an array of 10 probability scores that sum to 1. 
Each node contains a score that indicates the probability that the current image belongs to one of the 10 classes.

### Compile the model

Before the model is ready for training, it needs a few more settings. These are added during the model's *compile* step:

* *Loss function* —This measures how accurate the model is during training. We want to minimize this function to "steer" the model in the right direction.
* *Optimizer* —This is how the model is updated based on the data it sees and its loss function.
* *Metrics* —Used to monitor the training and testing steps. The following example uses *accuracy*, the fraction of the images that are correctly classified.

```
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```

## Train the model

Training the neural network model requires the following steps:

1. Feed the training data to the model—in this example, the `train_images` and `train_labels` arrays.
2. The model learns to associate images and labels.
3. We ask the model to make predictions about a test set—in this example, the `test_images` array. We verify that the predictions match the labels from the `test_labels` array.

To start training, call the `model.fit` method—the model is "fit" to the training data:

```
model.fit(train_images, train_labels, epochs=5)
```

As the model trains, the loss and accuracy metrics are displayed. This model reaches an accuracy of about 0.88 (or 88%) on the training data.

## Evaluate accuracy

Next, compare how the model performs on the test dataset:

```
test_loss, test_acc = model.evaluate(test_images, test_labels)

print('\nTest accuracy:', test_acc)
```

It turns out, the accuracy on the test dataset is a little less than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*. Overfitting is when a machine learning model performs worse on new data than on its training data.
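As a sanity check on the architecture above, the layer parameter counts can be reproduced by hand: `Flatten` has none, and each `Dense` layer has one weight per (input, unit) pair plus one bias per unit.

```python
# Flatten: 28x28 image -> 784-element vector, no parameters
inputs = 28 * 28

# Dense(128) hidden layer: weights + biases
dense1 = inputs * 128 + 128      # 100480

# Dense(10) softmax output layer
dense2 = 128 * 10 + 10           # 1290

total = dense1 + dense2
print(total)                     # 101770 trainable parameters
```

This matches what `model.summary()` would report for this network.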
## Make predictions With the model trained, we can use it to make predictions about some images. ``` predictions = model.predict(test_images) ``` Here, the model has predicted the label for each image in the testing set. Let's take a look at the first prediction: ``` predictions[0] ``` A prediction is an array of 10 numbers. These describe the "confidence" of the model that the image corresponds to each of the 10 different articles of clothing. We can see which label has the highest confidence value: ``` np.argmax(predictions[0]) ``` So the model is most confident that this image is an ankle boot, or `class_names[9]`. And we can check the test label to see this is correct: ``` test_labels[0] ``` We can graph this to look at the full set of 10 channels ``` def plot_image(i, predictions_array, true_label, img): predictions_array, true_label, img = predictions_array[i], true_label[i], img[i] plt.grid(False) plt.xticks([]) plt.yticks([]) plt.imshow(img, cmap=plt.cm.binary) predicted_label = np.argmax(predictions_array) if predicted_label == true_label: color = 'blue' else: color = 'red' plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label], 100*np.max(predictions_array), class_names[true_label]), color=color) def plot_value_array(i, predictions_array, true_label): predictions_array, true_label = predictions_array[i], true_label[i] plt.grid(False) plt.xticks([]) plt.yticks([]) thisplot = plt.bar(range(10), predictions_array, color="#777777") plt.ylim([0, 1]) predicted_label = np.argmax(predictions_array) thisplot[predicted_label].set_color('red') thisplot[true_label].set_color('blue') ``` Let's look at the 0th image, predictions, and prediction array. 
``` i = 0 plt.figure(figsize=(6,3)) plt.subplot(1,2,1) plot_image(i, predictions, test_labels, test_images) plt.subplot(1,2,2) plot_value_array(i, predictions, test_labels) i = 12 plt.figure(figsize=(6,3)) plt.subplot(1,2,1) plot_image(i, predictions, test_labels, test_images) plt.subplot(1,2,2) plot_value_array(i, predictions, test_labels) ``` Let's plot several images with their predictions. Correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent (out of 100) for the predicted label. Note that it can be wrong even when very confident. ``` # Plot the first X test images, their predicted label, and the true label # Color correct predictions in blue, incorrect predictions in red num_rows = 5 num_cols = 3 num_images = num_rows*num_cols plt.figure(figsize=(2*2*num_cols, 2*num_rows)) for i in range(num_images): plt.subplot(num_rows, 2*num_cols, 2*i+1) plot_image(i, predictions, test_labels, test_images) plt.subplot(num_rows, 2*num_cols, 2*i+2) plot_value_array(i, predictions, test_labels) ``` Finally, use the trained model to make a prediction about a single image. ``` # Grab an image from the test dataset img = test_images[0] print(img.shape) ``` `tf.keras` models are optimized to make predictions on a *batch*, or collection, of examples at once. So even though we're using a single image, we need to add it to a list: ``` # Add the image to a batch where it's the only member. img = (np.expand_dims(img,0)) print(img.shape) ``` Now predict the image: ``` predictions_single = model.predict(img) print(predictions_single) plot_value_array(0, predictions_single, test_labels) _ = plt.xticks(range(10), class_names, rotation=45) ``` `model.predict` returns a list of lists, one for each image in the batch of data. Grab the predictions for our (only) image in the batch: ``` np.argmax(predictions_single[0]) ``` And, as before, the model predicts a label of 9.
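The two NumPy idioms this last step relies on — adding a batch axis with `expand_dims` and picking the winning class with `argmax` — work the same outside TensorFlow:

```python
import numpy as np

# A single 28x28 "image"; models predict on batches, so add a leading axis
img = np.zeros((28, 28))
batch = np.expand_dims(img, 0)
print(batch.shape)  # (1, 28, 28)

# Scores for one image over 3 hypothetical classes: argmax picks the label
scores = np.array([[0.10, 0.05, 0.85]])
print(np.argmax(scores[0]))  # 2
```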
# 6. Pandas Introduction In the previous chapters, we have learned how to handle Numpy arrays that can be used to efficiently perform numerical calculations. Those arrays are however homogeneous structures i.e. they can only contain one type of data. Also, even if we have a single type of data, the different rows or columns of an array do not have labels, making it difficult to track what they contain. For such cases, we need a structure closer to a table as can be found in Excel, and these structures are implemented by the package Pandas. But why can't we simply use Excel then? While Excel is practical to browse through data, it is very cumbersome to use to combine, re-arrange and thoroughly analyze data: code is hidden and difficult to share, there's no version control, it's difficult to automate tasks, the manual clicking around leads to mistakes etc. In the next chapters, you will learn how to handle tabular data with Pandas, a Python package widely used in the scientific and data science areas. You will learn how to create and import tables, how to combine them, modify them, do statistical analysis on them and finally how to use them to easily create complex visualizations. So that you see where this leads, we start with a short example of how Pandas can be used in a project. We look here at data provided openly by the Swiss National Science Foundation about grants attributed since 1975. ``` import numpy as np import pandas as pd import seaborn as sns ``` ## 6.1 Importing data Before anything, we need access to the data that can be found [here](https://opendata.swiss/de/dataset/p3-export-projects-people-and-publications). We can either manually download them and then use the path to read the data or directly use the url. 
The latter has the advantage that if you have an evolving source of data, it will always be up to date:

```
# local import
projects = pd.read_csv('Data/P3_GrantExport.csv', sep=';')

# import from url
#projects = pd.read_csv('http://p3.snf.ch/P3Export/P3_GrantExport.csv', sep=';')
```

Then we can have a brief look at the table itself, which Jupyter displays in a formatted way, and limit the view to the first 5 rows using ```head()```:

```
projects.head(5)
```

## 6.2 Exploring data

Pandas offers a variety of tools to compile information about data, and that compilation can be done very efficiently without the need for loops, conditionals etc. For example we can quickly count how many times each University appears in the table. We just use the ```value_counts()``` method for that:

```
projects['University'].value_counts().head(10)
```

Then we can very easily plot the resulting information, either directly with Pandas or with a more advanced library like Seaborn, plotnine or Altair. Here first with plain Pandas (using Matplotlib under the hood):

```
projects['University'].value_counts().head(10).plot(kind='bar')
```

## 6.3 Handling different data types

Unlike Numpy arrays, Pandas can handle a variety of different data types in a dataframe. For example it is very efficient at dealing with dates. We see that our table contains e.g. a ```Start Date```. We can turn this string into an actual date:

```
projects['start'] = pd.to_datetime(projects['Start Date'])
projects['year'] = projects.start.apply(lambda x: x.year)
projects.loc[0].start
projects.loc[0].year
```

## 6.4 Data wrangling, aggregation and statistics

Pandas is very efficient at wrangling and aggregating data, i.e. grouping several elements of a table to calculate statistics on them. For example we first need here to convert the ```Approved Amount``` to a numeric value. Certain rows contain text (e.g.
"not applicable") and we force the conversion:

```
projects['Approved Amount'] = pd.to_numeric(projects['Approved Amount'], errors='coerce')
```

Then we want to extract the type of field without subfields, e.g. "Humanities" instead of "Humanities and Social Sciences;Theology & religion". For that we can create a custom function and apply it to an entire column:

```
science_types = ['Humanities', 'Mathematics', 'Biology']
projects['Field'] = projects['Discipline Name Hierarchy'].apply(
    lambda el: next((y for y in [x for x in science_types if x in el] if y is not None), None)
    if not pd.isna(el) else el)
```

Then we group the data by discipline and year, and calculate the mean of each group:

```
aggregated = projects.groupby(['Institution Country', 'year', 'Field'], as_index=False).mean()
```

Finally we can use Seaborn to plot the data by "Field", using just keywords to indicate what the axes and colours should mean (following some principles of the grammar of graphics):

```
sns.lineplot(data=aggregated, x='year', y='Approved Amount', hue='Field');
```

Note that here, axis labelling, colouring, legend and confidence intervals have been produced automatically based on the content of the dataframe. We see a drastic increase around 2010: let's have a closer look.
We can here again group data by year and funding type and calculate the total funding:

```
grouped = projects.groupby(['year', 'Funding Instrument Hierarchy']).agg(
    total_sum=pd.NamedAgg(column='Approved Amount', aggfunc='sum')).reset_index()
grouped
```

Now, for each year we keep only the 5 largest funding types so that we can plot them:

```
group_sorted = grouped.groupby('year', as_index=False).apply(
    lambda x: (x.groupby('Funding Instrument Hierarchy')
                .sum()
                .sort_values('total_sum', ascending=False))
                .head(5)).reset_index()
```

Finally, we only keep years in the 2000s:

```
instruments_by_year = group_sorted[(group_sorted.year > 2005) & (group_sorted.year < 2012)]

import matplotlib.pyplot as plt
plt.figure(figsize=(10,10))
sns.barplot(data=instruments_by_year, x='year', y='total_sum', hue='Funding Instrument Hierarchy')
```

We see that the main change is the sudden increase in funding for national research programs.
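The nested `apply` used above for "top 5 per year" can also be written as a sort followed by a grouped `head()`; a sketch on toy data (the column names here are illustrative, not the SNSF ones):

```python
import pandas as pd

df = pd.DataFrame({
    'year':       [2006, 2006, 2006, 2007, 2007],
    'instrument': ['A',  'B',  'C',  'A',  'B'],
    'amount':     [5,    3,    1,    4,    2],
})

# Total per (year, instrument), then keep the 2 largest instruments per year
totals = df.groupby(['year', 'instrument'], as_index=False)['amount'].sum()
top2 = (totals.sort_values('amount', ascending=False)
              .groupby('year')
              .head(2))  # head() on a groupby keeps the first n rows per group
print(top2.sort_values(['year', 'amount'], ascending=[True, False]))
```

Because the frame is sorted by `amount` before grouping, `head(2)` picks each year's two largest rows, with no nested lambdas.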
<a href="https://colab.research.google.com/github/avitripathi15/starter-hugo-academic/blob/master/flower_recognition_alternate.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` import matplotlib.pyplot as plt import numpy as np import os import PIL import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tensorflow.keras.models import Sequential from keras.preprocessing.image import ImageDataGenerator from tensorflow.keras import datasets, layers, models import pathlib dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz" data_dir = tf.keras.utils.get_file('flower_photos', origin=dataset_url, untar=True) data_dir = pathlib.Path(data_dir) image_count = len(list(data_dir.glob('*/*.jpg'))) print(image_count) batch_size = 32 img_height = 225 img_width = 225 train_ds = tf.keras.utils.image_dataset_from_directory( data_dir, validation_split=0.2, subset="training", seed=123, image_size=(img_height, img_width), batch_size=batch_size) val_ds = tf.keras.utils.image_dataset_from_directory( data_dir, validation_split=0.2, subset="validation", seed=123, image_size=(img_height, img_width), batch_size=batch_size) class_names = train_ds.class_names print(class_names) plt.figure(figsize=(10, 10)) for images, labels in train_ds.take(1): for i in range(9): ax = plt.subplot(3, 3, i + 1) plt.imshow(images[i].numpy().astype("uint8")) plt.title(class_names[labels[i]]) plt.axis("off") AUTOTUNE = tf.data.AUTOTUNE train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE) val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE) normalization_layer = layers.Rescaling(1./255) normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y)) image_batch, labels_batch = next(iter(normalized_ds)) first_image = image_batch[0] # Notice the pixel values are now in `[0,1]`. 
print(np.min(first_image), np.max(first_image))

data_augmentation = keras.Sequential(
  [
    layers.RandomFlip("horizontal", input_shape=(img_height, img_width, 3)),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.2),
  ]
)

num_classes = len(class_names)

model = Sequential([
  data_augmentation,
  layers.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
  layers.Conv2D(16, 3, padding='same', activation='relu'),
  layers.MaxPooling2D(pool_size=2, strides=2),
  layers.Conv2D(16, 3, padding='same', activation='relu'),
  layers.MaxPooling2D(pool_size=2, strides=2),
  layers.Conv2D(32, 3, padding='same', activation='relu'),
  layers.MaxPooling2D(pool_size=2, strides=2),
  layers.Conv2D(64, 3, padding='same', activation='relu'),
  layers.MaxPooling2D(pool_size=2, strides=2),
  layers.Dropout(0.5),
  layers.Flatten(),
  layers.Dense(128, activation='relu'),
  layers.Dense(num_classes)  # logits: the loss below uses from_logits=True
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

epochs = 15
history = model.fit(
  train_ds,
  validation_data=val_ds,
  epochs=epochs
)

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)

plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

num_classes = len(class_names)

model = Sequential([
  data_augmentation,
  layers.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
  layers.Conv2D(16, 3, padding='same', activation='relu'),
  layers.MaxPooling2D(pool_size=2, strides=2),
  layers.Conv2D(16, 3, padding='same', activation='relu'),
  layers.MaxPooling2D(pool_size=2, strides=2),
  layers.Conv2D(32, 3, padding='same', activation='relu'),
  layers.MaxPooling2D(pool_size=2, strides=2),
  layers.Conv2D(32, 3, padding='same', activation='relu'),
  layers.MaxPooling2D(pool_size=2, strides=2),
  layers.Conv2D(64, 3, padding='same', activation='relu'),
  layers.MaxPooling2D(pool_size=2, strides=2),
  layers.Conv2D(64, 3, padding='same', activation='relu'),
  layers.MaxPooling2D(pool_size=2, strides=2),
  layers.Dropout(0.5),
  layers.Flatten(),
  layers.Dense(128, activation='relu'),
  layers.Dense(num_classes)  # logits: the loss uses from_logits=True
])

# the model must be compiled before fitting
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

epochs = 15
history = model.fit(
  train_ds,
  validation_data=val_ds,
  epochs=epochs
)

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)

plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
```

## **Checking the behaviour of the model by changing different hyperparameters and changing convolution layers**

### **The maximum accuracy achieved was 76.57%**
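A quick way to see what `Flatten` receives in each variant: `padding='same'` convolutions preserve the spatial size, and every 2×2, stride-2 max-pool halves it (integer division). A small helper, with layer counts taken from the two models as written:

```python
# 'same' convolutions keep the spatial size; each 2x2 stride-2
# max-pool maps size n to n // 2 (valid padding: floor((n-2)/2) + 1)
def feature_map_size(input_size, n_pool_layers):
    size = input_size
    for _ in range(n_pool_layers):
        size = size // 2
    return size

# First model: 4 pooling layers, last conv has 64 filters
print(feature_map_size(225, 4))  # 14 -> Flatten sees 14 * 14 * 64 values

# Second model: 6 pooling layers
print(feature_map_size(225, 6))  # 3 -> Flatten sees 3 * 3 * 64 values
```

The two extra conv/pool blocks in the second model shrink the flattened vector from 12,544 to 576 values, which sharply reduces the parameters in the first `Dense` layer.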
``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import scipy.io as scio from sklearn.linear_model import Ridge %load_ext autoreload %autoreload 2 import main mat = scio.loadmat('ex9_movies.mat') # TODO: mean normalization: rows, columns and total # TODO: only calculate the mean for present values r = pd.DataFrame(mat['R']) # y = pd.DataFrame(mat['Y'])[r == 1] y = pd.DataFrame(mat['Y']) # TODO: automatically select n n = 10 x = np.random.random((y.shape[0], n)) theta = np.random.random((y.shape[1], n)) alpha = 0.00008 tolerance = 2 reg_param = 1 # TODO: try running PCA first progress = [] for x, theta, cost in main.run_descent(alpha, tolerance, theta, x, y, r, reg_param): print(cost) progress.append(cost) plt.plot(progress) plt.plot(progress) plt.show() my_rating = np.zeros(y.shape[0]) my_rating[50] = 5 # star wars my_rating[56] = 5 # pulp fiction my_rating[64] = 5 # shawshank redemption my_rating[65] = 3 # what's eating gilbert grape my_rating[69] = 5 # forrest gump my_rating[71] = 4 # lion king my_rating[79] = 4 # the fugitive my_rating[86] = 5 # remains of the day my_rating[89] = 4 # blade runner my_rating[94] = 3 # home alone my_rating[96] = 5 # terminator 2 my_rating[98] = 5 # silence of the lambs my_rating[127] = 5 # godfather my_rating[135] = 4 # 2001 space odyssey my_rating[151] = 4 # willie wonka my_rating[172] = 5 # empire strikes back my_rating[174] = 4 # raiders of the lost ark my_rating[178] = 5 # 12 angry men my_rating[181] = 5 # return of the jedi my_rating[185] = 4 # psycho my_rating[195] = 5 # terminator my_rating[196] = 4 # dead poets society my_rating[200] = 3 # shining my_rating[202] = 5 # groundhog day my_rating[483] = 4 # Casablanca (1942) my_rating[755] = 3 # Jumanji (1995) my_rating[902] = 5 # Big Lebowski, The (1998) my_rating[1127] = 5 # Truman Show, The (1998) my_rating[204] = 5 # Back to the Future (1985) my_rating[209] = 3 # This Is Spinal Tap (1984) my_rating[214] = 2 # Pink Floyd - The Wall (1982) 
my_rating[216] = 4 # When Harry Met Sally... (1989)
my_rating[250] = 4 # Fifth Element, The (1997)
my_rating[257] = 4 # Men in Black (1997)
my_rating[302] = 5 # L.A. Confidential (1997)
my_rating[318] = 5 # Schindler's List (1993)
my_rating[340] = 4 # Boogie Nights (1997)

# indexes in the file start with one, compensate
my_rating = np.roll(my_rating, -1, axis=0)

my_rating = pd.Series(my_rating)
my_actual_rating = my_rating[my_rating > 0]
# select the learned feature rows for the rated movies (index by position, not by rating value)
my_movies = x.iloc[my_actual_rating.index]

all_movies = pd.read_csv('movie_ids.txt', header=None, quotechar='"').iloc[:, 1]

# we created linear regression in the 1st lab, no need to repeat the code here
reg = Ridge(alpha=0.1).fit(my_movies, my_actual_rating)

prediction = pd.Series(reg.predict(x))
prediction = prediction.sort_values(ascending=False)
for i, score in prediction.iloc[:20].items():
    print('{}: {}'.format(all_movies.iat[i], score))
```
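The `main.run_descent` helper is not shown in this notebook; presumably it minimises the standard collaborative-filtering objective, which only scores the entries flagged in `R`. A toy sketch of that cost (the function name and regularisation form here are assumptions, not the actual `main.py` code):

```python
import numpy as np

def cf_cost(x, theta, y, r, reg_param):
    # Error only over entries that actually have a rating (r == 1)
    err = (x @ theta.T - y) * r
    return 0.5 * (np.sum(err ** 2)
                  + reg_param * (np.sum(x ** 2) + np.sum(theta ** 2)))

x = np.array([[1.0, 0.0], [0.0, 1.0]])      # 2 movies x 2 latent features
theta = np.array([[1.0, 1.0], [0.5, 0.5]])  # 2 users x 2 latent features
y = np.array([[1.0, 0.5], [1.0, 0.5]])      # observed ratings
r = np.array([[1, 1], [1, 0]])              # 1 where a rating exists

print(cf_cost(x, theta, y, r, reg_param=0.0))  # 0.0: predictions match exactly
```

Gradient descent drives this quantity down; the `progress` list plotted above is exactly such a sequence of cost values.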
# Facial Emotion Recognition (Model Training)

This dataset consists of 48×48 pixel grayscale face images. The images are centered and occupy an equal amount of space. The dataset consists of 7 categories:

0. angry
1. disgust (removed)
2. fear
3. happy
4. sad
5. surprise
6. neutral

Training set: 28,709 samples
Public test set: 3,589 examples

```
# Import Libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import os
import cv2
from PIL import Image
import tensorflow as tf
from IPython.display import Image as IPyImage  # aliased to avoid shadowing PIL.Image
from sklearn.model_selection import train_test_split
from skimage.transform import resize
from sklearn.metrics import accuracy_score
from keras.applications.resnet import ResNet50
from keras.applications.nasnet import NASNetLarge
from keras.models import Model, Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D, BatchNormalization, Activation
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint
from tensorflow.keras.utils import plot_model
from keras import regularizers
import keras.backend as K

import warnings
warnings.filterwarnings('ignore')
```

## Explore dataset

```
# We need to import the file on google drive (authorization)
from google.colab import drive
drive.mount('/content/drive')

# Count the images in the train folder
train = {}
count = 0
for folder in os.listdir('/content/drive/MyDrive/Colab Notebooks/FER_2013_images/train'):
    temp = []
    for files in os.listdir(f"/content/drive/MyDrive/Colab Notebooks/FER_2013_images/train/{folder}"):
        temp.append(files)
    count += len(temp)
    train[folder] = temp
    print(f"{folder} has {len(temp)} images")
print(f"Total images in all folders are {count}")

# Count the images in the test folder
test = {}
count = 0
for folder in os.listdir('/content/drive/MyDrive/Colab Notebooks/FER_2013_images/test'):
    temp = []
    for files in os.listdir(f"/content/drive/MyDrive/Colab Notebooks/FER_2013_images/test/{folder}"):
        temp.append(files)
    count += len(temp)
    test[folder] = temp
    print(f"{folder} has {len(temp)} images")
print(f"Total images in all folders are {count}")

# training dataframe with folder name as index
train_df = pd.DataFrame.from_dict(train.values())
train_df.index = train.keys()
train_df

# list of all the emotions to classify in the output
emotions = [k for k in train.keys()]
emotions

base_dir = '/content/drive/MyDrive/Colab Notebooks/FER_2013_images/'

# let's see one image from each category from the training data
plt.figure(figsize=(20, 8))
for i in range(6):
    ax = plt.subplot(2, 4, i + 1)
    img = cv2.imread(f"{base_dir}train/{emotions[i]}/{train_df.loc[emotions[i], i+7]}")
    ax.imshow(img, cmap='gray')
    ax.set_xticks([])
    ax.set_yticks([])
    ax.set_title(emotions[i])

# To know the shape of one image
one_img = cv2.imread('/content/drive/MyDrive/Colab Notebooks/FER_2013_images/train/surprise/Training_10013223.jpg')
one_img.shape
```

## Data Preprocessing

We use Keras's `ImageDataGenerator` to augment images in real time while the model is still training. The `ImageDataGenerator` class has three methods, `flow()`, `flow_from_directory()`, and `flow_from_dataframe()`, to read images from a big NumPy array or from folders containing images. Here we use `flow_from_directory()` because it expects at least one subdirectory per class under the given directory path.
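`flow_from_directory()` infers labels from the folder layout: each class gets its own subdirectory, and class indices are assigned in sorted folder-name order. A minimal sketch of that mapping, with hypothetical class folders and no Keras required:

```python
import os
import tempfile

# build a toy train/ directory with one subfolder per class
root = tempfile.mkdtemp()
for cls in ["surprise", "angry", "happy"]:
    os.makedirs(os.path.join(root, "train", cls))

# flow_from_directory sorts folder names before assigning indices
classes = sorted(os.listdir(os.path.join(root, "train")))
class_indices = {name: i for i, name in enumerate(classes)}
print(class_indices)  # {'angry': 0, 'happy': 1, 'surprise': 2}
```

This is the same mapping exposed later by `generator.class_indices`, which is why the emotion folders must be named consistently between `train/` and `test/`.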
```
# Data preprocessing (augmentation)
train_dir = f"{base_dir}/train"
test_dir = f"{base_dir}/test"

train_datagen = ImageDataGenerator(horizontal_flip=True,
                                   rotation_range=10,
                                   zoom_range=0.1,
                                   width_shift_range=0.1,
                                   height_shift_range=0.1,
                                   rescale=1./255,
                                   fill_mode='nearest',
                                   validation_split=0.2)
valid_datagen = ImageDataGenerator(rescale=1./255, validation_split=0.2)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(train_dir,
                                                    target_size=(48, 48),
                                                    batch_size=64,
                                                    class_mode='categorical',
                                                    color_mode='grayscale',
                                                    subset="training")
valid_generator = valid_datagen.flow_from_directory(train_dir,
                                                    target_size=(48, 48),
                                                    class_mode='categorical',
                                                    subset='validation',
                                                    color_mode='grayscale',
                                                    batch_size=64)
test_generator = test_datagen.flow_from_directory(test_dir,
                                                  target_size=(48, 48),
                                                  batch_size=64,
                                                  color_mode='grayscale',
                                                  class_mode='categorical')
```

### CNN Model Building

```
model = None

# Build the CNN model
model = Sequential()

model.add(Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu', input_shape=(48, 48, 1)))
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu', padding='same'))
model.add(BatchNormalization())
model.add(MaxPooling2D(2, 2))
model.add(Dropout(0.25))

model.add(Conv2D(128, kernel_size=(3, 3), activation='relu', padding='same',
                 kernel_regularizer=regularizers.l2(0.01)))
model.add(Conv2D(256, kernel_size=(3, 3), activation='relu',
                 kernel_regularizer=regularizers.l2(0.01)))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(6, activation='softmax'))

# model summary
print('Number of layers:', len(model.layers))
model.summary()

# function to calculate f1_score
def f1_score(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    recall = true_positives / (possible_positives + K.epsilon())
    f1_val = 2 * (precision * recall) / (precision + recall + K.epsilon())
    return f1_val

# evaluation metrics
from keras.metrics import AUC, BinaryAccuracy, Precision, Recall
metric = [BinaryAccuracy(name='accuracy'),
          Precision(name='precision'),
          Recall(name='recall'),
          AUC(name='AUC')]

# callbacks
checkpoint = ModelCheckpoint('model.h5')
earlystop = EarlyStopping(patience=20, verbose=1)  # restore_best_weights=True
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=20,
                              verbose=1, min_lr=1e-10)  # min_delta=0.0001
callbacks = [checkpoint, earlystop, reduce_lr]

# compile model (optimization)
model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=metric)
```

The difference between `Keras.fit` and `Keras.fit_generator`, the two functions used to train a deep learning neural network:

* `model.fit` is used when the entire training dataset can fit into memory and no data augmentation is applied.
* `model.fit_generator` is used when either the dataset is too large to fit into memory or data augmentation needs to be applied.
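The backend-based `f1_score` above can be sanity-checked with a plain NumPy version of the same formula (an illustrative re-implementation, not part of the original notebook):

```python
import numpy as np

def f1_score_np(y_true, y_pred, eps=1e-7):
    # mirrors the Keras-backend version: clip, round, then count positives
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    tp = np.sum(np.round(np.clip(y_true * y_pred, 0, 1)))
    possible = np.sum(np.round(np.clip(y_true, 0, 1)))
    predicted = np.sum(np.round(np.clip(y_pred, 0, 1)))
    precision = tp / (predicted + eps)
    recall = tp / (possible + eps)
    return 2 * precision * recall / (precision + recall + eps)

# one true positive, one false positive, one false negative -> F1 = 0.5
print(f1_score_np([1, 1, 0, 0], [1, 0, 1, 0]))
```

The small `eps` term plays the same role as `K.epsilon()`: it keeps the ratios defined when a batch has no positive predictions at all.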
```
# model fitting
history = model.fit_generator(train_generator,
                              validation_data=valid_generator,
                              epochs=100,
                              verbose=1,
                              callbacks=callbacks)
```

### Save Model

```
model.save('model_optimal_singlechannel_addingdata_0626.h5')
model.save_weights('model_weights_singlechannel_addingdata_0626.h5')
```

### Plotting

```
plt.figure()
plt.plot(history.history['accuracy'], label='training accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.title('Accuracy')
plt.xlabel('epochs')
plt.ylabel('Accuracy')
plt.legend()

plt.figure()
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.title('Loss')
plt.xlabel('epochs')
plt.ylabel('Loss')
plt.legend()

plt.figure()
plt.plot(history.history['AUC'], label='training auc')
plt.plot(history.history['val_AUC'], label='validation auc')
plt.title('AUC')
plt.xlabel('epochs')
plt.ylabel('AUC')
plt.legend()

plt.figure()
plt.plot(history.history['precision'], label='train precision')
plt.plot(history.history['val_precision'], label='validation precision')
plt.title('Precision')
plt.xlabel('epochs')
plt.ylabel('precision')
plt.legend()

plt.figure()
plt.plot(history.history['recall'], label='train recall')
plt.plot(history.history['val_recall'], label='validation recall')
plt.title('Recall')
plt.xlabel('epochs')
plt.ylabel('recall')
plt.legend()
```

### Testing Dataset Accuracy

```
model.evaluate(test_generator)
```

### Confusion Matrix

```
# Caution: for predictions to line up with `train_generator.classes`,
# the generator must be created with shuffle=False.
y_pred = model.predict(train_generator)
y_pred = np.argmax(y_pred, axis=1)

class_labels = test_generator.class_indices
class_labels = {v: k for k, v in class_labels.items()}

from sklearn.metrics import classification_report, confusion_matrix
cm_train = confusion_matrix(train_generator.classes, y_pred)
print('Confusion Matrix')
print(cm_train)
print('Classification Report')
target_names = list(class_labels.values())
print(classification_report(train_generator.classes, y_pred, target_names=target_names))

plt.figure(figsize=(8, 8))
plt.imshow(cm_train, interpolation='nearest')
plt.colorbar()
tick_mark = np.arange(len(target_names))
_ = plt.xticks(tick_mark, target_names, rotation=90)
_ = plt.yticks(tick_mark, target_names)

# Import the model (if re-connecting)
from tensorflow import keras
model = keras.models.load_model('/content/drive/MyDrive/Colab Notebooks/Kevin/model_optimal.h5')
```
github_jupyter
This notebook has been adapted from the [spam example](https://github.com/snorkel-team/snorkel-tutorials/blob/master/spam/01_spam_tutorial.ipynb) of the [snorkel examples github](https://github.com/snorkel-team/snorkel-tutorials). Please visit the official snorkel tutorials [link](https://github.com/snorkel-team/snorkel-tutorials) for a more detailed and exhaustive guide on how to use snorkel.<br> This notebook demonstrates how to use snorkel for data labeling. Our goal here is to build a dataset which can be used to classify whether a YouTube comment is spam or ham.

## Installing Dependencies

```
# To install only the requirements of this notebook, uncomment the lines below and run this cell
# ===========================
# !pip install numpy==1.19.5
# !pip install pandas==1.1.5
# !pip install wget==3.2
# !pip install matplotlib==3.2.2
# !pip install utils==1.0.1
# !pip install snorkel==0.9.6
# !pip install scikit-learn==0.21.3
# !pip install textblob==0.15.3
# !pip install treedlib==0.1.3
# !pip install numbskull==0.1.1
# !pip install spacy==2.2.4
# ===========================

# To install the requirements for the entire chapter, uncomment the lines below and run this cell
# ===========================
# try :
#     import google.colab
#     !curl https://raw.githubusercontent.com/practical-nlp/practical-nlp/master/Ch2/ch2-requirements.txt | xargs -n 1 -L 1 pip install
# except ModuleNotFoundError :
#     !pip install -r "ch2-requirements.txt"
# ===========================

# !pip install tensorflow==1.15
# !pip install tensorboard==1.15

!python -m spacy download en_core_web_sm

import warnings
warnings.filterwarnings('ignore')
```

## Dataset

Let's get the YouTube spam classification dataset from the UCI ML Repository archive. The link for the dataset can be found [here](http://archive.ics.uci.edu/ml/machine-learning-databases/00380/YouTube-Spam-Collection-v1.zip).
```
import os
import wget
import zipfile
import shutil

file_link = "http://archive.ics.uci.edu/ml/machine-learning-databases/00380/YouTube-Spam-Collection-v1.zip"
os.makedirs("content", exist_ok=True)
if not os.path.exists("content/YouTube-Spam-Collection-v1.zip"):
    wget.download(file_link, out="content/")
else:
    print("File already exists")

with zipfile.ZipFile("content/YouTube-Spam-Collection-v1.zip", 'r') as zip_ref:
    zip_ref.extractall("content/")

shutil.rmtree("content/__MACOSX")
os.remove("content/YouTube-Spam-Collection-v1.zip")
os.listdir("content")
```

Let's clone the necessary repos.

```
git_repo = "snorkel-tutorials"
if os.path.isdir(git_repo + '/.git'):
    print(f"Git repo: {git_repo} already exists!")
else:
    !git clone https://github.com/snorkel-team/snorkel-tutorials.git
    source = "content/"
    destination = git_repo + "/spam/data/"
    os.makedirs(destination, exist_ok=True)
    files = os.listdir(source)
    for file in files:
        new_path = shutil.move(f"{source}/{file}", destination)
    print(f"Git repo: {git_repo} created!")

os.chdir("snorkel-tutorials/spam")
```

## Making the necessary imports

```
import re
import glob
import utils
from snorkel.analysis import get_label_buckets
from snorkel.labeling import labeling_function
from snorkel.labeling import LFAnalysis
from snorkel.labeling import PandasLFApplier
from snorkel.labeling import LabelingFunction
from snorkel.labeling.model import MajorityLabelVoter
from snorkel.labeling.model import LabelModel
from snorkel.labeling import filter_unlabeled_dataframe
from snorkel.labeling.lf.nlp import nlp_labeling_function
from snorkel.preprocess import preprocessor
from snorkel.preprocess.nlp import SpacyPreprocessor
from snorkel.utils import probs_to_preds
from textblob import TextBlob
import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
%matplotlib inline

def load_spam_dataset(load_train_labels: bool = False, split_dev_valid: bool = False):
    filenames = sorted(glob.glob("data/Youtube*.csv"))
    dfs = []
    for i, filename in enumerate(filenames, start=1):
        df = pd.read_csv(filename)
        # Lowercase column names
        df.columns = map(str.lower, df.columns)
        # Remove comment_id field
        df = df.drop("comment_id", axis=1)
        # Add field indicating source video
        df["video"] = [i] * len(df)
        # Rename fields
        df = df.rename(columns={"class": "label", "content": "text"})
        # Shuffle order
        df = df.sample(frac=1, random_state=123).reset_index(drop=True)
        dfs.append(df)

    df_train = pd.concat(dfs[:4])
    df_dev = df_train.sample(100, random_state=123)

    if not load_train_labels:
        df_train["label"] = np.ones(len(df_train["label"])) * -1
    df_valid_test = dfs[4]
    df_valid, df_test = train_test_split(
        df_valid_test, test_size=250, random_state=123, stratify=df_valid_test.label
    )

    if split_dev_valid:
        return df_train, df_dev, df_valid, df_test
    else:
        return df_train, df_test

df_train, df_test = load_spam_dataset()
print("Train")
display(df_train.head())
print('Test')
df_test.head()

Y_test = df_test.label.values
Y_test[:5]
```

There are a few things to keep in mind with respect to the dataset:
1. HAM represents a NON-SPAM comment.
2. SPAM is a SPAM comment.
3. ABSTAIN is for neither of the above.

We initialise their respective values below.

```
ABSTAIN = -1
HAM = 0
SPAM = 1
```

We need to find some pattern in the data, so as to create rules for labeling the data.<br> Hence, we randomly display some rows of the dataset so that we can try to find some pattern in the text.
```
df_train[["author", "text", "video"]].sample(20, random_state=2020)
```

An example of how a labeling function can be defined: anything that might match the pattern of a spam message can be used. Here "http" is used as an example, since many spam comments contain links.

```
@labeling_function()
def lf_contains_link(x):
    # Return a label of SPAM if "http" in comment text, otherwise ABSTAIN
    return SPAM if "http" in x.text.lower() else ABSTAIN
```

Defining labeling functions to check for strings such as 'check', 'check out', 'http', 'my channel', and 'subscribe'.<br> Try to write your own labeling functions too.

```
@labeling_function()
def check(x):
    return SPAM if "check" in x.text.lower() else ABSTAIN

@labeling_function()
def check_out(x):
    return SPAM if "check out" in x.text.lower() else ABSTAIN

@labeling_function()
def my_channel(x):
    return SPAM if "my channel" in x.text.lower() else ABSTAIN

@labeling_function()
def if_subscribe(x):
    return SPAM if "subscribe" in x.text.lower() else ABSTAIN
```

Using **PandasLFApplier** to apply our labeling functions to a pandas dataframe object. These labeling functions can also be used on columns other than 'text'.

```
lfs = [check_out, check, lf_contains_link, my_channel, if_subscribe]

applier = PandasLFApplier(lfs=lfs)
L_train = applier.apply(df=df_train)
L_train
```

_Coverage_ is the fraction of the dataset the labeling function labels.

```
coverage_check_out, coverage_check, coverage_link, coverage_my_channel, coverage_subscribe = (L_train != ABSTAIN).mean(axis=0)
print(f"check_out coverage: {coverage_check_out * 100:.1f}%")
print(f"check coverage: {coverage_check * 100:.1f}%")
print(f"link coverage: {coverage_link * 100:.1f}%")
print(f"my_channel coverage: {coverage_my_channel * 100:.1f}%")
print(f"if_subscribe coverage: {coverage_subscribe * 100:.1f}%")
```

Before we proceed further, let us understand a bit of jargon with respect to the summary of the _LFAnalysis_:

1. Polarity - the set of unique labels that the labeling function outputs (excluding abstains)
2. Overlaps - rows where this labeling function and at least one other labeling function both emit a label
3. Conflicts - rows where at least one other labeling function emits a different label, i.e. the labeling functions disagree upon the value to be returned

```
LFAnalysis(L=L_train, lfs=lfs).lf_summary()
```

Trying and checking the results by filtering out the matching rows and checking for false positives.

```
# display(df_train.iloc[L_train[:, 1] == SPAM].sample(10, random_state=2020))
# display(df_train.iloc[L_train[:, 2] == SPAM].sample(10, random_state=2020))
df_train.iloc[L_train[:, 3] == SPAM].sample(10, random_state=2020)
```

Combining two labeling functions and checking the results.

```
# buckets = get_label_buckets(L_train[:, 0], L_train[:, 1])
# buckets = get_label_buckets(L_train[:, 1], L_train[:, 2])
buckets = get_label_buckets(L_train[:, 0], L_train[:, 3])
df_train.iloc[buckets[(ABSTAIN, SPAM)]].sample(10, random_state=1)
```

**Regex Based Labeling Functions:** Using regular expressions makes the labeling functions more robust to different variations of the pattern string; we then repeat the same process as above.<br> Please feel free to find patterns and write similar functions for 'http', 'subscribe', etc., and if possible send a pull request.

```
# using regular expressions
@labeling_function()
def regex_check_out(x):
    return SPAM if re.search(r"check.*out", x.text, flags=re.I) else ABSTAIN

lfs = [check_out, check, regex_check_out]
applier = PandasLFApplier(lfs=lfs)
L_train = applier.apply(df=df_train)
LFAnalysis(L=L_train, lfs=lfs).lf_summary()

buckets = get_label_buckets(L_train[:, 1], L_train[:, 2])
df_train.iloc[buckets[(SPAM, ABSTAIN)]].sample(10, random_state=2020)
```

Let's use a third-party model, TextBlob in this case, to write a labeling function. Snorkel makes this very simple to implement.
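Before moving on, the coverage, overlap, and conflict statistics reported by `lf_summary()` are easy to reproduce by hand on a toy label matrix, which helps make the definitions concrete (illustrative values only):

```python
import numpy as np

ABSTAIN = -1
# toy label matrix: 4 comments (rows) x 2 labeling functions (columns)
L = np.array([
    [1,  1],   # both label SPAM          -> overlap, no conflict
    [1,  0],   # one SPAM, one HAM        -> overlap and conflict
    [-1, 1],   # only the second LF votes
    [-1, -1],  # neither votes
])

coverage = (L != ABSTAIN).mean(axis=0)              # per-LF fraction labeled
both = (L != ABSTAIN).all(axis=1)
overlap = both.mean()                               # both LFs emit a label
conflict = (both & (L[:, 0] != L[:, 1])).mean()     # ...and they disagree

print(coverage.tolist(), overlap, conflict)  # [0.5, 0.75] 0.5 0.25
```

With more than two labeling functions, `lf_summary()` computes these per column against all other columns, but the idea is the same.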
```
@preprocessor(memoize=True)
def textblob_sentiment(x):
    scores = TextBlob(x.text)
    x.polarity = scores.sentiment.polarity
    x.subjectivity = scores.sentiment.subjectivity
    return x

@labeling_function(pre=[textblob_sentiment])
def textblob_polarity(x):
    return HAM if x.polarity > 0.9 else ABSTAIN

@labeling_function(pre=[textblob_sentiment])
def textblob_subjectivity(x):
    return HAM if x.subjectivity >= 0.5 else ABSTAIN

lfs = [textblob_polarity, textblob_subjectivity]

applier = PandasLFApplier(lfs)
L_train = applier.apply(df_train)
LFAnalysis(L_train, lfs).lf_summary()
```

## Writing more labeling functions

Single labeling functions aren't enough to label the entire dataset accurately, as they do not have enough coverage. We usually need to combine different (more robust and accurate) labeling functions to get this done.

**Keyword-based labeling functions**: These are similar to the ones used before with the `labeling_function` decorator; here we just make a few changes.

```
def keyword_lookup(x, keywords, label):
    if any(word in x.text.lower() for word in keywords):
        return label
    return ABSTAIN

def make_keyword_lf(keywords, label=SPAM):
    return LabelingFunction(
        name=f"keyword_{keywords[0]}",
        f=keyword_lookup,
        resources=dict(keywords=keywords, label=label),
    )

"""Spam comments talk about 'my channel', 'my video', etc."""
keyword_my = make_keyword_lf(keywords=["my"])

"""Spam comments ask users to subscribe to their channels."""
keyword_subscribe = make_keyword_lf(keywords=["subscribe"])

"""Spam comments post links to other channels."""
keyword_link = make_keyword_lf(keywords=["http"])

"""Spam comments make requests rather than commenting."""
keyword_please = make_keyword_lf(keywords=["please", "plz"])

"""Ham comments actually talk about the video's content."""
keyword_song = make_keyword_lf(keywords=["song"], label=HAM)
```

Modifying the above functions to use regular expressions would be an interesting exercise, which we leave to the reader.
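Because `keyword_lookup` is plain Python, its logic can be exercised directly on a toy comment object before wrapping it in a `LabelingFunction`. A standalone restatement of the helper, using `SimpleNamespace` in place of a DataFrame row:

```python
from types import SimpleNamespace

ABSTAIN, HAM, SPAM = -1, 0, 1

def keyword_lookup(x, keywords, label):
    # same logic as the LabelingFunction resource defined above
    if any(word in x.text.lower() for word in keywords):
        return label
    return ABSTAIN

row = SimpleNamespace(text="Please subscribe to my channel!")
print(keyword_lookup(row, ["subscribe"], SPAM))  # 1 (SPAM)
print(keyword_lookup(row, ["song"], HAM))        # -1 (ABSTAIN)
```

Testing labeling functions on a handful of handwritten comments like this is a cheap way to catch inverted logic before applying them to the full dataframe.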
Having other methods, such as rules of thumb or heuristics (e.g. the length of the text), could help too. These are not extremely accurate but will get the job done to a certain extent. An example is given below.

```
@labeling_function()
def short_comment(x):
    """Ham comments are often short, such as 'cool video!'"""
    return HAM if len(x.text.split()) < 5 else ABSTAIN
```

We can also use NLP preprocessors such as spaCy to enrich our data and provide us with more fields to work on, which makes the labeling a bit easier.

```
# The SpacyPreprocessor parses the text in text_field and
# stores the new enriched representation in doc_field
spacy = SpacyPreprocessor(text_field="text", doc_field="doc", memoize=True)

@labeling_function(pre=[spacy])
def has_person(x):
    """Ham comments mention specific people and are short."""
    if len(x.doc) < 20 and any([ent.label_ == "PERSON" for ent in x.doc.ents]):
        return HAM
    else:
        return ABSTAIN

# snorkel has a pre-built labeling-function-like decorator that uses spaCy,
# as it is a very common NLP preprocessor
@nlp_labeling_function()
def has_person_nlp(x):
    """Ham comments mention specific people and are short."""
    if len(x.doc) < 20 and any([ent.label_ == "PERSON" for ent in x.doc.ents]):
        return HAM
    else:
        return ABSTAIN
```

## Outputs

Let's move on to learning how we can combine labeling function outputs with label models.
```
lfs = [
    keyword_my,
    keyword_subscribe,
    keyword_link,
    keyword_please,
    keyword_song,
    regex_check_out,
    short_comment,
    has_person_nlp,
    textblob_polarity,
    textblob_subjectivity,
]

applier = PandasLFApplier(lfs=lfs)
L_train = applier.apply(df=df_train)
L_test = applier.apply(df=df_test)

LFAnalysis(L=L_train, lfs=lfs).lf_summary()
```

We plot a histogram to get an idea of the coverages of the labeling functions.

```
def plot_label_frequency(L):
    plt.hist((L != ABSTAIN).sum(axis=1), density=True, bins=range(L.shape[1]))
    plt.xlabel("Number of labels")
    plt.ylabel("Fraction of dataset")
    plt.show()

plot_label_frequency(L_train)
```

We now convert the labels from our labeling functions into a single noise-aware probabilistic label per data point. We do so by taking a majority vote on how the data point should be labeled, i.e. if more labeling functions agree that the text is spam, then we label it as spam.

```
majority_model = MajorityLabelVoter()
preds_train = majority_model.predict(L=L_train)
preds_train
```

However, there may be functions that are correlated and might give a false sense of majority. To handle this, we use a different snorkel label model to combine the outputs of the labeling functions.
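The majority vote just described can be sketched in plain NumPy over a label matrix (ties abstain in this toy version; Snorkel's `MajorityLabelVoter` exposes its own tie-break policies):

```python
import numpy as np

ABSTAIN = -1

def majority_vote(row):
    votes = row[row != ABSTAIN]
    if votes.size == 0:
        return ABSTAIN
    vals, counts = np.unique(votes, return_counts=True)
    winners = vals[counts == counts.max()]
    # a clear winner is returned; ties abstain in this sketch
    return int(winners[0]) if winners.size == 1 else ABSTAIN

L = np.array([
    [1,  1, -1],   # two SPAM votes        -> 1
    [0,  1, -1],   # one HAM, one SPAM tie -> abstain
    [-1, -1, -1],  # no votes              -> abstain
])
preds = [majority_vote(row) for row in L]
print(preds)  # [1, -1, -1]
```

The middle row illustrates exactly the failure mode the label model addresses: a bare majority vote has no way to weigh one labeling function's vote more than another's.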
```
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train=L_train, n_epochs=500, log_freq=100, seed=123)

majority_acc = majority_model.score(L=L_test, Y=Y_test, tie_break_policy="random")["accuracy"]
print(f"{'Majority Vote Accuracy:':<25} {majority_acc * 100:.1f}%")

label_model_acc = label_model.score(L=L_test, Y=Y_test, tie_break_policy="random")["accuracy"]
print(f"{'Label Model Accuracy:':<25} {label_model_acc * 100:.1f}%")
```

We plot another histogram to see the confidence that each data point is spam.

```
def plot_probabilities_histogram(Y):
    plt.hist(Y, bins=10)
    plt.xlabel("Probability of SPAM")
    plt.ylabel("Number of data points")
    plt.show()

probs_train = label_model.predict_proba(L=L_train)
plot_probabilities_histogram(probs_train[:, SPAM])
```

There might be some data points that do not get any label from the functions; we filter them out as follows.

```
df_train_filtered, probs_train_filtered = filter_unlabeled_dataframe(
    X=df_train, y=probs_train, L=L_train
)
```

## Training a classifier

In this section we use the probabilistic training labels we generated to train a classifier. For demonstration we use scikit-learn.<br> _Note: Do not worry if you do not understand what a classifier is. We cover all of this in Chapter 4. Please read Ch4 and look at the jupyter notebooks in Ch4._

```
vectorizer = CountVectorizer(ngram_range=(1, 5))
X_train = vectorizer.fit_transform(df_train_filtered.text.tolist())
X_test = vectorizer.transform(df_test.text.tolist())

preds_train_filtered = probs_to_preds(probs=probs_train_filtered)

sklearn_model = LogisticRegression(C=1e3, solver="liblinear")
sklearn_model.fit(X=X_train, y=preds_train_filtered)

print(f"Test Accuracy: {sklearn_model.score(X=X_test, y=Y_test) * 100:.1f}%")
```

We have just scratched the surface of what snorkel can do. We highly recommend going through their [github](https://github.com/snorkel-team/snorkel) and [tutorials](https://github.com/snorkel-team/snorkel-tutorials).
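For unambiguous rows, converting class probabilities to hard labels is just a per-row arg-max; a minimal NumPy equivalent of that step (toy probabilities, not real model output — Snorkel's `probs_to_preds` additionally handles tie-breaking):

```python
import numpy as np

# rows: data points, columns: P(HAM), P(SPAM)
probs = np.array([
    [0.90, 0.10],
    [0.20, 0.80],
    [0.55, 0.45],
])
preds = probs.argmax(axis=1)
print(preds.tolist())  # [0, 1, 0]
```

Training the downstream classifier on these hard labels rather than the raw probabilities is what lets an off-the-shelf `LogisticRegression` consume the label model's output.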
github_jupyter
# Import the json and pprint libraries

```
import pandas as pd
import numpy as np
import json
import pprint
from collections import Counter
import watermark

%load_ext watermark
%watermark -n -v -iv
```

# Load the JSON data and look for potential issues

```
with open('data/allcandidatenewssample.json') as f:
    candidatenews = json.load(f)

len(candidatenews)
pprint.pprint(candidatenews[0:2])
pprint.pprint(candidatenews[0]['source'])
```

# Check for differences in the structure of the dictionaries

```
Counter([len(item) for item in candidatenews])
pprint.pprint(next(item for item in candidatenews if len(item) < 9))

# checking the usage of next
pprint.pprint((item for item in candidatenews if len(item) < 9))
pprint.pprint(next(item for item in candidatenews if len(item) > 9))
pprint.pprint([item for item in candidatenews if len(item) == 2][0:10])

candidatenews = [item for item in candidatenews if len(item) > 2]
len(candidatenews)
```

# Generate counts from the JSON data

```
politico = [item for item in candidatenews if item['source'] == "Politico"]
len(politico)
pprint.pprint(politico[0:2])
```

# Get the source data and confirm that it has the anticipated length

```
sources = [item.get('source') for item in candidatenews]
type(sources)
len(sources)
sources[0:5]
pprint.pprint(Counter(sources).most_common(10))
```

# Fix any errors in the values in the dictionary

```
for newsdict in candidatenews:
    newsdict.update((k, 'The Hill') for k, v in newsdict.items()
                    if k == 'source' and v == 'TheHill')

# Usage of item.get('source') instead of item['source'].
# This is handy when there might be missing keys in a dictionary. get returns None when the key
# is missing, but we can use an optional second argument to specify a value to return.
sources = [item.get('source') for item in candidatenews]
pprint.pprint(Counter(sources).most_common(10))
```

# Create a pandas DataFrame

```
candidatenewsdf = pd.DataFrame(candidatenews)
candidatenewsdf.dtypes
```

# Confirm that we are getting the expected values for source

```
candidatenewsdf.rename(columns={'date': 'storydate'}, inplace=True)
candidatenewsdf['storydate'] = candidatenewsdf['storydate'].astype('datetime64[ns]')
candidatenewsdf.shape
candidatenewsdf['source'].value_counts(sort=True).head(10)
```
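The in-place `dict.update` normalization used above can be seen end-to-end on a toy list of records (made-up rows, same pattern as the 'TheHill' fix):

```python
from collections import Counter

records = [
    {"source": "TheHill", "title": "story 1"},
    {"source": "Politico", "title": "story 2"},
    {"source": "TheHill", "title": "story 3"},
]

# rewrite the inconsistent 'TheHill' spelling to 'The Hill' in place
for rec in records:
    rec.update((k, "The Hill") for k, v in rec.items()
               if k == "source" and v == "TheHill")

print(Counter(r["source"] for r in records))
# Counter({'The Hill': 2, 'Politico': 1})
```

Doing the normalization on the list of dicts before `pd.DataFrame(...)` means the DataFrame's `value_counts` is clean from the start, with no follow-up `replace` needed.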
github_jupyter
<style>div.container { width: 100% }</style>
<img style="float:left; vertical-align:text-bottom;" height="65" width="172" src="../assets/holoviz-logo-unstacked.svg" />
<div style="float:right; vertical-align:text-bottom;"><h2>Tutorial 0. Setup</h2></div>

This first step of the tutorial will make sure your system is set up to do all the remaining sections, with all software installed and all data downloaded as needed. The [index](index.ipynb) provided some links you might want to examine before you start.

## Getting set up

Please consult [holoviz.org](http://holoviz.org/installation.html) for the full instructions on installing the software used in these tutorials. Here is the condensed version of those instructions, assuming you have already downloaded and installed [Anaconda](https://www.anaconda.com/download) or [Miniconda](https://conda.io/miniconda.html) and have opened a command prompt in a Conda environment:

```
conda install anaconda-project
anaconda-project download pyviz/holoviz_tutorial
cd holoviz_tutorial   # You may need to delete this directory if you've run the command above before
anaconda-project run jupyter notebook
```

If you prefer JupyterLab to the default (classic) notebook interface, you can replace "notebook" with "lab". Once your chosen environment is running, navigate to `tutorial/00_Setup.ipynb` (i.e. this notebook) and run the following cell to test the key imports needed for this tutorial.
If it completes without errors your environment should be ready to go:

```
import datashader as ds, bokeh, holoviews as hv  # noqa
from distutils.version import LooseVersion

min_versions = dict(ds='0.13.0', bokeh='2.3.2', hv='1.14.4')

for lib, ver in min_versions.items():
    v = globals()[lib].__version__
    if LooseVersion(v) < LooseVersion(ver):
        print("Error: expected {}={}, got {}".format(lib, ver, v))
```

And you should see the HoloViews, Bokeh, and Matplotlib logos after running the following cell:

```
hv.extension('bokeh', 'matplotlib')
```

## Downloading sample data

Lastly, let's make sure the datasets needed are available. First, check that the large earthquake dataset was downloaded correctly during the `anaconda-project run` command:

```
import os
from pyct import cmd

if not os.path.isfile('../data/earthquakes-projected.parq'):
    cmd.fetch_data(name='holoviz', path='..')  # Alternative way to fetch the data
```

Make sure that you have the SNAPPY dependency required to read these data:

```
try:
    import pandas as pd
    columns = ['depth', 'id', 'latitude', 'longitude', 'mag', 'place', 'time', 'type']
    data = pd.read_parquet('../data/earthquakes-projected.parq', columns=columns, engine='fastparquet')
    data.head()
except RuntimeError as e:
    print('The data cannot be read: %s' % e)
```

If you don't see any error messages above, you should be good to go! Now that you are set up, you can continue with the [rest of the tutorial sections](01_Overview.ipynb).
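As an aside, `distutils.version.LooseVersion` is deprecated in recent Python releases; the same minimum-version gate can be sketched with plain tuple comparison (a stand-in with hypothetical installed versions, not part of the official tutorial):

```python
def parse_version(v):
    # turn "1.14.4" into (1, 14, 4); assumes plain numeric components
    return tuple(int(part) for part in v.split("."))

min_versions = {"ds": "0.13.0", "bokeh": "2.3.2", "hv": "1.14.4"}
installed = {"ds": "0.13.0", "bokeh": "2.4.0", "hv": "1.14.2"}  # hypothetical

too_old = [lib for lib, ver in min_versions.items()
           if parse_version(installed[lib]) < parse_version(ver)]
print(too_old)  # only hv is below its minimum here
```

For real version strings with pre-release suffixes ("2.4.0rc1"), a dedicated parser such as `packaging.version.Version` is the safer choice.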
github_jupyter