# cadCAD Tutorials: The Robot and the Marbles, part 2
In [Part 1](../robot-marbles-part-1/robot-marbles-part-1.ipynb) we introduced the 'language' in which a system must be described for cadCAD to interpret it, along with some of the basic concepts of the library:
* State Variables
* Timestep
* State Update Functions
* Partial State Update Blocks
* Simulation Configuration Parameters
This article will introduce the concept of __Policies__. But first let's copy the base configuration from Part 1. As a reminder, here's the description of the simple system we are using for illustration purposes.
__The robot and the marbles__
* Picture a box (`box_A`) with ten marbles in it; an empty box (`box_B`) next to the first one; and a robot arm capable of taking a marble from any one of the boxes and dropping it into the other one.
* The robot is programmed to take one marble at a time from the box containing the largest number of marbles and drop it in the other box. It repeats that process until the boxes contain an equal number of marbles.
```
%%capture
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# List of all the state variables in the system and their initial values
genesis_states = {
'box_A': 10, # as per the description of the example, box_A starts out with 10 marbles in it
'box_B': 0 # as per the description of the example, box_B starts out empty
}
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
def update_A(params, step, sH, s, _input):
y = 'box_A'
add_to_A = 0
if (s['box_A'] > s['box_B']):
add_to_A = -1
elif (s['box_A'] < s['box_B']):
add_to_A = 1
x = s['box_A'] + add_to_A
return (y, x)
def update_B(params, step, sH, s, _input):
y = 'box_B'
add_to_B = 0
if (s['box_B'] > s['box_A']):
add_to_B = -1
elif (s['box_B'] < s['box_A']):
add_to_B = 1
x = s['box_B'] + add_to_B
return (y, x)
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# In the Partial State Update Blocks, the user specifies if state update functions will be run in series or in parallel
partial_state_update_blocks = [
{
'policies': { # We'll ignore policies for now
},
'variables': { # The following state variables will be updated simultaneously
'box_A': update_A,
'box_B': update_B
}
}
]
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Settings of general simulation parameters, unrelated to the system itself
# `T` is a range with the number of discrete units of time the simulation will run for;
# `N` is the number of times the simulation will be run (Monte Carlo runs)
# In this example, we'll run the simulation once (N=1) and its duration will be 10 timesteps
# We'll cover the `M` key in a future article. For now, let's omit it
sim_config_dict = {
'T': range(10),
'N': 1,
#'M': {}
}
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Import some additional utilities to help with configuration set-up
from cadCAD.configuration.utils import config_sim
from cadCAD.configuration import Experiment
exp = Experiment()
c = config_sim(sim_config_dict)
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# The configurations above are then packaged into a `Configuration` object
exp.append_configs(initial_state=genesis_states, #dict containing variable names and initial values
partial_state_update_blocks=partial_state_update_blocks, #dict containing state update functions
sim_configs=c #preprocessed dictionaries containing simulation parameters
)
from cadCAD.engine import ExecutionMode, ExecutionContext
exec_mode = ExecutionMode()
local_mode_ctx = ExecutionContext(exec_mode.local_mode)
from cadCAD.engine import Executor
from cadCAD import configs
simulation = Executor(exec_context=local_mode_ctx, configs=configs) # Pass the configuration object inside an array
raw_system_events, tensor_field, sessions = simulation.execute() # The `execute()` method returns a tuple; its first element contains the raw results
%matplotlib inline
import pandas as pd
simulation_result = pd.DataFrame(raw_system_events)
simulation_result.plot('timestep', ['box_A', 'box_B'], grid=True,
colormap = 'RdYlGn',
xticks=list(simulation_result['timestep'].drop_duplicates()),
yticks=list(range(1+(simulation_result['box_A']+simulation_result['box_B']).max())));
```
# Policies
In part 1, we ignored the `_input` argument of state update functions. That argument is a signal passed to the state update function by another set of functions: Policy Functions.
Policy Functions are most commonly used to represent the behavior of agents that interact with the components of the system we're simulating in cadCAD. More generally, they describe the logic of some component or mechanism of the system. It is possible to encode the functionality of a policy function in the state update functions themselves (as we did in part 1, where the robot's algorithm resided in the `update_A` and `update_B` functions), but as systems grow more complex this approach makes the code harder to read and maintain, and in some cases less efficient because of unnecessary repetition of computational steps.
The general structure of a policy function is:
```python
def policy_function(params, step, sL, s):
...
return {'value1': value1, 'value2': value2, ...}
```
Just like State Update Functions, policies can read the current state of the system from argument `s`, a Python `dict` where the `dict_keys` are the __names of the variables__ and the `dict_values` are their __current values__. The Policy Function must return a dictionary, which will be passed as an argument (`_input`) to the state update functions.
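As a plain-Python sketch (outside cadCAD, with illustrative names), the signal flow looks like this: the dict returned by the policy is handed to the state update function as its `_input` argument.

```python
# A minimal sketch of the policy -> state update signal flow.
# `example_policy` and `example_update` are illustrative names.
def example_policy(params, step, sL, s):
    # Read the current state, compute a signal
    return {'delta': 1 if s['box_A'] < 5 else -1}

def example_update(params, step, sL, s, _input):
    # Apply the policy's signal to one state variable
    return ('box_A', s['box_A'] + _input['delta'])

state = {'box_A': 10}
signal = example_policy(None, 0, [], state)
print(example_update(None, 0, [], state, signal))  # ('box_A', 9)
```

Inside a simulation, cadCAD's engine performs this wiring for you at every timestep.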

Let's update our simulation so that the robot arm's logic is encoded in a Policy instead of in the State Update Functions.
```
%%capture
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# We specify the robot arm's logic in a Policy Function
def robot_arm(params, step, sH, s):
add_to_A = 0
if (s['box_A'] > s['box_B']):
add_to_A = -1
elif (s['box_A'] < s['box_B']):
add_to_A = 1
return({'add_to_A': add_to_A, 'add_to_B': -add_to_A})
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# We make the state update functions less "intelligent",
# ie. they simply add the number of marbles specified in _input
# (which, per the policy function definition, may be negative)
def increment_A(params, step, sH, s, _input):
y = 'box_A'
x = s['box_A'] + _input['add_to_A']
return (y, x)
def increment_B(params, step, sH, s, _input):
y = 'box_B'
x = s['box_B'] + _input['add_to_B']
return (y, x)
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# In the Partial State Update Blocks,
# the user specifies if state update functions will be run in series or in parallel
# and the policy functions that will be evaluated in that block
partial_state_update_blocks = [
{
'policies': { # The following policy functions will be evaluated and their returns will be passed to the state update functions
'robot_arm': robot_arm
},
'variables': { # The following state variables will be updated simultaneously
'box_A': increment_A,
'box_B': increment_B
}
}
]
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
del configs[:] # Clear any prior configs
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# The configurations above are then packaged into a `Configuration` object
exp.append_configs(initial_state=genesis_states, #dict containing variable names and initial values
partial_state_update_blocks=partial_state_update_blocks, #dict containing state update functions
sim_configs=c #preprocessed dictionaries containing simulation parameters
)
executor = Executor(local_mode_ctx, configs) # Pass the configuration object inside an array
raw_result, tensor, sessions = executor.execute() # The `execute()` method returns a tuple; its first element contains the raw results
%matplotlib inline
simulation_result = pd.DataFrame(raw_result)
simulation_result.plot('timestep', ['box_A', 'box_B'], grid=True,
xticks=list(simulation_result['timestep'].drop_duplicates()),
colormap = 'RdYlGn',
yticks=list(range(1+(simulation_result['box_A']+simulation_result['box_B']).max())));
```
As expected, the results are the same as when the robot arm logic was encoded within the state update functions.
Several policies may be evaluated within a Partial State Update Block. When that's the case, cadCAD's engine aggregates the outputs of the policies and passes them as a single signal to the state update functions.

Aggregation of policies is defined in cadCAD as the __key-wise sum (+) of the elements of the returned `dict`s__.
```python
policy_1_output = {'int': 1, 'str': 'abc', 'list': [1, 2], '1-only': 'Specific to policy 1'}
policy_2_output = {'int': 2, 'str': 'def', 'list': [3, 4], '2-only': 'Specific to policy 2'}
print(aggregate([policy_1_output, policy_2_output]))
```
```
{'int': 3, 'str': 'abcdef', 'list': [1, 2, 3, 4], '1-only': 'Specific to policy 1', '2-only': 'Specific to policy 2'}
```
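The `aggregate` call above is illustrative pseudocode (cadCAD's engine performs this aggregation internally); a minimal sketch of the key-wise `+` behavior could look like:

```python
def aggregate(policy_outputs):
    # Key-wise sum (+) over the dicts returned by each policy.
    # Keys that appear in only one dict pass through unchanged.
    result = {}
    for output in policy_outputs:
        for key, value in output.items():
            result[key] = result[key] + value if key in result else value
    return result

policy_1_output = {'int': 1, 'str': 'abc', 'list': [1, 2], '1-only': 'Specific to policy 1'}
policy_2_output = {'int': 2, 'str': 'def', 'list': [3, 4], '2-only': 'Specific to policy 2'}
print(aggregate([policy_1_output, policy_2_output]))
# {'int': 3, 'str': 'abcdef', 'list': [1, 2, 3, 4],
#  '1-only': 'Specific to policy 1', '2-only': 'Specific to policy 2'}
```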
To illustrate, let's add to the system a second robot arm, identical to the first one, acting in tandem with it. All it takes is adding a policy to the `dict` that describes the partial state update block.
```
%%capture
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# In the Partial State Update Blocks,
# the user specifies if state update functions will be run in series or in parallel
# and the policy functions that will be evaluated in that block
partial_state_update_blocks = [
{
'policies': { # The following policy functions will be evaluated and their returns will be passed to the state update functions
'robot_arm_1': robot_arm,
'robot_arm_2': robot_arm
},
'variables': { # The following state variables will be updated simultaneously
'box_A': increment_A,
'box_B': increment_B
}
}
]
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
del configs[:] # Clear any prior configs
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# The configurations above are then packaged into a `Configuration` object
exp.append_configs(initial_state=genesis_states, #dict containing variable names and initial values
partial_state_update_blocks=partial_state_update_blocks, #dict containing state update functions
sim_configs=c #preprocessed dictionaries containing simulation parameters
)
executor = Executor(local_mode_ctx, configs) # Pass the configuration object inside an array
raw_result, tensor, sessions = executor.execute() # The `execute()` method returns a tuple; its first element contains the raw results
%matplotlib inline
simulation_result = pd.DataFrame(raw_result)
simulation_result.plot('timestep', ['box_A', 'box_B'], grid=True,
xticks=list(simulation_result['timestep'].drop_duplicates()),
colormap = 'RdYlGn',
yticks=list(range(1+(simulation_result['box_A']+simulation_result['box_B']).max())));
```
Because we have made it so that both robots read and update the state of the system at the same time, the equilibrium we had before (with 5 marbles in each box) is never reached. Instead, the system oscillates around that point.
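The overshoot can be sketched in plain Python (outside cadCAD; names illustrative): because each arm computes its move from the same pre-update state, the combined move is twice as large as a single arm's.

```python
# Sketch of two simultaneous robot arms, each acting on the same
# pre-update state, so their moves double up and overshoot.
def robot_arm_signal(state):
    if state['box_A'] > state['box_B']:
        return -1
    elif state['box_A'] < state['box_B']:
        return 1
    return 0

state = {'box_A': 10, 'box_B': 0}
history = [state['box_A']]
for _ in range(6):
    move = 2 * robot_arm_signal(state)   # two identical arms act at once
    state = {'box_A': state['box_A'] + move, 'box_B': state['box_B'] - move}
    history.append(state['box_A'])
print(history)  # [10, 8, 6, 4, 6, 4, 6]: oscillates around 5, never settles
```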
---
_About BlockScience_
[BlockScience](http://bit.ly/github_articles_M_02) is a research and engineering firm specialized in complex adaptive systems and applying practical methodologies from engineering design, development and testing to projects in emerging technologies such as blockchain. Follow us on [Medium](http://bit.ly/bsci-medium) or [Twitter](http://bit.ly/bsci-twitter) to stay in touch.
# Wrangling and Analyzing Data
## 1. Introduction
Data preparation is often the hardest part of a data analyst's workflow. In this project, we will use data wrangling skills to pull real-world data from Twitter, clean it, and do some analysis. We will get the original Twitter data from Twitter user @dog_rates, along with an image prediction dataset, to build our analysis.
WeRateDogs is a popular Twitter account; as the name tells, people rate dogs with a denominator of 10, and the numerator is usually higher than 10 to show how lovely the dog is.
## 2. Gathering Data
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import os
import json
import requests
import tweepy
import re
from tweepy import OAuthHandler
from timeit import default_timer as timer
%matplotlib inline
```
### 2.1 Import the on hand twitter data
```
twt_df1 = pd.read_csv('twitter-archive-enhanced.csv')
twt_df1.head()
```
### 2.2 Scrape data from website
```
url = "https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv"
response = requests.get(url)
with open('image_prediction.tsv', mode='wb') as file:
file.write(response.content)
img_df = pd.read_csv('image_prediction.tsv', sep='\t')
img_df.head()
```
I tried to get data from the Twitter API by registering a Twitter developer account, but the application was rejected by Twitter. So I used the data sent by my Udacity instructor instead.
```
# Query Twitter API for each tweet in the Twitter archive and save JSON in a text file
# These are hidden to comply with Twitter's API terms and conditions
consumer_key = 'HIDDEN'
consumer_secret = 'HIDDEN'
access_token = 'HIDDEN'
access_secret = 'HIDDEN'
auth = OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_secret)
api = tweepy.API(auth, wait_on_rate_limit=True)
# NOTE TO STUDENT WITH MOBILE VERIFICATION ISSUES:
# df_1 is a DataFrame with the twitter_archive_enhanced.csv file. You may have to
# change line 17 to match the name of your DataFrame with twitter_archive_enhanced.csv
# NOTE TO REVIEWER: this student had mobile verification issues so the following
# Twitter API code was sent to this student from a Udacity instructor
# Tweet IDs for which to gather additional data via Twitter's API
tweet_ids = twt_df1.tweet_id.values
len(tweet_ids)
# Query Twitter's API for JSON data for each tweet ID in the Twitter archive
count = 0
fails_dict = {}
start = timer()
# Save each tweet's returned JSON as a new line in a .txt file
with open('tweet-json.txt', 'w') as outfile:
# This loop will likely take 20-30 minutes to run because of Twitter's rate limit
for tweet_id in tweet_ids:
count += 1
print(str(count) + ": " + str(tweet_id))
try:
tweet = api.get_status(tweet_id, tweet_mode='extended')
print("Success")
json.dump(tweet._json, outfile)
outfile.write('\n')
except tweepy.TweepError as e:
print("Fail")
fails_dict[tweet_id] = e
pass
end = timer()
print(end - start)
print(fails_dict)
```
Store the JSON file in a dataframe.
```
df2_list = []
with open('tweet-json.txt', 'r', encoding='utf8') as file:
for line in file:
lines = json.loads(line)
df2_list.append({'tweet_id': lines['id'],
'favorites': lines['favorite_count'],
'retweets': lines['retweet_count'],
'timestamp': lines['created_at']})
twt_df2 = pd.DataFrame(df2_list, columns=['tweet_id','timestamp','favorites','retweets'])
twt_df2.head()
```
## 3. Assessing Data
Now, it's time to check the data we gathered in part 2.
### 3.1 Assessment
twt_df1
```
twt_df1.head()
twt_df1.tail()
twt_df1.info()
twt_df1.describe()
twt_df1.sort_values('timestamp')
```
There is a lot of missing data in the **in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id, and retweeted_status_timestamp** columns. Let's try to find a connection among them:
```
twt_df1[twt_df1.in_reply_to_status_id.notna()].head()
twt_df1[twt_df1.retweeted_status_id.notna()].head()
```
twt_df1 columns:
* **tweet_id**: the unique identifier for each tweet
* **in_reply_to_status_id**: if the tweet is a reply, this column contains the original tweet's id
* **in_reply_to_user_id**: if the tweet is a reply, this column contains the original tweet's user id
* **timestamp**: date and time of the tweet
* **source**: utility used to post the tweet
* **text**: content of the tweet
* **retweeted_status_id**: if the tweet is a retweet, this column contains the original tweet's id
* **retweeted_status_user_id**: if the tweet is a retweet, this column contains the original tweet's user id
* **retweeted_status_timestamp**: if the tweet is a retweet, this column contains the original tweet's timestamp
* **expanded_urls**: URL of the tweet
* **rating_numerator**: rating numerator of the dog mentioned in the tweet
* **rating_denominator**: rating denominator of the dog mentioned in the tweet
* **name**: the name of the dog
* **doggo**/ **floofer**/ **pupper**/ **puppo**: nicknames for dogs at different stages of life (dog stages)
img_df
```
img_df.head()
img_df.tail()
img_df.info()
img_df.describe()
```
img_df columns:
* **tweet_id**: the unique identifier of the tweet
* **jpg_url**: the URL of the image
* **img_num**: image number of the tweet
* **p1**: the first prediction of the image with the most prediction confidence
* **p1_conf**: how confident the algorithm is in the first prediction
* **p1_dog**: whether or not the first prediction is a dog
* **p2**: the second prediction of the image with the second prediction confidence
* **p2_conf**: how confident the algorithm is in the second prediction
* **p2_dog**: whether or not the second prediction is a dog
* **p3**: the third prediction of the image with the third prediction confidence
* **p3_conf**: how confident the algorithm is in the third prediction
* **p3_dog**: whether or not the third prediction is a dog
twt_df2
```
twt_df2.head()
twt_df2.tail()
twt_df2.info()
twt_df2.describe()
```
twt_df2 columns:
* **tweet_id**: the unique identifier of the tweet
* **timestamp**: the created time of the tweet
* **favorites**: favorite counts of the tweet
* **retweets**: retweet counts of the tweet
### 3.2 Quality and tidiness problems
twt_df1
Let's first check if our unique identifier is truly unique or not:
```
twt_df1.tweet_id.duplicated().sum()
```
Have a look at the first several rows:
```
twt_df1.head()
twt_df1.info()
```
* **tweet_id**: this column should be string instead of int
* **timestamp**: this column should be date-time format instead of string
* **expanded_urls**: this column has multiple missing values
```
twt_df1.describe()
twt_df1.tweet_id.duplicated().sum()
twt_df1.loc[twt_df1.expanded_urls.isnull()]
```
The text column is not fully displayed; we need to see the full text content:
```
#https://stackoverflow.com/questions/25351968/how-to-display-full-non-truncated-dataframe-information-in-html-when-convertin
def print_full(x):
pd.set_option('display.max_rows', len(x))
pd.set_option('display.max_columns', None)
pd.set_option('display.width', 2000)
pd.set_option('display.float_format', '{:20,.2f}'.format)
pd.set_option('display.max_colwidth', -1)
print(x)
pd.reset_option('display.max_rows')
pd.reset_option('display.max_columns')
pd.reset_option('display.width')
pd.reset_option('display.float_format')
pd.reset_option('display.max_colwidth')
print_full(twt_df1.head()[['text', 'rating_numerator', 'rating_denominator']])
print_full(twt_df1.tail()[['text', 'rating_numerator', 'rating_denominator']])
print_full(twt_df1.sample(5)[['text', 'rating_numerator', 'rating_denominator']])
```
So the rating of the dog appears at the end of the text content. Let's check the rating scheme with value counts:
```
twt_df1.rating_denominator.value_counts()
```
Some of the rating denominator values are not 10. Let's dig into them by checking the text content and denominator of these rows:
```
check_denominator = twt_df1.query("rating_denominator > 10")[['text', 'rating_numerator', 'rating_denominator']]
check_denominator
print_full(check_denominator)
```
So, some tweets contain another number/number expression in the text, and that expression was recorded as the rating. Some other ratings are just strange, with big denominators. I'm going to delete those strange ratings and keep the ones that were wrongly parsed but have a real rating at the end of the text.
```
twt_df1.rating_numerator.value_counts()
check_numerator = twt_df1.query("rating_numerator > 20")[['text', 'rating_numerator', 'rating_denominator']]
check_numerator
print_full(check_numerator)
```
* **rating_denominator**: some denominators are not 10. One reason is that some text contents have multiple number/number patterns, and only the first one was parsed into the rating numerator and denominator, even when it is not a rating but something like a date. Another reason is that some posted images contain more than one dog, so e.g. 10 dogs are rated against a denominator of 100.
* **rating_numerator**: some numerators are too big. Besides the reasons listed for the denominator, there is another: people just love the dogs so much that they give such high ratings. So if we solve the denominator problem, we don't need to worry about the numerators.
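To see why the first number/number match can be wrong, here is a small regex sketch on a made-up text (not an actual tweet from the dataset):

```python
import re

# Made-up example: an idiom like "24/7" matches the same pattern
# as the real rating "14/10" at the end of the text
text = "This pupper is on duty 24/7 keeping watch. 14/10 would pet"
matches = re.findall(r'\d+\.?\d*/\d+', text)
print(matches)      # ['24/7', '14/10']
print(matches[-1])  # '14/10': the last match is usually the actual rating
```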
Check the source column, which is not fully displayed:
```
print_full(twt_df1.source.sample(5))
```
* **source**: this column has a html structure which can be simplified
Let's have a look of the dog names column:
```
twt_df1.name.head()
twt_df1.name.value_counts()
```
There are some values in this column that don't look like real names: a, an, the, very, and so on. They are all in lower case, so we can use this feature to check for abnormalities.
```
twt_df1.loc[(twt_df1.name.str.islower())].name.value_counts()
twt_df1.loc[(twt_df1.name.str.islower())].name.value_counts().index
```
The list above proves the hypothesis: lower-case strings are not real dog names.
* **name**: this column has some missing values and some of the names are not real dog names but articles or adjectives.
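The lowercase heuristic can be sketched on toy data (names illustrative):

```python
import pandas as pd

# Toy data: real dog names are capitalized; mis-parsed words are lowercase
names = pd.Series(['Charlie', 'a', 'Lucy', 'the', 'quite', 'Oliver'])
not_names = names[names.str.islower()]
print(list(not_names))  # ['a', 'the', 'quite']
```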
We can also have a look at the last four columns of this dataset: doggo, floofer, pupper, and puppo. This is a tidiness problem: the columns themselves are values of a variable. The variable name for these 4 columns should be something like 'dog stages'.
```
twt_df1.groupby(["doggo", "floofer", "pupper", "puppo"]).size().reset_index().rename(columns={0: "count"})
```
* **doggo, floofer, pupper, puppo**: tidiness problem: columns themselves are values of a variable. And some of the dogs have multiple dog stages.
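A minimal sketch on toy data of the tidy fix applied later in section 4 (collapsing the four stage columns into one `dog_stages` variable):

```python
import pandas as pd

# Toy frame mimicking the four one-hot stage columns
df = pd.DataFrame({
    'tweet_id': [1, 2, 3],
    'doggo':   ['doggo', 'None', 'doggo'],
    'floofer': ['None', 'None', 'None'],
    'pupper':  ['None', 'pupper', 'pupper'],
    'puppo':   ['None', 'None', 'None'],
})
stages = ['doggo', 'floofer', 'pupper', 'puppo']
df[stages] = df[stages].replace('None', '')          # blank out placeholders
df['dog_stages'] = df[stages].agg(''.join, axis=1)   # concatenate row-wise
print(df['dog_stages'].tolist())  # ['doggo', 'pupper', 'doggopupper']
```

Multi-stage rows like `'doggopupper'` still need manual relabeling, which the cleaning section handles.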
img_df
```
img_df.head()
img_df.info()
```
* **tweet_id**: this column should be string instead of int
```
img_df.describe()
```
Start from checking unique identifiers:
```
img_df.tweet_id.duplicated().sum()
```
This dataset looks pretty clean. The only problem is that the strings in the **p1**, **p2**, and **p3** columns are not all lowercase.
* **p1**, **p2**, **p3**: dog breed names are not all in lowercase
twt_df2
```
twt_df2.head()
twt_df2.sample(5)
twt_df2.info()
```
* **tweet_id**: this column should be string instead of int
* **timestamp**: this column should be date-time instead of string
```
twt_df2.describe()
twt_df2.tweet_id.duplicated().sum()
```
We can summarize the data quality and tidiness problems now:
**Quality problems**:
* **tweet_id**, **timestamp**: wrong data types
* **source**: this column has a useless html structure that can be simplified
* **retweets**: there are some retweets that are essentially duplicates of the actual tweets
* **expanded_urls**: multiple missing values
* **rating_denominator**: some denominators are not 10. One reason is that some text contents have multiple number/number patterns, and only the first one was parsed into the rating numerator and denominator, even when it is not a rating but something like a date. Another reason is that some posted images contain more than one dog, so e.g. 10 dogs are rated against a denominator of 100.
* **rating_numerator**: some numerators are too big. Besides the reasons listed for the denominator, there is another: people just love the dogs so much that they give such high ratings. So if we solve the denominator problem, we don't need to worry about the numerators.
* **name**: this column has some missing values and some of the names are not real dog names but articles or adjectives
* **p1**, **p2**, **p3**: dog breed names are not all in lowercase
* **unnecessary columns to be deleted**: delete unnecessary columns to make the final dataset more neat and tidy
**Tidiness problems**:
* **doggo, floofer, pupper, puppo**: tidiness problem: columns themselves are values of a variable. Notice: some of the dogs have multiple dog stages.
* **need to merge all the datasets**: merge the three datasets into one using inner join according to the tweet_id
## 4. Cleaning Data
```
df1_clean = twt_df1.copy()
df2_clean = twt_df2.copy()
df3_clean = img_df.copy()
```
* **need to merge all the datasets**: merge the three datasets into one using inner join according to the tweet_id
**Define**
we can start with combining all three dataset with the unique identifier *tweet_id*
**Code**
```
df4 = pd.merge(df1_clean, df2_clean, left_on='tweet_id', right_on='tweet_id', how='inner')
df5 = pd.merge(df4, df3_clean, left_on='tweet_id', right_on='tweet_id', how='inner')
df5_clean = df5
```
**Test**
```
df5_clean.info()
```
In general, I will fix the tidiness problems and quality problems first, then delete these unnecessary columns.
* **source**: this column has a useless html structure that can be simplified
**Define**
Delete the useless html structure by using regular expressions
**Code**
```
href_tags = re.compile(r'<[^>]+>')
def remove_tags(text):
return href_tags.sub('', text)
df5_clean['source'] = df5_clean['source'].apply(remove_tags)
```
**Test**
```
df5_clean.source.sample(5)
df5_clean.source.value_counts()
```
* **rating_denominator**: some denominators are not 10. One reason is that some text contents have multiple number/number patterns, and only the first one was parsed into the rating numerator and denominator, even when it is not a rating but something like a date. Another reason is that some posted images contain more than one dog, so e.g. 10 dogs are rated against a denominator of 100.
* **rating_numerator**: some numerators are too big. Besides the reasons listed for the denominator, there is another: people just love the dogs so much that they give such high ratings. So if we solve the denominator problem, we don't need to worry about the numerators.
**Define**
Use a regular expression to extract all the number/number patterns in the text. Assign the first extracted value to a rating column; then, for rows that have two number/number patterns in their text, update the rating column with the second value. After that, compute the true rating by extracting the denominator and numerator.
**Code**
```
rating = df5_clean.text.str.extractall(r'(\d+\.?\d*\/{1}\d+)')
rating.head()
rating.xs(1, level='match').head()
match1 = rating.xs(1, level='match')
match1_index = match1.index
match1_index = np.array(match1_index)
match1_index
match1.columns = match1.columns.astype(str)
match1.rename(columns={"0":"rating"}, inplace=True)
df5_clean['rating'] = rating.xs(0, level='match')
df5_clean.update(match1)
```
**Test**
```
df5_clean.rating
```
Here is another method that does the same thing, using a regular expression match:
**Code**
```
regex1 = r'(\d+\.?\d*\/{1}\d+)'
regex2 = r'(\.{1}\d+)'
rating_new = df5_clean.text.tolist()
df5_clean['rating'] = [re.sub(regex2, '', re.findall(regex1, x)[-1]) for x in rating_new]
```
**Test**
```
df5_clean.rating
```
This is not the end of this adjustment. Since all the ratings are now based on a denominator of 10, we can just keep the numerator to represent the rating. Besides, some ratings are decimal values, so we should use the float data type.
**Code**
```
rating_df = df5_clean.rating.str.extract(r'(\d+\.?\d*\/)')
rating_scores = rating_df[0].str.strip('/')
df5_clean['rating'] = rating_scores.astype(float)
```
**Test**
```
df5_clean.info()
```
* **expanded_urls**: multiple missing values
**Define**
Remove the missing values in the expanded_urls column with .dropna
**Code**
```
df5_clean.dropna(subset=['expanded_urls'], inplace=True)
```
**Test**
```
df5_clean.expanded_urls.isnull().sum()
```
* **p1**, **p2**, **p3**: dog breed names are not all in lowercase
**Define**
Turn them into lower case
**Code**
```
df5_clean['p1'] = df5_clean['p1'].str.lower()
df5_clean['p2'] = df5_clean['p2'].str.lower()
df5_clean['p3'] = df5_clean['p3'].str.lower()
```
**Test**
```
df5_clean.sample(5)
```
* **name**: this column has some missing values and some of the names are not real dog names but articles or adjectives.
**Define**
Replace those not-real-dog-names with None
**Code**
```
not_dog_names = df5_clean.loc[(df5_clean.name.str.islower())].name.value_counts().index.tolist()
not_dog_names.append('None')
not_dog_names
for name in not_dog_names:
df5_clean.loc[df5_clean.name == name, 'name'] = None
```
**Test**
```
df5_clean.name.value_counts()
```
* **doggo, floofer, pupper, puppo**: tidiness problem: columns themselves are values of a variable
**Define**
Combine those 4 columns into one column
**Code**
Firstly, we need to convert Nones and np.NaN to empty string '' for all columns
```
df5_clean.doggo.replace('None', '', inplace=True)
df5_clean.doggo.replace(np.NaN, '', inplace=True)
df5_clean.floofer.replace('None', '', inplace=True)
df5_clean.floofer.replace(np.NaN, '', inplace=True)
df5_clean.pupper.replace('None', '', inplace=True)
df5_clean.pupper.replace(np.NaN, '', inplace=True)
df5_clean.puppo.replace('None', '', inplace=True)
df5_clean.puppo.replace(np.NaN, '', inplace=True)
```
Then we combine the columns
```
df5_clean['dog_stages'] = df5_clean.text.str.extract('(doggo|floofer|pupper|puppo)', expand = True)
```
Notice: some dogs have multiple stages
```
df5_clean['dog_stages'] = df5_clean.doggo + df5_clean.floofer + df5_clean.pupper + df5_clean.puppo
df5_clean.loc[df5_clean.dog_stages == 'doggopupper', 'dog_stages'] = 'doggo, pupper'
df5_clean.loc[df5_clean.dog_stages == 'doggopuppo', 'dog_stages'] = 'doggo, puppo'
df5_clean.loc[df5_clean.dog_stages == 'doggofloofer', 'dog_stages'] = 'doggo, floofer'
```
Now we can delete the four now-redundant columns
```
df5_clean.drop(['doggo','floofer','pupper','puppo'], axis=1, inplace = True)
```
**Test**
```
df5_clean.sample(5)
df5_clean.dog_stages.value_counts()
```
* **retweets**: there are some retweets that are essentially duplicates of the actual tweets
**Define**
Some retweets need to be removed from the dataset, since they are duplicates of the actual tweets
**Code**
```
df5_clean = df5_clean.loc[df5_clean['text'].str.startswith('RT') == False]
```
**Test**
```
df5_clean.info()
```
* **unnecessary columns to be deleted**: delete unnecessary columns to make the final dataset more neat and tidy.
**Define**
Delete unnecessary columns by using df.drop
**Code**
```
drop_cols = ['in_reply_to_status_id','in_reply_to_user_id','retweeted_status_id',
'retweeted_status_user_id','retweeted_status_timestamp',
'rating_numerator', 'rating_denominator',
'p1_conf','p1_dog', 'p2_conf','p2_dog', 'p3_conf','p3_dog']
df5_clean.drop(drop_cols, axis=1, inplace=True)
```
**Test**
```
df5_clean.head()
df5_clean.info()
```
There is a new problem in the dataset: we have two timestamp columns, timestamp_x and timestamp_y. They have the same content in different formats, so we should drop one of them.
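This is standard pandas behavior, sketched here on toy data: when both frames in a merge share a non-key column name, pandas disambiguates the duplicates with `_x`/`_y` suffixes.

```python
import pandas as pd

# Both frames carry a 'timestamp' column alongside the join key
left = pd.DataFrame({'tweet_id': [1], 'timestamp': ['2017-08-01 00:00:00']})
right = pd.DataFrame({'tweet_id': [1], 'timestamp': ['Tue Aug 01 00:00:00 +0000 2017']})
merged = pd.merge(left, right, on='tweet_id')
print(list(merged.columns))  # ['tweet_id', 'timestamp_x', 'timestamp_y']
```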
**Define**
Drop unnecessary timestamp_y column, and rename timestamp_x column to timestamp
**Code**
```
df5_clean.drop(['timestamp_y'], axis=1, inplace=True)
df5_clean.rename(columns={'timestamp_x': 'timestamp'}, inplace=True)
```
**Test**
```
df5_clean.info()
```
* **tweet_id**, **timestamp**: wrong data types
**Define**
Correct all the wrong data types in the dataset, including changing source, img_num, and dog_stages to the category data type for future analysis
**Code**
```
df5_clean['tweet_id'] = df5_clean['tweet_id'].astype(str)
df5_clean['timestamp'] = pd.to_datetime(df5_clean['timestamp'])
df5_clean['source'] = df5_clean['source'].astype('category')
df5_clean['img_num'] = df5_clean['img_num'].astype('category')
df5_clean['dog_stages'] = df5_clean['dog_stages'].astype('category')
```
**Test**
```
df5_clean.info()
```
## 5. Storing, Analyzing, and Visualizing Data
### Storing Data
Now that our dataset is cleaned, we can save it for future use.
```
df5_clean.to_csv('twitter.csv', index=False)
```
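Note that a CSV round-trip drops the dtypes set above: `timestamp` comes back as plain strings and `tweet_id` may be parsed as integers. A sketch of a safer read-back (column names as used in this notebook, toy data inline):

```python
import io
import pandas as pd

# Toy CSV standing in for twitter.csv
csv_text = io.StringIO("tweet_id,timestamp\n892420643555336193,2017-08-01 16:23:56\n")
df = pd.read_csv(csv_text, dtype={'tweet_id': str}, parse_dates=['timestamp'])
print(df.tweet_id.dtype, df.timestamp.dtype)  # object datetime64[ns]
```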
### Analyzing & Visualizing Data
```
df = pd.read_csv('twitter.csv')
```
* which source are people using the most?
```
df.source.value_counts()
```
The answer is quite straightforward: most people use Twitter for iPhone
* what does the distribution of dog stages look like?
```
from matplotlib import rcParams
rcParams.update({'figure.autolayout': True})
plt.figure(figsize=(8,6))
df.dog_stages.value_counts().sort_values(ascending=False).plot.bar()
plt.title("Popular dog stages")
plt.xticks(rotation=45)
plt.xlabel("Dog stages")
plt.ylabel("Number of dogs");
#save pic
plt.savefig('dog_stages.png')
```
* what is the most popular dog name?
```
name_count = df.name.value_counts()
name_list = name_count.index.tolist()
from subprocess import check_output
from wordcloud import WordCloud, ImageColorGenerator
from PIL import Image
import locale
locale.setlocale(locale.LC_ALL, '')
rcParams['figure.figsize']=(8.0,6.0)
rcParams['savefig.dpi']=100
rcParams['font.size']=12
rcParams['figure.subplot.bottom']=.1
round_mask = np.array(Image.open("mask.png"))
wordcloud = WordCloud(background_color='white',
mask=round_mask,
max_words=50,
max_font_size=70,
random_state=23,
).generate(' '.join(name_list))
#save the wordcloud
wordcloud.to_file("dog_names.png")
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.figure()
plt.axis("off")
plt.show();
```
* which breed of dog do people love the most?
```
dog_fav = df.groupby('p1')['favorites'].sum().sort_values(ascending=False).head(6)
dog_fav
dog_ret = df.groupby('p1')['retweets'].sum().sort_values(ascending=False).head(6)
dog_ret
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
dog_fav.plot(figsize = (10,6), kind='bar', color='orange', ax=ax1, width=0.4, position=1,
title='Popular Breeds: Likes vs. Retweets')
dog_ret.plot(figsize = (10,6), kind='bar', color='yellow', ax=ax2, width=0.4, position=0)
ax1.set_ylabel('Likes')
ax2.set_ylabel('Retweets')
ax1.set_xticklabels(dog_fav.index, rotation=30)
h1, l1 = ax1.get_legend_handles_labels()
h2, l2 = ax2.get_legend_handles_labels()
plt.legend(h1+h2, l1+l2, loc=1)
#save pic
plt.savefig('popular_dogs.png', dpi=100)
plt.show();
```
* Which breed of those popular dogs has the highest average rating?
```
rate1 = df.query('p1 == "golden_retriever"').rating.mean()
rate2 = df.query('p1 == "labrador_retriever"').rating.mean()
rate3 = df.query('p1 == "pembroke"').rating.mean()
rate4 = df.query('p1 == "chihuahua"').rating.mean()
rate5 = df.query('p1 == "samoyed"').rating.mean()
rate6 = df.query('p1 == "french_bulldog"').rating.mean()
breeds = ['golden_retriever', 'labrador_retriever', 'pembroke', 'chihuahua', 'samoyed', 'french_bulldog']
rates = [rate1, rate2, rate3, rate4, rate5, rate6]
y_position = np.arange(len(breeds))
plt.bar(y_position, rates, align='center', alpha=0.5)
plt.xticks(y_position, breeds)
plt.xticks(rotation=30)
plt.ylabel("Average rating")
plt.title("Average rating of popular dogs")
plt.savefig('dog_ratings.png')
plt.show()
```
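The six separate `query(...).rating.mean()` calls above can be collapsed into a single groupby; a sketch on a toy frame (the column names match this notebook, the rows are made up):

```python
import pandas as pd

toy = pd.DataFrame({'p1':     ['pembroke', 'pembroke', 'samoyed'],
                    'rating': [12, 14, 13]})
# One pass instead of one query per breed
avg = toy.groupby('p1').rating.mean().reindex(['pembroke', 'samoyed'])
print(avg.tolist())  # [13.0, 13.0]
```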
# Dependencies and Paths
```
!pip install transformers==4.9.1 ruamel.yaml
import random
import os
import copy
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, Dataset
from torch.cuda.amp import autocast, GradScaler
from transformers import AutoTokenizer, AutoModel, AdamW, get_linear_schedule_with_warmup
from tqdm import tqdm
import ruamel.yaml
os.environ["TOKENIZERS_PARALLELISM"] = "false"
config_name = './genetic_algorithm/settings.yaml'
with open(config_name, 'r') as stream:
try:
yaml = ruamel.yaml.YAML()
config = yaml.load(stream)
    except ruamel.yaml.YAMLError as exc:
print(exc)
# New directory for current genetic algorithm
directory = config['Paths']['iteration_folder']
# Path to scored aptamers
aptamerList = config['Paths']['path_to_initial_aptamers']
aptamerListAll = config['Paths']['path_to_all_aptamers']
# Path to PyTorch alBERT model
path_to_model = config['Paths']['path_to_model']
# How many sequences we want to have in a list
apt_len = config['Parameters']['aptamer_len']
aptamerList_iter = './datasets/ga_interim_data/Albumin/breed_1.csv'
```
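For reference, the key lookups above imply a parsed config of roughly the following shape. Every value below is a placeholder, not the real content of `settings.yaml`:

```python
# Hypothetical parsed shape of settings.yaml; all values are placeholders
example_config = {
    'Paths': {
        'iteration_folder': './datasets/ga_interim_data/Albumin/',
        'path_to_initial_aptamers': './datasets/initial_aptamers.csv',
        'path_to_all_aptamers': './datasets/all_aptamers.csv',
        'path_to_model': './models/albert_aptamer.pt',
    },
    'Parameters': {'aptamer_len': 1000},
    'Model': {'model_name': 'albert-base-v2', 'max_len': 64,
              'batch_size': 32, 'with_labels': False},
}
# Same lookups as in the cell above
print(example_config['Paths']['iteration_folder'])
print(example_config['Parameters']['aptamer_len'])
```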
Stage I
Load the model and create a DataLoader for the later GA
```
class CustomDataset(Dataset):
def __init__(self, data, maxlen, with_labels=True, bert_model='albert-base-v2'):
self.data = data # pandas dataframe
self.tokenizer = AutoTokenizer.from_pretrained(bert_model, return_dict=False)
self.maxlen = maxlen
self.with_labels = with_labels
def __len__(self):
return len(self.data)
def __getitem__(self, index):
sent1 = str(self.data.loc[index, 'Sequence1'])
sent2 = str(self.data.loc[index, 'Sequence2'])
# Tokenize the pair of sentences to get token ids, attention masks and token type ids
encoded_pair = self.tokenizer(sent1, sent2,
padding='max_length', # Pad to max_length
truncation=True, # Truncate to max_length
max_length=self.maxlen,
return_tensors='pt') # Return torch.Tensor objects
token_ids = encoded_pair['input_ids'].squeeze(0) # tensor of token ids
attn_masks = encoded_pair['attention_mask'].squeeze(0) # binary tensor with "0" for padded values and "1" for the other values
token_type_ids = encoded_pair['token_type_ids'].squeeze(0) # binary tensor with "0" for the 1st sentence tokens & "1" for the 2nd sentence tokens
if self.with_labels: # True if the dataset has labels
label = self.data.loc[index, 'Label']
return token_ids, attn_masks, token_type_ids, label
else:
return token_ids, attn_masks, token_type_ids
class Model(nn.Module):
def __init__(self, bert_model="albert-base-v2", freeze_bert=False):
super(Model, self).__init__()
self.bert_layer = AutoModel.from_pretrained(bert_model, return_dict=False)
        hidden_size = 768  # albert-base-v2 (~12M parameters) has a hidden size of 768
# More information on available models can be found at https://huggingface.co/transformers/pretrained_models.html
# Freeze model layers and only train the classification layer weights
if freeze_bert:
for p in self.bert_layer.parameters():
p.requires_grad = False
# Putting Classification layer on top of BERT
self.cls_layer = nn.Linear(hidden_size, 1)
self.dropout = nn.Dropout(p=0.1)
@autocast() # Mixes precision
def forward(self, input_ids, attn_masks, token_type_ids):
'''
Inputs:
-input_ids : Tensor containing token ids
-attn_masks : Tensor containing attention masks to be used to focus on non-padded values
-token_type_ids : Tensor containing token type ids to be used to identify sentence1 and sentence2
'''
# Feeding the inputs to the BERT-based model to obtain contextualized representations
cont_reps, pooler_output = self.bert_layer(input_ids, attn_masks, token_type_ids)
# Feeding to the classifier layer the last layer hidden-state of the [CLS] token further processed by a
# Linear Layer and a Tanh activation. The Linear layer weights were trained from the sentence order prediction (ALBERT) or next sentence prediction (BERT)
# objective during pre-training.
logits = self.cls_layer(self.dropout(pooler_output))
return logits
def set_seed(seed):
""" Set all seeds to make results reproducible """
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
np.random.seed(seed)
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
def get_probs_from_logits(logits):
"""
Converts a tensor of logits into an array of probabilities by applying the sigmoid function
"""
probs = torch.sigmoid(logits.unsqueeze(-1))
return probs.detach().cpu().numpy()
def test_prediction(net, device, aptamerDataFrame, dataloader, with_labels, result_path, iteration):
"""
Predict the probabilities on a dataset with or without labels and print the result in a file
"""
net.eval()
probs_all = []
nb_iterations = len(dataloader)
with torch.no_grad():
        if with_labels:
            # the labeled dataset yields a 4-tuple; the label is discarded here
            for it, (seq, attn_masks, token_type_ids, _) in tqdm(enumerate(dataloader), total=nb_iterations):
                seq, attn_masks, token_type_ids = seq.to(device), attn_masks.to(device), token_type_ids.to(device)
                logits = net(seq, attn_masks, token_type_ids)
                probs = get_probs_from_logits(logits.squeeze(-1)).squeeze(-1)
                probs_all += probs.tolist()
else:
for it, (seq, attn_masks, token_type_ids) in tqdm(enumerate(dataloader), total=nb_iterations):
seq, attn_masks, token_type_ids = seq.to(device), attn_masks.to(device), token_type_ids.to(device)
logits = net(seq, attn_masks, token_type_ids)
probs = get_probs_from_logits(logits.squeeze(-1)).squeeze(-1)
probs_all += probs.tolist()
df1 = pd.read_csv(aptamerDataFrame)
probs_all = [round(x) for x in probs_all]
df2 = pd.DataFrame({'Label': probs_all})
df = pd.concat([df1, df2], axis=1)
df.to_csv(result_path)
print("Compared aptamers iteration {} is located in {}".format(iteration, result_path))
bert_model = config['Model']['model_name'] # 'albert-base-v2', 'albert-large-v2', 'albert-xlarge-v2' and others
maxlen = config['Model']['max_len'] # maximum length of the tokenized input sentence pair : if greater than "maxlen", the input is truncated and else if smaller, the input is padded
bs = config['Model']['batch_size'] # batch size of testing
with_labels = config['Model']['with_labels']
iter = 1
```
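`test_prediction` turns raw logits into 0/1 labels by applying a sigmoid and rounding; the core of that conversion, sketched in plain Python on toy logits:

```python
import math

def sigmoid(z):
    # same nonlinearity torch.sigmoid applies in get_probs_from_logits
    return 1.0 / (1.0 + math.exp(-z))

logits = [2.0, -1.5, 0.1]           # toy values
probs = [sigmoid(z) for z in logits]
labels = [round(p) for p in probs]  # same rounding as in test_prediction
print(labels)  # [1, 0, 1]
```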
Stage III
Apply Genetic Algorithm to generate new population of aptamers
```
def run_GA():
    global aptamerList, aptamerList_iter  # these module-level paths are rebound inside the loop
    iter = 1
while iter < 51:
# Generate N aptamers to have the same 1000 as before deleting inferior
!python ./genetic_algorithm/breeder.py --p {aptamerList} --o {directory} --l {apt_len} --i {iter}
if iter > 1:
breedCSV = './datasets/ga_interim_data/Albumin/breed_{}.csv'.format(iter-1)
%rm $breedCSV
# Pair up new batch
!python ./functions/pairing.py --h {aptamerList_iter} --o {directory} --i {iter}
# Call alBERT to compare goodness of sequences
df_test = pd.read_csv('{}iteration_{}.csv'.format(directory, iter))
test_set = CustomDataset(df_test, maxlen, with_labels, bert_model)
        data_toModel = DataLoader(test_set, batch_size=bs) # read in the data first
test_prediction(net=model, device=device, aptamerDataFrame='{}iteration_{}.csv'.format(directory, iter), dataloader=data_toModel, with_labels=False, result_path='{}predicted_{}.csv'.format(directory, iter), iteration=iter)
# Find dominating aptamers and go to step 1 again.
!python ./functions/dominance_score.py --p {directory + 'predicted_' +str(iter) + '.csv'} --f {directory + 'breed_' + str(iter) + '.csv'} --o {directory + 'top_iter_' + str(iter)} --i {iter} --l {apt_len}
        # clean up mismatched file-name endings; ideally the scripts themselves should handle this
iterationCSV = './datasets/ga_interim_data/Albumin/iteration_{}.csv'.format(iter)
predictionCSV = './datasets/ga_interim_data/Albumin/predicted_{}.csv'.format(iter)
aptamerList = directory + 'top_iter_' + str(iter) + '.csv'
iter += 1
aptamerList_iter = './datasets/ga_interim_data/Albumin/breed_{}.csv'.format(iter)
!rm $iterationCSV $predictionCSV
set_seed(2021)
print("Loading model...")
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = Model(bert_model, freeze_bert=False)
model.load_state_dict(torch.load(config['Paths']['path_to_model']))
model.to(device)
model.eval() # probably not needed
run_GA()
```
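Each GA iteration reads and writes a fixed family of files; a sketch of the naming scheme the loop above relies on (the directory value is a placeholder for `config['Paths']['iteration_folder']`):

```python
directory = './datasets/ga_interim_data/Albumin/'  # placeholder iteration folder

def iteration_files(it):
    """File names produced/consumed by one GA iteration."""
    return {
        'breed':     '{}breed_{}.csv'.format(directory, it),      # breeder.py output
        'pairs':     '{}iteration_{}.csv'.format(directory, it),  # pairing.py output
        'predicted': '{}predicted_{}.csv'.format(directory, it),  # model predictions
        'top':       '{}top_iter_{}.csv'.format(directory, it),   # dominance_score.py output
    }

files = iteration_files(3)
print(files['top'])  # ./datasets/ga_interim_data/Albumin/top_iter_3.csv
```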
```
#Toy example: this does not mean anything really, just write out some random returns series and find the maximum level (this was born as a unit test basically, for the logic of a private application).
#requires py3.7 scipy==1.1.0 (conda)
#requires pyDOE (pip)
from dnlcb import DynamicNegativeLowerConfidenceBound
from emukit.bayesian_optimization.loops import BayesianOptimizationLoop
from emukit.core import ParameterSpace, ContinuousParameter
from emukit.core.optimization import RandomSearchAcquisitionOptimizer
from emukit.model_wrappers import GPyModelWrapper
from GPy.models import GPRegression
from GPy.kern.src.brownian import Brownian
from matplotlib import pyplot as plt
import numpy as np
from pandas import Series
import random
#time horizon
parameter_space = ParameterSpace([ContinuousParameter('days', 0, 251)])
#Sample some arithmetic brownian motion, e.g. returns.
def ABM(
X0: float = 100,
T: int = 252,
n: int = 1,
alpha: float = 0,
sigma: float = 100
) -> np.ndarray:
dt = n/T
X = Series([X0])
for i in np.arange(1, T):
ei = np.random.normal(0, 1)
X.loc[i] = X.loc[i-1] + alpha*dt + sigma*ei*(dt**0.5)
return X.to_numpy()
data = ABM(sigma=100, alpha=200)
def f(x):
i=np.round(x).astype(int)
return -data[i]
def XY():
x = np.array(np.array([random.uniform(0, 251)]))
y = f(x)
return (x, y)
X = np.zeros(5)
Y = np.zeros(5)
for i in range(5):
x,y = XY()
X[i] = x
Y[i] = y
X_init = X[:, None]
Y_init = Y[:, None]
#Kernel choice: brownian. This kernel is hardly used in applications, the most common non-smooth kernel is a fractional Matérn I guess. Something very inconvenient with a brownian kernel is its non-differentiability, and the fact that it is strictly one-dimensional (at least in its classic definition): this forces you to marginalize over the dimensions, with an overhead linear in the number of dimensions of course.
#This said it is technically the best assumption if your underlying process does have a brownian nature, like in this example.
kernel = Brownian(variance=np.var(X_init))
#Negate Y_init results as we are solving the dual problem (see below)
#Train and wrap the model in Emukit
model_gpy = GPRegression(X_init,-Y_init,kernel=kernel)
model_emukit = GPyModelWrapper(model_gpy)
#Attention: DNLCB does *not* have convergence guarantees for non-smooth kernel surfaces (see paper), like brownian ones; this basically means we have no guarantee of finding the optimum no matter the number of iterations. As this is a toy example on a conveniently discretized space it's all good, but with real applications be careful with the acquisition choice.
#The input space size must be the same as the parameter space range (see above); starting with a low delta.
dynamic_lcb = DynamicNegativeLowerConfidenceBound(model = model_emukit,input_space_size=252, delta=0.2)
#A brownian motion is nowhere differentiable so its gradient function https://gpy.readthedocs.io/en/deploy/tuto_creating_new_kernels.html#gradients-x-self-dl-dk-x-x2 is undefined; this also means we cannot use any gradient-based acquisition optimizer
acquisition_optimizer = RandomSearchAcquisitionOptimizer(parameter_space, 30)
bayesopt_loop_cust = BayesianOptimizationLoop(
model = model_emukit,
space = parameter_space,
acquisition = dynamic_lcb,acquisition_optimizer=acquisition_optimizer,
batch_size = 1
)
def f_opt(x):
return -f(x)
bayesopt_loop_cust.run_loop(f_opt, 30)
#The optimization engine can only minimize, apparently; so to maximize the original function we will minimize its negative, since max(f) = -min(-f) and both are attained at the same point.
smin=parameter_space.parameters[0].min
smax=parameter_space.parameters[0].max
x_plot = np.linspace(smin, smax , 200)[:, None]
y_plot = f(x_plot)
z_plot = f_opt(x_plot)
plt.figure(figsize=(12, 8))
plt.plot(x_plot, y_plot, "k", label="Original Function")
plt.plot(x_plot, z_plot, "c", label="Optimization Function")
plt.legend(loc=2, prop={'size': 10})
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.grid(True)
plt.xlim(smin, smax)
plt.show()
mu_plot, var_plot = bayesopt_loop_cust.model.predict(x_plot)
plt.figure(figsize=(12, 8))
plt.plot(bayesopt_loop_cust.loop_state.X, bayesopt_loop_cust.loop_state.Y, "ro", markersize=10, label="Observations")
plt.plot(x_plot, y_plot, "k", label="Original Function")
plt.plot(x_plot, z_plot, "c", label="Optimization Function")
plt.plot(x_plot, mu_plot, "C0", label="Model")
x_mean = np.mean(bayesopt_loop_cust.loop_state.X)
plt.scatter(x_mean, f(np.array(x_mean)), label="Average", color='b', s=500)
plt.scatter(bayesopt_loop_cust.get_results().minimum_location,
            bayesopt_loop_cust.get_results().minimum_value, label="Maximum original function / minimum optimization function", color='g', s=500)
plt.legend(loc=2, prop={'size': 10})
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.grid(True)
plt.xlim(smin, smax)
plt.show()
#Variance parameter gets automatically optimized; you may set it as fixed too (see GPy docs).
print(np.var(X_init)) #Initial value we set it to; this should be meaningful with respect to the variance across the year levels.
bayesopt_loop_cust.model.model.parameters[0].variance #final iteration value
#This is how we ask for the next best guess for x, evaluate its y offline (example evaluation: 42) and come back to update the model
from emukit.core.loop import UserFunctionResult
print(bayesopt_loop_cust.loop_state.iteration)
state = bayesopt_loop_cust.loop_state
next_point = bayesopt_loop_cust.candidate_point_calculator.compute_next_points(state)
print(next_point[0])
evaluation_result = [UserFunctionResult(next_point[0], np.array([42]))]
state.update(evaluation_result)
bayesopt_loop_cust.model_updaters[0].update(state)
print(bayesopt_loop_cust.loop_state.iteration)
```
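The duality step used above, maximizing `data` by minimizing its negative, can be checked in a minimal sketch:

```python
import numpy as np

data = np.array([3.0, 7.0, 5.0])

def f(i):
    return -data[i]  # the "dual" objective, as in the notebook

# argmax of the original series equals argmin of its negative
candidates = [f(i) for i in range(len(data))]
print(int(data.argmax()), int(np.argmin(candidates)))  # 1 1
```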
# Example 2: 1st-level Analysis
In this example we will take the preprocessed output from the first example and run for each subject a 1st-level analysis. For this we need to do the following steps:
1. Extract onset times of stimuli from TVA file
2. Specify the model (TR, high pass filter, onset times, etc.)
3. Specify contrasts to compute
4. Estimate contrasts
In the previous example, we used two different smoothing kernels of fwhm=4 and fwhm=8. Therefore, let us also run the 1st-level analysis for those two versions.
**So, let's begin!**
## Imports
First, we need to import all modules we later want to use.
```
from nilearn import plotting
%matplotlib inline
from os.path import join as opj
import json
from nipype.interfaces.spm import Level1Design, EstimateModel, EstimateContrast
from nipype.algorithms.modelgen import SpecifySPMModel
from nipype.interfaces.utility import Function, IdentityInterface
from nipype.interfaces.io import SelectFiles, DataSink
from nipype import Workflow, Node
```
## Experiment parameters
It's always a good idea to specify all parameters that might change between experiments at the beginning of your script.
```
experiment_dir = '/output'
output_dir = 'datasink'
working_dir = 'workingdir'
# list of subject identifiers
subject_list = ['01', '02', '03', '04', '05', '06', '07', '08', '09', '10']
# TR of functional images
with open('/data/ds000114/task-fingerfootlips_bold.json', 'rt') as fp:
task_info = json.load(fp)
TR = task_info['RepetitionTime']
# Smoothing widths used during preprocessing
fwhm = [4, 8]
```
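The TR is read from the BIDS JSON sidecar of the task; a toy version of that lookup (the string below is a hypothetical sidecar, not the real file):

```python
import json

# Toy BIDS sidecar standing in for task-fingerfootlips_bold.json
sidecar = '{"RepetitionTime": 2.5, "TaskName": "finger_foot_lips"}'
task_info = json.loads(sidecar)
TR = task_info['RepetitionTime']
print(TR)  # 2.5
```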
## Specify Nodes
Initiate all the different interfaces (represented as nodes) that you want to use in your workflow.
```
# SpecifyModel - Generates SPM-specific Model
modelspec = Node(SpecifySPMModel(concatenate_runs=False,
input_units='secs',
output_units='secs',
time_repetition=TR,
high_pass_filter_cutoff=128),
name="modelspec")
# Level1Design - Generates an SPM design matrix
level1design = Node(Level1Design(bases={'hrf': {'derivs': [1, 0]}},
timing_units='secs',
interscan_interval=TR,
model_serial_correlations='FAST'),
name="level1design")
# EstimateModel - estimate the parameters of the model
level1estimate = Node(EstimateModel(estimation_method={'Classical': 1}),
name="level1estimate")
# EstimateContrast - estimates contrasts
level1conest = Node(EstimateContrast(), name="level1conest")
```
## Specify GLM contrasts
To do any GLM analysis, we need to also define the contrasts that we want to investigate. If we recap, we had three different conditions in the **fingerfootlips** task in this dataset:
- **finger**
- **foot**
- **lips**
Therefore, we could create the following contrasts (seven T-contrasts and two F-contrasts):
```
# Condition names
condition_names = ['Finger', 'Foot', 'Lips']
# Contrasts
cont01 = ['average', 'T', condition_names, [1/3., 1/3., 1/3.]]
cont02 = ['Finger', 'T', condition_names, [1, 0, 0]]
cont03 = ['Foot', 'T', condition_names, [0, 1, 0]]
cont04 = ['Lips', 'T', condition_names, [0, 0, 1]]
cont05 = ['Finger > others','T', condition_names, [1, -0.5, -0.5]]
cont06 = ['Foot > others', 'T', condition_names, [-0.5, 1, -0.5]]
cont07 = ['Lips > others', 'T', condition_names, [-0.5, -0.5, 1]]
cont08 = ['activation', 'F', [cont02, cont03, cont04]]
cont09 = ['differences', 'F', [cont05, cont06, cont07]]
contrast_list = [cont01, cont02, cont03, cont04, cont05, cont06, cont07, cont08, cont09]
```
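A quick sanity check one can run on such contrast definitions: the weights of the 'average' contrast should sum to one, and those of the differential T-contrasts to zero (a sketch reusing the lists above):

```python
condition_names = ['Finger', 'Foot', 'Lips']
cont01 = ['average', 'T', condition_names, [1/3., 1/3., 1/3.]]
cont05 = ['Finger > others', 'T', condition_names, [1, -0.5, -0.5]]

assert abs(sum(cont01[3]) - 1) < 1e-9  # average contrast sums to 1
assert abs(sum(cont05[3])) < 1e-9      # differential contrast sums to 0
print('contrast weights OK')
```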
## Specify GLM Model
The next step is now to get information such as stimuli onset, duration and other regressors into the GLM model. For this we need to create a helper function, in our case called ``subjectinfo``.
To recap, let's see what we have in the TSV file for each run:
```
!cat /data/ds000114/task-fingerfootlips_events.tsv
```
We can also create a data frame using pandas library.
```
import pandas as pd
trialinfo = pd.read_table('/data/ds000114/task-fingerfootlips_events.tsv')
trialinfo
```
And finally we need to separate the onsets of the three conditions, i.e. group by ``trial_type``. This can be done as follows:
```
for group in trialinfo.groupby('trial_type'):
print(group)
print("")
```
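On a toy events table (the rows below are made up), this grouping yields per-condition onset and duration lists, which is exactly the shape the model specification needs:

```python
import pandas as pd

toy = pd.DataFrame({'onset':      [10, 55, 100, 145],
                    'duration':   [15, 15, 15, 15],
                    'trial_type': ['Finger', 'Foot', 'Finger', 'Lips']})
conditions, onsets, durations = [], [], []
for name, group in toy.groupby('trial_type'):
    conditions.append(name)                  # condition label
    onsets.append(group.onset.tolist())      # all onsets of that condition
    durations.append(group.duration.tolist())
print(conditions)  # ['Finger', 'Foot', 'Lips']
print(onsets[0])   # [10, 100]
```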
Now, let us incorporate all this in the helper function ``subjectinfo``.
```
def subjectinfo(subject_id):
import pandas as pd
from nipype.interfaces.base import Bunch
trialinfo = pd.read_table('/data/ds000114/task-fingerfootlips_events.tsv')
trialinfo.head()
conditions = []
onsets = []
durations = []
for group in trialinfo.groupby('trial_type'):
conditions.append(group[0])
onsets.append(list(group[1].onset - 10)) # subtracting 10s due to removing of 4 dummy scans
durations.append(group[1].duration.tolist())
subject_info = [Bunch(conditions=conditions,
onsets=onsets,
durations=durations,
#amplitudes=None,
#tmod=None,
#pmod=None,
#regressor_names=None,
#regressors=None
)]
return subject_info # this output will later be returned to infosource
# Get Subject Info - get subject specific condition information
getsubjectinfo = Node(Function(input_names=['subject_id'],
output_names=['subject_info'],
function=subjectinfo),
name='getsubjectinfo')
```
## Specify input & output stream
Specify where the input data can be found & where and how to save the output data.
```
# Infosource - a function free node to iterate over the list of subject names
infosource = Node(IdentityInterface(fields=['subject_id',
'fwhm_id',
'contrasts'],
contrasts=contrast_list),
name="infosource")
infosource.iterables = [('subject_id', subject_list),
('fwhm_id', fwhm)]
# SelectFiles - to grab the data (alternative to DataGrabber)
templates = {'func': opj(output_dir, 'preproc', 'sub-{subject_id}', 'task-{task_id}',
'fwhm-{fwhm_id}_ssub-{subject_id}_ses-test_task-{task_id}_bold.nii'),
'mc_param': opj(output_dir, 'preproc', 'sub-{subject_id}', 'task-{task_id}',
'sub-{subject_id}_ses-test_task-{task_id}_bold.par'),
'outliers': opj(output_dir, 'preproc', 'sub-{subject_id}', 'task-{task_id}',
'art.sub-{subject_id}_ses-test_task-{task_id}_bold_outliers.txt')}
selectfiles = Node(SelectFiles(templates,
base_directory=experiment_dir,
sort_filelist=True),
name="selectfiles")
selectfiles.inputs.task_id = 'fingerfootlips'
# Datasink - creates output folder for important outputs
datasink = Node(DataSink(base_directory=experiment_dir,
container=output_dir),
name="datasink")
# Use the following DataSink output substitutions
substitutions = [('_subject_id_', 'sub-')]
subjFolders = [('_fwhm_id_%ssub-%s' % (f, sub), 'sub-%s/fwhm-%s' % (sub, f))
for f in fwhm
for sub in subject_list]
substitutions.extend(subjFolders)
datasink.inputs.substitutions = substitutions
```
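DataSink applies the substitutions in order, as plain string replacements on output paths; a sketch of how the two kinds of rules above rewrite one hypothetical working-directory folder name:

```python
# One generic rule plus one subjFolders pair (fwhm=4, sub-02), as defined above
substitutions = [('_subject_id_', 'sub-'),
                 ('_fwhm_id_4sub-02', 'sub-02/fwhm-4')]

path = '/output/datasink/1stLevel/_fwhm_id_4_subject_id_02/spmT_0001.nii'
for old, new in substitutions:
    path = path.replace(old, new)
print(path)  # /output/datasink/1stLevel/sub-02/fwhm-4/spmT_0001.nii
```

Note that the first rule must run before the second: only after `_subject_id_` becomes `sub-` does the folder name match the subjFolders pattern.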
## Specify Workflow
Create a workflow and connect the interface nodes and the I/O stream to each other.
```
# Initiation of the 1st-level analysis workflow
l1analysis = Workflow(name='l1analysis')
l1analysis.base_dir = opj(experiment_dir, working_dir)
# Connect up the 1st-level analysis components
l1analysis.connect([(infosource, selectfiles, [('subject_id', 'subject_id'),
('fwhm_id', 'fwhm_id')]),
(infosource, getsubjectinfo, [('subject_id',
'subject_id')]),
(getsubjectinfo, modelspec, [('subject_info',
'subject_info')]),
(infosource, level1conest, [('contrasts', 'contrasts')]),
(selectfiles, modelspec, [('func', 'functional_runs')]),
(selectfiles, modelspec, [('mc_param', 'realignment_parameters'),
('outliers', 'outlier_files')]),
(modelspec, level1design, [('session_info',
'session_info')]),
(level1design, level1estimate, [('spm_mat_file',
'spm_mat_file')]),
(level1estimate, level1conest, [('spm_mat_file',
'spm_mat_file'),
('beta_images',
'beta_images'),
('residual_image',
'residual_image')]),
(level1conest, datasink, [('spm_mat_file', '1stLevel.@spm_mat'),
('spmT_images', '1stLevel.@T'),
('con_images', '1stLevel.@con'),
('spmF_images', '1stLevel.@F'),
('ess_images', '1stLevel.@ess'),
]),
])
```
## Visualize the workflow
It always helps to visualize your workflow.
```
# Create 1st-level analysis output graph
l1analysis.write_graph(graph2use='colored', format='png', simple_form=True)
# Visualize the graph
from IPython.display import Image
Image(filename=opj(l1analysis.base_dir, 'l1analysis', 'graph.png'))
```
## Run the Workflow
Now that everything is ready, we can run the 1st-level analysis workflow. Change ``n_procs`` to the number of jobs/cores you want to use.
```
l1analysis.run('MultiProc', plugin_args={'n_procs': 4})
```
## Inspect output
Let's check the structure of the output folder, to see if we have everything we wanted to save. You should have nine contrast images (``con_*.nii`` for T-contrasts and ``ess_*.nii`` for F-contrasts) and nine statistic images (``spmT_*.nii`` and ``spmF_*.nii``) for every subject and smoothing kernel.
```
!tree /output/datasink/1stLevel
```
## Visualize results
Let's look at the contrasts of one subject that we've just computed. First, let's see what difference the smoothing kernel makes for the contrast **`average`**
```
from nilearn.plotting import plot_stat_map
anatimg = '/data/ds000114/derivatives/fmriprep/sub-02/anat/sub-02_t1w_preproc.nii.gz'
plot_stat_map(
'/output/datasink/1stLevel/sub-02/fwhm-4/spmT_0001.nii', title='average - fwhm=4',
bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-02/fwhm-8/spmT_0001.nii', title='average - fwhm=8',
bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1);
```
Now, let's look at the three contrasts **`Finger`**, **`Foot`**, **`Lips`**.
```
plot_stat_map(
'/output/datasink/1stLevel/sub-02/fwhm-4/spmT_0002.nii', title='finger - fwhm=4',
bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-02/fwhm-4/spmT_0003.nii', title='foot - fwhm=4',
bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-02/fwhm-4/spmT_0004.nii', title='lips - fwhm=4',
bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1);
```
We can also check three additional contrasts **Finger > others**, **Foot > others** and **Lips > others**.
```
plot_stat_map(
'/output/datasink/1stLevel/sub-02/fwhm-4/spmT_0005.nii', title='finger - fwhm=4',
bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-02/fwhm-4/spmT_0006.nii', title='foot - fwhm=4',
bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-02/fwhm-4/spmT_0007.nii', title='lips - fwhm=4',
bg_img=anatimg, threshold=3, display_mode='y', cut_coords=(-5, 0, 5, 10, 15), dim=-1);
```
## Special case
There is something special with the **Finger** contrast in all subjects. So let's take a look at all of them.
```
plot_stat_map(
'/output/datasink/1stLevel/sub-01/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-01',
bg_img='/data/ds000114/derivatives/fmriprep/sub-01/anat/sub-01_t1w_preproc.nii.gz',
threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-02/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-02',
bg_img='/data/ds000114/derivatives/fmriprep/sub-02/anat/sub-02_t1w_preproc.nii.gz',
threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-03/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-03',
bg_img='/data/ds000114/derivatives/fmriprep/sub-03/anat/sub-03_t1w_preproc.nii.gz',
threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-04/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-04',
bg_img='/data/ds000114/derivatives/fmriprep/sub-04/anat/sub-04_t1w_preproc.nii.gz',
threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-05/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-05',
bg_img='/data/ds000114/derivatives/fmriprep/sub-05/anat/sub-05_t1w_preproc.nii.gz',
threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-06/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-06',
bg_img='/data/ds000114/derivatives/fmriprep/sub-06/anat/sub-06_t1w_preproc.nii.gz',
threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-07/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-07',
bg_img='/data/ds000114/derivatives/fmriprep/sub-07/anat/sub-07_t1w_preproc.nii.gz',
threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-08/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-08',
bg_img='/data/ds000114/derivatives/fmriprep/sub-08/anat/sub-08_t1w_preproc.nii.gz',
threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-09/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-09',
bg_img='/data/ds000114/derivatives/fmriprep/sub-09/anat/sub-09_t1w_preproc.nii.gz',
threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1);
plot_stat_map(
'/output/datasink/1stLevel/sub-10/fwhm-4/spmT_0002.nii', title='finger - fwhm=4 - sub-10',
bg_img='/data/ds000114/derivatives/fmriprep/sub-10/anat/sub-10_t1w_preproc.nii.gz',
threshold=3, display_mode='y', cut_coords=(5, 10, 15, 20), dim=-1);
```
What you might see is that the hemisphere of the main cluster differs significantly between subjects. This is because all subjects were asked to use their dominant hand, either right or left. Three subjects (``sub-01``, ``sub-06`` and ``sub-10``) were left-handed. This can be seen in the pictures above, where the main cluster lies in the left hemisphere for right-handed subjects and in the right hemisphere for left-handed subjects.
**Because of this, we will use only right-handed subjects for the following analyses.**
# Toxic Comment Classification Challenge
## Identify and classify toxic online comments
The task consists of building a model that classifies the texts in the `comment_text` column into 6 categories:
1. toxic
2. severe_toxic
3. obscene
4. threat
5. insult
6. identity_hate
To do this, the process is divided into the following stages:
1. [Preprocessing](#pre)
2. [Statistical analysis](#eda)
3. [Modeling and Evaluation](#mod)
### Paqueterías
Aqui se incluyen las paqueterias necesarias.
```
import pandas as pd
import spacy
from gensim.models import Phrases
from funcs import entidades, quitarpuntuacion, bigramas
from tqdm import tqdm
import pickle
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import SGDClassifier
```
<a id='pre'></a>
## Preprocessing
Preprocessing consists of cleaning the texts so they can be used later. First we load the data.
```
train = pd.read_csv('data/train.csv',index_col=0)
test = pd.read_csv('data/test.csv')
subm = pd.read_csv('data/sample_submission.csv')
label_cols = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']
train['limpio'] = 1-train[label_cols].max(axis=1)
train['comment_text'].fillna("unknown", inplace=True)
test['comment_text'].fillna("unknown", inplace=True)
```
Let's inspect what the train file contains.
```
train.head()
```
Let's look at the text in the second row.
```
train.iloc[1].comment_text
```
In the pre-processing step we will:
- Remove line breaks (\n)
- Remove punctuation and symbols (.:;,!"#$%&/()=?)
- Split the text into words and build n-grams, so that, for example, "W.", "S." and "Merwin" become a single token instead of three separate words (although these tokens could arguably be dropped, since they are not important for classification).
- Tokenize, i.e. split the text into words.
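The first two cleaning steps can be sketched with a small regex-based cleaner (an illustrative helper, not the `quitarpuntuacion` function from funcs.py):

```python
import re

def clean_text(text):
    """Remove line breaks and punctuation/symbols, keeping only words."""
    text = text.replace('\n', ' ')                  # drop line breaks
    text = re.sub(r'[.:;,!"#$%&/()=?]', ' ', text)  # drop punctuation/symbols
    return re.sub(r'\s+', ' ', text).strip()        # collapse whitespace

print(clean_text('Hello!\nThis, is (a) test.'))  # → Hello This is a test
```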
```
nlp = spacy.load('en')
train = quitarpuntuacion(train)
docs = train.comment_text
```
Tokenization and bigram creation are handled by functions defined in funcs.py, located in the same folder.
```
%%time
docs = entidades(docs,nlp)
docs = bigramas(docs)
```
Since this last step takes a considerable amount of time, we save the result.
```
pickle.dump( docs, open( "save.p", "wb" ) )
docs = pickle.load( open( "save.p", "rb" ) )
```
<a id='eda'></a>
## Statistical analysis
```
# TODO
```
<a id='mod'></a>
## Modeling and evaluation
Since the target categories are not mutually exclusive — a document can belong to two or more categories — we fit one model per category. For a first pass we will use the following models:
- Multinomial Naive Bayes
- Logistic regression
- Stochastic Gradient Descent
Before configuring the models, the data must be vectorized, i.e. arranged into a numeric matrix, which we do with Tf-idf.
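A tiny Tf-idf example to make the "numeric matrix" concrete (toy sentences, not the competition texts):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ['the cat sat', 'the dog sat', 'the dog barked']
vec = TfidfVectorizer()
X = vec.fit_transform(corpus)  # sparse matrix: one row per document

print(X.shape)                 # (3 documents, 5 vocabulary terms)
print(sorted(vec.vocabulary_))
```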
```
train['texto_limpio'] = docs
#train['texto_limpio'] = train['texto_limpio'].apply(lambda x: ' '.join(x))
del train['comment_text']
del train['limpio']
```
We split the data into training and test sets, then vectorize them.
```
#%%Dividir
X_train, X_test, y_train, y_test = train_test_split(train.iloc[:,6], train.iloc[:,0:6],test_size=0.2)
#%%
vectorizer = TfidfVectorizer(stop_words = 'english',\
vocabulary=None,\
analyzer='word',\
lowercase=True,\
ngram_range=(1, 1),\
max_df=1.0,\
min_df=1)
tfidf_train = vectorizer.fit_transform(X_train)
tfidf_test = vectorizer.transform(X_test)
```
### Logistic regression
```
reglog = LogisticRegression()
presicion_reglog = dict()
for label in label_cols:
y = y_train[label]
reglog.fit(tfidf_train, y)
y_hat = reglog.predict(tfidf_test)
presicion_reglog[label] = accuracy_score(y_test[label], y_hat)
```
### Multinomial Naive Bayes
```
naive_bayes = MultinomialNB()
presicion_naive_bayes = dict()
for label in label_cols:
y = y_train[label]
naive_bayes.fit(tfidf_train, y)
y_hat = naive_bayes.predict(tfidf_test)
presicion_naive_bayes[label] = accuracy_score(y_test[label], y_hat)
```
### Stochastic Gradient Descent
> Note: regardless of this model's results, it does not work for the Kaggle competition as-is, since the submission must contain probabilities, and a default SGD classifier only returns whether or not an item belongs to the category.
```
SGDC = SGDClassifier()
presicion_SGDC = dict()
for label in label_cols:
y = y_train[label]
SGDC.fit(tfidf_train, y)
y_hat = SGDC.predict(tfidf_test)
presicion_SGDC[label] = accuracy_score(y_test[label], y_hat)
```
### Evaluación
```
print(sum(list(presicion_reglog.values()))/6)
print(sum(list(presicion_naive_bayes.values()))/6)
print(sum(list(presicion_SGDC.values()))/6)
```
Based on these scores, we choose logistic regression.
#### Submission
Finally, we create the file to submit to the competition:
```
tfidf_train_c = vectorizer.fit_transform(train['texto_limpio'])
tfidf_sub = vectorizer.transform(test['comment_text'])
for label in label_cols:
y = train[label]
reglog.fit(tfidf_train_c, y)
test[label] = reglog.predict_proba(tfidf_sub)[:,1]
#%%
subm = test
del subm["comment_text"]
#%%
subm.to_csv('subs/submission5.csv',index=False)
```
## What's next?
Ideas to improve the model:
- Improve the preprocessing: extend bigrams to general n-grams, filter frequent words even when they are not stop words.
- Improve the vectorization: reduce the matrix size by pruning the vocabulary.
- Try more models: neural networks, other classifiers (e.g. NB-SVM).
- Assess the quality of the logistic model: check the ROC curve, recall, precision, etc.
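As a sketch of that last point, a per-label ROC AUC can be computed from predicted probabilities instead of accuracy (illustrated on toy labels and scores, not on the competition data):

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]  # predicted probabilities for one label

# AUC = fraction of (positive, negative) pairs ranked correctly
auc = roc_auc_score(y_true, y_score)
print(auc)  # → 0.75
```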
| github_jupyter |
# Preprocess Docs
```
# load dependency libraries
import os
import re
import pickle
from bs4 import BeautifulSoup
from bs4.element import Comment
from nltk.stem import PorterStemmer
from nltk.corpus import stopwords
# extracting english stop words
stop_words = stopwords.words('english')
# Initializing Porter Stemmer object
st = PorterStemmer()
# Initializing regex to remove words with one or two characters length
# shortword = re.compile(r'\W*\b\w{1,2}\b')
# folder to store pickel files
pickle_folder = "../PickleFiles/"
os.makedirs(pickle_folder, exist_ok=True)
pages_folder = "../FetchedPages/"
filenames = os.listdir(pages_folder)
# list to store filenames of all stored crawled webpages
files = []
for name in filenames:
files.append(name)
# len(files)
# for file in files[:1]:
# web_page = open(pages_folder + file, "r", encoding="utf-8")
# code = web_page.read()
# # print(code)
# soup = BeautifulSoup(code, "html.parser")
# [s.extract() for s in soup(['style', 'script', '[document]', 'head'])]
# visible_text = soup.getText()
# print(visible_text)
# function to filter tags that are visible on webpage i.e. excluding style, script, meta, etc. tags
def tag_visible(element):
if element.parent.name in ['style', 'script', 'head', 'meta', '[document]']:
return False
elif isinstance(element, Comment): # check if element is html comment
return False
elif re.match(r"[\s\r\n]+",str(element)): # to eliminate remaining extra white spaces and new lines
return False
else:
return True
# function to extract only the visible text from the html code of each webpage
def get_text_from_code(page):
soup = BeautifulSoup(page, "lxml")
text_in_page = soup.find_all(text=True) # return all text in page
visible_text = filter(tag_visible, text_in_page) # return only visible text
return " ".join(term.strip() for term in visible_text)
# dict to create inverted index
inverted_index = {}
# dict to store tokens in each web page
webpage_tokens = {}
for file in files:
web_page = open(pages_folder + file, "r", encoding="utf-8")
code = web_page.read()
# print(code)
text = get_text_from_code(code) # get all text actually visible on web page
# print(text,"\n")
text = text.lower()
text = re.sub('[^a-z]+', ' ', text) # remove all punctuations and digits
tokens = text.split()
# print(tokens, "\n")
# # removing stop words from the tokens
# clean_tokens = [word for word in tokens if word not in stop_words]
# # stemming the tokens
# stem_tokens = [st.stem(word) for word in clean_tokens]
# # checking for stopwords again
# clean_stem_tokens = [word for word in stem_tokens if word not in stop_words]
# # converting list of tokens to string
# clean_stem_tokens = ' '.join(map(str, clean_stem_tokens))
# # removing tokens with one or two characters length
# clean_stem_tokens = shortword.sub('', clean_stem_tokens)
# print(clean_stem_tokens, "\n")
# removing stop words and stemming each token while only accepting stemmed tokens with length greater than 2
clean_stem_tokens = [
st.stem(token) for token in tokens
if (token not in stop_words and st.stem(token) not in stop_words) and len(st.stem(token))>2
]
# print(clean_stem_tokens, "\n")
webpage_tokens[file] = clean_stem_tokens # add tokens in web page to dict
for token in clean_stem_tokens:
freq = inverted_index.setdefault(token,{}).get(file,0) # get frequency of token and set to 0 if token not in dict
inverted_index.setdefault(token,{})[file] = freq + 1 # add 1 to frequency of token in current webpage
# inverted_index.setdefault(token,{})[file] = inverted_index.setdefault(token,{})
# x = inverted_index.setdefault(token,{})[file]
# print(x)
# pickling inverted index and tokens
with open(pickle_folder + '6000_inverted_index.pickle', 'wb') as f:
pickle.dump(inverted_index,f)
with open(pickle_folder + '6000_webpages_tokens.pickle', 'wb') as f:
pickle.dump(webpage_tokens,f)
```
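The resulting structure maps each token to a dictionary of `{filename: frequency}`. A toy example of how it is built and queried (made-up documents, following the same `setdefault` pattern as above):

```python
docs = {
    'a.html': ['apple', 'pie', 'apple'],
    'b.html': ['pie', 'crust'],
}

inverted_index = {}
for fname, tokens in docs.items():
    for token in tokens:
        # frequency of this token in this file so far, defaulting to 0
        freq = inverted_index.setdefault(token, {}).get(fname, 0)
        inverted_index[token][fname] = freq + 1

print(inverted_index['apple'])  # 'apple' occurs twice in a.html
print(inverted_index['pie'])    # 'pie' occurs once in each file
```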
| github_jupyter |
# Tree Based Algorithm Related Topics
## Random Forest
The way random forest trained is as below: [Quora](https://www.quora.com/What-is-the-out-of-bag-error-in-random-forests-What-does-it-mean-Whats-a-typical-value-if-any-Why-would-it-be-higher-or-lower-than-a-typical-value)
1\. Suppose our training data set is represented by T and suppose data set has M features (or attributes or variables). T = {(X1,y1), (X2,y2), ... (Xn, yn)} and Xi is input vector {xi1, xi2, ... xiM} and yi is the label (or output or class).
2\. **(bootstrap / bagging)** Suppose we decide to have S number of trees in our forest then we first create S datasets of "same size as original" created from random resampling of data in T with-replacement (n times for each dataset). This will result in {T1, T2, ... TS} datasets. Each of these is called a bootstrap dataset. Due to "with-replacement" every dataset Ti can have duplicate data records and Ti can be missing several data records from original datasets. This is called Bagging.
3\. **(train each tree with subspace of features)** Now, RF creates S trees and uses m (=sqrt(M) or =floor(lnM+1)) random subfeatures out of M possible features to create any tree. This is called random subspace method.
*PS: The Extra-Trees algorithm goes further in randomization: it randomizes not only the feature subset but also the split thresholds (whereas RF searches for the best split within the subset).*
4\. **(voting for final result)** So for each bootstrap dataset Ti you create a tree Ki. To classify an input D = {x1, x2, ..., xM}, you pass it through each tree, producing S outputs (one per tree), denoted Y = {y1, y2, ..., yS}. The final prediction is the majority vote over this set.
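Step 4 — the majority vote over per-tree predictions — can be sketched in a few lines (toy per-tree outputs, not a trained forest):

```python
from collections import Counter

tree_outputs = ['cat', 'dog', 'cat', 'cat', 'dog']  # one label per tree

# most_common(1) returns [(label, count)] for the majority class
final_prediction = Counter(tree_outputs).most_common(1)[0][0]
print(final_prediction)  # → cat
```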
### Bootstrap Sampling (Bagging)
Bootstrap sampling is the fundamental idea behind random forests.
**How are bootstrap datasets created?**
- Samples are drawn randomly from the original dataset with replacement, to create a new dataset of the same size as the original. That is, a sample is drawn from the original dataset and put back before the next draw; each draw is from the full original dataset, without removing any sample.
- Approximately 1/3 (~1/e) of the samples in the original dataset are left out of the corresponding weak learner's training set; these are called out-of-bag samples.
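The ~1/3 figure comes from (1 - 1/n)^n → 1/e ≈ 0.368: each sample misses a single draw with probability 1 - 1/n, and there are n independent draws. A quick simulation (sketch):

```python
import random

random.seed(0)
n = 10_000
# one bootstrap sample: n draws with replacement from range(n)
bootstrap = {random.randrange(n) for _ in range(n)}

oob_fraction = 1 - len(bootstrap) / n  # fraction never drawn
print(oob_fraction)  # close to 1/e ≈ 0.368
```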
<img src="https://www.arb.ca.gov/research/weekendeffect/CARB041300/img012.gif" alt="title" >
**How can the out-of-bag dataset be used to evaluate the generalization ability of the trained model?** ([reference slides](http://stat.ethz.ch/education/semesters/ss2012/ams/slides/v10.2.pdf))
- For each single estimator there is one bootstrap dataset and a corresponding out-of-bag dataset. This OOB dataset acts like the validation set of its estimator and can be used to evaluate generalization (shown in the first slide below).
- The generalization ability of the whole random forest can be estimated by averaging the OOB errors of all the weak learners (shown in the second slide below).
- Therefore there is no need to use cross validation on a random forest for hyperparameter search, as stated in the [scikit-learn documentation](http://scikit-learn.org/stable/modules/grid_search.html#out-of-bag-estimates). A test set is still needed to verify the final model.
- (in scikit-learn) Use [ParameterGrid](https://stackoverflow.com/questions/34624978/is-there-easy-way-to-grid-search-without-cross-validation-in-python) instead of GridSearchCV and RandomizedSearchCV to avoid CV
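In scikit-learn this corresponds to setting `oob_score=True` on the forest, after which `oob_score_` holds the OOB accuracy estimate (a sketch on a toy dataset):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# oob_score=True makes each tree score the samples it never saw
rf = RandomForestClassifier(n_estimators=100, oob_score=True, random_state=0)
rf.fit(X, y)
print(rf.oob_score_)  # OOB accuracy, no separate validation set needed
```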
<img src="../img/OOB_error.png" alt="title" >
<img src="../img/OOB_error2.png" alt="title" >
## AdaBoost and Gradient Tree Boosting
### AdaBoost Algorithm
The AdaBoost algorithm is a general ensemble idea that continuously re-weights the training samples at each round of weak-learner training. It can be built on top of any base algorithm, such as linear regression, decision trees, or even random forests.
<img src="https://www.packtpub.com/graphics/9781788295758/graphics/image_04_046-1.png" alt="title" >
### Gradient Tree Boosting
Gradient Tree Boosting is a residual-fitting algorithm: each new learner is trained to predict the residual error left by the previously stacked learners.
<img src="../img/Gradient_Boosted_Tree.png" alt="title" >
| github_jupyter |
```
import CNN2Head_input
import os
import tensorflow as tf
import numpy as np
import BKNetStyle
from const import *
''' PREPARE DATA '''
smile_train, smile_test = CNN2Head_input.getSmileImage()
gender_train, gender_test = CNN2Head_input.getGenderImage()
age_train, age_test = CNN2Head_input.getAgeImage()
def one_hot(index, num_classes):
assert index < num_classes and index >= 0
tmp = np.zeros(num_classes, dtype=np.float32)
tmp[index] = 1.0
return tmp
sess = tf.InteractiveSession()
global_step = tf.contrib.framework.get_or_create_global_step()
x, y_, mask = BKNetStyle.Input()
y_smile_conv, y_gender_conv, y_age_conv, phase_train, keep_prob = BKNetStyle.BKNetModel(x)
smile_loss, gender_loss, age_loss, l2_loss, loss = BKNetStyle.selective_loss(y_smile_conv, y_gender_conv,
y_age_conv, y_, mask)
train_step = BKNetStyle.train_op(loss, global_step)
smile_mask = tf.get_collection('smile_mask')[0]
gender_mask = tf.get_collection('gender_mask')[0]
age_mask = tf.get_collection('age_mask')[0]
y_smile = tf.get_collection('y_smile')[0]
y_gender = tf.get_collection('y_gender')[0]
y_age = tf.get_collection('y_age')[0]
smile_correct_prediction = tf.equal(tf.argmax(y_smile_conv, 1), tf.argmax(y_smile, 1))
gender_correct_prediction = tf.equal(tf.argmax(y_gender_conv, 1), tf.argmax(y_gender, 1))
age_correct_prediction = tf.equal(tf.argmax(y_age_conv, 1), tf.argmax(y_age, 1))
smile_true_pred = tf.reduce_sum(tf.cast(smile_correct_prediction, dtype=tf.float32) * smile_mask)
gender_true_pred = tf.reduce_sum(tf.cast(gender_correct_prediction, dtype=tf.float32) * gender_mask)
age_true_pred = tf.reduce_sum(tf.cast(age_correct_prediction, dtype=tf.float32) * age_mask)
train_data = []
# Mask: Smile -> 0, Gender -> 1, Age -> 2
# oversample the smile data 10x; index modulo the dataset size
for i in range(len(smile_train) * 10):
img = (smile_train[i % len(smile_train)][0] - 128) / 255.0
label = smile_train[i % len(smile_train)][1]
train_data.append((img, one_hot(label, 4), 0.0))
for i in range(len(gender_train)):
img = (gender_train[i][0] - 128) / 255.0
label = (int)(gender_train[i][1])
train_data.append((img, one_hot(label, 4), 1.0))
for i in range(len(age_train)):
img = (age_train[i][0] - 128) / 255.0
label = (int)(age_train[i][1])
train_data.append((img, one_hot(label, 4), 2.0))
saver = tf.train.Saver()
if not os.path.isfile(SAVE_FOLDER + 'model.ckpt.index'):
print('Create new model')
sess.run(tf.global_variables_initializer())
print('OK')
else:
print('Restoring existed model')
saver.restore(sess, SAVE_FOLDER + 'model.ckpt')
print('OK')
loss_summary_placeholder = tf.placeholder(tf.float32)
tf.summary.scalar('loss', loss_summary_placeholder)
merge_summary = tf.summary.merge_all()
writer = tf.summary.FileWriter("./summary/")
learning_rate = tf.get_collection('learning_rate')[0]
current_epoch = (int)(global_step.eval() / (len(train_data) // BATCH_SIZE))
for epoch in range(current_epoch + 1, NUM_EPOCHS):
print('Epoch:', str(epoch))
np.random.shuffle(train_data)
train_img = []
train_label = []
train_mask = []
for i in range(len(train_data)):
train_img.append(train_data[i][0])
train_label.append(train_data[i][1])
train_mask.append(train_data[i][2])
number_batch = len(train_data) // BATCH_SIZE
avg_ttl = []
avg_rgl = []
avg_smile_loss = []
avg_gender_loss = []
avg_age_loss = []
smile_nb_true_pred = 0
gender_nb_true_pred = 0
age_nb_true_pred = 0
smile_nb_train = 0
gender_nb_train = 0
age_nb_train = 0
print("Learning rate: %f" % learning_rate.eval())
for batch in range(number_batch):
# print('Training on batch {0}/{1}'.format(str(batch + 1), str(number_batch)))
top = batch * BATCH_SIZE
bot = min((batch + 1) * BATCH_SIZE, len(train_data))
batch_img = np.asarray(train_img[top:bot])
batch_label = np.asarray(train_label[top:bot])
batch_mask = np.asarray(train_mask[top:bot])
for i in range(BATCH_SIZE):
if batch_mask[i] == 0.0:
smile_nb_train += 1
else:
if batch_mask[i] == 1.0:
gender_nb_train += 1
else:
age_nb_train += 1
batch_img = CNN2Head_input.augmentation(batch_img, 48)
ttl, sml, gel, agl, l2l, _ = sess.run([loss, smile_loss, gender_loss, age_loss, l2_loss, train_step],
feed_dict={x: batch_img, y_: batch_label, mask: batch_mask,
phase_train: True,
keep_prob: 0.5})
smile_nb_true_pred += sess.run(smile_true_pred, feed_dict={x: batch_img, y_: batch_label, mask: batch_mask,
phase_train: True,
keep_prob: 0.5})
gender_nb_true_pred += sess.run(gender_true_pred,
feed_dict={x: batch_img, y_: batch_label, mask: batch_mask,
phase_train: True,
keep_prob: 0.5})
age_nb_true_pred += sess.run(age_true_pred,
feed_dict={x: batch_img, y_: batch_label, mask: batch_mask,
phase_train: True,
keep_prob: 0.5})
'''--------------------------------------- DEBUG -----------------------------------------------------'''
'''
sm_mask, ge_mask, ag_mask = sess.run([smile_mask, gender_mask, age_mask],
feed_dict={x: batch_img, y_: batch_label, mask: batch_mask,
phase_train: True,
keep_prob: 0.5})
print('Smile mask: ', sm_mask)
print('Gender mask', ge_mask)
print('Age mask', ag_mask)
print('Batch mask', batch_mask)
y_true_sm, y_true_ge, y_true_ag = sess.run([y_smile, y_gender, y_age],
feed_dict={x: batch_img, y_: batch_label, mask: batch_mask,
phase_train: True,
keep_prob: 0.5})
print('Smile label', y_true_sm)
print('Gender label', y_true_ge)
print('Age label', y_true_ag)
print('Batch label', batch_label)
y_conv_sm, y_conv_ge, y_conv_ag = sess.run([y_smile_conv, y_gender_conv, y_age_conv],
feed_dict={x: batch_img, y_: batch_label, mask: batch_mask,
phase_train: True,
keep_prob: 0.5})
print('Smile conv', y_conv_sm)
print('Gender conv', y_conv_ge)
print('Age conv', y_conv_ag)
'''
'''---------------------------------- END OF DEBUG ----------------------------------------------------'''
avg_ttl.append(ttl)
avg_smile_loss.append(sml)
avg_gender_loss.append(gel)
avg_age_loss.append(agl)
avg_rgl.append(l2l)
smile_train_accuracy = smile_nb_true_pred * 1.0 / smile_nb_train
gender_train_accuracy = gender_nb_true_pred * 1.0 / gender_nb_train
age_train_accuracy = age_nb_true_pred * 1.0 / age_nb_train
avg_smile_loss = np.average(avg_smile_loss)
avg_gender_loss = np.average(avg_gender_loss)
avg_age_loss = np.average(avg_age_loss)
avg_rgl = np.average(avg_rgl)
avg_ttl = np.average(avg_ttl)
# print('Avg_ttl: ' + str(avg_ttl))
# print('loss_summary_placeholder: ' + str(loss_summary_placeholder))
# print('merge_summary: ' + str(merge_summary))
summary = sess.run(merge_summary, feed_dict={loss_summary_placeholder: avg_ttl})
writer.add_summary(summary, global_step=epoch)
with open('log.csv', 'a+') as f:
# append one row per epoch: epoch, smile_train_accuracy, gender_train_accuracy,
# age_train_accuracy, avg_smile_loss, avg_gender_loss, avg_age_loss, avg_ttl, avg_rgl
f.write('{0},{1},{2},{3},{4},{5},{6},{7},{8}\n'.format(epoch, smile_train_accuracy, gender_train_accuracy, age_train_accuracy, avg_smile_loss, avg_gender_loss, avg_age_loss, avg_ttl, avg_rgl))
print('Smile task train accuracy: ' + str(smile_train_accuracy * 100))
print('Gender task train accuracy: ' + str(gender_train_accuracy * 100))
print('Age task train accuracy: ' + str(age_train_accuracy * 100))
print('Total loss: ' + str(avg_ttl) + '. L2-loss: ' + str(avg_rgl))
print('Smile loss: ' + str(avg_smile_loss))
print('Gender loss: ' + str(avg_gender_loss))
print('Age loss: ' + str(avg_age_loss))
print('\n')
saver.save(sess, SAVE_FOLDER + 'model.ckpt')
```
| github_jupyter |
```
from pandas.io.json import json_normalize
import pandas as pd
import json
import time
# load the data
fullList = []
indexi=0
ranLength = 399
for i in range(0,ranLength+1,1):
fullList.append([])
for j in range(10):
data_str = open(f'C:/Users/a1090/Documents/GitHub/manual-screeps/test/data/pathLength/{i}f{j}.json',encoding="utf-8").read()
df = json.loads(data_str)
fullList[indexi].append(df)
indexi+=1
print(json.dumps(fullList[0][0], sort_keys=True, indent=4, separators=(',', ': ')))
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
fig=plt.figure()
ax=plt.axes()
x=np.linspace(0,ranLength,len(fullList))
condList = [[],[],[]]
for m in range(ranLength+1):
condList[0].append(fullList[m][0]['stats']['energyProfit'])
plt.xlabel("pathLength")
plt.ylabel("energy profit in 1500 ticks")
ax.plot(x,np.array(condList[0]),'-',label='energyProfit')
ax.legend()
x=np.linspace(0,ranLength,len(fullList))
plt.style.use('seaborn-whitegrid')
fig=plt.figure()
ax=plt.axes()
plt.xlabel("pathLength")
plt.ylabel("sum of spawn time in 1500 ticks")
condList[0]=[]
for m in range(ranLength+1):
condList[0].append(fullList[m][0]['stats']['timeOnSpawn'])
ax.plot(x,np.array(condList[0]),'-',label='timeOnSpawn')
ax.legend()
x=np.linspace(0,ranLength,len(fullList))
plt.style.use('seaborn-whitegrid')
fig=plt.figure()
ax=plt.axes()
plt.xlabel("pathLength")
plt.ylabel("sum of stranded energy time in 1500 ticks")
condList[0]=[]
for m in range(ranLength+1):
condList[0].append(fullList[m][0]['stats']['energyStranded'])
ax.plot(x,np.array(condList[0]),'-',label='energyStranded')
ax.legend()
x=np.linspace(0,ranLength,len(fullList))
plt.style.use('seaborn-whitegrid')
fig=plt.figure()
ax=plt.axes()
plt.xlabel("pathLength")
plt.ylabel("efficiency (%)")
condList[0]=[]
for m in range(ranLength+1):
condList[0].append(fullList[m][0]['stats']['efficiency'])
ax.plot(x,np.array(condList[0]),'-',label='efficiency (%)')
ax.legend()
x=np.linspace(0,ranLength,len(fullList))
plt.style.use('seaborn-whitegrid')
fig=plt.figure()
ax=plt.axes()
plt.xlabel("pathLength")
plt.ylabel("energyOnSpawn/energyProfit (%)")
condList[0]=[]
for m in range(ranLength+1):
condList[0].append(fullList[m][0]['stats']['energyOnSpawn']/fullList[m][0]['stats']['energyProfit']*100)
ax.plot(x,np.array(condList[0]),'-',label='energyOnSpawn/energyProfit (%)')
ax.legend()
x=np.linspace(0,ranLength,len(fullList))
plt.style.use('seaborn-whitegrid')
fig=plt.figure()
ax=plt.axes()
plt.xlabel("pathLength")
plt.ylabel("energyOnRepair/energyProfit (%)")
condList[0]=[]
for m in range(ranLength+1):
condList[0].append(fullList[m][0]['stats']['energyOnRepair']/fullList[m][0]['stats']['energyProfit']*100)
ax.plot(x,np.array(condList[0]),'-',label='energyOnRepair/energyProfit (%)')
ax.legend()
x=np.linspace(0,ranLength,len(fullList))
plt.style.use('seaborn-whitegrid')
fig=plt.figure()
ax=plt.axes()
plt.xlabel("pathLength")
plt.ylabel("carrier carry num")
condList[0]=[]
condList[1]=[]
for m in range(ranLength+1):
condList[0].append(fullList[m][0]['entityList']['carrier']['body']['carry'])
condList[1].append(fullList[m][0]['entityList']['carrier']['body']['move'])
ax.plot(x,np.array(condList[0]),'-',label='most suitable carrier carry num')
ax.plot(x,np.array(condList[1]),'-',label='most suitable carrier move num')
ax.plot(x,x*0.4,'-',label='most suitable carrier carry theoretical value')
ax.plot(x,x*0.2,'-',label='most suitable carrier move theoretical value')
ax.legend()
x=np.linspace(0,ranLength,len(fullList))
plt.style.use('seaborn-whitegrid')
fig=plt.figure()
ax=plt.axes()
plt.xlabel("pathLength")
plt.ylabel("carrier carry num")
condList[0]=[]
condList[1]=[]
for m in range(ranLength+1):
condList[0].append(fullList[m][0]['carrierData']['inRoundGeneration']['transportCapability'])
ax.plot(x,np.array(condList[0]),'-',label='most suitable carrier carry num')
ax.legend()
x=np.linspace(0,ranLength,len(fullList))
plt.style.use('seaborn-whitegrid')
fig=plt.figure()
ax=plt.axes()
plt.xlabel("pathLength")
plt.ylabel("carrier carry num")
condList[0]=[]
condList[1]=[]
condList[2]=[]
for m in range(ranLength+1):
condList[0].append(fullList[m][0]['entityList']['harvester']['body']['carry'])
condList[1].append(fullList[m][0]['entityList']['harvester']['body']['move'])
condList[2].append(fullList[m][0]['entityList']['harvester']['body']['work'])
ax.plot(x,np.array(condList[0]),'-',label='most suitable harvester carry num')
ax.plot(x,np.array(condList[1]),'-',label='most suitable harvester move num')
ax.plot(x,np.array(condList[2]),'-',label='most suitable harvester work num')
ax.legend()
```
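The figure boilerplate repeated above could be factored into one helper. A sketch (the helper name and the headless Agg backend are my additions; the values would come from whatever statistic is extracted from `fullList`):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend, no display needed
import matplotlib.pyplot as plt

def plot_stat(values, ylabel, label):
    """Plot one per-pathLength statistic; values[i] is the value at pathLength i."""
    x = np.linspace(0, len(values) - 1, len(values))
    fig, ax = plt.subplots()
    ax.set_xlabel('pathLength')
    ax.set_ylabel(ylabel)
    ax.plot(x, np.asarray(values), '-', label=label)
    ax.legend()
    return fig

fig = plot_stat([1, 4, 9, 16], 'energy profit in 1500 ticks', 'energyProfit')
```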
| github_jupyter |
# High-level RNN MXNet Example
```
import os
import sys
import numpy as np
import mxnet as mx
from mxnet.io import DataDesc
from common.params_lstm import *
from common.utils import *
# Force one-gpu
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
print("OS: ", sys.platform)
print("Python: ", sys.version)
print("Numpy: ", np.__version__)
print("MXNet: ", mx.__version__)
print("GPU: ", get_gpu_name())
print(get_cuda_version())
print("CuDNN Version ", get_cudnn_version())
def create_symbol(CUDNN=True,
maxf=MAXFEATURES, edim=EMBEDSIZE, nhid=NUMHIDDEN, maxl=MAXLEN):
# https://mxnet.incubator.apache.org/api/python/rnn.html
data = mx.symbol.Variable('data')
embedded_step = mx.symbol.Embedding(data=data, input_dim=maxf, output_dim=edim)
# Fusing RNN layers across time step into one kernel
# Improves speed but is less flexible
# Currently only supported if using cuDNN on GPU
if not CUDNN:
gru_cell = mx.rnn.GRUCell(num_hidden=nhid)
else:
gru_cell = mx.rnn.FusedRNNCell(num_hidden=nhid, num_layers=1, mode='gru')
begin_state = gru_cell.begin_state()
# Call the cell to get the output of one time step for a batch.
# TODO: TNC layout (sequence length, batch size, and feature dimensions) is faster for RNN
outputs, states = gru_cell.unroll(length=maxl, inputs=embedded_step, merge_outputs=False)
fc1 = mx.symbol.FullyConnected(data=outputs[-1], num_hidden=2)
input_y = mx.symbol.Variable('softmax_label')
m = mx.symbol.SoftmaxOutput(data=fc1, label=input_y, name="softmax")
return m
def init_model(m, batchs=BATCHSIZE, maxl=MAXLEN, lr=LR, b1=BETA_1, b2=BETA_2, eps=EPS):
ctx = [mx.gpu(0)]
mod = mx.mod.Module(context=ctx, symbol=m)
mod.bind(data_shapes=[DataDesc(name='data', shape=(batchs, maxl))],
label_shapes=[DataDesc(name='softmax_label', shape=(batchs,))])
# Glorot-uniform initializer
mod.init_params(initializer=mx.init.Xavier(rnd_type='uniform'))
mod.init_optimizer(optimizer='Adam',
optimizer_params=(('learning_rate', lr),
('beta1', b1),
('beta2', b2),
('epsilon', eps)))
return mod
%%time
# Data into format for library
x_train, x_test, y_train, y_test = imdb_for_library(seq_len=MAXLEN, max_features=MAXFEATURES)
# TNC layout faster for RNN
# Train iterator
train_iter = mx.io.NDArrayIter(x_train, y_train, BATCHSIZE, shuffle=True)
print(x_train.shape, x_test.shape, y_train.shape, y_test.shape)
print(x_train.dtype, x_test.dtype, y_train.dtype, y_test.dtype)
%%time
# Load symbol
# See Notebook "MXNet_RNN_TNC.ipynb" for example with TNC layout
sym = create_symbol()
%%time
# Initialise model
model = init_model(sym)
%%time
# Main training loop: 12.7s
metric = mx.metric.create('acc')
for j in range(EPOCHS):
train_iter.reset()
metric.reset()
for batch in train_iter:
model.forward(batch, is_train=True)
model.update_metric(metric, batch.label)
model.backward()
model.update()
print('Epoch %d, Training %s' % (j, metric.get()))
%%time
# Main evaluation loop: 1.52s
y_guess = model.predict(mx.io.NDArrayIter(x_test, batch_size=BATCHSIZE, shuffle=False))
y_guess = np.argmax(y_guess.asnumpy(), axis=-1)
print("Accuracy: ", 1.*sum(y_guess == y_test)/len(y_guess))
```
| github_jupyter |
```
import psycopg2 as psy
import sqlalchemy
import numpy as np
import pandas as pd
import time
import random
import datetime
#test the connection with default mimic build options
try:
connection = psy.connect(user = "postgres",
password = "postgres",
host = "127.0.0.1",
port = "5432",
database = "mimic")
cursor = connection.cursor()
cursor1 = connection.cursor()
postgreSQL_select_Query = (
"select pneumonia_bal_hadm_visit.subject_id, pneumonia_bal_hadm_visit.hadm_id, chartevents.icustay_id, chartevents.itemid, chartevents.charttime, chartevents.storetime, chartevents.cgid, chartevents.valuenum, chartevents.valueuom, chartevents.warning, chartevents.error, chartevents.resultstatus, chartevents.stopped, diagnoses_icd.icd9_code, procedures_icd.icd9_code, services.transfertime, services.prev_service, services.curr_service, drgcodes.drg_type, drgcodes.drg_code, drgcodes.description, drgcodes.drg_severity, drgcodes.drg_mortality, patients.gender, patients.dob, patients.dod, patients.dod_hosp, patients.dod_ssn, patients.expire_flag, labevents.itemid, labevents.charttime, labevents.value, labevents.valuenum, labevents.valueuom, labevents.flag, noteevents.chartdate, noteevents.charttime, noteevents.storetime, noteevents.category, noteevents.description, noteevents.cgid, noteevents.iserror, noteevents.text, cptevents.costcenter, cptevents.cpt_cd, cptevents.cpt_number, cptevents.cpt_suffix, cptevents.ticket_id_seq, cptevents.sectionheader, cptevents.subsectionheader, cptevents.description, icustays.icustay_id, icustays.dbsource, icustays.first_careunit, icustays.last_careunit, icustays.first_wardid, icustays.last_wardid, icustays.intime, icustays.outtime, icustays.los, outputevents.charttime, outputevents.itemid, outputevents.value, outputevents.valueuom, outputevents.storetime, outputevents.cgid, outputevents.stopped, outputevents.newbottle, outputevents.iserror, transfers.dbsource, transfers.eventtype, transfers.prev_careunit, transfers.curr_careunit, transfers.prev_wardid, transfers.curr_wardid, transfers.intime, transfers.outtime, transfers.los, datetimeevents.charttime, datetimeevents.storetime, datetimeevents.cgid, datetimeevents.value, datetimeevents.valueuom, datetimeevents.warning, datetimeevents.error, datetimeevents.resultstatus, datetimeevents.stopped, microbiologyevents.chartdate, microbiologyevents.charttime, "
"microbiologyevents.spec_itemid, microbiologyevents.spec_type_desc, microbiologyevents.org_itemid, microbiologyevents.org_name, microbiologyevents.isolate_num, microbiologyevents.ab_itemid, microbiologyevents.ab_name, microbiologyevents.dilution_text, microbiologyevents.dilution_comparison, microbiologyevents.dilution_value, microbiologyevents.interpretation, admissions.admittime, admissions.dischtime, admissions.deathtime, admissions.admission_type, admissions.admission_location, admissions.discharge_location, admissions.insurance, admissions.language, admissions.religion, admissions.marital_status, admissions.ethnicity, admissions.edregtime, admissions.edouttime, admissions.diagnosis, admissions.hospital_expire_flag, admissions.has_chartevents_data "
"from mimiciii.pneumonia_bal_hadm_visit, mimiciii.chartevents, mimiciii.diagnoses_icd, mimiciii.procedures_icd, mimiciii.services, mimiciii.drgcodes, mimiciii.patients, mimiciii.labevents, mimiciii.noteevents, mimiciii.cptevents, mimiciii.icustays, mimiciii.outputevents, mimiciii.transfers, mimiciii.datetimeevents, mimiciii.microbiologyevents, mimiciii.admissions "
# each table joins to the cohort on subject_id (and on hadm_id where the table
# has one -- patients is keyed by subject_id only); chained equality such as
# a = b = c is not valid SQL, so the conditions are spelled out pairwise
"where pneumonia_bal_hadm_visit.subject_id = chartevents.subject_id and pneumonia_bal_hadm_visit.subject_id = diagnoses_icd.subject_id and pneumonia_bal_hadm_visit.subject_id = procedures_icd.subject_id and pneumonia_bal_hadm_visit.subject_id = services.subject_id and pneumonia_bal_hadm_visit.subject_id = drgcodes.subject_id and pneumonia_bal_hadm_visit.subject_id = patients.subject_id and pneumonia_bal_hadm_visit.subject_id = labevents.subject_id and pneumonia_bal_hadm_visit.subject_id = noteevents.subject_id and pneumonia_bal_hadm_visit.subject_id = cptevents.subject_id and pneumonia_bal_hadm_visit.subject_id = icustays.subject_id and pneumonia_bal_hadm_visit.subject_id = outputevents.subject_id and pneumonia_bal_hadm_visit.subject_id = transfers.subject_id and pneumonia_bal_hadm_visit.subject_id = datetimeevents.subject_id and pneumonia_bal_hadm_visit.subject_id = microbiologyevents.subject_id and pneumonia_bal_hadm_visit.subject_id = admissions.subject_id "
"and pneumonia_bal_hadm_visit.hadm_id = chartevents.hadm_id and pneumonia_bal_hadm_visit.hadm_id = diagnoses_icd.hadm_id and pneumonia_bal_hadm_visit.hadm_id = procedures_icd.hadm_id and pneumonia_bal_hadm_visit.hadm_id = services.hadm_id and pneumonia_bal_hadm_visit.hadm_id = drgcodes.hadm_id and pneumonia_bal_hadm_visit.hadm_id = labevents.hadm_id and pneumonia_bal_hadm_visit.hadm_id = noteevents.hadm_id and pneumonia_bal_hadm_visit.hadm_id = cptevents.hadm_id and pneumonia_bal_hadm_visit.hadm_id = icustays.hadm_id and pneumonia_bal_hadm_visit.hadm_id = outputevents.hadm_id and pneumonia_bal_hadm_visit.hadm_id = transfers.hadm_id and pneumonia_bal_hadm_visit.hadm_id = datetimeevents.hadm_id and pneumonia_bal_hadm_visit.hadm_id = microbiologyevents.hadm_id and pneumonia_bal_hadm_visit.hadm_id = admissions.hadm_id "
"and pneumonia_bal_hadm_visit.visit = 1")
cursor1.execute(postgreSQL_select_Query)
patient_info = cursor1.fetchall()
except (Exception, psy.Error) as error :
print ("Error while connecting to PostgreSQL", error)
finally:
#closing database connection.
if(connection):
cursor.close()
connection.close()
print("PostgreSQL connection is closed")
```
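A query string this long is easier to keep correct when the column list and join predicates are generated rather than typed by hand, especially since SQL comparisons do not chain and every join must be its own `a = b` pair. A minimal sketch of the idea (the `cohort` table name and the column lists below are illustrative, not the full MIMIC-III schema):

```python
# One dict drives both the SELECT list and the pairwise join predicates.
columns = {
    "patients": ["gender", "dob", "expire_flag"],
    "admissions": ["admittime", "dischtime", "admission_type"],
}
tables = list(columns)

select_list = ", ".join(f"{t}.{c}" for t in tables for c in columns[t])
# one explicit cohort.subject_id = table.subject_id pair per table
joins = " and ".join(f"cohort.subject_id = {t}.subject_id" for t in tables)
query = f"select {select_list} from cohort, {', '.join(tables)} where {joins}"
```

Adding or removing a table then touches one dict entry instead of a multi-thousand-character literal.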
## 06c HSC LSK treatment sexual dimorphism MAST
Run essentially the same analysis as in 04a and 04b, but on objects that contain only female or only male cells.
Run this model:
`zlmCond_all <- zlm(formula = ~condition + leiden + n_genes, sca=sca)`
using this Docker image:
`docker run --rm -d --name scanpy -p 8883:8888 -e JUPYTER_ENABLE_LAB=YES -v /Users/efast/Documents/:/home/jovyan/work r_scanpy:vs5`
```
import scanpy as sc
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import rcParams
from matplotlib import colors
import seaborn as sb
from gprofiler import GProfiler
import rpy2.rinterface_lib.callbacks
import logging
from rpy2.robjects import pandas2ri
import anndata2ri
# Ignore R warning messages
#Note: this can be commented out to get more verbose R output
rpy2.rinterface_lib.callbacks.logger.setLevel(logging.ERROR)
# Automatically convert rpy2 outputs to pandas dataframes
pandas2ri.activate()
anndata2ri.activate()
%load_ext rpy2.ipython
plt.rcParams['figure.figsize']=(8,8) #rescale figures
sc.settings.verbosity = 3
#sc.set_figure_params(dpi=200, dpi_save=300)
sc.logging.print_versions()
%%R
# Load libraries from correct lib Paths for my environment - ignore this!
.libPaths(.libPaths()[c(3,2,1)])
# Load all the R libraries we will be using in the notebook
library(scran)
library(ggplot2)
library(plyr)
library(MAST)
```
## HSC female
```
# load data
adata = sc.read('./sc_objects/LT_female.h5ad', cache = True)
#Create new Anndata object for use in MAST with non-batch corrected data as before
adata_raw = adata.copy()
adata_raw.X = adata.raw.X
adata_raw.obs['n_genes'] = (adata_raw.X > 0).sum(1) # recompute number of genes expressed per cell
adata = None
adata_raw.obs.head()
```
### Run MAST on total cells - Select genes expressed in >5% of cells (no adaptive thresholding)
```
%%R -i adata_raw
#Convert SingleCellExperiment to SingleCellAssay type as required by MAST
sca <- SceToSingleCellAssay(adata_raw, class = "SingleCellAssay")
#Scale Gene detection rate
colData(sca)$n_genes = scale(colData(sca)$n_genes)
# filter genes based on hard cutoff (have to be expressed in at least 5% of all cells)
freq_expressed <- 0.05
expressed_genes <- freq(sca) > freq_expressed
sca <- sca[expressed_genes,]
#rename the sample to condition and make the ct the control
cond<-factor(colData(sca)$sample)
cond<-relevel(cond,"ct")
colData(sca)$condition<-cond
```
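The same hard cutoff can be sanity-checked on the Python side before handing the object over to MAST. A sketch with a toy count matrix (in the notebook, the matrix would be `adata_raw.X`):

```python
import numpy as np

# toy cells x genes count matrix standing in for adata_raw.X
X = np.array([
    [0, 3, 0, 1],
    [0, 2, 0, 0],
    [0, 0, 0, 5],
    [0, 1, 0, 2],
])
# fraction of cells in which each gene is detected
detection_rate = (X > 0).mean(axis=0)
# mirror freq(sca) > 0.05: keep genes expressed in more than 5% of cells
keep = detection_rate > 0.05
X_filtered = X[:, keep]
```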
#### everything
background:
`zlmCond_all <- zlm(formula = ~condition + leiden +n_genes, sca=sca) # this runs the model`
a formula with the measurement variable (gene expression) on the LHS (left hand side) and
predictors present in colData on the RHS
expression of genes controlling for cluster, condition, sex + n_genes
questions I can ask:
sex differences controlling for treatments
sex differences controlling for clusters - not necessary to analyze all the clusters
overall gene expression changes in treatment
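To make the formula concrete: `condition` and `leiden` enter as treatment-coded factors (reference levels `ct` and cluster `0`) alongside the scaled `n_genes` covariate. A rough pandas analogue of the design matrix that `zlm` builds internally (toy cell metadata, illustrative only):

```python
import pandas as pd

# toy per-cell covariates standing in for colData(sca)
cells = pd.DataFrame({
    "condition": ["ct", "GCSF", "dmPGE2", "ct"],
    "leiden": ["0", "1", "0", "2"],
    "n_genes": [-0.5, 1.2, 0.3, -1.0],  # already scaled
})
# treatment coding: drop the reference level of each factor,
# mirroring ~condition + leiden + n_genes
design = pd.concat(
    [pd.get_dummies(cells["condition"], prefix="condition").drop(columns="condition_ct"),
     pd.get_dummies(cells["leiden"], prefix="leiden").drop(columns="leiden_0"),
     cells[["n_genes"]]],
    axis=1,
)
design.insert(0, "Intercept", 1.0)
```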
```
%%R
#Define & run hurdle model
zlmCond_all <- zlm(formula = ~condition + n_genes + leiden, sca=sca) # this runs the model
summaryCond_all <- summary(zlmCond_all, doLRT=TRUE) # extracts the data, gives datatable with summary of fit, doLRT=TRUE extracts likelihood ratio test p-value
summaryDt_all <- summaryCond_all$datatable # reformats into a table
%%R
head(summaryDt_all)
%%R -o GCSF_all -o dmPGE2_all -o indo_all -o pIC_all
# reformat for GCSF
result_all_GCSF <- merge(summaryDt_all[contrast=='conditionGCSF' & component=='H',.(primerid, `Pr(>Chisq)`)], #P-vals
summaryDt_all[contrast=='conditionGCSF' & component=='logFC', .(primerid, coef)],
by='primerid') #logFC coefficients
#Correct for multiple testing (FDR correction) and filtering
result_all_GCSF[,FDR:=p.adjust(`Pr(>Chisq)`, 'fdr')] # add an FDR column of Benjamini-Hochberg-adjusted p-values
GCSF_all = result_all_GCSF[result_all_GCSF$FDR<0.01,, drop=F] # keep only rows with FDR < 0.01
GCSF_all = GCSF_all[order(GCSF_all$FDR),] # sort by FDR, ascending
# reformat for dmPGE2
result_all_dmPGE2 <- merge(summaryDt_all[contrast=='conditiondmPGE2' & component=='H',.(primerid, `Pr(>Chisq)`)], #P-vals
summaryDt_all[contrast=='conditiondmPGE2' & component=='logFC', .(primerid, coef)],
by='primerid') #logFC coefficients
#Correct for multiple testing (FDR correction) and filtering
result_all_dmPGE2[,FDR:=p.adjust(`Pr(>Chisq)`, 'fdr')] # add an FDR column of Benjamini-Hochberg-adjusted p-values
dmPGE2_all = result_all_dmPGE2[result_all_dmPGE2$FDR<0.01,, drop=F] # keep only rows with FDR < 0.01
dmPGE2_all = dmPGE2_all[order(dmPGE2_all$FDR),] # sort by FDR, ascending
# reformat for indo
result_all_indo <- merge(summaryDt_all[contrast=='conditionindo' & component=='H',.(primerid, `Pr(>Chisq)`)], #P-vals
summaryDt_all[contrast=='conditionindo' & component=='logFC', .(primerid, coef)],
by='primerid') #logFC coefficients
#Correct for multiple testing (FDR correction) and filtering
result_all_indo[,FDR:=p.adjust(`Pr(>Chisq)`, 'fdr')] # add an FDR column of Benjamini-Hochberg-adjusted p-values
indo_all = result_all_indo[result_all_indo$FDR<0.01,, drop=F] # keep only rows with FDR < 0.01
indo_all = indo_all[order(indo_all$FDR),] # sort by FDR, ascending
# reformat for pIC
result_all_pIC <- merge(summaryDt_all[contrast=='conditionpIC' & component=='H',.(primerid, `Pr(>Chisq)`)], #P-vals
summaryDt_all[contrast=='conditionpIC' & component=='logFC', .(primerid, coef)],
by='primerid') #logFC coefficients
#Correct for multiple testing (FDR correction) and filtering
result_all_pIC[,FDR:=p.adjust(`Pr(>Chisq)`, 'fdr')] # add an FDR column of Benjamini-Hochberg-adjusted p-values
pIC_all = result_all_pIC[result_all_pIC$FDR<0.01,, drop=F] # keep only rows with FDR < 0.01
pIC_all = pIC_all[order(pIC_all$FDR),] # sort by FDR, ascending
%%R -o MAST_raw_all
MAST_raw_all <- summaryDt_all
# save files as .csvs
MAST_raw_all.to_csv('./write/MAST_raw_LT_leiden_female.csv')
GCSF_all.to_csv('./write/MAST_GCSF_LT_leiden_female.csv')
pIC_all.to_csv('./write/MAST_pIC_LT_leiden_female.csv')
dmPGE2_all.to_csv('./write/MAST_dmPGE2_LT_leiden_female.csv')
indo_all.to_csv('./write/MAST_indo_LT_leiden_female.csv')
%%R
# remove previous variables
rm(zlmCond_all)
rm(summaryDt_all)
rm(summaryCond_all)
rm(MAST_raw_all)
```
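The four per-treatment cells above repeat one pattern: merge the hurdle-model p-values with the logFC coefficients for a contrast, BH-adjust, filter at FDR < 0.01, and sort. Once the summary table is pulled over to pandas, that pattern can be factored into a single helper — a sketch (the `contrast`, `component`, `primerid`, `Pr(>Chisq)`, and `coef` column names are assumed to match the exported MAST summary table):

```python
import numpy as np
import pandas as pd

def bh_adjust(pvals):
    """Benjamini-Hochberg FDR adjustment, mirroring R's p.adjust(..., 'fdr')."""
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest p-value downwards
    adj = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.clip(adj, 0, 1)
    return out

def filter_condition(summary, contrast, fdr_cutoff=0.01):
    """Merge p-values and logFC for one contrast, BH-adjust, filter, sort."""
    pvals = summary[(summary.contrast == contrast) & (summary.component == "H")][["primerid", "Pr(>Chisq)"]]
    coefs = summary[(summary.contrast == contrast) & (summary.component == "logFC")][["primerid", "coef"]]
    merged = pvals.merge(coefs, on="primerid")
    merged["FDR"] = bh_adjust(merged["Pr(>Chisq)"])
    return merged[merged.FDR < fdr_cutoff].sort_values("FDR")
```

Each treatment then becomes one call, e.g. `filter_condition(MAST_raw_all, "conditionGCSF")`.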
## HSC male
```
# load data
adata = sc.read('./sc_objects/LT_male.h5ad', cache = True)
#Create new Anndata object for use in MAST with non-batch corrected data as before
adata_raw = adata.copy()
adata_raw.X = adata.raw.X
adata_raw.obs['n_genes'] = (adata_raw.X > 0).sum(1) # recompute number of genes expressed per cell
adata = None
adata_raw.obs.head()
```
### Run MAST on total cells - Select genes expressed in >5% of cells (no adaptive thresholding)
```
%%R -i adata_raw
#Convert SingleCellExperiment to SingleCellAssay type as required by MAST
sca <- SceToSingleCellAssay(adata_raw, class = "SingleCellAssay")
#Scale Gene detection rate
colData(sca)$n_genes = scale(colData(sca)$n_genes)
# filter genes based on hard cutoff (have to be expressed in at least 5% of all cells)
freq_expressed <- 0.05
expressed_genes <- freq(sca) > freq_expressed
sca <- sca[expressed_genes,]
#rename the sample to condition and make the ct the control
cond<-factor(colData(sca)$sample)
cond<-relevel(cond,"ct")
colData(sca)$condition<-cond
```
#### everything
background:
`zlmCond_all <- zlm(formula = ~condition + leiden +n_genes, sca=sca) # this runs the model`
a formula with the measurement variable (gene expression) on the LHS (left hand side) and
predictors present in colData on the RHS
expression of genes controlling for cluster, condition, sex + n_genes
questions I can ask:
sex differences controlling for treatments
sex differences controlling for clusters - not necessary to analyze all the clusters
overall gene expression changes in treatment
```
%%R
#Define & run hurdle model
zlmCond_all <- zlm(formula = ~condition + n_genes + leiden, sca=sca) # this runs the model
summaryCond_all <- summary(zlmCond_all, doLRT=TRUE) # extracts the data, gives datatable with summary of fit, doLRT=TRUE extracts likelihood ratio test p-value
summaryDt_all <- summaryCond_all$datatable # reformats into a table
%%R
head(summaryDt_all)
%%R -o GCSF_all -o dmPGE2_all -o indo_all -o pIC_all
# reformat for GCSF
result_all_GCSF <- merge(summaryDt_all[contrast=='conditionGCSF' & component=='H',.(primerid, `Pr(>Chisq)`)], #P-vals
summaryDt_all[contrast=='conditionGCSF' & component=='logFC', .(primerid, coef)],
by='primerid') #logFC coefficients
#Correct for multiple testing (FDR correction) and filtering
result_all_GCSF[,FDR:=p.adjust(`Pr(>Chisq)`, 'fdr')] # add an FDR column of Benjamini-Hochberg-adjusted p-values
GCSF_all = result_all_GCSF[result_all_GCSF$FDR<0.01,, drop=F] # keep only rows with FDR < 0.01
GCSF_all = GCSF_all[order(GCSF_all$FDR),] # sort by FDR, ascending
# reformat for dmPGE2
result_all_dmPGE2 <- merge(summaryDt_all[contrast=='conditiondmPGE2' & component=='H',.(primerid, `Pr(>Chisq)`)], #P-vals
summaryDt_all[contrast=='conditiondmPGE2' & component=='logFC', .(primerid, coef)],
by='primerid') #logFC coefficients
#Correct for multiple testing (FDR correction) and filtering
result_all_dmPGE2[,FDR:=p.adjust(`Pr(>Chisq)`, 'fdr')] # add an FDR column of Benjamini-Hochberg-adjusted p-values
dmPGE2_all = result_all_dmPGE2[result_all_dmPGE2$FDR<0.01,, drop=F] # keep only rows with FDR < 0.01
dmPGE2_all = dmPGE2_all[order(dmPGE2_all$FDR),] # sort by FDR, ascending
# reformat for indo
result_all_indo <- merge(summaryDt_all[contrast=='conditionindo' & component=='H',.(primerid, `Pr(>Chisq)`)], #P-vals
summaryDt_all[contrast=='conditionindo' & component=='logFC', .(primerid, coef)],
by='primerid') #logFC coefficients
#Correct for multiple testing (FDR correction) and filtering
result_all_indo[,FDR:=p.adjust(`Pr(>Chisq)`, 'fdr')] # add an FDR column of Benjamini-Hochberg-adjusted p-values
indo_all = result_all_indo[result_all_indo$FDR<0.01,, drop=F] # keep only rows with FDR < 0.01
indo_all = indo_all[order(indo_all$FDR),] # sort by FDR, ascending
# reformat for pIC
result_all_pIC <- merge(summaryDt_all[contrast=='conditionpIC' & component=='H',.(primerid, `Pr(>Chisq)`)], #P-vals
summaryDt_all[contrast=='conditionpIC' & component=='logFC', .(primerid, coef)],
by='primerid') #logFC coefficients
#Correct for multiple testing (FDR correction) and filtering
result_all_pIC[,FDR:=p.adjust(`Pr(>Chisq)`, 'fdr')] # add an FDR column of Benjamini-Hochberg-adjusted p-values
pIC_all = result_all_pIC[result_all_pIC$FDR<0.01,, drop=F] # keep only rows with FDR < 0.01
pIC_all = pIC_all[order(pIC_all$FDR),] # sort by FDR, ascending
%%R -o MAST_raw_all
MAST_raw_all <- summaryDt_all
# save files as .csvs
MAST_raw_all.to_csv('./write/MAST_raw_LT_leiden_male.csv')
GCSF_all.to_csv('./write/MAST_GCSF_LT_leiden_male.csv')
pIC_all.to_csv('./write/MAST_pIC_LT_leiden_male.csv')
dmPGE2_all.to_csv('./write/MAST_dmPGE2_LT_leiden_male.csv')
indo_all.to_csv('./write/MAST_indo_LT_leiden_male.csv')
%%R
# remove previous variables
rm(zlmCond_all)
rm(summaryDt_all)
rm(summaryCond_all)
rm(MAST_raw_all)
```
## LSK female
```
# load data
adata = sc.read('./sc_objects/MPP_female.h5ad', cache = True)
#Create new Anndata object for use in MAST with non-batch corrected data as before
adata_raw = adata.copy()
adata_raw.X = adata.raw.X
adata_raw.obs['n_genes'] = (adata_raw.X > 0).sum(1) # recompute number of genes expressed per cell
adata = None
adata_raw.obs.head()
```
### Run MAST on total cells - Select genes expressed in >5% of cells (no adaptive thresholding)
```
%%R -i adata_raw
#Convert SingleCellExperiment to SingleCellAssay type as required by MAST
sca <- SceToSingleCellAssay(adata_raw, class = "SingleCellAssay")
#Scale Gene detection rate
colData(sca)$n_genes = scale(colData(sca)$n_genes)
# filter genes based on hard cutoff (have to be expressed in at least 5% of all cells)
freq_expressed <- 0.05
expressed_genes <- freq(sca) > freq_expressed
sca <- sca[expressed_genes,]
#rename the sample to condition and make the ct the control
cond<-factor(colData(sca)$sample)
cond<-relevel(cond,"ct")
colData(sca)$condition<-cond
```
#### everything
background:
`zlmCond_all <- zlm(formula = ~condition + leiden +n_genes, sca=sca) # this runs the model`
a formula with the measurement variable (gene expression) on the LHS (left hand side) and
predictors present in colData on the RHS
expression of genes controlling for cluster, condition, sex + n_genes
questions I can ask:
sex differences controlling for treatments
sex differences controlling for clusters - not necessary to analyze all the clusters
overall gene expression changes in treatment
```
%%R
#Define & run hurdle model
zlmCond_all <- zlm(formula = ~condition + n_genes + leiden, sca=sca) # this runs the model
summaryCond_all <- summary(zlmCond_all, doLRT=TRUE) # extracts the data, gives datatable with summary of fit, doLRT=TRUE extracts likelihood ratio test p-value
summaryDt_all <- summaryCond_all$datatable # reformats into a table
%%R
head(summaryDt_all)
%%R -o GCSF_all -o dmPGE2_all -o indo_all -o pIC_all
# reformat for GCSF
result_all_GCSF <- merge(summaryDt_all[contrast=='conditionGCSF' & component=='H',.(primerid, `Pr(>Chisq)`)], #P-vals
summaryDt_all[contrast=='conditionGCSF' & component=='logFC', .(primerid, coef)],
by='primerid') #logFC coefficients
#Correct for multiple testing (FDR correction) and filtering
result_all_GCSF[,FDR:=p.adjust(`Pr(>Chisq)`, 'fdr')] # add an FDR column of Benjamini-Hochberg-adjusted p-values
GCSF_all = result_all_GCSF[result_all_GCSF$FDR<0.01,, drop=F] # keep only rows with FDR < 0.01
GCSF_all = GCSF_all[order(GCSF_all$FDR),] # sort by FDR, ascending
# reformat for dmPGE2
result_all_dmPGE2 <- merge(summaryDt_all[contrast=='conditiondmPGE2' & component=='H',.(primerid, `Pr(>Chisq)`)], #P-vals
summaryDt_all[contrast=='conditiondmPGE2' & component=='logFC', .(primerid, coef)],
by='primerid') #logFC coefficients
#Correct for multiple testing (FDR correction) and filtering
result_all_dmPGE2[,FDR:=p.adjust(`Pr(>Chisq)`, 'fdr')] # add an FDR column of Benjamini-Hochberg-adjusted p-values
dmPGE2_all = result_all_dmPGE2[result_all_dmPGE2$FDR<0.01,, drop=F] # keep only rows with FDR < 0.01
dmPGE2_all = dmPGE2_all[order(dmPGE2_all$FDR),] # sort by FDR, ascending
# reformat for indo
result_all_indo <- merge(summaryDt_all[contrast=='conditionindo' & component=='H',.(primerid, `Pr(>Chisq)`)], #P-vals
summaryDt_all[contrast=='conditionindo' & component=='logFC', .(primerid, coef)],
by='primerid') #logFC coefficients
#Correct for multiple testing (FDR correction) and filtering
result_all_indo[,FDR:=p.adjust(`Pr(>Chisq)`, 'fdr')] # add an FDR column of Benjamini-Hochberg-adjusted p-values
indo_all = result_all_indo[result_all_indo$FDR<0.01,, drop=F] # keep only rows with FDR < 0.01
indo_all = indo_all[order(indo_all$FDR),] # sort by FDR, ascending
# reformat for pIC
result_all_pIC <- merge(summaryDt_all[contrast=='conditionpIC' & component=='H',.(primerid, `Pr(>Chisq)`)], #P-vals
summaryDt_all[contrast=='conditionpIC' & component=='logFC', .(primerid, coef)],
by='primerid') #logFC coefficients
#Correct for multiple testing (FDR correction) and filtering
result_all_pIC[,FDR:=p.adjust(`Pr(>Chisq)`, 'fdr')] # add an FDR column of Benjamini-Hochberg-adjusted p-values
pIC_all = result_all_pIC[result_all_pIC$FDR<0.01,, drop=F] # keep only rows with FDR < 0.01
pIC_all = pIC_all[order(pIC_all$FDR),] # sort by FDR, ascending
%%R -o MAST_raw_all
MAST_raw_all <- summaryDt_all
# save files as .csvs
MAST_raw_all.to_csv('./write/MAST_raw_MPP_leiden_female.csv')
GCSF_all.to_csv('./write/MAST_GCSF_MPP_leiden_female.csv')
pIC_all.to_csv('./write/MAST_pIC_MPP_leiden_female.csv')
dmPGE2_all.to_csv('./write/MAST_dmPGE2_MPP_leiden_female.csv')
indo_all.to_csv('./write/MAST_indo_MPP_leiden_female.csv')
%%R
# remove previous variables
rm(zlmCond_all)
rm(summaryDt_all)
rm(summaryCond_all)
rm(MAST_raw_all)
```
## LSK Male
```
# load data
adata = sc.read('./sc_objects/MPP_male.h5ad', cache = True)
#Create new Anndata object for use in MAST with non-batch corrected data as before
adata_raw = adata.copy()
adata_raw.X = adata.raw.X
adata_raw.obs['n_genes'] = (adata_raw.X > 0).sum(1) # recompute number of genes expressed per cell
adata = None
adata_raw.obs.head()
```
### Run MAST on total cells - Select genes expressed in >5% of cells (no adaptive thresholding)
```
%%R -i adata_raw
#Convert SingleCellExperiment to SingleCellAssay type as required by MAST
sca <- SceToSingleCellAssay(adata_raw, class = "SingleCellAssay")
#Scale Gene detection rate
colData(sca)$n_genes = scale(colData(sca)$n_genes)
# filter genes based on hard cutoff (have to be expressed in at least 5% of all cells)
freq_expressed <- 0.05
expressed_genes <- freq(sca) > freq_expressed
sca <- sca[expressed_genes,]
#rename the sample to condition and make the ct the control
cond<-factor(colData(sca)$sample)
cond<-relevel(cond,"ct")
colData(sca)$condition<-cond
```
#### everything
background:
`zlmCond_all <- zlm(formula = ~condition + leiden +n_genes, sca=sca) # this runs the model`
a formula with the measurement variable (gene expression) on the LHS (left hand side) and
predictors present in colData on the RHS
expression of genes controlling for cluster, condition, sex + n_genes
questions I can ask:
sex differences controlling for treatments
sex differences controlling for clusters - not necessary to analyze all the clusters
overall gene expression changes in treatment
```
%%R
#Define & run hurdle model
zlmCond_all <- zlm(formula = ~condition + n_genes + leiden, sca=sca) # this runs the model
summaryCond_all <- summary(zlmCond_all, doLRT=TRUE) # extracts the data, gives datatable with summary of fit, doLRT=TRUE extracts likelihood ratio test p-value
summaryDt_all <- summaryCond_all$datatable # reformats into a table
%%R
head(summaryDt_all)
%%R -o GCSF_all -o dmPGE2_all -o indo_all -o pIC_all
# reformat for GCSF
result_all_GCSF <- merge(summaryDt_all[contrast=='conditionGCSF' & component=='H',.(primerid, `Pr(>Chisq)`)], #P-vals
summaryDt_all[contrast=='conditionGCSF' & component=='logFC', .(primerid, coef)],
by='primerid') #logFC coefficients
#Correct for multiple testing (FDR correction) and filtering
result_all_GCSF[,FDR:=p.adjust(`Pr(>Chisq)`, 'fdr')] # add an FDR column of Benjamini-Hochberg-adjusted p-values
GCSF_all = result_all_GCSF[result_all_GCSF$FDR<0.01,, drop=F] # keep only rows with FDR < 0.01
GCSF_all = GCSF_all[order(GCSF_all$FDR),] # sort by FDR, ascending
# reformat for dmPGE2
result_all_dmPGE2 <- merge(summaryDt_all[contrast=='conditiondmPGE2' & component=='H',.(primerid, `Pr(>Chisq)`)], #P-vals
summaryDt_all[contrast=='conditiondmPGE2' & component=='logFC', .(primerid, coef)],
by='primerid') #logFC coefficients
#Correct for multiple testing (FDR correction) and filtering
result_all_dmPGE2[,FDR:=p.adjust(`Pr(>Chisq)`, 'fdr')] # add an FDR column of Benjamini-Hochberg-adjusted p-values
dmPGE2_all = result_all_dmPGE2[result_all_dmPGE2$FDR<0.01,, drop=F] # keep only rows with FDR < 0.01
dmPGE2_all = dmPGE2_all[order(dmPGE2_all$FDR),] # sort by FDR, ascending
# reformat for indo
result_all_indo <- merge(summaryDt_all[contrast=='conditionindo' & component=='H',.(primerid, `Pr(>Chisq)`)], #P-vals
summaryDt_all[contrast=='conditionindo' & component=='logFC', .(primerid, coef)],
by='primerid') #logFC coefficients
#Correct for multiple testing (FDR correction) and filtering
result_all_indo[,FDR:=p.adjust(`Pr(>Chisq)`, 'fdr')] # add an FDR column of Benjamini-Hochberg-adjusted p-values
indo_all = result_all_indo[result_all_indo$FDR<0.01,, drop=F] # keep only rows with FDR < 0.01
indo_all = indo_all[order(indo_all$FDR),] # sort by FDR, ascending
# reformat for pIC
result_all_pIC <- merge(summaryDt_all[contrast=='conditionpIC' & component=='H',.(primerid, `Pr(>Chisq)`)], #P-vals
summaryDt_all[contrast=='conditionpIC' & component=='logFC', .(primerid, coef)],
by='primerid') #logFC coefficients
#Correct for multiple testing (FDR correction) and filtering
result_all_pIC[,FDR:=p.adjust(`Pr(>Chisq)`, 'fdr')] # add an FDR column of Benjamini-Hochberg-adjusted p-values
pIC_all = result_all_pIC[result_all_pIC$FDR<0.01,, drop=F] # keep only rows with FDR < 0.01
pIC_all = pIC_all[order(pIC_all$FDR),] # sort by FDR, ascending
%%R -o MAST_raw_all
MAST_raw_all <- summaryDt_all
# save files as .csvs
MAST_raw_all.to_csv('./write/MAST_raw_MPP_leiden_male.csv')
GCSF_all.to_csv('./write/MAST_GCSF_MPP_leiden_male.csv')
pIC_all.to_csv('./write/MAST_pIC_MPP_leiden_male.csv')
dmPGE2_all.to_csv('./write/MAST_dmPGE2_MPP_leiden_male.csv')
indo_all.to_csv('./write/MAST_indo_MPP_leiden_male.csv')
%%R
# remove previous variables
rm(zlmCond_all)
rm(summaryDt_all)
rm(summaryCond_all)
rm(MAST_raw_all)
!pip list
```
# k-Nearest Neighbor (kNN) exercise
*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*
The kNN classifier consists of two stages:
- During training, the classifier takes the training data and simply remembers it
- During testing, kNN classifies every test image by comparing to all training images and transferring the labels of the k most similar training examples
- The value of k is cross-validated
In this exercise you will implement these steps, understand the basic Image Classification pipeline and cross-validation, and gain proficiency in writing efficient, vectorized code.
```
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
    idxs = np.flatnonzero(y_train == y)
    idxs = np.random.choice(idxs, samples_per_class, replace=False)
    for i, idx in enumerate(idxs):
        plt_idx = i * num_classes + y + 1
        plt.subplot(samples_per_class, num_classes, plt_idx)
        plt.imshow(X_train[idx].astype('uint8'))
        plt.axis('off')
        if i == 0:
            plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print(X_train.shape, X_test.shape)
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
```
We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:
1. First we must compute the distances between all test examples and all train examples.
2. Given these distances, for each test example we find the k nearest examples and have them vote for the label
Let's begin by computing the distance matrix between all training and test examples. For example, if there are **Ntr** training examples and **Nte** test examples, this stage should result in an **Nte** x **Ntr** matrix where each element (i,j) is the distance between the i-th test and j-th train example.
First, open `cs231n/classifiers/k_nearest_neighbor.py` and implement the function `compute_distances_two_loops` that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
```
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print(dists.shape)
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
```
**Inline Question #1:** Notice the structured patterns in the distance matrix, where some rows or columns are visible brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)
- What in the data is the cause behind the distinctly bright rows?
- What causes the columns?
**Your Answer**:
*- If the i-th test example is similar to many of the training examples, the i-th row will be dark (low distances); otherwise it will be bright.
- If the j-th training example is similar to many of the test examples, the j-th column will be dark; otherwise it will be bright.*
```
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
```
You should expect to see approximately `27%` accuracy. Now let's try out a larger `k`, say `k = 5`:
```
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
```
You should expect to see a slightly better performance than with `k = 1`.
```
# Now let's speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
    print('Good! The distance matrices are the same')
else:
    print('Uh-oh! The distance matrices are different')
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
    print('Good! The distance matrices are the same')
else:
    print('Uh-oh! The distance matrices are different')
# Let's compare how fast the implementations are
def time_function(f, *args):
    """
    Call a function f with args and return the time (in seconds) that it took to execute.
    """
    import time
    tic = time.time()
    f(*args)
    toc = time.time()
    return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print('Two loop version took %f seconds' % two_loop_time)
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print('One loop version took %f seconds' % one_loop_time)
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print('No loop version took %f seconds' % no_loop_time)
# you should see significantly faster performance with the fully vectorized implementation
```
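The no-loop speedup comes from expanding the squared Euclidean distance, ||x − y||² = ||x||² − 2 x·y + ||y||², so the entire **Nte** x **Ntr** matrix falls out of one matrix product plus broadcasting. A standalone sketch of the idea (independent of the class in `k_nearest_neighbor.py`):

```python
import numpy as np

def l2_distances(X_test, X_train):
    """All pairwise Euclidean distances, with no explicit Python loops."""
    test_sq = np.sum(X_test ** 2, axis=1, keepdims=True)   # (Nte, 1)
    train_sq = np.sum(X_train ** 2, axis=1)                # (Ntr,), broadcast over rows
    cross = X_test @ X_train.T                             # (Nte, Ntr)
    sq = np.maximum(test_sq - 2 * cross + train_sq, 0)     # clamp tiny negative round-off
    return np.sqrt(sq)
```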
### Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
```
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
y_train_ = y_train.reshape(-1, 1)
X_train_folds, y_train_folds = np.array_split(X_train, num_folds), np.array_split(y_train_, num_folds)
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
for k_ in k_choices:
    k_to_accuracies.setdefault(k_, [])
for i in range(num_folds):
    classifier = KNearestNeighbor()
    X_val_train = np.vstack(X_train_folds[0:i] + X_train_folds[i+1:])
    y_val_train = np.vstack(y_train_folds[0:i] + y_train_folds[i+1:])
    y_val_train = y_val_train[:,0]
    classifier.train(X_val_train, y_val_train)
    for k_ in k_choices:
        y_val_pred = classifier.predict(X_train_folds[i], k=k_)
        num_correct = np.sum(y_val_pred == y_train_folds[i][:,0])
        accuracy = float(num_correct) / len(y_val_pred)
        k_to_accuracies[k_].append(accuracy)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
    for accuracy in k_to_accuracies[k]:
        print('k = %d, accuracy = %f' % (k, accuracy))
# plot the raw observations
for k in k_choices:
    accuracies = k_to_accuracies[k]
    plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 10
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
```
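Rather than hard-coding `best_k`, the choice can be derived from `k_to_accuracies` by taking the k with the highest mean cross-validation accuracy. A minimal sketch with made-up accuracy values:

```python
import numpy as np

# Toy accuracies for three candidate k values (made-up numbers)
k_to_accuracies = {1: [0.25, 0.27, 0.26],
                   5: [0.29, 0.30, 0.28],
                   10: [0.31, 0.30, 0.32]}

# Pick the k whose folds have the highest mean accuracy
best_k = max(k_to_accuracies, key=lambda k: np.mean(k_to_accuracies[k]))
print(best_k)  # -> 10
```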
# Intro
SQL is the programming language used with databases, and it is an important skill for any data scientist. You'll build your SQL skills in this course and apply those skills using BigQuery, a database system that lets you run SQL queries on huge datasets.
This lesson describes basics about connecting to the database and running your first SQL query. After you have a handle on these basics, we'll come back to build your SQL skills.
# Your First BigQuery Commands
We'll access BigQuery using a Python package called `bq_helper` that puts BigQuery results into Pandas DataFrames. This is valuable if you are familiar with Pandas. In case you aren't, we have a separate [Pandas course](https://www.kaggle.com/learn/pandas).
You can import `bq_helper` in the standard way.
```
import bq_helper
```
We also need to create a BigQueryHelper object pointing to a specific dataset.
For now, we will give you the names of the datasets you will connect to. The current example uses a dataset of posts to HackerNews.
```
# create a helper object for our bigquery dataset
hacker_news = bq_helper.BigQueryHelper(active_project= "bigquery-public-data",
dataset_name = "hacker_news")
```
# Database Schemas
The structure of a dataset is called its **schema**.
We need to understand a database's schema to effectively pull out the data we want (called "querying the database"). The `BigQueryHelper.list_tables()` method lists the tables in the dataset. A table is composed of rows and columns, like a spreadsheet table. The database itself can hold multiple tables, much as a spreadsheet file can hold multiple sheets.
```
# print a list of all the tables in the hacker_news dataset
hacker_news.list_tables()
```
Now that we know what tables are in this dataset, we can explore the columns in individual tables. In this example, we'll look at a table called "full". Note that other datasets have different table names, so you will not always use "full."
```
# print information on all the columns in the "full" table
# in the hacker_news dataset
hacker_news.table_schema("full")
```
Each SchemaField tells us about a specific column. In order, the information is:
* The name of the column
* The datatype in the column
* [The mode of the column](https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#schema.fields.mode) (NULLABLE means that a column allows NULL values, and is the default)
* A description of the data in that column
The first field has the SchemaField:
`SchemaField('by', 'string', 'NULLABLE', "The username of the item's author.",())`
This tells us
- the field is called "by"
- the data in this field is strings
- NULL values are allowed
- It contains the "username" of the item's author.
We can use the `BigQueryHelper.head()` method to check just the first couple of lines of the "full" table to make sure this is right. (Sometimes databases have outdated descriptions, so it's good to check.)
```
# preview the first couple lines of the "full" table
hacker_news.head("full")
```
The `BigQueryHelper.head()` method will also let us look at just the information in a specific column. If we want to see the first ten entries in the "by" column, for example, we can do that!
```
# preview the first ten entries in the by column of the full table
hacker_news.head("full", selected_columns="by", num_rows=10)
```
# Wrap Up
You've seen how to:
- Set up a helper function to access your database (`BigQueryHelper`)
- List the tables in your database (`list_tables`)
- Review the schema for any table (`table_schema`)
- Inspect the top few rows in a table (`head`)
You're about to get a chance to try these out.
Before we go into the coding exercise, a quick disclaimer for those who already know some SQL:
**Each Kaggle user can scan 5TB every 30 days for free. Once you hit that limit, you'll have to wait for it to reset.**
The commands you've seen so far won't consume a meaningful fraction of that limit. But some BigQuery datasets are huge. So, if you already know SQL, wait to run `SELECT` queries until you've seen how to use your allotment effectively. If you are like most people reading this, you don't know how to write these queries yet, so you don't need to worry about this disclaimer.
# Your Turn
Practice the commands you've seen to **[Explore The Structure of a Dataset](#$NEXT_NOTEBOOK_URL$)** with crimes in the city of Chicago.
# Introduction
In this notebook, we analyse the oracles generated in the **tc_br_orc_v2_gen** notebook:
* TC_BR_volunteers
* TC_BR_expert
* TC_BR_volunteers_expert_union
* TC_BR_volunteers_expert_intersec
# Load Libraries and Datasets
```
from mod_finder_util import mod_finder_util
mod_finder_util.add_modules_origin_search_path()
import pandas as pd
import numpy as np
from sklearn.metrics import cohen_kappa_score
from modules.utils import firefox_dataset_p2 as fd
from modules.utils import aux_functions
from matplotlib import pyplot as plt
volunteers_oracle = fd.Tc_BR_Oracles.read_oracle_volunteers_df()
expert_oracle = fd.Tc_BR_Oracles.read_oracle_expert_df()
print()
volunteers_expert_union_oracle = fd.Tc_BR_Oracles.read_oracle_expert_volunteers_union_df()
volunteers_expert_intersec_oracle = fd.Tc_BR_Oracles.read_oracle_expert_volunteers_intersec_df()
print()
bugreports = fd.Datasets.read_selected_bugreports_df()
testcases = fd.Datasets.read_testcases_df()
print()
br_2_feature_matrix_final = fd.Feat_BR_Oracles.read_br_2_features_matrix_final_df()
```
# Cohen's Kappa - Test Cases x Bug Reports Trace Matrix
In the section below, we calculate Cohen's kappa based on two matrices:
* the matrix of Test Cases x Bug Reports generated from the answers of the **expert**
* the matrix of Test Cases x Bug Reports generated from the answers of the **volunteers**
```
expert_answers = []
volunteers_answers = []
for idx, row in volunteers_oracle.iterrows():
    for col in volunteers_oracle.columns:
        volunteers_answers.append(volunteers_oracle.at[idx, col])
        expert_answers.append(expert_oracle.at[idx, col])
print("Expert Answers Length: {}".format(len(expert_answers)))
print("Volunteers Answers Length: {}".format(len(volunteers_answers)))
print("Cohen Kappa Score: {}".format(cohen_kappa_score(expert_answers, volunteers_answers)))
```
We can observe a weak inter-rater agreement level: because the kappa is between 0.40 and 0.59, only 15-35% of the data is reliable. [Source](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3900052/)
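For intuition, here is a hand-rolled sketch of what `cohen_kappa_score` computes, (p_o - p_e) / (1 - p_e), on two toy binary answer vectors:

```python
# Toy answers from two raters (made-up data, not the real oracles)
expert = [1, 1, 0, 0, 1]
volunteer = [1, 0, 0, 0, 1]

n = len(expert)
# observed agreement: fraction of cells where the raters gave the same answer
p_o = sum(e == v for e, v in zip(expert, volunteer)) / n
# expected agreement if both raters answered independently at their own base rates
p_e = sum((sum(e == c for e in expert) / n) * (sum(v == c for v in volunteer) / n)
          for c in (0, 1))
kappa = (p_o - p_e) / (1 - p_e)
print(round(kappa, 3))  # -> 0.615
```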
# Calculate Sparsity
```
print('volunteers_oracle sparsity: {:>40.2%}'.format(aux_functions.calculate_sparsity(volunteers_oracle)))
print('expert_oracle sparsity: {:>44.2%}'.format(aux_functions.calculate_sparsity(expert_oracle)))
print('volunteers_expert_union_oracle sparsity: {:>27.2%}'.format(aux_functions.calculate_sparsity(volunteers_expert_union_oracle)))
print('volunteers_expert_intersec_oracle sparsity: {:>24.2%}'.format(aux_functions.calculate_sparsity(volunteers_expert_intersec_oracle)))
```
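`aux_functions.calculate_sparsity` is project-specific; for a 0/1 trace matrix like these oracles it presumably reports the fraction of zero cells, which can be sketched as:

```python
import numpy as np

def sparsity(matrix):
    """Fraction of zero cells -- assumed to match what calculate_sparsity reports."""
    values = np.asarray(matrix)
    return 1.0 - np.count_nonzero(values) / values.size

toy_oracle = np.array([[1, 0, 0],
                       [0, 1, 0]])
print('{:.2%}'.format(sparsity(toy_oracle)))  # 4 of the 6 cells are zero -> 66.67%
```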
# Analysis of Amount of Affected Test Cases by Features
Analysis of the number of test cases directly affected by a bug report, after the traceability between bug reports and features was established in the empirical study. We analyse the number of affected test cases using four different matrices: expert answers only, volunteer answers only, and the union and intersection of the expert and volunteer answers.
```
def calculate_affected_tcs_amount(row, col):
    amount_aff_tcs = 0
    for f_id in row[col].split(" "):
        if f_id != "":
            amount_aff_tcs += len(testcases[testcases.Feature_ID == int(f_id)])
    return amount_aff_tcs
br_2_feature_matrix_final['Amount_Aff_TCs_Exp'] = br_2_feature_matrix_final.apply(lambda row : calculate_affected_tcs_amount(row, 'Features_IDs_exp_m'), axis=1)
br_2_feature_matrix_final['Amount_Aff_TCs_Vol'] = br_2_feature_matrix_final.apply(lambda row : calculate_affected_tcs_amount(row, 'Features_IDs_vol_m'), axis=1)
br_2_feature_matrix_final['Amount_Aff_TCs_Exp_Vol_Union'] = br_2_feature_matrix_final.apply(lambda row : calculate_affected_tcs_amount(row, 'Features_IDs_exp_vol_union_m'), axis=1)
br_2_feature_matrix_final['Amount_Aff_TCs_Exp_Vol_Intersec'] = br_2_feature_matrix_final.apply(lambda row : calculate_affected_tcs_amount(row, 'Features_IDs_exp_vol_intersec_m'), axis=1)
#br_2_feature_matrix_final.head(100)
```
# Percentage of Positive and Negative Links
```
cols = ['Amount_Aff_TCs_Exp','Amount_Aff_TCs_Vol','Amount_Aff_TCs_Exp_Vol_Union','Amount_Aff_TCs_Exp_Vol_Intersec']
df = pd.DataFrame(columns=cols,
index=['PosLinks','NegLinks','TotalLinks','PercPosLinks','PercNegLinks'])
for col in cols:
    df.at['PosLinks',col] = br_2_feature_matrix_final[col].sum()
    df.at['NegLinks',col] = len(br_2_feature_matrix_final) * len(testcases) - df.at['PosLinks',col]
    df.at['TotalLinks',col] = df.at['PosLinks',col] + df.at['NegLinks',col]
    df.at['PercPosLinks',col] = round(float(df.at['PosLinks',col]/(df.at['PosLinks',col]+df.at['NegLinks',col])), 2)
    df.at['PercNegLinks',col] = round(float(df.at['NegLinks',col]/(df.at['PosLinks',col]+df.at['NegLinks',col])), 2)
    df.at['TotalPerc',col] = round(df.at['PercPosLinks',col] + df.at['PercNegLinks',col])
df.T.head(10)
```
# Analysis of Non-Matching Answers
```
print(br_2_feature_matrix_final[(br_2_feature_matrix_final.Features_IDs_exp_m != br_2_feature_matrix_final.Features_IDs_vol_m) &
(br_2_feature_matrix_final.Features_IDs_exp_vol_intersec_m == "")].shape)
br_2_feature_matrix_final[(br_2_feature_matrix_final.Features_IDs_exp_m != br_2_feature_matrix_final.Features_IDs_vol_m)].head(90)
```
# TC_BR Oracle Production
In 53/93 observations the answers don't match, meaning 56.98% of the answers don't match completely. However, 17/53 (32.08%) of these answers do partially match the researcher's answers, since they agree on at least one of the features attributed to the bug report.
Here we must decide which oracle should be used to evaluate our IR-based models and how it should be produced. This decision has three options:
1. We take the **intersection** of the answers. So we put 1 in the cells of the BR_TC trace matrix where the answers of the volunteers and the researcher have some degree of agreement;
2. We take the **union** of the answers. So we put 1 in the cell of the BR_TC trace matrix where the volunteers or the researcher answered that there is a link;
3. We make a **selection**, discarding the BR x Features relations on which there was no agreement and keeping only those on which the volunteers and the researcher agreed; this would reduce the size of the oracle.
# Top Value For IR-based Models
We can observe that some bug reports are linked to up to 31 test cases. So we must set up our IR-based models to return up to **31** test cases. In percentage terms, this represents 31/207 * 100 ≈ 15%, which means the TOP value must be **15**.
# Distribution Amount BRs by TCs
```
brs_expert = expert_oracle.index
tcs_amount_expert = expert_oracle.apply(lambda row : sum(row.values), axis=1)
brs_vol = volunteers_oracle.index
tcs_amount_vol = volunteers_oracle.apply(lambda row : sum(row.values), axis=1)
brs_exp_vol_union = volunteers_expert_union_oracle.index
tcs_amount_exp_vol_union = volunteers_expert_union_oracle.apply(lambda row : sum(row.values), axis=1)
brs_exp_vol_intersec = volunteers_expert_intersec_oracle.index
tcs_amount_exp_vol_intersec = volunteers_expert_intersec_oracle.apply(lambda row : sum(row.values), axis=1)
f, (ax1,ax2,ax3,ax4) = plt.subplots(1, 4, figsize=(20,5))
ax1.set_title('Exp-Only')
ax1.plot(brs_expert, tcs_amount_expert, color='orange')
ax1.set(xlabel='bug reports', ylabel='test cases amount')
ax1.set_ylim([0, 50])
ax1.xaxis.set_ticks([])
ax2.set_title('Vol-Only')
ax2.plot(brs_vol, tcs_amount_vol, color='blue')
ax2.set(xlabel='bug reports', ylabel='test cases amount')
ax2.set_ylim([0, 50])
ax2.xaxis.set_ticks([])
ax3.set_title('Exp-Vol-Union')
ax3.plot(brs_exp_vol_union, tcs_amount_exp_vol_union, color='red')
ax3.set(xlabel='bug reports', ylabel='test cases amount')
ax3.set_ylim([0, 50])
ax3.xaxis.set_ticks([])
ax4.set_title('Exp-Vol-Intersection')
ax4.plot(brs_exp_vol_intersec, tcs_amount_exp_vol_intersec, color='green')
ax4.set(xlabel='bug reports', ylabel='test cases amount')
ax4.set_ylim([0, 50])
ax4.xaxis.set_ticks([])
print("BRs x TCs Mean Amount - Exp Oracle: {:2.2}".format(np.mean(tcs_amount_expert)))
print("BRs x TCs Mean Amount - Vol Oracle: {:2.2}".format(np.mean(tcs_amount_vol)))
print("BRs x TCs Mean Amount - Exp-Vol Union Oracle: {:2.2}".format(np.mean(tcs_amount_exp_vol_union)))
print("BRs x TCs Mean Amount - Exp-Vol Intersec Oracle: {:2.2}".format(np.mean(tcs_amount_exp_vol_intersec)))
```
# Analysis of Histograms of Test Cases Amounts
```
f2, (ax4,ax5,ax6,ax7) = plt.subplots(1, 4, figsize=(20,5))
ax4.set_title('Hist TCs Amount - Exp Orc')
ax4.hist(tcs_amount_expert, color='orange')
ax5.set_title('Hist TCs Amount - Vol Orc')
ax5.hist(tcs_amount_vol, color='blue')
ax6.set_title('Hist TCs Amount - Exp-Vol Orc Union')
ax6.hist(tcs_amount_exp_vol_union, color='red')
ax7.set_title('Hist TCs Amount - Exp-Vol Orc Intersec')
ax7.hist(tcs_amount_exp_vol_intersec, color='green')
```
_Lambda School Data Science, Unit 2_
# Sprint Challenge: Predict Steph Curry's shots 🏀
For your Sprint Challenge, you'll use a dataset with all Steph Curry's NBA field goal attempts. (Regular season and playoff games, from October 28, 2009, through June 5, 2019.)
You'll predict whether each shot was made, using information about the shot and the game. This is hard to predict! Try to get above 60% accuracy. The dataset was collected with the [nba_api](https://github.com/swar/nba_api) Python library.
```
%%capture
import sys
if 'google.colab' in sys.modules:
    # Install packages in Colab
    !pip install category_encoders==2.*
    !pip install pandas-profiling==2.*
# Read data
import pandas as pd
url = 'https://drive.google.com/uc?export=download&id=1fL7KPyxgGYfQDsuJoBWHIWwCAf-HTFpX'
df = pd.read_csv(url)
# Check data shape
assert df.shape == (13958, 20)
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.ensemble import RandomForestClassifier
import category_encoders as ce
from sklearn.feature_selection import f_regression, SelectKBest
from sklearn.impute import SimpleImputer
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
```
To demonstrate mastery on your Sprint Challenge, do all the required, numbered instructions in this notebook.
To earn a score of "3", also do all the stretch goals.
You are permitted and encouraged to do as much data exploration as you want.
**1. Begin with baselines for classification.** Your target to predict is `shot_made_flag`. What is your baseline accuracy, if you guessed the majority class for every prediction?
**2. Hold out your test set.** Use the 2018-19 season to test. NBA seasons begin in October and end in June. You'll know you've split the data correctly when your test set has 1,709 observations.
**3. Engineer new feature.** Engineer at least **1** new feature, from this list, or your own idea.
- **Homecourt Advantage**: Is the home team (`htm`) the Golden State Warriors (`GSW`) ?
- **Opponent**: Who is the other team playing the Golden State Warriors?
- **Seconds remaining in the period**: Combine minutes remaining with seconds remaining, to get the total number of seconds remaining in the period.
- **Seconds remaining in the game**: Combine period, and seconds remaining in the period, to get the total number of seconds remaining in the game. A basketball game has 4 periods, each 12 minutes long.
- **Made previous shot**: Was Steph Curry's previous shot successful?
**4. Decide how to validate** your model. Choose one of the following options. Any of these options are good. You are not graded on which you choose.
- **Train/validate/test split: train on the 2009-10 season through 2016-17 season, validate with the 2017-18 season.** You'll know you've split the data correctly when your train set has 11,081 observations, and your validation set has 1,168 observations.
- **Train/validate/test split: random 80/20%** train/validate split.
- **Cross-validation** with independent test set. You may use any scikit-learn cross-validation method.
**5.** Use a scikit-learn **pipeline** to **encode categoricals** and fit a **Decision Tree** or **Random Forest** model.
**6.** Get your model's **validation accuracy.** (Multiple times if you try multiple iterations.)
**7.** Get your model's **test accuracy.** (One time, at the end.)
**8.** Given a **confusion matrix** for a hypothetical binary classification model, **calculate accuracy, precision, and recall.**
### Stretch Goals
- Engineer 4+ new features total, either from the list above, or your own ideas.
- Make 2+ visualizations to explore relationships between features and target.
- Optimize 3+ hyperparameters by trying 10+ "candidates" (possible combinations of hyperparameters). You can use `RandomizedSearchCV` or do it manually.
- Get and plot your model's feature importances.
## 1. Begin with baselines for classification.
>Your target to predict is `shot_made_flag`. What would your baseline accuracy be, if you guessed the majority class for every prediction?
```
df.head()
baseline = df['shot_made_flag'].value_counts(normalize=True)
print ("Majority Class: ",baseline[0])
df['game_date'].max()
```
## 2. Hold out your test set.
>Use the 2018-19 season to test. NBA seasons begin in October and end in June. You'll know you've split the data correctly when your test set has 1,709 observations.
```
# New feature function for creating new column of only half court shots and beyond:
def half_court(row):
    if row['shot_distance'] >= 47:
        return 1
    else:
        return 0
df['game_date'] = pd.to_datetime(df['game_date'], infer_datetime_format=True)
# 2018-19 season: October 2018 through June 2019
test = df[(df['game_date'] >= '2018-10-01') & (df['game_date'] < '2019-07-01')].copy()
test['past_half_court'] = test.apply(half_court, axis=1)
target = 'shot_made_flag'
y_true = test[target]
test = test.drop(columns=[target])
test.shape
```
## 3. Engineer new feature.
>Engineer at least **1** new feature, from this list, or your own idea.
>
>- **Homecourt Advantage**: Is the home team (`htm`) the Golden State Warriors (`GSW`) ?
>- **Opponent**: Who is the other team playing the Golden State Warriors?
>- **Seconds remaining in the period**: Combine minutes remaining with seconds remaining, to get the total number of seconds remaining in the period.
>- **Seconds remaining in the game**: Combine period, and seconds remaining in the period, to get the total number of seconds remaining in the game. A basketball game has 4 periods, each 12 minutes long.
>- **Made previous shot**: Was Steph Curry's previous shot successful?
```
# Creating a new feature that returns a 1 if the shot is beyond half court - and returns a 0 if not:
df['shot_distance'].max()
df['past_half_court'] = df.apply(half_court, axis=1)
df['past_half_court'].value_counts()
```
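Other features from the list above can be engineered in a similar way. For example, a sketch of the two "seconds remaining" features on a toy frame — the column names (`period`, `minutes_remaining`, `seconds_remaining`) are assumptions, so check `df.columns` before reusing this:

```python
import pandas as pd

# Hypothetical column names -- verify against the real dataset before reusing.
demo = pd.DataFrame({'period': [1, 4],
                     'minutes_remaining': [11, 0],
                     'seconds_remaining': [25, 3]})

# Seconds remaining in the current period
demo['period_seconds_left'] = demo['minutes_remaining'] * 60 + demo['seconds_remaining']
# Seconds remaining in a regulation game (4 periods x 12 minutes)
demo['game_seconds_left'] = (4 - demo['period']) * 12 * 60 + demo['period_seconds_left']
print(demo[['period_seconds_left', 'game_seconds_left']])
```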
## **4. Decide how to validate** your model.
>Choose one of the following options. Any of these options are good. You are not graded on which you choose.
>
>- **Train/validate/test split: train on the 2009-10 season through 2016-17 season, validate with the 2017-18 season.** You'll know you've split the data correctly when your train set has 11,081 observations, and your validation set has 1,168 observations.
>- **Train/validate/test split: random 80/20%** train/validate split.
>- **Cross-validation** with independent test set. You may use any scikit-learn cross-validation method.
```
train = df[(df['game_date'] >= '2009-10-01') & (df['game_date'] < '2017-07-01')]
val = df[(df['game_date'] >= '2017-10-01') & (df['game_date'] < '2018-07-01')]
print ("Train Shape:", train.shape, "Val Shape:", val.shape)
target = 'shot_made_flag'
train_features = train.drop(columns=[target, 'game_id', 'game_event_id', 'player_name'],)
numeric_features = train_features.select_dtypes(include='number').columns.tolist()
cardinality = train_features.select_dtypes(exclude='number').nunique()
categorical_features = cardinality[cardinality <= 10].index.tolist()
features = numeric_features + categorical_features
print(features)
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]
```
## 5. Use a scikit-learn pipeline to encode categoricals and fit a Decision Tree or Random Forest model.
```
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(min_samples_leaf = 40, random_state=42, n_jobs=-1)
)
pipeline.fit(X_train,y_train)
```
## 6. Get your model's validation accuracy
> (Multiple times if you try multiple iterations.)
```
print('Validation Accuracy:', pipeline.score(X_val, y_val))
```
## 7. Get your model's test accuracy
> (One time, at the end.)
```
print('Test Accuracy:', pipeline.score(X_test, y_true))
```
## 8. Given a confusion matrix, calculate accuracy, precision, and recall.
Imagine this is the confusion matrix for a binary classification model. Use the confusion matrix to calculate the model's accuracy, precision, and recall.
<table>
<tr>
<td colspan="2" rowspan="2"></td>
<td colspan="2">Predicted</td>
</tr>
<tr>
<td>Negative</td>
<td>Positive</td>
</tr>
<tr>
<td rowspan="2">Actual</td>
<td>Negative</td>
<td style="border: solid">85</td>
<td style="border: solid">58</td>
</tr>
<tr>
<td>Positive</td>
<td style="border: solid">8</td>
<td style="border: solid"> 36</td>
</tr>
</table>
### Calculate accuracy
```
correct_pred = 85 + 36
total_pred = 85 + 58 + 8 + 36
accuracy = correct_pred / total_pred
print(accuracy)
```
### Calculate precision
```
correct_pos_pred = 36
total_pos_pred = 58 + 36
positive_precision = correct_pos_pred / total_pos_pred
print(positive_precision)
```
### Calculate recall
```
actual_pos = 8 + 36
recall = correct_pos_pred / actual_pos
print(recall)
```
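As a sanity check, the three calculations above can be wrapped into one function that works directly on the four confusion-matrix cells (note the total positive predictions are 58 + 36 = 94):

```python
def metrics_from_confusion(tn, fp, fn, tp):
    """Accuracy, precision and recall straight from the four confusion-matrix cells."""
    return {'accuracy': (tp + tn) / (tp + tn + fp + fn),
            'precision': tp / (tp + fp),
            'recall': tp / (tp + fn)}

m = metrics_from_confusion(tn=85, fp=58, fn=8, tp=36)
print({k: round(v, 3) for k, v in m.items()})
# -> {'accuracy': 0.647, 'precision': 0.383, 'recall': 0.818}
```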
```
import numpy as np
import pandas as pd
import tracktor as tr
import cv2
import sys
import time
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist
```
## Global parameters
This cell (below) enlists user-defined parameters
```
# colours is a vector of BGR values which are used to identify individuals in the video
# since we only have one individual, the program will only use the first element from this array i.e. (0,0,255) - red
# number of elements in colours should be greater than n_inds (THIS IS NECESSARY FOR VISUALISATION ONLY)
n_inds = 1
colours = [(0,0,255),(0,255,255),(255,0,255),(255,255,255),(255,255,0),(255,0,0),(0,255,0),(0,0,0)]
# this is the block_size and offset used for adaptive thresholding (block_size should always be odd)
# these values are critical for tracking performance
block_size = 81
offset = 30
# minimum area and maximum area occupied by the animal in number of pixels
# this parameter is used to get rid of other objects in view that might be hard to threshold out but are differently sized
min_area = 1000
max_area = 10000
# the scaling parameter can be used to speed up tracking if video resolution is too high (use value 0-1)
scaling = 0.5
# mot determines whether the tracker is being used in noisy conditions to track a single object or for multi-object
# using this will enable k-means clustering to force n_inds number of animals
mot = False
# name of source video and paths
video = 'Cockroach'
input_vidpath = '/mnt/ssd1/Documents/Vivek/tracktor/videos/toxtrac_videos/' + video + '.avi'
output_vidpath = '/mnt/ssd1/Documents/Vivek/tracktor/output/toxtrac_videos/' + video + '.mp4'
output_filepath = '/mnt/ssd1/Documents/Vivek/tracktor/output/toxtrac_videos/' + video + '.csv'
codec = 'DIVX' # try other codecs if the default doesn't work ('DIVX', 'avc1', 'XVID') note: this list is non-exhaustive
## Start time
start = time.time()
## Open video
cap = cv2.VideoCapture(input_vidpath)
if not cap.isOpened():
    sys.exit('Video file cannot be read! Please check input_vidpath to ensure it is correctly pointing to the video file')
## Video writer class to output video with contour and centroid of tracked object(s)
# make sure the frame size matches size of array 'final'
fourcc = cv2.VideoWriter_fourcc(*codec)
output_framesize = (int(cap.read()[1].shape[1]*scaling),int(cap.read()[1].shape[0]*scaling))
out = cv2.VideoWriter(filename = output_vidpath, fourcc = fourcc, fps = 30.0, frameSize = output_framesize, isColor = True)
## Individual location(s) measured in the last and current step
meas_last = list(np.zeros((n_inds,2)))
meas_now = list(np.zeros((n_inds,2)))
last = 0
df = []
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    this = cap.get(1)
    if ret:
        frame = cv2.resize(frame, None, fx = scaling, fy = scaling, interpolation = cv2.INTER_LINEAR)
        thresh = tr.colour_to_thresh(frame, block_size, offset)
        final, contours, meas_last, meas_now = tr.detect_and_draw_contours(frame, thresh, meas_last, meas_now, min_area, max_area)
        row_ind, col_ind = tr.hungarian_algorithm(meas_last, meas_now)
        final, meas_now, df = tr.reorder_and_draw(final, colours, n_inds, col_ind, meas_now, df, mot, this)
        # Create output dataframe
        for i in range(n_inds):
            df.append([this, meas_now[i][0], meas_now[i][1]])
        # Display the resulting frame
        out.write(final)
        cv2.imshow('frame', final)
        # Press Esc to stop tracking early
        if cv2.waitKey(1) == 27:
            break
    # Stop when the frame counter no longer advances (end of video)
    if last == this:
        break
    last = this
## Write positions to file
df = pd.DataFrame(np.matrix(df), columns = ['frame','pos_x','pos_y'])
df.to_csv(output_filepath, sep=',')
## When everything done, release the capture
cap.release()
out.release()
cv2.destroyAllWindows()
cv2.waitKey(1)
## End time and duration
end = time.time()
duration = end - start
print("--- %s seconds ---" %duration)
```
## Plot tracks
The code below allows you to see individual tracks. By counting the number of jumps in the tracks, one can identify the number of false detections.
```
df = pd.read_csv(output_filepath)
df.head()
import matplotlib.pyplot as plt
plt.figure(figsize=(5,5))
plt.scatter(df['pos_x'], df['pos_y'], c=df['frame'])
plt.xlabel('pos_x')
plt.ylabel('pos_y')
plt.show()
```
## Identifying true/false detections
Here, we use individual movement speeds to identify false detections. All frames where individuals move faster than their body length are considered false detections.
NOTE: The method used here underestimates false detections.
```
dx = df['pos_x'] - df['pos_x'].shift(n_inds)
dy = df['pos_y'] - df['pos_y'].shift(n_inds)
df['speed'] = np.sqrt(dx**2 + dy**2)
df.head()
thresh = 243.1
```
True detection rate
```
print(1-len(np.where(df['speed'] > thresh)[0]) / max(df['frame']))
```
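The speed-based criterion can be illustrated on a toy track with one obvious tracking jump — note that a single out-and-back jump produces two above-threshold speeds:

```python
import numpy as np
import pandas as pd

# Toy track (made-up positions): the detection at frame 3 is a false one.
toy = pd.DataFrame({'frame': [1, 2, 3, 4],
                    'pos_x': [0.0, 5.0, 400.0, 15.0],
                    'pos_y': [0.0, 5.0, 300.0, 15.0]})
# Per-frame displacement, equivalent to the shift-based speed above for n_inds = 1
toy['speed'] = np.sqrt(toy['pos_x'].diff()**2 + toy['pos_y'].diff()**2)

thresh = 243.1  # body-length threshold in pixels
false_detections = (toy['speed'] > thresh).sum()  # the jump out and back -> 2 frames flagged
true_rate = 1 - false_detections / toy['frame'].max()
print(true_rate)  # -> 0.5
```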
# Analyse - Predict
Functions are important in reducing the replication of code as well as giving the user the ability to get an output for varying inputs. The functions you will write all use Eskom data/variables.
## Instructions to Students
- **Do not add or remove cells in this notebook. Do not edit or remove the `### START FUNCTION` or `### END FUNCTION` comments. Do not add any code outside of the functions you are required to edit. Doing any of this will lead to a mark of 0%!**
- Answer the questions according to the specifications provided.
- Use the given cell in each question to see if your function matches the expected outputs.
- Do not hard-code answers to the questions.
- The use of Stack Overflow, Google, and other online tools is permitted. However, copying fellow students' code is not permissible and is considered a breach of the Honour code. Doing this will result in a mark of 0%.
- Good luck, and may the force be with you!
## Imports
```
import pandas as pd
import numpy as np
```
## Data Loading and Preprocessing
### Electricification by province (EBP) data
```
ebp_url = 'https://raw.githubusercontent.com/Explore-AI/Public-Data/master/Data/electrification_by_province.csv'
ebp_df = pd.read_csv(ebp_url)
for col, row in ebp_df.iloc[:,1:].iteritems():
    ebp_df[col] = ebp_df[col].str.replace(',','').astype(int)
ebp_df.head()
```
### Twitter data
```
twitter_url = 'https://raw.githubusercontent.com/Explore-AI/Public-Data/master/Data/twitter_nov_2019.csv'
twitter_df = pd.read_csv(twitter_url)
twitter_df.head()
```
## Important Variables (Do not edit these!)
```
# gauteng ebp data as a list
gauteng = ebp_df['Gauteng'].astype(float).to_list()
# dates for twitter tweets
dates = twitter_df['Date'].to_list()
# dictionary mapping official municipality twitter handles to the municipality name
mun_dict = {
'@CityofCTAlerts' : 'Cape Town',
'@CityPowerJhb' : 'Johannesburg',
'@eThekwiniM' : 'eThekwini' ,
'@EMMInfo' : 'Ekurhuleni',
'@centlecutility' : 'Mangaung',
'@NMBmunicipality' : 'Nelson Mandela Bay',
'@CityTshwane' : 'Tshwane'
}
# dictionary of english stopwords
stop_words_dict = {
'stopwords':[
'where', 'done', 'if', 'before', 'll', 'very', 'keep', 'something', 'nothing', 'thereupon',
'may', 'why', '’s', 'therefore', 'you', 'with', 'towards', 'make', 'really', 'few', 'former',
'during', 'mine', 'do', 'would', 'of', 'off', 'six', 'yourself', 'becoming', 'through',
'seeming', 'hence', 'us', 'anywhere', 'regarding', 'whole', 'down', 'seem', 'whereas', 'to',
'their', 'various', 'thereafter', '‘d', 'above', 'put', 'sometime', 'moreover', 'whoever', 'although',
'at', 'four', 'each', 'among', 'whatever', 'any', 'anyhow', 'herein', 'become', 'last', 'between', 'still',
'was', 'almost', 'twelve', 'used', 'who', 'go', 'not', 'enough', 'well', '’ve', 'might', 'see', 'whose',
'everywhere', 'yourselves', 'across', 'myself', 'further', 'did', 'then', 'is', 'except', 'up', 'take',
'became', 'however', 'many', 'thence', 'onto', '‘m', 'my', 'own', 'must', 'wherein', 'elsewhere', 'behind',
'becomes', 'alone', 'due', 'being', 'neither', 'a', 'over', 'beside', 'fifteen', 'meanwhile', 'upon', 'next',
'forty', 'what', 'less', 'and', 'please', 'toward', 'about', 'below', 'hereafter', 'whether', 'yet', 'nor',
'against', 'whereupon', 'top', 'first', 'three', 'show', 'per', 'five', 'two', 'ourselves', 'whenever',
'get', 'thereby', 'noone', 'had', 'now', 'everyone', 'everything', 'nowhere', 'ca', 'though', 'least',
'so', 'both', 'otherwise', 'whereby', 'unless', 'somewhere', 'give', 'formerly', '’d', 'under',
'while', 'empty', 'doing', 'besides', 'thus', 'this', 'anyone', 'its', 'after', 'bottom', 'call',
'n’t', 'name', 'even', 'eleven', 'by', 'from', 'when', 'or', 'anyway', 'how', 'the', 'all',
'much', 'another', 'since', 'hundred', 'serious', '‘ve', 'ever', 'out', 'full', 'themselves',
'been', 'in', "'d", 'wherever', 'part', 'someone', 'therein', 'can', 'seemed', 'hereby', 'others',
"'s", "'re", 'most', 'one', "n't", 'into', 'some', 'will', 'these', 'twenty', 'here', 'as', 'nobody',
'also', 'along', 'than', 'anything', 'he', 'there', 'does', 'we', '’ll', 'latterly', 'are', 'ten',
'hers', 'should', 'they', '‘s', 'either', 'am', 'be', 'perhaps', '’re', 'only', 'namely', 'sixty',
'made', "'m", 'always', 'those', 'have', 'again', 'her', 'once', 'ours', 'herself', 'else', 'has', 'nine',
'more', 'sometimes', 'your', 'yours', 'that', 'around', 'his', 'indeed', 'mostly', 'cannot', '‘ll', 'too',
'seems', '’m', 'himself', 'latter', 'whither', 'amount', 'other', 'nevertheless', 'whom', 'for', 'somehow',
'beforehand', 'just', 'an', 'beyond', 'amongst', 'none', "'ve", 'say', 'via', 'but', 'often', 're', 'our',
'because', 'rather', 'using', 'without', 'throughout', 'on', 'she', 'never', 'eight', 'no', 'hereupon',
'them', 'whereafter', 'quite', 'which', 'move', 'thru', 'until', 'afterwards', 'fifty', 'i', 'itself', 'n‘t',
'him', 'could', 'front', 'within', '‘re', 'back', 'such', 'already', 'several', 'side', 'whence', 'me',
'same', 'were', 'it', 'every', 'third', 'together'
]
}
```
## Function 1: Metric Dictionary
Write a function that calculates the mean, median, variance, standard deviation, minimum and maximum of a list of items. You can assume the given list contains only numerical entries, and you may use numpy functions to do this.
**Function Specifications:**
- Function should allow a list as input.
- It should return a `dict` with keys `'mean'`, `'median'`, `'std'`, `'var'`, `'min'`, and `'max'`, corresponding to the mean, median, standard deviation, variance, minimum and maximum of the input list, respectively.
- The standard deviation and variance values must be unbiased. **Hint:** use the `ddof` parameter in the corresponding numpy functions!
- All values in the returned `dict` should be rounded to 2 decimal places.
```
### START FUNCTION
def dictionary_of_metrics(items):
# your code here
return
### END FUNCTION
dictionary_of_metrics(gauteng)
```
_**Expected Output**_:
```python
dictionary_of_metrics(gauteng) == {'mean': 26244.42,
'median': 24403.5,
'var': 108160153.17,
'std': 10400.01,
'min': 8842.0,
'max': 39660.0}
```
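For readers checking their work, here is one possible sketch that satisfies the specifications above (not the only valid approach; the assignment expects you to write your own):

```python
import numpy as np

def dictionary_of_metrics(items):
    """One possible solution: rounded summary statistics for a numeric list.

    ddof=1 makes the variance and standard deviation unbiased, per the hint.
    """
    return {
        'mean': round(float(np.mean(items)), 2),
        'median': round(float(np.median(items)), 2),
        'var': round(float(np.var(items, ddof=1)), 2),
        'std': round(float(np.std(items, ddof=1)), 2),
        'min': round(float(np.min(items)), 2),
        'max': round(float(np.max(items)), 2),
    }
```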
## Function 2: Five Number Summary
Write a function which takes in a list of integers and returns a dictionary of the [five number summary](https://www.statisticshowto.datasciencecentral.com/how-to-find-a-five-number-summary-in-statistics/).
**Function Specifications:**
- The function should take a list as input.
- The function should return a `dict` with keys `'max'`, `'median'`, `'min'`, `'q1'`, and `'q3'` corresponding to the maximum, median, minimum, first quartile and third quartile, respectively. You may use numpy functions to aid in your calculations.
- All numerical values should be rounded to two decimal places.
```
### START FUNCTION
def five_num_summary(items):
# your code here
return
### END FUNCTION
five_num_summary(gauteng)
```
_**Expected Output:**_
```python
five_num_summary(gauteng) == {
'max': 39660.0,
'median': 24403.5,
'min': 8842.0,
'q1': 18653.0,
'q3': 36372.0
}
```
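One possible sketch using `np.percentile` (note that quartile values can differ slightly depending on the interpolation method; numpy's default is linear interpolation):

```python
import numpy as np

def five_num_summary(items):
    """One possible solution: five number summary rounded to 2 decimals."""
    q1, median, q3 = np.percentile(items, [25, 50, 75])
    return {
        'max': round(float(np.max(items)), 2),
        'median': round(float(median), 2),
        'min': round(float(np.min(items)), 2),
        'q1': round(float(q1), 2),
        'q3': round(float(q3), 2),
    }
```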
## Function 3: Date Parser
The `dates` variable (created at the top of this notebook) is a list of dates represented as strings. Each string contains the date in `'yyyy-mm-dd'` format, as well as the time in `hh:mm:ss` format. The first three entries in this variable are:
```python
dates[:3] == [
'2019-11-29 12:50:54',
'2019-11-29 12:46:53',
'2019-11-29 12:46:10'
]
```
Write a function that takes as input a list of these datetime strings and returns only the date in `'yyyy-mm-dd'` format.
**Function Specifications:**
- The function should take a list of strings as input.
- Each string in the input list is formatted as `'yyyy-mm-dd hh:mm:ss'`.
- The function should return a list of strings where each element in the returned list contains only the date in the `'yyyy-mm-dd'` format.
```
### START FUNCTION
def date_parser(dates):
# your code here
return
### END FUNCTION
date_parser(dates[:3])
```
_**Expected Output:**_
```python
date_parser(dates[:3]) == ['2019-11-29', '2019-11-29', '2019-11-29']
date_parser(dates[-3:]) == ['2019-11-20', '2019-11-20', '2019-11-20']
```
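A minimal sketch: since every string follows the `'yyyy-mm-dd hh:mm:ss'` shape, splitting on the space is enough:

```python
def date_parser(dates):
    """Keep only the 'yyyy-mm-dd' part of each 'yyyy-mm-dd hh:mm:ss' string."""
    return [d.split(' ')[0] for d in dates]
```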
## Function 4: Municipality & Hashtag Detector
Write a function which takes in a pandas dataframe and returns a modified dataframe that includes two new columns containing the municipality and hashtags of each tweet.
**Function Specifications:**
* Function should take a pandas `dataframe` as input.
* Extract the municipality from a tweet using the `mun_dict` dictionary defined at the top of this notebook, and insert the result into a new column named `'municipality'` in the same dataframe.
* Use the entry `np.nan` when a municipality is not found.
* Extract a list of hashtags from a tweet into a new column named `'hashtags'` in the same dataframe.
* Use the entry `np.nan` when no hashtags are found.
**Hint:** you will need the `mun_dict` variable defined at the top of this notebook.
```
### START FUNCTION
def extract_municipality_hashtags(df):
# your code here
return
### END FUNCTION
extract_municipality_hashtags(twitter_df.copy())
```
_**Expected Outputs:**_
```python
extract_municipality_hashtags(twitter_df.copy())
```
> <table class="dataframe" border="1">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Tweets</th>
<th>Date</th>
<th>municipality</th>
<th>hashtags</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>@BongaDlulane Please send an email to mediades...</td>
<td>2019-11-29 12:50:54</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<th>1</th>
<td>@saucy_mamiie Pls log a call on 0860037566</td>
<td>2019-11-29 12:46:53</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<th>2</th>
<td>@BongaDlulane Query escalated to media desk.</td>
<td>2019-11-29 12:46:10</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<th>3</th>
<td>Before leaving the office this afternoon, head...</td>
<td>2019-11-29 12:33:36</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<th>4</th>
<td>#ESKOMFREESTATE #MEDIASTATEMENT : ESKOM SUSPEN...</td>
<td>2019-11-29 12:17:43</td>
<td>NaN</td>
<td>[#eskomfreestate, #mediastatement]</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>195</th>
<td>Eskom's Visitors Centres’ facilities include i...</td>
<td>2019-11-20 10:29:07</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<th>196</th>
<td>#Eskom connected 400 houses and in the process...</td>
<td>2019-11-20 10:25:20</td>
<td>NaN</td>
<td>[#eskom, #eskom, #poweringyourworld]</td>
</tr>
<tr>
<th>197</th>
<td>@ArthurGodbeer Is the power restored as yet?</td>
<td>2019-11-20 10:07:59</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<th>198</th>
<td>@MuthambiPaulina @SABCNewsOnline @IOL @eNCA @e...</td>
<td>2019-11-20 10:07:41</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<th>199</th>
<td>RT @GP_DHS: The @GautengProvince made a commit...</td>
<td>2019-11-20 10:00:09</td>
<td>NaN</td>
<td>NaN</td>
</tr>
</tbody>
</table>
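One possible sketch follows. The two-entry `mun_dict` here is only a stand-in for the full dictionary defined at the top of the notebook:

```python
import numpy as np
import pandas as pd

# Stand-in for the full mun_dict defined at the top of the notebook.
mun_dict = {'@NMBmunicipality': 'Nelson Mandela Bay', '@CityTshwane': 'Tshwane'}

def extract_municipality_hashtags(df):
    """Add 'municipality' and 'hashtags' columns derived from 'Tweets'."""
    def municipality(tweet):
        for handle, name in mun_dict.items():
            if handle in tweet:
                return name
        return np.nan  # no known municipality handle found

    def hashtags(tweet):
        tags = [word.lower() for word in tweet.split() if word.startswith('#')]
        return tags if tags else np.nan  # np.nan when no hashtags found

    df['municipality'] = df['Tweets'].apply(municipality)
    df['hashtags'] = df['Tweets'].apply(hashtags)
    return df
```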
## Function 5: Number of Tweets per Day
Write a function which calculates the number of tweets that were posted per day.
**Function Specifications:**
- It should take a pandas dataframe as input.
- It should return a new dataframe, grouped by day, with the number of tweets for that day.
- The index of the new dataframe should be named `Date`, and the column of the new dataframe should be `'Tweets'`, corresponding to the date and number of tweets, respectively.
- The date should be formatted as `yyyy-mm-dd`, and should be a datetime object. **Hint:** look up `pd.to_datetime` to see how to do this.
```
### START FUNCTION
def number_of_tweets_per_day(df):
# your code here
return
### END FUNCTION
number_of_tweets_per_day(twitter_df.copy())
```
_**Expected Output:**_
```python
number_of_tweets_per_day(twitter_df.copy())
```
> <table class="dataframe" border="1">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Tweets</th>
</tr>
<tr>
<th>Date</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>2019-11-20</th>
<td>18</td>
</tr>
<tr>
<th>2019-11-21</th>
<td>11</td>
</tr>
<tr>
<th>2019-11-22</th>
<td>25</td>
</tr>
<tr>
<th>2019-11-23</th>
<td>19</td>
</tr>
<tr>
<th>2019-11-24</th>
<td>14</td>
</tr>
<tr>
<th>2019-11-25</th>
<td>20</td>
</tr>
<tr>
<th>2019-11-26</th>
<td>32</td>
</tr>
<tr>
<th>2019-11-27</th>
<td>13</td>
</tr>
<tr>
<th>2019-11-28</th>
<td>32</td>
</tr>
<tr>
<th>2019-11-29</th>
<td>16</td>
</tr>
</tbody>
</table>
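A possible sketch using `pd.to_datetime` and `groupby`:

```python
import pandas as pd

def number_of_tweets_per_day(df):
    """Count tweets per calendar day; the index is a datetime named 'Date'."""
    df['Date'] = pd.to_datetime(df['Date']).dt.date
    counts = df.groupby('Date')[['Tweets']].count()
    counts.index = pd.to_datetime(counts.index)  # back to datetime objects
    counts.index.name = 'Date'
    return counts
```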
## Function 6: Word Splitter
Write a function which splits the sentences in a dataframe's column into a list of the separate words. The created lists should be placed in a column named `'Split Tweets'` in the original dataframe. This is also known as [tokenization](https://www.geeksforgeeks.org/nlp-how-tokenizing-text-sentence-words-works/).
**Function Specifications:**
- It should take a pandas dataframe as an input.
- The dataframe should contain a column, named `'Tweets'`.
- The function should split the sentences in the `'Tweets'` column into a list of separate words, and place the result into a new column named `'Split Tweets'`. The resulting words must all be lowercase!
- The function should modify the input dataframe directly.
- The function should return the modified dataframe.
```
### START FUNCTION
def word_splitter(df):
# your code here
return
### END FUNCTION
word_splitter(twitter_df.copy())
```
_**Expected Output**_:
```python
word_splitter(twitter_df.copy())
```
> <table class="dataframe" border="1">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Tweets</th>
<th>Date</th>
<th>Split Tweets</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>@BongaDlulane Please send an email to mediades...</td>
<td>2019-11-29 12:50:54</td>
<td>[@bongadlulane, please, send, an, email, to, m...</td>
</tr>
<tr>
<th>1</th>
<td>@saucy_mamiie Pls log a call on 0860037566</td>
<td>2019-11-29 12:46:53</td>
<td>[@saucy_mamiie, pls, log, a, call, on, 0860037...</td>
</tr>
<tr>
<th>2</th>
<td>@BongaDlulane Query escalated to media desk.</td>
<td>2019-11-29 12:46:10</td>
<td>[@bongadlulane, query, escalated, to, media, d...</td>
</tr>
<tr>
<th>3</th>
<td>Before leaving the office this afternoon, head...</td>
<td>2019-11-29 12:33:36</td>
<td>[before, leaving, the, office, this, afternoon...</td>
</tr>
<tr>
<th>4</th>
<td>#ESKOMFREESTATE #MEDIASTATEMENT : ESKOM SUSPEN...</td>
<td>2019-11-29 12:17:43</td>
<td>[#eskomfreestate, #mediastatement, :, eskom, s...</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>195</th>
<td>Eskom's Visitors Centres’ facilities include i...</td>
<td>2019-11-20 10:29:07</td>
<td>[eskom's, visitors, centres’, facilities, incl...</td>
</tr>
<tr>
<th>196</th>
<td>#Eskom connected 400 houses and in the process...</td>
<td>2019-11-20 10:25:20</td>
<td>[#eskom, connected, 400, houses, and, in, the,...</td>
</tr>
<tr>
<th>197</th>
<td>@ArthurGodbeer Is the power restored as yet?</td>
<td>2019-11-20 10:07:59</td>
<td>[@arthurgodbeer, is, the, power, restored, as,...</td>
</tr>
<tr>
<th>198</th>
<td>@MuthambiPaulina @SABCNewsOnline @IOL @eNCA @e...</td>
<td>2019-11-20 10:07:41</td>
<td>[@muthambipaulina, @sabcnewsonline, @iol, @enc...</td>
</tr>
<tr>
<th>199</th>
<td>RT @GP_DHS: The @GautengProvince made a commit...</td>
<td>2019-11-20 10:00:09</td>
<td>[rt, @gp_dhs:, the, @gautengprovince, made, a,...</td>
</tr>
</tbody>
</table>
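A possible one-line sketch using pandas string methods (lowercase first, then split on whitespace):

```python
import pandas as pd

def word_splitter(df):
    """Tokenise each tweet into a list of lowercase words."""
    df['Split Tweets'] = df['Tweets'].str.lower().str.split()
    return df
```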
## Function 7: Stop Words
Write a function which removes English stop words from a tweet.
**Function Specifications:**
- It should take a pandas dataframe as input.
- Should tokenise the sentences according to the definition in function 6. Note that function 6 **cannot be called within this function**.
- Should remove all stop words in the tokenised list. The stopwords are defined in the `stop_words_dict` variable defined at the top of this notebook.
- The resulting tokenised list should be placed in a column named `"Without Stop Words"`.
- The function should modify the input dataframe.
- The function should return the modified dataframe.
```
### START FUNCTION
def stop_words_remover(df):
# your code here
return
### END FUNCTION
stop_words_remover(twitter_df.copy())
```
_**Expected Output**_:
Specific rows:
```python
stop_words_remover(twitter_df.copy()).loc[0, "Without Stop Words"] == ['@bongadlulane', 'send', 'email', 'mediadesk@eskom.co.za']
stop_words_remover(twitter_df.copy()).loc[100, "Without Stop Words"] == ['#eskomnorthwest', '#mediastatement', ':', 'notice', 'supply', 'interruption', 'lichtenburg', 'area', 'https://t.co/7hfwvxllit']
```
Whole table:
```python
stop_words_remover(twitter_df.copy())
```
> <table class="dataframe" border="1">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Tweets</th>
<th>Date</th>
<th>Without Stop Words</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>@BongaDlulane Please send an email to mediades...</td>
<td>2019-11-29 12:50:54</td>
<td>[@bongadlulane, send, email, mediadesk@eskom.c...</td>
</tr>
<tr>
<th>1</th>
<td>@saucy_mamiie Pls log a call on 0860037566</td>
<td>2019-11-29 12:46:53</td>
<td>[@saucy_mamiie, pls, log, 0860037566]</td>
</tr>
<tr>
<th>2</th>
<td>@BongaDlulane Query escalated to media desk.</td>
<td>2019-11-29 12:46:10</td>
<td>[@bongadlulane, query, escalated, media, desk.]</td>
</tr>
<tr>
<th>3</th>
<td>Before leaving the office this afternoon, head...</td>
<td>2019-11-29 12:33:36</td>
<td>[leaving, office, afternoon,, heading, weekend...</td>
</tr>
<tr>
<th>4</th>
<td>#ESKOMFREESTATE #MEDIASTATEMENT : ESKOM SUSPEN...</td>
<td>2019-11-29 12:17:43</td>
<td>[#eskomfreestate, #mediastatement, :, eskom, s...</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>195</th>
<td>Eskom's Visitors Centres’ facilities include i...</td>
<td>2019-11-20 10:29:07</td>
<td>[eskom's, visitors, centres’, facilities, incl...</td>
</tr>
<tr>
<th>196</th>
<td>#Eskom connected 400 houses and in the process...</td>
<td>2019-11-20 10:25:20</td>
<td>[#eskom, connected, 400, houses, process, conn...</td>
</tr>
<tr>
<th>197</th>
<td>@ArthurGodbeer Is the power restored as yet?</td>
<td>2019-11-20 10:07:59</td>
<td>[@arthurgodbeer, power, restored, yet?]</td>
</tr>
<tr>
<th>198</th>
<td>@MuthambiPaulina @SABCNewsOnline @IOL @eNCA @e...</td>
<td>2019-11-20 10:07:41</td>
<td>[@muthambipaulina, @sabcnewsonline, @iol, @enc...</td>
</tr>
<tr>
<th>199</th>
<td>RT @GP_DHS: The @GautengProvince made a commit...</td>
<td>2019-11-20 10:00:09</td>
<td>[rt, @gp_dhs:, @gautengprovince, commitment, e...</td>
</tr>
</tbody>
</table>
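One possible sketch follows. The short `stop_words_dict` here is only a stand-in for the full dictionary defined at the top of the notebook; note that the tokenisation is repeated inline rather than calling function 6, per the specification:

```python
import pandas as pd

# Stand-in for the full stop_words_dict defined at the top of the notebook.
stop_words_dict = {'stopwords': ['an', 'to', 'the', 'please', 'is', 'as', 'yet']}

def stop_words_remover(df):
    """Tokenise tweets (as in function 6) and drop English stop words."""
    stops = set(stop_words_dict['stopwords'])
    df['Without Stop Words'] = df['Tweets'].str.lower().str.split().apply(
        lambda words: [w for w in words if w not in stops]
    )
    return df
```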
```
%load_ext memory_profiler
```
# Iterators, generators and itertools
```
for i in range(5): print(i, end=" ")
print()
for i in (0, 1, 2, 3, 4): print(i, end=" ")
print()
for i in {0, 1, 2, 3, 4}: print(i, end=" ")
list(map(type, (range(5), (0, 1, 2, 3, 4), {0, 1, 2, 3, 4})))
range(5).__sizeof__(), (0, 1, 2, 3, 4).__sizeof__(), {0, 1, 2, 3, 4}.__sizeof__()
mil = range(10**6)
mil.__sizeof__(), tuple(mil).__sizeof__()
```
## Collections, sequences and containers
<img src="https://miro.medium.com/max/1200/1*X3GmUh7dqAMJLM5KwGBKYQ.png">
## Iterable
```
hasattr(range(5), "__iter__"), hasattr(tuple(), "__iter__"), hasattr(set(), "__iter__")
```
```python
iterable.__iter__() -> iterator
```
## Iterator protocol
- `__iter__` - return the iterator object itself
- `__next__` - return the next item from the iterator
Return the next item from the container. If there are no further items, raise the `StopIteration` exception. Once an iterator’s `__next__` method raises `StopIteration`, it must continue to do so on subsequent calls.
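A minimal iterator that follows this contract — in particular, once it raises `StopIteration` it keeps doing so on every later call:

```python
class Countdown:
    """Iterator that yields n, n-1, ..., 1 and then stays exhausted."""

    def __init__(self, n):
        self._n = n

    def __iter__(self):
        # an iterator returns itself
        return self

    def __next__(self):
        if self._n <= 0:
            raise StopIteration  # keeps raising on every subsequent call
        self._n -= 1
        return self._n + 1
```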
```
for i in range(5):
print(i, end=" ")
print()
iterator = iter(range(5))
while True:
try:
i = next(iterator)
print(i, end=" ")
except StopIteration:
break
type(range(5)), type(iterator)
from collections import namedtuple
from typing import Iterator
Page = namedtuple("Page", ["text", "number"])
class Book:
def __init__(self) -> None:
self.pages = []
def add_page(self, text: str) -> None:
self.pages.append(
Page(text, number=len(self.pages) + 1)
)
def __iter__(self) -> Iterator[Page]:
return BookIter(self)
class BookIter:
def __init__(self, book: Book) -> None:
self.pages = book.pages
self._cursor = -1
def __iter__(self) -> "BookIter":
return self
def __next__(self) -> Page:
self._cursor += 1
if len(self.pages) > self._cursor:
return self.pages[self._cursor]
raise StopIteration
book = Book()
for i in range(1, 5):
book.add_page(f"page_{i}")
for page in book:
print(page)
type(book), type(iter(book))
```
## Why do we need BookIter?
```
class LazyBook(Book):
def __iter__(self) -> Iterator[Page]:
return iter(self.pages)
lazy_book = LazyBook()
for i in range(1, 5):
lazy_book.add_page(f"page_{i}")
for page in lazy_book:
print(page)
type(lazy_book), type(iter(lazy_book))
class PurchasableBook(Book):
def __init__(self, purchased: bool = False) -> None:
self.purchased = purchased
super().__init__()
def __iter__(self) -> "PurchasableBookIter":
return PurchasableBookIter(self)
class BookIter:
def __init__(self, book: Book) -> None:
self.pages = book.pages
self.book = book
self._cursor = 0 # self._cursor = -1
def __iter__(self) -> "BookIter":
return self
def __next__(self) -> Page:
if len(self.pages) > self._cursor: # self._cursor += 1
result = self.pages[self._cursor] # if len(self.pages) > self._cursor:
self._cursor += 1 # return self.pages[self._cursor]
return result
raise StopIteration # raise StopIteration
class PurchasableBookIter(BookIter):
def __init__(self, book: Book):
self.purchased = book.purchased
super().__init__(book)
def __next__(self) -> Page:
if not self.purchased and self._cursor > 0:
print("Buy the book to view next pages!")
raise StopIteration
return super().__next__()
purchased_book = PurchasableBook()
for i in range(1, 5):
purchased_book.add_page(f"page_{i}")
for page in purchased_book:
print(page)
it = iter(purchased_book)
for page in it:
print(page)
purchased_book.purchased = True
for page in it:
print(page)
purchased_book.purchased = True
for page in purchased_book:
print(page)
```
## Does PurchasableBookIter fully match the iterator protocol?
<h2 align=center>Quiz time</h2>
$$
iterators \supset iterables
$$
<center>or</center>
$$
iterators \subset iterables
$$
## Recap
What should an iterator do?
1. Track the current state
1. Know how to return next element
1. ?????
1. <strike>PROFIT</strike> `StopIteration`
Do we really need a collection for an iterator?
Do we really need to stop?
```
class RecurrentSequence:
def __init__(self, a_1: int, a_2: int) -> None:
self.a_1 = a_1
self.a_2 = a_2
def __iter__(self) -> Iterator[int]:
return RecurrentSequenceIterator(self.a_1, self.a_2)
class RecurrentSequenceIterator:
def __init__(self, a_1: int, a_2: int) -> None:
self.a_1 = a_1
self.a_2 = a_2
def __iter__(self) -> Iterator[int]:
return self
def __next__(self) -> int:
result = self.a_1
self.a_1, self.a_2 = self.a_2, self.a_1 + self.a_2
return result
fib = RecurrentSequence(1, 1)
for i, f in zip(range(1, 20), fib):
print(f"{i} - {f}", end="; ")
print()
for i, f in zip(range(1, 20), fib):
print(f"{i} - {f}", end="; ")
fib_iter = iter(fib)
for i, f in zip(range(1, 10), fib_iter):
print(f"{i} - {f}", end="; ")
print()
for i, f in zip(range(1, 10), fib_iter):
print(f"{i} - {f}", end="; ")
fib.__sizeof__(), fib_iter.__sizeof__()
```
## Any side effects?
- one can exhaust an iterator:
```
iterator = iter([1, 2, 3, 4])
print(sum(iterator))
print(sum(iterator))
not_iterator = [1, 2, 3, 4]
print(sum(not_iterator))
print(sum(not_iterator))
```
- the `in` operator works on any iterable (via `__contains__` when defined, otherwise by iterating)
```
print(list(book))
Page("page_2", 2) in book, Page("page_2", 2) in book, Page("page_5", 5) in book, 3 in book
iterator = iter(book)
Page("page_2", 2) in iterator, Page("page_2", 2) in iterator
5 in fib
# 6 in fib
```
## Iterables with `__getitem__`
```
class HiddenList:
def __init__(self, lst):
self._lst = lst
h_list = HiddenList([1, 2, 3])
iter(h_list)
class IterableHiddenList(HiddenList):
def __getitem__(self, index):
print(f"Index: {index}")
return self._lst[index]
ih_list = IterableHiddenList([1, 2, 3])
iter(ih_list)
# for i in ih_list:
# print(i)
# pass
# 5 in ih_list
```
How might it work?
```
print(dir(IterableHiddenList))
print(dir(RecurrentSequence))
```
```python
print(index)
```
## Any questions so far?
# GENERATORS
Are generators iterators?
```
def gen():
yield 1
print(dir(gen()))
```
Are generators iterators?
\- Yes*
```
gen, gen(), (1 for _ in [])
```
- **Generator functions** - a function or method which uses the yield statement
- **Generator iterator** - an object created by a generator function
- **Generator expression** - an expression that returns an iterator
```
def recurrent_sequence(a1: int, a2: int):
while True:
yield a1
a1, a2 = a2, a1 + a2
fib = recurrent_sequence(0, 1)
for i, f in zip(range(1, 20), fib):
print(f"{i} - {f}", end="; ")
```
## Generators and `return`
```
def gen():
yield 1
return 2
g = gen()
next(g)
next(g)
```
What will happen here?
```
def gen():
return 2
yield 1
gen().__next__()
try:
next(gen())
except StopIteration as e:
print(e.value)
```
## Preserving operations order
```
def do_in_order():
x = 1
print(f"Do first, {x}")
yield
x += 1
print(f"Do second, {x}")
yield
x *= x
print(f"Do third, {x}")
gen = do_in_order()
next(gen)
next(gen)
next(gen, "Stop")
```
## Send
```
def do_in_order_2():
x = 1
print(f"Do first, {x}")
y = yield
print(f"Do second, {y}")
z = yield 42
print(f"Do third, {z}")
gen = do_in_order_2()
for _ in gen:
print(f"step {_}")
gen = do_in_order_2()
next(gen)
next(gen)
next(gen, "Stop")
```
## Send
```
def do_in_order_2():
x = 1
print(f"Do first, {x}")
x = yield "123"
print(f"Do second, {x}")
x = yield 42
print(f"Do third, {x}")
gen = do_in_order_2()
gen.send(None)
gen.send("Hello")
try:
gen.send("World")
except StopIteration:
print("I'm out!")
```
## Throw
```
def g():
try:
yield 42
except Exception as e:
yield e
gen = g()
next(gen)
gen.throw(NotImplementedError, "Exception text")
gen.throw(NotImplementedError, "Exception text returns")
```
## Close
```
def do_in_order_2():
x = 1
print(f"Do first, {x}")
x = yield
print(f"Do second, {x}")
x = yield 42
print(f"Do third, {x}")
gen = do_in_order_2()
gen.send(None)
gen.close()
gen.send(None)
gen.throw(NotImplementedError, "Exception text")
```
How does `close` work?
```
BaseException.__subclasses__()
def tricky_gen():
yield "first"
try:
yield "second"
finally:
yield "from finally"
gen = tricky_gen()
for i in gen:
print(i, end=" ")
gen = tricky_gen()
next(gen), next(gen)
gen.close()
```
https://amir.rachum.com/blog/2017/03/03/generator-cleanup/
## yield from
- pass the execution to another generator
- pass `send` and `throw`
```
def chain(*iterables):
for iterable in iterables:
for item in iterable:
yield item
list(chain(
[1, 2], ("4", "5"), {"key1": "val1", "key2": "val2"}, "iter", {("a",), ("b",)}
))
```
Can we do better?
```
def chain(*iterables):
for iterable in iterables:
yield from iterable
return "Stop"
list(chain(
[1, 2], ("4", "5"), {"key1": "val1", "key2": "val2"}, "iter", {("a",), ("b",)}
))
```
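Besides flattening values, `yield from` also forwards `send` (and `throw`) to the delegated generator, and captures its return value — a small sketch:

```python
def inner():
    received = yield "ready"
    yield f"inner got {received}"

def outer():
    # values sent with gen.send() go straight through to inner()
    result = yield from inner()  # result is inner()'s return value (None here)
    yield "outer resumes"

gen = outer()
print(next(gen))           # ready
print(gen.send("hello"))   # inner got hello
print(next(gen))           # outer resumes
```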
## Example: Sieve of Eratosthenes
In mathematics, the sieve of Eratosthenes is an ancient algorithm for finding all prime numbers up to any given limit.
<img src="https://upload.wikimedia.org/wikipedia/commons/b/b9/Sieve_of_Eratosthenes_animation.gif">
1. Create a list of consecutive integers from 2 through n: (2, 3, 4, ..., n).
1. Initially, let p equal 2, the smallest prime number.
1. Enumerate the multiples of p by counting in increments of p from 2p to n, and mark them in the list (these will be 2p, 3p, 4p, ...; the p itself should not be marked).
1. Find the smallest number in the list greater than p that is not marked. If there was no such number, stop. Otherwise, let p now equal this new number (which is the next prime), and repeat from step 3.
1. When the algorithm terminates, the numbers remaining not marked in the list are all the primes below n.
Step 1: generate all natural numbers:
```
def natural_numbers(start=1):
while True:
yield start
start += 1
for _, number in zip(range(10), natural_numbers(1)):
print(number, end=" ")
```
Step 2: ~~draw the rest of the owl~~
```
def sieve(numbers):
prime = next(numbers)
yield prime
yield from sieve(p for p in numbers if p % prime != 0)
for _, prime in zip(range(10), sieve(natural_numbers(2))):
print(f"{_} - {prime}")
```
# Lazy evaluations
```
n = 10**7
%%memit
sum([i for i in range(n)])
%%memit
sum(i for i in range(n))
%%memit
sum(list(map(lambda x: x**2, [i for i in range(n)])))
%%memit
sum(map(lambda x: x**2, [i for i in range(n)]))
%%memit
sum(map(lambda x: x**2, (i for i in range(n))))
```
# Context managers
- `__enter__(self)`
- `__exit__(self, exception_type, exception_value, traceback)`
Patterns:
- acquisition/release of the resource
- doing something in different context
```
import os
class cd:
def __init__(self, path):
self.path = path
def __enter__(self):
self.cwd = os.getcwd()
os.chdir(self.path)
def __exit__(self, *args):
os.chdir(self.cwd)
print(os.getcwd())
with cd(".."):
print(os.getcwd())
print(os.getcwd())
from contextlib import contextmanager
@contextmanager
def cd_gen(path):
# <__init__>
cwd = os.getcwd()
# </__init__>
try:
# <__enter__>
os.chdir(path)
# </__enter__>
yield
finally:
# <__exit__>
os.chdir(cwd)
# </__exit__>
print(os.getcwd())
with cd_gen("/home"):
print(os.getcwd())
print(os.getcwd())
```
# `itertools`
This module implements a number of iterator building blocks inspired by constructs from APL, Haskell, and SML. Each has been recast in a form suitable for Python.
The module standardizes a core set of fast, memory efficient tools that are useful by themselves or in combination. Together, they form an “iterator algebra” making it possible to construct specialized tools succinctly and efficiently in pure Python.
```
from itertools import islice
def take(iterable, n=10):
return list(islice(iterable, n))
take(fib, 5)
from itertools import count, repeat, cycle
take(count())
take(repeat([1, 2]), 3)
take(cycle([1, 2]), 11)
from itertools import dropwhile, takewhile
list(dropwhile(lambda x: x < 3, range(6)))
list(takewhile(lambda x: x < 3, range(6)))
from itertools import chain
list(chain(
[1, 2], ("4", "5"), {"key1": "val1", "key2": "val2"}, "iter", {("a",), ("b",)}
))
collection = [[1, 2], ("4", "5"), {"key1": "val1", "key2": "val2"}, "iter", {("a",), ("b",)}]
list(chain(*collection))
list(chain.from_iterable(collection))
```
What is the difference?
```
from itertools import tee
iterator = iter(range(4))
a = b = iterator
list(a), list(b)
iterator = range(4)
a, b = tee(iterator, 2)
list(iterator)
list(a), list(b)
iterator = iter(range(4))
a, b = tee(iterator, 2)
list(iterator)
list(a), list(b)
```
## Combinatoric iterators
```
from itertools import product
list(product("12", repeat=3))
list(product("12", "abc"))
# assuming l1, l2, l3, l4 are lists of numbers defined elsewhere
sum(1 for a, b, c, d in product(l1, l2, l3, l4) if a + b + c + d == 0)
# or
sum(a + b + c + d == 0 for a, b, c, d in product(l1, l2, l3, l4))
from itertools import permutations, combinations, combinations_with_replacement
list(permutations("BAC", 2))
list(combinations("ABC", 2))
list(combinations_with_replacement("ABC", 2))
```
# Links
More itertools:
https://docs.python.org/3/library/itertools.html#itertools-recipes
David Beazley:
https://www.dabeaz.com/tutorials.html
# Fin.
##### Copyright 2020 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Flower classification with TensorFlow Lite Model Maker with TensorFlow 2.0
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/lite/codelabs/flower_classification/ml/Flower_Classification_with_TFLite_Model_Maker.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/lite/codelabs/flower_classification/ml/Flower_Classification_with_TFLite_Model_Maker.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
</table>
The Model Maker library simplifies the process of adapting and converting a TensorFlow neural-network model to particular input data when deploying this model for on-device ML applications.
This notebook shows an end-to-end example that utilizes this Model Maker library to illustrate the adaption and conversion of a commonly-used image classification model to classify flowers on a mobile device.
## Prerequisites
To run this example, we first need to make a copy of this notebook. Click on "Copy to Drive" at the top of this notebook. Then we need to install several required packages, including the Model Maker package from its GitHub [repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
```
!pip install -q tflite-model-maker
```
Import the required packages.
```
from tflite_model_maker import image_classifier
from tflite_model_maker.image_classifier import DataLoader
import tensorflow as tf
assert tf.__version__.startswith('2')
import matplotlib.pyplot as plt
import numpy as np
```
## Simple End-to-End Example
### Get the data path
Let's get some images to play with this simple end-to-end example. Hundreds of images are a good start for Model Maker, while more data could achieve better accuracy.
```
image_path = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
```
You could replace `image_path` with your own image folders. To upload data to Colab, use the upload button in the left sidebar, shown in the image below with the red rectangle. Try uploading a zip file and unzipping it; the root file path is the current path.
<img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_image_classification.png" alt="Upload File" width="800" hspace="100">
If you prefer not to upload your images to the cloud, you could try to run the library locally following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker) in github.
### Run the example
The example consists of just 4 lines of code, as shown below, each of which represents one step of the overall process.
1. Load input data specific to an on-device ML app. Split it to training data and testing data.
```
data = DataLoader.from_folder(image_path)
train_data, test_data = data.split(0.9)
```
2. Customize the TensorFlow model. See [docs](https://www.tensorflow.org/lite/api_docs/python/tflite_model_maker/image_classifier/ImageClassifier) for choices.
```
model = image_classifier.create(train_data, model_spec='efficientnet_lite0')
```
3. Evaluate the model.
```
loss, accuracy = model.evaluate(test_data)
```
4. Export to TensorFlow Lite model.
You could download it in the left sidebar same as the uploading part for your own use.
```
model.export(export_dir='.')
```
5. Download the trained model by clicking on the folder icon on the left hand side. Right-click on "model.tflite" and select download. Or run the following code:
```
from google.colab import files
files.download('model.tflite')
```
6. Run inference using the TFLite model. Download some sample test images and preprocess them so that you can run inference with the model.
```
!wget https://upload.wikimedia.org/wikipedia/commons/thumb/4/40/Sunflower_sky_backdrop.jpg/1200px-Sunflower_sky_backdrop.jpg
!wget https://cdn.shopify.com/s/files/1/0011/4170/2746/products/12RedRosesKoreanStyle_002_600x.jpg
!wget https://www.insidescience.org/sites/default/files/2020-06/Dandelion_topNteaser.jpg
import cv2
# Load and preprocess images
image_list = []
def load_image(image_path):
test_image = cv2.imread(image_path)
test_image = cv2.cvtColor(test_image, cv2.COLOR_BGR2RGB)
test_image = cv2.resize(test_image, (224, 224))
image_list.append(test_image)
load_image('1200px-Sunflower_sky_backdrop.jpg')
load_image('12RedRosesKoreanStyle_002_600x.jpg')
load_image('Dandelion_topNteaser.jpg')
# Display the images
fig, ax = plt.subplots(1, 3, figsize=(12, 4))
for i in range(3):
ax[i].imshow(image_list[i]);
ax[i].set_title(image_list[i].shape);
test_image_samples = np.array(image_list)
test_image_samples.shape
interpreter = tf.lite.Interpreter(model_path='model.tflite')
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print("Input Shape:", input_details[0]['shape'])
print("Input Type:", input_details[0]['dtype'])
print("Output Shape:", output_details[0]['shape'])
print("Output Type:", output_details[0]['dtype'])
interpreter.get_input_details()
interpreter.get_output_details()
```
7. Observe the results.
```
# Manually set sample size to 3
interpreter.resize_tensor_input(input_details[0]['index'], (3, 224, 224, 3))
interpreter.resize_tensor_input(output_details[0]['index'], (3, 5))
# Run inference using interpreter
interpreter.allocate_tensors()
interpreter.set_tensor(input_details[0]['index'], test_image_samples)
interpreter.invoke()
tflite_model_predictions = interpreter.get_tensor(output_details[0]['index'])
# Print predictions
labels = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']
print(tflite_model_predictions)
print('Prediction:', [labels[ind] for ind in np.argmax(tflite_model_predictions, axis=1)])
```
For a more comprehensive guide to TFLite Model Maker, please refer to this [notebook](https://colab.sandbox.google.com/github/tensorflow/examples/blob/master/tensorflow_examples/lite/model_maker/demo/image_classification.ipynb) and its [documentation](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
| github_jupyter |
# Neurons example via low-level, flexible interface
## Prepare
```
from bayes_window import models
from bayes_window.fitting import fit_numpyro
from bayes_window.generative_models import generate_fake_spikes
import numpy as np
from sklearn.preprocessing import LabelEncoder
trans = LabelEncoder().fit_transform
```
## Make some data
```
df, df_monster, index_cols, firing_rates = generate_fake_spikes(n_trials=2,
n_neurons=8,
n_mice=4,
dur=7, )
# df['log_isi'] = np.log10(df['isi'])
from bayes_window import visualization, utils
from importlib import reload
reload(visualization)
reload(utils)
y = 'isi'
ddf, dy = utils.make_fold_change(df,
y=y,
index_cols=('stim', 'mouse', 'neuron'),
treatment_name='stim',
do_take_mean=True)
visualization.plot_data(x='neuron', y=dy, color='mouse', df=ddf)[0]
```
TODO leave axis labels here somehow
## Estimate model
```
# y = list(set(df.columns) - set(index_cols))[0]
trace = fit_numpyro(y=df[y].values,
treatment=(df['stim']).astype(int).values,
condition=trans(df['neuron']),
group=trans(df['mouse']),
progress_bar=True,
model=models.model_hierarchical,
n_draws=100, num_chains=1, )
```
## Add data back
```
# reload(utils)
# df_both, trace = utils.add_data_to_posterior(df, posterior=trace.posterior, y=y,
# fold_change_index_cols=['neuron', 'stim', 'mouse_code', ],
# treatment_name='stim', b_name='slope_per_condition',
# posterior_index_name='neuron', group_name='mouse')
#
# + [markdown] hideCode=false hidePrompt=false
# # ## Plot data and posterior
#
# + hideCode=false hidePrompt=false
# # BayesWindow.regression_charts(df_both, y=f'{y} diff', x='neuron',color='mouse_code',title=y,hold_for_facet=False,add_box=False)
# reload(visualization)
#
# chart_d, _ = visualization.plot_data(df=df_both, x='neuron', y=f'{y} diff', color='mouse_code', highlight=False)
# chart_d
#
# + hideCode=false hidePrompt=false
# chart_p = visualization.plot_posterior(df=df_both, title=f'd_{y}', x='neuron', )
# chart_p
#
# + hideCode=false hidePrompt=false
# (chart_d + chart_p).resolve_scale(y='independent')
#
# + hideCode=false hidePrompt=false
# (chart_d + chart_p).facet(column='neuron')
```
## Appendix: Elements of interactivity (WIP)
```
import altair as alt
y='isi'
color='mouse'
x='neuron'
base=alt.Chart(df).encode(
x=x,
color=f'{color}',
y=alt.Y(f'mean({y})',
scale=alt.Scale(zero=False,
domain=list(np.quantile(df[y], [.05, .95])))),
)
highlight = alt.selection(type='single', on='mouseover',
fields=[color], nearest=True)
lines=base.mark_line(clip=True, fill=None, opacity=.6, ).encode(
size=alt.condition(~highlight, alt.value(1), alt.value(3))
)
points = base.mark_circle().encode(
opacity=alt.value(0),
#axis=alt.Axis(labels=False, tickCount=0, title='')
).add_selection(
highlight
)
lines+points
import altair as alt
y='isi'
color='mouse'
x='neuron'
base=alt.Chart(df).encode(
x=x,
color=f'{color}',
y=alt.Y(f'mean({y})',
scale=alt.Scale(zero=False,
domain=list(np.quantile(df[y], [.05, .95])))),
)
# Create a selection that chooses the nearest point & selects based on x-value
nearest = alt.selection(type='single', nearest=True, on='mouseover',
fields=[x], empty='none')
# Transparent selectors across the chart. This is what tells us
# the x-value of the cursor
selectors = alt.Chart(df).mark_point().encode(
x=x,
opacity=alt.value(0),
).add_selection(
nearest
)
highlight = alt.selection(type='single', on='mouseover',
fields=[color], nearest=True)
lines=base.mark_line(clip=True, fill=None, opacity=.6, ).encode(
#tooltip=color,
size=alt.condition(~highlight, alt.value(1), alt.value(3))
)
# Draw text labels near the points, and highlight based on selection
text = lines.mark_text(align='left', dx=5, dy=-5).encode(
text=alt.condition(nearest, y, alt.value(' '))
)
points = base.mark_circle().encode(
opacity=alt.value(0)
).add_selection(
highlight
)
alt.layer(
lines, selectors, points, text
)
```
| github_jupyter |
## Notebook 0 - Labeling Languages of Texts
For our project, we will be using the Dota dataset: https://www.kaggle.com/romovpa/gosuai-dota-2-game-chats
This dataset contains multiple languages that our group cannot interpret, so we will use the English portion of the dataset. If we have more time by the end of this project, we may find a way to translate the other languages.
```
# To load the large dataset (21 million rows) with AWS
!pip install boto3
# Installing the language labeler
!pip install fasttext
import numpy as np
import pandas as pd
import warnings
import fasttext
# Loading data into Colab via AWS https://medium.com/python-in-plain-english/how-to-load-data-from-aws-s3-into-google-colab-7e76fbf534d2
import boto3
import botocore  # needed below to catch botocore.exceptions.ClientError
BUCKET_NAME = 'ggwp-project'
# Authentication credentials
s3 = boto3.resource('s3', aws_access_key_id = 'AKIAJ52OAEJJXOEMTRVA',
aws_secret_access_key= '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')
KEY = 'dota2_chat_messages.csv'
try:
# Downloading training dataset from s3 with name `dota2_chat_messages.csv`
# to colab dir as `dota2_chat_messages.csv`
s3.Bucket(BUCKET_NAME).download_file(KEY, 'dota2_chat_messages.csv')
except botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] == "404":
print("The object does not exist.")
else:
raise
# Loading the Full Dota dataset (all languages)
df = pd.read_csv('dota2_chat_messages.csv')
df.head()
df.shape
# Checking null values
for col in df.columns:
print(col, "NA:", sum(df[col].isna()))
```
There are null values only in the `text` column. While the dataset description suggests there are no missing values, pandas does not identify empty strings as NA/NaN, which suggests the missing values come from other invalid characters, such as emojis, that we won't be considering for our purposes. As such, we dropped the null values.
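For reference, a quick self-contained check (not part of the original pipeline; the tiny CSV below is made up) shows that `read_csv` turns empty CSV fields into NaN, which is one likely source of these nulls:

```python
import io
import pandas as pd

# Minimal demonstration: an empty `text` field in the CSV comes back as NaN.
csv = "match,text\n1,hello\n2,\n3,gg\n"
df_demo = pd.read_csv(io.StringIO(csv))
n_null = df_demo['text'].isna().sum()   # one null row
```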
```
# Dropping nulls since they are only in the text column (nothing to analyze w/ no text)
df = df.dropna()
df.head()
# Converting the single monolith dataframe into a dictionary of multiple smaller dataframes to allow batch processing
DataFrameDict = {}
countEach = int(df.shape[0] / 10) + 1
for i in range(0,10):
df_temp = df[(i * countEach) : ((i + 1) * countEach)]
DataFrameDict[str(i)] = df_temp.copy()
# Loading the FastText model
!wget https://dl.fbaipublicfiles.com/fasttext/supervised-models/lid.176.bin
model = fasttext.load_model("lid.176.bin")
# Using FastText to run a batch language detection process on the small dfs that are part of the above dict
xCt = 0
for key in DataFrameDict:
thisDf = DataFrameDict[key]
i = -1
langs = np.zeros(len(thisDf)).astype(str)
for message in thisDf['text'].values:
i += 1
try:
predictions = model.predict(message)
langs[i] = predictions[0][0][-2:]
        except Exception:  # fastText rejects some inputs (e.g. strings containing newlines)
continue
print("FINISHED!" + str(xCt))
thisDf['language'] = langs
print(thisDf.head())
xCt += 1
# Recreating the dataframe, now with languages identified
newDf = pd.DataFrame()
for key in DataFrameDict:
newDf = pd.concat([newDf, DataFrameDict[key]])
print(newDf.shape)
# # Saving the dataframe with the language column
# newDf.to_csv('dataSetWithLang.csv', index=False)
```
| github_jupyter |
## This notebook:
- Includes a comprehensive list of all the courses and their requirement distributions
- Lets the user enter a course, a requirement distribution, and the number of recommendations requested
- Gives the requested number of recommendations, up to the maximum possible choices, based on the requirement distribution of that course
```
import pandas as pd
com_course_list = pd.read_csv('assets/com_course_list_w_requirements.csv')
cs_course_list = pd.read_csv('assets/Computer Science Program Major Requirements only.csv')
com_course_list['Requirement'] = 'LSA'
cs_course_list['Requirement'] = 'CS'
course_list = pd.concat([com_course_list, cs_course_list])
course_list
course_list.sample()
valid_requirement_dist_options = course_list['Course List Description'].unique().tolist()
valid_requirement_dist_options
def recommend_course(courses_taken, course_type, num_to_recommend):
df = course_list
if courses_taken in course_list['Subject/Catalog'].tolist():
type_of_course = df[df['Subject/Catalog'] == courses_taken]['Course List Description'].values[0]
course_requirement = df[df['Subject/Catalog'] == courses_taken]['Require']
course_requirement_of_type = df[df['Course List Description'] == course_type]['Require']
# If the course entered belongs to the "One" list
        if (course_requirement == 'One').any():
print(f'The requirement distribution of {courses_taken} is {type_of_course}.')
print(f'You only need to take one course of the {type_of_course} requirement.')
print(f'You have fully met the {type_of_course} requirement.')
# If not
        else:
# If the type entered is not on the list
if course_type not in course_list['Course List Description'].tolist():
print('Requirement distribution not found')
# If it does
elif course_type in course_list['Course List Description'].tolist():
# If the type of the course entered and the course type entered are not the same
if type_of_course != course_type:
print('These courses are not of the same requirement')
print('----------------------')
print(f'The requirement distribution of {courses_taken} is {type_of_course}.')
print(f'Recommending courses of the same requirement with {courses_taken}:')
courses_of_same_type_entered = df[df['Course List Description'] == type_of_course]
courses_not_taken_entered = courses_of_same_type_entered[courses_of_same_type_entered['Subject/Catalog'] != courses_taken][['Subject/Catalog', 'Course Title']]
                    # If there are fewer candidate courses than requested, recommend them all
if len(courses_not_taken_entered) < num_to_recommend:
courses_to_recommend_entered = courses_not_taken_entered[['Subject/Catalog', 'Course Title']]
print(f'Total number of recommendations: {len(courses_to_recommend_entered)}')
                    # Otherwise, sample the requested number of courses
else:
courses_to_recommend_entered = courses_not_taken_entered.sample(num_to_recommend)[['Subject/Catalog', 'Course Title']]
print(f'Total number of recommendations: {len(courses_to_recommend_entered)}')
print(courses_to_recommend_entered)
print('----------------------')
print(f'Recommending course of the {course_type} requirement:')
courses_of_same_type = df[df['Course List Description'] == course_type]
courses_not_taken = courses_of_same_type[courses_of_same_type['Subject/Catalog'] != courses_taken][['Subject/Catalog', 'Course Title']]
                    # If there are fewer candidate courses than requested, recommend them all
if len(courses_not_taken) < num_to_recommend:
courses_to_recommend = courses_not_taken[['Subject/Catalog', 'Course Title']]
print(f'Total number of recommendations: {len(courses_to_recommend)}')
                    # Otherwise, sample the requested number of courses
else:
courses_to_recommend = courses_not_taken.sample(num_to_recommend)[['Subject/Catalog', 'Course Title']]
print(f'Total number of recommendations: {len(courses_to_recommend)}')
print(courses_to_recommend)
###########################################
else:
# If the type of the course entered and the course type entered are the same
print(f'Course {courses_taken} is of {course_type} requirement.')
print(f'Recommending course of the {course_type} requirement:')
courses_of_same_type = df[df['Course List Description'] == course_type]
courses_not_taken = courses_of_same_type[courses_of_same_type['Subject/Catalog'] != courses_taken][['Subject/Catalog', 'Course Title']]
                    # If there are fewer candidate courses than requested, recommend them all
if len(courses_not_taken) < num_to_recommend:
courses_to_recommend = courses_not_taken[['Subject/Catalog', 'Course Title']]
print(f'Total number of recommendations: {len(courses_to_recommend)}')
print(courses_to_recommend)
                    # Otherwise, sample the requested number of courses
else:
courses_to_recommend = courses_not_taken.sample(num_to_recommend)[['Subject/Catalog', 'Course Title']]
print(f'Total number of recommendations: {len(courses_to_recommend)}')
print(courses_to_recommend)
else:
print('Course not found')
recommend_course('BIOLOG 197', 'Natural Science', 10)
```
| github_jupyter |
# **Image Processing**
Computer Engineering - 2021.01
## Point Operations
### Downloading the test images
```
!wget 'https://homepages.cae.wisc.edu/~ece533/images/peppers.png'
!wget 'https://homepages.cae.wisc.edu/~ece533/images/baboon.png'
```
### Imports
```
import cv2 as cv
import matplotlib.pyplot as plt
```
### Code
```
img1 = cv.imread('peppers.png')
img1 = cv.cvtColor(img1, cv.COLOR_BGR2RGB)
img2 = cv.imread('baboon.png')
img2 = cv.cvtColor(img2, cv.COLOR_BGR2RGB)
img3 = cv.add(img1, 60)
img4 = cv.add(img1, img2)
img5 = cv.bitwise_not(img2)
img6 = cv.bitwise_and(img1, img2)
img7 = cv.bitwise_or(img1, img2)
img8 = cv.bitwise_xor(img1, img2)
img9 = cv.subtract(img1, img2)
img10 = cv.addWeighted(img1, 0.60, img2, 0.80, 0)
plt.figure(figsize=(16, 14))
plt.subplot(551), plt.imshow(img1)
plt.subplot(552), plt.imshow(img2)
plt.subplot(553), plt.imshow(img3)
plt.subplot(554), plt.imshow(img4)
plt.subplot(555), plt.imshow(img5)
plt.subplot(556), plt.imshow(img6)
plt.subplot(557), plt.imshow(img7)
plt.subplot(558), plt.imshow(img8)
plt.subplot(559), plt.imshow(img9)
plt.subplot(5,5,10), plt.imshow(img10)
plt.show()
```
### Exercises
#### Imports
```
import numpy as np
```
#### Code
1. Using the OpenCV library, build an application to:
- Load a digital image
- Split and display the image's channels
```
img_origin = cv.imread('peppers.png')
img_origin = cv.cvtColor(img_origin, cv.COLOR_BGR2RGB)
red_channel, green_channel, blue_channel = cv.split(img_origin)  # channels are in R, G, B order after cvtColor
zeros = np.zeros(img_origin.shape[:2], dtype = 'uint8')
red = cv.merge([red_channel, zeros, zeros])
green = cv.merge([zeros, green_channel, zeros])
blue = cv.merge([zeros, zeros, blue_channel])
plt.figure(figsize=(16, 14))
plt.subplot(551), plt.imshow(img_origin)
plt.subplot(552), plt.imshow(red), plt.title('red')
plt.subplot(553), plt.imshow(green), plt.title('green')
plt.subplot(554), plt.imshow(blue), plt.title('blue')
plt.show()
```
2. Using the OpenCV library, build an application to:
- Load a digital image
- Convert the image to grayscale using point operations
- [Matplotlib Colormaps](https://matplotlib.org/stable/tutorials/colors/colormaps.html)
```
img_origin = cv.imread('peppers.png')
img_origin = cv.cvtColor(img_origin, cv.COLOR_BGR2RGB)
img_gray = cv.cvtColor(img_origin, cv.COLOR_RGB2GRAY)
# convert RGB -> grayscale manually
img_gray_manual = img_origin.copy()
img_gray_weighted = img_origin.copy()
for row in range(img_gray_manual.shape[0]): # for every row
    for col in range(img_gray_manual.shape[1]): # for every column
        r = img_gray_manual.item(row, col, 0) # R pixel value
        g = img_gray_manual.item(row, col, 1) # G pixel value
        b = img_gray_manual.item(row, col, 2) # B pixel value
        gray = (r + g + b) / 3 # simple average conversion
        gray_weighted = (r * 0.30) + (g * 0.59) + (b * 0.11) # weighted conversion
        # update the image pixels
        img_gray_manual.itemset((row, col, 0), gray)
        img_gray_manual.itemset((row, col, 1), gray)
        img_gray_manual.itemset((row, col, 2), gray)
        img_gray_weighted.itemset((row, col, 0), gray_weighted)
        img_gray_weighted.itemset((row, col, 1), gray_weighted)
        img_gray_weighted.itemset((row, col, 2), gray_weighted)
plt.figure(figsize=(10, 10))
plt.subplot(141), plt.imshow(img_origin), plt.title('original')
plt.subplot(142), plt.imshow(img_gray, cmap='gray'), plt.title('gray')
plt.subplot(143), plt.imshow(img_gray_manual, cmap='gray'), plt.title('gray manual')
plt.subplot(144), plt.imshow(img_gray_weighted, cmap='gray'), plt.title('gray weighted')
plt.show()
```
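The per-pixel loops above are fine for illustrating point operations, but the same weighted conversion can be vectorized with NumPy. A sketch using the same 0.30/0.59/0.11 weights (the tiny array below is made up for illustration):

```python
import numpy as np

rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [100, 100, 100]]], dtype=np.float64)
weights = np.array([0.30, 0.59, 0.11])  # same weights as the loop above
gray = rgb @ weights                    # one weighted value per pixel, shape (2, 2)
```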
3. Using the OpenCV library, build an application to:
- Load a digital image
- Convert the image to black and white (binary) using point operations
```
img_bwb = img_gray.copy()
for row in range(img_gray.shape[0]):
for col in range(img_gray.shape[1]):
pixel = img_gray.item(row, col)
if pixel > 127:
img_bwb.itemset((row, col), 255)
else:
img_bwb.itemset((row, col), 0)
plt.figure(figsize=(10, 10))
plt.subplot(144), plt.imshow(img_bwb, cmap='gray'), plt.title('B&W Binary')
plt.show()
```
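The thresholding loop above can likewise be vectorized. A sketch with NumPy on a made-up 2x2 grayscale patch (OpenCV's `cv.threshold(img_gray, 127, 255, cv.THRESH_BINARY)` computes the same thing in one call):

```python
import numpy as np

gray = np.array([[10, 200],
                 [127, 128]], dtype=np.uint8)
binary = np.where(gray > 127, 255, 0).astype(np.uint8)  # same rule as the loop
```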
| github_jupyter |
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
df=pd.read_excel('Main.xlsx')
df.head()
df.info()
df.describe()
df.isnull().sum()
sns.boxplot(df['Percentage Graduate & above'])
grouped_df=df[['Illiterate','Graduate & above']]
grouped_df.head()
from sklearn.preprocessing import StandardScaler
scalar = StandardScaler()
scaled = scalar.fit_transform(grouped_df)
scaled.shape
scaled_df = pd.DataFrame(scaled)
# grouped_df.reset_index()
scaled_df.columns = ['Illiterate','Graduate & above']
scaled_df
from sklearn.neighbors import NearestNeighbors
from random import sample
from numpy.random import uniform
import numpy as np
from math import isnan
def hopkins(X):
d = X.shape[1]
#d = len(vars) # columns
n = len(X) # rows
m = int(0.1 * n)
nbrs = NearestNeighbors(n_neighbors=1).fit(X.values)
rand_X = sample(range(0, n, 1), m)
ujd = []
wjd = []
for j in range(0, m):
u_dist, _ = nbrs.kneighbors(uniform(np.amin(X,axis=0),np.amax(X,axis=0),d).reshape(1, -1), 2, return_distance=True)
ujd.append(u_dist[0][1])
w_dist, _ = nbrs.kneighbors(X.iloc[rand_X[j]].values.reshape(1, -1), 2, return_distance=True)
wjd.append(w_dist[0][1])
H = sum(ujd) / (sum(ujd) + sum(wjd))
if isnan(H):
print(ujd, wjd)
H = 0
return H
# Use the Hopkins statistic function by passing the above dataframe as a parameter
hopkins(scaled_df)
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=4, max_iter=50)
kmeans.fit(scaled_df)
kmeans.labels_
ssd = []
range_n_clusters = [2, 3, 4, 5, 6, 7, 8]
for num_clusters in range_n_clusters:
kmeans = KMeans(n_clusters=num_clusters, max_iter=50)
kmeans.fit(scaled_df)
ssd.append(kmeans.inertia_)
plt.plot(range_n_clusters, ssd)
ssd
# silhouette analysis
from sklearn.metrics import silhouette_score
range_n_clusters = [2, 3, 4, 5, 6, 7, 8]
for num_clusters in range_n_clusters:
# intialise kmeans
kmeans = KMeans(n_clusters=num_clusters, max_iter=50)
kmeans.fit(scaled_df)
cluster_labels = kmeans.labels_
# silhouette score
silhouette_avg = silhouette_score(scaled_df, cluster_labels)
print("For n_clusters={0}, the silhouette score is {1}".format(num_clusters, silhouette_avg))
```
### In the elbow plot we can see that at k=3 the curve, which had been decreasing rapidly, starts to flatten.
The silhouette scores for k=2 and k=3 are almost the same.
Taking the point the two methods agree on, we choose k=3 for the further analysis.
```
kmeans = KMeans(n_clusters=3, max_iter=50)
kmeans.fit(scaled_df)
scaled_df['cluster_id'] = kmeans.labels_
scaled_df.head()
# Since we selected 3 clusters, all cluster id will be between 0 and 2
sns.boxplot(x='cluster_id', y='Illiterate', data=scaled_df)
sns.boxplot(x='cluster_id', y='Graduate & above', data=scaled_df)
```
| github_jupyter |
# Data Story
The data (Bondora's loan book) can be download from: https://www.bondora.com/marketing/media/LoanData.zip
```
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
import warnings
warnings.filterwarnings("ignore", category=np.VisibleDeprecationWarning)
pd.options.display.max_rows = 125
import seaborn as sns
sns.set(color_codes=True)
sns.set(rc={"figure.figsize": (16, 4)})
loandata = pd.read_csv("data/loandata.csv", low_memory=False)
loandata['year'] = pd.to_datetime(loandata['ListedOnUTC']).dt.year
loandata['yearmonth'] = pd.to_datetime(loandata['ListedOnUTC']).dt.to_period('M')
recentld = loandata[loandata['year'] > 2012]
repaid = recentld[recentld['Status'] == 'Repaid']
(loandata.shape, recentld.shape)
```
### Number of loans per year
```
countByYear = loandata.groupby('year').size()
plot = sns.barplot(x=countByYear.index,y=countByYear)
```
From this initial analysis we can see that the number of loans grows steadily over time. This could be driven by higher demand for loans or by a rise in Bondora's popularity.
### Median salary per year and country
```
t = loandata[['year', 'IncomeTotal', 'Country']]
t = t[(t['year'] > 2010) & (t['year'] < 2017)]
plot = t.groupby(['year', 'Country']).median().unstack(1).plot(kind='bar', figsize=(16, 4))
```
We can see that, generally, the income of the borrowers increases over time. This is expected, as the countries where Bondora operates have seen average salaries rise over the last years.
### Loan amount analysis
```
plot = sns.distplot(loandata['Amount'].astype(int), bins=50)
```
#### The top 30 loan amounts (rounded to the nearest 100) with counts:
```
plot = (loandata['Amount'] // 100 * 100).value_counts().head(30).plot(kind='bar')
```
The most common loan amount is 500 EUR, and the 13 most common amounts are all at or below 3100 EUR.
#### Distributions of loan amounts over years
```
plot = sns.violinplot(cut=0, scale="width", x="year", y="Amount", data=loandata[['Amount', 'year']])
```
In the first couple of years the loans were much smaller than in the last years. Average, minimum, and maximum loan amounts all increase over time.
#### Distribution of loan amounts per country
```
plot = sns.violinplot(cut=0, scale="width", x="Country", y="Amount", data=loandata[['Amount', 'Country']])
```
Finland has the highest most frequent loan amount (about 2100 EUR) and Estonia the lowest (about 500 EUR). The shapes of the distributions are similar across all the countries.
### Loan duration analysis
```
pd.options.mode.chained_assignment = None # default='warn'
t = loandata[['Amount', 'LoanDuration']]
t['LoanDuration2'] = t['LoanDuration'] // 12 * 12
plot = sns.distplot(loandata['LoanDuration'], bins=50) # remove density
```
#### Loan duration with relation to the amount
```
plot = sns.violinplot(cut=0, scale="width", x="LoanDuration2", y="Amount", data=t)
```
There is a visible linear dependency between the amount borrowed and the loan duration: the longer the loan, the higher the amount borrowed.
#### Loan duration with relation to year of issue
```
plot = sns.violinplot(cut=0, scale="width", x="year", y="LoanDuration", data=loandata[['year', 'LoanDuration']])
```
Over the first three years Bondora issued loans of maximum 24 months duration, but since 2013 the maximum duration is 60 months. We can see that the most popular durations in the recent years are 36 and 60 months with very few borrowers choosing durations lower than 12 months.
```
plot = sns.violinplot(cut=0, scale="width", x="year", y="LoanDuration", data=repaid[['year', 'LoanDuration']])
```
### Number of dependants vs age
```
p = loandata[['Age', 'NrOfDependants']]
p['DepNum'] = pd.to_numeric(loandata.NrOfDependants, errors='coerce')
plot = p.groupby('NrOfDependants').size().sort_values().plot(kind='bar')
```
More than half of the borrowers have no dependants at all, and very few borrowers have more than 5 dependants.
```
p = p.dropna().astype(int)
grid = sns.lmplot(x="Age", y="NrOfDependants", data=p, fit_reg=False, size=6, aspect=3)
```
We can see a non-linear dependency between the age of the borrower and the number of dependants: it gradually increases from the age of 18, peaks between 40 and 45, and then gradually decreases.
### Number of loans listed per year month
```
loandata['yearmonth'] = pd.to_datetime(loandata['ListedOnUTC']).dt.to_period('M')
plot = loandata.groupby(['yearmonth', 'Country']).size().unstack(1).sort_index(ascending=True).fillna(0).plot(figsize=(16, 5))
```
From the analysis of the loans listed per year-month, it is clearly visible that Slovakian loans were listed only for a short period of time (mostly 2014); since then, borrowing from that country has been phased out.
### Distribution of loan amounts for genders
```
plot = sns.violinplot(cut=0, scale="width", x="Amount", y="Gender", orient='h', data=recentld)
df = recentld
m = df[df['Gender'] == 0.0]
f = df[df['Gender'] == 1.0]
(m.shape, f.shape)
v = 'Amount'
m_mean = m[v].dropna().mean()
f_mean = f[v].dropna().mean()
std = df[v].dropna().std()
z = (m_mean - f_mean) / std
(m_mean, f_mean, std, m_mean - f_mean, z)
```
H0: there is no difference between the mean loan amount of female borrowers and male borrowers. H1: there is a difference.
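To actually decide between H0 and H1, a two-sample z-test can be sketched. The helper and the numbers below are made up for illustration; in practice the means, standard deviations, and counts would come from `m['Amount']` and `f['Amount']` in the cell above.

```python
import math

def two_sample_z(mean1, mean2, sd1, sd2, n1, n2):
    # standard error of the difference between two independent means
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return (mean1 - mean2) / se

# illustrative numbers only, not taken from the dataset
z = two_sample_z(mean1=2500.0, mean2=2300.0, sd1=1800.0, sd2=1700.0,
                 n1=20000, n2=12000)
# two-sided p-value from the normal approximation
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

A large |z| (equivalently, a tiny p-value) would reject H0 at the usual significance levels.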
### Historical repayment rate of principal and amount of interest with penalties
```
repaid['Defaulted'] = repaid['PrincipalPaymentsMade'] < repaid['Amount']
repaid[['Defaulted', 'PrincipalPaymentsMade', 'InterestAndPenaltyPaymentsMade', 'Amount', 'Interest']]
print(repaid.shape)
print(repaid['Defaulted'].mean())
print(repaid['PrincipalPaymentsMade'].sum() / repaid['Amount'].sum())
print(repaid['InterestAndPenaltyPaymentsMade'].sum() / repaid['Amount'].sum())
print((repaid['PrincipalPaymentsMade'].sum() + repaid['InterestAndPenaltyPaymentsMade'].sum()) / repaid['Amount'].sum())
```
| github_jupyter |
# Tutorial 4: Encrypted Convolution on MNIST
Welcome to tutorial 4 where we are going to perform encrypted evaluation on MNIST examples, using a convolutional neural network. If you haven't played with TenSEAL before, or need a quick overview of what homomorphic encryption is, I would suggest going through [Tutorial 0 - Getting Started](./Tutorial%200%20-%20Getting%20Started.ipynb) first.
We will be using CKKS extensively in this tutorial, so if you don't know how it works, I would recommend checking [Tutorial 2 - Working with Approximate Numbers](./Tutorial%202%20-%20Working%20with%20Approximate%20Numbers.ipynb) first.
We will start by explaining how the different layers can be performed on encrypted data. Next we train a PyTorch model on MNIST, then implement an equivalent one using TenSEAL, but which can evaluate encrypted inputs.
Authors:
- Ayoub Benaissa - Twitter: [@y0uben11](https://twitter.com/y0uben11)
- Bilal Retiat - Twitter: [@philomath213](https://twitter.com/philomath213)
## Machine Learning Model
With the MNIST dataset in hand, we can use a simple neural network composed of a convolutional layer, followed by two linear layers. Here we use the square activation function for simplicity, and ease of use, given the limitation of the number of multiplications with the CKKS scheme.
We will keep in mind that the input to the model needs to be encrypted using CKKS, but the parameters of the model don't; they will be kept in plain during the whole protocol.
### Model Description
The model is the sequence of the below layers:
- **Conv:** Convolution with 4 kernels. Shape of the kernel is 7x7. Strides are 3x3.
- **Activation:** Square activation function.
- **Linear Layer 1:** Input size: 256. Output size: 64.
- **Activation:** Square activation function.
- **Linear Layer 2:** Input size: 64. Output size: 10.
### Input Representation
In order to keep the memory and computation to a minimum, we will mostly try to use a single ciphertext. It's not always possible, and we often lose some flexibility. For this model, there are two different representations: one for the convolution, and one for the linear layers. The former will be quickly explained in the convolution section. The latter is simply the input vector for the linear layer, replicated many times to fill the slots of the ciphertext, so a single ciphertext contains the whole input for the linear layer.
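A plain-Python sketch of the replication described above (an illustration of the slot layout only, not TenSEAL API):

```python
def replicate_to_slots(vec, n_slots):
    # Tile the input vector until it fills all ciphertext slots.
    reps = -(-n_slots // len(vec))   # ceiling division
    return (vec * reps)[:n_slots]

slots = replicate_to_slots([1, 2, 3], 8)   # [1, 2, 3, 1, 2, 3, 1, 2]
```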
### Convolution
There are actually different ways of doing convolution, and one of them is a well-known algorithm that translates the 2D convolution into a single matrix multiplication. This operation is often referred to as image-to-column convolution and is depicted in *Figure1*.
<div align="center">
<img src="assets/im2col_conv2d.png" width="50%"/>
<div><b>Figure1:</b> Image to column convolution</div>
</div>
However, this requires arranging the elements of the input matrix in a special way, and since we can't easily do that with a ciphertext, we have to do this as a pre-processing step before encryption. This also means that only a single convolution can be performed. To perform the convolution, we first need to do *im2col* encoding to the input matrix and encrypt it into a single ciphertext. It's worth noting that the matrix is translated into a vector using vertical scan. We then perform a matrix multiplication between an encrypted matrix (input image encoded in a ciphertext) and a plain vector (the flattened kernel of the convolution). This is done by first constructing this new flattened kernel, which replicates every element in the kernel $n$ times, where $n$ is the number of windows. Then we perform a ciphertext-plaintext multiplication, and continue with a sequence of rotate and sum operations in order to sum the elements of the same window. The process is depicted in *Figure2* and *Figure3*.
<div align="center">
<img src="assets/im2col_conv2d_ckks1.png" width="50%"/>
<div><b>Figure2:</b> Image to column convolution with CKKS - step 1</div>
</div>
<div align="center">
<img src="assets/im2col_conv2d_ckks2.png" width="50%"/>
<div><b>Figure3:</b> Image to column convolution with CKKS - step 2</div>
</div>
If multiple kernels are used, then we need to perform this operation multiple times, yielding different output ciphertexts. These ciphertexts can later be combined (using a single multiplication) into a flattened vector. Every convolution thus outputs a ciphertext containing 64 useful slots, and combining the 4 kernel outputs yields a ciphertext with 256 useful slots, which will be the input for the first linear layer. The algorithm requires a single multiplication and $log_2(n)$ ciphertext rotations, where $n$ is the number of windows in the convolution.
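The rotate-and-sum trick can be illustrated on plain data, with NumPy's `np.roll` standing in for a ciphertext rotation. This is a simplified sketch (the slot layout is flattened, and the window size is assumed to be a power of two), not the exact TenSEAL implementation:

```python
import numpy as np

def rotate_and_sum(vec, window):
    v = np.array(vec, dtype=float)
    shift = 1
    while shift < window:
        v = v + np.roll(v, -shift)   # rotate left by `shift`, then add
        shift *= 2
    return v

# slot 0 accumulates the sum of the first `window` slots in log2(window) rotations
out = rotate_and_sum([1, 2, 3, 4, 0, 0, 0, 0], 4)
```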
### Linear Layer
A linear layer boils down to a vector-matrix multiplication and an addition of a bias. The matrix and the bias are not encrypted. The vector-matrix multiplication is implemented based on [Halevi and Shoup ](https://link.springer.com/chapter/10.1007/978-3-662-44371-2_31) diagonal method. It's an accumulation of multiple ciphertext-plaintext multiplications, with slightly different rotations. We iterate over every diagonal in the plain matrix and multiply it with the ciphertext rotated $n$ slots to the left, where $n$ is the index (0-indexed) of the diagonal. The process is depicted in *Figure4*. The algorithm runs in $O(n)$ where $n$ is the size of the encrypted vector.
<div align="center">
<img src="assets/vec-matmul.png" width="65%"/>
<div><b>Figure4:</b> Vector-Matrix Multiplication</div>
</div>
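For a square $n \times n$ matrix, the diagonal method can be sketched without encryption as follows; `np.roll` stands in for the ciphertext rotation, and the names are illustrative.

```python
import numpy as np

def diagonal_vec_matmul(vec, mat):
    """Unencrypted sketch of the Halevi-Shoup diagonal method
    for a vector-matrix product with a square matrix."""
    n = len(vec)
    out = np.zeros(n)
    for d in range(n):
        # d-th generalized diagonal of the plain matrix
        diag = np.array([mat[(i + d) % n][i] for i in range(n)])
        rotated = np.roll(vec, -d)  # rotate the "ciphertext" d slots left
        out += rotated * diag       # one ciphertext-plaintext multiplication
    return out
```

Accumulating all $n$ rotated products reproduces `vec @ mat`, which is why the algorithm is linear in the size of the encrypted vector.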
### Square Activation
The square activation is pretty straightforward. We just multiply a ciphertext by itself.
Building on these operations, we now know that this evaluation requires exactly 6 multiplications: 2 for the convolution, 1 for the first square activation, 1 for the first linear layer, 1 for the second square activation, and 1 for the last linear layer.
## Training
Now that we know how we can implement such a model via HE, we will start using a library called [TenSEAL](https://github.com/OpenMined/TenSEAL) that implements all these operations we have been describing. But first, we need to train a plain PyTorch model to classify the MNIST dataset.
```
import torch
from torchvision import datasets
import torchvision.transforms as transforms
import numpy as np
torch.manual_seed(73)
train_data = datasets.MNIST('data', train=True, download=True, transform=transforms.ToTensor())
test_data = datasets.MNIST('data', train=False, download=True, transform=transforms.ToTensor())
batch_size = 64
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, shuffle=True)
class ConvNet(torch.nn.Module):
def __init__(self, hidden=64, output=10):
super(ConvNet, self).__init__()
self.conv1 = torch.nn.Conv2d(1, 4, kernel_size=7, padding=0, stride=3)
self.fc1 = torch.nn.Linear(256, hidden)
self.fc2 = torch.nn.Linear(hidden, output)
def forward(self, x):
x = self.conv1(x)
# the model uses the square activation function
x = x * x
# flattening while keeping the batch axis
x = x.view(-1, 256)
x = self.fc1(x)
x = x * x
x = self.fc2(x)
return x
def train(model, train_loader, criterion, optimizer, n_epochs=10):
# model in training mode
model.train()
for epoch in range(1, n_epochs+1):
train_loss = 0.0
for data, target in train_loader:
optimizer.zero_grad()
output = model(data)
loss = criterion(output, target)
loss.backward()
optimizer.step()
train_loss += loss.item()
# calculate average losses
train_loss = train_loss / len(train_loader)
print('Epoch: {} \tTraining Loss: {:.6f}'.format(epoch, train_loss))
# model in evaluation mode
model.eval()
return model
model = ConvNet()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
model = train(model, train_loader, criterion, optimizer, 10)
```
Then test its accuracy on the test set:
```
def test(model, test_loader, criterion):
# initialize lists to monitor test loss and accuracy
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
# model in evaluation mode
model.eval()
for data, target in test_loader:
output = model(data)
loss = criterion(output, target)
test_loss += loss.item()
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct = np.squeeze(pred.eq(target.data.view_as(pred)))
# calculate test accuracy for each object class
for i in range(len(target)):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# calculate and print avg test loss
test_loss = test_loss/len(test_loader)
print(f'Test Loss: {test_loss:.6f}\n')
for label in range(10):
print(
f'Test Accuracy of {label}: {int(100 * class_correct[label] / class_total[label])}% '
f'({int(np.sum(class_correct[label]))}/{int(np.sum(class_total[label]))})'
)
print(
f'\nTest Accuracy (Overall): {int(100 * np.sum(class_correct) / np.sum(class_total))}% '
f'({int(np.sum(class_correct))}/{int(np.sum(class_total))})'
)
test(model, test_loader, criterion)
```
## Encrypted Evaluation
Now we can start the encrypted evaluation, which will use the pre-trained model:
```
"""
It's a PyTorch-like model using operations implemented in TenSEAL.
- .mm() method is doing the vector-matrix multiplication explained above.
- you can use + operator to add a plain vector as a bias.
- .conv2d_im2col() method is doing a single convlution operation.
- .square_() just square the encrypted vector inplace.
"""
import tenseal as ts
class EncConvNet:
def __init__(self, torch_nn):
self.conv1_weight = torch_nn.conv1.weight.data.view(
torch_nn.conv1.out_channels, torch_nn.conv1.kernel_size[0],
torch_nn.conv1.kernel_size[1]
).tolist()
self.conv1_bias = torch_nn.conv1.bias.data.tolist()
self.fc1_weight = torch_nn.fc1.weight.T.data.tolist()
self.fc1_bias = torch_nn.fc1.bias.data.tolist()
self.fc2_weight = torch_nn.fc2.weight.T.data.tolist()
self.fc2_bias = torch_nn.fc2.bias.data.tolist()
def forward(self, enc_x, windows_nb):
# conv layer
enc_channels = []
for kernel, bias in zip(self.conv1_weight, self.conv1_bias):
y = enc_x.conv2d_im2col(kernel, windows_nb) + bias
enc_channels.append(y)
# pack all channels into a single flattened vector
enc_x = ts.CKKSVector.pack_vectors(enc_channels)
# square activation
enc_x.square_()
# fc1 layer
enc_x = enc_x.mm(self.fc1_weight) + self.fc1_bias
# square activation
enc_x.square_()
# fc2 layer
enc_x = enc_x.mm(self.fc2_weight) + self.fc2_bias
return enc_x
def __call__(self, *args, **kwargs):
return self.forward(*args, **kwargs)
def enc_test(context, model, test_loader, criterion, kernel_shape, stride):
# initialize lists to monitor test loss and accuracy
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
for data, target in test_loader:
# Encoding and encryption
x_enc, windows_nb = ts.im2col_encoding(
context, data.view(28, 28).tolist(), kernel_shape[0],
kernel_shape[1], stride
)
# Encrypted evaluation
enc_output = enc_model(x_enc, windows_nb)
# Decryption of result
output = enc_output.decrypt()
output = torch.tensor(output).view(1, -1)
# compute loss
loss = criterion(output, target)
test_loss += loss.item()
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct = np.squeeze(pred.eq(target.data.view_as(pred)))
# calculate test accuracy for each object class
label = target.data[0]
class_correct[label] += correct.item()
class_total[label] += 1
# calculate and print avg test loss
test_loss = test_loss / sum(class_total)
print(f'Test Loss: {test_loss:.6f}\n')
for label in range(10):
print(
f'Test Accuracy of {label}: {int(100 * class_correct[label] / class_total[label])}% '
f'({int(np.sum(class_correct[label]))}/{int(np.sum(class_total[label]))})'
)
print(
f'\nTest Accuracy (Overall): {int(100 * np.sum(class_correct) / np.sum(class_total))}% '
f'({int(np.sum(class_correct))}/{int(np.sum(class_total))})'
)
# Load one element at a time
test_loader = torch.utils.data.DataLoader(test_data, batch_size=1, shuffle=True)
# required for encoding
kernel_shape = model.conv1.kernel_size
stride = model.conv1.stride[0]
```
Choosing the parameters isn't easy, so here is some intuition for why we have chosen exactly these parameters:
1. For a given security level (e.g. 128-bits security) and a polynomial modulus degree (e.g. 8192) there is an upper bound for the bit count of the coefficient modulus (`sum(coeff_mod_bit_sizes)`). If the upper bound is surpassed, there is a need to use a higher polynomial modulus degree (e.g. 16384) in order to make sure we still have the required security level.
2. The multiplicative depth is controlled by the number of primes constituting our coefficient modulus.
3. All elements of `coeff_mod_bit_sizes[1:-1]` should be equal in TenSEAL, since it takes care of rescaling ciphertexts, and we also want to use the same value (e.g. $2^{26}$) for the scale during encryption.
4. The scale is what controls the precision of the fractional part, since it's the value that plaintexts are multiplied with before being encoded into a polynomial of integer coefficients.
Starting with a scale of more than 20 bits, we need the middle primes to have that same number of bits, so the coefficient modulus is already over 120 bits. With this lower bound on the coefficient modulus and a 128-bit security level, we need a polynomial modulus degree of at least 8192, which caps the total coefficient modulus at 218 bits before a higher degree would be required. Trying different values for the precision and adjusting the coefficient modulus while studying the loss and accuracy, we end up with 26 bits for the scale and the middle primes. This also leaves 5 bits (31 - 26) for the integer part in the last coefficient modulus, which should be enough for our use case, since the output values aren't that big.
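As a quick sanity check on the bit-budget arithmetic above (the 218-bit cap being the one for degree 8192 at 128-bit security):

```python
# one 31-bit prime at each end, plus one 26-bit middle prime
# per rescaling, i.e. per multiplication (6 in total)
bits_scale = 26
coeff_mod_bit_sizes = [31] + [bits_scale] * 6 + [31]
total_bits = sum(coeff_mod_bit_sizes)
assert total_bits == 218  # exactly at the cap for degree 8192
```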
```
## Encryption Parameters
# controls precision of the fractional part
bits_scale = 26
# Create TenSEAL context
context = ts.context(
ts.SCHEME_TYPE.CKKS,
poly_modulus_degree=8192,
coeff_mod_bit_sizes=[31, bits_scale, bits_scale, bits_scale, bits_scale, bits_scale, bits_scale, 31]
)
# set the scale
context.global_scale = pow(2, bits_scale)
# galois keys are required to do ciphertext rotations
context.generate_galois_keys()
```
This will now run the encrypted evaluation over the whole test set. It will take some time, but once it's done you can feel proud of having performed encrypted inference on a test set of 10000 elements, congratulations!
```
enc_model = EncConvNet(model)
enc_test(context, enc_model, test_loader, criterion, kernel_shape, stride)
```
## Cost of the Encrypted Inference
To conclude, I wanted to give you some numbers about the memory and computation costs of this specific use case. Running this on a personal computer with an *Intel(R) Core(TM) i7-3612QM CPU @ 2.10GHz* takes about 2 seconds per encrypted inference. In a real-world use case, this would also require sending the encrypted input from the client to the server, and the encrypted result from the server to the client, so the size of these objects really matters. The encrypted input takes about 476KB, while the encrypted result is only about 70KB.
# Congratulations!!! - Time to Join the Community!
Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement towards privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!
### Star TenSEAL on GitHub
The easiest way to help our community is just by starring the Repos! This helps raise awareness of the cool tools we're building.
- [Star TenSEAL](https://github.com/OpenMined/TenSEAL)
### Join our Slack!
The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org). #lib_tenseal and #code_tenseal are the main channels for the TenSEAL project.
### Join our Team!
If you're excited about what we are working on with TenSEAL, and you're interested in working on homomorphic encryption related use cases, you should definitely join us!
[Apply to the crypto team!](https://docs.google.com/forms/d/1T6MJ21V1lb7aEr4ilZOTYQXzxXP6KbpLumZVmTZMSuY/edit)
### Donate
If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!
[OpenMined's Open Collective Page](https://opencollective.com/openmined)
# End-to-End Data Cleaning Pipeline with Raha and Baran (Detailed Demo)
We build an end-to-end data cleaning pipeline with our configuration-free error detection and correction systems, Raha and Baran.
```
import bz2
import json
import pickle
import numpy
import pandas
import ipywidgets
import IPython.display
import sklearn.decomposition
import scipy.cluster.hierarchy
import matplotlib.pyplot as plt
import raha
```
## 1. Instantiating the Detection and Correction Classes
We first instantiate the `Detection` and `Correction` classes.
```
app_1 = raha.Detection()
app_2 = raha.Correction()
# How many tuples would you label?
app_1.LABELING_BUDGET = 20
app_2.LABELING_BUDGET = 0
# Would you like to see the logs?
app_1.VERBOSE = True
app_2.VERBOSE = True
# Do you want to filter out ineffective error detection strategies?
app_1.STRATEGY_FILTERING = True
app_1.HISTORICAL_DATASETS = [
{
"name": "hospital",
"path": "datasets/hospital/dirty.csv",
"clean_path": "datasets/hospital/clean.csv"
},
{
"name": "beers",
"path": "datasets/beers/dirty.csv",
"clean_path": "datasets/beers/clean.csv"
}
]
# Do you have any pretrained error corrector models to load?
#PRETRAINED_VALUE_BASED_MODELS_PATH = "/media/mohammad/C20E45C80E45B5E7/Projects/raha/supplementaries/models/pretrained_value_based_models_small.dictionary"
#pretrained_models = pickle.load(bz2.BZ2File(PRETRAINED_VALUE_BASED_MODELS_PATH, "rb"))
```
## 2. Instantiating the Dataset
We next load and instantiate the dataset object.
```
dataset_dictionary = {
"name": "flights",
"path": "datasets/flights/dirty.csv",
"clean_path": "datasets/flights/clean.csv"
}
d = app_1.initialize_dataset(dataset_dictionary)
d.dataframe.head()
```
## 3. Running Error Detection Strategies
Raha runs (all, or only the promising) error detection strategies on the dataset. This step could take a while because all the strategies have to be run on the dataset.
```
app_1.run_strategies(d)
optimized_strategies_count = len(d.strategy_profiles)
optimized_runtime = sum([sp["runtime"] for sp in d.strategy_profiles])
original_strategies_count, original_runtime = raha.utilities.get_strategies_count_and_runtime(dataset_dictionary)
approaches = ["Without Strategy Filtering", "With Strategy Filtering"]
plt.style.use("ggplot")
fig = plt.figure()
ax = plt.axes()
x_pos = [0, 1]
ax.bar(x_pos, [original_strategies_count, optimized_strategies_count])
ax.set(ylabel="Number of Strategies", title="Effect of Filtering out Ineffective Strategies");
ax.set_xticks(numpy.arange(len(x_pos)))
_ = ax.set_xticklabels(approaches, rotation=0)
plt.style.use("ggplot")
fig = plt.figure()
ax = plt.axes()
x_pos = [0, 1]
ax.bar(x_pos, [original_runtime, optimized_runtime])
ax.set(ylabel="Runtime of Strategies (s)", title="Effect of Filtering out Ineffective Strategies");
ax.set_xticks(numpy.arange(len(x_pos)))
_ = ax.set_xticklabels(approaches, rotation=0)
strategies_df = pandas.DataFrame(columns=["Name", "Score", "New Column", "Historical Column"])
for sp in d.strategy_profiles:
strategies_df = strategies_df.append({"Name": sp["name"].replace("OD", "Outlier Detection").replace(
"PVD", "Pattern Violation Detection").replace("RVD", "Rule Violation Detection").replace(
"KBVD", "Knowledge Base Violation Detection"), "Score": sp["score"], "New Column": sp["new_column"],
"Historical Column": sp["historical_column"]}, ignore_index=True)
strategies_df.head()
```
## 4. Generating Features
Raha then generates a feature vector for each data cell based on the output of error detection strategies.
```
app_1.generate_features(d)
def callback(row, column):
selected_tuple = pandas.DataFrame(data=[d.dataframe.iloc[int(row), :]], columns=d.dataframe.columns)
IPython.display.display(selected_tuple)
features_df = pandas.DataFrame(columns=["Name", "Value"])
for strategy_profile in d.strategy_profiles:
strategy_name = json.loads(strategy_profile["name"])
value = 0
for cell in strategy_profile["output"]:
if cell == (int(row), int(column)):
value = 1
features_df = features_df.append({"Name": strategy_name, "Value": value}, ignore_index=True)
IPython.display.display(features_df.sort_values("Value", ascending=False))
interactive_text = ipywidgets.interactive(callback, row="100", column="6")
interactive_text
```
## 5. Building Clusters
Raha next builds a hierarchical clustering model for our clustering-based sampling approach.
```
app_1.build_clusters(d)
def callback(attribute):
column = d.dataframe.columns.get_loc(attribute)
features = d.column_features[column]
plt.figure(figsize=(20, 7))
plt.title("Data Cells Dendrograms")
linkage = scipy.cluster.hierarchy.linkage(features[:50], method="average")
dend = scipy.cluster.hierarchy.dendrogram(linkage, labels=range(50))
_ = ipywidgets.interact(callback, attribute=ipywidgets.Dropdown(options=d.dataframe.columns, value=d.dataframe.columns[-1]))
def inspect_features(cell_list):
features_df = pandas.DataFrame(columns=["Cell", "Value", "Strategies"])
for c in cell_list:
strategies = []
for strategy_profile in d.strategy_profiles:
strategy_name = json.loads(strategy_profile["name"])
for cell in strategy_profile["output"]:
if cell == c:
strategies.append(strategy_profile["name"])
features_df = features_df.append({"Cell": c, "Value": d.dataframe.iloc[c],
"Strategies": len(strategies)}, ignore_index=True)
IPython.display.display(features_df)
IPython.display.display(pandas.DataFrame({"Strategies": strategies}))
first_cluster = [(37, 6), (36, 6), (33, 6), (32, 6), (31, 6), (5, 6), (1, 6), (2, 6)]
inspect_features(first_cluster)
second_cluster = [(43, 6), (34, 6), (11, 6), (22, 6)]
inspect_features(second_cluster)
```
## 6. Interactive Tuple Sampling and Labeling
Raha then iteratively samples a tuple. We should label data cells of each sampled tuple.
```
def on_button_clicked(_):
for j in range(0, len(texts)):
cell = (d.sampled_tuple, j)
error_label = 0
correction = texts[j].value
if d.dataframe.iloc[cell] != correction:
error_label = 1
d.labeled_cells[cell] = [error_label, correction]
d.labeled_tuples[d.sampled_tuple] = 1
app_1.sample_tuple(d)
print("Fix the dirty cells in the following sampled tuple.")
sampled_tuple = pandas.DataFrame(data=[d.dataframe.iloc[d.sampled_tuple, :]], columns=d.dataframe.columns)
IPython.display.display(sampled_tuple)
texts = [ipywidgets.Text(value=d.dataframe.iloc[d.sampled_tuple, j]) for j in range(d.dataframe.shape[1])]
button = ipywidgets.Button(description="Save the Annotation")
button.on_click(on_button_clicked)
output = ipywidgets.VBox(children=texts + [button])
IPython.display.display(output)
model_names = ["Identity+Remover", "Unicode+Remover", "Identity+Adder", "Unicode+Adder",
"Identity+Replacer", "Unicode+Replacer", "Identity+Swapper", "Unicode+Swapper"]
def callback(old_value, new_value):
corrections = raha.Correction()._value_based_corrector(pretrained_models, {"old_value": old_value, "new_value": new_value})
annotation_df = pandas.DataFrame(columns=["Model", "Probability"])
for m, model in enumerate(corrections):
p = model[new_value] if new_value in model else 0
annotation_df = annotation_df.append({"Model": model_names[m], "Probability": "{:.2f}".format(p)}, ignore_index=True)
IPython.display.display(annotation_df)
interactive_text = ipywidgets.interactive(callback, old_value="x10:00", new_value="10:00")
interactive_text
```
For the sake of time, we use the ground truth of the dataset to label tuples below.
```
%%capture
while len(d.labeled_tuples) < app_1.LABELING_BUDGET:
app_1.sample_tuple(d)
if d.has_ground_truth:
app_1.label_with_ground_truth(d)
def callback(attribute):
column = d.dataframe.columns.get_loc(attribute)
features = d.column_features[column]
features = (features - features.min())/(features.max() - features.min())
pca = sklearn.decomposition.PCA(n_components=2)
transformed = pandas.DataFrame(pca.fit_transform(features))
clean_indexes = [True if (i, column) in d.labeled_cells and d.labeled_cells[(i, column)][0] == 0
else False for i in range(d.dataframe.shape[0])]
clean_data_cells = transformed[clean_indexes]
dirty_indexes = [True if (i, column) in d.labeled_cells and d.labeled_cells[(i, column)][0] == 1
else False for i in range(d.dataframe.shape[0])]
dirty_data_cells = transformed[dirty_indexes]
unlabeled_indexes = [True if (i, column) not in d.labeled_cells else False for i in range(d.dataframe.shape[0])]
unlabeled_data_cells = transformed[unlabeled_indexes]
plt.style.use("ggplot")
fig = plt.figure()
ax = plt.axes()
plt.scatter(unlabeled_data_cells[0], unlabeled_data_cells[1], label="Unlabeled Data Cells", c="gray")
plt.scatter(clean_data_cells[0], clean_data_cells[1], label="Clean Data Cells", c="green")
plt.scatter(dirty_data_cells[0], dirty_data_cells[1], label="Dirty Data Cells", c="red")
plt.legend()
plt.show()
_ = ipywidgets.interact(callback, attribute=ipywidgets.Dropdown(options=d.dataframe.columns, value=d.dataframe.columns[-1]))
```
## 7. Propagating User Labels
Raha then propagates each user label through its cluster.
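Conceptually, propagation gives every cell in a cluster the label of the user-labeled cell it contains. A minimal stand-alone illustration with scipy (toy features and labels, not Raha's internals):

```python
import numpy as np
import scipy.cluster.hierarchy as sch

# toy feature vectors for 6 data cells of one column
features = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0],
                     [0.1, 0.9], [1.0, 0.1], [0.0, 0.8]])
linkage = sch.linkage(features, method="average")
clusters = sch.fcluster(linkage, t=2, criterion="maxclust")
# the user labeled cell 0 as dirty (1) and cell 2 as clean (0);
# each label propagates to every member of the same cluster
user_labels = {0: 1, 2: 0}
propagated = {i: label
              for j, label in user_labels.items()
              for i in np.where(clusters == clusters[j])[0]}
print(sorted(propagated.items()))
```

With two labeled cells, all six cells of the column end up labeled.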
```
app_1.propagate_labels(d)
def callback(attribute):
column = d.dataframe.columns.get_loc(attribute)
features = d.column_features[column]
features = (features - features.min())/(features.max() - features.min())
pca = sklearn.decomposition.PCA(n_components=2)
transformed = pandas.DataFrame(pca.fit_transform(features))
clean_indexes = [True if (i, column) in d.extended_labeled_cells and d.extended_labeled_cells[(i, column)] == 0
else False for i in range(d.dataframe.shape[0])]
clean_data_cells = transformed[clean_indexes]
dirty_indexes = [True if (i, column) in d.extended_labeled_cells and d.extended_labeled_cells[(i, column)] == 1
else False for i in range(d.dataframe.shape[0])]
dirty_data_cells = transformed[dirty_indexes]
unlabeled_indexes = [True if (i, column) not in d.extended_labeled_cells else False for i in range(d.dataframe.shape[0])]
unlabeled_data_cells = transformed[unlabeled_indexes]
plt.style.use("ggplot")
fig = plt.figure()
ax = plt.axes()
plt.scatter(unlabeled_data_cells[0], unlabeled_data_cells[1], label="Unlabeled Data Cells", c="gray")
plt.scatter(clean_data_cells[0], clean_data_cells[1], label="Clean Data Cells", c="green")
plt.scatter(dirty_data_cells[0], dirty_data_cells[1], label="Dirty Data Cells", c="red")
plt.legend()
plt.show()
_ = ipywidgets.interact(callback, attribute=ipywidgets.Dropdown(options=d.dataframe.columns, value=d.dataframe.columns[-1]))
```
## 8. Predicting Labels of Data Cells
Raha then trains and applies one classifier per data column to predict the label of the rest of data cells.
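In plain scikit-learn terms, the per-column step looks roughly like this; the strategy-output features, labels, and classifier choice are illustrative, not necessarily Raha's own.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# toy column: 8 cells, 3 error detection strategy outputs per cell
features = np.array([[1, 0, 1], [1, 0, 1], [0, 0, 0], [0, 1, 0],
                     [1, 0, 1], [0, 0, 0], [0, 1, 1], [0, 0, 0]])
labeled = {0: 1, 2: 0, 3: 1, 5: 0}  # cell index -> dirty (1) / clean (0)
X = features[list(labeled)]
y = list(labeled.values())
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
pred = clf.predict(features)  # predict a label for every cell of the column
```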
```
app_1.predict_labels(d)
def callback(attribute):
column = d.dataframe.columns.get_loc(attribute)
features = d.column_features[column]
features = (features - features.min())/(features.max() - features.min())
pca = sklearn.decomposition.PCA(n_components=2)
transformed = pandas.DataFrame(pca.fit_transform(features))
clean_indexes = [True if (i, column) not in d.detected_cells else False for i in range(d.dataframe.shape[0])]
clean_data_cells = transformed[clean_indexes]
dirty_indexes = [True if (i, column) in d.detected_cells else False for i in range(d.dataframe.shape[0])]
dirty_data_cells = transformed[dirty_indexes]
plt.style.use("ggplot")
fig = plt.figure()
ax = plt.axes()
plt.scatter(clean_data_cells[0], clean_data_cells[1], label="Clean Data Cells", c="green")
plt.scatter(dirty_data_cells[0], dirty_data_cells[1], label="Dirty Data Cells", c="red")
plt.legend()
plt.show()
_ = ipywidgets.interact(callback, attribute=ipywidgets.Dropdown(options=d.dataframe.columns, value=d.dataframe.columns[-1]))
def callback(data_cell):
c = tuple(json.loads(data_cell))
selected_tuple = pandas.DataFrame(data=[d.dataframe.iloc[c[0], :]], columns=d.dataframe.columns)
IPython.display.display(selected_tuple)
features_df = pandas.DataFrame(columns=["Name", "Value"])
for strategy_profile in d.strategy_profiles:
strategy_name = json.loads(strategy_profile["name"])
for cell in strategy_profile["output"]:
if cell == c:
features_df = features_df.append({"Name": strategy_name, "Value": 1}, ignore_index=True)
IPython.display.display(features_df)
column = c[1]
features = d.column_features[column]
features = (features - features.min())/(features.max() - features.min())
pca = sklearn.decomposition.PCA(n_components=2)
transformed = pandas.DataFrame(pca.fit_transform(features))
clean_indexes = [True if (i, column) not in d.detected_cells else False for i in range(d.dataframe.shape[0])]
clean_data_cells = transformed[clean_indexes]
dirty_indexes = [True if (i, column) in d.detected_cells else False for i in range(d.dataframe.shape[0])]
dirty_data_cells = transformed[dirty_indexes]
selected_dirty_indexes = [True if i == c[0] else False for i in range(d.dataframe.shape[0])]
selected_dirty_data_cell = transformed[selected_dirty_indexes]
plt.style.use("ggplot")
fig = plt.figure()
ax = plt.axes()
plt.scatter(clean_data_cells[0], clean_data_cells[1], label="Clean Data Cells", c="green")
plt.scatter(dirty_data_cells[0], dirty_data_cells[1], label="Dirty Data Cells", c="red")
plt.scatter(selected_dirty_data_cell[0], selected_dirty_data_cell[1], label="Selected Dirty Data Cells", c="blue")
plt.legend()
plt.show()
_ = ipywidgets.interact(callback, data_cell=[json.dumps(cell) for cell in d.detected_cells])
errors_per_attribute = d.dataframe.shape[1] * [0]
for cell in d.detected_cells:
errors_per_attribute[cell[1]] += 1
plt.style.use("ggplot")
fig = plt.figure()
ax = plt.axes()
x_pos = range(len(d.dataframe.columns))
ax.bar(x_pos, errors_per_attribute)
ax.set(ylabel="Data Errors per Attribute", title="Error Detection Progress");
ax.set_xticks(numpy.arange(len(x_pos)))
_ = ax.set_xticklabels(d.dataframe.columns, rotation=22)
```
## 9. Initializing and Updating the Error Corrector Models
Baran initializes the error corrector models. Baran then iteratively samples a tuple; we should label the data cells of each sampled tuple. It then updates the models accordingly and generates a feature vector for each pair of a data error and a correction candidate. Finally, it trains and applies a classifier to each data column to predict the final correction of each data error. Since we already labeled tuples for Raha, we reuse the same labeled tuples and do not label new tuples here.
```
corrections_per_labels = [0]
app_2.initialize_models(d)
app_2.initialize_dataset(d)
for si in d.labeled_tuples:
d.sampled_tuple = si
app_2.update_models(d)
app_2.generate_features(d)
app_2.predict_corrections(d)
corrections_per_labels.append(len(d.corrected_cells))
def callback(data_cell):
c = tuple(json.loads(data_cell))
selected_tuple = pandas.DataFrame(data=[d.dataframe.iloc[c[0], :]], columns=d.dataframe.columns)
IPython.display.display(selected_tuple)
features_df = pandas.DataFrame(columns=["Erroneus Value", "Correction", "Model", "Probability"])
correction = d.corrected_cells[c]
error_dictionary = {"column": c[1], "old_value": d.dataframe.iloc[c], "vicinity": list(d.dataframe.iloc[c[0], :])}
value_corrections = app_2._value_based_corrector(d.value_models, error_dictionary)
vicinity_corrections = app_2._vicinity_based_corrector(d.vicinity_models, error_dictionary)
domain_corrections = app_2._domain_based_corrector(d.domain_models, error_dictionary)
models_corrections = value_corrections + vicinity_corrections + domain_corrections
for mi, model in enumerate(models_corrections):
p = 0
if correction in model:
p = model[correction]
name = ""
if mi == len(models_corrections) - 1:
name = "Domain " + d.dataframe.columns[c[1]]
elif mi < len(model_names):
name = model_names[mi]
else:
name = "{} -> {}".format(d.dataframe.columns[mi - 8], d.dataframe.columns[c[1]])
features_df = features_df.append({"Erroneus Value": d.dataframe.iloc[c], "Correction": correction,
"Model": name, "Probability": "{:.2f}".format(p)}, ignore_index=True)
IPython.display.display(features_df)
column = c[1]
features = d.column_features[column]
features = (features - features.min())/(features.max() - features.min())
pca = sklearn.decomposition.PCA(n_components=2)
transformed = pandas.DataFrame(pca.fit_transform(features))
clean_indexes = [True if (i, column) not in d.detected_cells else False for i in range(d.dataframe.shape[0])]
clean_data_cells = transformed[clean_indexes]
dirty_indexes = [True if (i, column) in d.detected_cells else False for i in range(d.dataframe.shape[0])]
dirty_data_cells = transformed[dirty_indexes]
selected_dirty_indexes = [True if i == c[0] else False for i in range(d.dataframe.shape[0])]
selected_dirty_data_cell = transformed[selected_dirty_indexes]
plt.style.use("ggplot")
fig = plt.figure()
ax = plt.axes()
plt.scatter(clean_data_cells[0], clean_data_cells[1], label="Clean Data Cells", c="green")
plt.scatter(dirty_data_cells[0], dirty_data_cells[1], label="Dirty Data Cells", c="red")
plt.scatter(selected_dirty_data_cell[0], selected_dirty_data_cell[1], label="Selected Dirty Data Cells", c="blue")
plt.legend()
plt.show()
_ = ipywidgets.interact(callback, data_cell=[json.dumps(cell) for cell in d.corrected_cells])
plt.style.use("ggplot")
fig = plt.figure()
ax = plt.axes()
ax.plot(range(0, len(d.labeled_tuples) + 1), corrections_per_labels)
_ = ax.set(xlim=(0, len(d.labeled_tuples) + 1), xticks = range(0, len(d.labeled_tuples) + 1, 2),
xlabel="Labeled Tuples", ylabel="Corrected Data Errors", title="Error Correction Progress")
correction_candidates = {}
total_correction_candidates = 0
actual_correction_candidates = 0
for cell in d.detected_cells:
correction_candidates[cell] = {}
error_dictionary = {"column": cell[1], "old_value": d.dataframe.iloc[cell], "vicinity": list(d.dataframe.iloc[cell[0], :])}
value_corrections = app_2._value_based_corrector(pretrained_models, error_dictionary)
for model in value_corrections:
for value in model:
correction_candidates[cell][value] = 1
total_correction_candidates += 1
if value == d.clean_dataframe.iloc[cell]:
actual_correction_candidates += 1
plt.style.use("ggplot")
fig = plt.figure(figsize=(10, 7))
ax = plt.axes()
x_pos = [0, 1]
ax.bar(x_pos, [total_correction_candidates, actual_correction_candidates])
ax.set_yscale("log")
ax.set(title="Effect of Pretraining Value-Based Models");
ax.set_xticks(numpy.arange(len(x_pos)))
_ = ax.set_xticklabels(["Total Additional Correction Candidates", "Actual Additional Correction Candidates"], rotation=0)
```
## 10. Storing Results
Both Raha and Baran can also store the error detection/correction results.
```
app_1.store_results(d)
app_2.store_results(d)
```
## 11. Evaluating the Data Cleaning Task
We can finally evaluate our data cleaning task.
```
edp, edr, edf = d.get_data_cleaning_evaluation(d.detected_cells)[:3]
ecp, ecr, ecf = d.get_data_cleaning_evaluation(d.corrected_cells)[-3:]
evaluation_df = pandas.DataFrame(columns=["Task", "Precision", "Recall", "F1 Score"])
evaluation_df = evaluation_df.append({"Task": "Error Detection (Raha)", "Precision": "{:.2f}".format(edp),
"Recall": "{:.2f}".format(edr), "F1 Score": "{:.2f}".format(edf)}, ignore_index=True)
evaluation_df = evaluation_df.append({"Task": "Error Correction (Baran)", "Precision": "{:.2f}".format(ecp),
"Recall": "{:.2f}".format(ecr), "F1 Score": "{:.2f}".format(ecf)}, ignore_index=True)
evaluation_df.head()
```
Please go through Giba's post and kernel to understand what this leak is all about:
https://www.kaggle.com/titericz/the-property-by-giba (kernel)
https://www.kaggle.com/c/santander-value-prediction-challenge/discussion/61329 (post)
Also, go through Jiazhen's kernel, which finds more columns to exploit the leak:
https://www.kaggle.com/johnfarrell/giba-s-property-extended-result
I just exploit the data property in a brute-force way and then fill in the remaining rows with row non-zero means! This should bring everyone onto a level playing field.
**Let the competition begin! :D**
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
print(os.listdir("../input"))
import lightgbm as lgb
from sklearn.model_selection import *
from sklearn.metrics import mean_squared_error, make_scorer
from scipy.stats import mode, skew, kurtosis, entropy
from sklearn.ensemble import ExtraTreesRegressor
import matplotlib.pyplot as plt
import seaborn as sns
import dask.dataframe as dd
from dask.multiprocessing import get
from tqdm import tqdm, tqdm_notebook
tqdm.pandas(tqdm_notebook)
# Any results you write to the current directory are saved as output.
train = pd.read_csv("../input/train.csv")
test = pd.read_csv("../input/test.csv")
transact_cols = [f for f in train.columns if f not in ["ID", "target"]]
y = np.log1p(train["target"]).values
```
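Before applying it at scale, the shift-and-match idea can be checked on a toy two-row frame (values invented): row 1 is row 0 advanced by two time steps, so row 1's leaked target is row 0's first value.

```python
import numpy as np
import pandas as pd

# toy frame: row 1 is row 0 shifted forward by two time steps
df = pd.DataFrame([[9.0, 8.0, 7.0, 6.0, 5.0],
                   [7.0, 6.0, 5.0, 4.0, 3.0]], columns=list("abcde"))
cols = list("abcde")
lag = 0
# string of values after dropping the first two time steps
series_str = df[cols[lag+2:]].apply(lambda x: "_".join(x.round(2).astype(str)), axis=1)
# each row shifted by two steps, stringified the same way
series_shifted_str = df[cols].shift(lag+2, axis=1)[cols[lag+2:]].apply(
    lambda x: "_".join(x.round(2).astype(str)), axis=1)
# find the row whose tail matches this row's shifted head ...
target_rows = series_shifted_str.apply(lambda x: np.where(x == series_str)[0])
# ... and take its first time step as the leaked value (0 if no unique match)
target_vals = target_rows.apply(lambda x: df.loc[x[0], cols[lag]] if len(x) == 1 else 0)
print(target_vals.tolist())
```

Row 0 finds no match (its past is not in the frame), while row 1 recovers 9.0 from row 0.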
We take time series columns from [here](https://www.kaggle.com/johnfarrell/giba-s-property-extended-result)
```
cols = ['f190486d6', '58e2e02e6', 'eeb9cd3aa', '9fd594eec', '6eef030c1',
'15ace8c9f', 'fb0f5dbfe', '58e056e12', '20aa07010', '024c577b9',
'd6bb78916', 'b43a7cfd5', '58232a6fb', '1702b5bf0', '324921c7b',
'62e59a501', '2ec5b290f', '241f0f867', 'fb49e4212', '66ace2992',
'f74e8f13d', '5c6487af1', '963a49cdc', '26fc93eb7', '1931ccfdd',
'703885424', '70feb1494', '491b9ee45', '23310aa6f', 'e176a204a',
'6619d81fc', '1db387535', 'fc99f9426', '91f701ba2', '0572565c2',
'190db8488', 'adb64ff71', 'c47340d97', 'c5a231d81', '0ff32eb98']
from multiprocessing import Pool
CPU_CORES = 1
def _get_leak(df, cols, lag=0):
""" To get leak value, we do following:
1. Get string of all values after removing first two time steps
2. For all rows we shift the row by two steps and again make a string
3. Just find rows where string from 2 matches string from 1
4. Get 1st time step of row in 3 (Currently, there is additional condition to only fetch value if we got exactly one match in step 3)"""
series_str = df[cols[lag+2:]].apply(lambda x: "_".join(x.round(2).astype(str)), axis=1)
series_shifted_str = df[cols].shift(lag+2, axis=1)[cols[lag+2:]].apply(lambda x: "_".join(x.round(2).astype(str)), axis=1)
target_rows = series_shifted_str.progress_apply(lambda x: np.where(x == series_str)[0])
target_vals = target_rows.apply(lambda x: df.loc[x[0], cols[lag]] if len(x)==1 else 0)
return target_vals
def get_all_leak(df, cols=None, nlags=15):
"""
We just recursively fetch target value for different lags
"""
df = df.copy()
#with Pool(processes=CPU_CORES) as p:
# res = [p.apply_async(_get_leak, args=(df, cols, i)) for i in range(nlags)]
# res = [r.get() for r in res]
for i in range(nlags):
print("Processing lag {}".format(i))
df["leaked_target_"+str(i)] = _get_leak(df, cols, i)
return df
test["target"] = train["target"].mean()
all_df = pd.concat([train[["ID", "target"] + cols], test[["ID", "target"]+ cols]]).reset_index(drop=True)
all_df.head()
NLAGS = 25 #Increasing this might help push score a bit
all_df = get_all_leak(all_df, cols=cols, nlags=NLAGS)
leaky_cols = ["leaked_target_"+str(i) for i in range(NLAGS)]
train = train.join(all_df.set_index("ID")[leaky_cols], on="ID", how="left")
test = test.join(all_df.set_index("ID")[leaky_cols], on="ID", how="left")
train[["target"]+leaky_cols].head(10)
train["nonzero_mean"] = train[transact_cols].apply(lambda x: np.expm1(np.log1p(x[x!=0]).mean()), axis=1)
test["nonzero_mean"] = test[transact_cols].apply(lambda x: np.expm1(np.log1p(x[x!=0]).mean()), axis=1)
# We start with the 1st-lag target and recursively fill remaining zeros with later lags
train["compiled_leak"] = 0
test["compiled_leak"] = 0
for i in range(NLAGS):
train.loc[train["compiled_leak"] == 0, "compiled_leak"] = train.loc[train["compiled_leak"] == 0, "leaked_target_"+str(i)]
test.loc[test["compiled_leak"] == 0, "compiled_leak"] = test.loc[test["compiled_leak"] == 0, "leaked_target_"+str(i)]
print("Leak values found in train and test ", sum(train["compiled_leak"] > 0), sum(test["compiled_leak"] > 0))
print("% of correct leaks values in train ", sum(train["compiled_leak"] == train["target"])/sum(train["compiled_leak"] > 0))
train.loc[train["compiled_leak"] == 0, "compiled_leak"] = train.loc[train["compiled_leak"] == 0, "nonzero_mean"]
test.loc[test["compiled_leak"] == 0, "compiled_leak"] = test.loc[test["compiled_leak"] == 0, "nonzero_mean"]
from sklearn.metrics import mean_squared_error
np.sqrt(mean_squared_error(y, np.log1p(train["compiled_leak"]).fillna(14.49)))
#submission
sub = test[["ID"]].copy()  # explicit copy to avoid SettingWithCopyWarning
sub["target"] = test["compiled_leak"]
sub.to_csv("baseline_submission_with_leaks.csv", index=False)
```
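As a sanity check of the fill logic in the loop above, here is a self-contained toy version (invented data, not from the competition): earlier lags take precedence, and rows still at zero after all lags fall back to the non-zero mean.

```
import pandas as pd

# Toy version of the compiled-leak fill: take the first non-zero leaked value
# across lags, then fall back to the row's non-zero mean.
toy = pd.DataFrame({
    "leaked_target_0": [0.0, 7.0, 0.0],
    "leaked_target_1": [3.0, 9.0, 0.0],
    "nonzero_mean":    [5.0, 5.0, 5.0],
})
toy["compiled_leak"] = 0.0
for i in range(2):
    mask = toy["compiled_leak"] == 0
    toy.loc[mask, "compiled_leak"] = toy.loc[mask, "leaked_target_" + str(i)]
# rows with no leaked value at any lag get the fallback
mask = toy["compiled_leak"] == 0
toy.loc[mask, "compiled_leak"] = toy.loc[mask, "nonzero_mean"]
print(toy["compiled_leak"].tolist())  # row 1 keeps its lag-0 value; row 2 uses the fallback
```

Note that row 1 keeps its lag-0 value (7.0) even though lag 1 also produced one, mirroring the precedence in the loop above.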
| github_jupyter |
```
import tensorflow as tf
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.45)
tf.enable_eager_execution(config=tf.ConfigProto(gpu_options=gpu_options))
import time
from pathlib import Path
import matplotlib.pyplot as plt
from IPython.display import clear_output
from shared import make_dataset, random_jitter, Generator, Discriminator, normalize, \
train_step, generate_images, generate_plot
%matplotlib inline
PATH = Path("/scratch/datasets/astro_deconv_2019/")
CHECKPOINT_PREFIX = Path('training_checkpoints/gan')
BUFFER_SIZE = 200
BATCH_SIZE = 1
IMG_SIZE = 256
OUTPUT_CHANNELS = 1
LAMBDA = 100
EPOCHS = 5
LR = 0.001
TEST_INTERVAL = 100
CHECKPOINT_INTERVAL = 5000
train_dirty_dataset = make_dataset(PATH / 'train/*-dirty.fits')
train_skymodel_dataset = make_dataset(PATH / 'train/*-skymodel.fits')
train_psf_dataset = make_dataset(PATH / 'train/*-psf.fits')
train_clean_beam_dataset = make_dataset(PATH / 'train/*-clean-beam.fits')
train_dataset = tf.data.Dataset.zip((train_dirty_dataset, train_skymodel_dataset, train_psf_dataset, train_clean_beam_dataset))
train_dataset = train_dataset.map(random_jitter)
train_dataset = train_dataset.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.batch(1)
test_dirty_dataset = make_dataset(PATH / 'test/*-dirty.fits')
test_wsclean_dataset = make_dataset(PATH / 'test/*-wsclean-model.fits')
test_skymodel_dataset = make_dataset(PATH / 'test/*-skymodel.fits')
test_psf_dataset = make_dataset(PATH / 'test/*-psf.fits')
test_clean_beam_dataset = make_dataset(PATH / 'test/*-clean-beam.fits')
test_dataset = tf.data.Dataset.zip((test_dirty_dataset, test_skymodel_dataset, test_psf_dataset, test_clean_beam_dataset, test_wsclean_dataset))
test_dataset = test_dataset.shuffle(BUFFER_SIZE)
test_dataset = test_dataset.batch(1)
generator = Generator(IMG_SIZE=IMG_SIZE, OUTPUT_CHANNELS=OUTPUT_CHANNELS)
discriminator = Discriminator()
loss_object = tf.keras.losses.BinaryCrossentropy(from_logits=True)
generator_optimizer = tf.train.AdamOptimizer(learning_rate=LR, beta1=0.5)
discriminator_optimizer = tf.train.AdamOptimizer(learning_rate=LR, beta1=0.5)
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=generator,
discriminator=discriminator)
```
# Train the model
```
l1s = []
gans = []
for epoch in range(EPOCHS):
step = 0
start = time.time()
for input_, target, psf, clean_beam in train_dataset:
min_, max_, input_, target, = normalize(input_, target)
train_step(loss_object, generator, generator_optimizer,
discriminator_optimizer, discriminator, input_, target, LAMBDA)
step += 1
print(".", end = '')
if (step + 1) % TEST_INTERVAL == 0:
clear_output(wait=True)
for test_input, test_target, test_psf, test_cleanbeam, test_wsclean in test_dataset.take(1):
r = normalize(test_input, test_target, test_wsclean)
min_, max_, test_input, test_target, wsclean = r
test_prediction = generator(test_input, training=True)
disc_real_output = discriminator([test_input, test_target], training=True)
disc_generated_output = discriminator([test_input, test_prediction], training=True)
gan_loss = loss_object(tf.ones_like(disc_generated_output), disc_generated_output)
l1_loss = tf.reduce_mean(tf.abs(test_target - test_prediction))
print(f"l1_loss: {l1_loss:.4f} gan_loss: {gan_loss:.4f}")
l1s.append(l1_loss)
gans.append(gan_loss)
generate_images(test_prediction, test_input, test_target)
generate_plot([i.numpy() for i in l1s], 'l1')
generate_plot([i.numpy() for i in gans], 'gan')
duration = time.time()-start
speed = step / duration
print(f"step: {step + 1} epoch: {epoch + 1} duration: {duration:.2f}s step/s: {speed:.2f}\n")
if (step + 1) % CHECKPOINT_INTERVAL == 0:
checkpoint.save(file_prefix=str(CHECKPOINT_PREFIX))
```
| github_jupyter |
```
import dash
import dash_core_components as dcc
import dash_html_components as html
import json
from textwrap import dedent as d
from dash.dependencies import Input, Output
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
#Read dataset
df = pd.read_csv("Detector_1.csv")
df1 = pd.read_csv("Detector_2.csv")
df2 = pd.read_csv("Detector_3.csv")
#List of dataset columns
available_indicators = list(dict.fromkeys(list(df) + list(df1) + list(df2)))  # combine columns of all three detectors
#Initialize Dash
app = dash.Dash(__name__)
app.scripts.config.serve_locally = True
app.css.config.serve_locally = True
styles = {
'pre': {
'border': 'thin lightgrey solid',
'overflowX': 'scroll'
}
}
app.layout = html.Div([
html.Div([
html.H1('Detectors Data Analysis',style={"textAlign": "center"}),
html.P('By: Keerthi Prakash')],style = {'padding' : '50px' ,'backgroundColor' : '#3aaab2', "textAlign": "center"}),
dcc.Markdown('''
    Welcome to my Plotly (Dash) interactive data science dashboard. Several different datasets were used to build it.
    We plot the hourly distribution of each detector as a histogram.
    Plotting the hourly distribution for each detector shows that, in general, the detectors peak in the
    afternoon and are much less likely to be triggered during the night and early morning,
    which may indicate that the triggering objects are biological in nature (or tied to the day-night cycle)''') ,
html.H1("Detectors Hourly Distribution Histogram Plot", style={'textAlign': 'center', 'padding-top': 5}),
dcc.Graph(
id='basic-interactions',
figure={
'data': [
{
'x': df['Hour'],
'text': df['Hour'],
'customdata': df['DateTime'],
'name': 'Detector 1 - Hourly Distribution',
'type': 'histogram'
},
{
'x': df1['Hour'],
'text': df1['Hour'],
'customdata': df1['DateTime'],
'name': 'Detector 2 - Hourly Distribution',
'type': 'histogram'
},
{
'x': df2['Hour'],
'text': df2['Hour'],
'customdata': df2['DateTime'],
'name': 'Detector 3 - Hourly Distribution',
'type': 'histogram'
}
],
'layout': {}
}
),
html.Div(className='row', children=[
html.Div([
dcc.Markdown(d("""
**Hover Data**
Mouse over values in the graph.
""")),
html.Pre(id='hover-data', style=styles['pre'])
], className='three columns'),
html.Div([
dcc.Markdown(d("""
**Click Data**
Click on points in the graph.
""")),
html.Pre(id='click-data', style=styles['pre']),
], className='three columns'),
html.Div([
dcc.Markdown(d("""
**Selection Data**
Choose the lasso or rectangle tool in the graph's menu
bar and then select points in the graph.
""")),
html.Pre(id='selected-data', style=styles['pre']),
], className='three columns'),
html.Div([
dcc.Markdown(d("""
**Zoom and Relayout Data**
Click and drag on the graph to zoom or click on the zoom
buttons in the graph's menu bar.
Clicking on legend items will also fire
this event.
""")),
html.Pre(id='relayout-data', style=styles['pre']),
], className='three columns')
])
])
@app.callback(
Output ('hover-data', 'children'),
[Input('basic-interactions', 'hoverData')])
def display_hover_data(hoverData):
return json.dumps(hoverData, indent=2)
@app.callback(
Output('click-data', 'children'),
[Input('basic-interactions', 'clickData')])
def display_click_data(clickData):
return json.dumps(clickData, indent=2)
@app.callback(
Output('selected-data', 'children'),
[Input('basic-interactions', 'selectedData')])
def display_selected_data(selectedData):
return json.dumps(selectedData, indent=2)
@app.callback(
Output('relayout-data', 'children'),
[Input('basic-interactions', 'relayoutData')])
def display_relayout_data(relayoutData):
return json.dumps(relayoutData, indent=2)
if __name__ == '__main__':
app.run_server(debug=False)
```
| github_jupyter |
- title: Cox's Theorem: Establishing Probability Theory
- summary: Cox's theorem is the strongest argument for the use of standard probability theory. Here we examine the axioms to establish a firm foundation for the interpretation of probability theory as the unique extension of true-false logic to degrees of belief.
- author: Daniel Cox
- date: 2019-11-03
- category: arXiv highlights
- image: /static/images/arXiv.gif
# Ranging farther afield
Today I'll be taking advantage of my stated intention to pull back from the stream of _recent_ papers, and look at some papers for their impact or fundamental importance as I see it. So today I'm doing something unusual, highlighting a paper not from last week, but from _four years_ ago, and not directly from AI, but from the field of probability theory: [Cox's Theorem and the Jaynesian Interpretation of Probability](https://arxiv.org/abs/1507.06597).
I've been reading a book by E. T. Jaynes, called [Probability Theory: The Logic of Science](https://www.amazon.com/Probability-Theory-Science-T-Jaynes/dp/0521592712), a brilliant and practical exposition of the Bayesian view of probability theory, partially on [the recommendation of another AI researcher](https://www.lesswrong.com/posts/kXSETKZ3X9oidMozA/the-level-above-mine). The thoughts of an ideal reasoner would have Bayesian structure, so I am both personally and professionally interested in mastering the concepts.
# Overview
Cox's theorem is an attempt to derive probability theory from a small, common-sense set of uncontroversial desiderata, and to demonstrate its uniqueness as an extension of two-valued (true/false) logic to degrees of belief. That's a big deal. As today's paper mentions, Peter Cheeseman [has called](https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-8640.1988.tb00091.x) Cox's theorem the "strongest argument for the use of standard (Bayesian) probability theory". But Cox's theorem is non-rigorous as originally formulated, and many people have patched up the holes for use in their various fields. Today, if someone refers to "Cox's theorem", they usually mean one of the fixed-up versions.
Jaynes' version unfortunately contains a mistake, and today's paper fixes it by replacing some of the axioms with the simple requirement that probability theory remain consistent with respect to repeated events.
It may be difficult without reading the book to see why this paper is important to AI, so perhaps in the near future I'll discuss that at greater length. For today, however, I'll simply be explaining each of the axioms, and setting you up to read the paper more easily. It is certainly worth a close reading, to ground your confidence in the interpretation of probability theory as a _logical system_ that extends true-false logic to handle uncertainty, so you can reap the associated benefits.
# Abstract
> There are multiple proposed interpretations of probability theory: one such interpretation is true-false logic under uncertainty. Cox's Theorem is a representation theorem that states, under a certain set of axioms describing the meaning of uncertainty, that every true-false logic under uncertainty is isomorphic to conditional probability theory. This result was used by Jaynes to develop a philosophical framework in which statistical inference under uncertainty should be conducted through the use of probability, via Bayes' Rule. Unfortunately, most existing correct proofs of Cox's Theorem require restrictive assumptions: for instance, many do not apply even to the simple example of rolling a pair of fair dice. We offer a new axiomatization by replacing various technical conditions with an axiom stating that our theory must be consistent with respect to repeated events. We discuss the implications of our results, both for the philosophy of probability and for the philosophy of statistics.
# Axioms $\newcommand{\P}{\mathbb{P}} \newcommand{\F}{\mathscr{F}}$
This paper proposes a new axiomatization of probability theory, with five axioms. As a variant of Cox's theorem, these axioms are supposed to represent a set of "common sense" desiderata for a logical system under uncertainty. That is, each of these axioms is something we naturally want to be true of any logical system under uncertainty. Cox's original axioms felt more intuitively essential to me, however, so I'll also try to give justifications for demanding each of the following axioms, as well as explaining them technically.
Remember the ultimate goal is to _build_ probability theory up from a minimal set of absolute requirements for _any_ logical system. The punchline is that probability theory as described historically by greats like Kolmogorov turns out to be the _unique_ extension of true-false logic under uncertainty, and we can derive it from "common sense".
To emphasize the point that while we're writing these axioms we haven't yet got _probability_, following Jaynes I'll refer to our measure of certainty/uncertainty as "plausibility".
## 1. Plausibility must be representable by a real number
> Let $\Omega$ be a set and $\mathscr{F}$ be a $\sigma$-algebra on $\Omega$.
>
> Let $\P: \F \times (\F \setminus \emptyset) \rightarrow R \subseteq \mathbb{R}$ be a function, written using notation $\P(A|B)$.
It makes intuitive sense that we should be able to measure our uncertainty on a smooth, finite scale, so it makes sense to demand that our plausibility scale be chosen from some definite subset of the reals.
$\F$ being "[a $\sigma$-algebra on $\Omega$](https://en.wikipedia.org/wiki/Sigma-algebra)" means that it is a collection of subsets of $\Omega$ (including $\Omega$ and $\emptyset$ themselves) that is closed under complement and closed under countable unions. (Being "closed under" some operation means that taking that operation on any element in the set yields an element that's also defined to be in the set.) The idea is that $\Omega$ comprises all primitive events, and $\F$ therefore includes every possible logical combination of these primitive events, in a way that makes it equivalent to a Boolean algebra.
I found it clarifying that $\P(\Omega)=1$. That's what made it click for me that a set in $\F$ represents a disjunction of primitive events, and $\Omega$ contains _all_ primitive events, so $\P(\Omega)$ is the probability that _anything_ happens.
$\P(A|B)$ is a function of two arguments $A,B \in \mathscr{F}$, where $B$ cannot be empty. The interpretation is, "The plausibility of some event $A$, given that event $B$ is true." Jaynes often describes the second argument as "the background information", including everything else known (such as the rules of probability themselves, and the number of penguins in Antarctica).
The arguments of $\P$ are sets, but as the paper mentions, "by [Stone's Representation Theorem](https://www.jstor.org/stable/1989664), every Boolean algebra is isomorphic to an algebra of sets".
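As a concrete toy instance (my own, not from the paper), a single coin toss could be modeled as

$$\Omega = \{\text{H}, \text{T}\}, \qquad \F = \{\emptyset, \{\text{H}\}, \{\text{T}\}, \Omega\},$$

so $\F$ contains every disjunction of the primitive events, and $\P(\Omega)$, the plausibility that _anything_ happens, sits at the top of the scale.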
## 2. Sequential continuity
> We have that
> $$A_1 \subseteq A_2 \subseteq A_3 \subseteq\ldots \text{ such that } A_i \nearrow A \text{ implies } \P (A_i | B)\nearrow \P(A | B )$$
> for all $A, A_i, B$.
Another intuitive requirement for a system of logical inference is that our plausibility measure return arbitrarily small differences in plausibility for arbitrarily small changes in truth value. This concept is also known as "continuity".
If you can arrange a sequence of events (sets) so that each event is included in the next and the sequence increases to a limiting event $A$, then we say the sequence "increases to" $A$; in the notation of the paper, $A_i \nearrow A$.
What this axiom is saying is that whenever a sequence of propositions increases to a limiting proposition, their plausibilities also increase to the plausibility of that limit. This formalizes our requirement for continuity. Also notice that if $\P (A_i | B)\nearrow \mathbb{P}(A | B )$ then $\P (A_i | B) \leq \mathbb{P}(A | B )$, because $\nearrow$ applied to real numbers denotes monotone non-decreasing convergence to the limit. This will be useful when reading the proof.
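As a toy illustration (mine, not the paper's, and using the numerical probabilities we eventually recover), take a fair six-sided die and let $A_i = \{1, \dots, i\}$ for $i \le 6$:

$$A_1 \subseteq A_2 \subseteq \dots \subseteq A_6 = A, \qquad \P(A_i \mid B) = \tfrac{i}{6} \nearrow \P(A \mid B) = 1.$$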
## 3. Decomposability
> $\P(AB | C )$ can be written as
> $$\P(A | C ) \circ \P(B | AC)$$
> for some function $\circ : (R \times R) \rightarrow R$.
This is the first axiom that I had trouble seeing as intuitive, and in fact I thought it was a bit question-begging at first because it looks like the product rule. It represents the demand that plausibilities of compound propositions be decomposable into plausibilities of their constituents, and that the decomposition have a particular form. It's the demand that it follow a particular form that seemed somewhat arbitrary to me at first. Of course we would want to be able to decompose compound uncertainty into more fundamental elements, or else probability theory wouldn't be very useful. But why should it take the form described by $\circ$?
The answer is that this form is _minimal_ for decomposability. That is, it's the weakest statement that could be made about the details of decomposition. In English: "The plausibility of A _and_ B is a function of the plausibility of one of those (say, $A$), and the plausibility of the other ($B$) once we can assume $A$ is true."
Note that logical conjunctions are commutative ($AB = BA$), so by this axiom $\P(AB | C )$ can _also_ be written as $\P(B | C ) \circ \P(A | BC)$. They prove later also that $\circ$ is commutative, but that is not assumed in the axioms.
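For intuition only (the axiom itself does not assume it), standard probability theory realizes $\circ$ as ordinary multiplication, recovering the familiar product rule in both of its commuted forms:

$$\P(AB \mid C) = \P(A \mid C)\,\P(B \mid AC) = \P(B \mid C)\,\P(A \mid BC).$$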
## 4. Negation
> There exists a function $N : R \rightarrow R$ such that
> $$
\P(A^c | B)= N[ \P(A | B)]
$$
> for all $A,B$.
This axiom also seemed a bit question-begging to me, because it looks like the sum rule of probability theory, and because it seemed arbitrary that you would want uniquely determined probabilities for the negations of propositions.
Upon further reflection, however, this seems like a reasonable demand for consistency with two-valued logic. Every proposition $A$ in true-false logic has a unique proposition $A^c$ representing its negation (this superscript complement notation emphasizes the representation of propositions as sets, but is equivalent to $\bar A$, $\neg A$, etc.), so it makes sense that an extension of true-false logic to uncertainty would also include a method of determining the plausibility of the negation.
In actual fact, this _may_ be the most controversial axiom, since there are logics other than true-false logic that don't require the "law of the excluded middle" (they allow "maybe"). But if you are willing to accept that all well-formed propositions are either true or false, and our system of plausibility represents levels of certainty about their truth or falsehood, then this axiom represents a reasonable and necessary demand.
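Again for intuition only: in standard probability theory the negation function turns out to be $N(x) = 1 - x$, which is just the sum rule for complements:

$$\P(A^c \mid B) = N[\P(A \mid B)] = 1 - \P(A \mid B).$$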
## 5. Consistency under extension
> If $(\Omega, \mathscr{F}, \P)$ satisfies the axioms above, then $(\Omega \times \Omega, \mathscr{F} \otimes \mathscr{F}, \P \operatorname{\circ} \P)$ must as well, i.e., the definition $\P(A \times B | C \times D) = \P(A | C) \circ \P(B | D)$ is consistent.
This axiom represents the core of the authors' contribution. Although there were many correct variants of Cox's theorem, and many ways to axiomatize probability theory, they all had either disappointingly narrow scope, or had lost their intuitive nature in the formalization. The authors of our paper replace several technical axioms from other axiomatizations with this one demand _that their rules be consistent under extension to repeated events_.
In English, this axiom is, "If the rules apply to a single trial (e.g., a single coinflip), then they also apply to a system of two independent trials (e.g., two coinflips)." To me, that's obviously intuitive, so it's delightful to find that it covers so much ground.
Examining their formal expression, with the coinflips example, with $A$ meaning "heads on the first coinflip" and B meaning "tails on the second coinflip":
$\P(A \times B | C \times D)$ means "the plausibility of heads-then-tails given two piles of background information $C$ and $D$". The axiom states this must equal $\P(A | C) \circ \P(B | D)$, meaning that the plausibility of a pair of coinflips coming up heads-tails is equal to the plausibility of a single coinflip coming up heads (given background information $C$), composed (using $\circ$) with another coinflip coming up tails (given background information $D$).
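Plugging in numbers for the coinflip example (taking $\circ$ to be multiplication for concreteness, and assuming fair coins):

$$\P(A \times B \mid C \times D) = \P(A \mid C) \circ \P(B \mid D) = \tfrac{1}{2} \cdot \tfrac{1}{2} = \tfrac{1}{4}.$$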
# Parting thoughts
1. I hope this exposition of the axioms helps you read the paper yourself, though I realize I may not have provided sufficient motivation to do so yet. That would make it a bit like [my post deriving something surprising about Boltzmann machines](https://computable.ai/articles/2019/Mar/10/boltzmann-machines-differentiation-work.html) without first explaining what Boltzmann machines _are_. I intend to rectify this in the future for both posts.
2. I could make this a lot clearer for people with less set theory, group theory, or probability theory background. If that would be helpful to you, please leave me a comment on what specifically didn't make sense so I can get a feel for my audience.
3. To memorize these and make reading the proof easier, I labeled each of the five axioms with some relevant symbol, and combined them into a mnemonic. In case that helps you too, here it is: $\mathbb{R}$ $\nearrow$ $\circ$ $N$ $\times$.
| github_jupyter |
# Example Gawain notebook
In this notebook I show how to set up, run, and plot a simple simulation using the gawain plasma physics module.
```
import numpy as np
from gawain.main import run_gawain
from gawain.io import Reader
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation, rc
from IPython.display import HTML
```
# Set up run
Here we define the simulation parameters and initial and boundary conditions.
For this simple example, I use the Sod shock tube problem. This is a 1D hydrodynamics problem, and so MHD routines are turned off.
First define the run_name and output directory, this will create a directory containing the output from the simulation.
```
run_name = "sod_shock_tube"
output_dir = "."
```
Here I choose whether to run an MHD or hydro simulation, and whether to turn on thermal conductivity and resistivity. As the Sod shock tube is a hydrodynamic problem, MHD and resistivity are turned off. I also do not turn on thermal conductivity.
```
with_mhd = False
with_thermal_conductivity = False
with_resistivity = False
```
These cells define the cfl number, the total simulation time, and which time integrator and flux calculation methods are to be used.
Currently the supported time integration methods are
- euler forward step
- 2nd order Runge-Kutta
- Leapfrog
- Predictor-Corrector
The currently supported flux calculation methods are
- Lax-Wendroff (two-step Richtmyer form)
- Lax-Friedrichs
- HLLE with MUSCL reconstruction
For all but the simplest simulations it is strongly advised to use HLL, as Lax-Wendroff is susceptible to oscillations about sharp discontinuities and Lax-Friedrichs is very diffusive.
```
cfl = 0.5
t_max = 0.25
# "euler", "rk2", "leapfrog", "predictor-corrector"
integrator = "euler"
# "lax-wendroff", "lax-friedrichs", "hll"
fluxer = "hll"
```
## Define mesh
This cell defines the mesh shape (number of cells in each direction), dimensions (length of each dimension) and the number of output dumps to use.
```
nx, ny, nz = 200, 1, 1
mesh_shape = (nx, ny, nz)
n_outputs = 100
lx, ly, lz = 1.0, 0.001, 0.001
mesh_size = (lx, ly, lz)
x = np.linspace(0.0, lx,num=nx)
y = np.linspace(0.0, ly,num=ny)
z = np.linspace(0.0, lz,num=nz)
X,Y,Z =np.meshgrid(x,y,z, indexing='ij')
```
## Define initial condition
The mesh information is used to create an initial condition. If this were an mhd simulation, the magnetic field initial condition would also need to be included.
```
adiabatic_idx = 7.0/5.0
rho = np.piecewise(X, [X < 0.5, X >= 0.5], [1.0, 0.125])
pressure = np.piecewise(X, [X < 0.5, X >= 0.5], [1.0, 0.1])
mx = np.zeros(X.shape)
my = np.zeros(X.shape)
mz = np.zeros(X.shape)
e = pressure/(adiabatic_idx-1) + 0.5*mx*mx/rho
initial_condition = np.array([rho, mx, my, mz, e])
source = 0.0*np.ones(initial_condition.shape)
```
An alternative setup, a uniform background state with a localized energy source term, can be configured instead. Note that running this cell overwrites the Sod shock tube initial condition defined above.

```
adiabatic_idx = 7.0/5.0
rho = np.ones(mesh_shape)
pressure = np.ones(mesh_shape)
mx = np.zeros(mesh_shape)
my = np.zeros(mesh_shape)
mz = np.zeros(mesh_shape)
e = pressure/(adiabatic_idx-1) + 0.5*mx*mx/rho
initial_condition = np.array([rho, mx, my, mz, e])
rho_s = np.zeros(mesh_shape)
mx_s = np.zeros(mesh_shape)
my_s = np.zeros(mesh_shape)
mz_s = np.zeros(mesh_shape)
e_s = np.zeros(mesh_shape)
e_s[80:120, :, :] = 1.0
source = np.array([rho_s, mx_s, my_s, mz_s, e_s])
```
## Define boundary conditions
The available boundary conditions are
- periodic
- fixed (to the value specified in the initial condition)
- reflective
```
boundary_conditions = ['fixed', 'periodic', 'periodic']
config = {
"run_name": run_name,
"cfl": cfl,
"mesh_shape": mesh_shape,
"mesh_size": mesh_size,
"t_max": t_max,
"n_dumps": n_outputs,
"initial_condition": initial_condition,
"boundary_type": boundary_conditions,
"adi_idx": adiabatic_idx,
"integrator": integrator,
"fluxer": fluxer,
"output_dir": output_dir,
"with_mhd": with_mhd,
"source":source,
}
```
# Run Simulation
Combine all the above simulation parameters into a parameter dictionary. This dictionary is then fed to the run_gawain function, which begins the simulation. Ensure that all keys for this dictionary are defined, and that their names are spelled correctly.
```
run_gawain(config)
```
# Plot Results
One can create simple plots to visualise the results using the Reader object
```
data = Reader(run_name)
data.variables
data.plot('density', timesteps=[0,10,20,50,90])
```
One can also create animations from the raw data using the method below
```
raw_data = data.get_data('energy')
raw_data.shape
fig, ax = plt.subplots()
ax.set_xlim(( 0, 200))
ax.set_ylim((0, 1))
line, = ax.plot([], [], lw=2)
# initialization function: plot the background of each frame
def init():
line.set_data([], [])
return (line,)
# animation function. This is called sequentially
def animate(i):
x = np.linspace(0, 200, 200)
y = raw_data[i].reshape(200,)
line.set_data(x, y)
return (line,)
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=100, interval=20,
blit=True)
HTML(anim.to_jshtml())
```
| github_jupyter |
# 1. Data Cleaning
## Import Packages and Defining Helper Functions
### Import Packages
```
# import required packages
#warnings :)
import warnings
warnings.filterwarnings('ignore')
# for df purpose
import pandas as pd
import numpy as np
from tqdm.auto import tqdm
tqdm.pandas()
# for text processing
import nltk
import re
import string
from google_trans_new import google_translator
# for graph plotting / visualisation
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator
#for storing intermediate results
import pickle
# for notebook function
from IPython.display import display, clear_output
import time
from multiprocessing.dummy import Pool as ThreadPool
```
### Helper Functions
```
def print_bold(text):
text_bold = '\x1b[1;30;47m'+text+ '\x1b[0m'
print(text_bold)
def glance(x,n=5):
try:
iterator = iter(x)
except:
print(x)
return
else:
if type(x) == str or len(str(x)) <= 50:
print(x)
return
if type(x) == dict:
iterator = x.items()
i = 0
for ele in iterator:
if i >= n:
break
glance(ele,n)
i += 1
```
## Read Data
```
df = pd.read_csv('data/facebook_data.csv',index_col=0)
df
df.describe()
df.dtypes
```
## Normal Data Cleaning
```
# check NA
print(df.isna().sum())
# remove posts with no title
df1 = df.dropna(subset=['title'])
df1.isna().sum()
# replace NA values in comment, reaction, and share with 0
df2 = df1.fillna(0)
df2.isna().sum()
# convert comments and share counts to integer
df2['comment'] = df2['comment'].apply(int)
df2['share'] = df2['share'].apply(int)
df2.dtypes
df2['date'] = pd.to_datetime(df2['date'])
df2.dtypes
df2.describe(include='all')
# remove duplicates
df3 = df2.drop_duplicates(keep='first', inplace=False).reset_index(drop=True)
print('Duplicates removed: %s' % (len(df2) - len(df3)))
```
## Text Data Cleaning
### Text Cleaner Class
```
class TextCleaner:
def __init__(self,custom_stop=set(),custom_stop_path='pickles/custom_stop.pkl',
custom_translate={},custom_translate_path='pickles/custom_translate.pkl',
                 custom_dict=set(),custom_dict_path='pickles/custom_dict.pkl'):
        # default to None so the fallback assignments below work even when a path is not given
        self.custom_stop = self.custom_translate = self.custom_dict = None
        if custom_stop_path:
self.custom_stop_path = custom_stop_path
try:
self.custom_stop = pickle.load(open(custom_stop_path,'rb'))
except Exception as e:
self.custom_stop = None
print('TextCleaner: Unable to load pickle: ',custom_stop_path)
if custom_translate_path:
self.custom_translate_path = custom_translate_path
try:
self.custom_translate = pickle.load(open(custom_translate_path,'rb'))
except Exception as e:
self.custom_translate = None
print('TextCleaner: Unable to load pickle: ',custom_translate_path)
if custom_dict_path:
self.custom_dict_path = custom_dict_path
try:
self.custom_dict = pickle.load(open(custom_dict_path,'rb'))
except Exception as e:
self.custom_dict = None
print('TextCleaner: Unable to load pickle: ',custom_dict_path)
self.custom_stop = self.custom_stop or custom_stop
self.custom_translate = self.custom_translate or custom_translate
self.custom_dict = self.custom_dict or custom_dict
def save(self):
pickle.dump(self.custom_stop,open(self.custom_stop_path,'wb'))
pickle.dump(self.custom_translate,open(self.custom_translate_path,'wb'))
pickle.dump(self.custom_dict,open(self.custom_dict_path,'wb'))
def update_custom_stop(self,new_list):
self.custom_stop.update(new_list)
self.save()
def update_custom_translate(self,new_dict):
self.custom_translate.update(new_dict)
self.save()
def update_custom_dict(self,new_list):
self.custom_dict.update(new_list)
self.save()
def clear_custom_stop(self):
self.custom_stop = set()
self.save()
def clear_custom_translate(self):
self.custom_translate={}
self.save()
def clear_custom_dict(self):
self.custom_dict = set()
self.save()
def tokenize(self,text):
word_tokenize = nltk.tokenize.word_tokenize
text = text.lower()
return word_tokenize(text)
def clean_tokens(self,tokens):
tokens = [re.sub('[%s]' % re.escape(string.punctuation), '', text) for text in tokens] #remove punctuations
tokens = [t for t in tokens if re.match(r'[^\W\d]*$', t)] # remove non-alphabetical tokens
tokens = [text for text in tokens if text!=''] #remove empty tokens
return tokens
def remove_stop_words(self,tokens):
stopset = set(nltk.corpus.stopwords.words('english'))
stopset.update(self.custom_stop)
new_tokens = []
for t in tokens:
if type(t) == str:
if t not in stopset:
new_tokens.append(t)
elif (len(t) > 0):
new_tokens.append(self.remove_stop_words(t))
else:
print('Invalid value: ',t)
return new_tokens
def lemmatize(self, tokens):
lemmatizer = nltk.stem.wordnet.WordNetLemmatizer()
new_tokens = []
for t in tokens:
if type(t) == str:
new_tokens.append(lemmatizer.lemmatize(t))
elif (len(t) > 0):
new_tokens.append(self.lemmatize(t))
else:
print('Invalid value: ',t)
return new_tokens
def translate(self,tokens):
new_tokens = []
for t in tokens:
if type(t) == str:
if t in self.custom_translate:
t = self.custom_translate[t]
new_tokens.append(t)
elif (len(t) > 0):
new_tokens.append(self.translate(t))
else:
print('Invalid value: ',t)
return new_tokens
def token_count(self,tokens, counts=None):
counts = {} if counts is None else counts  # avoid sharing a mutable default dict across calls
for t in tokens:
if type(t) == str:
if t in counts:
counts[t] += 1
else:
counts[t] = 1
elif (len(t) > 0):
counts = self.token_count(t, counts)
else:
print('Invalid value: ',t)
return counts
def perform_clean(self,series,min_tokens=2,show_intermediate=False):
print_bold('Tokenizing...')
series = series.progress_apply(self.tokenize)
if show_intermediate:
print('After Tokenize\n', series)
print_bold('Cleaning Tokens...')
series = series.progress_apply(self.clean_tokens)
if show_intermediate:
print('After Clean\n',series)
print_bold('Removing Stop Words...')
series = series.progress_apply(self.remove_stop_words)
if show_intermediate:
print('\nAfter Stopword Removal\n ', series)
print_bold('Lemmatizing...')
series = series.progress_apply(self.lemmatize)
if show_intermediate:
print('\nAfter Lemmatization\n',series)
print_bold('Translating...')
series = series.progress_apply(self.translate)
if show_intermediate:
print('\nAfter Translation\n',series)
# remove posts with fewer than min_tokens tokens
print_bold('Removing posts with fewer than %s words...' % min_tokens)
series_count = series.apply(len)
series,series_removed = series[series_count >= min_tokens], series[series_count < min_tokens]
print('%s posts removed:' % len(series_removed))
print(series_removed)
if show_intermediate:
print('\nAfter removing posts\n',series)
return series
def non_dict(self, tokens, min_occurrence=5):
dict_words = set(nltk.corpus.words.words())
dict_words.update(set(i for i in nltk.corpus.wordnet.words()))
dict_words.update(self.custom_dict)
# print('total dict words: ',len(dict_words))
non_dict_words = []
word_counts = self.token_count(tokens,counts={})
for (k,v) in word_counts.items():
if k not in dict_words and v >= min_occurrence:
non_dict_words.append((k,v))
non_dict_words.sort(key=lambda x: x[1], reverse=True)
return non_dict_words
cleaner = TextCleaner()
print('Custom Stop Words: %s. Use "cleaner.custom_stop" to see existing custom stop words.' % len(cleaner.custom_stop))
print('Custom Dictionary: %s. Use "cleaner.custom_dict" to see existing custom dictionary.' % len(cleaner.custom_dict))
print('Custom Translation: %s. Use "cleaner.custom_translate" to see existing custom translation.' % len(cleaner.custom_translate))
```
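To see what `clean_tokens` does without pulling in NLTK, here is a standard-library-only sketch of the same punctuation stripping and alphabetic filtering (a simplified approximation, not the class itself):

```python
import re
import string

def clean_tokens(tokens):
    # strip punctuation characters from each token
    table = str.maketrans('', '', string.punctuation)
    tokens = [t.translate(table) for t in tokens]
    # keep only non-empty, purely alphabetic tokens (drops digits)
    return [t for t in tokens if t and re.fullmatch(r'[^\W\d]+', t)]

print(clean_tokens(["hello,", "world!", "123", "it's", ""]))  # ['hello', 'world', 'its']
```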
### Perform Cleaning
```
#tokenize and clean post titles
df3['tokens'] = df3['title'].progress_apply(cleaner.tokenize)
df3['tokens'] = df3['tokens'].progress_apply(cleaner.clean_tokens)
df3['tokens'].sample(5)
df3['count'] = df3['tokens'].apply(len)
print(df3['count'].describe())
df3['count'].plot.density()
# Remove posts with no tokens
print('Number of Posts with no tokens: %s' % (df3['count']==0).sum())
df3 = df3[df3['count']!=0].reset_index(drop=True)
df3.describe()
# after cleaning
df_clean = df3
df_clean.sample(5)
```
## Store Intermediate Results and Export notebook
```
df_clean.to_pickle('pickles/df_clean.pkl')
# Convert notebook to html
!jupyter nbconvert --to html_ch --output-dir='.\html' "1. Data Cleaning.ipynb"
```
# End
# Verifying that the matrix DWPC method generates results similar to the Neo4j method
The matrix-based DWPC calculation method does not provide results exactly equal to the Neo4j-based method for all metapaths. We would like to verify that these differences in DWPC calculation do not result in significant differences in the resulting predictions.
```
import pandas as pd
import matplotlib
import seaborn as sns
%matplotlib inline
```
---
## Data
Data files are from fold 3 of the full size network (no rare disease data).
```
orig = pd.read_csv("orig_pred_res_for_roc.tsv", sep='\t')
matx = pd.read_csv("matrix_pred_res_for_roc.tsv", sep='\t')
orig.shape
matx.shape
orig.head(2)
matx.head(2)
```
---
## Check that the pairs are equal
Check that the predictions involve the same chemical-disease pairs.
```
assert (
set((r["compound_id"], r["disease_id"]) for i, r in orig.iterrows())
==
set((r["chemical_id"], r["disease_id"]) for i, r in matx.iterrows())
)
```
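The same pair check works on plain sets, and `symmetric_difference` is handy for diagnosing a failure. A sketch with hypothetical toy pairs:

```python
orig_pairs = {("C1", "D1"), ("C2", "D2")}
matx_pairs = {("C2", "D2"), ("C1", "D1")}
assert orig_pairs == matx_pairs  # insertion order does not matter for sets

# if the assert ever fails, the mismatching pairs can be listed directly:
print(orig_pairs.symmetric_difference({("C1", "D1"), ("C3", "D3")}))
```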
---
## Extract out the relevant information
We will exclude the actual DWPC values when comparing the ranks of the predicted results.
```
tempa = (orig
[["compound_id", "disease_id", "predicted_value", "true_label"]]
.rename(columns={
"compound_id": "chemical_id",
"predicted_value": "orig_value",
"true_label": "orig_label"
})
)
tempb = (matx
[["chemical_id", "disease_id", "predicted_value", "true_label"]]
.rename(columns={
"predicted_value": "matx_value",
"true_label": "matx_label"
})
)
res = tempa.merge(tempb, how="inner", on=["chemical_id", "disease_id"])
res.shape
res.head()
```
### Check that the true labels are equal
```
(res["matx_label"] == res["orig_label"]).all()
res = (res
.drop("matx_label", axis=1)
.rename(columns={"orig_label": "true_label"})
)
res.head()
```
---
## Calculate the ranks of the predictions
Ranks are assigned in descending order (rank of 1 means the top prediction).
```
ranks = (res
.rank(numeric_only=True, ascending=False)
.drop("true_label", axis=1)
.rename(columns={
"orig_value": "orig_rank",
"matx_value": "matx_rank"
})
)
ranks.head()
fin = res.merge(ranks, left_index=True, right_index=True)
fin.head()
```
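For reference, descending ranks can be computed without pandas. This sketch breaks ties by position, whereas `DataFrame.rank` averages tied ranks by default:

```python
def rank_descending(values):
    """Rank 1 = largest value; ties broken by position (pandas averages ties instead)."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    ranks = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    return ranks

print(rank_descending([0.2, 0.9, 0.5]))  # [3, 1, 2]
```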
---
## Calculate the difference in rank for a prediction between the two methods
Calculate how the ranks differ between the two methods.
```
fin = (fin
.assign(
rank_diff = lambda df: pd.Series.abs(df["orig_rank"] - df["matx_rank"])
)
.assign(
diff_pct = lambda df: df["rank_diff"] / len(fin) * 100
)
)
fin.head()
fin["diff_pct"].max()
```
## How much variance in ranking is there?
```
sns.boxplot(y=fin["diff_pct"])
```
## Does variance vary based on the true label?
```
sns.violinplot(data=fin, x="true_label", y="diff_pct")
```
---
## Visual comparison of ranks
```
sns.jointplot(
data=fin,
x="orig_rank", y="matx_rank",
size=9
)
```
For the most part this seems to show that the matrix method is comparable to the original Neo4j method at ranking the predictions.
There don't seem to be any major deviations from the y=x line in the predictions. The moderate scatter is probably due to the fact that the matrix method cannot exactly reproduce the original Neo4j method's DWPC values, which influences the final predicted value and therefore the calculated rank.
```
sns.jointplot(
data=fin.query("true_label == 1"),
x="orig_rank", y="matx_rank",
size=9
)
sns.jointplot(
data=fin.query("true_label == 0"),
x="orig_rank", y="matx_rank",
size=9
)
```
No obvious outliers in the prediction ranks, which implies that the matrix method is generating similar results.
Next: look at the model being built, and the features being selected.
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D3_ModelFitting/W1D3_Tutorial3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Neuromatch Academy: Week 1, Day 3, Tutorial 3
# Model Fitting: Confidence intervals and bootstrapping
**Content creators**: Pierre-Étienne Fiquet, Anqi Wu, Alex Hyafil with help from Byron Galbraith
**Content reviewers**: Lina Teichmann, Saeed Salehi, Patrick Mineault, Ella Batty, Michael Waskom
#Tutorial Objectives
This is Tutorial 3 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of regression models by generalizing to multiple linear regression and polynomial regression (Tutorial 4). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 5) and Cross Validation for model selection (Tutorial 6).
In this tutorial, we will discuss how to gauge how good our estimated model parameters are.
- Learn how to use bootstrapping to generate new sample datasets
- Estimate our model parameter on these new sample datasets
- Quantify the variance of our estimate using confidence intervals
```
#@title Video 1: Confidence Intervals & Bootstrapping
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="hs6bVGQNSIs", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
Up to this point we have been finding ways to estimate model parameters to fit some observed data. Our approach has been to optimize some criterion, either minimize the mean squared error or maximize the likelihood while using the entire dataset. How good is our estimate really? How confident are we that it will generalize to describe new data we haven't seen yet?
One solution to this is to just collect more data and check the MSE on this new dataset with the previously estimated parameters. However, this is not always feasible, and it still leaves open the question of how quantifiably confident we are in the accuracy of our model.
In Section 1, we will explore how to implement bootstrapping. In Section 2, we will build confidence intervals of our estimates using the bootstrapping method.
---
# Setup
```
import numpy as np
import matplotlib.pyplot as plt
#@title Figure Settings
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def solve_normal_eqn(x, y):
"""Solve the normal equations to produce the value of theta_hat that minimizes
MSE.
Args:
x (ndarray): An array of shape (samples,) that contains the input values.
y (ndarray): An array of shape (samples,) that contains the corresponding
measurement values to the inputs.
measurement values to the inputs.
Returns:
float: the value for theta_hat arrived from minimizing MSE
"""
theta_hat = (x.T @ y) / (x.T @ x)
return theta_hat
```
---
# Section 1: Bootstrapping
[Bootstrapping](https://en.wikipedia.org/wiki/Bootstrapping_(statistics)) is a widely applicable method to assess confidence/uncertainty about estimated parameters; it was originally [proposed](https://projecteuclid.org/euclid.aos/1176344552) by [Bradley Efron](https://en.wikipedia.org/wiki/Bradley_Efron). The idea is to generate many new synthetic datasets from the initial true dataset by randomly sampling from it, then to find estimators for each of these new datasets, and finally to look at the distribution of all these estimators to quantify our confidence.
Note that each resampled dataset will be the same size as our original one, with the new data points sampled with replacement, i.e., the same data point can appear multiple times. Also note that in practice we need many resampled datasets; here we use 2000.
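A single bootstrap resample can be sketched with the standard library alone (the tutorial's exercises below use NumPy instead):

```python
import random

random.seed(0)  # fixed seed, mirroring the tutorial's use of np.random.seed
data = [3.1, 4.2, 5.9, 2.6, 5.3]
# one bootstrap resample: same size as the original, drawn with replacement
resample = random.choices(data, k=len(data))
print(resample)
```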
To explore this idea, we will start again with our noisy samples along the line $y_n = 1.2x_n + \epsilon_n$, but this time only use half as many data points as last time (15 instead of 30).
```
#@title
#@markdown Execute this cell to simulate some data
# setting a fixed seed to our random number generator ensures we will always
# get the same pseudorandom number sequence
np.random.seed(121)
# Let's set some parameters
theta = 1.2
n_samples = 15
# Draw x and then calculate y
x = 10 * np.random.rand(n_samples) # sample from a uniform distribution over [0,10)
noise = np.random.randn(n_samples) # sample from a standard normal distribution
y = theta * x + noise
fig, ax = plt.subplots()
ax.scatter(x, y) # produces a scatter plot
ax.set(xlabel='x', ylabel='y');
```
### Exercise 1: Resample Dataset with Replacement
In this exercise you will implement a method to resample a dataset with replacement. The method accepts $x$ and $y$ arrays. It should return a new set of $x'$ and $y'$ arrays that are created by randomly sampling from the originals.
We will then compare the original dataset to a resampled dataset.
TIP: The [numpy.random.choice](https://numpy.org/doc/stable/reference/random/generated/numpy.random.choice.html) method would be useful here.
```
def resample_with_replacement(x, y):
"""Resample data points with replacement from the dataset of `x` inputs and
`y` measurements.
Args:
x (ndarray): An array of shape (samples,) that contains the input values.
y (ndarray): An array of shape (samples,) that contains the corresponding
measurement values to the inputs.
Returns:
ndarray, ndarray: The newly resampled `x` and `y` data points.
"""
#######################################################
## TODO for students: resample dataset with replacement
# Fill out function and remove
raise NotImplementedError("Student exercise: resample dataset with replacement")
#######################################################
# Get array of indices for resampled points
sample_idx = ...
# Sample from x and y according to sample_idx
x_ = ...
y_ = ...
return x_, y_
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 5))
ax1.scatter(x, y)
ax1.set(title='Original', xlabel='x', ylabel='y')
# Uncomment below to test your function
#x_, y_ = resample_with_replacement(x, y)
#ax2.scatter(x_, y_, color='c')
ax2.set(title='Resampled', xlabel='x', ylabel='y',
xlim=ax1.get_xlim(), ylim=ax1.get_ylim());
# to_remove solution
def resample_with_replacement(x, y):
"""Resample data points with replacement from the dataset of `x` inputs and
`y` measurements.
Args:
x (ndarray): An array of shape (samples,) that contains the input values.
y (ndarray): An array of shape (samples,) that contains the corresponding
measurement values to the inputs.
Returns:
ndarray, ndarray: The newly resampled `x` and `y` data points.
"""
# Get array of indices for resampled points
sample_idx = np.random.choice(len(x), size=len(x), replace=True)
# Sample from x and y according to sample_idx
x_ = x[sample_idx]
y_ = y[sample_idx]
return x_, y_
with plt.xkcd():
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 5))
ax1.scatter(x, y)
ax1.set(title='Original', xlabel='x', ylabel='y')
x_, y_ = resample_with_replacement(x, y)
ax2.scatter(x_, y_, color='c')
ax2.set(title='Resampled', xlabel='x', ylabel='y',
xlim=ax1.get_xlim(), ylim=ax1.get_ylim());
```
In the resampled plot on the right, the number of points is the same, but some points have been sampled more than once; repeated points overlap, so fewer distinct points are visible.
Now that we have a way to resample the data, we can use that in the full bootstrapping process.
### Exercise 2: Bootstrap Estimates
In this exercise you will implement a method to run the bootstrap process of generating a set of $\hat\theta$ values from a dataset of $x$ inputs and $y$ measurements. You should use `resample_with_replacement` here, and you may also invoke helper function `solve_normal_eqn` from Tutorial 1 to produce the MSE-based estimator.
We will then use this function to look at the theta_hat from different samples.
```
def bootstrap_estimates(x, y, n=2000):
"""Generate a set of theta_hat estimates using the bootstrap method.
Args:
x (ndarray): An array of shape (samples,) that contains the input values.
y (ndarray): An array of shape (samples,) that contains the corresponding
measurement values to the inputs.
n (int): The number of estimates to compute
Returns:
ndarray: An array of estimated parameters with size (n,)
"""
theta_hats = np.zeros(n)
##############################################################################
## TODO for students: implement bootstrap estimation
# Fill out function and remove
raise NotImplementedError("Student exercise: implement bootstrap estimation")
##############################################################################
# Loop over number of estimates
for i in range(n):
# Resample x and y
x_, y_ = ...
# Compute theta_hat for this sample
theta_hats[i] = ...
return theta_hats
np.random.seed(123) # set random seed for checking solutions
# Uncomment below to test function
# theta_hats = bootstrap_estimates(x, y, n=2000)
# print(theta_hats[0:5])
# to_remove solution
def bootstrap_estimates(x, y, n=2000):
"""Generate a set of theta_hat estimates using the bootstrap method.
Args:
x (ndarray): An array of shape (samples,) that contains the input values.
y (ndarray): An array of shape (samples,) that contains the corresponding
measurement values to the inputs.
n (int): The number of estimates to compute
Returns:
ndarray: An array of estimated parameters with size (n,)
"""
theta_hats = np.zeros(n)
# Loop over number of estimates
for i in range(n):
# Resample x and y
x_, y_ = resample_with_replacement(x, y)
# Compute theta_hat for this sample
theta_hats[i] = solve_normal_eqn(x_, y_)
return theta_hats
np.random.seed(123) # set random seed for checking solutions
theta_hats = bootstrap_estimates(x, y, n=2000)
print(theta_hats[0:5])
```
You should see `[1.27550888 1.17317819 1.18198819 1.25329255 1.20714664]` as the first five estimates.
Now that we have our bootstrap estimates, we can visualize all the potential models (models computed with different resampling) together to see how distributed they are.
```
#@title
#@markdown Execute this cell to visualize all potential models
fig, ax = plt.subplots()
# For each theta_hat, plot model
theta_hats = bootstrap_estimates(x, y, n=2000)
for i, theta_hat in enumerate(theta_hats):
y_hat = theta_hat * x
ax.plot(x, y_hat, c='r', alpha=0.01, label='Resampled Fits' if i==0 else '')
# Plot observed data
ax.scatter(x, y, label='Observed')
# Plot true fit data
y_true = theta * x
ax.plot(x, y_true, 'g', linewidth=2, label='True Model')
ax.set(
title='Bootstrapped Slope Estimation',
xlabel='x',
ylabel='y'
)
# Change legend line alpha property
handles, labels = ax.get_legend_handles_labels()
handles[0].set_alpha(1)
ax.legend();
```
This looks pretty good! The bootstrapped estimates spread around the true model, as we would have hoped. Note that here we have the luxury to know the ground truth value for $\theta$, but in applications we are trying to guess it from data. Therefore, assessing the quality of estimates based on finite data is a task of fundamental importance in data analysis.
---
# Section 2: Confidence Intervals
Let us now quantify how uncertain our estimated slope is. We do so by computing [confidence intervals](https://en.wikipedia.org/wiki/Confidence_interval) (CIs) from our bootstrapped estimates. The most direct approach is to compute percentiles from the empirical distribution of bootstrapped estimates. Note that this is widely applicable, as we are not assuming that this empirical distribution is Gaussian.
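The percentile computation itself is simple. A hedged standard-library sketch of NumPy's default linear-interpolation percentile rule (the `percentile` helper and the toy `boot` values are ours):

```python
def percentile(samples, q):
    """Empirical q-th percentile via linear interpolation (NumPy's default rule)."""
    s = sorted(samples)
    pos = (len(s) - 1) * q / 100
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (pos - lo)

boot = [1.1, 1.15, 1.2, 1.22, 1.25, 1.3]  # stand-in for 2000 bootstrapped theta_hats
print(percentile(boot, 2.5), percentile(boot, 97.5))
```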
```
#@title
#@markdown Execute this cell to plot bootstrapped CI
theta_hats = bootstrap_estimates(x, y, n=2000)
print(f"mean = {np.mean(theta_hats):.2f}, std = {np.std(theta_hats):.2f}")
fig, ax = plt.subplots()
ax.hist(theta_hats, bins=20, facecolor='C1', alpha=0.75)
ax.axvline(theta, c='g', label=r'True $\theta$')
ax.axvline(np.percentile(theta_hats, 50), color='r', label='Median')
ax.axvline(np.percentile(theta_hats, 2.5), color='b', label='95% CI')
ax.axvline(np.percentile(theta_hats, 97.5), color='b')
ax.legend()
ax.set(
title='Bootstrapped Confidence Interval',
xlabel=r'$\hat{{\theta}}$',
ylabel='count',
xlim=[1.0, 1.5]
);
```
Looking at the distribution of bootstrapped $\hat{\theta}$ values, we see that the true $\theta$ falls well within the 95% confidence interval, which is reassuring. We also see that the value $\theta = 1$ does not fall within the confidence interval. From this we would reject the hypothesis that the slope was 1.
---
# Summary
- Bootstrapping is a resampling procedure that allows us to build confidence intervals around inferred parameter values
- It is a widely applicable and very practical method that relies on computational power and pseudo-random number generators (as opposed to more classical approaches that depend on analytical derivations)
**Suggested readings**
Computer Age Statistical Inference: Algorithms, Evidence and Data Science, by Bradley Efron and Trevor Hastie
## Face and Facial Keypoint detection
After you've trained a neural network to detect facial keypoints, you can then apply this network to *any* image that includes faces. The neural network expects a Tensor of a certain size as input, so to detect any face, you'll first have to do some pre-processing.
1. Detect all the faces in an image using a face detector (we'll be using a Haar Cascade detector in this notebook).
2. Pre-process those face images so that they are grayscale, and transformed to a Tensor of the input size that your net expects. This step will be similar to the `data_transform` you created and applied in Notebook 2, whose job was to rescale, normalize, and turn any image into a Tensor to be accepted as input to your CNN.
3. Use your trained model to detect facial keypoints on the image.
---
In the next python cell we load in required libraries for this section of the project.
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
import cv2
```
#### Select an image
Select an image to perform facial keypoint detection on; you can select any image of faces in the `images/` directory.
```
import cv2
# load in color image for face detection
image = cv2.imread('images/obamas.jpg')
# switch red and blue color channels
# --> by default OpenCV assumes BLUE comes first, not RED as in many images
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# plot the image
fig = plt.figure(figsize=(9,9))
plt.imshow(image)
```
## Detect all faces in an image
Next, you'll use one of OpenCV's pre-trained Haar Cascade classifiers, all of which can be found in the `detector_architectures/` directory, to find any faces in your selected image.
In the code below, we loop over each face in the original image and draw a red square on each face (in a copy of the original image, so as not to modify the original). You can even [add eye detections](https://docs.opencv.org/3.4.1/d7/d8b/tutorial_py_face_detection.html) as an *optional* exercise in using Haar detectors.
An example of face detection on a variety of images is shown below.
<img src='images/haar_cascade_ex.png' width=80% height=80%/>
```
# load in a haar cascade classifier for detecting frontal faces
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')
# run the detector
# the output here is an array of detections; the corners of each detection box
# if necessary, modify these parameters until you successfully identify every face in a given image
faces = face_cascade.detectMultiScale(image, 1.2, 2)
# make a copy of the original image to plot detections on
image_with_detections = image.copy()
# loop over the detected faces, mark the image where each face is found
for (x,y,w,h) in faces:
# draw a rectangle around each detected face
# you may also need to change the width of the rectangle drawn depending on image resolution
cv2.rectangle(image_with_detections,(x,y),(x+w,y+h),(255,0,0),3)
fig = plt.figure(figsize=(9,9))
plt.imshow(image_with_detections)
```
## Loading in a trained model
Once you have an image to work with (and, again, you can select any image of faces in the `images/` directory), the next step is to pre-process that image and feed it into your CNN facial keypoint detector.
First, load your best model by its filename.
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import models
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
net = models.Net()
## TODO: load the best saved model parameters (by your path name)
## You'll need to un-comment the line below and add the correct name for *your* saved model
net.load_state_dict(torch.load('saved_models/keypoints_model_1.pth.tar'))
## print out your net and prepare it for testing (uncomment the line below)
net.eval()
print(net)
```
## Keypoint detection
Now, we'll loop over each detected face in an image (again!), only this time you'll transform those faces into Tensors that your CNN can accept as input images.
### TODO: Transform each detected face into an input Tensor
You'll need to perform the following steps for each detected face:
1. Convert the face from RGB to grayscale
2. Normalize the grayscale image so that its color range falls in [0,1] instead of [0,255]
3. Rescale the detected face to be the expected square size for your CNN (224x224, suggested)
4. Reshape the numpy image into a torch image.
You may find it useful to consult the transformation code in `data_load.py` to help you perform these processing steps.
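Step 4 (reshaping H x W x C into C x H x W) is just an index permutation. A pure-Python sketch with nested lists (in practice you would use `np.moveaxis` or `np.transpose`):

```python
def hwc_to_chw(img):
    """Reshape an H x W x C nested-list image into C x H x W (step 4 above)."""
    H, W, C = len(img), len(img[0]), len(img[0][0])
    return [[[img[h][w][c] for w in range(W)] for h in range(H)] for c in range(C)]

img = [[[1, 2], [3, 4]],
       [[5, 6], [7, 8]]]  # H=2, W=2, C=2
chw = hwc_to_chw(img)
print(chw[0])  # channel 0: [[1, 3], [5, 7]]
```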
### TODO: Detect and display the predicted keypoints
After each face has been appropriately converted into an input Tensor for your network to see as input, you'll wrap that Tensor in a Variable() and can apply your `net` to each face. The output should be the predicted facial keypoints. These keypoints will need to be "un-normalized" for display, and you may find it helpful to write a helper function like `show_keypoints`. You should end up with an image like the following with facial keypoints that closely match the facial features on each individual face:
<img src='images/michelle_detected.png' width=30% height=30%/>
```
def visualize_output2(faces, test_outputs):
batch_size = len(faces)
for i, face in enumerate(faces):
plt.figure(figsize=(8, 8))
ax = plt.subplot(1, batch_size, i+1)
# un-transform the predicted key_pts data
predicted_key_pts = test_outputs[i].data
predicted_key_pts = predicted_key_pts.numpy()
# undo normalization of keypoints
predicted_key_pts = predicted_key_pts*50.0+100
plt.imshow(face, cmap='gray')
plt.scatter(predicted_key_pts[:, 0], predicted_key_pts[:, 1], s=20, marker='.', c='m')
plt.axis('off')
plt.show()
image_copy = np.copy(image)
images, keypoints = [], []
PADDING = 65
from torch.autograd import Variable
# loop over the detected faces from your haar cascade
for (x,y,w,h) in faces:
# Select the region of interest that is the face in the image
roi = image_copy[y-PADDING:y+h+PADDING, x-PADDING:x+w+PADDING]
## TODO: Convert the face region from RGB to grayscale
roi = cv2.cvtColor(roi, cv2.COLOR_RGB2GRAY)
## TODO: Normalize the grayscale image so that its color range falls in [0,1] instead of [0,255]
roi = (roi / 255.).astype(np.float32)
## TODO: Rescale the detected face to be the expected square size for your CNN (224x224, suggested)
roi = cv2.resize(roi, (224, 224))
images.append(roi)
## TODO: Reshape the numpy image shape (H x W x C) into a torch image shape (C x H x W)
if len(roi.shape) == 2:
roi = np.expand_dims(roi, axis=0)
else:
roi = np.moveaxis(roi, 2, 0)
## TODO: Make facial keypoint predictions using your loaded, trained network
roi = Variable(torch.from_numpy(roi).type(torch.FloatTensor).unsqueeze(0)).cpu()
net.cpu()
results = net(roi)
results = results.view(results.size()[0], 68, -1).cpu()
keypoints.append(results[0])
## perform a forward pass to get the predicted facial keypoints
visualize_output2(images, keypoints)
## TODO: Display each detected face and the corresponding keypoints
```
https://www.boostcourse.org/ai214/lecture/42282/?isDesc=false
Learning objectives
Learn about tensor manipulation.
Key concepts
* Tensor
* NumPy
* Tensor Manipulation
* Broadcasting
```
import numpy as np
import torch
```
# Numpy Review
1D Array w/ Numpy
```
t = np.array([0., 1., 2., 3., 4., 5., 6.])
print(t)
print('Rank of t: ', t.ndim)
print('Shape of t: ', t.shape)
print('t[0] t[1] t[-1] = ', t[0], t[1], t[-1]) # Elements
print('t[2:5] t[4:-1] = ', t[2:5], t[4:-1]) # Slicing
print('t[:2] t[3:] = ', t[:2], t[3:]) # Slicing
```
2D Array w/ Numpy
```
t = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.], [10., 11., 12.]])
print(t)
print('Rank of t: ', t.ndim)
print('Shape of t: ', t.shape)
print(t[:,1]) # In NumPy/Tensor slicing, row and column indices go inside a single bracket!
```
# Pytorch Tensor
1D Array w/ Pytorch
```
t = torch.FloatTensor([0., 1., 2., 3., 4., 5., 6.])
print(t)
print(t.dim()) # rank
print(t.shape) # shape
print(t.size()) # shape
print(t[0], t[1], t[-1]) # Element
print(t[2:5], t[4:-1]) # Slicing
print(t[:2], t[3:]) # Slicing
```
2D Array w/ Pytorch
```
t = torch.FloatTensor([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.], [10., 11., 12.]])
print(t)
print(t.dim())
print(t.size())
print(t.shape)
print(t[:, 1])
print(t[:, 1].size())
print(t[:, :-1])
```
---
Operations supported by PyTorch
# Broadcasting
```
# Same shape
m1 = torch.FloatTensor([[3, 3]])
m2 = torch.FloatTensor([[3, 2]])
m1 + m2
# Vector + Scalar
m1 = torch.FloatTensor([[1, 2]])
m2 = torch.FloatTensor([[3]]) # 3 -> [[3, 3]]
m1 + m2
# 2*1 Vector + 1*2 Vector
m1 = torch.FloatTensor([[1],[2]])
m2 = torch.FloatTensor([[3, 4]])
m1 + m2
```
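What broadcasting does in the last cell can be spelled out by hand: the (2,1) column and (1,2) row are each stretched to (2,2) before the add. A plain-Python sketch:

```python
def broadcast_add(col, row):
    """Add an m-element column vector and an n-element row vector into an m x n grid."""
    return [[c + r for r in row] for c in col]

# mirrors the 2x1 + 1x2 torch example above: [[1],[2]] + [[3, 4]]
print(broadcast_add([1, 2], [3, 4]))  # [[4, 5], [5, 6]]
```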
Matrix multiplication vs Multiplication
```
print()
print('-----------')
print('Matmul vs Mul')
print('-----------')
m1 = torch.FloatTensor([[1,2], [3, 4]]) # 2*2
m2 = torch.FloatTensor([[2], [3]]) # 2*1
print(m1.matmul(m2)) # matrix multiplication
# element-wise multiplication: multiplies elements at the same positions
print(m1*m2)
print(m1.mul(m2)) # mul = '*'
```
Both `matmul` (matrix multiplication) and `mul` (element-wise multiplication) are used as methods attached to one of the two tensors being multiplied, e.g. `m1.matmul(m2)` or `m1.mul(m2)`.
Mean
```
t = torch.FloatTensor([1, 2])
print(t.mean())
# Can't use mean on integers
t = torch.LongTensor([1,2])
try:
print(t.mean())
except Exception as exc:
print(exc)
```
You can also use `t.mean()` on higher-rank tensors to get the mean of all elements, or the mean along a particular dimension.
```
t = torch.FloatTensor([[1,2],[3,4]])
print(t.mean())
print(t.mean(dim = 0))
print(t.mean(dim = 1))
print(t.mean(dim = -1))
```
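The `dim` argument picks the axis that gets reduced away. A plain-Python sketch of the same reductions on a nested list:

```python
def mean_dim(mat, dim):
    """Mean of a 2-D nested list: dim 0 collapses rows (column means), dim 1 gives row means."""
    if dim == 0:
        return [sum(col) / len(col) for col in zip(*mat)]
    return [sum(row) / len(row) for row in mat]

t = [[1, 2], [3, 4]]
print(mean_dim(t, 0))  # [2.0, 3.0]
print(mean_dim(t, 1))  # [1.5, 3.5]
```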
Sum
```
t = torch.FloatTensor([[1,2], [3,4]])
print(t.sum())
print(t.sum(dim = 0))
print(t.sum(dim = 1))
print(t.sum(dim = -1))
```
Max & Argmax
```
t = torch.FloatTensor([[1,2], [3,4]])
t
```
The max operator returns one value if it is called without an argument.
```
t.max()
```
The max operator returns two values when called with a dimension specified. The first value is the maximum value, and the second value is the argmax: the index of the element with the maximum value.
```
t.max(dim = 0) # Returns two values: Max and Argmax
print('Max: ', t.max(dim = 0)[0])
print('Argmax: ', t.max(dim = 0)[1])
t.max(dim = 1)
t.max(dim = -1)
```
# Lab Three - Clustering
Team Members
* Chance Robinson
* Dan Crouthamel
* Shane Weinstock
# Business Understanding 1
_Describe the purpose of the data set you selected (i.e., why was this data collected in the first place?). How will you measure the effectiveness of a good algorithm? Why does your chosen validation method make sense for this specific dataset and the stakeholders needs?_
```
# Base Imports
import pandas as pd
import numpy as np
import time
from matplotlib import pyplot as plt
from matplotlib.ticker import MaxNLocator
import seaborn as sns; sns.set()
%matplotlib inline
# Pre-Processing
from sklearn.preprocessing import RobustScaler
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.compose import ColumnTransformer
# Metrics and Evaluation
from sklearn import metrics
from sklearn.metrics import classification_report
from sklearn.metrics import plot_confusion_matrix
from sklearn.metrics import plot_roc_curve
# Train/ Test Split
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
# Imbalanced Data
# from imblearn.over_sampling import SMOTE
# from imblearn.over_sampling import BorderlineSMOTE
# from imblearn.pipeline import make_pipeline, Pipeline
# Estimators
# from sklearn.naive_bayes import MultinomialNB
# from sklearn.neighbors import KNeighborsClassifier
# from sklearn.ensemble import RandomForestClassifier
# from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
# from sklearn.tree import DecisionTreeClassifier
# Hyper Parameter Tuning
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
# T-Tests
# from mlxtend.evaluate import paired_ttest_5x2cv
# Machine Learning Visualizations
# from yellowbrick.classifier import ROCAUC
# from yellowbrick.classifier import PrecisionRecallCurve
# from yellowbrick.classifier import ClassificationReport
plt.style.use("ggplot")
```
# Data Understanding 1
_Describe the meaning and type of data (scale, values, etc.) for each attribute in the data file. Verify data quality: Are there missing values? Duplicate data? Outliers? Are those mistakes? How do you deal with these problems?_
## Load Data
```
df = pd.read_csv('../../../../../../data/cardio_train.csv', delimiter=';')
# set id as index
df.set_index("id", inplace=True)
# copy original data
df_clean = df.copy(deep=True)
# drop duplicates
df_clean.drop_duplicates(inplace=True)
```
## Apply Transformations
```
# %%time
# re-encode gender to male (1) and female (0)
df_clean['gender'] = np.where((df_clean.gender == 2), 1, 0)
# If > 200, replace with ap_hi median (120)
# If < 80 (median for ap_lo), replace with ap_hi median (120)
df_clean['ap_hi'] = np.where(df_clean['ap_hi'] > 200, 120, df_clean['ap_hi'])
df_clean['ap_hi'] = np.where(df_clean['ap_hi'] < 80, 120, df_clean['ap_hi'])
# If > 120 (median for hi), replace with ap_lo median (80)
# If < 0 replace with ap_lo median (80)
df_clean['ap_lo'] = np.where(df_clean['ap_lo'] > 120, 80, df_clean['ap_lo'])
df_clean['ap_lo'] = np.where(df_clean['ap_lo'] < 0, 80, df_clean['ap_lo'])
```
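The repeated `np.where` pattern above (replace implausible readings with the column median) can be factored into a small helper; a sketch, where the helper name `replace_outside_range` and the sample values are ours, not from the original notebook:

```python
import pandas as pd

def replace_outside_range(series, low, high, fill):
    """Replace values outside [low, high] with a fill value (e.g. the median)."""
    return series.where(series.between(low, high), fill)

# Hypothetical systolic blood-pressure column with two implausible readings
ap_hi = pd.Series([120, 140, 1100, -70, 130])
print(replace_outside_range(ap_hi, 80, 200, 120).tolist())  # [120, 140, 120, 120, 130]
```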
**Table 1: Cardiovascular Dataset - Attribute Descriptions**
| Column Description | Feature Type | Column Name | Data Type |
|:---|:---|:---|:---|
| **Age** | Objective | age | int (days) |
| **Height** | Objective | height | int (cm) |
| **Weight** | Objective | weight | float (kg) |
| **Gender** | Objective | gender | 0: female, 1: male |
| **Systolic blood pressure** | Examination | ap_hi | int |
| **Diastolic blood pressure** | Examination | ap_lo | int |
| **Cholesterol** | Examination | cholesterol | 1: normal, 2: above normal, 3: well above normal |
| **Glucose** | Examination | gluc | 1: normal, 2: above normal, 3: well above normal |
| **Smoking** | Subjective | smoke | binary |
| **Alcohol intake** | Subjective | alco | binary |
| **Physical activity** | Subjective | active | binary |
| **Has CVD?** | Target * | cardio | binary |
```
df_clean.describe()
```
# Data Understanding 2
_Visualize any important attributes appropriately. Important: Provide an interpretation for any charts or graphs._
```
corr_features = ['height', 'weight', 'ap_hi', 'ap_lo',]
plt.figure(figsize=(8,6))
# Use an easier to see colormap
cmap = sns.diverging_palette(220, 10, as_cmap=True)
# Mask
correlation = df_clean[corr_features].corr()
# correlation[np.abs(correlation)<.2] = 0
sns.heatmap(correlation, annot = True, cmap=cmap).set(title = 'Correlation Heatmap')
plt.show()
```
### Baseline Classification Performance
```
from sklearn.model_selection import StratifiedKFold, cross_val_score
numeric_features = ['age', 'height', 'weight', 'ap_hi', 'ap_lo', 'cholesterol', 'gluc']
categorical_features = ['gender', 'smoke', 'alco', 'active']
# Impute Numeric Features with the mean value
# One Hot Encode Categorical Features
# Robust Scaler
from sklearn.preprocessing import RobustScaler
rs = RobustScaler()
df_clean[["age", "height", "weight", "ap_hi", "ap_lo"]] = rs.fit_transform(df_clean[["age", "height", "weight", "ap_hi", "ap_lo"]])
X_cols = ['age', 'gender', 'height', 'weight', 'ap_hi', 'ap_lo', 'cholesterol', 'gluc', 'smoke', 'alco', 'active']
y = df_clean['cardio']
X = df_clean[X_cols]
cv = StratifiedKFold(n_splits=10)
clf_logreg = LogisticRegression(random_state=1, penalty='l2', C=.01)
roc = cross_val_score(clf_logreg, X, y=y, cv=cv, scoring='roc_auc')
print ("Average ROC (AUC) = ", roc.mean()*100, "+-", roc.std()*100)
```
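The `roc_auc` scorer above computes the area under the ROC curve; equivalently, AUC is the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A dependency-free sketch of that rank-based (Mann-Whitney) computation, included as our own illustration rather than part of the lab code:

```python
def roc_auc(y_true, scores):
    """Mann-Whitney formulation of ROC AUC: the fraction of (positive, negative)
    pairs where the positive scores higher; ties count as half a win."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

For real work, `sklearn.metrics.roc_auc_score` computes the same quantity far more efficiently.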
# Modeling and Evaluation 1
_Train and adjust parameters_
```
from kneed import KneeLocator
# kl = KneeLocator(range(1, 11), sse, curve="convex", direction="decreasing")
# n_clusters = kl.elbow
# n_clusters
# plt.figure()
# # plt.subplot(1,2,1)
# X2=X2.values
# plt.scatter(X2[:, 0], X2[:, 1]+np.random.random(X2[:, 1].shape)/2, c=new_feature, cmap=plt.cm.rainbow, s=20, linewidths=0)
# plt.xlabel('ap_hi (normalized)'), plt.ylabel('ap_lo (normalized)')
# plt.grid()
```
## Spectral Clustering
```
%%time
# an example using SpectralClustering, which clusters by building a similarity graph over the samples
from sklearn.cluster import SpectralClustering
X_cols = ['age', 'gender', 'height', 'weight', 'ap_hi', 'ap_lo', 'cholesterol', 'gluc', 'smoke', 'alco', 'active']
y = df_clean['cardio']
X = df_clean[X_cols]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.50, random_state=1)
X2 = X_train[['ap_hi','ap_lo']]
y = y_train
X = X_train[['age', 'gender', 'height', 'weight', 'cholesterol', 'gluc', 'smoke', 'alco', 'active']]
nclust = 3
# If a string, this may be one of
# ‘nearest_neighbors’, ‘precomputed’, ‘rbf’
# or one of the kernels supported by sklearn.metrics.pairwise_kernels
spc = SpectralClustering(n_clusters=nclust, affinity='nearest_neighbors', n_jobs=-1, random_state=1, assign_labels="kmeans")
labels = spc.fit_predict(X2)
X = np.column_stack((X, pd.get_dummies(labels)))
roc = cross_val_score(clf_logreg, X, y=y, cv=cv, scoring='roc_auc')
print ("Average ROC (AUC) = ", roc.mean()*100, "+-", roc.std()*100)
# plt.scatter(X2[:, 0], X2[:, 1], c=labels,
# cmap=plt.cm.rainbow, s=5, linewidths=0)
# plt.show()
```
# Modeling and Evaluation 2
_Evaluate and Compare_
# Modeling and Evaluation 3
_Visualize Results_
# Modeling and Evaluation 4
_Summarize the Ramifications_
# Deployment
_Be critical of your performance and tell the reader how your current model might be usable by other parties. Did you achieve your goals? If not, can you rein in the utility of your modeling? How useful is your model for interested parties (i.e., the companies or organizations that might want to use it)? How would you deploy your model for interested parties? What other data should be collected? How often would the model need to be updated, etc.?_
# Exceptional Work
_You have free rein to provide additional analyses or combine analyses._
```
X_cols = ['age', 'gender', 'height', 'weight', 'ap_hi', 'ap_lo', 'cholesterol', 'gluc', 'smoke', 'alco', 'active']
y = df_clean['cardio']
X = df_clean[X_cols]
clf_logreg = LogisticRegression(random_state=1)
pipe_logreg = Pipeline([('clf', clf_logreg)])
model_params = {
# "logisticregression": {
# "model": pipe_logreg,
# "params": {
# "clf__C": [.01, .1, 1, 5, 10, 25, 50],
# "clf__penalty": ["l1", "l2"]
# }
# }
}
scores = []
for model_name, mp in model_params.items():
start = time.time()
# clf = GridSearchCV(estimator = mp["model"], param_grid=mp["params"], cv=10, scoring="roc_auc", n_jobs=-1)
clf = RandomizedSearchCV(estimator = mp["model"], param_distributions=mp["params"], cv=10, scoring="roc_auc", n_jobs=-1)
clf.fit(X, y)
elapsed_time = (time.time() - start)
scores.append({"Model": model_name,
"Best ROC AUC": clf.best_score_, # Mean cross-validated score of the best_estimator
"Best Params": clf.best_params_,
"results": clf.cv_results_,
"Cross Validation Time": elapsed_time,
"Best Estimator": clf.best_estimator_
})
print('10 Fold Cross Validation Scores (CVD):')
for model in scores:
print()
for key, value in model.items():
if key == 'Best Estimator':
print("Prediction Accuracy",': ',value.score(X, y))
elif key == 'results':
print('Mean Fit Time: ', value['mean_fit_time'].mean())
print('Mean Score Time: ', value['mean_score_time'].mean())
else:
print(key,': ',value)
```
## Wikidata Knowledge Graph Extraction
Many recommendation algorithms (DKN, RippleNet, KGCN) use Knowledge Graphs as an external source of information. One of the bottlenecks in benchmarking algorithms like DKN, RippleNet, or KGCN is that they used Microsoft Satori; since Satori is not open source, the results reported in the papers cannot be replicated. The solution is to use other open-source KGs.
The goal of this notebook is to provide examples of how to interact with Wikipedia queries and Wikidata to extract a Knowledge Graph that can be used with the mentioned algorithms.
The steps covered are:
- How to find a Wikidata entity (https://www.wikidata.org/wiki/Wikidata:Glossary/en) from a text query
- How to find surrounding entities of an entity
- How to find the description of an entity
- Create a KG for Movielens
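The `reco_utils` helpers imported below wrap the public Wikidata API. As a rough illustration of what the entity-search step involves, this is how a `wbsearchentities` request URL could be built (the endpoint and parameter names reflect our understanding of the public MediaWiki API, not code from this notebook, and no network call is made here):

```python
from urllib.parse import urlencode

def wikidata_search_url(query, limit=1):
    """Build a Wikidata entity-search URL for the wbsearchentities endpoint."""
    params = {
        "action": "wbsearchentities",
        "search": query,
        "language": "en",
        "format": "json",
        "limit": limit,
    }
    return "https://www.wikidata.org/w/api.php?" + urlencode(params)

print(wikidata_search_url("The Godfather"))
```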
```
# set the environment path to find Recommenders
import sys
sys.path.append("../../")
print("System version: {}".format(sys.version))
import pandas as pd
from reco_utils.dataset.wikidata import (
find_wikidataID,
query_entity_links,
read_linked_entities,
query_entity_description
)
import networkx as nx
import matplotlib.pyplot as plt
from tqdm import tqdm
from reco_utils.dataset import movielens
from reco_utils.common.notebook_utils import is_jupyter
# Select MovieLens data size: 100k, 1m, 10m, or 20m
MOVIELENS_DATA_SIZE = '100k'
MOVIELENS_SAMPLE = False
MOVIELENS_SAMPLE_SIZE = 5
KG_FILE_NAME = "movielens_" + MOVIELENS_DATA_SIZE + '_wikidata.csv'
```
## 1. Create a KG from linked entities in Wikidata
```
names = ["The Godfather", "Al Pacino", "Tom Hanks", "Forrest Gump", "Julia Roberts", "", "My Best Friend's Wedding"]
def wikidata_KG_from_list(names):
results_list = pd.DataFrame()
for n in names:
print(n)
entity_id = find_wikidataID(n)
if entity_id != "entityNotFound":
json_links = query_entity_links(entity_id)
related_entities,related_names = read_linked_entities(json_links)
d = pd.DataFrame({
"name":n,
"original_entity":[entity_id]* len(related_entities),
"linked_entities":related_entities,
"name_linked_entities":related_names})
results_list = pd.concat([results_list, d])
return results_list
%%time
results_list = wikidata_KG_from_list(names)
results_list.head()
```
### Visualize KG using networkx
```
G = nx.from_pandas_edgelist(results_list, 'original_entity', 'linked_entities')
target_names = results_list[["linked_entities", "name_linked_entities"]].drop_duplicates().rename(columns={"linked_entities": "labels", "name_linked_entities": "name"})
source_names = results_list[["original_entity", "name"]].drop_duplicates().rename(columns={"original_entity": "labels"})
names = pd.concat([target_names, source_names])
names = names.set_index("labels")
names = names.to_dict()["name"]
plt.figure(figsize=(12,12))
pos = nx.spring_layout(G)
nx.draw(G,pos, node_size=60,font_size=9, width = 0.2)
nx.draw_networkx_labels(G, pos, names, font_size=9)
plt.show()
```
## 2. Create an item description with short description and linked entities
```
# Create entity description with small description and string of linked entities
names = ["The Godfather", "Al Pacino"]
def wikidata_descriptions_from_list(names):
result_description = pd.DataFrame()
for n in names:
entity_id = find_wikidataID(n)
if entity_id != "entityNotFound":
json_links = query_entity_links(entity_id)
entity_description = query_entity_description(entity_id)
related_entities,related_names = read_linked_entities(json_links)
d = pd.DataFrame({"name": n,
"original_entity": entity_id,
"description":entity_description,
"related_names":', '.join(related_names)}, index = [0])
result_description = pd.concat([result_description, d])
return result_description
%%time
result_description = wikidata_descriptions_from_list(names)
result_description.head(10)
```
## 3. Create a KG from the Movielens Dataset
```
# Obtain pairs of Movie Title - IDs from Movielens
df = movielens.load_pandas_df(MOVIELENS_DATA_SIZE,
('UserId', 'ItemId', 'Rating', 'Timestamp'),
title_col='Title',
genres_col='Genres',
year_col='Year'
)
movies = df[["Title", "ItemId"]].drop_duplicates().reset_index()
movies["Title"][1:5]
movies.shape
def wikidata_KG_from_movielens(df):
result_linked = pd.DataFrame()
entity_id = find_wikidataID(df["Title"] + " film")
if entity_id != "entityNotFound":
json_links = query_entity_links(entity_id)
related_entities,related_names = read_linked_entities(json_links)
d = pd.DataFrame({"original_entity":[entity_id]* len(related_entities),
"linked_entities":related_entities,
"name_linked_entities":related_names,
"movielens_title": df["Title"],
"movielens_id": df["ItemId"],
})
result_linked = pd.concat([result_linked, d])
return result_linked
# For notebook testing
if MOVIELENS_SAMPLE == True:
movies = movies.sample(MOVIELENS_SAMPLE_SIZE, random_state=123)
tqdm().pandas(desc="Number of movies completed")
result = pd.concat(list(movies.progress_apply(lambda x: wikidata_KG_from_movielens(x), axis=1)))
result["movielens_title"].value_counts()
# result.to_csv(KG_FILE_NAME, index = False)
# Record results with papermill for tests - ignore this cell
if is_jupyter():
# Record results with papermill for unit-tests
import papermill as pm
pm.record("lenght_result", result.shape[0])
```
## Importing and prepping data
```
import pandas as pd
import numpy as np
import diff_classifier.aws as aws
import diff_classifier.pca as pca
import os
features = []
remote_folder = 'Gel_studies' #Folder in AWS S3 containing files to be analyzed
bucket = 'dtoghani.data'
vids = 10
mws = ['5k_PEG', 'PS_COOH', '5k_PEG_NH2', 'PS_NH2']
nonnum = ['Particle Type', 'Video Number', 'Track_ID', 'Deff2',
'Mean Mean_Intensity', 'Std Mean_Intensity',
'X', 'Y', 'Mean X', 'Mean Y', 'Std X', 'Std Y']
calcs = [2]
counter = 0
for calc in calcs:
for mw in mws:
for num in range(1, vids+1):
try:
filename = 'features_{}_{}mM_XY{}.csv'.format(mw, calc, '%02d' % num)
#os.remove(filename)
#aws.download_s3('{}/{}'.format(remote_folder, filename), filename, bucket_name=bucket)
fstats = pd.read_csv(filename, encoding = "ISO-8859-1", index_col='Unnamed: 0')
fstats['Particle Type'] = pd.Series(fstats.shape[0]*[mw], index=fstats.index)
fstats['Video Number'] = pd.Series(fstats.shape[0]*[num], index=fstats.index)
#fstats['Calcium Concentration'] = pd.Series(fstats.shape[0]*[str(calcs)], index=fstats.index)
#print(num)
print(filename)
counter = counter + 1
if counter == 1:
fstats_tot = fstats
else:
fstats_tot = fstats_tot.append(fstats, ignore_index=True)
except:
print('skip filename: {}'.format(filename))
fstats_tot.to_csv('features.csv')
fstats_tot.shape
#PCA analyses with too many datapoints fail. You get rows with lots of NAs. I'm going to try making a subset of the data first
#and then do a PCA analysis on that.
#include all in analysis
import random
subset = np.sort(np.array(random.sample(range(fstats_tot.shape[0]), 500000)))
fstats_sub = fstats_tot.loc[subset, :].reset_index(drop=True)
#with equal sample sizes for each particle type
import random
counter = 0
#mws = ['10k_PEG', '5k_PEG', '1k_PEG', 'PS_COOH']
for mw in mws:
fstats_type = fstats_tot[fstats_tot['Particle Type']==mw].reset_index(drop=True)
print(fstats_type.shape)
subset = np.sort(np.array(random.sample(range(fstats_type.shape[0]), 11000)))
if counter == 0:
fstats_sub = fstats_type.loc[subset, :].reset_index(drop=True)
else:
fstats_sub = fstats_sub.append(fstats_type.loc[subset, :].reset_index(drop=True), ignore_index=True)
counter = counter + 1
for mw in mws:
print(fstats_tot[fstats_tot['Particle Type'] == mw].shape)
#fstats = pd.read_csv(filename, encoding = "ISO-8859-1", index_col='Unnamed: 0')
fstats_totMW = fstats_sub[fstats_sub['Particle Type'].isin(mws)].reset_index(drop=True)
#nonnum = ['Particle Type', 'Video Number', 'Track_ID', 'Calcium Concentration', 'Deff2']
fstats_num = fstats_totMW.drop(nonnum, axis=1)
fstats_raw = fstats_num.values
#fstats
```
## PCA analysis
The pca.pca_analysis function provides a completely contained PCA analysis of the input trajectory features dataset. It includes options to impute NaN values (fill in with average values or drop them), and to scale features. Read the docstring for more information.
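Under the hood, a PCA of this kind reduces to centering and scaling the feature matrix and taking an SVD. A minimal NumPy sketch of those mechanics on synthetic data (our own illustration, not the `diff_classifier.pca` implementation):

```python
import numpy as np

def pca_sketch(X, n_components):
    """Center and scale columns, then project onto the top right-singular vectors."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
    components = Vt[:n_components]          # principal axes in feature space
    scores = Xs @ components.T              # coordinates of each sample
    explained = (S ** 2) / (S ** 2).sum()   # variance ratio per component
    return scores, explained[:n_components]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))               # 100 samples, 5 features
scores, explained = pca_sketch(X, 2)
print(scores.shape)                          # (100, 2)
```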
```
ncomp = 10
pcadataset = pca.pca_analysis(fstats_totMW, dropcols=nonnum, n_components=ncomp)
```
The pca.kmo function calculates the Kaiser-Meyer-Olkin statistic, a measure of sampling adequacy. Check the docstring for more information.
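For reference, the overall KMO statistic compares squared correlations against squared partial correlations; values near 1 suggest the features share enough common variance for PCA or factor analysis to be meaningful. A NumPy sketch of the overall KMO computation (our own illustration of the standard formula, not the `diff_classifier` implementation):

```python
import numpy as np

def kmo_overall(X):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(X, rowvar=False)
    inv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                      # partial correlation matrix
    np.fill_diagonal(partial, 0)
    r2 = (R ** 2).sum() - len(R)            # off-diagonal r^2 (diagonal is all ones)
    p2 = (partial ** 2).sum()
    return r2 / (r2 + p2)

rng = np.random.default_rng(1)
base = rng.normal(size=(200, 1))
X = base + 0.5 * rng.normal(size=(200, 4))  # four features sharing one common factor
print(round(kmo_overall(X), 3))              # high KMO: features are well suited to PCA
```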
```
kmostat = pca.kmo(pcadataset.scaled)
```
## Visualization
Users can then compare average principal component values between subgroups of the data. In this case, all particles were taken from the same sample, so there are no experimental subgroups. I chose to compare short trajectories to long trajectories, as I would expect differences between the two groups.
```
import numpy as np
#ncomp = 10
dicti = {}
#test = np.exp(np.nanmean(np.log(pcadataset.final[pcadataset.final['Particle Size']==200].as_matrix()), axis=0))[-6:]
#test1 = np.exp(np.nanmean(np.log(pcadataset.final[pcadataset.final['Particle Size']==500].as_matrix()), axis=0))[-6:]
dicti[0] = np.nanmean(pcadataset.final[pcadataset.final['Particle Type']=='5k_PEG'].values[:, -ncomp:], axis=0)
dicti[1] = np.nanmean(pcadataset.final[pcadataset.final['Particle Type']=='PS_COOH'].values[:, -ncomp:], axis=0)
dicti[2] = np.nanmean(pcadataset.final[pcadataset.final['Particle Type']=='5k_PEG_NH2'].values[:, -ncomp:], axis=0)
dicti[3] = np.nanmean(pcadataset.final[pcadataset.final['Particle Type']=='PS_NH2'].values[:, -ncomp:], axis=0)
dicti[3]
labels = mws
pca.plot_pca(dicti, savefig=True, labels=labels, rticks=np.linspace(-5, 5, 11))
```
The variable pcadataset.prcomps shows the user the major contributions to each of the new principal components. When observing the graph above, users can see that there are some differences between short trajectories and long trajectories in component 0 (asymmetry1 being the major contributor) and component 1 (elongation being the major contributor).
```
pcadataset.prcomps
feats = pca.feature_violin(pcadataset.final, label='Particle Type', lvals=labels, fsubset=ncomp, yrange=[-12, 12])
fstats1 = pca.feature_plot_2D(pcadataset.final, label='Particle Type', lvals=labels, randcount=400, yrange=[-6, 6],
xrange=[-4, 4])
fstats1 = pca.feature_plot_3D(pcadataset.final, label='Particle Type', lvals=labels, randcount=400, ylim=[-12, 12],
xlim=[-12, 12], zlim=[-12, 12], features=[0, 1, 3])
#ncomp = 12
trainp = np.array([])
testp = np.array([])
for i in range(0, 20):
KNNmod, X, y = pca.build_model(pcadataset.final, 'Particle Type', labels, equal_sampling=True,
tsize=400, input_cols=ncomp, model='MLP', NNhidden_layer=(6, 2))
trainp = np.append(trainp, pca.predict_model(KNNmod, X, y))
X2 = pcadataset.final.values[:, -ncomp:]
y2 = pcadataset.final['Particle Type'].values
testp = np.append(testp, pca.predict_model(KNNmod, X2, y2))
print('Run {}: {}'.format(i, testp[i]))
print('{} +/- {}'.format(np.mean(trainp), np.std(trainp)))
print('{} +/- {}'.format(np.mean(testp), np.std(testp)))
subset
pcadataset.components.to_csv('components.csv')
pcadataset.prcomps
```
# Module 10 Application
## Challenge: Crypto Investments
In this Challenge, you’ll combine your financial Python programming skills with the new unsupervised learning skills that you acquired in this module.
The CSV file provided for this challenge contains price change data of cryptocurrencies in different periods.
The steps for this challenge are broken out into the following sections:
* Import the Data (provided in the starter code)
* Prepare the Data (provided in the starter code)
* Cluster Cryptocurrencies with K-means
* Find the Best Value for k
* Optimize Clusters with Principal Component Analysis
* Visualize the Results
### Import the Data
This section imports the data into a new DataFrame. It follows these steps:
1. Read the “crypto_market_data.csv” file from the Resources folder into a DataFrame, and use `index_col="coin_id"` to set the cryptocurrency name as the index. Review the DataFrame.
2. Generate the summary statistics, and use hvPlot to visualize your data to observe what your DataFrame contains.
> **Rewind:** The [Pandas `describe()` function](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.describe.html) generates summary statistics for a DataFrame.
```
# Import required libraries and dependencies
import pandas as pd
import hvplot.pandas
from pathlib import Path
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
# Load the data into a Pandas DataFrame
df_market_data = pd.read_csv(
Path("Resources/crypto_market_data.csv"),
index_col="coin_id")
# Display sample data
df_market_data.head(10)
# Generate summary statistics
df_market_data.describe()
# Plot your data to see what's in your DataFrame
df_market_data.hvplot.line(
width=800,
height=400,
rot=90
)
```
### Prepare the Data
This section prepares the data before running the K-Means algorithm. It follows these steps:
1. Use the `StandardScaler` module from scikit-learn to normalize the CSV file data. This will require you to utilize the `fit_transform` function.
2. Create a DataFrame that contains the scaled data. Be sure to set the `coin_id` index from the original DataFrame as the index for the new DataFrame. Review the resulting DataFrame.
```
# Use the `StandardScaler()` module from scikit-learn to normalize the data from the CSV file
scaled_data = StandardScaler().fit_transform(df_market_data)
# Create a DataFrame with the scaled data
df_market_data_scaled = pd.DataFrame(
scaled_data,
columns=df_market_data.columns
)
# Copy the crypto names from the original data
df_market_data_scaled["coin_id"] = df_market_data.index
# Set the coinid column as index
df_market_data_scaled = df_market_data_scaled.set_index("coin_id")
# Display sample data
df_market_data_scaled.head()
```
---
# Cluster Cryptocurrencies with K-means
In this section, you will use the K-Means algorithm with a given value for `k` to cluster the cryptocurrencies according to the price changes of cryptocurrencies provided.
1. Initialize the K-Means model with four clusters (`n_clusters=4`).
2. Fit the K-Means model using the scaled data.
3. Predict the clusters to group the cryptocurrencies using the scaled data. View the resulting array of cluster values.
4. Add a new column to the DataFrame with the scaled data with the predicted clusters.
5. Create a scatter plot using hvPlot by setting `x="price_change_percentage_14d"` and `y="price_change_percentage_1y"`. Color the graph points with the labels found using K-Means and add the crypto name in the `hover_cols` parameter to identify the cryptocurrency represented by each data point.
```
# Initialize the K-Means model with four clusters
model = KMeans(n_clusters=4)
# Fit the K-Means model using the scaled data
model.fit(df_market_data_scaled)
# Predict the clusters to group the cryptocurrencies using the scaled data
crypto_clusters_k4 = model.predict(df_market_data_scaled)
# View the resulting array of cluster values.
print(crypto_clusters_k4)
# Note: The code for this step is provided for you.
# Add a new column to the DataFrame with the predicted clusters with k=4
df_market_data_scaled["crypto_cluster_k4"] = crypto_clusters_k4
# Display sample data
df_market_data_scaled.head()
# Create a scatter plot using hvPlot by setting
# `x="price_change_percentage_14d"` and `y="price_change_percentage_1y"`.
# Group the results by the clusters using `by="crypto_cluster_k4".
# Set the hover to the coin id using `hover_cols=["coin_id"]`.
df_market_data_scaled.hvplot.scatter(
x="price_change_percentage_14d",
y="price_change_percentage_1y",
by="crypto_cluster_k4",
hover_cols=["coin_id"],
marker=["hex", "square", "cross", "inverted_triangle", "triangle"],
)
```
---
# Find the Best Value for k
In this section, you will use the elbow method to find the best value for k.
1. Code the elbow method algorithm to find the best value for k. Use a range from 1 to 11.
2. Plot a line chart with all the inertia values computed with the different values of k to visually identify the optimal value for k.
3. Answer the following question: What is the best value for k?
```
# Create a list with the number of k-values to try
# Use a range from 1 to 11
k = list(range(1, 11))
# Create an empty list to store the inertia values
inertia = []
# Create a for loop to compute the inertia with each possible value of k
# Inside the loop:
# 1. Create a KMeans model using the loop counter for the n_clusters
# 2. Fit the model to the data using `df_market_data_scaled`
# 3. Append the model.inertia_ to the inertia list
for i in k:
model = KMeans(n_clusters=i, random_state=0)
model.fit(df_market_data_scaled)
inertia.append(model.inertia_)
# Create a dictionary with the data to plot the Elbow curve
elbow_data = {
"k": k,
"inertia": inertia
}
# Create a DataFrame with the data to plot the Elbow curve
df_elbow = pd.DataFrame(elbow_data)
# Plot a line chart with all the inertia values computed with
# the different values of k to visually identify the optimal value for k.
df_elbow.hvplot.line(x="k", y="inertia", title="Elbow Curve", xticks=k)
```
#### 3. Answer the following question: What is the best value for k?
**Question:** What is the best value for `k`?
**Answer:** The best value for k appears to be 4 or 5, as that is where the curve flattens out. For the analysis below we will pick k=5.
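The "where the curve flattens" judgment can also be automated. A simple sketch of one common heuristic (pick the point farthest from the chord joining the curve's endpoints, the idea behind libraries such as `kneed`), using hypothetical inertia values rather than the ones computed above:

```python
def elbow_point(ks, inertias):
    """Return the k whose point lies farthest from the straight line
    joining the first and last points of the inertia curve."""
    x1, y1, x2, y2 = ks[0], inertias[0], ks[-1], inertias[-1]

    def dist(x, y):  # distance to the chord, up to a constant factor
        return abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1)

    return max(zip(ks, inertias), key=lambda p: dist(*p))[0]

ks = list(range(1, 11))
inertias = [1000, 400, 200, 120, 100, 90, 85, 82, 80, 79]  # hypothetical values
print(elbow_point(ks, inertias))  # 3
```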
---
# Optimize Clusters with Principal Component Analysis
In this section, you will perform a principal component analysis (PCA) and reduce the features to three principal components.
1. Create a PCA model instance and set `n_components=3`.
2. Use the PCA model to reduce to three principal components. View the first five rows of the DataFrame.
3. Retrieve the explained variance to determine how much information can be attributed to each principal component.
4. Answer the following question: What is the total explained variance of the three principal components?
5. Create a new DataFrame with the PCA data. Be sure to set the `coin_id` index from the original DataFrame as the index for the new DataFrame. Review the resulting DataFrame.
6. Initiate a new K-Means algorithm using the PCA DataFrame to group the cryptocurrencies. Set the `n_clusters` parameter equal to the best value for `k` found before. View the resulting array.
7. For further analysis, add the following columns to the DataFrame with the PCA data. Review the resulting DataFrame once the additional columns have been added. Make sure to do the following:
- From the original DataFrame, add the `price_change_percentage_1y` and `price_change_percentage_14d` columns.
- Add a column with the predicted cluster values identified using a k value of 4. (The predicted cluster values were calculated in the “Cluster Cryptocurrencies with K-means” section.)
- Add a column with the predicted cluster values identified using the optimal value for k.
```
# Create a PCA model instance and set `n_components=3`.
pca = PCA(n_components=3)
# Use the PCA model with `fit_transform` to reduce to
# three principal components.
market_pca_data = pca.fit_transform(df_market_data_scaled)
# View the first five rows of the DataFrame.
market_pca_data[:5]
# Retrieve the explained variance to determine how much information
# can be attributed to each principal component.
pca.explained_variance_ratio_
```
Answer the following question: What is the total explained variance of the three principal components?
**Question** What is the total explained variance of the three principal components?
**Answer** The explained variance would be the sum of the explained variance ratio for the three components which is around 88%
```
# Create a new DataFrame with the PCA data.
# Note: The code for this step is provided for you
# Creating a DataFrame with the PCA data
df_market_data_pca = pd.DataFrame(
market_pca_data,
columns=["PC1", "PC2", "PC3"]
)
# Copy the crypto names from the original data
df_market_data_pca["coin_id"] = df_market_data.index
# Set the coinid column as index
df_market_data_pca = df_market_data_pca.set_index("coin_id")
# Display sample data
df_market_data_pca.head()
# Initiate a new K-Means algorithm using the PCA DataFrame to group
# the cryptocurrencies. Set the `n_clusters` parameter equal to
# the best value for `k` found before. View the resulting array.
# Initialize the K-Means model
model = KMeans(n_clusters=5)
# Fit the model
model.fit(df_market_data_pca)
# Predict clusters
crypto_clusters_k5 = model.predict(df_market_data_pca)
# View the resulting array
crypto_clusters_k5
# Note: The code for the following step has been provided for you.
# For further analysis, add the following columns to the DataFrame
# with the PCA data. Review the resulting DataFrame once the additional
# columns have been added. Make sure to do the following:
# - From the original DataFrame, add the `price_change_percentage_1y` and `price_change_percentage_14d` columns.
# - Add a column with the predicted cluster values identified using a k value of 4. (The predicted cluster values were calculated in the “Cluster Cryptocurrencies with K-means” section.)
# - Add a column with the predicted cluster values identified using the optimal value for k.
# Add the price_change_percentage_1y column from the original data
df_market_data_pca["price_change_percentage_1y"] = df_market_data["price_change_percentage_1y"]
# Add the price_change_percentage_14d column from the original data
df_market_data_pca["price_change_percentage_14d"] = df_market_data["price_change_percentage_14d"]
# Add a new column to the DataFrame with the predicted clusters using the best value of k
df_market_data_pca["crypto_cluster_k5"] = crypto_clusters_k5
# Add a new column to the DataFrame with the predicted clusters using k=4
df_market_data_pca["crypto_cluster_k4"] = crypto_clusters_k4
# Display sample data
df_market_data_pca
```
---
# Step 6: Plot Results
In this section, you will visually analyze the cluster analysis results after using the optimization techniques.
1. Use the PCA data to create two scatter plots using hvPlot by setting `x="price_change_percentage_14d"` and `y="price_change_percentage_1y"`. Make sure to do the following:
- In the first plot, color the plot points by the cluster values identified using a k value of 4.
- In the second plot, color the plot points by the cluster values identified using the optimal value for k.
- In both plots, add the crypto name by using the `hover_cols` parameter to identify the cryptocurrency represented by each data point.
2. Be sure to professionally style and format the plots so that the visualizations can be easily read.
3. Answer the following question: What value of k creates the most accurate clusters of cryptocurrencies, grouped by profitability?
```
# Create a scatter plot for the Crypto Clusters using k=4 data.
# Use the PCA data to create a scatter plot with hvPlot by setting
# x="price_change_percentage_14d" and y="price_change_percentage_1y".
# Group by the clusters using `by="crypto_cluster_k4"`.
# Set the hover columns to the coin id with `hover_cols=["coin_id"]`.
# Create a descriptive title for the plot using the title parameter.
scatter_plot_k4 = df_market_data_pca.hvplot.scatter(
x="price_change_percentage_14d",
y="price_change_percentage_1y",
by="crypto_cluster_k4",
hover_cols = ["coin_id"],
title = "Crypto Clusters with k=4"
)
# Create a scatter plot for the Crypto Clusters using k=5 data.
# Use the PCA data to create a scatter plot with hvPlot by setting
# x="price_change_percentage_14d" and y="price_change_percentage_1y".
# Group by the clusters using `by="crypto_cluster_k5"`.
# Set the hover columns to the coin id with `hover_cols=["coin_id"]`.
# Create a descriptive title for the plot using the title parameter.
scatter_plot_k5 = df_market_data_pca.hvplot.scatter(
x="price_change_percentage_14d",
y="price_change_percentage_1y",
by="crypto_cluster_k5",
hover_cols = ["coin_id"],
title = "Crypto Clusters with k=5"
)
# Compare both scatter plots
scatter_plot_k4 + scatter_plot_k5
```
**Question:** What value of `k` seems to create the most accurate clusters to group cryptocurrencies according to their profitability?
**Answer:** Comparing k=4 and k=5, the scatter plot with k=4 shows tighter clusters with less overlap. For a more robust analysis, though, we would need more data points, so that each cluster contains roughly the same number of observations.
```
#this allows plots to be displayed inline with the notebook
%matplotlib inline
```
Generally, you want to put your import statements at the top of the code, whether in notebooks or code files.
These first two import statements bring in the matplotlib plotting library and the numpy library, two core components of the "Scipy Stack", using a common convention among scientific Python developers (numpy as np, pyplot as plt).
```
import matplotlib.pyplot as plt
import numpy as np
```
These import statements bring in the PAGER models (population growth, exposure, fatality, etc.)
```
from losspager.models.emploss import LognormalModel,EmpiricalLoss
from losspager.models.growth import PopulationGrowth
from losspager.models.exposure import Exposure
from losspager.models.econexposure import EconExposure
```
These imports bring in other useful modules in the Python standard library, and other oddments.
```
import os.path
import time
from mpl_toolkits.axes_grid1 import make_axes_locatable
import fiona
```
LognormalModel objects contain properties and methods for the lognormal fatality models. This object provides a number of methods to calculate losses (fatalities and economic losses), calculate the loss rates for given MMI values, and even override the lognormal model with arbitrary rates.
To construct a LognormalModel, you provide it with a name (usually a two-letter country code), theta, beta, L2G, and (for economic losses) alpha values. Here we're ignoring the L2G values and simply assigning zero, as they are not used in loss calculations.
Fatalities
----------
```
iran = LognormalModel('IR',9.318099,0.100001,0.0)
california = LognormalModel('XF',37.729406,0.360337,0.0)
afghanistan = LognormalModel('AF',11.613073,0.180683,0.0)
china = LognormalModel('CN',10.328811,0.100058,0.0)
japan = LognormalModel('JP',11.862534,0.100779,0.0)
```
A lognormal fatality model "knows" how deadly it is, by calculating the area under the curve defined by the fatality rates over MMI values from 5 to 9. This allows the user to compare two models to each other, and even sort a list of them.
```
print('Iran is more deadly than California: %s\n' % (iran > california))
mlist = [iran,california,afghanistan,china,japan]
mlist.sort()
print('Sorted list of country models:')
print('%5s %6s %6s %-6s %14s' % ('Name','Theta','Beta','Area','Deaths'))
for model in mlist:
exp_pop = np.array([1e6,1e6,1e6,1e6,1e6])
mmirange = np.arange(5,10)
deaths = model.getLosses(exp_pop,mmirange)
print('%5s %6.3f %6.3f %6.4f %14.4f' % (model.name,model.theta,model.beta,model.getArea(),deaths))
```
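As a rough sketch of the curve behind this comparison, a PAGER-style lognormal fatality rate can be written as Φ(ln(MMI/θ)/β), where Φ is the standard normal CDF. The exact functional form used by `losspager` should be checked against its source, so treat this as an approximation:

```python
import math

def lognormal_rate(mmi, theta, beta):
    # Assumed PAGER-style rate: Phi(ln(mmi / theta) / beta),
    # with Phi the standard normal CDF (computed via math.erf)
    z = math.log(mmi / theta) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# At MMI 9, Iran's model (theta ~9.32) predicts a far higher
# fatality rate than California's (theta ~37.73)
iran_rate = lognormal_rate(9, 9.318099, 0.100001)
ca_rate = lognormal_rate(9, 37.729406, 0.360337)
assert 0.0 < ca_rate < iran_rate < 1.0
```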
You can plot the fatality rates of each of these models:
```
mmirange = np.arange(5,10)
f = plt.figure(figsize=(8,6))
colors = ['k','b','r','c','m']
for i in range(0,len(mlist)):
rates = mlist[i].getLossRates(mmirange)
plt.semilogy(mmirange,rates,colors[i],lw=2);
names = [m.name for m in mlist]
plt.legend(names,loc='lower right',numpoints=2);
plt.xlabel('MMI');
plt.ylabel('Loss Ratio');
```
You can inspect the fatality rates by simply printing the LognormalModel object:
```
print('California Model:\n%s' % california)
print('Japan Model:\n%s' % japan)
```
You can obtain the name, theta/beta/L2G values as properties from the LognormalModel object:
```
print('California:')
print('\tName %s' % california.name)
print('\tTheta %f' % california.theta)
print('\tBeta %f' % california.beta)
print('\tL2G %f' % california.l2g)
```
Up to this point, we've defined each model manually. We have all of these models for PAGER in one XML data file, which is included in this code repository. The EmpiricalLoss() class exists to handle large numbers of LognormalModel() objects, and has a method loadFromXML() to read in the XML file and create an EmpiricalLoss() instance.
```
xmlfile = os.path.join(os.getcwd(),'..','test','data','fatality.xml')
empfat = EmpiricalLoss.loadFromXML(xmlfile)
```
You can retrieve LognormalModel() objects from an EmpiricalLoss() object using the getModel() method.
```
chile = empfat.getModel('CL')
chile
```
And this is what PAGER will do...
(Example: Northridge)
```
t1 = time.time()
growthfile = os.path.join(os.getcwd(),'..','test','data','WPP2015_POP_F02_POPULATION_GROWTH_RATE.xls')
popgrowth = PopulationGrowth.loadFromUNSpreadsheet(growthfile)
sampledir = os.path.join(os.getcwd(),'..','test','data','eventdata','northridge')
popfile = os.path.join(sampledir,'northridge_gpw.flt')
isofile = os.path.join(sampledir,'northridge_isogrid.bil')
shakefile = os.path.join(sampledir,'northridge_grid.xml')
expmodel = Exposure(popfile,2012,isofile,popgrowth)
expdict = expmodel.calcExposure(shakefile)
for key,exp_pop in expdict.items():
print('Exposure for %s' % key)
for i in range(0,len(exp_pop)):
mmi = i+1
print('\tMMI %i: %s' % (mmi,format(int(exp_pop[i]),',d')))
#call fatality model
fat_results = empfat.getLosses(expdict)
for key,value in fat_results.items():
print('\nFatalities for %s: %i' % (key,value))
t2 = time.time()
print('\nTotal elapsed time for loss calculations: %.2f seconds' % (t2-t1))
```
In addition, the fatality module provides the ability to make a gridded fatality map. Note that this map may not be a reliable way to determine the distribution of fatalities, particularly as you zoom in.
```
mmidata = expmodel.getShakeGrid().getLayer('mmi').getData()
popdata = expmodel.getPopulationGrid().getData()
isodata = expmodel.getCountryGrid().getData()
fatgrid = empfat.getLossGrid(mmidata,popdata,isodata)
f = plt.figure(figsize=(8,8))
dmin = np.nanmin(fatgrid)
dmax = np.nanmax(fatgrid)
dmean = np.nanmean(fatgrid)
dstd = np.nanstd(fatgrid)
#Here we're zooming in on the affected regions...
plt.imshow(fatgrid[125:250,125:300],vmin=dmin,vmax=dmean+(3*dstd));
ax = plt.gca()
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
plt.colorbar(cax=cax);
```
We can also split up the fatalities from the above grid into input polygons. Here we take a shapefile of Los Angeles County city boundaries, and use it to divide up the fatalities.
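Conceptually, splitting a loss grid by polygons amounts to summing the grid cells that fall inside each region. A minimal numpy sketch, with a hypothetical region-id grid standing in for rasterized city boundaries:

```python
import numpy as np

# Tiny hypothetical loss grid and a matching grid of region ids
loss_grid = np.array([[1.0, 2.0],
                      [3.0, 4.0]])
region_ids = np.array([[0, 0],
                       [1, 1]])   # 0 = "city A", 1 = "city B"

# Sum losses over each region's cells
totals = {int(rid): loss_grid[region_ids == rid].sum()
          for rid in np.unique(region_ids)}
assert totals == {0: 3.0, 1: 7.0}
```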
```
shapefile = os.path.join(sampledir,'City_BoundariesWGS84','City_Boundaries.shp')
popdict = expmodel.getPopulationGrid().getGeoDict()
shapes = []
f = fiona.open(shapefile,'r')
for row in f:
shapes.append(row)
f.close()
fatshapes,totfat = empfat.getLossByShapes(mmidata,popdata,isodata,shapes,popdict)
for shape in fatshapes:
if shape['properties']['fatalities'] > 0:
cname = shape['properties']['CITY_NAME']
deaths = shape['properties']['fatalities']
print('%s: %i fatalities' % (cname,deaths))
```
With all the fatality models at our disposal, it's easy to see what fatalities Northridge exposures would bring in other countries:
```
for name in names:
fmodel = empfat.getModel(name)
exp_pop = expdict['XF'][4:9]
exp_pop[-1] += expdict['XF'][-1] #MMI 10 is folded into MMI 9 for loss modeling
deaths = fmodel.getLosses(exp_pop,np.arange(5,10))
dstr = format(int(deaths),",d")
print('%s fatalities from Northridge exposure: %8s' % (name,dstr))
```
We can also manually override the rates for a model with a custom rates array, which need not be lognormal in form. Here we verify that doubling the default rates doubles the fatalities.
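The doubling check below works because losses are linear in the rates: deaths = Σ rate(MMI) × exposed population per MMI bin. A quick arithmetic sketch with made-up rates and populations:

```python
import numpy as np

rates = np.array([1e-5, 1e-4, 1e-3])      # hypothetical fatality rates per MMI bin
pop = np.array([1e6, 1e5, 1e4])           # hypothetical exposed populations
deaths = (rates * pop).sum()              # 10 + 10 + 10 = 30
doubled = (2 * rates * pop).sum()
assert doubled == 2 * deaths              # doubling the rates doubles the losses
```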
```
#Testing modifying rates and stuffing them back in...
chile = LognormalModel('CL',19.786773,0.259531,0.0)
rates = chile.getLossRates(np.arange(5,10))
modrates = rates * 2 #does this make the event twice as deadly?
#roughly the exposures from 2015-9-16 CL event
expo_pop = np.array([0,0,0,1047000,7314000,1789000,699000,158000,0,0])
mmirange = np.arange(5,10)
chile_deaths = chile.getLosses(expo_pop[4:9],mmirange)
chile_double_deaths = chile.getLosses(expo_pop[4:9],mmirange,rates=modrates)
print('Chile model fatalities: %f' % chile_deaths)
print('Chile model x2 fatalities: %f' % chile_double_deaths)
```
Economic Losses
---------------
All of the above fatality methods apply in a very similar way to economic losses. The only difference when calculating dollars lost rather than fatalities is the use of EconExposure(), a subclass of the Exposure() class.
```
iran = LognormalModel('IR',9.483180,0.100000,7.949160,alpha=15.614500)
california = LognormalModel('XF',9.592240,0.100117,9.753500,alpha=14.433700)
afghanistan = LognormalModel('AF',9.013810,0.100000,4.113200,alpha=15.065400)
china = LognormalModel('CN',7.511120,0.100328,9.340890,alpha=9.794960)
japan = LognormalModel('JP',10.290800,0.100015,10.068600,alpha=13.389900)
```
Again, we can sort the country models by vulnerability.
```
mlist = [iran,california,afghanistan,china,japan]
mlist.sort()
print('Sorted list of country models:')
print('%5s %6s %6s %-6s %14s' % ('Name','Theta','Beta','Area','Dollars'))
for model in mlist:
exp_pop = np.array([1e6,1e6,1e6,1e6,1e6])
mmirange = np.arange(5,10)
deaths = model.getLosses(exp_pop,mmirange)
print('%5s %6.3f %6.3f %6.4f %14.4f' % (model.name,model.theta,model.beta,model.getArea(),deaths))
```
Again, we can plot the loss rates for each of these countries against each other.
```
mmirange = np.arange(5,10)
f = plt.figure(figsize=(8,6))
colors = ['k','b','r','c','m']
for i in range(0,len(mlist)):
rates = mlist[i].getLossRates(mmirange)
plt.semilogy(mmirange,rates,colors[i],lw=2);
names = [m.name for m in mlist]
plt.legend(names,loc='lower right',numpoints=2);
plt.xlabel('MMI');
plt.ylabel('Loss Ratio');
```
Print the properties of the California economic loss model (now includes alpha).
```
print('California:')
print('\tName %s' % california.name)
print('\tTheta %f' % california.theta)
print('\tBeta %f' % california.beta)
print('\tAlpha %f' % california.alpha)
print('\tL2G %f' % california.l2g)
```
We can provide an XML file with economic models defined in it - the format is very similar to the fatality XML file.
```
xmlfile = os.path.join(os.getcwd(),'..','test','data','economy.xml')
empeco = EmpiricalLoss.loadFromXML(xmlfile)
```
We can print out the rates of an economic model as well - recall that these rates include GDP and alpha, an exposure correction factor.
```
chile = empeco.getModel('CL')
chile
```
And again, here is how PAGER would calculate dollar losses:
```
t1 = time.time()
growthfile = os.path.join(os.getcwd(),'..','test','data','WPP2015_POP_F02_POPULATION_GROWTH_RATE.xls')
popgrowth = PopulationGrowth.loadFromUNSpreadsheet(growthfile)
sampledir = os.path.join(os.getcwd(),'..','test','data','eventdata','northridge')
popfile = os.path.join(sampledir,'northridge_gpw.flt')
isofile = os.path.join(sampledir,'northridge_isogrid.bil')
gdpfile = os.path.join(os.getcwd(),'..','test','data','API_NY.GDP.PCAP.CD_DS2_en_excel_v2.xls')
xmlfile = os.path.join(os.getcwd(),'..','test','data','economy.xml')
shakefile = os.path.join(sampledir,'northridge_grid.xml')
expmodel = EconExposure(popfile,2012,isofile,popgrowth,gdpfile,xmlfile)
expdict = expmodel.calcExposure(shakefile)
for key,exp_pop in expdict.items():
print('Economic Exposure for %s' % key)
for i in range(0,len(exp_pop)):
mmi = i+1
print('\tMMI %i: %s' % (mmi,format(int(exp_pop[i]),',d')))
#call fatality model
eco_results = empeco.getLosses(expdict)
for key,value in eco_results.items():
print('\nEconomic losses for %s: $%s' % (key,format(value,",d")))
t2 = time.time()
print('\nTotal elapsed time for loss calculations: %.2f seconds' % (t2-t1))
```
Create an economic loss grid.
```
mmidata = expmodel.getShakeGrid().getLayer('mmi').getData()
popdata = expmodel.getEconPopulationGrid().getData()
isodata = expmodel.getCountryGrid().getData()
ecogrid = empeco.getLossGrid(mmidata,popdata,isodata)
f = plt.figure(figsize=(8,8))
dmin = np.nanmin(ecogrid)
dmax = np.nanmax(ecogrid)
dmean = np.nanmean(ecogrid)
dstd = np.nanstd(ecogrid)
#Here we're zooming in on the affected regions...
plt.imshow(ecogrid[125:250,125:300],vmin=dmin,vmax=dmean+(3*dstd));
ax = plt.gca()
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
plt.colorbar(cax=cax);
```
Divide that economic loss grid by cities in LA County.
```
shapefile = os.path.join(sampledir,'City_BoundariesWGS84','City_Boundaries.shp')
popdict = expmodel.getPopulationGrid().getGeoDict()
shapes = []
f = fiona.open(shapefile,'r')
for row in f:
shapes.append(row)
f.close()
ecoshapes,toteco = empeco.getLossByShapes(mmidata,popdata,isodata,shapes,popdict)
ecoshapes = sorted(ecoshapes,key=lambda shape:shape['properties']['dollars_lost'],reverse=True)
for shape in ecoshapes:
if shape['properties']['dollars_lost'] > 0:
cname = shape['properties']['CITY_NAME']
dollars = shape['properties']['dollars_lost']
print('%s: $%s dollars lost' % (cname,format(dollars,",d")))
```
Compare Northridge losses to losses in other countries using the same economic exposure.
```
for name in names:
fmodel = empeco.getModel(name)
exp_pop = expdict['XF'][4:9]
exp_pop[-1] += expdict['XF'][-1] #MMI 10 is folded into MMI 9 for loss modeling
dollars = fmodel.getLosses(exp_pop,np.arange(5,10))
dstr = format(int(dollars),",d")
print('%s dollars lost from Northridge exposure: $%s' % (name,dstr))
```
##### Copyright 2019 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Adversarial example using FGSM
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/generative/adversarial_fgsm"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/generative/adversarial_fgsm.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/generative/adversarial_fgsm.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/generative/adversarial_fgsm.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial creates an *adversarial example* using the Fast Gradient Signed Method (FGSM) attack as described in [Explaining and Harnessing Adversarial Examples](https://arxiv.org/abs/1412.6572) by Goodfellow *et al*. This was one of the first and most popular attacks to fool a neural network.
## What is an adversarial example?
Adversarial examples are specialised inputs created with the purpose of confusing a neural network, resulting in the misclassification of a given input. To the human eye these notorious inputs are indistinguishable from the originals, but they cause the network to fail to identify the contents of the image. There are several types of such attacks; here, the focus is on the fast gradient sign method attack, a *white box* attack whose goal is to ensure misclassification. A white box attack is one where the attacker has complete access to the model being attacked. One of the most famous examples of an adversarial image, shown below, is taken from the aforementioned paper.

Here, starting with the image of a panda, the attacker adds small perturbations (distortions) to the original image, which results in the model labelling this image as a gibbon, with high confidence. The process of adding these perturbations is explained below.
## Fast gradient sign method
The fast gradient sign method works by using the gradients of the neural network to create an adversarial example. For an input image, the method uses the gradients of the loss with respect to the input image to create a new image that maximises the loss. This new image is called the adversarial image. This can be summarised using the following expression:
$$adv\_x = x + \epsilon*\text{sign}(\nabla_xJ(\theta, x, y))$$
where
* adv_x : Adversarial image.
* x : Original input image.
* y : Original input label.
* $\epsilon$ : Multiplier to ensure the perturbations are small.
* $\theta$ : Model parameters.
* $J$ : Loss.
An intriguing property here is that the gradients are taken with respect to the input image, because the objective is to create an image that maximises the loss. A way to accomplish this is to find how much each pixel in the image contributes to the loss value, and add a perturbation accordingly. This works quickly because it is easy to find how each input pixel contributes to the loss by using the chain rule to obtain the required gradients. Since the model is no longer being trained, the gradient is not taken with respect to the trainable variables (the model parameters), which remain constant. The only goal is to fool an already trained model.
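The idea can be seen with a toy linear "model" (an illustrative assumption, not the tutorial's network): if the loss is J(x) = w·x, then ∇ₓJ = w, and stepping each pixel by ε in the direction of sign(w) is guaranteed to increase the loss:

```python
import numpy as np

w = np.array([0.5, -2.0, 1.0])      # stands in for the loss gradient w.r.t. the image
x = np.array([0.1, 0.2, 0.3])       # the "image"
eps = 0.05

grad = w                            # for J(x) = w . x, the gradient is just w
adv_x = x + eps * np.sign(grad)     # the FGSM step

# The adversarial input has strictly higher loss than the original
assert w @ adv_x > w @ x
```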
So let's try and fool a pretrained model. In this tutorial, the model is [MobileNetV2](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/applications/MobileNetV2) model, pretrained on [ImageNet](http://www.image-net.org/).
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
import tensorflow.compat.v2 as tf
except Exception:
pass
tf.enable_v2_behavior()
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['figure.figsize'] = (8, 8)
mpl.rcParams['axes.grid'] = False
```
Let's load the pretrained MobileNetV2 model and the ImageNet class names.
```
pretrained_model = tf.keras.applications.MobileNetV2(include_top=True,
weights='imagenet')
pretrained_model.trainable = False
# ImageNet labels
decode_predictions = tf.keras.applications.mobilenet_v2.decode_predictions
# Helper function to preprocess the image so that it can be inputted in MobileNetV2
def preprocess(image):
image = tf.cast(image, tf.float32)
image = image/255
image = tf.image.resize(image, (224, 224))
image = image[None, ...]
return image
# Helper function to extract labels from probability vector
def get_imagenet_label(probs):
return decode_predictions(probs, top=1)[0][0]
```
## Original image
Let's use a sample image of a [Labrador Retriever](https://commons.wikimedia.org/wiki/File:YellowLabradorLooking_new.jpg) by Mirko, [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/), via Wikimedia Commons, and create adversarial examples from it. The first step is to preprocess it so that it can be fed as an input to the MobileNetV2 model.
```
image_path = tf.keras.utils.get_file('YellowLabradorLooking_new.jpg', 'https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg')
image_raw = tf.io.read_file(image_path)
image = tf.image.decode_image(image_raw)
image = preprocess(image)
image_probs = pretrained_model.predict(image)
```
Let's have a look at the image.
```
plt.figure()
plt.imshow(image[0])
_, image_class, class_confidence = get_imagenet_label(image_probs)
plt.title('{} : {:.2f}% Confidence'.format(image_class, class_confidence*100))
plt.show()
```
## Create the adversarial image
### Implementing fast gradient sign method
The first step is to create perturbations which will be used to distort the original image resulting in an adversarial image. As mentioned, for this task, the gradients are taken with respect to the image.
```
loss_object = tf.keras.losses.CategoricalCrossentropy()
def create_adversarial_pattern(input_image, input_label):
with tf.GradientTape() as tape:
tape.watch(input_image)
prediction = pretrained_model(input_image)
loss = loss_object(input_label, prediction)
# Get the gradients of the loss w.r.t to the input image.
gradient = tape.gradient(loss, input_image)
# Get the sign of the gradients to create the perturbation
signed_grad = tf.sign(gradient)
return signed_grad
```
The resulting perturbations can also be visualised.
```
perturbations = create_adversarial_pattern(image, image_probs)
plt.imshow(perturbations[0])
```
Let's try this out for different values of epsilon and observe the resultant image. You'll notice that as the value of epsilon is increased, it becomes easier to fool the network, however, this comes as a trade-off which results in the perturbations becoming more identifiable.
```
def display_images(image, description):
_, label, confidence = get_imagenet_label(pretrained_model.predict(image))
plt.figure()
plt.imshow(image[0])
plt.title('{} \n {} : {:.2f}% Confidence'.format(description,
label, confidence*100))
plt.show()
epsilons = [0, 0.01, 0.1, 0.15]
descriptions = [('Epsilon = {:0.3f}'.format(eps) if eps else 'Input')
for eps in epsilons]
for i, eps in enumerate(epsilons):
adv_x = image + eps*perturbations
adv_x = tf.clip_by_value(adv_x, 0, 1)
display_images(adv_x, descriptions[i])
```
## Next steps
Now that you know about adversarial attacks, try this out on different datasets and different architectures. You may also create and train your own model, and then attempt to fool it using the same method. You can also see how the confidence in predictions varies as you change epsilon.
Though powerful, the attack shown in this tutorial was just the start of research into adversarial attacks, and there have been multiple papers creating more powerful attacks since then. In addition to adversarial attacks, research has also led to the creation of defenses, which aim at creating robust machine learning models. You may review this [survey paper](https://arxiv.org/abs/1810.00069) for a comprehensive list of adversarial attacks and defenses.
For many more implementations of adversarial attacks and defenses, you may want to see the adversarial example library [CleverHans](https://github.com/tensorflow/cleverhans).
# Self-Driving Car Engineer Nanodegree
## Deep Learning
## Project: Build a Traffic Sign Recognition Classifier
In this notebook, a template is provided for implementing your functionality in stages, as required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure the Python code is successfully imported and included in your submission.
> **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to HTML, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) that can be used to guide the writing process. Completing the code template and writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/481/view) for this project.
The [rubric](https://review.udacity.com/#!/rubrics/481/view) contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.
>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited, typically by double-clicking the cell to enter edit mode.
---
## Step 0: Load The Data
```
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
training_file = '../data/train.p'
validation_file='../data/valid.p'
testing_file = '../data/test.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
print("Data Loaded from pickle files!")
```
---
## Step 1: Dataset Summary & Exploration
The pickled data is a dictionary with 4 key/value pairs:
- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
- `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.
- `'sizes'` is a list containing tuples, (width, height), representing the original width and height of the image.
- `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**
Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results.
### Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
```
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
import numpy as np
# Number of training examples
n_train = np.shape(X_train)[0]
# Number of validation examples
n_validation = np.shape(X_valid)[0]
# Number of testing examples.
n_test = np.shape(X_test)[0]
# What's the shape of a traffic sign image?
image_shape = np.shape(X_train[0])
# How many unique classes/labels there are in the dataset.
n_classes = len(np.unique(y_train))
print("Number of training examples =", n_train)
print("Number of validation examples =", n_validation)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
```
### Include an exploratory visualization of the dataset
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.
The [Matplotlib](http://matplotlib.org/) [examples](http://matplotlib.org/examples/index.html) and [gallery](http://matplotlib.org/gallery.html) pages are a great resource for doing visualizations in Python.
**NOTE:** It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?
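For counting examples per class, `np.unique` with `return_counts=True` gives the same information as a per-class loop in one call:

```python
import numpy as np

labels = np.array([0, 1, 1, 2, 2, 2])
classes, counts = np.unique(labels, return_counts=True)
# classes → [0 1 2], counts → [1 2 3]
assert list(classes) == [0, 1, 2]
assert list(counts) == [1, 2, 3]
```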
```
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline
#uni, index, count = np.unique(y_train, return_index=True, return_counts=True)
class_arr= []
samples_arr=[]
for class_n in range(n_classes):
class_indices = np.where(y_train == class_n)
n_samples = len(class_indices[0])
class_arr.append(class_n)
samples_arr.append(n_samples)
#plt.hist(y_train,bins=43)
plt.bar( class_arr, samples_arr,align='center', alpha=0.5)
plt.ylabel('Number of Samples')
plt.xlabel('Class')
plt.title('Training Set Class Distribution')
plt.show()
```
----
## Step 2: Design and Test a Model Architecture
Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).
The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!
With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.
There are various aspects to consider when thinking about this problem:
- Neural network architecture (is the network over or underfitting?)
- Play around with preprocessing techniques (normalization, RGB to grayscale, etc.)
- Number of examples per label (some have more than others).
- Generate fake data.
Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.
### Pre-process the Data Set (normalization, grayscale, etc.)
Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, `(pixel - 128)/ 128` is a quick way to approximately normalize the data and can be used in this project.
Other pre-processing steps are optional. You can try different techniques to see if it improves performance.
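As a quick sanity check, the suggested formula can be verified on boundary pixel values (a minimal pure-Python sketch):

```python
def normalize_pixel(p):
    # (pixel - 128) / 128 maps the range 0..255 into roughly [-1, 1)
    return (p - 128) / 128

print(normalize_pixel(0), normalize_pixel(128), normalize_pixel(255))
# → -1.0 0.0 0.9921875
```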
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
```
#Shuffle Data
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
#importing tensorflow
import tensorflow as tf
# setting the hyperparameters: number of epochs and batch size
EPOCHS = 80
BATCH_SIZE = 128
```
### Model Architecture
```
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
    # Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x6.
conv1_W= tf.Variable(tf.truncated_normal(shape=(5,5,3,6), mean=mu, stddev=sigma))
conv1_b= tf.Variable(tf.zeros(6))
conv1= tf.nn.conv2d(x,conv1_W,strides=[1,1,1,1],padding='VALID',use_cudnn_on_gpu=True) + conv1_b
# Activation.
conv1= tf.nn.relu(conv1)
    # Layer 2: Convolutional. Input = 28x28x6. Output = 12x12x10 (5x5 kernel, stride 2, VALID padding).
conv3_W= tf.Variable(tf.truncated_normal(shape=(5,5,6,10), mean=mu, stddev=sigma))
conv3_b= tf.Variable(tf.zeros(10))
conv3= tf.nn.conv2d(conv1,conv3_W,strides=[1,2,2,1],padding='VALID',use_cudnn_on_gpu=True) + conv3_b
# Activation.
conv3= tf.nn.relu(conv3)
    # Layer 3: Convolutional. Input = 12x12x10. Output = 8x8x16.
conv2_W= tf.Variable(tf.truncated_normal(shape=(5,5,10,16),mean=mu,stddev=sigma))
conv2_b=tf.Variable(tf.zeros(16))
conv2= tf.nn.conv2d(conv3,conv2_W,strides=[1,1,1,1],padding='VALID',use_cudnn_on_gpu=True) + conv2_b
# Activation.
conv2= tf.nn.relu(conv2)
# Pooling. Input = 8x8x16. Output = 4x4x16.
conv2= tf.nn.max_pool(conv2,ksize=[1,2,2,1],strides=[1,2,2,1],padding='VALID')
# Flatten. Input = 4x4x16. Output = 256.
f= flatten(conv2)
# Layer 4: Fully Connected. Input = 256. Output = 120.
fc1_W= tf.Variable(tf.truncated_normal(shape=(int(np.shape(f)[1]),120),mean=mu,stddev=sigma))
fc1_b= tf.Variable(tf.zeros(shape=120))
fc1= tf.matmul(f,fc1_W) + fc1_b
# Activation.
fc1= tf.nn.relu(fc1)
# Introduce Dropout after first fully connected layer
fc1 = tf.nn.dropout(fc1, keep_prob)
# Layer 5: Fully Connected. Input = 120. Output = 100.
fc2_W= tf.Variable(tf.truncated_normal(shape=(120,100),mean=mu,stddev=sigma))
fc2_b= tf.Variable(tf.zeros(100))
fc2= tf.matmul(fc1,fc2_W) + fc2_b
# Activation.
fc2= tf.nn.relu(fc2)
# Layer 6: Fully Connected. Input = 100. Output = 84.
fc4_W= tf.Variable(tf.truncated_normal(shape=(100,84),mean=mu,stddev=sigma))
fc4_b= tf.Variable(tf.zeros(84))
fc4= tf.matmul(fc2,fc4_W) + fc4_b
# Activation.
fc4= tf.nn.relu(fc4)
# Layer 7: Fully Connected. Input = 84. Output = 43.
fc3_W= tf.Variable(tf.truncated_normal(shape=(84,43),mean=mu,stddev=sigma))
fc3_b= tf.Variable(tf.zeros(43))
fc3= tf.matmul(fc4,fc3_W) + fc3_b
logits=fc3
return logits
x = tf.placeholder(tf.float32, (None, 32, 32, 3))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 43) # one hot encoding for output labels
keep_prob = tf.placeholder(tf.float32) # defining the dropout probability after fully connected layer in the architecture
```
### Train, Validate and Test the Model
A validation set can be used to assess how well the model is performing. Low accuracy on both the training and validation
sets implies underfitting. High accuracy on the training set but low accuracy on the validation set implies overfitting.
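That rule of thumb can be encoded as a small helper (the thresholds below are illustrative assumptions, not part of the project rubric):

```python
def diagnose(train_acc, val_acc, min_acc=0.93, max_gap=0.05):
    # Both accuracies low -> the model has not learned enough (underfitting)
    if train_acc < min_acc and val_acc < min_acc:
        return "underfitting"
    # Training accuracy much higher than validation accuracy -> overfitting
    if train_acc - val_acc > max_gap:
        return "overfitting"
    return "ok"

print(diagnose(0.85, 0.84))  # → underfitting
print(diagnose(0.99, 0.89))  # → overfitting
print(diagnose(0.98, 0.95))  # → ok
```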
```
rate = 0.0009 #learning rate
#defining various operations
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
total_loss=0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy,loss = sess.run([accuracy_operation,loss_operation],feed_dict={x: batch_x, y: batch_y,keep_prob:1})
total_accuracy += (accuracy * len(batch_x))
total_loss+= (loss*len(batch_x)) # getting the total loss to plot a graph later
return total_accuracy / num_examples, total_loss/num_examples
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
loss_Acc=[]
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y,keep_prob:0.5})
validation_accuracy,loss_acc = evaluate(X_valid, y_valid)
print("EPOCH {} ...".format(i+1))
loss_Acc.append(loss_acc)
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
plt.plot(range(0,EPOCHS),loss_Acc)
plt.ylabel('loss')
plt.xlabel('Epochs')
plt.grid(True)
plt.show()
saver.save(sess, './trafficTest')
print("Model saved")
# Check Test Accuracy
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy[0]))
```
---
## Step 3: Test a Model on New Images
To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.
You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name.
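A small sketch of such a lookup is shown below; it assumes the file has `ClassId` and `SignName` columns (the column names are an assumption about the CSV header), and is demonstrated on an in-memory sample rather than the real file:

```python
import csv
import io

def load_sign_names(f):
    # Build a {class id -> sign name} lookup from a signnames-style CSV
    reader = csv.DictReader(f)
    return {int(row["ClassId"]): row["SignName"] for row in reader}

# Demonstrated on an in-memory sample instead of the real file
sample = io.StringIO("ClassId,SignName\n0,Speed limit (20km/h)\n1,Speed limit (30km/h)\n")
names = load_sign_names(sample)
print(names[1])  # → Speed limit (30km/h)
```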
### Load and Output the Images
```
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import os
import matplotlib.image as mpimg
import cv2
my_images = []
for i, img in enumerate(os.listdir('downloaded images')):
image = cv2.imread('downloaded images/' + img)
my_images.append(image)
print(image.shape)
plt.figure()
plt.xlabel(img)
plt.imshow(image)
my_images = np.asarray(my_images)
```
### Discussion
- the images are already resized to 32*32 in height and width for the model.
- In most cases the size of these images would not be a problem to detect since datasets like these are easy one's and often used as a starter projects for deep learning.
- but in some cases like the above shown bicycle and pedestrian sign, the pixels get aggregated and this may pose as problems.
- I find using color images would be more effective
#### The submission documents the performance of the model when tested on the captured images (here 5 images).
* I wantedly took the images which are harder to detect since this will show how the size of the image probably leads to wrong activation.
* both the images which were wrongly detected in this case are discussed above in discussion. the probable reason is that pixels getting aggregated and which leads to wrong activation. if you look at the top 5 for this pedestrian image, it gives (11, 25, 27, 24, 23) as top indices , in which top1 is Right-of-way at the next intersection(index 11). if you were to compare and see these 2 images , the Right-of-way at the next intersection sign (please google it for a quick look) looks like a man which confuses the classifier.
* similar is the case for bicycle which usually has two children in it running looks similar to bicycle sign when it's aggregated.
### Predict the Sign Type for Each Image and Analyze Performance
```
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
my_labels = [31,27,35, 17, 29]
# Check Test Accuracy
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
output_accuracy = evaluate(my_images, my_labels)
print("Test Accuracy = {:.3f}".format(output_accuracy[0]))
```
### Output Top 5 Softmax Probabilities For Each Image Found on the Web
For each of the new images, print out the model's softmax probabilities to show the **certainty** of the model's predictions (limit the output to the top 5 probabilities for each image). [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.html#top_k) could prove helpful here.
The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.
`tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.
Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. `tf.nn.top_k` is used to choose the three classes with the highest probability:
```
# (5, 6) array
a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497,
0.12789202],
[ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401,
0.15899337],
[ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 ,
0.23892179],
[ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 ,
0.16505091],
[ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137,
0.09155967]])
```
Running it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces:
```
TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202],
[ 0.28086119, 0.27569815, 0.18063401],
[ 0.26076848, 0.23892179, 0.23664738],
[ 0.29198961, 0.26234032, 0.16505091],
[ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5],
[0, 1, 4],
[0, 5, 1],
[1, 3, 5],
[1, 4, 3]], dtype=int32))
```
Looking just at the first row we get `[ 0.34763842, 0.24879643, 0.12789202]`, you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices.
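The same top-3 lookup for that first row can be reproduced in plain Python as a sanity check:

```python
row = [0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202]
# Sort indices by value, largest first, and keep the first three
top3 = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:3]
print(top3, [row[i] for i in top3])
# → [3, 0, 5] [0.34763842, 0.24879643, 0.12789202]
```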
```
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
softmax_logits = tf.nn.softmax(logits)
top_k = tf.nn.top_k(softmax_logits, k=5)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
saver.restore(sess, tf.train.latest_checkpoint('.'))
my_softmax_logits = sess.run(softmax_logits, feed_dict={x: my_images, keep_prob: 1.0})
my_top_k = sess.run(top_k, feed_dict={x: my_images, keep_prob: 1.0})
print(my_softmax_logits)
print(my_top_k)
```
### Project Writeup
Once you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file.
> **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
---
## Step 4 (Optional): Visualize the Neural Network's State with Test Images
This section is not required, but it acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can better understand what the weights of a neural network look like by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.
Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimulus image (one used during training or a new one you provide) and the tensorflow variable that represents the layer's state during the training process. For instance, if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for its second convolutional layer, you could pass conv2 as the tf_activation variable.
For an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.
<figure>
<img src="visualize_cnn.png" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above)</p>
</figcaption>
</figure>
<p></p>
```
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail; by default matplotlib sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
    # with size, normalization, etc. if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
        if activation_min != -1 and activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
```
# Tutor Feedback
Very nice in-depth analysis and generally well-structured notebook. The last part of Ex2 contains a coding error, but your solution is certainly sufficient to get the point.
Ex1: 1/1
Ex2: 1/1
Ex3: 1/1
```
import numpy as np
import matplotlib.pylab as plt
import numpy.random as rd
from sklearn.datasets import fetch_openml # MNIST data
from sklearn.model_selection import train_test_split
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_circles
from sklearn.datasets import make_moons
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
#os.environ["CUDA_VISIBLE_DEVICES"] = '0'
#sess = tf.Session(config=tf.ConfigProto(device_count={'GPU': 0}))
from keras.models import Sequential
from keras.models import Model
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import UpSampling2D
from keras.layers import Input
from keras.layers import concatenate
from keras.layers import Cropping2D
from PIL import Image
import matplotlib.image as mpimg
```
## A1 - Clustering
### A1 - a - Performance of different linkages on samples
Linkage information:
- ward minimizes the variance of the clusters being merged.
- average uses the average of the distances of each observation of the two sets.
- complete or maximum linkage uses the maximum distances between all observations of the two sets.
- single uses the minimum of the distances between all observations of the two sets.
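The four criteria can be illustrated on two tiny 1-D clusters (a minimal sketch; ward is omitted because it is variance-based rather than a simple pairwise distance):

```python
# Two 1-D clusters and all pairwise distances between them
A = [0.0, 1.0]
B = [4.0, 10.0]
dists = [abs(a - b) for a in A for b in B]  # [4.0, 10.0, 3.0, 9.0]

single = min(dists)                 # single linkage     → 3.0
complete = max(dists)               # complete linkage   → 10.0
average = sum(dists) / len(dists)   # average linkage    → 6.5
print(single, complete, average)
```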
```
n_samples = 1000
n_clusters = 2
noise = 0.1
samples = [make_circles(n_samples=n_samples, noise=noise, factor = 0.5), make_moons(n_samples=n_samples, noise=noise), make_blobs(n_samples=n_samples, cluster_std=noise*20, centers=n_clusters)]
sample_labels = ["circles", "moons", "blobs"]
linkages = ["ward", "complete", "average", "single"]
for linkage in linkages:
for sample in samples:
data = sample[0]
# apply clustering algorithm
clustering = AgglomerativeClustering(affinity='euclidean', compute_full_tree='auto',
connectivity=None,
linkage=linkage, memory=None, n_clusters=n_clusters,
pooling_func='deprecated')
clustering.fit(data)
labels = clustering.labels_
# compute score (malfunctions)
#y_predicted = clustering.fit_predict(data)
#y = [el[1] for el in data]
#score = np.floor(100 - np.linalg.norm(y-y_predicted))
# plot
plt.figure()
class1 = [];
class2 = [];
# allocate to classes
for i in range(labels.size):
if labels[i] == 0:
class1.append(data[i])
elif labels[i] == 1:
class2.append(data[i])
plt.title(linkage) #+ " / score: " + str(score))
plt.scatter([x[0] for x in class1], [x[1] for x in class1])
plt.scatter([x[0] for x in class2], [x[1] for x in class2])
#print(str(score))
```
Comments:
- The ward linkage works reasonably well for the blobs and okay for the moons
- The complete linkage shows bad performance for all datasets
- The average linkage also performs badly, and depends on initial conditions for the blobs
- The single linkage tends to create a cluster comprising a single outlier; however, it sometimes works well for the moons and seems to depend strongly on the initial point chosen
### A1 - b - Compare to performance of t-SNE and k-means
```
classifiers = [TSNE(n_components=n_clusters), KMeans(n_clusters=n_clusters)]
clf_labels = ["t-SNE", "KMeans"]
j = 0
for clf in classifiers:
s = 0
for sample in samples:
# separate into data and classes
X = sample[0]
y = sample[1]
# get classes
red = y == 0
green = y == 1
# fit model and obtain transform
clf.fit(X)
Y = clf.fit_transform(X)
# plot
plt.figure()
plt.title(sample_labels[s] + " using " + clf_labels[j] )
plt.scatter(Y[red,0], Y[red, 1], c="r")
plt.scatter(Y[green,0], Y[green, 1], c="g")
s +=1
j +=1
```
Comments:
- t-SNE works well for the moons and the circles, but fails with the blobs
- KMeans works well for the circles but fails for moons and blobs
## A2 - AutoEncoder
### A2 - a - Generate 2D images showing polynomials
```
''' WRONG INTERPRETATION OF THE TASK
CORRECT INTERPRETATION BELOW
def generate_poly(degree, x, polycoeff_range):
polycoeffs = rd.uniform(coeff_range[0], coeff_range[1], degree+1)
y = np.zeros(x.size)
degree = 0
for p in polycoeffs:
y += p * (x**degree)
degree += 1
return y
# define initial parameters
coeff_range=[-1,1]
x = np.linspace(-20,20, 41)
# generate polynomials and corresponding pixel images
for degree in range(3):
y = generate_poly(degree, x, coeff_range)
# plot image
plt.figure()
ax = plt.gca()
plt.xlim([x[0], x[-1]])
plt.ylim([x[0], x[-1]])
ax.axis('off')
plt.plot(x, y)
plt.savefig("fig.png")
plt.show()
# pixelate image
basewidth = 40
img=Image.open("fig.png")
wpercent = (basewidth/float(img.size[0]))
hsize = int((float(img.size[1])*float(wpercent)))
img = img.resize((basewidth,hsize), Image.ANTIALIAS)
ax.axis('off')
img.save("fig.png")
img=mpimg.imread("fig.png")
ax.axis('off')
imgplot=plt.imshow(img)
plt.show()
def generate_pixelated_image(xdata, ydata):
# generate pixel image
length = xdata.size
image = np.zeros(shape=(length, length))
# treat every column
for col in range(length):
# check whether yval belongs to a certain pixel
for row in range(length):
# access yval for that column
yval = ydata[row]
# check which pixel it corresponds to
if yval >= xdata[row] and yval < xdata[row]+1:
image[col][row] = 1
return image
'''
# define points
x=np.linspace(-1,1,40)
xx,yy=np.meshgrid(x,x)
# define polynomial generation
def generate_poly(deg, xx=xx, yy=yy):
x_coeffs=np.random.uniform(-5.,5., deg+1)
y_coeffs=np.random.uniform(-5.,5., deg+1)
img=np.zeros((40,40))
for d in range(deg+1):
img+=x_coeffs[d]*xx**d + y_coeffs[d]*yy**d
return img
# sample images
samples=[]
n_samples = 5000
for i in range(n_samples):
    deg = np.random.randint(1, 3)  # randomly choose polynomial degree 1 or 2
samples.append(generate_poly(deg))
samples=np.array(samples)
plt.imshow(samples[0])
plt.show()
```
### A2 - b - Build two autoencoder architectures
```
# build autoencoders
# input parameter
input_size = 1600
hidden_size1 = 400
hidden_size2 = 200
hidden_size3 = 40
code_size = 10
# single hidden dense layer
input_img_s = Input(shape=(input_size,))
code_s = Dense(code_size, activation='relu')(input_img_s)
output_img_s = Dense(input_size, activation='sigmoid')(code_s)
autenc_small = Model(input_img_s, output_img_s)
# several hidden dense layers
input_img_l = Input(shape=(input_size,))
hidden_1 = Dense(hidden_size1, activation='relu')(input_img_l)
hidden_2 = Dense(hidden_size2, activation='relu')(hidden_1)
hidden_3 = Dense(hidden_size3, activation='relu')(hidden_2)
code_l = Dense(code_size, activation='relu')(hidden_3)
hidden_3r = Dense(hidden_size3, activation='relu')(code_l)
hidden_2r = Dense(hidden_size2, activation='relu')(hidden_3r)
hidden_1r = Dense(hidden_size1, activation='relu')(hidden_2r)
output_img_l = Dense(input_size, activation='sigmoid')(hidden_1r)
autenc_large = Model(input_img_l, output_img_l)
samples=samples.reshape((len(samples), np.prod(samples.shape[1:])))
# compile and fit
autencs = [autenc_small, autenc_large]
for autenc in autencs:
autenc.summary()
autenc.compile(optimizer='adam', loss='binary_crossentropy')
autenc.fit(samples, samples, epochs=20, batch_size=256, shuffle=True)
# output and input should be same!
```
Result: surprisingly, the loss is about the same for both architectures. This holds for various sizes of the coding layer.
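One plausible explanation (my own reasoning, not part of the exercise) is that both architectures share the same 10-unit bottleneck, which limits how much information either decoder can reconstruct regardless of depth. A quick parameter count shows the deep model is far larger, yet the code size is identical:

```python
def dense_params(n_in, n_out):
    # A Dense layer has n_in * n_out weights plus n_out biases
    return n_in * n_out + n_out

# Single-hidden-layer autoencoder: 1600 -> 10 -> 1600
small = dense_params(1600, 10) + dense_params(10, 1600)

# Deep autoencoder: 1600 -> 400 -> 200 -> 40 -> 10 -> 40 -> 200 -> 400 -> 1600
sizes = [1600, 400, 200, 40, 10, 40, 200, 400, 1600]
large = sum(dense_params(i, o) for i, o in zip(sizes, sizes[1:]))

print(small, large)  # → 33610 1459690
```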
### A2 - c - Visualize the results of latent dimensions
```
from keras import backend as K  # needed for K.function and K.learning_phase

def layer_to_visualize(layer):
    inputs = [K.learning_phase()] + model.inputs
    _convout1_f = K.function(inputs, [layer.output])
def convout1_f(X):
# The [0] is to disable the training phase flag
return _convout1_f([0] + [X])
convolutions = convout1_f(img_to_visualize)
convolutions = np.squeeze(convolutions)
print ('Shape of conv:', convolutions.shape)
n = convolutions.shape[0]
n = int(np.ceil(np.sqrt(n)))
# Visualization of each filter of the layer
fig = plt.figure(figsize=(12,8))
for i in range(len(convolutions)):
ax = fig.add_subplot(n,n,i+1)
ax.imshow(convolutions[i], cmap='gray')
# Specify the layer you want to visualize
model = autenc_small
img_to_visualize = samples[:1]
layer_to_visualize(code_s)
model = autenc_large
layer_to_visualize(code_l)
```
## A3 - KL - divergence
Let $X = \{x_1, x_2\}$.
Let $P(x_1) = 0.2$ and $P(x_2) = 0.8$ as well as
$Q(x_1) = 0.5$ and $Q(x_2) = 0.5$. Then,
$D_{KL}(P||Q) = \sum_{x\in X} P(x) \log(\frac{P(x)}{Q(x)}) = 0.2 \cdot \log(0.2/0.5) + 0.8 \cdot \log(0.8/0.5) \approx 0.193 $ and
$D_{KL}(Q||P) = \sum_{x\in X} Q(x) \log(\frac{Q(x)}{P(x)}) = 0.5 \cdot \log(0.5/0.2) + 0.5 \cdot \log(0.5/0.8) \approx 0.223 $.
Hence $D_{KL}(P||Q) \neq D_{KL}(Q||P)$ in general.
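The numbers above can be checked with a few lines of Python (using the natural logarithm, as in the calculation):

```python
import math

def kl(p, q):
    # Discrete KL divergence D_KL(p || q) with natural log
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

P = [0.2, 0.8]
Q = [0.5, 0.5]
print(round(kl(P, Q), 3), round(kl(Q, P), 3))  # → 0.193 0.223
```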
Exercise 2 - Simple Linear Regression
===
In Exercise 1, we used R within Jupyter Notebooks to load information about chocolate bars, and stored it in a variable named `choc_data`. We checked the structure of `choc_data`, and explored some of the variables we have about chocolate bars using graphs.
In this exercise, we want to know how to make our chocolate-bar customers happier. To do this, we need to know whether chocolate bar _features_ can predict customer happiness. For example, customers may be happier when chocolate bars are bigger, or when they contain more cocoa.
We have data on customer happiness when eating chocolate bars with different features. Let's explore the relationship between customer happiness and the different features we have available.
Step 1
---
First, we need to load the required libraries and data we will use in this exercise.
Below, we'll also use the functions `str`, `head`, and `tail` to inspect the structure of `choc_data`.
** In the cell below replace: **
** 1. `<structureFunction>` with `str` **
** 2. `<headFunction>` with `head` **
** 3. `<tailFunction>` with `tail` **
** then __run the code__. **
```
# Load `ggplot2` library for graphing capabilities
library(ggplot2)
# Load the chocolate data and save it to the variable name `choc_data`
choc_data <- read.delim("Data/chocolate data.txt")
###
# REPLACE <structureFunction> <headFunction> <tailFunction> WITH str, head, and tail
###
# Check the structure of `choc_data` using `str(choc_data)`
<structureFunction>(choc_data)
# Inspect the start of the data by typing `head(choc_data)`
<headFunction>(choc_data)
# Inspect the end of the data by typing `tail(choc_data)`
<tailFunction>(choc_data)
```
Our object `choc_data` contains 100 different chocolate bar observations for 5 variables: weight, cocoa percent, sugar percent, milk percent, and customer happiness.
Step 2
---
We want to know which chocolate bar features make customers happy.
The example below shows a linear regression between __cocoa percentage__ and __customer happiness__.
** Run the code below to visualise this. You do not need to edit the code block below, just run it. **
```
# Run this box
# DO NOT EDIT THIS CODE
# Create our own function to generate a linear regression model then graph the result
lin_reg_choc <- function(x, y, my_data){
x_arg <- my_data[ , substitute(x)]
y_arg <- my_data[ , substitute(y)]
# Perform linear regression using `lm` (stands for linear models) function
lm_choc <- lm(formula = y_arg ~ x_arg, data = my_data)
# Create scatter plot of choc_data together with linear model
ggplot(data = my_data, aes_string(x = x, y = y)) +
geom_point() +
# Add line based on linear model
geom_abline(intercept = lm_choc$coefficients[1],
slope = lm_choc$coefficients[2],
colour = "red") +
# x-axis label remains constant
xlab("Customer happiness") +
    # y-axis label; use the `gsub` function to remove underscores from the variable name
ylab(gsub("_", " ", y)) +
# graph title
ggtitle(paste("Customer satisfaction with chocolate bars given", gsub("_", " ", y))) +
theme(plot.title = element_text(hjust = 0.5))
}
# This performs the linear regression steps listed above
lin_reg_choc(x = "customer_happiness", y = "cocoa_percent", my_data = choc_data)
```
In the scatter plot above, each point represents an observation for a single chocolate bar.
It seems that __a higher percentage of cocoa increases customer happiness__. We think this because as we increase the amount of cocoa (y-axis), the amount of customer happiness (x-axis) increases, as shown by our linear model (red line).
Step 3
---
** In the cell below: **
** 1. replace the text `<addFeatureHere>` with __`weight`__ to see if heavier chocolate bars make people happier. **
** 2. Also try the variables `sugar_percent` and `milk_percent` to see if these improve customers' experiences. **
** Remember to run each box when you are ready.**
```
###
# CHANGE <addFeatureHere> TO "weight" IN THE LINE BELOW (INCLUDING THE QUOTATION MARKS)
###
lin_reg_choc(x = "customer_happiness", y = <addFeatureHere>, my_data = choc_data)
###
###
# CHANGE <addFeatureHere> TO "sugar_percent" IN THE LINE BELOW (INCLUDING THE QUOTATION MARKS)
###
lin_reg_choc(x = "customer_happiness", y = <addFeatureHere>, my_data = choc_data)
###
###
# CHANGE <addFeatureHere> TO "milk_percent" IN THE LINE BELOW (INCLUDING THE QUOTATION MARKS)
###
lin_reg_choc(x = "customer_happiness", y = <addFeatureHere>, my_data = choc_data)
###
```
It looks like heavier chocolate bars make customers happier, whereas larger amounts of sugar or milk don't seem to make customers happier.
We can draw this conclusion based on the slope of our linear regression models (red line):
* Our linear regression model for "weight vs. customer happiness" reveals that as chocolate bar weight increases, customer happiness also increases;
* Our linear regression models for "sugar percent vs. customer happiness" and "milk percent vs. customer happiness" reveal that as the percentage of sugar or milk increases, customer happiness decreases.
> *N.B. It is possible to perform linear regression directly with `ggplot2` using the following function and arguments: `stat_smooth(method = "lm")`. However, we want to show you how to create linear models without the dependency of `ggplot2`.*
Conclusion
---
Well done! You have run a simple linear regression that revealed chocolate bars heavier in weight and with higher percentages of cocoa make customers happy.
You can now go back to the course and click __'Next Step'__ to move onto using linear regression with multiple features.
##### Copyright 2019 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
```
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Introduction to TensorFlow Part 3 - Advanced Tensor Manipulation
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/google/tf-quant-finance/blob/master/tf_quant_finance/examples/jupyter_notebooks/Introduction_to_TensorFlow.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/google/tf-quant-finance/blob/master/tf_quant_finance/examples/jupyter_notebooks/Introduction_to_TensorFlow.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
```
#@title Upgrade to TensorFlow 2.5+
!pip install --upgrade tensorflow
#@title Install and import Libraries for this colab. RUN ME FIRST!
import matplotlib.pyplot as plt
import tensorflow as tf
```
# What this notebook covers
This notebook carries on from [part 2](https://colab.research.google.com/github/google/tf-quant-finance/blob/master/tf_quant_finance/examples/jupyter_notebooks/Introduction_to_TensorFlow_Part_2_-_Debugging_and_Control_Flow.ipynb), and covers various advanced ways of manipulating tensors, including
* Gather
* Updating tensor entries
* Sparse Tensors
* Various functional ops:
* tf.foldl
* tf.foldr
* tf.map_fn
* tf.vectorized_map
* XLA compilation
# Scatter / Gather
## tf.gather_nd
[Full documentation](https://www.tensorflow.org/api_docs/python/tf/gather_nd)
This operation allows you to take a multi-dimensional tensor and extract a list of subsets of data from it, according to a list of indices.
```
source = tf.constant([[[111,112,113], [121,122,123], [131,132,133]],
[[211,212,213], [221,222,223], [231,232,233]]])
# if we specify all values for all of source's dimensions, then we get a
# single value
indices = [[1,1,1]]
print("Looking up %s gives us\n%s" %(
indices, tf.gather_nd(source, indices)))
# we can look up multiple sets of indices
indices = [[1,1,1], [0,0,0], [0,0,1]]
print("\nLooking up %s gives us\n%s" %(
indices, tf.gather_nd(source, indices)))
# if we don't specify values for all of source's dimensions, then we get
# results of larger shape
indices = [[0,0]]
print("\nLooking up %s gives us\n%s" %(
indices, tf.gather_nd(source, indices)))
indices = [[1]]
print("\nLooking up %s gives us\n%s" %(
indices, tf.gather_nd(source, indices)))
```
The indices can easily be generated with tf.where:
```
source = tf.constant([[[111,112,113], [121,122,123], [131,132,133]],
[[211,212,213], [221,222,223], [231,232,233]]])
values_divisible_by_three = tf.gather_nd(
source, tf.where(tf.equal(0, source % 3)))
print(values_divisible_by_three)
```
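For comparison, the simpler `tf.gather` selects whole slices along a single axis rather than addressing individual elements the way `tf.gather_nd` does. A minimal sketch (values chosen for illustration):

```python
import tensorflow as tf

source = tf.constant([[1, 2, 3],
                      [4, 5, 6],
                      [7, 8, 9]])

# Select whole rows (axis 0 is the default)
rows = tf.gather(source, [2, 0])
# Select a whole column by passing axis=1
cols = tf.gather(source, [1], axis=1)
print(rows)  # [[7, 8, 9], [1, 2, 3]]
print(cols)  # [[2], [5], [8]]
```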
## Updating elements of a Tensor
Tensors are immutable objects, yet there is often a need to update certain values of a tensor. To achieve this, one can use `tf.tensor_scatter_nd_update`, which creates a copy of the input tensor with updated values at the specified indices.
For convenience, a number of similar methods are available, such as `tf.tensor_scatter_nd_add`/`sub`/`min`/`max`.
```
x = tf.constant([[[1, 2, 3], [4, 5, 6], [7, 8, 9]],
[[11, 12, 13], [14, 15, 16], [17, 18, 19]]])
# Original Tensor
print("Original Tensor:\n%s"%x)
print("Updating a single value:\n%s"%
tf.tensor_scatter_nd_update(
x,
indices = [[0, 1, 2]],
updates = [-1]))
print("\nUpdating multiple values:\n%s"%
tf.tensor_scatter_nd_update(
x,
indices = [[0, 0, 0], [0, 1, 1], [0, 2, 2]],
updates = [-1, -2, -3]))
# You can reduce the dimensions of indices and increase the dimensions of
# updates
print("\nScattering entire rows:\n%s"%
tf.tensor_scatter_nd_update(
x,
indices = [[0,0], [0,1]],
updates = [[-1, -2, -3], [-4, -5,- 6]]))
print("\nUpdating the entire matrix:\n%s"%
tf.tensor_scatter_nd_update(
x,
indices = [[0]],
updates = [[[-1, -2, -3], [-4, -5, -6], [-7, -8, -9]]]))
# Note that if `indices` contains duplicate or overlapping values, the
# clashing updates are applied in an indeterminate order, so only one of
# them "wins" and the output may be non-deterministic.
print("\nUpdating single value multiple times:\n%s"%
tf.tensor_scatter_nd_update(
x,
indices = [[0,0,0], [0,0,0], [0,0,0]],
updates = [-1, -2, -3]))
```
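The `add`/`min`/`max` variants mentioned above combine each update with the existing element instead of overwriting it. A small sketch (values chosen for illustration):

```python
import tensorflow as tf

x = tf.constant([[1, 2, 3],
                 [4, 5, 6]])
indices = [[0, 0], [1, 2]]

# _add accumulates the updates into the existing entries...
added = tf.tensor_scatter_nd_add(x, indices, [10, 10])
# ...while _min keeps the element-wise minimum of old and new values.
mins = tf.tensor_scatter_nd_min(x, indices, [0, 100])
print(added)  # [[11, 2, 3], [4, 5, 16]]
print(mins)   # [[0, 2, 3], [4, 5, 6]]
```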
## tf.scatter_nd
[Full documentation](https://www.tensorflow.org/api_docs/python/tf/scatter_nd)
`scatter_nd` is similar to `tf.tensor_scatter_nd_update`. It creates a zero-initialised tensor of a given shape, and then writes a series of specified values at specified positions in that tensor.
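A minimal sketch (indices and values chosen for illustration):

```python
import tensorflow as tf

# Write two values into an otherwise zero-initialised 2x3 tensor.
result = tf.scatter_nd(indices=[[0, 1], [1, 2]],
                       updates=[10, 20],
                       shape=[2, 3])
print(result)  # [[0, 10, 0], [0, 0, 20]]
```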
### Gather then Update
In some cases, you will want `tensor_scatter_nd_update` to act as a "setter" to `gather_nd`'s "getter": i.e. you have a tensor; you extract a subset of values that meet certain criteria using `gather_nd`; you calculate new values for that subset; and then you create a new tensor, based on the original, that replaces those elements with the new values.
```
source = tf.constant([[[111,112,113], [121,122,123], [131,132,133]],
[[211,212,213], [221,222,223], [231,232,233]]])
# Create a list of indices where is_divisible_by_three is true (we no longer
# need to keep a reference to the result of tf.equal)
indices = tf.where(tf.equal(0, source % 3))
# Extract a list of values that need updating
values_divisible_by_three = tf.gather_nd(source, indices)
# Perform a really expensive operation on those values
new_values = values_divisible_by_three % 100
# Update entries in the original Tensor
new_tensor = tf.tensor_scatter_nd_update(
source, indices, new_values)
# Updated Tensor
print(new_tensor)
```
## Exercise: Mandelbrot set
Let's revisit the Mandelbrot set from the previous training course. In that solution, we ran the z = z*z + c calculation for all co-ordinates, even the ones whose magnitude had already exceeded 2.
For the purpose of this exercise, we will pretend that the complex calculation is very expensive and should be eliminated wherever possible. In actual fact, the calculation is utterly trivial and swamped by the cost of the gather/scatter operations, but the same methods can be used in situations rather more expensive than a complex add and multiply.
```
MAX_ITERATIONS = 64
NUM_PIXELS = 512
def generate_grid(nX, nY, bottom_left=(-1.0, -1.0), top_right=(1.0, 1.0)):
"""Generates a complex matrix of shape [nX, nY].
Generates an evenly spaced grid of complex numbers spanning the rectangle
between the supplied diagonal points.
Args:
nX: A positive integer. The number of points in the horizontal direction.
nY: A positive integer. The number of points in the vertical direction.
bottom_left: The coordinates of the bottom left corner of the rectangle to
cover.
top_right: The coordinates of the top right corner of the rectangle to
cover.
Returns:
    A constant tensor of type complex128 and shape [nX, nY].
"""
x = tf.linspace(bottom_left[0], top_right[0], nX)
y = tf.linspace(bottom_left[1], top_right[1], nY)
real, imag = tf.meshgrid(x, y)
return tf.cast(tf.complex(real, imag), tf.complex128)
c_values = generate_grid(NUM_PIXELS, NUM_PIXELS)
initial_Z_values = tf.zeros_like(c_values, dtype=tf.complex128)
initial_diverged_after = tf.ones_like(c_values, dtype=tf.int32) * MAX_ITERATIONS
# You need to put the various values you want to change inside the loop here
loop_vars = (0, initial_Z_values, initial_diverged_after)
# this needs to take the same number of arguments as loop_vars contains and
# return a tuple of equal size with the next iteration's values
def body(iteration_count, Z_values, diverged_after):
  # a matrix of bools showing all the co-ordinates that haven't diverged yet
not_diverged = tf.equal(diverged_after, MAX_ITERATIONS)
# a list of the indices in not_diverged that are true
not_diverged_indices = tf.where(not_diverged)
# you now need to gather just the Z and c values covered by
# not_diverged_indices, calculate the new Z values, and then scatter the
# values back into a new Z_values matrix to pass to the next iteration.
  new_Z_values = Z_values  # TODO: gather the undiverged Z and c values, compute z*z + c, then scatter back
# And now we're back to the original code
has_diverged = tf.abs(new_Z_values) > 2.0
new_diverged_after = tf.minimum(diverged_after, tf.where(
has_diverged, iteration_count, MAX_ITERATIONS))
return (iteration_count+1, new_Z_values, new_diverged_after)
# this just needs to take the same number of arguments as loop_vars contains and
# return true (we'll use maximum_iterations to exit the loop)
def cond(iteration_count, Z_values, diverged_after):
return True
results = tf.while_loop(
loop_vars=loop_vars,
body = body,
cond = cond,
maximum_iterations=MAX_ITERATIONS)
## extract the final value of diverged_after from the tuple
final_diverged_after = results[-1]
plt.matshow(final_diverged_after)
pass
#@title Solution: Mandelbrot set (Double-click to reveal)
MAX_ITERATIONS = 64
NUM_PIXELS = 512
def GenerateGrid(nX, nY, bottom_left=(-1.0, -1.0), top_right=(1.0, 1.0)):
"""Generates a complex matrix of shape [nX, nY].
Generates an evenly spaced grid of complex numbers spanning the rectangle
between the supplied diagonal points.
Args:
nX: A positive integer. The number of points in the horizontal direction.
nY: A positive integer. The number of points in the vertical direction.
bottom_left: The coordinates of the bottom left corner of the rectangle to
cover.
top_right: The coordinates of the top right corner of the rectangle to
cover.
Returns:
    A constant tensor of type complex128 and shape [nX, nY].
"""
x = tf.linspace(bottom_left[0], top_right[0], nX)
y = tf.linspace(bottom_left[1], top_right[1], nY)
real, imag = tf.meshgrid(x, y)
return tf.cast(tf.complex(real, imag), tf.complex128)
c_values = GenerateGrid(NUM_PIXELS, NUM_PIXELS)
initial_Z_values = tf.zeros_like(c_values, dtype=tf.complex128)
initial_diverged_after = tf.ones_like(c_values, dtype=tf.int32) * MAX_ITERATIONS
# You need to put the various values you want to change inside the loop here
loop_vars = (0, initial_Z_values, initial_diverged_after)
# this needs to take the same number of arguments as loop_vars contains and
# return a tuple of equal size with the next iteration's values
def body(iteration_count, Z_values, diverged_after):
  # a matrix of bools showing all the co-ordinates that haven't diverged yet
not_diverged = tf.equal(diverged_after, MAX_ITERATIONS)
# a list of the indices in not_diverged that are true
not_diverged_indices = tf.where(not_diverged)
# Gather the values for just the undiverged co-ordinates, and generate the
# next iteration's values
not_diverged_c_values_array = tf.gather_nd(c_values, not_diverged_indices)
not_diverged_Z_values_array = tf.gather_nd(Z_values, not_diverged_indices)
new_Z_values_array = (not_diverged_Z_values_array * not_diverged_Z_values_array
+ not_diverged_c_values_array)
# merge the new values with the already-diverged
new_Z_values_or_zeroes = tf.scatter_nd(
not_diverged_indices,
new_Z_values_array,
tf.shape(Z_values, out_type=tf.dtypes.int64))
new_Z_values = tf.where(not_diverged, new_Z_values_or_zeroes, Z_values)
# And now we're back to the original code
has_diverged = tf.abs(new_Z_values) > 2.0
new_diverged_after = tf.minimum(diverged_after, tf.where(
has_diverged, iteration_count, MAX_ITERATIONS))
return (iteration_count+1, new_Z_values, new_diverged_after)
# this just needs to take the same number of arguments as loop_vars contains and
# return true (we'll use maximum_iterations to exit the loop)
def cond(iteration_count, Z_values, diverged_after):
return True
results = tf.while_loop(
loop_vars=loop_vars,
body = body,
cond = cond,
maximum_iterations=MAX_ITERATIONS)
## extract the final value of diverged_after from the tuple
final_diverged_after = results[-1]
plt.matshow(final_diverged_after)
plt.show()
```
## SparseTensor
[Full documentation](https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor)
A sparse tensor is created from a list of indices, a list of values and a shape: the same as the arguments to scatter_nd. Any element within the tensor that doesn't have an explicit value will be treated as zero. So a sparse tensor can be viewed as a deferred call to scatter_nd.
For large tensors where most of the values are zero, sparse tensors can grant major savings in memory. The [tf.sparse module](https://www.tensorflow.org/api_docs/python/tf/sparse) contains several specialised operations that work directly with a sparse tensor's internals and skip all the zero values, granting major savings in processing speed as well.
Similarly, sparse tensors can be efficiently divided or multiplied by a tensor or scalar. But attempts to perform inefficient operations on a sparse tensor (i.e. ones likely to set most elements to a non-zero value) are not allowed; you first need to convert the sparse tensor to a normal, or "dense", tensor with the ```tf.sparse.to_dense``` function.
```
source = tf.constant([[[111,112,113], [121,122,123], [131,132,133]],
[[211,212,213], [221,222,223], [231,232,233]]])
# create a list of indices where is_divisible_by_three is true
indices = tf.where(tf.equal(0, source % 3))
# extract a matching list of values
values_divisible_by_three = tf.gather_nd(source, indices)
sparse = tf.sparse.SparseTensor(
indices,
values_divisible_by_three,
tf.shape(source, out_type=tf.dtypes.int64))
print ("sparse =")
print(sparse)
# We can efficiently multiply sparse by a dense tensor
print ("\nsparse * dense =")
print(sparse * source)
# We can efficiently multiply a dense tensor by a sparse
print ("\ndense * sparse")
print(source * sparse)
# We can efficiently divide sparse by a dense tensor
print ("\nsparse / dense")
print(sparse / source)
# But attempts to perform inefficient operations on a sparse tensor (i.e. ones
# likely to set most elements to a non-zero value) are not allowed.
# You need to convert the sparse tensor into a dense tensor first.
try:
not_allowed = sparse + source
except ValueError:
pass
# Running to_dense is exactly the same as calling scatter_nd:
print ("\nto_dense gives")
print(tf.sparse.to_dense(sparse))
print ("\nscatter_nd gives")
print(tf.scatter_nd(sparse.indices, sparse.values, sparse.dense_shape))
```
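As a small illustration of the specialised `tf.sparse` operations mentioned above, `tf.sparse.reduce_sum` and `tf.sparse.sparse_dense_matmul` both skip the implicit zeros entirely (values chosen for illustration):

```python
import tensorflow as tf

sparse = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],
                                values=[3.0, 6.0],
                                dense_shape=[2, 3])
dense = tf.ones([3, 2])

# Sum over only the explicitly stored values.
total = tf.sparse.reduce_sum(sparse)
# Matrix-multiply a sparse matrix by a dense one without densifying.
product = tf.sparse.sparse_dense_matmul(sparse, dense)
print(total)    # 9.0
print(product)  # [[3., 3.], [6., 6.]]
```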
# Functional ops
## tf.foldl and tf.foldr
[Full documentation](https://www.tensorflow.org/api_docs/python/tf/foldl)
These two functions split a given tensor across its first dimension. The resulting subtensors are then each passed to an op along with an "accumulator".
For most iterations, the value of the accumulator is the result of the previous iteration. For the first iteration, the accumulator is either the initial value passed into the foldl/foldr call or, if none is supplied, the first subtensor. The final iteration's result then becomes the overall result of the op.
So the rough pseudo code of ```x = tf.foldl(op, [[1,1], [2,2], [3,3]], initializer)``` would be
``` python
result_iteration1 = op(initializer, [1,1])
result_iteration2 = op(result_iteration1, [2,2])
result_iteration3 = op(result_iteration2, [3,3])
x = result_iteration3
```
Whereas the rough pseudo-code of ```x = tf.foldl(op, [[1,1], [2,2], [3,3]])``` (i.e. no initializer supplied) would be
``` python
result_iteration1 = op([1,1], [2,2])
result_iteration2 = op(result_iteration1, [3,3])
x = result_iteration2
```
```foldr``` is identical to ```foldl```, except that the order the tensor is iterated through is reversed. So the rough pseudo-code of ```x = tf.foldr(op, [[1,1], [2,2], [3,3]], initializer)``` would be
``` python
result_iteration1 = op(initializer, [3,3])
result_iteration2 = op(result_iteration1, [2,2])
result_iteration3 = op(result_iteration2, [1,1])
x = result_iteration3
```
The only complication of this method is that the op is defined by a python callable. Note that the callable is only called once, at execution time, to build the operation. **Your python callable is not called for every row in the input**, nor can it see the individual values. It is the op created by your python code that will be repeatedly called.
Note that, despite this, using these methods still forgoes several optimisation opportunities available to TensorFlow's built-in operations. So if you can use something like `tf.math.reduce_sum` instead of these ops, your code may well run significantly faster.
```
source = tf.constant([[1,2],[3,4],[5,6]])
# element
@tf.function
def my_function(previous_iterations_result, element):
print("In my_function.")
# this depends on the previous values, thus highlighting the difference
# between foldl and foldr
return tf.math.maximum(
previous_iterations_result, element) + previous_iterations_result
print("Executing foldl")
print("foldl result:\n%s"%
tf.foldl(my_function, source))
print("\nExecuting foldr")
print("foldr result:\n%s"%
tf.foldr(my_function, source))
```
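To make the built-in-reduction point above concrete, here is a minimal sketch showing that folding addition over the first dimension gives the same result as the (usually much faster) built-in reduction:

```python
import tensorflow as tf

source = tf.constant([[1, 2], [3, 4], [5, 6]])

# Fold addition over the first dimension (no initializer, so the first
# row becomes the initial accumulator)...
folded = tf.foldl(lambda acc, row: acc + row, source)
# ...which is equivalent to the built-in column-wise reduction.
reduced = tf.math.reduce_sum(source, axis=0)
print(folded)   # [9, 12]
print(reduced)  # [9, 12]
```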
## tf.map_fn
This op is similar to `foldl`, but the python function takes only a single argument: it lacks the accumulator argument containing the result of the previous iteration. Again, the callable is called just once and is used to generate a TensorFlow op; it is this generated op that is executed once per row. And again, be aware that replacing the map_fn call with a built-in op - if possible - can result in significant increases in speed.
```
source = tf.constant([[1,2],[3,4],[5,6]])
# element
@tf.function
def my_function(element):
print("In my_function")
return tf.math.reduce_sum(element)
print("map_fn result:\n%s"%
tf.map_fn(my_function, source))
```
## tf.vectorized_map
This op is similar to `map_fn`, but has much better performance due to vectorization. `map_fn` is serial, since it is based on `tf.while_loop`, while `tf.vectorized_map` relies on [`pfor`](https://github.com/tensorflow/tensorflow/blob/a4dfb8d1a71385bd6d122e4f27f86dcebb96712d/tensorflow/python/ops/parallel_for/control_flow_ops.py#L546) (parallel for) in its implementation. The potential speedup can match that of manual batching.
The op is useful to parallelize tasks where batching is hard to achieve (e.g., Jacobian calculation).
See [official documentation](https://www.tensorflow.org/api_docs/python/tf/vectorized_map) for more details.
Again the callable is called just once, and is used to generate a tensorflow op.
```
source = tf.constant([[1,2],[3,4],[5,6]])
# element
@tf.function
def my_function(element):
print("In my_function")
return tf.math.reduce_sum(element)
print("vectorized_map result:\n%s"%
tf.vectorized_map(my_function, source))
# Vectorization map vs map_fn vs batching
@tf.function
def square_map(x):
return tf.map_fn(lambda x: x**2, x)
@tf.function
def square_vectorized_map(x):
return tf.vectorized_map(lambda x: x**2, x)
dtype = tf.float64
x = tf.random.uniform([1_000], dtype=dtype)
```
When timing, we call `.numpy()` to ensure the result is copied to memory.
```
%%timeit
# map_fn speed
square_map(x).numpy()
%%timeit
# vectorized_map speed
square_vectorized_map(x).numpy()
%%timeit
# Batched version
(x**2).numpy()
```
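As an illustration of the Jacobian use case mentioned above, here is a minimal sketch of the common per-example gradient pattern: a `tf.GradientTape` inside the mapped function, run once per batch element in parallel by `tf.vectorized_map` (the function name and values are chosen for illustration):

```python
import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # a batch of two inputs

def per_example_gradient(xi):
  # Gradient of f(x) = sum(x**2) for a single example, i.e. 2*x.
  with tf.GradientTape() as tape:
    tape.watch(xi)
    y = tf.reduce_sum(xi ** 2)
  return tape.gradient(y, xi)

# One gradient row per batch element, computed in parallel via pfor.
jac = tf.vectorized_map(per_example_gradient, x)
print(jac)  # [[2., 4.], [6., 8.]]
```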
# XLA compilation
One of the core TensorFlow concepts is the computational graph. One can imagine that knowing the graph should provide enough information to create efficient low-level code targeting a specific device (i.e., CPU/GPU/TPU). XLA (Accelerated Linear Algebra) is a compiler that does precisely that - it creates an LLVM representation from the computational graph, potentially bringing a significant speed-up to the calculation. Compilation can be done in either Ahead-of-Time (AOT) or Just-in-Time (JIT) mode.
Refer to the [official XLA page](https://www.tensorflow.org/xla) for more details. XLA Architecture details can be found [here](https://www.tensorflow.org/xla/architecture).
From the user's perspective, using JIT compilation is easy: simply set the `jit_compile=True` argument of `tf.function`.
To use AOT compilation mode please refer to the [documentation](https://www.tensorflow.org/xla/tfcompile).
**NB**
* At the moment, not every function can be XLA-compiled. For example, the input and output shapes of `tf.while_loop` must be the same. Also, some ops might be missing an XLA implementation.
* JIT compilation means that compilation happens at the first function call. Subsequent calls use the compiled function. If an input has a different shape from the one used during compilation, JIT compilation happens again for the new input shape.
```
@tf.function
def square_map(x):
return tf.map_fn(lambda x: x**2, x)
@tf.function(jit_compile=True)
def square_map_xla(x):
return tf.map_fn(lambda x: x**2, x)
@tf.function(jit_compile=True)
def square_vectorized_map_xla(x):
return tf.vectorized_map(lambda x: x**2, x)
dtype = tf.float64
x = tf.random.uniform([1_000], dtype=dtype)
%%timeit
square_map(x).numpy()
%%timeit
# Compare time to the non-compiled code above
square_map_xla(x).numpy()
# Now compare with compiled vectorized_map
x = tf.random.uniform([500_000], dtype=dtype)
%%timeit
# map_fn + XLA
square_map_xla(x).numpy()
%%timeit
# vectorized_map + XLA
square_vectorized_map_xla(x).numpy()
```
# Accelerate pretraining of BERT model using ONNX Runtime
This notebook contains a walkthrough of using ONNX Runtime in Azure Machine Learning service to pretrain [BERT: Bidirectional Encoder Representations from Transformers](https://arxiv.org/abs/1810.04805) models. This example shows how ONNX Runtime training can accelerate BERT pretraining implementation in PyTorch maintained at https://github.com/NVIDIA/DeepLearningExamples.
Steps:
- Initialize an AzureML workspace
- Register a datastore to use preprocessed data for training
- Create an AzureML experiment
- Provision a compute target
- Create an Estimator
- Configure and Run
Prerequisites
If you are using an Azure Machine Learning [Compute Instance](https://docs.microsoft.com/en-us/azure/machine-learning/concept-compute-instance) you are all set. Otherwise, you need to setup your environment by installing AzureML Python SDK to run this notebook. Refer to [How to use Estimator in Azure ML](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training-with-deep-learning/how-to-use-estimator/how-to-use-estimator.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace.
Refer to instructions at https://github.com/microsoft/onnxruntime-training-examples/blob/master/nvidia-bert/README.md before running the steps below.
### Check SDK installation
```
import os
import requests
import sys
# AzureML libraries
import azureml.core
from azureml.core import Experiment, Workspace, Datastore, Run
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.container_registry import ContainerRegistry
from azureml.core.runconfig import MpiConfiguration, RunConfiguration, DEFAULT_GPU_IMAGE
from azureml.train.dnn import PyTorch
from azureml.train.estimator import Estimator
from azureml.widgets import RunDetails
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
```
### AzureML Workspace setup
```
# Create or retrieve Azure machine learning workspace
# see https://docs.microsoft.com/en-us/python/api/overview/azure/ml/?view=azure-ml-py
ws = Workspace.get(name="myworkspace", subscription_id='<azure-subscription-id>', resource_group='myresourcegroup')
# Print workspace attributes
print('Workspace name: ' + ws.name,
'Workspace region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep = '\n')
```
### Register Datastore
Before running the step below, data prepared using the instructions at https://github.com/microsoft/onnxruntime-training-examples/blob/master/nvidia-bert/README.md should be transferred to an Azure Blob container referenced in the `Datastore` registration step. Refer to the documentation at https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-access-data for details on using data in Azure ML experiments.
```
# Create a datastore from blob storage containing training data.
# Consult README.md for instructions downloading and uploading training data.
ds = Datastore.register_azure_blob_container(workspace=ws,
datastore_name='<datastore-name>',
account_name='<storage-account-name>',
account_key='<storage-account-key>',
container_name='<storage-container-name>')
# Print datastore attributes
print('Datastore name: ' + ds.name,
'Container name: ' + ds.container_name,
'Datastore type: ' + ds.datastore_type,
'Workspace name: ' + ds.workspace.name, sep = '\n')
```
### Create AzureML Compute Cluster
This recipe is supported on Azure Machine Learning Service using 16 x Standard_NC24rs_v3 or 8 x Standard_ND40rs_v2 VMs. In the next step, you will create an AzureML Compute cluster of Standard_ND40rs_v2 GPU VMs with the specified name, if it doesn't already exist in your workspace.
```
# Create GPU cluster
gpu_cluster_name = "ortbertpretrain"
try:
gpu_compute_target = ComputeTarget(workspace=ws, name=gpu_cluster_name)
print('Found existing compute target.')
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='Standard_ND40rs_v2', min_nodes=0, max_nodes=8)
gpu_compute_target = ComputeTarget.create(ws, gpu_cluster_name, compute_config)
gpu_compute_target.wait_for_completion(show_output=True)
# Create experiment for phase 1
experiment_name = 'nvbert-ort-pretraining-phase1'
experiment = Experiment(ws, name=experiment_name)
```
### Create Estimator
Notes before running the following step:
* Update the following step to replace two occurrences of `<blob-path-to-phase1-training-data>` with the actual path in the datastore that contains the training files.
* If you followed the instructions at https://github.com/microsoft/onnxruntime-training-examples/blob/master/nvidia-bert/README.md to prepare data, make sure that the data and other files that are not code or config are moved out of the `workspace` directory. Data files should have been moved to a `Datastore` to use in training.
To fully utilize capacity, we suggest parameters from below table for phase 1.
| VM SKU | node_count | gpu_memory_limit_gb | train_batch_size | gradient_accumulation_steps |
| ------------------ |:------------------:|-----------------:|-----------------:| ---------------------------:|
| Standard_ND40rs_v2 | 1 (8 GPUs total) | 32 | 8192 | 64 |
| Standard_ND40rs_v2 | 2 (16 GPUs total) | 32 | 4096 | 32 |
| Standard_ND40rs_v2 | 4 (32 GPUs total) | 32 | 2048 | 16 |
| Standard_ND40rs_v2 | 8 (64 GPUs total) | 32 | 1024 | 8 |
| Standard_NC24rs_v3 | 1 (4 GPUs total) | 16 | 16320 | 340 |
| Standard_NC24rs_v3 | 2 (8 GPUs total) | 16 | 8160 | 170 |
| Standard_NC24rs_v3 | 4 (16 GPUs total) | 16 | 4080 | 85 |
| Standard_NC24rs_v3 | 8 (32 GPUs total) | 16 | 2016 | 42 |
| Standard_NC24rs_v3 | 16 (64 GPUs total) | 16 | 1008 | 21 |
Refer to [README.md](../README.md) for an in-depth explanation of batch sizes and gradient accumulation steps.
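The rows of the table are internally consistent: assuming `train_batch_size` is a per-GPU quantity (as in the NVIDIA BERT scripts), each row keeps the effective global batch size and the per-GPU micro-batch constant for a given SKU. A quick sanity check of the Standard_ND40rs_v2 rows (pure Python, values copied from the table above):

```python
# (gpus, train_batch_size, gradient_accumulation_steps) per table row
rows = [
    (8, 8192, 64),
    (16, 4096, 32),
    (32, 2048, 16),
    (64, 1024, 8),
]
for gpus, batch, accum in rows:
    global_batch = gpus * batch   # effective global batch: 65536 in every row
    micro_batch = batch // accum  # per-GPU micro-batch per step: 128 in every row
    print(gpus, global_batch, micro_batch)
```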
```
# this directory should contain run_pretraining_ort.py, ort_supplement directory and other files copied over based on the instructions at https://github.com/microsoft/onnxruntime-training-examples/blob/master/nvidia-bert/README.md
project_folder = '../../workspace/BERT'
image_name = 'mcr.microsoft.com/azureml/onnxruntime-training:0.1-rc3.1-openmpi4.0-cuda10.2-cudnn8.0-nccl2.7-for-bert'
# set MPI configuration
# set processes per node to be equal to GPU count on SKU.
mpi = MpiConfiguration()
mpi.process_count_per_node = 8
import uuid
output_id = uuid.uuid1().hex
# Define training estimator for phase 1
# Consult https://docs.microsoft.com/en-us/azure/machine-learning/how-to-train-ml-models
# Fill in blob path to phase 1 training data in argument below
estimator_ph1 = Estimator(source_directory=project_folder,
# Compute configuration
compute_target = gpu_compute_target,
node_count=4,
process_count_per_node=1, # separate MPI jobs
distributed_training = mpi,
use_gpu = True,
# supply Docker image
use_docker = True,
custom_docker_image = image_name,
user_managed = True,
# Training script parameters
script_params = {
"--config_file": "bert_config.json",
'--input_dir' : ds.path('<blob-path-to-phase1-training-data>').as_mount(),
'--output_dir': ds.path(f'output/{experiment_name}/{output_id}/').as_mount(),
'--bert_model' : 'bert-large-uncased',
'--train_batch_size' : 2048,
'--max_seq_length': 128,
'--max_predictions_per_seq': 20,
'--max_steps' : 7038,
'--warmup_proportion' : '0.2843',
'--num_steps_per_checkpoint' : 200,
'--learning_rate' : '6e-3',
'--seed': 42,
'--fp16' : '',
'--gradient_accumulation_steps' : 16,
'--allreduce_post_accumulation' : '',
'--allreduce_post_accumulation_fp16' : '',
'--do_train' : '',
'--use_ib' : '', # pass if infiniband available on SKU
'--gpu_memory_limit_gb' : 32 # set to per GPU memory in GB (check SKU)
},
entry_script = 'run_pretraining_ort.py',
inputs = [ds.path('<blob-path-to-phase1-training-data>').as_mount()]
)
```
### Run AzureML experiment - Phase 1 of pretraining
```
# Submit phase 1 (check logs from Outputs + logs tab of corresponding link)
run = experiment.submit(estimator_ph1)
RunDetails(run).show()
print(run.get_portal_url())
# Create experiment for phase 2
experiment_name = 'nvbert-ort-pretraining-phase2'
experiment = Experiment(ws, name=experiment_name)
```
### Create Estimator - Phase 2
Notes before running the following step:
* Update the following step to replace two occurrences of `<blob-path-to-phase2-training-data>` with the actual path in the datastore that contains the training files.
* If you followed the instructions at https://github.com/microsoft/onnxruntime-training-examples/blob/master/nvidia-bert/README.md to prepare data, make sure that the data and other files that are not code or config are moved out of the `workspace` directory. Data files should have been moved to a `Datastore` to use in training.
To fully utilize capacity, we suggest parameters from below table for phase 2.
| VM SKU | node_count | gpu_memory_limit_gb | train_batch_size | gradient_accumulation_steps |
| ------------------ |:------------------:|-----------------:|-----------------:| ---------------------------:|
| Standard_ND40rs_v2 | 1 (8 GPUs total) | 32 | 4096 | 256 |
| Standard_ND40rs_v2 | 2 (16 GPUs total) | 32 | 2048 | 128 |
| Standard_ND40rs_v2 | 4 (32 GPUs total) | 32 | 1024 | 64 |
| Standard_ND40rs_v2 | 8 (64 GPUs total) | 32 | 512 | 32 |
| Standard_NC24rs_v3 | 1 (4 GPUs total) | 16 | 8192 | 1024 |
| Standard_NC24rs_v3 | 2 (8 GPUs total) | 16 | 4096 | 512 |
| Standard_NC24rs_v3 | 4 (16 GPUs total) | 16 | 2048 | 256 |
| Standard_NC24rs_v3 | 8 (32 GPUs total) | 16 | 1024 | 128 |
| Standard_NC24rs_v3 | 16 (64 GPUs total) | 16 | 512 | 64 |
```
# Define training estimator for phase 2
# Fill in blob path to phase 1 training data as well as phase 1 checkpoint in arguments below
estimator_ph2 = Estimator(source_directory=project_folder,
# Compute configuration
compute_target = gpu_compute_target,
node_count=4,
process_count_per_node=1, # separate MPI jobs
distributed_training = mpi,
use_gpu = True,
#Docker image
use_docker = True,
custom_docker_image = image_name,
user_managed = True,
# Training script parameters
script_params = {
# Required Params
"--config_file": "bert_config.json",
'--input_dir' : ds.path('<blob-path-to-phase2-training-data>').as_mount(),
'--output_dir': ds.path(f'output/{experiment_name}/{output_id}/').as_mount(),
'--bert_model' : 'bert-large-uncased',
'--train_batch_size' : 1024,
'--max_seq_length': 512,
'--max_predictions_per_seq': 80,
'--max_steps' : 1563,
'--warmup_proportion' : '0.128',
'--num_steps_per_checkpoint' : 200,
'--learning_rate' : '4e-3',
'--seed': 42,
'--fp16' : '',
'--gradient_accumulation_steps' : 64,
'--allreduce_post_accumulation' : '',
'--allreduce_post_accumulation_fp16' : '',
'--do_train' : '',
'--phase2' : '',
'--resume_from_checkpoint' : '',
'--phase1_end_step' : '7038',
'--init_checkpoint' : ds.path('<path-to-checkpoint-from-phase-1>'),
'--use_ib' : '', # pass if infiniband available on SKU
'--gpu_memory_limit_gb' : 32 # set to per GPU memory in GB (check SKU)
},
entry_script='run_pretraining_ort.py',
inputs=[ds.path('<blob-path-to-phase2-training-data>').as_mount()])
```
### Run AzureML experiment - Phase 2 of pretraining
```
# Submit phase 2 run (check logs from Outputs + logs tab of corresponding link)
run = experiment.submit(estimator_ph2)
RunDetails(run).show()
print(run.get_portal_url())
```
# Start-to-Finish Example: $\text{GiRaFFE_HO}$ 1D tests
### Author: Patrick Nelson
### Adapted from [Start-to-Finish Example: Head-On Black Hole Collision](Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide.ipynb)
## This module implements a basic GRFFE code to evolve one-dimensional GRFFE waves.
### NRPy+ Source Code for this module:
1. [GiRaFFEfood_HO/GiRaFFEfood_HO_1D_tests.py](../edit/GiRaFFEfood_HO/GiRaFFEfood_HO_1D_tests.py); [\[**tutorial**\]](Tutorial-GiRaFFEfood_HO_1D_tests.ipynb): Alfvén wave initial data, sets all FFE variables in a Cartesian basis.
1. [GiRaFFE_HO/GiRaFFE_Higher_Order_v2.py](../edit/GiRaFFE_HO/GiRaFFE_Higher_Order_v2.py); [\[**tutorial**\]](Tutorial-GiRaFFE_Higher_Order_v2.ipynb): Generates the right-hand sides for the GRFFE evolution equations in Cartesian coordinates.
We will also borrow C code from the ETK implementation of $\text{GiRaFFE_HO}$.
Here we use NRPy+ to generate the C source code necessary to set up initial data for the 1D Alfvén wave test (see [the original GiRaFFE paper](https://arxiv.org/pdf/1704.00599.pdf)). Then we use it to generate the RHS expressions for [Method of Lines](https://reference.wolfram.com/language/tutorial/NDSolveMethodOfLines.html) time integration based on the [explicit Runge-Kutta fourth-order scheme](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods) (RK4).
The entire algorithm is outlined below, with NRPy+-based components highlighted in <font color='green'>green</font>.
1. Allocate memory for gridfunctions, including temporary storage for the RK4 time integration.
1. (**Step 2** below) <font color='green'>Set gridfunction values to initial data (**[documented in previous module](Tutorial-GiRaFFEfood_HO_1D_tests.ipynb)**).</font>
1. Evolve the initial data forward in time using RK4 time integration. At each RK4 substep, do the following:
1. (**Step 3A** below) <font color='green'>Evaluate GRFFE RHS expressions.</font>
1. (**Step 4** below) Apply singular, curvilinear coordinate boundary conditions [*a la* the SENR/NRPy+ paper](https://arxiv.org/abs/1712.07658)
1. (**Step 3B** below) At the end of each iteration in time, output the <font color='green'>FFE variables</font>. (This is in Step 3B, because Step 4 requires that *all* gridfunctions be defined.)
1. Repeat above steps at two numerical resolutions to confirm convergence to the expected value.
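The Method-of-Lines/RK4 update at the heart of the algorithm above can be sketched for a toy ODE (a minimal illustration only, not the GRFFE system; all names here are illustrative):

```python
import math

def rk4_step(y, t, dt, rhs):
    """Advance y by one classic RK4 step of size dt."""
    k1 = rhs(t,          y)
    k2 = rhs(t + dt/2.0, y + dt/2.0 * k1)
    k3 = rhs(t + dt/2.0, y + dt/2.0 * k2)
    k4 = rhs(t + dt,     y + dt * k3)
    return y + dt/6.0 * (k1 + 2.0*k2 + 2.0*k3 + k4)

def rhs(t, y):
    return -y        # stand-in for the GRFFE RHS evaluation

y, t, dt = 1.0, 0.0, 0.01
for n in range(100):             # integrate to t = 1
    y = rk4_step(y, t, dt, rhs)
    t += dt
# y now approximates exp(-1); the global RK4 error scales as O(dt^4)
```

In the real code the RK4 substeps are generated in C by the `MoLtimestepping` module (Step 1.a below), with the RHS evaluation and post-step fix-ups spliced in as strings.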
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
1. [Step 1](#writec): Use NRPy+ to generate necessary C code to evolve the GRFFE system
1. [Step 1.a](#mol_timestep): Use NRPy+ to generate the RK4 timestepping method
1. [Step 1.b](#c_headers): Use NRPy+ to generate C headers
1. [Step 1.c](#copy_c): Copy code over from the GiRaFFE C code library
1. [Step 1.d](#initial_data): Import Alfvén Wave initial data C function
1. [Step 1.e](#poynting): Import densitized Poynting flux initial data conversion C function
1. [Step 1.f](#rhs): Output GiRaFFE RHS expressions
1. [Step 2](#mainc): `GiRaFFE_standalone.c`: The Main C Code
1. [Step 2.a](#import_headers): Import needed header files
1. [Step 2.b](#data_type): Set data type
1. [Step 2.c](#free_params): Set free parameters
1. [Step 2.d](#idx4): Declare the IDX4 macro
1. [Step 2.e](#gridfuncs): Define gridfunctions
1. [Step 2.f](#bcs): Boundary Conditions, the A-to-B driver, and the conservative-to-primitive solver
1. [Step 2.g](#timestep): Find the CFL-constrained timestep
1. [Step 2.h](#initial_data_c): Declare the function for the exact solution
1. [Step 2.i](#rhsC): Declare the functions to evaluate the GRFFE RHSs
1. [Step 2.j](#main): The `main()` function
1. [Step 3](#convergence): Code validation: Verify that relative error in numerical solution converges to zero at the expected order
1. [Step 4](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='writec'></a>
# Step 1: Use NRPy+ to generate necessary C code to evolve the GRFFE system \[Back to [top](#toc)\]
$$\label{writec}$$
We first begin by importing the needed NRPy+ modules; we then create the directory to which we wish to write our C code. Note that we first remove it and all its contents to make sure we have a fresh start. We set the FD-order to two and our spatial dimension to 3. We set our coordinate system to Cartesian and set up the reference metric.
We also set the symmetry axes parameter from `indexedexp` to `2`. This is done because the Alfvén wave is completely independent of $z$, which is direction `2` in Cartesian coordinates; so, we can speed up the code in this case by assuming that *any* derivative in the $z$ direction is 0.
```
import os,sys
nrpy_dir_path = os.path.join("..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
# Step P1: First we import needed core NRPy+ modules
from outputC import * # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
#Step P2: Create C code output directory:
import cmdline_helper as cmd
import shutil, os
cmd.delete_existing_files("GiRaFFE_standalone_Ccodes/*")
Ccodesdir = os.path.join("GiRaFFE_standalone_Ccodes/")
cmd.mkdir(Ccodesdir)
# Set the finite differencing order to 2.
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", 2)
# Set spatial dimension (must be 3 for BSSN)
DIM = 3
par.set_parval_from_str("grid::DIM",DIM)
# Then we set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem","Cartesian")
rfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.
# Then we set the z axis, i.e., axis "2" (corresponding to the i2 direction), to be the symmetry axis.
# This sets all spatial derivatives in the z direction to zero.
par.set_parval_from_str("indexedexp::symmetry_axes","2")
```
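As a quick numerical illustration of the symmetry-axis optimization (a pure-Python sketch; the grid spacing and sample function are illustrative): for data independent of $z$, dropping every $z$-derivative term changes nothing.

```python
import math

dx = 0.1                        # illustrative grid spacing

def f(x, y, z):                 # independent of z by construction
    return math.sin(x) * math.cos(y)

def d2(x, y, z, axis):
    """Second-order central second derivative of f along one axis."""
    h = [0.0, 0.0, 0.0]
    h[axis] = dx
    return (f(x + h[0], y + h[1], z + h[2]) - 2.0 * f(x, y, z)
            + f(x - h[0], y - h[1], z - h[2])) / dx**2

x0, y0, z0 = 0.3, 0.7, 1.2
lap_full = d2(x0, y0, z0, 0) + d2(x0, y0, z0, 1) + d2(x0, y0, z0, 2)
lap_sym  = d2(x0, y0, z0, 0) + d2(x0, y0, z0, 1)   # z-term skipped
```

NRPy+ applies the same idea symbolically, so the skipped derivative stencils are never even generated in the C code.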
<a id='mol_timestep'></a>
## Step 1.a: Use NRPy+ to generate the RK4 timestepping method \[Back to [top](#toc)\]
$$\label{mol_timestep}$$
Next, we will use the `MoLtimestepping` module to write the code for our timestepping algorithm. Here, we will use it to write RK4, but note that by changing the variable `RK_method` we can easily and immediately use many other algorithms.
It is also imperative to pass the correct strings `RHS_string` and `post_RHS_string`. *Any* functions that we want to call between each step need to be included here.
```
# Choices are: Euler, "RK2 Heun", "RK2 MP", "RK2 Ralston", RK3, "RK3 Heun", "RK3 Ralston",
# SSPRK3, RK4, DP5, DP5alt, CK5, DP6, L6, DP8
RK_method = "RK4"
# Generate timestepping code. As described above the Table of Contents, each substep does the following:
# 3.A: Evaluate the RHSs (RHS_string)
# 3.B: Recover primitives, apply boundary conditions, and compute B from A (post_RHS_string)
import MoLtimestepping.C_Code_Generation as MoL
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
RK_order = Butcher_dict[RK_method][1]
cmd.mkdir(os.path.join(Ccodesdir,"MoLtimestepping/"))
MoL.MoL_C_Code_Generation(RK_method,
RHS_string = """
calc_u0(Nxx_plus_2NGHOSTS,aux_gfs);
quantities_to_FD_for_rhs_eval(Nxx_plus_2NGHOSTS,dxx,xx,RK_INPUT_GFS,aux_gfs);
rhs_eval(Nxx,Nxx_plus_2NGHOSTS,dxx, xx, RK_INPUT_GFS, aux_gfs, RK_OUTPUT_GFS);""",
post_RHS_string = """
GiRaFFE_HO_conserv_to_prims_FFE(Nxx, Nxx_plus_2NGHOSTS, dxx,xx, RK_OUTPUT_GFS, aux_gfs);
apply_bcs(Nxx, Nxx_plus_2NGHOSTS, RK_OUTPUT_GFS, aux_gfs);
driver_A_to_B(Nxx, Nxx_plus_2NGHOSTS, dxx, RK_OUTPUT_GFS, aux_gfs);
//apply_bcs_EXACT(Nxx,Nxx_plus_2NGHOSTS,xx,n,dt,RK_OUTPUT_GFS,aux_gfs);\n""",
outdir = os.path.join(Ccodesdir,"MoLtimestepping/"))
```
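In Python pseudocode, the call sequence those strings splice into each RK substep looks like this (the function names are stand-ins for the generated C functions, not real Python bindings):

```python
calls = []   # record the order in which the stand-in functions run

def calc_u0():                        calls.append("calc_u0")
def quantities_to_FD_for_rhs_eval():  calls.append("quantities_to_FD")
def rhs_eval():                       calls.append("rhs_eval")
def conserv_to_prims():               calls.append("conserv_to_prims")
def apply_bcs():                      calls.append("apply_bcs")
def driver_A_to_B():                  calls.append("A_to_B")

def rk_substep():
    # RHS_string: everything needed to evaluate the RHSs
    calc_u0(); quantities_to_FD_for_rhs_eval(); rhs_eval()
    # ... the RK update of the evolved gridfunctions happens here ...
    # post_RHS_string: clean up the just-updated state
    conserv_to_prims(); apply_bcs(); driver_A_to_B()

rk_substep()
```

The key point is that `post_RHS_string` runs on *every* substep's output, so the primitives and $B^i$ are always consistent with the freshly updated conservatives before the next RHS evaluation.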
<a id='c_headers'></a>
## Step 1.b: Use NRPy+ to generate C headers \[Back to [top](#toc)\]
$$\label{c_headers}$$
We will also need to output C header files that are related to the numerical grids and the coordinate system we set up using the reference metric. These end up as very simple files in Cartesian, but they will be more complex in other coordinate systems.
```
#################
# Next output C headers related to the numerical grids we just set up:
#################
# First output the coordinate bounds xxmin[] and xxmax[]:
with open(os.path.join(Ccodesdir,"xxminmax.h"), "w") as file:
file.write("const REAL xxmin[3] = {"+str(rfm.xxmin[0])+","+str(rfm.xxmin[1])+","+str(rfm.xxmin[2])+"};\n")
file.write("const REAL xxmax[3] = {"+str(rfm.xxmax[0])+","+str(rfm.xxmax[1])+","+str(rfm.xxmax[2])+"};\n")
# Next output the proper distance between gridpoints in given coordinate system.
# This is used to find the minimum timestep.
dxx = ixp.declarerank1("dxx",DIM=3)
ds_dirn = rfm.ds_dirn(dxx)
outputC([ds_dirn[0],ds_dirn[1],ds_dirn[2]],["ds_dirn0","ds_dirn1","ds_dirn2"],os.path.join(Ccodesdir,"ds_dirn.h"))
# Generic coordinate NRPy+ file output, Part 2: output the conversion from (x0,x1,x2) to Cartesian (x,y,z)
outputC([rfm.xx_to_Cart[0],rfm.xx_to_Cart[1],rfm.xx_to_Cart[2]],["xCart[0]","xCart[1]","xCart[2]"],
os.path.join(Ccodesdir,"xx_to_Cart.h"))
```
<a id='copy_c'></a>
## Step 1.c: Copy code over from the GiRaFFE C code library \[Back to [top](#toc)\]
$$\label{copy_c}$$
There are some important C codes that we have already written that are stored elsewhere. We will now copy them to our working directory. More detail about these codes can be found here:
* [Tutorial-GiRaFFE_HO_C_code_library-A2B.ipynb](Tutorial-GiRaFFE_HO_C_code_library-A2B.ipynb)
* [Tutorial-GiRaFFE_HO_C_code_library-C2P_P2C.ipynb](Tutorial-GiRaFFE_HO_C_code_library-C2P_P2C.ipynb)
* [Tutorial-GiRaFFE_HO_C_code_library-BCs.ipynb](Tutorial-GiRaFFE_HO_C_code_library-BCs.ipynb)
```
# First, let's make sure that the directories to which we want to copy files exist.
# The cmdline_helper function mkdir will do this regardless of OS
cmd.mkdir(os.path.join(Ccodesdir,"boundary_conditions/"))
cmd.mkdir(os.path.join(Ccodesdir,"A2B/"))
# Now, we'll start copying files using shutil.copy(src,dst)
shutil.copy(os.path.join("GiRaFFE_HO/GiRaFFE_Ccode_library/A2B/driver_AtoB.c"),os.path.join(Ccodesdir,"A2B/"))
shutil.copy(os.path.join("GiRaFFE_HO/GiRaFFE_Ccode_library/driver_conserv_to_prims_FFE.C"),os.path.join(Ccodesdir))
shutil.copy(os.path.join("GiRaFFE_HO/GiRaFFE_Ccode_library/compute_conservatives_FFE.C"),os.path.join(Ccodesdir))
shutil.copy(os.path.join("GiRaFFE_HO/GiRaFFE_Ccode_library/boundary_conditions/GiRaFFE_boundary_conditions.h"),os.path.join(Ccodesdir,"boundary_conditions/"))
```
<a id='initial_data'></a>
## Step 1.d: Import Alfvén Wave initial data C function \[Back to [top](#toc)\]
$$\label{initial_data}$$
The [GiRaFFEfood_HO.GiRaFFEfood_HO_1D_tests.py](../edit/GiRaFFEfood_HO/GiRaFFEfood_HO_1D_tests.py) NRPy+ module does the following:
1. Set up Alfvén Wave initial data quantities in the **Cartesian basis**, as [documented here](Tutorial-GiRaFFEfood_HO_1D_tests.ipynb).
```
import GiRaFFEfood_HO.GiRaFFEfood_HO_1D_tests as gf1D
gf1D.GiRaFFEfood_HO_1D_tests()
# Step 2: Create the C code output kernel.
#BU = ixp.register_gridfunctions_for_single_rank1("AUX","BU")
GiRaFFEfood_A_v_to_print_left = [\
lhrh(lhs=gri.gfaccess("out_gfs","AD0"),rhs=gf1D.AleftD[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","AD1"),rhs=gf1D.AleftD[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","AD2"),rhs=gf1D.AleftD[2]),\
lhrh(lhs=gri.gfaccess("out_gfs","ValenciavU0"),rhs=gf1D.ValenciavleftU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","ValenciavU1"),rhs=gf1D.ValenciavleftU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","ValenciavU2"),rhs=gf1D.ValenciavleftU[2]),\
]
GiRaFFEfood_A_v_to_print_center = [\
lhrh(lhs=gri.gfaccess("out_gfs","AD0"),rhs=gf1D.AcenterD[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","AD1"),rhs=gf1D.AcenterD[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","AD2"),rhs=gf1D.AcenterD[2]),\
lhrh(lhs=gri.gfaccess("out_gfs","ValenciavU0"),rhs=gf1D.ValenciavcenterU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","ValenciavU1"),rhs=gf1D.ValenciavcenterU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","ValenciavU2"),rhs=gf1D.ValenciavcenterU[2]),\
]
GiRaFFEfood_A_v_to_print_right = [\
lhrh(lhs=gri.gfaccess("out_gfs","AD0"),rhs=gf1D.ArightD[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","AD1"),rhs=gf1D.ArightD[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","AD2"),rhs=gf1D.ArightD[2]),\
lhrh(lhs=gri.gfaccess("out_gfs","ValenciavU0"),rhs=gf1D.ValenciavrightU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","ValenciavU1"),rhs=gf1D.ValenciavrightU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","ValenciavU2"),rhs=gf1D.ValenciavrightU[2]),\
]
GiRaFFEfood_A_v_CKernel_left = fin.FD_outputC("returnstring",GiRaFFEfood_A_v_to_print_left, params="outCverbose=False")
GiRaFFEfood_A_v_CKernel_center = fin.FD_outputC("returnstring",GiRaFFEfood_A_v_to_print_center, params="outCverbose=False")
GiRaFFEfood_A_v_CKernel_right = fin.FD_outputC("returnstring",GiRaFFEfood_A_v_to_print_right, params="outCverbose=False")
# Step 4: Write the C code kernel to file.
with open(os.path.join(Ccodesdir,"GiRaFFEfood_A_v_1D_tests_left.h"), "w") as file:
file.write(str(GiRaFFEfood_A_v_CKernel_left).replace("IDX4","IDX4S"))
with open(os.path.join(Ccodesdir,"GiRaFFEfood_A_v_1D_tests_center.h"), "w") as file:
file.write(str(GiRaFFEfood_A_v_CKernel_center).replace("IDX4","IDX4S"))
with open(os.path.join(Ccodesdir,"GiRaFFEfood_A_v_1D_tests_right.h"), "w") as file:
file.write(str(GiRaFFEfood_A_v_CKernel_right).replace("IDX4","IDX4S"))
# We will also need to declare some C parameters for this initial data
lbound,rbound = par.Cparameters("REAL","GiRaFFEfood_HO_1D",["lbound","rbound"], [-0.1,0.1])
```
<a id='poynting'></a>
## Step 1.e: Import densitized Poynting flux initial data conversion C function \[Back to [top](#toc)\]
$$\label{poynting}$$
The [GiRaFFEfood_HO.GiRaFFEfood_HO_Exact_Wald.py](../edit/GiRaFFEfood_HO/GiRaFFEfood_HO_Exact_Wald.py) NRPy+ module does the following:
1. Set up Exact Wald initial data quantities in the **Cartesian basis**, as [documented here](Tutorial-GiRaFFEfood_HO_Aligned_Rotator.ipynb).
2. Convert initial magnetic fields and Valencia 3-velocities into densitized Poynting flux initial data.
We only use the second functionality here (for now).
```
# Step 2: Create the C code output kernel.
gri.glb_gridfcs_list = []
import GiRaFFEfood_HO.GiRaFFEfood_HO_Exact_Wald as gfho
gfho.GiRaFFEfood_HO_ID_converter()
# To best format this for the ETK, we'll need to register this gridfunction.
StildeD = ixp.register_gridfunctions_for_single_rank1("EVOL","StildeD")
GiRaFFE_S_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","StildeD0"),rhs=gfho.StildeD[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","StildeD1"),rhs=gfho.StildeD[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","StildeD2"),rhs=gfho.StildeD[2]),\
]
GiRaFFE_S_CKernel = fin.FD_outputC("returnstring",GiRaFFE_S_to_print,params="outCverbose=False")
# Format the code within a C loop over cctkGH
#GiRaFFE_S_looped = lp.loop(["i2","i1","i0"],["0","0","0"],
# ["Nxx_plus_2NGHOSTS[2]","Nxx_plus_2NGHOSTS[1]","Nxx_plus_2NGHOSTS[0]"],\
# ["1","1","1"],["#pragma omp parallel for","",""],"",\
# GiRaFFE_S_CKernel.replace("time","cctk_time"))
# Step 4: Write the C code kernel to file.
with open(os.path.join(Ccodesdir,"GiRaFFEfood_HO_Stilde.h"), "w") as file:
file.write(str(GiRaFFE_S_CKernel))
```
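That second functionality can be sketched in flat space, where $\sqrt{\gamma}=1$: the ideal-FFE condition gives $E = -v \times B$ and the Poynting flux is $S = (E \times B)/4\pi$. (This is a Gaussian-units sketch with illustrative field values; the normalization convention and the full curved-space version are handled symbolically by the NRPy+ module above.)

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors given as lists."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

v = [0.5, 0.0, 0.0]          # illustrative Valencia 3-velocity
B = [0.0, 1.0, 0.0]          # illustrative magnetic field
E = [-c for c in cross(v, B)]                      # ideal FFE: E = -v x B
Stilde = [c / (4.0*math.pi) for c in cross(E, B)]  # flat space: Stilde_i = S_i
```

With these values the flux points along $v$, as expected: the field-line drift carries electromagnetic momentum in the drift direction.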
<a id='rhs'></a>
## Step 1.f: Output GiRaFFE RHS expressions \[Back to [top](#toc)\]
$$\label{rhs}$$
```
gri.glb_gridfcs_list = [] # This is necessary because, since this was originally designed as two ETK thorns,
# some gridfunctions are registered twice.
import GiRaFFE_HO.GiRaFFE_Higher_Order_v2 as gho
gho.GiRaFFE_Higher_Order_v2()
# StildeD is not declared as a gridfunction within GiRaFFE_HO. While it was declared in GiRaFFEfood_HO,
# the gridfunction list has since been cleared to avoid conflicts; so, we re-declare it here.
StildeD = ixp.register_gridfunctions_for_single_rank1("EVOL","StildeD")
# Create the C code output kernel.
# Here, "Prereqs" refers to quantities that must be finite-difference to construct the RHSs.
Prereqs_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","AevolParen"),rhs=gho.AevolParen),\
lhrh(lhs=gri.gfaccess("out_gfs","PevolParenU0"),rhs=gho.PevolParenU[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","PevolParenU1"),rhs=gho.PevolParenU[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","PevolParenU2"),rhs=gho.PevolParenU[2]),\
lhrh(lhs=gri.gfaccess("out_gfs","SevolParenUD00"),rhs=gho.SevolParenUD[0][0]),\
lhrh(lhs=gri.gfaccess("out_gfs","SevolParenUD01"),rhs=gho.SevolParenUD[0][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","SevolParenUD02"),rhs=gho.SevolParenUD[0][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","SevolParenUD10"),rhs=gho.SevolParenUD[1][0]),\
lhrh(lhs=gri.gfaccess("out_gfs","SevolParenUD11"),rhs=gho.SevolParenUD[1][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","SevolParenUD12"),rhs=gho.SevolParenUD[1][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","SevolParenUD20"),rhs=gho.SevolParenUD[2][0]),\
lhrh(lhs=gri.gfaccess("out_gfs","SevolParenUD21"),rhs=gho.SevolParenUD[2][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","SevolParenUD22"),rhs=gho.SevolParenUD[2][2]),\
]
metric_quantities_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","gammaUU00"),rhs=gho.gammaUU[0][0]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaUU01"),rhs=gho.gammaUU[0][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaUU02"),rhs=gho.gammaUU[0][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaUU11"),rhs=gho.gammaUU[1][1]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaUU12"),rhs=gho.gammaUU[1][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammaUU22"),rhs=gho.gammaUU[2][2]),\
lhrh(lhs=gri.gfaccess("out_gfs","gammadet"),rhs=gho.gammadet),\
]
# Now, we'll add the Kreiss-Oliger dissipation terms to the RHS of StildeD
# thismodule = __name__
# diss_strength = par.Cparameters("REAL", thismodule, "diss_strength", 1e300) # diss_strength must be set in C, and
# # we set it crazy high to ensure this.
# StildeD_dKOD = ixp.declarerank2("StildeD_dKOD","nosym")
# for k in range(DIM):
# for i in range(DIM):
# gho.Stilde_rhsD[i] += diss_strength * StildeD_dKOD[i][k]
# To best format this for the ETK, we'll need to register these gridfunctions.
#Stilde_rhsD = ixp.register_gridfunctions_for_single_rank1("AUX","Stilde_rhsD")
#A_rhsD = ixp.register_gridfunctions_for_single_rank1("AUX","A_rhsD")
#psi6Phi_rhs = gri.register_gridfunctions("AUX","psi6Phi_rhs")
Conservs_to_print = [\
lhrh(lhs=gri.gfaccess("rhs_gfs","StildeD0"),rhs=gho.Stilde_rhsD[0]),\
lhrh(lhs=gri.gfaccess("rhs_gfs","StildeD1"),rhs=gho.Stilde_rhsD[1]),\
lhrh(lhs=gri.gfaccess("rhs_gfs","StildeD2"),rhs=gho.Stilde_rhsD[2]),\
lhrh(lhs=gri.gfaccess("rhs_gfs","AD0"),rhs=gho.A_rhsD[0]),\
lhrh(lhs=gri.gfaccess("rhs_gfs","AD1"),rhs=gho.A_rhsD[1]),\
lhrh(lhs=gri.gfaccess("rhs_gfs","AD2"),rhs=gho.A_rhsD[2]),\
lhrh(lhs=gri.gfaccess("rhs_gfs","psi6Phi"),rhs=gho.psi6Phi_rhs),\
]
import time
print("Generating C code for GiRaFFE RHSs in "+par.parval_from_str("reference_metric::CoordSystem")+" coordinates.")
start = time.time()
desc="Evaluate quantities to FD for RHSs"
name="quantities_to_FD_for_rhs_eval"
outCfunction(
outfile = os.path.join(Ccodesdir,"Prereqs.h"), desc=desc, name=name,
params = """const int Nxx_plus_2NGHOSTS[3],const REAL dxx[3], REAL *xx[3],
const REAL *in_gfs, REAL *aux_gfs""",
body = fin.FD_outputC("returnstring",Prereqs_to_print,params="outCverbose=False"),
loopopts = "AllPoints,Enable_rfm_precompute")
desc="Calculate the metric determinant and inverse"
name="update_metric_det_inverse"
outCfunction(
outfile = os.path.join(Ccodesdir,"metric_quantities.h"), desc=desc, name=name,
params = """const int Nxx_plus_2NGHOSTS[3],const REAL dxx[3],REAL *xx[3],REAL *aux_gfs""",
body = fin.FD_outputC("returnstring",metric_quantities_to_print,params="outCverbose=False"),
loopopts = "AllPoints,Enable_rfm_precompute")
desc="Evaluate the RHSs"
name="rhs_eval"
outCfunction(
outfile = os.path.join(Ccodesdir,"Conservs.h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
REAL *restrict in_gfs, REAL *restrict aux_gfs""",
preloop = """REAL invdx0 = 1.0/dxx[0];
REAL invdx1 = 1.0/dxx[1];
REAL invdx2 = 1.0/dxx[2];""",
body = fin.FD_outputC("returnstring",Conservs_to_print,params="outCverbose=False"),
loopopts = "InteriorPoints,Enable_rfm_precompute",
postloop = """LOOP_REGION(0,Nxx_plus_2NGHOSTS[0],0,Nxx_plus_2NGHOSTS[1],0,Nxx_plus_2NGHOSTS[2]){
if (sqrt(sq_radial_coord(xx[0][i0],xx[1][i1],xx[2][i2])) < min_radius_inside_of_which_conserv_to_prims_FFE_and_FFE_evolution_is_DISABLED){
idx = IDX3(i0,i1,i2);
rhs_gfs[IDX4pt(STILDED0GF, idx)] = 0.0;
rhs_gfs[IDX4pt(STILDED1GF, idx)] = 0.0;
rhs_gfs[IDX4pt(STILDED2GF, idx)] = 0.0;
rhs_gfs[IDX4pt(AD0GF, idx)] = 0.0;
rhs_gfs[IDX4pt(AD1GF, idx)] = 0.0;
rhs_gfs[IDX4pt(AD2GF, idx)] = 0.0;
rhs_gfs[IDX4pt(PSI6PHIGF, idx)] = 0.0;
}
}""")
end = time.time()
# Step 5: Import the function to calculate u0 and write it to a file.
import u0_smallb_Poynting__Cartesian.u0_smallb_Poynting__Cartesian as u0etc
#u0etc.compute_u0_smallb_Poynting__Cartesian(gammaDD,betaU,alpha,ValenciavU,BU)
with open(os.path.join(Ccodesdir,"computeu0_Cfunction.h"), "w") as file:
file.write(u0etc.computeu0_Cfunction)
```
<a id='cparams_rfm_and_domainsize'></a>
## Step 1.g: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](#toc)\]
$$\label{cparams_rfm_and_domainsize}$$
```
# Step 3.d.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
# Step 3.d.ii: Set free_parameters.h
with open(os.path.join(Ccodesdir,"free_parameters.h"),"w") as file:
file.write("""
// Step P3: Set free parameters
// Step P3a: Free parameters for the numerical grid
// Cartesian coordinates parameters
params.xmin = -4.0;params.xmax=4.0;
params.ymin = -4.0;params.ymax=4.0;
params.zmin = -4.0;params.zmax=4.0;
/*params.ymin = -0.0125;params.ymax=0.0125;
params.zmin = -0.0125;params.zmax=0.0125;*/
// Step P3b: Free parameters for the spacetime evolution
params.B_p_aligned_rotator = 1.0e-5;
params.Omega_aligned_rotator = 0.2;
// Disable these when doing 1D tests!
params.min_radius_inside_of_which_conserv_to_prims_FFE_and_FFE_evolution_is_DISABLED = -1.0; // Must be equal! v
params.R_NS_aligned_rotator = -1.0; // Must be equal! ^
params.xi = 0.1;
params.diss_strength = 0.3;
params.GAMMA_SPEED_LIMIT = 2000.0;
params.current_sheet_null_v = 0; // Boolean: 1=true,0=false
// Step P3c: Free parameters defining a 1D wave
//const REAL mu_AW = -0.5; // The wave speed of the Alfven wave
#define mu_AW -0.5
params.lbound = -0.1*sqrt(1-mu_AW*mu_AW); // The left -most edge of the wave: divide by the
params.rbound = 0.1*sqrt(1-mu_AW*mu_AW); // The right-most edge of the wave: Lorentz Factor
// Time coordinate parameters
params.t_final = 2.0; /* Final time is set so that at t=t_final,
* data at the origin have not been corrupted
* by the approximate outer boundary condition */
params.CFL_FACTOR = 0.5; // Set the CFL Factor
""")
```
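The wave-edge parameters set in `free_parameters.h` above can be checked quickly: the $\pm 0.1$ edges of the unboosted wave are Lorentz-contracted by the wave speed `mu_AW`.

```python
import math

mu_AW  = -0.5                                  # Alfven wave speed, as in the cell above
gammaf = 1.0 / math.sqrt(1.0 - mu_AW**2)       # Lorentz factor of the wave
lbound = -0.1 * math.sqrt(1.0 - mu_AW**2)      # equivalently, -0.1 / gammaf
rbound =  0.1 * math.sqrt(1.0 - mu_AW**2)      # equivalently, +0.1 / gammaf
```

Multiplying by $\sqrt{1-\mu_{\rm AW}^2}$ and dividing by the Lorentz factor are the same operation, which is what the split comment in the C code is saying.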
## Step 1.h: Apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb)
```
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cmd.mkdir(os.path.join(Ccodesdir,"boundary_conditions/"))
cbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,"boundary_conditions/"))
```
## Step 1.i: Generate the A-to-B header files
Now, we will generate the header files used to calculate the magnetic field $B^i$ from the vector potential $A_i$. We can use the function `GiRaFFE_HO_A2B()` that we wrote for this.
```
gri.glb_gridfcs_list = []
import GiRaFFE_HO.GiRaFFE_HO_A2B as A2B
A2B.GiRaFFE_HO_A2B("GiRaFFE_standalone_Ccodes/A2B/")
# StildeD is not declared as a gridfunction within GiRaFFE_HO_A2B. While it was declared in GiRaFFEfood_HO,
# the gridfunction list has since been cleared to avoid conflicts; so, we re-declare it here.
StildeD = ixp.register_gridfunctions_for_single_rank1("EVOL","StildeD")
```
<a id='mainc'></a>
# Step 2: GiRaFFE_standalone.c: The Main C Code \[Back to [top](#toc)\]
$$\label{mainc}$$
```
# Part P0: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
with open(os.path.join(Ccodesdir,"NGHOSTS.h"), "w") as file:
file.write("// Part P0: Set the number of ghost zones, from NRPy+'s FD_CENTDERIVS_ORDER\n")
# We do not need upwinding in GiRaFFE
file.write("#define NGHOSTS "+str(int(par.parval_from_str("finite_difference::FD_CENTDERIVS_ORDER")/2+1))+"\n")
```
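The ghost-zone count written to `NGHOSTS.h` follows $\text{NGHOSTS} = \text{FD\_order}/2 + 1$, where the $+1$ is headroom NRPy+ reserves for upwinded derivatives (unused here, as the comment notes, but kept so the generated header matches NRPy+'s convention). A quick sketch of the formula:

```python
def nghosts(fd_order):
    """Ghost zones needed for a centered stencil of the given order,
    plus one extra point of upwinding headroom (NRPy+ convention)."""
    return fd_order // 2 + 1

# Ghost-zone counts for the common finite-differencing orders:
ghosts = {order: nghosts(order) for order in (2, 4, 6, 8)}
```

So at the FD order 2 chosen in Step 1, `NGHOSTS` is 2.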
<a id='import_headers'></a>
# Step 2.a: Import needed header files \[Back to [top](#toc)\]
$$\label{import_headers}$$
```
%%writefile $Ccodesdir/GiRaFFE_standalone.c
// Step P1: Import needed header files
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#include "string.h" // Needed for strncmp, etc.
#include "stdint.h" // Needed for Windows GCC 6.x compatibility
#ifndef M_PI
#define M_PI 3.141592653589793238462643383279502884L
#endif
#ifndef M_SQRT1_2
#define M_SQRT1_2 0.707106781186547524400844362104849039L
#endif
#include "time.h"
#include "NGHOSTS.h" // A NRPy+-generated file, which is set based on FD_CENTDERIVS_ORDER.
```
<a id='data_type'></a>
# Step 2.b: Set data type \[Back to [top](#toc)\]
$$\label{data_type}$$
```
%%writefile -a $Ccodesdir/GiRaFFE_standalone.c
// Step P2: Add needed #define's to set data type, the IDX4() macro, and the gridfunctions
// Step P2a: set REAL=double, so that all floating point numbers are stored to at least ~16 significant digits.
#define REAL double
```
<a id='free_params'></a>
# Step 2.c: Set free parameters \[Back to [top](#toc)\]
$$\label{free_params}$$
```
%%writefile -a $Ccodesdir/GiRaFFE_standalone.c
#include "declare_Cparameters_struct.h"
```
<a id='idx4'></a>
# Step 2.d: Declare the IDX4 macro \[Back to [top](#toc)\]
$$\label{idx4}$$
```
%%writefile -a $Ccodesdir/GiRaFFE_standalone.c
// Step P6: Declare the IDX4(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS[0] elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS[0]*Nxx_plus_2NGHOSTS[1] in memory, etc.
#define IDX4S(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )
#define IDX4ptS(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2) * (g) )
#define IDX3(i,j,k) ( (i) + Nxx_plus_2NGHOSTS[0] * ( (j) + Nxx_plus_2NGHOSTS[1] * (k) ) )
// Assuming idx = IDX3(i,j,k). Much faster if idx can be reused over and over:
// To be deprecated soon:
#define IDX4(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS[0] * ( (j) + Nxx_plus_2NGHOSTS[1] * ( (k) + Nxx_plus_2NGHOSTS[2] * (g) ) ) )
#define IDX4pt(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS[0]*Nxx_plus_2NGHOSTS[1]*Nxx_plus_2NGHOSTS[2]) * (g) )
```
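The flattening performed by `IDX4S` can be mirrored in Python to check that it maps each `(gf, i, j, k)` tuple to a unique slot, with `i` varying fastest in memory (the grid sizes below are illustrative stand-ins for `Nxx_plus_2NGHOSTS0/1/2`):

```python
N0, N1, N2 = 4, 5, 6          # padded grid sizes (illustrative)
NUM_GFS = 3                   # number of gridfunctions (illustrative)

def IDX4S(g, i, j, k):
    """Python mirror of the C macro: i fastest, then j, then k, then g."""
    return i + N0 * (j + N1 * (k + N2 * g))

# The flattening should be a bijection onto 0 .. N0*N1*N2*NUM_GFS - 1:
flat = {IDX4S(g, i, j, k)
        for g in range(NUM_GFS)
        for k in range(N2) for j in range(N1) for i in range(N0)}
```

Keeping `i` innermost means sweeping `i0` in the inner loop touches consecutive memory, which is why the loop macros below order their loops `i2`, `i1`, `i0`.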
<a id='gridfuncs'></a>
# Step 2.e: Define gridfunctions \[Back to [top](#toc)\]
$$\label{gridfuncs}$$
```
%%writefile -a $Ccodesdir/GiRaFFE_standalone.c
// Step P7: Set #define's for GRFFE gridfunctions. C code generated above
#include "boundary_conditions/gridfunction_defines.h"
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \
for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)
#include "boundary_conditions/EigenCoord_xx_to_Cart.h"
```
<a id='bcs'></a>
# Step 2.f: Boundary Conditions, the A-to-B driver, and the conservative-to-primitive solver \[Back to [top](#toc)\]
$$\label{bcs}$$
```
%%writefile -a $Ccodesdir/GiRaFFE_standalone.c
// Step P8: Include basic functions needed to impose boundary conditions.
//#include "../CurviBoundaryConditions/curvilinear_parity_and_outer_boundary_conditions.h"
#include "boundary_conditions/GiRaFFE_boundary_conditions.h"
// Step P8c: Import C files for the A-to-B driver and the conservative-to-primitive solver
#include "A2B/driver_AtoB.c"
#include "driver_conserv_to_prims_FFE.C"
```
<a id='timestep'></a>
# Step 2.g: Find the CFL-constrained timestep \[Back to [top](#toc)\]
$$\label{timestep}$$
```
%%writefile -a $Ccodesdir/GiRaFFE_standalone.c
// Step P9: Find the CFL-constrained timestep
REAL find_timestep(const int Nxx_plus_2NGHOSTS[3],const REAL dxx[3],REAL *xx[3], const REAL CFL_FACTOR) {
const REAL dxx0 = dxx[0], dxx1 = dxx[1], dxx2 = dxx[2];
REAL dsmin = 1e38; // Start with a crazy high value... close to the largest number in single precision.
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS[0]-NGHOSTS, NGHOSTS,Nxx_plus_2NGHOSTS[1]-NGHOSTS, NGHOSTS,Nxx_plus_2NGHOSTS[2]-NGHOSTS) {
const REAL xx0 = xx[0][i0], xx1 = xx[1][i1], xx2 = xx[2][i2];
REAL ds_dirn0, ds_dirn1, ds_dirn2;
#include "ds_dirn.h"
//#define MIN(A, B) ( ((A) < (B)) ? (A) : (B) ) // Provided by driver_conserv_to_prims_FFE.C
// Set dsmin = MIN(dsmin, ds_dirn0, ds_dirn1, ds_dirn2);
dsmin = MIN(dsmin,MIN(ds_dirn0,MIN(ds_dirn1,ds_dirn2)));
}
return dsmin*CFL_FACTOR;
}
```
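For a uniform Cartesian grid the proper distances `ds_dirn0/1/2` reduce to the grid spacings themselves, so the CFL logic above simplifies considerably (a sketch with illustrative spacings):

```python
def find_timestep(dxx, CFL_FACTOR):
    """Cartesian special case of the C function above: the proper
    distance in each direction is just dxx[i] at every point, so the
    grid-wide minimum is simply the smallest spacing."""
    dsmin = min(dxx)
    return dsmin * CFL_FACTOR

dt = find_timestep([0.1, 0.2, 0.4], 0.5)
```

In curvilinear coordinates `ds_dirn.h` would make the proper distances position-dependent, which is why the C version loops over the whole interior.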
<a id='initial_data_c'></a>
# Step 2.h: Declare the function for the exact solution \[Back to [top](#toc)\]
$$\label{initial_data_c}$$
```
%%writefile -a $Ccodesdir/GiRaFFE_standalone.c
// Step P10: Declare the function for the exact solution. time==0 corresponds to the initial data.
void initial_data(const int Nxx_plus_2NGHOSTS[3],REAL *xx[3], REAL *out_gfs, REAL *aux_gfs) {
#pragma omp parallel for
LOOP_REGION(0,Nxx_plus_2NGHOSTS[0], 0,Nxx_plus_2NGHOSTS[1], 0,Nxx_plus_2NGHOSTS[2]) {
const int idx = IDX3(i0,i1,i2);
aux_gfs[IDX4pt(GAMMADD00GF, idx)] = 1.0;
aux_gfs[IDX4pt(GAMMADD01GF, idx)] = 0.0;
aux_gfs[IDX4pt(GAMMADD02GF, idx)] = 0.0;
aux_gfs[IDX4pt(GAMMADD11GF, idx)] = 1.0;
aux_gfs[IDX4pt(GAMMADD12GF, idx)] = 0.0;
aux_gfs[IDX4pt(GAMMADD22GF, idx)] = 1.0;
aux_gfs[IDX4pt(BETAU0GF, idx)] = 0.0;
aux_gfs[IDX4pt(BETAU1GF, idx)] = 0.0;
aux_gfs[IDX4pt(BETAU2GF, idx)] = 0.0;
aux_gfs[IDX4pt(ALPHAGF, idx)] = 1.0;
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
if(xx0<=lbound) {
#include "GiRaFFEfood_A_v_1D_tests_left.h"
}
else if (xx0<rbound) {
#include "GiRaFFEfood_A_v_1D_tests_center.h"
}
else {
#include "GiRaFFEfood_A_v_1D_tests_right.h"
}
out_gfs[IDX4pt(PSI6PHIGF, idx)] = 0.0;
}
}
void initial_Stilde_from_ID(const int Nxx_plus_2NGHOSTS[3],REAL *xx[3],const REAL *aux_gfs, REAL *out_gfs) {
LOOP_REGION(0,Nxx_plus_2NGHOSTS[0],0,Nxx_plus_2NGHOSTS[1],0,Nxx_plus_2NGHOSTS[2]){
#include "GiRaFFEfood_HO_Stilde.h"
}
}
```
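The piecewise selection in `initial_data()` can be summarized in Python (the bounds here are the unboosted defaults registered earlier; the actual run uses the Lorentz-contracted values set in `free_parameters.h`):

```python
lbound, rbound = -0.1, 0.1     # unboosted wave edges (illustrative)

def region(xx0):
    """Pick which initial-data branch a point at x = xx0 falls into."""
    if xx0 <= lbound:
        return "left"
    elif xx0 < rbound:
        return "center"
    else:
        return "right"

labels = [region(x) for x in (-0.5, 0.0, 0.5)]
```

Note the asymmetric comparisons (`<=` on the left edge, `<` on the right) mirror the C code, so every point falls into exactly one branch.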
<a id='rhsC'></a>
# Step 2.i: Declare the functions to evaluate the GRFFE RHSs \[Back to [top](#toc)\]
$$\label{rhsC}$$
```
%%writefile -a $Ccodesdir/GiRaFFE_standalone.c
// Step P11: Declare the functions to evaluate the GRFFE RHSs
// Step P11a: Create the function to calculate u4upperZero:
void calc_u0(const int Nxx_plus_2NGHOSTS[3],REAL *aux_gfs)
{
int idx;
LOOP_REGION(0,Nxx_plus_2NGHOSTS[0],0,Nxx_plus_2NGHOSTS[1],0,Nxx_plus_2NGHOSTS[2]){
idx = IDX3(i0,i1,i2);
REAL u0;
REAL ValenciavU0 = aux_gfs[IDX4pt(VALENCIAVU0GF,idx)];
REAL ValenciavU1 = aux_gfs[IDX4pt(VALENCIAVU1GF,idx)];
REAL ValenciavU2 = aux_gfs[IDX4pt(VALENCIAVU2GF,idx)];
REAL alpha = aux_gfs[IDX4pt(ALPHAGF,idx)];
REAL gammaDD00 = aux_gfs[IDX4pt(GAMMADD00GF,idx)];
REAL gammaDD01 = aux_gfs[IDX4pt(GAMMADD01GF,idx)];
REAL gammaDD02 = aux_gfs[IDX4pt(GAMMADD02GF,idx)];
REAL gammaDD11 = aux_gfs[IDX4pt(GAMMADD11GF,idx)];
REAL gammaDD12 = aux_gfs[IDX4pt(GAMMADD12GF,idx)];
REAL gammaDD22 = aux_gfs[IDX4pt(GAMMADD22GF,idx)];
#include "computeu0_Cfunction.h"
aux_gfs[IDX4pt(U4UPPERZEROGF,idx)] = u0;
aux_gfs[IDX4pt(VALENCIAVU0GF,idx)] = ValenciavU0;
aux_gfs[IDX4pt(VALENCIAVU1GF,idx)] = ValenciavU1;
aux_gfs[IDX4pt(VALENCIAVU2GF,idx)] = ValenciavU2;
}
}
// Step P11b: Set the quantities to be differentiated by finite difference for the RHSs--ALWAYS run immediately
// before rhs_eval()
#include "Prereqs.h"
// While this code is generally Cartesian, we will need a radial coordinate for the evolution:
REAL sq_radial_coord(const REAL x,const REAL y,const REAL z) { return x*x+y*y+z*z; }
// Step P11c: Set the RHSs themselves.
#include "Conservs.h"
```
<a id='main'></a>
# Step 2.j: The `main()` function \[Back to [top](#toc)\]
$$\label{main}$$
```
%%writefile -a $Ccodesdir/GiRaFFE_standalone.c
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up GRFFE initial data
// Step 2: Evolve the GRFFE initial data forward in time using Method of Lines with RK4 algorithm,
// applying quadratic extrapolation outer boundary conditions.
// Step 3: Output relative error between numerical and exact solution.
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
// Step 0a: Read command-line input, error out if nonconformant
if((argc != 4 && argc != 5) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) {
fprintf(stderr,"Error: Expected three command-line arguments: ./GiRaFFE_standalone_1D Nx0 Nx1 Nx2,\n");
fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions; a fourth, optional argument sets CFL_FACTOR.\n");
fprintf(stderr,"Nx[] MUST BE no smaller than NGHOSTS (= %d)\n",NGHOSTS);
exit(1);
}
if(argc == 5) {
CFL_FACTOR = strtod(argv[4],NULL);
if(CFL_FACTOR > 0.5 && atoi(argv[3])!=2) {
fprintf(stderr,"WARNING: CFL_FACTOR was set to %e, which is > 0.5.\n",CFL_FACTOR);
fprintf(stderr," This will generally only be stable if the simulation is purely axisymmetric\n");
fprintf(stderr," However, Nx2 was set to %d>2, which implies a non-axisymmetric simulation\n",atoi(argv[3]));
}
}
// Step 0b: Set up numerical grid structure, first in space...
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) {
fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
const int Nxx_plus_2NGHOSTS[3] = { Nxx[0]+2*NGHOSTS, Nxx[1]+2*NGHOSTS, Nxx[2]+2*NGHOSTS };
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS[0]*Nxx_plus_2NGHOSTS[1]*Nxx_plus_2NGHOSTS[2];
#include "xxminmax.h"
// Step 0c: Allocate memory for gridfunctions
REAL *aux_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUX_GFS * Nxx_plus_2NGHOSTS_tot);
#include "../MoLtimestepping/RK_Allocate_Memory.h"
for(int i=0;i<NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot;i++) {
y_n_gfs[i] = 1.0/0.0;
//k_even_gfs[i] = 1.0/0.0;
//k_odd_gfs[i] = 1.0/0.0;
}
for(int i=0;i<NUM_AUX_GFS * Nxx_plus_2NGHOSTS_tot;i++) {
aux_gfs[i] = 1.0/0.0;
}
REAL *evol_gfs_exact = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot);
REAL *aux_gfs_exact = (REAL *)malloc(sizeof(REAL) * NUM_AUX_GFS * Nxx_plus_2NGHOSTS_tot);
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
#include "set_Cparameters-nopointer.h"
// Step 0d: Set up space and time coordinates
// Step 0d.i: Set \Delta x^i on uniform grids.
REAL dxx[3];
for(int i=0;i<3;i++) dxx[i] = (xxmax[i] - xxmin[i]) / ((REAL)Nxx[i]);
for(int i=0;i<3;i++) printf("dxx[%d] = %.15e\n",i,dxx[i]);
// Step 0d.ii: Set up uniform coordinate grids
REAL *xx[3];
for(int i=0;i<3;i++) {
xx[i] = (REAL *)malloc(sizeof(REAL)*Nxx_plus_2NGHOSTS[i]);
for(int j=0;j<Nxx_plus_2NGHOSTS[i];j++) {
xx[i][j] = xxmin[i] + ((REAL)(j-NGHOSTS) + (1.0/2.0))*dxx[i]; // Cell-centered grid.
}
}
// Step 0d.iii: Set timestep based on smallest proper distance between gridpoints and CFL factor
REAL dt = find_timestep(Nxx_plus_2NGHOSTS, dxx,xx, CFL_FACTOR);
printf("# Timestep set to = %e\n",(double)dt);
int N_final = (int)(t_final / dt + 0.5); // The number of iterations in time.
//Add 0.5 to account for C rounding down integers.
// Step 1: Set up initial data to an exact solution at time=0:
// Step 1a: Set up the exact initial data:
initial_data(Nxx_plus_2NGHOSTS, xx, y_n_gfs, aux_gfs);
// Step 1b: Run the initial A-to-B driver:
driver_A_to_B(Nxx, Nxx_plus_2NGHOSTS, dxx, y_n_gfs, aux_gfs);
// Step 1c: Solve for StildeD from BU and ValenciavU
initial_Stilde_from_ID(Nxx_plus_2NGHOSTS, xx, aux_gfs, y_n_gfs);
// Step 1d: Apply boundary conditions, as initial data
// are sometimes ill-defined in ghost zones.
// E.g., spherical initial data might not be
// properly defined at points where r=-1.
// Step 1e: Run the conservative-to-primitive solver:
GiRaFFE_HO_conserv_to_prims_FFE(Nxx, Nxx_plus_2NGHOSTS, dxx,xx, y_n_gfs, aux_gfs);
//apply_bcs(Nxx, Nxx_plus_2NGHOSTS, bc_gz_map,bc_parity_conditions,NUM_EVOL_GFS,evol_gf_parity, y_n_gfs);
apply_bcs(Nxx, Nxx_plus_2NGHOSTS, y_n_gfs, aux_gfs);
// Rerun AtoB for consistency:
driver_A_to_B(Nxx, Nxx_plus_2NGHOSTS, dxx, y_n_gfs, aux_gfs);
// Step 3: Start the timer, for keeping track of how fast the simulation is progressing.
struct timespec start, end;
clock_gettime(CLOCK_REALTIME, &start);
// Step 4: Integrate the initial data forward in time using the Method of Lines and RK4
for(int n=0;n<=N_final;n++) { // Main loop to progress forward in time.
/* Step 5: Output 2D data file, for visualization. Do this first to get initial data. */
// For convergence testing, we'll shift the grid x -> x - mu_AW*t and output initial data again, giving the exact solution.
LOOP_REGION(0,Nxx_plus_2NGHOSTS[0],0,1,0,1) {
xx[0][i0] += -mu_AW*(n)*dt;
}
// Recalculate the initial data on the shifted grid, using the same process as before for consistency.
initial_data(Nxx_plus_2NGHOSTS, xx, evol_gfs_exact, aux_gfs_exact);
driver_A_to_B(Nxx, Nxx_plus_2NGHOSTS, dxx, evol_gfs_exact, aux_gfs_exact);
initial_Stilde_from_ID(Nxx_plus_2NGHOSTS, xx, aux_gfs_exact, evol_gfs_exact);
GiRaFFE_HO_conserv_to_prims_FFE(Nxx, Nxx_plus_2NGHOSTS, dxx,xx, evol_gfs_exact, aux_gfs_exact);
driver_A_to_B(Nxx, Nxx_plus_2NGHOSTS, dxx, evol_gfs_exact, aux_gfs_exact);
apply_bcs(Nxx, Nxx_plus_2NGHOSTS, evol_gfs_exact, aux_gfs_exact);
// And now, we'll set the grid back to rights.
LOOP_REGION(0,Nxx_plus_2NGHOSTS[0],0,1,0,1) {
xx[0][i0] -= -mu_AW*(n)*dt;
}
if(n%10 == 0) {
printf("Writing output...\n");
// const int i1mid = Nxx_plus_2NGHOSTS[1]/2;
char filename[100];
sprintf(filename,"out%d-%08d_numer.txt",Nxx[0],n);
FILE *out2D_numer = fopen(filename, "w");
//LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS[0]-NGHOSTS, NGHOSTS,Nxx_plus_2NGHOSTS[1]-NGHOSTS, NGHOSTS,Nxx_plus_2NGHOSTS[2]-NGHOSTS) {
//LOOP_REGION(0,Nxx_plus_2NGHOSTS[0], Nxx_plus_2NGHOSTS[1]/2,Nxx_plus_2NGHOSTS[1]/2+1,Nxx_plus_2NGHOSTS[2]/2,Nxx_plus_2NGHOSTS[2]/2+1) {
LOOP_REGION(0,Nxx_plus_2NGHOSTS[0],Nxx_plus_2NGHOSTS[1]/2,Nxx_plus_2NGHOSTS[1]/2+1,Nxx_plus_2NGHOSTS[2]/2,Nxx_plus_2NGHOSTS[2]/2+1) {
const int idx = IDX3(i0,i1,i2);
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
#include "xx_to_Cart.h"
fprintf(out2D_numer,"%.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e\n",
xCart[0],xCart[1],xCart[2],
aux_gfs[IDX4pt(BU0GF,idx)],aux_gfs[IDX4pt(BU1GF,idx)],aux_gfs[IDX4pt(BU2GF,idx)],
y_n_gfs[IDX4pt(AD0GF,idx)],y_n_gfs[IDX4pt(AD1GF,idx)],y_n_gfs[IDX4pt(AD2GF,idx)],
y_n_gfs[IDX4pt(STILDED0GF,idx)],y_n_gfs[IDX4pt(STILDED1GF,idx)],y_n_gfs[IDX4pt(STILDED2GF,idx)],
aux_gfs[IDX4pt(VALENCIAVU0GF,idx)],aux_gfs[IDX4pt(VALENCIAVU1GF,idx)],aux_gfs[IDX4pt(VALENCIAVU2GF,idx)]);
}
fclose(out2D_numer);
// Now rerun the same output code we used in the main simulation.
printf("Writing EXACT output...\n");
sprintf(filename,"out%d-%08d_exact.txt",Nxx[0],n);
FILE *out2D_exact = fopen(filename, "w");
//LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS[0]-NGHOSTS, NGHOSTS,Nxx_plus_2NGHOSTS[1]-NGHOSTS, NGHOSTS,Nxx_plus_2NGHOSTS[2]-NGHOSTS) {
LOOP_REGION(0,Nxx_plus_2NGHOSTS[0], Nxx_plus_2NGHOSTS[1]/2,Nxx_plus_2NGHOSTS[1]/2+1,Nxx_plus_2NGHOSTS[2]/2,Nxx_plus_2NGHOSTS[2]/2+1) {
const int idx = IDX3(i0,i1,i2);
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL xCart[3];
#include "xx_to_Cart.h"
fprintf(out2D_exact,"%.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e\n",
xCart[0],xCart[1],xCart[2],
aux_gfs_exact[IDX4pt(BU0GF,idx)],aux_gfs_exact[IDX4pt(BU1GF,idx)],aux_gfs_exact[IDX4pt(BU2GF,idx)],
evol_gfs_exact[IDX4pt(AD0GF,idx)],evol_gfs_exact[IDX4pt(AD1GF,idx)],evol_gfs_exact[IDX4pt(AD2GF,idx)],
evol_gfs_exact[IDX4pt(STILDED0GF,idx)],evol_gfs_exact[IDX4pt(STILDED1GF,idx)],evol_gfs_exact[IDX4pt(STILDED2GF,idx)],
aux_gfs_exact[IDX4pt(VALENCIAVU0GF,idx)],aux_gfs_exact[IDX4pt(VALENCIAVU1GF,idx)],aux_gfs_exact[IDX4pt(VALENCIAVU2GF,idx)]);
}
fclose(out2D_exact);
}
// Step 6: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
#include "../MoLtimestepping/RK_MoL.h"
for(int gf=0;gf<NUM_EVOL_GFS;gf++) {
LOOP_REGION(NGHOSTS,Nxx_plus_2NGHOSTS[0]-NGHOSTS,NGHOSTS,Nxx_plus_2NGHOSTS[1]-NGHOSTS,NGHOSTS,Nxx_plus_2NGHOSTS[2]-NGHOSTS){
if(isnan(y_n_gfs[IDX4(gf,i0,i1,i2)])) {
printf("ERROR, FOUND A NAN ON GF %d AT POINT %d %d %d\n",gf,i0,i1,i2);
exit(1);
}
}
}
// This function will now write the Exact solution for StildeD to the boundaries.
//apply_bcs_EXACT_StildeD(Nxx, Nxx_plus_2NGHOSTS, xx,y_n_gfs,evol_gfs_exact);
// Progress indicator printing to stdout
// Measure average time per iteration
clock_gettime(CLOCK_REALTIME, &end);
const long long unsigned int time_in_ns = 1000000000L * (end.tv_sec - start.tv_sec) + end.tv_nsec - start.tv_nsec;
const REAL s_per_iteration_avg = ((REAL)time_in_ns / (REAL)n) / 1.0e9;
const int iterations_remaining = N_final - n;
const REAL time_remaining_in_mins = s_per_iteration_avg * (REAL)iterations_remaining / 60.0;
const REAL num_RHS_pt_evals = (REAL)(Nxx[0]*Nxx[1]*Nxx[2]) * 4.0 * (REAL)n; // 4 RHS evals per gridpoint for RK4
const REAL RHS_pt_evals_per_sec = num_RHS_pt_evals / ((REAL)time_in_ns / 1.0e9);
// Progress indicator printing to stderr
fprintf(stderr,"%c[2K", 27); // Clear the line
fprintf(stderr,"It: %d t=%.2f | %.1f%%; ETA %.0f s | t/h %.2f | gp/s %.2e\r", // \r is carriage return, move cursor to the beginning of the line
n, n * (double)dt, (double)(100.0 * (REAL)n / (REAL)N_final),
(double)time_remaining_in_mins*60, (double)(dt * 3600.0 / s_per_iteration_avg), (double)RHS_pt_evals_per_sec);
fflush(stderr); // Flush the stderr buffer
} // End main loop to progress forward in time.
fprintf(stderr,"\n"); // Clear the line.
/* Step 6: Free all allocated memory */
#include "../MoLtimestepping/RK_Free_Memory.h"
free(aux_gfs);
free(aux_gfs_exact);
free(evol_gfs_exact);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
```
Now that the C code is put together, we will compile and run the code.
```
import cmdline_helper as cmd
print("Now compiling, should take ~2 seconds...\n")
start = time.time()
cmd.C_compile(os.path.join(Ccodesdir,"GiRaFFE_standalone.c"), "GiRaFFE_standalone_1D")
# Switching back so I can use a nan checker
#!gcc -g -O2 -fopenmp GiRaFFE_standalone/GiRaFFE_standalone.c -o GiRaFFE_standalone_1D -lm
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n\n")
cmd.delete_existing_files("out128*.txt")
cmd.delete_existing_files("out2560*.txt")
print("Now running at low resolution. Should take ~20 seconds...\n")
start = time.time()
#cmd.Execute("GiRaFFE_standalone_1D", "1280 2 2 0.5")
# Switching back to see more output
!taskset -c 0,1 ./GiRaFFE_standalone_1D 1280 2 2 0.5
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n\n")
print("Now running at medium resolution. Should take ~300 seconds...\n")
start = time.time()
#cmd.Execute("GiRaFFE_standalone_1D", "2560 8 8 0.5")
end = time.time()
print("Finished in "+str(end-start)+" seconds.\n\n")
```
<a id='convergence'></a>
# Step 3: Code validation: Verify that relative error in numerical solution converges to zero at the expected order \[Back to [top](#toc)\]
$$\label{convergence}$$
Now, we will load the data generated by the simulation and plot it in order to test for convergence.
```
import numpy as np
import matplotlib.pyplot as plt
Data_numer = np.loadtxt("out1280-00000010_numer.txt")
#Data_oldbc = np.loadtxt("oldbc_out1280-00000030_numer.txt")
#Data_num_2 = np.loadtxt("out2560-00000600_numer.txt")
Data_exact = np.loadtxt("out1280-00000010_exact.txt")
#Data_exa_2 = np.loadtxt("out2560-00000600_exact.txt")
#path = "/home/penelson/OldCactus/Cactus/exe/ABE-GiRaFFEfood_1D_AlfvenWave/"
#Data_oldG = np.loadtxt(path + "giraffe-grmhd_primitives_bi.x.asc")
#n=32
#Data_oldG_atT = Data_oldG[1285*n:1285*(n+1)-1,:]
#predicted_order = 4.0
column = 5
plt.figure()
#plt.plot(Data_numer[:,0],Data_numer[:,column]-Data_oldbc[:,column],'.')
#plt.plot(Data_num_2[:,0],(Data_num_2[:,column]-Data_exa_2[:,column])*(2**predicted_order),'.')
plt.plot(Data_numer[:,0],Data_numer[:,column],'.')
plt.plot(Data_exact[:,0],Data_exact[:,column],'.')
#plt.plot(Data_oldG_atT[1:-1,9],)
#plt.plot(Data_numer[1:-2,1],(Data_numer[0:-3,column]-Data_numer[2:-1,column])/3.125e-3,'.-')
#plt.plot(Data_numer[1:-2,1],(Data_oldbc[0:-3,column]-Data_oldbc[2:-1,column])/3.125e-3,'.')
#plt.plot(Data_numer[0,0],Data_numer[0,column],'o')
#plt.xlim(-0.15,0.15)
#plt.ylim(-0.2e-10,0.2e-10)
plt.xlabel("y")
plt.ylabel("BU2")
plt.show()
```
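Beyond the visual comparison, the convergence order can be checked quantitatively from the errors at two resolutions. The following is a sketch; the factor-of-two refinement ratio is an assumption based on the low- and medium-resolution runs above.

```python
import math

def rms_error(numer, exact):
    """Root-mean-square difference between numerical and exact solution samples."""
    return math.sqrt(sum((a - b)**2 for a, b in zip(numer, exact)) / len(numer))

def observed_order(err_coarse, err_fine, refinement_factor=2.0):
    """Convergence order implied by errors at two resolutions differing by refinement_factor."""
    return math.log(err_coarse / err_fine) / math.log(refinement_factor)
```

For a fourth-order scheme, halving the grid spacing should shrink the error by roughly $2^4 = 16$, so `observed_order` applied to the coarse- and fine-grid errors should return a value near 4.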
This code will create an animation of the wave over time to hopefully show us where things go wrong.
```
import matplotlib.pyplot as plt
from matplotlib.pyplot import savefig
from IPython.display import HTML
import matplotlib.image as mgimg
import glob
import sys
from matplotlib import animation
globby = glob.glob('out1280-00*.txt')
file_list = sorted(globby)
number_of_files = len(file_list)//2  # files come in (numer, exact) pairs
for timestep in range(number_of_files):
fig = plt.figure()
numer_filename = file_list[2*timestep]
exact_filename = file_list[2*timestep+1]
Numer = np.loadtxt(numer_filename)
Exact = np.loadtxt(exact_filename)
plt.title("Alfven Wave")
plt.xlabel("x")
plt.ylabel("BU2")
plt.xlim(-4.0,4.0)
plt.ylim(1.14,1.57)
plt.plot(Numer[:,0],Numer[:,5])
plt.plot(Exact[:,0],Exact[:,5],'.')
savefig(numer_filename+".png",dpi=150)
plt.close(fig)
sys.stdout.write("%c[2K" % 27)
sys.stdout.write("Processing file "+numer_filename+"\r")
sys.stdout.flush()
## VISUALIZATION ANIMATION, PART 2: Combine PNGs to generate movie ##
# https://stackoverflow.com/questions/14908576/how-to-remove-frame-from-matplotlib-pyplot-figure-vs-matplotlib-figure-frame
# https://stackoverflow.com/questions/23176161/animating-pngs-in-matplotlib-using-artistanimation
!rm -f GiRaFFE_HO-1D_tests.mp4
fig = plt.figure(frameon=False)
ax = fig.add_axes([0, 0, 1, 1])
ax.axis('off')
myimages = []
for i in range(len(file_list)//2):
img = mgimg.imread(file_list[2*i]+".png")
imgplot = plt.imshow(img)
myimages.append([imgplot])
ani = animation.ArtistAnimation(fig, myimages, interval=100, repeat_delay=1000)
plt.close()
ani.save('GiRaFFE_HO-1D_tests.mp4', fps=5,dpi=150)
%%HTML
<video width="480" height="360" controls>
<source src="GiRaFFE_HO-1D_tests.mp4" type="video/mp4">
</video>
```
<a id='latex_pdf_output'></a>
# Step 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-Start_to_Finish-GiRaFFE_HO-1D_tests.pdf](Tutorial-Start_to_Finish-GiRaFFE_HO-1D_tests.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-Start_to_Finish-GiRaFFE_HO-1D_tests.ipynb
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-GiRaFFE_HO-1D_tests.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-GiRaFFE_HO-1D_tests.tex
!pdflatex -interaction=batchmode Tutorial-Start_to_Finish-GiRaFFE_HO-1D_tests.tex
!rm -f Tut*.out Tut*.aux Tut*.log
```
# TensorFlow script mode training and serving
Script mode is a training script format for TensorFlow that lets you execute any TensorFlow training script in SageMaker with minimal modification. The [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk) handles transferring your script to a SageMaker training instance. On the training instance, SageMaker's native TensorFlow support sets up training-related environment variables and executes your training script. In this tutorial, we use the SageMaker Python SDK to launch a training job and deploy the trained model.
Script mode supports training with a Python script, a Python module, or a shell script. In this example, we use a Python script to train a classification model on the [MNIST dataset](http://yann.lecun.com/exdb/mnist/), and show how easily you can train a model on SageMaker using TensorFlow 1.x and TensorFlow 2.x scripts with the SageMaker Python SDK. In addition, this notebook demonstrates how to perform real-time inference with the [SageMaker TensorFlow Serving container](https://github.com/aws/sagemaker-tensorflow-serving-container). The TensorFlow Serving container is the default inference method for script mode. For full documentation on the TensorFlow Serving container, please visit [here](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst).
# Set up the environment
Let's start by setting up the environment:
```
import os
import sagemaker
from sagemaker import get_execution_role
sagemaker_session = sagemaker.Session()
role = get_execution_role()
region = sagemaker_session.boto_session.region_name
```
## Training Data
The MNIST dataset has been loaded to the public S3 buckets ``sagemaker-sample-data-<REGION>`` under the prefix ``tensorflow/mnist``. There are four ``.npy`` files under this prefix:
* ``train_data.npy``
* ``eval_data.npy``
* ``train_labels.npy``
* ``eval_labels.npy``
```
training_data_uri = 's3://sagemaker-sample-data-{}/tensorflow/mnist'.format(region)
```
# Construct a script for distributed training
This tutorial's training script was adapted from TensorFlow's official [CNN MNIST example](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/layers/cnn_mnist.py). We have modified it to handle the ``model_dir`` parameter passed in by SageMaker. This is an S3 path which can be used for data sharing during distributed training and checkpointing and/or model persistence. We have also added an argument-parsing function to handle processing training-related variables.
At the end of the training job we have added a step to export the trained model to the path stored in the environment variable ``SM_MODEL_DIR``, which always points to ``/opt/ml/model``. This is critical because SageMaker uploads all the model artifacts in this folder to S3 at end of training.
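The argument-handling pattern described above can be sketched as follows. This is not the tutorial's actual `mnist.py`; the flag names are illustrative, but `SM_MODEL_DIR` and `SM_CHANNEL_TRAINING` are the environment variables SageMaker sets inside the training container.

```python
import argparse
import os

def parse_args(argv=None):
    parser = argparse.ArgumentParser()
    # SageMaker passes estimator hyperparameters and model_dir as script arguments.
    parser.add_argument('--model_dir', type=str, default=None)
    # SM_MODEL_DIR always points to /opt/ml/model inside the training container;
    # artifacts saved there are uploaded to S3 when training ends.
    parser.add_argument('--sm-model-dir', type=str,
                        default=os.environ.get('SM_MODEL_DIR', '/opt/ml/model'))
    # The default 'training' channel created by fit() is exposed via this variable.
    parser.add_argument('--train', type=str,
                        default=os.environ.get('SM_CHANNEL_TRAINING', ''))
    return parser.parse_known_args(argv)
```

Using `parse_known_args` rather than `parse_args` keeps the script from crashing on extra arguments SageMaker may append.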
Here is the entire script:
```
!pygmentize 'mnist.py'
# TensorFlow 2.1 script
!pygmentize 'mnist-2.py'
```
# Create a training job using the `TensorFlow` estimator
The `sagemaker.tensorflow.TensorFlow` estimator handles locating the script mode container, uploading your script to a S3 location and creating a SageMaker training job. Let's call out a couple important parameters here:
* `py_version` is set to `'py3'` to indicate that we are using script mode since legacy mode supports only Python 2. Though Python 2 will be deprecated soon, you can use script mode with Python 2 by setting `py_version` to `'py2'` and `script_mode` to `True`.
* `distributions` is used to configure the distributed training setup. It's required only if you are doing distributed training either across a cluster of instances or across multiple GPUs. Here we are using parameter servers as the distributed training schema. SageMaker training jobs run on homogeneous clusters. To make parameter server more performant in the SageMaker setup, we run a parameter server on every instance in the cluster, so there is no need to specify the number of parameter servers to launch. Script mode also supports distributed training with [Horovod](https://github.com/horovod/horovod). You can find the full documentation on how to configure `distributions` [here](https://github.com/aws/sagemaker-python-sdk/tree/master/src/sagemaker/tensorflow#distributed-training).
```
from sagemaker.tensorflow import TensorFlow
mnist_estimator = TensorFlow(entry_point='mnist.py',
role=role,
train_instance_count=2,
train_instance_type='ml.p3.16xlarge',
framework_version='1.15.2',
py_version='py3',
distributions={'parameter_server': {'enabled': True}})
```
You can also initiate an estimator to train with a TensorFlow 2.1 script. The only things that you will need to change are the script name and ``framework_version``.
```
mnist_estimator2 = TensorFlow(entry_point='mnist-2.py',
role=role,
train_instance_count=2,
train_instance_type='ml.p3.16xlarge',
framework_version='2.1.0',
py_version='py3',
distributions={'parameter_server': {'enabled': True}})
```
## Calling ``fit``
To start a training job, we call `estimator.fit(training_data_uri)`.
An S3 location is used here as the input. `fit` creates a default channel named `'training'`, which points to this S3 location. In the training script we can then access the training data from the location stored in `SM_CHANNEL_TRAINING`. `fit` accepts a couple other types of input as well. See the API doc [here](https://sagemaker.readthedocs.io/en/stable/estimators.html#sagemaker.estimator.EstimatorBase.fit) for details.
When training starts, the TensorFlow container executes mnist.py, passing `hyperparameters` and `model_dir` from the estimator as script arguments. Because we didn't define either in this example, no hyperparameters are passed, and `model_dir` defaults to `s3://<DEFAULT_BUCKET>/<TRAINING_JOB_NAME>`, so the script execution is as follows:
```bash
python mnist.py --model_dir s3://<DEFAULT_BUCKET>/<TRAINING_JOB_NAME>
```
When training is complete, the training job will upload the saved model for TensorFlow serving.
```
mnist_estimator.fit(training_data_uri)
```
Call ``fit`` to train a model with the TensorFlow 2.1 script.
```
mnist_estimator2.fit(training_data_uri)
```
# Deploy the trained model to an endpoint
The `deploy()` method creates a SageMaker model, which is then deployed to an endpoint to serve prediction requests in real time. We will use the TensorFlow Serving container for the endpoint, because we trained with script mode. This serving container runs an implementation of a web server that is compatible with SageMaker hosting protocol. The [Using your own inference code]() document explains how SageMaker runs inference containers.
```
predictor = mnist_estimator.deploy(initial_instance_count=1, instance_type='ml.p3.2xlarge')
```
Deploy the trained TensorFlow 2.1 model to an endpoint as well.
```
predictor2 = mnist_estimator2.deploy(initial_instance_count=1, instance_type='ml.p3.2xlarge')
```
# Invoke the endpoint
Let's download the training data and use that as input for inference.
```
import numpy as np
!aws --region {region} s3 cp s3://sagemaker-sample-data-{region}/tensorflow/mnist/train_data.npy train_data.npy
!aws --region {region} s3 cp s3://sagemaker-sample-data-{region}/tensorflow/mnist/train_labels.npy train_labels.npy
train_data = np.load('train_data.npy')
train_labels = np.load('train_labels.npy')
```
The formats of the input and the output data correspond directly to the request and response formats of the `Predict` method in the [TensorFlow Serving REST API](https://www.tensorflow.org/serving/api_rest). SageMaker's TensorFlow Serving endpoints can also accept additional input formats that are not part of the TensorFlow REST API, including the simplified JSON format, line-delimited JSON objects ("jsons" or "jsonlines"), and CSV data.
In this example we are using a `numpy` array as input, which will be serialized into the simplified JSON format. In addition, TensorFlow Serving can also process multiple items at once, as you can see in the following code. You can find the complete documentation on how to make predictions against a TensorFlow Serving SageMaker endpoint [here](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst#making-predictions-against-a-sagemaker-endpoint).
```
predictions = predictor.predict(train_data[:50])
for i in range(0, 50):
prediction = predictions['predictions'][i]['classes']
label = train_labels[i]
print('prediction is {}, label is {}, matched: {}'.format(prediction, label, prediction == label))
```
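For reference, the serialization that `predictor.predict` performs on the `numpy` batch can be reproduced by hand. This sketch shows the simplified JSON request envelope (the input shapes here are illustrative, not the real MNIST shapes):

```python
import json

# A hypothetical batch of two flattened inputs; predictor.predict(train_data[:50])
# wraps the batch in exactly this kind of {"instances": [...]} envelope.
batch = [[0.0] * 4, [1.0] * 4]
payload = json.dumps({"instances": batch})

# The endpoint receives this JSON body; round-trip it to confirm the structure.
decoded = json.loads(payload)
```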
Examine the prediction result from the TensorFlow 2.1 model.
```
predictions2 = predictor2.predict(train_data[:50])
for i in range(0, 50):
prediction = predictions2['predictions'][i]
label = train_labels[i]
print('prediction is {}, label is {}, matched: {}'.format(prediction, label, prediction == label))
```
# Delete the endpoint
Let's delete the endpoint we just created to prevent incurring any extra costs.
```
sagemaker.Session().delete_endpoint(predictor.endpoint)
```
Delete the TensorFlow 2.1 endpoint as well.
```
sagemaker.Session().delete_endpoint(predictor2.endpoint)
```
# Unit 5 - Financial Planning
```
# Initial imports
import os
import requests
import pandas as pd
from dotenv import load_dotenv
import alpaca_trade_api as tradeapi
from MCForecastTools import MCSimulation
%matplotlib inline
# Load .env environment variables
load_dotenv()
```
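`load_dotenv()` reads key/value pairs from a `.env` file in the working directory into the process environment. A minimal example with placeholder values (the variable names match the `os.getenv` calls later in this notebook):

```
ALPACA_API_KEY=YOUR_ALPACA_API_KEY
ALPACA_SECRET_KEY=YOUR_ALPACA_SECRET_KEY
```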
## Part 1 - Personal Finance Planner
### Collect Crypto Prices Using the `requests` Library
```
# Set current amount of crypto assets
# YOUR CODE HERE!
my_btc = 1.2
my_eth = 5.3
# Crypto API URLs
btc_url = "https://api.alternative.me/v2/ticker/Bitcoin/?convert=CAD"
eth_url = "https://api.alternative.me/v2/ticker/Ethereum/?convert=CAD"
# Fetch current BTC price
# YOUR CODE HERE!
btc_url = btc_url + "&format=json"
response_data_btc = requests.get(btc_url)
response_content_btc = response_data_btc.content
#print(response_content_btc)
import json
data_btc = response_data_btc.json()
print(json.dumps(data_btc, indent=4))
# Fetch current ETH price
# YOUR CODE HERE!
eth_url = eth_url + "&format=json"
response_data_eth = requests.get(eth_url)
response_content_eth = response_data_eth.content
data_eth = response_data_eth.json()
print(json.dumps(data_eth, indent=4))
# Compute current value of my crypto
# YOUR CODE HERE!
btc_price = data_btc['data']['1']['quotes']['USD']['price']
eth_price = data_eth['data']['1027']['quotes']['USD']['price']
#print(f"Current BTC price $"+str(btc_price))
#print(f"Current ETH price $" +str(eth_price))
my_btc_value = my_btc * btc_price
my_eth_value = my_eth * eth_price
# Print current crypto wallet balance
print(f"The current value of your {my_btc} BTC is ${my_btc_value:0.2f}")
print(f"The current value of your {my_eth} ETH is ${my_eth_value:0.2f}")
```
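The BTC and ETH cells above repeat the same nested lookup; a small helper factors it out. This is a sketch, with the response layout assumed from the JSON printed above ("1" is BTC's key under "data", "1027" is ETH's):

```python
def crypto_value(ticker_json, coin_id, holdings, currency="USD"):
    """Value a crypto position from an alternative.me v2 ticker response.

    coin_id is the numeric string key under "data" ("1" for BTC, "1027" for ETH).
    """
    price = ticker_json["data"][coin_id]["quotes"][currency]["price"]
    return holdings * price
```

With the variables above, `crypto_value(data_btc, "1", my_btc)` would reproduce `my_btc_value`.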
### Collect Investments Data Using Alpaca: `SPY` (stocks) and `AGG` (bonds)
```
# Set current amount of shares
my_agg = 200
my_spy = 50
# Set Alpaca API key and secret
# YOUR CODE HERE!
alpaca_api_key = os.getenv("ALPACA_API_KEY")
alpaca_secret_key = os.getenv("ALPACA_SECRET_KEY")
print(f"Alpaca Key type: {type(alpaca_api_key)}")
print(f"Alpaca Secret Key type: {type(alpaca_secret_key)}")
# Create the Alpaca API object
# YOUR CODE HERE!
alpaca = tradeapi.REST(
alpaca_api_key,
alpaca_secret_key,
api_version="v2")
# Format current date as ISO format
# YOUR CODE HERE!
today = pd.Timestamp("2021-07-16", tz="America/New_York").isoformat()
# Set the tickers
tickers = ["AGG", "SPY"]
# Set timeframe to '1D' for Alpaca API
timeframe = "1D"
# Get current closing prices for SPY and AGG
# (use a limit=1000 parameter to call the most recent 1000 days of data)
# YOUR CODE HERE!
df_portfolio = alpaca.get_barset(
tickers,
timeframe,
limit = 1000
).df
# Preview DataFrame
# YOUR CODE HERE!
df_portfolio
# Pick AGG and SPY close prices
# YOUR CODE HERE!
df_current_portfolio = alpaca.get_barset(
tickers,
timeframe,
start = today,
end = today
).df
agg_close_price = float(df_current_portfolio["AGG"]["close"])
spy_close_price = float(df_current_portfolio["SPY"]["close"])
# Print AGG and SPY close prices
print(f"Current AGG closing price: ${agg_close_price}")
print(f"Current SPY closing price: ${spy_close_price}")
# Compute the current value of shares
# YOUR CODE HERE!
my_spy_value = my_spy * spy_close_price
my_agg_value = my_agg * agg_close_price
# Print current value of shares
print(f"The current value of your {my_spy} SPY shares is ${my_spy_value:0.2f}")
print(f"The current value of your {my_agg} AGG shares is ${my_agg_value:0.2f}")
```
### Savings Health Analysis
```
# Set monthly household income
# YOUR CODE HERE!
monthly_income = 12000
# Consolidate financial assets data
# YOUR CODE HERE!
crypto = my_btc_value + my_eth_value
shares = my_agg_value + my_spy_value
amount_data = {
"amount": [crypto, shares]
}
asset_type = ["crypto","shares"]
# Create savings DataFrame
# YOUR CODE HERE!
df_savings = pd.DataFrame(amount_data, index=asset_type)
# Display savings DataFrame
display(df_savings)
#Other way to do it
# Consolidate financial assets data
# YOUR CODE HERE!
df_consolidated = pd.DataFrame({"":["crypto","crypto","shares","shares"],
"description":["BTC", "ETH","AGG","SPY"],
"amount": [my_btc_value, my_eth_value,my_agg_value,my_spy_value]})
df_consolidated
#Other way to do it
# Create savings DataFrame
# YOUR CODE HERE!
import numpy as np
df_savings2 = df_consolidated.pivot_table(values="amount", index=[''], aggfunc=np.sum)
#Other way to do it
# Display savings DataFrame
display(df_savings2)
# Plot savings pie chart
# YOUR CODE HERE!
df_savings.plot.pie(y="amount", title="Savings Composition")
# Set ideal emergency fund
emergency_fund = monthly_income * 3
print(f"Ideal emergency fund: ${emergency_fund}")
# Calculate total amount of savings
total_savings = crypto + shares
print(f"Total savings: ${total_savings:0.2f}")
# Validate saving health
# YOUR CODE HERE!
if(total_savings >= emergency_fund):
print("Congratulations! You have enough money in your emergency fund.")
else:
print(f"Your savings are below your ideal emergency fund by ${emergency_fund - total_savings:0.2f}. You need to save/invest more.")
```
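The health check above can be phrased as a small reusable function (a sketch using the same three-months-of-income rule):

```python
def emergency_fund_gap(total_savings, monthly_income, months=3):
    """Return 0.0 if savings cover `months` of income, else the shortfall."""
    target = monthly_income * months
    return max(0.0, target - total_savings)
```

A nonzero return value is the amount still needed to reach the ideal emergency fund.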
## Part 2 - Retirement Planning
### Monte Carlo Simulation
```
# Set start and end dates of five years back from today.
# Sample results may vary from the solution based on the time frame chosen
start_date = pd.Timestamp('2016-07-16', tz='America/New_York').isoformat()
end_date = pd.Timestamp('2021-07-16', tz='America/New_York').isoformat()
# Get 5 years' worth of historical data for SPY and AGG
# (use a limit=1000 parameter to call the most recent 1000 days of data)
# YOUR CODE HERE!
df_stock_data = alpaca.get_barset(
tickers,
timeframe,
start=start_date,
end=end_date,
limit=1000,
).df
# Display sample data
df_stock_data.head()
# Configuring a Monte Carlo simulation to forecast 30 years cumulative returns
# YOUR CODE HERE!
MC_stock_dist_thirty = MCSimulation(
portfolio_data = df_stock_data,
weights = [.40,.60], # Stocks (SPY): 60% and Bonds (AGG): 40%
num_simulation = 500,
num_trading_days = 252*30
)
# Printing the simulation input data
# YOUR CODE HERE!
MC_stock_dist_thirty.portfolio_data.head()
# Running a Monte Carlo simulation to forecast 30 years cumulative returns
# YOUR CODE HERE!
MC_stock_dist_thirty.calc_cumulative_return()
# Plot simulation outcomes
# YOUR CODE HERE!
line_plot = MC_stock_dist_thirty.plot_simulation()
# Save the plot for future usage
line_plot.get_figure().savefig("MC_stock_dist_thirty_sim_plot.png", bbox_inches="tight")
# Plot probability distribution and confidence intervals
# YOUR CODE HERE!
dist_plot = MC_stock_dist_thirty.plot_distribution()
# Save the plot for future usage
dist_plot.get_figure().savefig('MC_stock_dist_thirty_dist_plot.png',bbox_inches='tight')
```
### Retirement Analysis
```
# Fetch summary statistics from the Monte Carlo simulation results
# YOUR CODE HERE!
tbl = MC_stock_dist_thirty.summarize_cumulative_return()
# Print summary statistics
# YOUR CODE HERE!
print(tbl)
```
### Calculate the expected portfolio return at the `95%` lower and upper confidence intervals based on a `$20,000` initial investment.
```
# Set initial investment
initial_investment = 20000
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes of our $20,000
# YOUR CODE HERE!
ci_lower = round(tbl[8]*initial_investment,2)  # tbl[8] is the lower bound of the 95% confidence interval
ci_upper = round(tbl[9]*initial_investment,2)  # tbl[9] is the upper bound
# Print results
print(f"There is a 95% chance that an initial investment of ${initial_investment} in the portfolio"
f" over the next 30 years will end within the range of"
f" ${ci_lower} and ${ci_upper}")
```
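The 95% bounds read from indices 8 and 9 of the summary table come from the simulated distribution of terminal cumulative returns. As a minimal, self-contained sketch of the idea (using made-up return data, not the actual MCSimulation output), such bounds can be taken as percentiles and scaled by the initial investment:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical terminal cumulative-return multipliers from 500 simulated paths
terminal_returns = rng.lognormal(mean=1.0, sigma=0.5, size=500)

# 95% confidence interval: the 2.5th and 97.5th percentiles of the simulations
ci_lower_mult, ci_upper_mult = np.percentile(terminal_returns, [2.5, 97.5])

initial_investment = 20000
lower_dollars = round(ci_lower_mult * initial_investment, 2)
upper_dollars = round(ci_upper_mult * initial_investment, 2)
print(f"${lower_dollars} to ${upper_dollars}")
```

The exact row indices in the real summary table depend on how `summarize_cumulative_return()` orders its statistics, which is why the notebook hard-codes 8 and 9.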
### Calculate the expected portfolio return at the `95%` lower and upper confidence intervals based on a `50%` increase in the initial investment.
```
# Set initial investment
initial_investment = 20000 * 1.5
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes of our $30,000
# YOUR CODE HERE!
ci_lower = round(tbl[8]*initial_investment,2) # index 8 = 95% CI lower bound
ci_upper = round(tbl[9]*initial_investment,2) # index 9 = 95% CI upper bound
# Print results
print(f"There is a 95% chance that an initial investment of ${initial_investment} in the portfolio"
f" over the next 30 years will end within the range of"
f" ${ci_lower} and ${ci_upper}")
```
## Optional Challenge - Early Retirement
### Five Years Retirement Option
```
# Configuring a Monte Carlo simulation to forecast 5 years cumulative returns
# YOUR CODE HERE!
MC_stock_dist_early_five = MCSimulation(
portfolio_data = df_stock_data,
weights = [.25,.75], # Riskier mix: 25% bonds (AGG), 75% stocks (SPY)
num_simulation = 500,
num_trading_days = 252*5
)
MC_stock_dist_early_five.portfolio_data.head()
# Running a Monte Carlo simulation to forecast 5 years cumulative returns
# YOUR CODE HERE!
MC_stock_dist_early_five.calc_cumulative_return()
# Plot simulation outcomes
# YOUR CODE HERE!
line_plot_early_five = MC_stock_dist_early_five.plot_simulation()
# Save the plot for future usage
line_plot_early_five.get_figure().savefig("MC_stock_dist_early_five_sim_plot.png", bbox_inches="tight")
# Plot probability distribution and confidence intervals
# YOUR CODE HERE!
dist_plot_early_five = MC_stock_dist_early_five.plot_distribution()
# Save the plot for future usage
dist_plot_early_five.get_figure().savefig("MC_stock_dist_early_five_dist_plot.png", bbox_inches="tight")
# Fetch summary statistics from the Monte Carlo simulation results
# YOUR CODE HERE!
tbl_early_five = MC_stock_dist_early_five.summarize_cumulative_return()
# Print summary statistics
# YOUR CODE HERE!
print(tbl_early_five)
# Set initial investment
# YOUR CODE HERE!
initial_investment = 60000 #Larger initial investment $60,000>$20,000
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes of our $60,000
# YOUR CODE HERE!
ci_lower_five = round(tbl_early_five[8]*initial_investment,2) # index 8 = 95% CI lower bound
ci_upper_five = round(tbl_early_five[9]*initial_investment,2) # index 9 = 95% CI upper bound
# Print results
print(f"There is a 95% chance that an initial investment of ${initial_investment} in the portfolio"
f" over the next 5 years will end within the range of"
f" ${ci_lower_five} and ${ci_upper_five}")
```
### Ten Years Retirement Option
```
# Configuring a Monte Carlo simulation to forecast 10 years cumulative returns
# YOUR CODE HERE!
MC_stock_dist_early_ten = MCSimulation(
portfolio_data = df_stock_data,
weights = [.25,.75], # Riskier mix: 25% bonds (AGG), 75% stocks (SPY)
num_simulation = 500,
num_trading_days = 252*10 # 10 years
)
MC_stock_dist_early_ten.portfolio_data.head()
# Running a Monte Carlo simulation to forecast 10 years cumulative returns
# YOUR CODE HERE!
MC_stock_dist_early_ten.calc_cumulative_return()
# Plot simulation outcomes
# YOUR CODE HERE!
line_plot_early_ten = MC_stock_dist_early_ten.plot_simulation()
# Save the plot for future usage
line_plot_early_ten.get_figure().savefig("MC_stock_dist_early_ten_sim_plot.png", bbox_inches="tight")
# Plot probability distribution and confidence intervals
# YOUR CODE HERE!
dist_plot_early_ten = MC_stock_dist_early_ten.plot_distribution()
# Save the plot for future usage
dist_plot_early_ten.get_figure().savefig("MC_stock_dist_early_ten_dist_plot.png", bbox_inches="tight")
# Fetch summary statistics from the Monte Carlo simulation results
# YOUR CODE HERE!
tbl_early_ten = MC_stock_dist_early_ten.summarize_cumulative_return()
# Print summary statistics
# YOUR CODE HERE!
print(tbl_early_ten)
# Set initial investment
# YOUR CODE HERE!
initial_investment = 60000 #Larger initial investment $60,000>$20,000
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes of our $60,000
# YOUR CODE HERE!
ci_lower_ten = round(tbl_early_ten[8]*initial_investment,2) # index 8 = 95% CI lower bound
ci_upper_ten = round(tbl_early_ten[9]*initial_investment,2) # index 9 = 95% CI upper bound
# Print results
print(f"There is a 95% chance that an initial investment of ${initial_investment} in the portfolio"
f" over the next 10 years will end within the range of"
f" ${ci_lower_ten} and ${ci_upper_ten}")
```
| github_jupyter |
# **Multiple Sequence Alignment Workflow**
This notebook explains the actual multiple sequence alignment (MSA) analysis workflow.
## **Goals**
1. To conduct a MSA on the combined African Insecta data sets listed below
>1. **enafroCOI_Under500_data.fasta: 6,715 sequences**
>2. **enafroCOI_Over700_data.fasta: 1,607 sequences**
>3. **enafroCOI_650to660_data.fasta: 99,698 sequences**
>4. **enafroCOI_500to700_data-650to660.fasta: 85,157 sequences**
2. For the above data sets, conduct rigorous subsetting and alignment until a good-quality sequence alignment is generated.
3. Extract only the sequences from these alignments that fit the correct locus and length of the 658 bp 5' cytochrome c oxidase subunit 1 (COI-5P) gene. Sequences must extend to both ends of the gene, with or without gaps within the sequence; a maximum of ten gaps ('-') and ten unidentified nucleotides ('N') are allowed at either terminal (3' or 5') of the sequence.
4. Conduct the final alignment of the combined, cleaned-up data from all four data sets above as one set.
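The terminal-quality rule in goal 3 (at most ten gaps and ten 'N' characters at each end) can be sketched as a small filter. This is an illustrative Python sketch of the stated criteria, not the actual script used in this workflow; the function names and the way the terminal run is counted (characters before the first real base) are assumptions for illustration:

```python
def terminal_counts(seq):
    """Count '-' gaps and 'N' bases in the run of non-informative
    characters at one end of an aligned sequence."""
    gaps = ns = 0
    for ch in seq:
        if ch == '-':
            gaps += 1
        elif ch.upper() == 'N':
            ns += 1
        else:
            break  # the first real base ends the terminal run
    return gaps, ns

def passes_terminal_check(seq, max_gaps=10, max_ns=10):
    """True if both the 5' and 3' terminal runs are within the limits."""
    for end in (seq, seq[::-1]):  # check the start, then the reversed end
        gaps, ns = terminal_counts(end)
        if gaps > max_gaps or ns > max_ns:
            return False
    return True

print(passes_terminal_check('--' + 'A' * 650))      # few leading gaps: passes
print(passes_terminal_check('-' * 15 + 'A' * 640))  # too many leading gaps: fails
```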
## **Tasks**
1. Perform an MSA on eafroCOI_650to660_data.fasta (*24,475 sequences*), East African insect sequences, and visualize it in SeaView to assess the quality of the alignment. This alignment will be used to define the 3' and 5' ends of the COI-5P sequences. It is on this basis that trimming and comparison to the other data sets will be done.
Refine the alignment if necessary, or use fewer sequences, but enough to accurately represent all possible COI-5P lengths.
2. Conduct an MSA on enafroCOI_Under500_data.fasta and refine as needed to acquire a good-quality alignment. Compare the alignment to the reference eafroCOI_650to660_data.fasta alignment and trim at the determined 3' and 5' positions. Take the output and delete sequences that have in excess of ten end gaps ('-') in the aligned nucleotide blocks/columns, excluding largely gappy columns from the counting of the ten gaps. Save the output.
3. Carry out task 2 above on enafroCOI_Over700_data.fasta.
4. Carry out task 2 on enafroCOI_650to660_data.fasta.
5. Then finally on enafroCOI_500to700_data-650to660.fasta: there are a lot of sequences, and possibly a lot of impurities, in this set. It will require a lot of subsetting and iteration.
6. Concatenate all the outputs from 2, 3, 4, and 5; then align them again and refine the alignment further.
### **1. MSA alignment on eafroCOI_650to660_data.fasta (24,475 sequences)**
### **2. MSA alignment on enafroCOI_Under500_data.fasta (6,715 sequences)**
### **3. MSA alignment on enafroCOI_Over700_data.fasta (1,607 sequences)**
### **4. MSA alignment on enafroCOI_650to660_data.fasta (99,698 sequences)**
### **5. MSA alignment on enafroCOI_500to700_data-650to660.fasta (85,157 sequences)**
### **6. Concatenate all the outputs from 2, 3, 4, and 5;** then align them again and refine the alignment further.
#### **6.1.**
#### **6._**
```
%%bash
cd ../data/output/alignment/pasta_output/aligned/
wc -l enafroCOI_all_clean*
%%bash
cd ../data/output/alignment/pasta_output/aligned/
source ../../../../../code/process_all_input_files.sh
delete_shortseqs_N enafroCOI_all_clean.aln << EOF
10
10
EOF
```
Visualise the output file "enafroCOI_all_clean_sN10-eN10.aln" in Seaview and remove gap sites only, possibly created by the removal of some sequences in the step above. Save the output file.
```
%%bash
cd ../data/output/alignment/pasta_output/aligned/
seaview enafroCOI_all_clean_sN10-eN10.aln
```
Delete Outgroups (Crustacea, Arachnida and Chilopoda), they were initially included in the alignment of enafroCOI_all_clean_sN10-eN10.aln.
See the code below:
```
%%bash
cd ../data/output/alignment/pasta_output/aligned/
source ../../../../../code/process_all_input_files.sh
delete_unwanted enafroCOI_all_clean_sN10-eN10.aln << EOF
1
Crustacea
1
Arachnida
1
Chilopoda
2
EOF
#Renaming
mv enafroCOI_all_clean_sN10-eN10_generaNA_undesired.fasta outgroups.aln
```
Then sort the "enafroCOI_all_clean_sN10-eN10.aln" into two files:
**1. enafroCOI_all_clean_sN10-eN10_genera.aln** - contains African COI sequences with genus names
**2. enafroCOI_all_clean_sN10-eN10_generaNA.aln** - Contains African COI sequences without genus names
```
%%bash
cd ../data/output/alignment/pasta_output/aligned/
source ../../../../../code/process_all_input_files.sh
cp enafroCOI_all_clean_sN10-eN10.aln enafroCOI_all_clean_sN10-eN10.aln2
delete_unwanted enafroCOI_all_clean_sN10-eN10.aln2 << EOF
1
gs-NA
2
EOF
#Renaming
mv enafroCOI_all_clean_sN10-eN10.aln2 enafroCOI_all_clean_sN10-eN10_genera.aln
mv enafroCOI_all_clean_sN10-eN10_undesired.aln enafroCOI_all_clean_sN10-eN10_generaNA.aln
```
Then sort the data into two files:
**1. enafroCOI_all_clean_sN10-eN10_eafro.aln** - Contains East African sequences; Kenya, Uganda, Tanzania, Rwanda, Burundi, South Sudan and Ethiopia
**2. enafroCOI_all_clean_sN10-eN10_eafroNA.aln** - Contains non East African Data
```
%%bash
cd ../data/output/alignment/pasta_output/aligned/
source ../../../../../code/process_all_input_files.sh
cp enafroCOI_all_clean_sN10-eN10.aln enafroCOI_all_clean_sN10-eN10.aln2
delete_unwanted enafroCOI_all_clean_sN10-eN10.aln2 << EOF
1
Kenya
1
Tanzania
1
Uganda
1
Rwanda
1
Burundi
1
South_Sudan
1
Ethiopia
2
EOF
#Renaming
mv enafroCOI_all_clean_sN10-eN10_undesired.fasta enafroCOI_all_clean_sN10-eN10_eafro.aln
mv enafroCOI_all_clean_sN10-eN10.aln2 enafroCOI_all_clean_sN10-eN10_eafroNA.aln
```
Then sort enafroCOI_all_clean_sN10-eN10_eafro.aln into two files:
**1. enafroCOI_all_clean_sN10-eN10_eafro_genera.aln** - Contains East African COI records with genus names
**2. enafroCOI_all_clean_sN10-eN10_eafro_generaNA.aln** - Contains East African COI records without genus names
Those without genus names do not have species names either; however, those with species names may lack genus names.
```
%%bash
cd ../data/output/alignment/pasta_output/aligned/
cp enafroCOI_all_clean_sN10-eN10_eafro.aln enafroCOI_all_clean_sN10-eN10_eafro.aln2
delete_unwanted enafroCOI_all_clean_sN10-eN10_eafro.aln2 << EOF
1
gs-NA
2
EOF
#Renaming
mv enafroCOI_all_clean_sN10-eN10_eafro.aln2 enafroCOI_all_clean_sN10-eN10_eafro_genera.aln
mv enafroCOI_all_clean_sN10-eN10_eafro_undesired.fasta enafroCOI_all_clean_sN10-eN10_eafro_generaNA.aln
%%bash
cd ../data/output/alignment/pasta_output/aligned/
ls enafroCOI_all_clean*sN10-eN10*.aln
#cat $(ls enafroCOI_all_clean*sN10-eN10*.aln2) | head -5
```
#### **Replacing illegal characters**
Replace the **illegal characters in taxon names (tabulators, carriage returns, spaces, ":", ",", ")", "(", ";", "]", "\[", "'")** that affect the interpretation in RAxML.
This has to be done; otherwise RAxML will throw an error and exit.
```
%%bash
cd ../data/output/alignment/pasta_output/aligned/
for i in $(ls enafroCOI_all_clean*sN10-eN10*.aln); do vim $i -n << EOF
:%s/\[//g
:%s/\]//g
:%s/ /_/g
:%s/://g
:%s/;//g
:%s/,/__/g
:%s/(/__/g
:%s/)//g
:%s/'//g
:wq
EOF
echo -e "`basename -- ${i}` completed"
done
```
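The same sanitization done above with scripted vim substitutions can be sketched in Python, which may be easier to audit. This is an illustrative alternative, not part of the original workflow; the replacement map mirrors the vim `:%s` commands above (brackets, colons, semicolons, and quotes removed; spaces replaced with `_`; commas and opening parentheses replaced with `__`):

```python
def sanitize_taxon_line(line: str) -> str:
    """Apply the same substitutions as the vim session above so that
    RAxML accepts the taxon names."""
    for old, new in (('[', ''), (']', ''), (' ', '_'),
                     (':', ''), (';', ''), (',', '__'),
                     ('(', '__'), (')', ''), ("'", '')):
        line = line.replace(old, new)
    return line

# Hypothetical FASTA header for illustration
print(sanitize_taxon_line(">Apis mellifera (Kenya); [BOLD:AAA1234]"))
```

Applied line by line over each alignment file, this produces the same result as the vim loop while making the substitution order explicit.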
Update the remote server (hpc01.icipe.org) so it is ready for execution
```
%%bash
cd ../data/output/alignment/pasta_output/aligned/
scp ./enafroCOI_all_clean* gilbert@hpc01.icipe.org:/home/gilbert/bioinformatics/github/co1_metaanalysis/data/output/alignment/pasta_output/aligned/ << EOF
<password>
EOF
```
| github_jupyter |
# Lets Code Python Basic Exercises - Module I
### Variables
```
x = 5
y = 'Aline'
z = True
j= 2.3
print(f'String : {y}')
print(f'Inteiro : {x}')
print(f'Float : {j}')
print(f'Bool : {z}')
print(type(y))
print(type(x))
print(type(j))
print(type(z))
```
### Arithmetic operators
```
# Even with both operands being integers, the division result will be a float
x = 50
y = 2
# Addition
print(x + y )
# Subtraction
print(x - y )
# Multiplication
print(x * y )
# Division
print(x / y )
# Exponentiation
print(x ** y )
# Integer (floor) division
print(x // y )
# Remainder of the division (modulo)
print(x % y )
```
### Logical operators NOT, AND, OR
```
tem_cafe = True
tem_pao = False
print(not tem_cafe)
print(tem_cafe or tem_pao)
print(tem_cafe and tem_pao)
```
### Relational (comparison) operators
```
dollar = 1
real = 5.6
print(dollar > real)
print(dollar < real)
print(dollar == real)
print(dollar >= real)
print(dollar <= real)
print(dollar != real)
```
### Sequential structures
```
idade = input('Enter your age: ')
print(idade, type(idade))
idade = int(input('Enter your age: '))
print(idade, type(idade))
print(float('123.25'))
print(str(123.25))
print(bool(''))
print(bool('abcd'))
print(bool(0))
print(bool(-2))
salario_mensal = input('Enter your monthly salary: ')
salario_mensal = float(salario_mensal) # convert str to float
gasto_mensal = input('Enter your average monthly spending: ')
gasto_mensal = float(gasto_mensal) # convert str to float
salario_total = salario_mensal * 12
gasto_total = gasto_mensal * 12
montante_economizado = salario_total - gasto_total
print(f'The amount you can save by the end of the year is: {montante_economizado}')
```
### Conditional structures
```
valor_passagem = 4.30
valor_corrida = input('What is the ride fare? ')
if float(valor_corrida) <= valor_passagem * 5:
    print('Pay for the ride!')
if float(valor_corrida) > valor_passagem * 5:
    print('Take the bus!')
valor_passagem = 4.30
valor_corrida = input('What is the ride fare? ')
if float(valor_corrida) <= valor_passagem * 5:
    print('Pay for the ride!')
else:
    print('Take the bus!')
valor_passagem = 4.30
valor_corrida = input('What is the ride fare? ')
if float(valor_corrida) <= valor_passagem * 5:
    print('Pay for the ride!')
else:
    if float(valor_corrida) <= valor_passagem * 6:
        print('Wait a moment, the fare may drop!')
    else:
        print('Take the bus!')
valor_passagem = 4.30
valor_corrida = input('What is the ride fare? ')
if float(valor_corrida) <= valor_passagem * 5:
    print('Pay for the ride!')
elif float(valor_corrida) <= valor_passagem * 6:
    print('Wait a moment, the fare may drop!')
else:
    print('Take the bus!')
```
### The while loop
```
contador = 0
while contador < 10:
    contador += 1
    if contador == 1:
        print(contador, 'item cleaned')
    else:
        print(contador, 'items cleaned')
print('End of the while loop')
contador = 0
while True:
    if contador < 10:
        contador = contador + 1
        if contador == 1:
            print(contador, 'item cleaned')
        else:
            print(contador, 'items cleaned')
    else:
        break
print('End of the while loop')
texto = input('Enter your password: ')
while texto != 'LetsCode':
    texto = input('Invalid password. Try again: ')
print('Access granted')
contador = 0
while contador < 10:
    contador = contador + 1
    if contador == 1:
        continue
    print(contador, 'items cleaned')
print('End of the while loop')
```
### Working with dates
```
import datetime as dt
# Formatting model
print(dt.time(12, 6, 21, 7), 'hour:minute:second.microsecond')
print('----')
print(dt.date(2020, 4, 25), 'year-month-day')
print('----')
print(dt.datetime(2020, 4, 25, 12, 6, 21, 7), 'year-month-day hour:minute:second.microsecond')
# Working with dates (how much time lies between these two dates?)
natal = dt.date(2020, 12, 25)
reveillon = dt.date(2021, 1, 1)
print(reveillon - natal)
print((reveillon - natal).days)
print((reveillon - natal).seconds)
print((reveillon - natal).microseconds)
```
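A caveat about the block above: `.days`, `.seconds`, and `.microseconds` are *components* of the timedelta, not conversions of the whole duration, so `.seconds` and `.microseconds` print 0 here because the two dates differ by whole days. `total_seconds()` gives the full duration. A quick illustration:

```python
import datetime as dt

natal = dt.date(2020, 12, 25)
reveillon = dt.date(2021, 1, 1)
delta = reveillon - natal

print(delta.days)             # 7
print(delta.seconds)          # 0: no sub-day component in this delta
print(delta.total_seconds())  # 604800.0 == 7 * 24 * 3600
```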
| github_jupyter |
#### Copyright 2017 Google LLC.
Original version of this course: https://colab.research.google.com/notebooks/mlcc/multi-class_classification_of_handwritten_digits.ipynb?utm_source=mlcc&utm_campaign=colab-external&utm_medium=referral&utm_content=multiclass-colab&hl=en
Licensed under the Apache 2.0 License
# Classifying Handwritten Digits with Neural Networks

**Learning Objectives:**
* Train both a linear model and a neural network to classify handwritten digits from the classic [MNIST](http://yann.lecun.com/exdb/mnist/) data set
* Compare the performance of the linear and neural network classification models
* Visualize the weights of a neural-network hidden layer
Our goal is to map each input image to the correct numeric digit. We will create a NN with a few hidden layers and a Softmax layer at the top to select the winning class.
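The Softmax layer mentioned above turns the network's raw class scores into probabilities that sum to 1, and the winning class is the argmax. A minimal numpy sketch of the operation (not the TensorFlow implementation used later in this notebook):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax: subtract the max before exponentiating."""
    shifted = logits - np.max(logits)
    exps = np.exp(shifted)
    return exps / exps.sum()

scores = np.array([2.0, 1.0, 0.1])  # raw scores for 3 hypothetical classes
probs = softmax(scores)
print(probs, probs.sum())  # probabilities sum to 1; the winning class is argmax
```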
## Setup
First, let's download the data set, import TensorFlow and other utilities, and load the data into a *pandas* `DataFrame`. Note that this data is a sample of the original MNIST training data; we've taken 20000 rows at random.
```
from __future__ import print_function
import glob
import math
import os
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
mnist_dataframe = pd.read_csv(
"mnist_train_small.csv",
sep=",",
header=None)
# Use just the first 10,000 records for training/validation.
mnist_dataframe = mnist_dataframe.head(10000)
mnist_dataframe = mnist_dataframe.reindex(np.random.permutation(mnist_dataframe.index))
mnist_dataframe.head()
```
Each row represents one labeled example. Column 0 represents the label that a human rater has assigned for one handwritten digit. For example, if Column 0 contains '6', then a human rater interpreted the handwritten character as the digit '6'. The ten digits 0-9 are each represented, with a unique class label for each possible digit. Thus, this is a multi-class classification problem with 10 classes.

Columns 1 through 784 contain the feature values, one per pixel for the 28×28=784 pixel values. The pixel values are on a gray scale in which 0 represents white, 255 represents black, and values between 0 and 255 represent shades of gray. Most of the pixel values are 0; you may want to take a minute to confirm that they aren't all 0. For example, adjust the following text block to print out the values in column 72.
```
mnist_dataframe.loc[:, 72:72]
```
Now, let's parse out the labels and features and look at a few examples. Note the use of `loc` which allows us to pull out columns based on original location, since we don't have a header row in this data set.
```
def parse_labels_and_features(dataset):
"""Extracts labels and features.
This is a good place to scale or transform the features if needed.
Args:
dataset: A Pandas `Dataframe`, containing the label on the first column and
monochrome pixel values on the remaining columns, in row major order.
Returns:
A `tuple` `(labels, features)`:
labels: A Pandas `Series`.
features: A Pandas `DataFrame`.
"""
labels = dataset[0]
# DataFrame.loc index ranges are inclusive at both ends.
features = dataset.loc[:,1:784]
# Scale the data to [0, 1] by dividing out the max value, 255.
features = features / 255
return labels, features
training_targets, training_examples = parse_labels_and_features(mnist_dataframe[:7500])
training_examples.describe()
validation_targets, validation_examples = parse_labels_and_features(mnist_dataframe[7500:10000])
validation_examples.describe()
```
Show a random example and its corresponding label.
```
rand_example = np.random.choice(training_examples.index)
_, ax = plt.subplots()
ax.matshow(training_examples.loc[rand_example].values.reshape(28, 28))
ax.set_title("Label: %i" % training_targets.loc[rand_example])
ax.grid(False)
```
## Task 1: Build a Linear Model for MNIST
First, let's create a baseline model to compare against. The `LinearClassifier` provides a set of *k* one-vs-all classifiers, one for each of the *k* classes.
You'll notice that in addition to reporting accuracy, and plotting Log Loss over time, we also display a [**confusion matrix**](https://en.wikipedia.org/wiki/Confusion_matrix). The confusion matrix shows which classes were misclassified as other classes. Which digits get confused for each other?
Also note that we track the model's error using the `log_loss` function. This should not be confused with the loss function internal to `LinearClassifier` that is used for training.
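For reference, the multiclass log loss tracked here is the mean negative log-probability the model assigns to the true class. A small numpy sketch of that definition (clipping probabilities away from 0, as sklearn's `log_loss` also does; the example values are made up):

```python
import numpy as np

def multiclass_log_loss(y_true, y_prob, eps=1e-15):
    """Mean negative log-probability assigned to the true class of each example."""
    y_prob = np.clip(y_prob, eps, 1 - eps)
    n = len(y_true)
    return -np.mean(np.log(y_prob[np.arange(n), y_true]))

y_true = np.array([0, 2, 1])            # true class per example
y_prob = np.array([[0.7, 0.2, 0.1],     # predicted class probabilities
                   [0.1, 0.2, 0.7],
                   [0.2, 0.6, 0.2]])
print(multiclass_log_loss(y_true, y_prob))
```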
```
def construct_feature_columns():
"""Construct the TensorFlow Feature Columns.
Returns:
A set of feature columns
"""
# There are 784 pixels in each image.
return set([tf.feature_column.numeric_column('pixels', shape=784)])
```
Here, we'll make separate input functions for training and for prediction. We'll nest them in `create_training_input_fn()` and `create_predict_input_fn()`, respectively, so we can invoke these functions to return the corresponding `_input_fn`s to pass to our `.train()` and `.predict()` calls.
```
def create_training_input_fn(features, labels, batch_size, num_epochs=None, shuffle=True):
"""A custom input_fn for sending MNIST data to the estimator for training.
Args:
features: The training features.
labels: The training labels.
batch_size: Batch size to use during training.
Returns:
A function that returns batches of training features and labels during
training.
"""
def _input_fn(num_epochs=None, shuffle=True):
# Input pipelines are reset with each call to .train(). To ensure model
# gets a good sampling of data, even when number of steps is small, we
# shuffle all the data before creating the Dataset object
idx = np.random.permutation(features.index)
raw_features = {"pixels":features.reindex(idx)}
raw_targets = np.array(labels[idx])
ds = Dataset.from_tensor_slices((raw_features,raw_targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
feature_batch, label_batch = ds.make_one_shot_iterator().get_next()
return feature_batch, label_batch
return _input_fn
def create_predict_input_fn(features, labels, batch_size):
"""A custom input_fn for sending mnist data to the estimator for predictions.
Args:
features: The features to base predictions on.
labels: The labels of the prediction examples.
Returns:
A function that returns features and labels for predictions.
"""
def _input_fn():
raw_features = {"pixels": features.values}
raw_targets = np.array(labels)
ds = Dataset.from_tensor_slices((raw_features, raw_targets)) # warning: 2GB limit
ds = ds.batch(batch_size)
# Return the next batch of data.
feature_batch, label_batch = ds.make_one_shot_iterator().get_next()
return feature_batch, label_batch
return _input_fn
def train_linear_classification_model(
learning_rate,
steps,
batch_size,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a linear classification model for the MNIST digits dataset.
In addition to training, this function also prints training progress information,
a plot of the training and validation loss over time, and a confusion
matrix.
Args:
learning_rate: A `float`, the learning rate to use.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
training_examples: A `DataFrame` containing the training features.
training_targets: A `DataFrame` containing the training labels.
validation_examples: A `DataFrame` containing the validation features.
validation_targets: A `DataFrame` containing the validation labels.
Returns:
The trained `LinearClassifier` object.
"""
periods = 10
steps_per_period = steps / periods
# Create the input functions.
predict_training_input_fn = create_predict_input_fn(
training_examples, training_targets, batch_size)
predict_validation_input_fn = create_predict_input_fn(
validation_examples, validation_targets, batch_size)
training_input_fn = create_training_input_fn(
training_examples, training_targets, batch_size)
# Create a LinearClassifier object.
my_optimizer = tf.train.AdagradOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
classifier = tf.estimator.LinearClassifier(
feature_columns=construct_feature_columns(),
n_classes=10,
optimizer=my_optimizer,
config=tf.estimator.RunConfig(keep_checkpoint_max=1)
)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("LogLoss error (on validation data):")
training_errors = []
validation_errors = []
for period in range (0, periods):
# Train the model, starting from the prior state.
classifier.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute probabilities.
training_predictions = list(classifier.predict(input_fn=predict_training_input_fn))
training_probabilities = np.array([item['probabilities'] for item in training_predictions])
training_pred_class_id = np.array([item['class_ids'][0] for item in training_predictions])
training_pred_one_hot = tf.keras.utils.to_categorical(training_pred_class_id,10)
validation_predictions = list(classifier.predict(input_fn=predict_validation_input_fn))
validation_probabilities = np.array([item['probabilities'] for item in validation_predictions])
validation_pred_class_id = np.array([item['class_ids'][0] for item in validation_predictions])
validation_pred_one_hot = tf.keras.utils.to_categorical(validation_pred_class_id,10)
# Compute training and validation errors.
training_log_loss = metrics.log_loss(training_targets, training_pred_one_hot)
validation_log_loss = metrics.log_loss(validation_targets, validation_pred_one_hot)
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, validation_log_loss))
# Add the loss metrics from this period to our list.
training_errors.append(training_log_loss)
validation_errors.append(validation_log_loss)
print("Model training finished.")
# Remove event files to save disk space.
_ = map(os.remove, glob.glob(os.path.join(classifier.model_dir, 'events.out.tfevents*')))
# Calculate final predictions (not probabilities, as above).
final_predictions = classifier.predict(input_fn=predict_validation_input_fn)
final_predictions = np.array([item['class_ids'][0] for item in final_predictions])
accuracy = metrics.accuracy_score(validation_targets, final_predictions)
print("Final accuracy (on validation data): %0.2f" % accuracy)
# Output a graph of loss metrics over periods.
plt.ylabel("LogLoss")
plt.xlabel("Periods")
plt.title("LogLoss vs. Periods")
plt.plot(training_errors, label="training")
plt.plot(validation_errors, label="validation")
plt.legend()
plt.show()
# Output a plot of the confusion matrix.
cm = metrics.confusion_matrix(validation_targets, final_predictions)
# Normalize the confusion matrix by row (i.e by the number of samples
# in each class).
cm_normalized = cm.astype("float") / cm.sum(axis=1)[:, np.newaxis]
ax = sns.heatmap(cm_normalized, cmap="bone_r")
ax.set_aspect(1)
plt.title("Confusion matrix")
plt.ylabel("True label")
plt.xlabel("Predicted label")
plt.show()
return classifier
```
**Spend 5 minutes seeing how well you can do on accuracy with a linear model of this form. For this exercise, limit yourself to experimenting with the hyperparameters for batch size, learning rate and steps.**
Stop if you get anything above about 0.9 accuracy.
```
classifier = train_linear_classification_model(
learning_rate=0.02,
steps=100,
batch_size=10,
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
```
### Solution
Click below for one possible solution.
Here is a set of parameters that should attain roughly 0.9 accuracy.
```
_ = train_linear_classification_model(
learning_rate=0.03,
steps=1000,
batch_size=30,
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
```
## Task 2: Replace the Linear Classifier with a Neural Network
**Replace the LinearClassifier above with a [`DNNClassifier`](https://www.tensorflow.org/api_docs/python/tf/estimator/DNNClassifier) and find a parameter combination that gives 0.95 or better accuracy.**
You may wish to experiment with additional regularization methods, such as dropout. These additional regularization methods are documented in the comments for the `DNNClassifier` class.
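Dropout, mentioned above, randomly zeroes a fraction of hidden activations during training and scales the survivors up so the expected activation is unchanged ("inverted dropout"). A framework-independent numpy sketch of the idea (not the `DNNClassifier` implementation, which only needs its `dropout` argument):

```python
import numpy as np

def dropout(activations, rate, rng, training=True):
    """Inverted dropout: zero a `rate` fraction of units and scale
    survivors by 1/(1-rate) so the expected activation is unchanged."""
    if not training or rate == 0.0:
        return activations
    keep = rng.random(activations.shape) >= rate
    return activations * keep / (1.0 - rate)

rng = np.random.default_rng(0)
h = np.ones(10_000)                      # hypothetical hidden-layer activations
dropped = dropout(h, rate=0.3, rng=rng)
print((dropped == 0).mean())             # roughly 0.3 of units are zeroed
print(dropped.mean())                    # mean stays close to 1.0
```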
```
#
# YOUR CODE HERE: Replace the linear classifier with a neural network.
#
```
Once you have a good model, double check that you didn't overfit the validation set by evaluating on the test data that we'll load below.
```
mnist_test_dataframe = pd.read_csv(
"https://download.mlcc.google.com/mledu-datasets/mnist_test.csv",
sep=",",
header=None)
test_targets, test_examples = parse_labels_and_features(mnist_test_dataframe)
test_examples.describe()
#
# YOUR CODE HERE: Calculate accuracy on the test set.
#
```
### Solution
Click below for a possible solution.
The code below is almost identical to the original `LinearClassifer` training code, with the exception of the NN-specific configuration, such as the hyperparameter for hidden units.
```
def train_nn_classification_model(
learning_rate,
steps,
batch_size,
hidden_units,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a neural network classification model for the MNIST digits dataset.
In addition to training, this function also prints training progress information,
a plot of the training and validation loss over time, as well as a confusion
matrix.
Args:
learning_rate: A `float`, the learning rate to use.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A `list` of int values, specifying the number of neurons in each layer.
training_examples: A `DataFrame` containing the training features.
training_targets: A `DataFrame` containing the training labels.
validation_examples: A `DataFrame` containing the validation features.
validation_targets: A `DataFrame` containing the validation labels.
Returns:
The trained `DNNClassifier` object.
"""
periods = 10
# Caution: input pipelines are reset with each call to train.
# If the number of steps is small, your model may never see most of the data.
# So with multiple `.train` calls like this you may want to control the length
# of training with num_epochs passed to the input_fn. Or, you can do a really-big shuffle,
# or since it's in-memory data, shuffle all the data in the `input_fn`.
steps_per_period = steps / periods
# Create the input functions.
predict_training_input_fn = create_predict_input_fn(
training_examples, training_targets, batch_size)
predict_validation_input_fn = create_predict_input_fn(
validation_examples, validation_targets, batch_size)
training_input_fn = create_training_input_fn(
training_examples, training_targets, batch_size)
# Create feature columns.
feature_columns = [tf.feature_column.numeric_column('pixels', shape=784)]
# Create a DNNClassifier object.
my_optimizer = tf.train.AdagradOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
classifier = tf.estimator.DNNClassifier(
feature_columns=feature_columns,
n_classes=10,
hidden_units=hidden_units,
optimizer=my_optimizer,
config=tf.contrib.learn.RunConfig(keep_checkpoint_max=1)
)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("LogLoss error (on validation data):")
training_errors = []
validation_errors = []
for period in range (0, periods):
# Train the model, starting from the prior state.
classifier.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute probabilities.
training_predictions = list(classifier.predict(input_fn=predict_training_input_fn))
training_probabilities = np.array([item['probabilities'] for item in training_predictions])
training_pred_class_id = np.array([item['class_ids'][0] for item in training_predictions])
training_pred_one_hot = tf.keras.utils.to_categorical(training_pred_class_id,10)
validation_predictions = list(classifier.predict(input_fn=predict_validation_input_fn))
validation_probabilities = np.array([item['probabilities'] for item in validation_predictions])
validation_pred_class_id = np.array([item['class_ids'][0] for item in validation_predictions])
validation_pred_one_hot = tf.keras.utils.to_categorical(validation_pred_class_id,10)
# Compute training and validation errors.
training_log_loss = metrics.log_loss(training_targets, training_pred_one_hot)
validation_log_loss = metrics.log_loss(validation_targets, validation_pred_one_hot)
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, validation_log_loss))
# Add the loss metrics from this period to our list.
training_errors.append(training_log_loss)
validation_errors.append(validation_log_loss)
print("Model training finished.")
# Remove event files to save disk space. (In Python 3, `map` is lazy, so we
# iterate explicitly to ensure the files are actually deleted.)
for f in glob.glob(os.path.join(classifier.model_dir, 'events.out.tfevents*')):
    os.remove(f)
# Calculate final predictions (not probabilities, as above).
final_predictions = classifier.predict(input_fn=predict_validation_input_fn)
final_predictions = np.array([item['class_ids'][0] for item in final_predictions])
accuracy = metrics.accuracy_score(validation_targets, final_predictions)
print("Final accuracy (on validation data): %0.2f" % accuracy)
# Output a graph of loss metrics over periods.
plt.ylabel("LogLoss")
plt.xlabel("Periods")
plt.title("LogLoss vs. Periods")
plt.plot(training_errors, label="training")
plt.plot(validation_errors, label="validation")
plt.legend()
plt.show()
# Output a plot of the confusion matrix.
cm = metrics.confusion_matrix(validation_targets, final_predictions)
# Normalize the confusion matrix by row (i.e. by the number of samples
# in each class).
cm_normalized = cm.astype("float") / cm.sum(axis=1)[:, np.newaxis]
ax = sns.heatmap(cm_normalized, cmap="bone_r")
ax.set_aspect(1)
plt.title("Confusion matrix")
plt.ylabel("True label")
plt.xlabel("Predicted label")
plt.show()
return classifier
classifier = train_nn_classification_model(
learning_rate=0.05,
steps=1000,
batch_size=30,
hidden_units=[100, 100],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
```
Next, we verify the accuracy on the test set.
```
mnist_test_dataframe = pd.read_csv(
"https://download.mlcc.google.com/mledu-datasets/mnist_test.csv",
sep=",",
header=None)
test_targets, test_examples = parse_labels_and_features(mnist_test_dataframe)
test_examples.describe()
predict_test_input_fn = create_predict_input_fn(
test_examples, test_targets, batch_size=100)
test_predictions = classifier.predict(input_fn=predict_test_input_fn)
test_predictions = np.array([item['class_ids'][0] for item in test_predictions])
accuracy = metrics.accuracy_score(test_targets, test_predictions)
print("Accuracy on test data: %0.2f" % accuracy)
```
## Task 3: Visualize the weights of the first hidden layer.
Let's take a few minutes to dig into our neural network and see what it has learned by inspecting the weights of its first hidden layer.
The input layer of our model has `784` weights corresponding to the `28×28` pixel input images. The first hidden layer will have `784×N` weights where `N` is the number of nodes in that layer. We can turn those weights back into `28×28` images by *reshaping* each of the `N` `1×784` arrays of weights into `N` arrays of size `28×28`.
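As a standalone illustration of that reshaping (using a randomly initialized matrix in place of the trained kernel, whose name and shape are assumptions here), consider:

```python
import numpy as np

# Stand-in for the trained first-layer kernel: 784 inputs x N hidden nodes.
N = 100
weights0 = np.random.randn(784, N)

# Each column holds one node's 784 incoming weights. Transposing gives one
# row per node; reshaping turns each 1x784 row into a 28x28 image.
images = weights0.T.reshape(N, 28, 28)
print(images.shape)  # (100, 28, 28)
```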
Run the following cell to plot the weights. Note that this cell requires that a `DNNClassifier` called "classifier" has already been trained.
```
print(classifier.get_variable_names())
weights0 = classifier.get_variable_value("dnn/hiddenlayer_0/kernel")
print("weights0 shape:", weights0.shape)
num_nodes = weights0.shape[1]
num_rows = int(math.ceil(num_nodes / 10.0))
fig, axes = plt.subplots(num_rows, 10, figsize=(20, 2 * num_rows))
for coef, ax in zip(weights0.T, axes.ravel()):
# Weights in coef is reshaped from 1x784 to 28x28.
ax.matshow(coef.reshape(28, 28), cmap=plt.cm.pink)
ax.set_xticks(())
ax.set_yticks(())
plt.show()
```
The first hidden layer of the neural network should be modeling some pretty low level features, so visualizing the weights will probably just show some fuzzy blobs or possibly a few parts of digits. You may also see some neurons that are essentially noise -- these are either unconverged or they are being ignored by higher layers.
It can be interesting to stop training at different numbers of iterations and see the effect.
**Train the classifier for 10, 100, and 1000 steps, respectively, and rerun this visualization after each.**
What differences do you see visually for the different levels of convergence?
```
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten, Reshape
from keras.layers.convolutional import Convolution1D, Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras import callbacks
import time
import logging
import keras
print (keras.__version__)
class printbatch(callbacks.Callback):
def on_batch_end(self, batch, logs={}):
# if batch%10 == 0:
print("Batch " + str(batch) + " ends")
def on_epoch_begin(self, epoch, logs={}):
print(logs)
def on_epoch_end(self, epoch, logs={}):
print(logs)
def myGenerator():
# Dataset of 60,000 28x28 grayscale images of the 10 digits, along with a test set of 10,000 images.
# X_train, X_test: uint8 array of grayscale image data with shape (nb_samples, 28, 28).
# y_train, y_test: uint8 array of digit labels (integers in range 0-9) with shape (nb_samples,).
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# ((60000, 28, 28), (60000,), (10000, 28, 28), (10000,))
y_train = np_utils.to_categorical(y_train,10)
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
count = 0
while 1:
for i in range(20): # only 20 batches here; the full training set would be 1875 batches (1875 * 32 = 60000)
print('i:' + str(i) + ' count:' + str(count))
count = count + 1
print('i*32:' + str(i*32) + ' (i+1)*32:' + str((i+1)*32))
yield X_train[i*32:(i+1)*32], y_train[i*32:(i+1)*32] # yield a batch of 32 samples
print("-------")
batch_size = 32
nb_classes = 10
nb_epoch = 12
# input image dimensions
img_rows, img_cols = 28, 28
# number of convolutional filters to use
nb_filters = 32
# size of pooling area for max pooling
nb_pool = 2
# convolution kernel size
nb_conv = 3
model = Sequential()
model.add(Convolution2D(nb_filters, nb_conv, nb_conv,
border_mode='valid',
input_shape=(1, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, nb_conv, nb_conv))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adadelta')
pb = printbatch()
model.fit_generator(myGenerator(),
#samples_per_epoch = 60000,
samples_per_epoch = 640,
nb_epoch = 2,
verbose=1,
callbacks=[pb],
validation_data=None,
class_weight=None,
max_q_size = 1,
nb_worker=1)
for i in range(10):
print(i)
```
```
import numpy as np
import sklearn as sk
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import preprocessing
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
from sklearn.decomposition import PCA
from sklearn import linear_model
# from fbprophet import Prophet
import warnings
warnings.filterwarnings('ignore')
```
# Functions
```
def find_correlation(data, threshold=0.9):
corr_mat = data.corr()
corr_mat.loc[:, :] = np.tril(corr_mat, k=-1)
already_in = set()
result = []
for col in corr_mat:
perfect_corr = corr_mat[col][abs(corr_mat[col])> threshold].index.tolist()
if perfect_corr and col not in already_in:
already_in.update(set(perfect_corr))
perfect_corr.append(col)
result.append(perfect_corr)
select_nested = [f[1:] for f in result]
select_flat = [i for j in select_nested for i in j]
return select_flat
def generate_stat(X_stat):
for names in X_stat.drop(columns=['city','year','weekofyear','week_start_date']):
X_stat['month_avg_'+names] = X_stat[names].rolling(3 ,min_periods=1).mean()
X_stat['month_min_'+names] = X_stat[names].rolling(3 ,min_periods=1).min()
X_stat['month_max_'+names] = X_stat[names].rolling(3 ,min_periods=1).max()
X_stat['lag_3_'+names] = X_stat[names].shift(3)
X_stat['lag_1_'+names] = X_stat[names].shift(1)
return X_stat
def PP_X(X1):
# X1 = X1[(X1['city']=='sj')]
# le = preprocessing.LabelEncoder()
# X1['city'] = le.fit_transform(X1['city'])
X1['week_start_date'] = pd.to_datetime(X1['week_start_date'])
# X1['ndvi_s'] = X1['ndvi_se'] + X1['ndvi_sw']
# X1['ndvi_n'] = X1["ndvi_ne"] + X1['ndvi_nw']
X1 = generate_stat(X1)
X1['month'] = pd.DatetimeIndex(X1['week_start_date']).month
X1 = X1.fillna(method = 'bfill')
X1 = X1.fillna(method = 'ffill')
X1 = X1.drop(
columns=[
'weekofyear',
'week_start_date',
'city',
'year'
]
)
return X1
def PP_Y(Y1):
# Y1 = Y1[(Y1['city']=='sj')]
return Y1
pd.read_csv("dengue_features_train.csv")
```
## Read and clean data
```
X = pd.read_csv("dengue_features_train.csv")
Y = pd.read_csv("dengue_labels_train.csv")
X_New = pd.read_csv("dengue_features_test.csv")
Y_New = pd.read_csv('submission_format.csv')
Xrf = X[(X['city'] == 'sj')]
Yrf = Y[(X['city'] == 'sj')]
Y_New = Y_New[(Y_New['city'] == 'sj')]
X_New = X_New[(X_New['city'] == 'sj')]
X_train, X_test, Y_train, Y_test = Xrf[:int(len(Xrf)*0.8)], Xrf[int(len(Xrf)*0.8):] , Yrf[:int(len(Yrf)*0.8)] , Yrf[int(len(Yrf)*0.8):]
X_train, X_test , X_New = PP_X(X_train),PP_X(X_test) , PP_X(X_New)
```
## Principal Component Analysis
```
pca = PCA(n_components=10)
X_train2 = pd.DataFrame(pca.fit_transform(X_train))
X_train = pd.concat([X_train,X_train2],axis=1)
X_test2 = pd.DataFrame(pca.transform(X_test))
X_test = pd.concat([X_test,X_test2],axis=1)
X_New2 = pd.DataFrame(pca.transform(X_New))
X_New = pd.concat([X_New,X_New2],axis=1)
pca = PCA(n_components=10)
X_train2 = pd.DataFrame(pca.fit_transform(X_train))
X_test2 = pd.DataFrame(pca.transform(X_test))
X_New2 = pd.DataFrame(pca.transform(X_New))
columns_to_drop = find_correlation(X_New , 0.7)
X_test = X_test.drop(columns=columns_to_drop)
X_train = X_train.drop(columns=columns_to_drop)
X_New = X_New.drop(columns=columns_to_drop)
X_crr = X_train.copy(deep=True)
X_crr['total_cases'] = Y['total_cases']
corr = X_crr.corr()
linear_features=abs(corr).total_cases.drop('total_cases').sort_values(ascending=False)[:10].keys()
linear_features
```
## Remove Outliers
```
X_train , Y_train = X_train.loc[Y_train['total_cases'] <150] , Y_train.loc[Y_train['total_cases'] <150]
Y_train = Y_train['total_cases'].values.ravel()
Y_test = Y_test['total_cases'].values.ravel()
rf = RandomForestRegressor(n_estimators=20)
model1 = rf.fit(X_train,Y_train)
predict = rf.predict(X_test)
print(mean_absolute_error(Y_test, predict.round(0).astype(int)))
predict_RF = model1.predict(X_New)
# mode12 = rf.fit(Xrf,Yrf)
```
## Variable Importance
```
plt.figure(figsize=(20,15))
importance = rf.feature_importances_
feat_importances_act = pd.Series(importance, index=X_train.columns)
feat_importances = feat_importances_act.nlargest(20)
feat_importances.plot(kind='barh')
X_train1 = X_train[linear_features]
X_train1['total_cases'] = Y_train
X_train1.to_csv('X_train')
X_test1 = X_test[linear_features]
X_test1['total_cases'] = Y_test
X_test1.to_csv('X_test')
X_New[linear_features].to_csv('X_New')
Y_New.to_csv('Y_New')
from sklearn import linear_model
reg = linear_model.Lasso (alpha = 0.0005 , positive=True)
# reg = rf = RandomForestRegressor(n_estimators=20)
reg.fit(X_train[linear_features],Y_train)
predict2 = reg.predict(X_test[linear_features])
print(mean_absolute_error(Y_test, [max(x,0) for x in predict2.round(0).astype(int)]))
predict_lnr = reg.predict(X_New[linear_features])
predict_lnr= [max(x,0) for x in predict_lnr]
predict_lnr
from statsmodels.genmod.families import NegativeBinomial
from sklearn.preprocessing import StandardScaler
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1))
X_train1 = X_train[linear_features]
X_test1 = X_test[linear_features]
X_New1 = X_New[linear_features]
X_train1 = scaler.fit_transform(X_train1)
X_test1 = scaler.fit_transform(X_test1)
X_New1 = scaler.fit_transform(X_New1)
model_k = Sequential()
model_k.add(LSTM(1, input_shape=(1,10)))
model_k.add(Dense(1))
model_k.compile(loss='mean_squared_error', optimizer='adam')
X_train1 = X_train1.reshape((X_train1.shape[0], 1, X_train1.shape[1]))
X_test1 = X_test1.reshape((X_test1.shape[0], 1, X_test1.shape[1]))
model_k.fit(X_train1, Y_train, epochs=100, batch_size=1, verbose=2)
trainPredict = model_k.predict(X_test1)
print("MAE" , mean_absolute_error(Y_test, trainPredict))
import statsmodels.api as sm
model_nb = sm.GLM(Y_train,X_train[linear_features], family=sm.families.NegativeBinomial()).fit()
predict2 = model_nb.predict(X_test[linear_features])
print(mean_absolute_error(Y_test, [max(x,0) for x in predict2.round(0).astype(int)]))
predict_New2 = model_nb.predict(X_New[linear_features])
predict_New2
```
## Linear regression model
```
Y_New['total_cases'] = np.array(predict_New2, dtype=np.float32).round(0)
Y_New['total_cases'] = Y_New['total_cases'].astype(int)
Y_New.to_csv('submission_sj.csv')
Y_New
X_crr = X.copy(deep=True)
X_crr['total_cases'] = Y['total_cases']
corr = X_crr.corr()
corr.total_cases.drop('total_cases').sort_values(ascending=False).plot.barh()
plt.figure(figsize=(10, 6))
sns.heatmap(corr,
xticklabels=corr.columns,
yticklabels=corr.columns)
```
# Q-Learning Lab Notebook
In this notebook, you will learn the basics of applying the Q-Learning algorithm to simple environments in OpenAI Gym. By the end of the notebook, you will have a working Q-Learning agent that can balance a CartPole and solve other simple control problems in alternate environments.
## Sections
1. <a href=#rl>Basic concept of reinforcement learning (RL)</a>
2. <a href=#cp>CartPole environment from OpenAI Gym</a>
3. <a href=#random>CartPole with a random agent</a>
4. <a href=#qlcode>Q-Learning Implementation in Python</a>
5. <a href=#qlagent>CartPole with a Q-Learning agent</a>
6. <a href=#mtn>MountainCar environment from OpenAI Gym</a>
## Requirements
* OpenAI Gym[classic_control] 0.7.4 or higher
* Python 3.5 or higher
* CUDA 8.0-enabled GPU
Portions of this notebook have been borrowed from the Nvidia Deep Learning Institute (DLI) hands-on labs.
# 1. Reinforcement learning <a name='rl' />
### Problem setting
Unlike standard supervised learning, reinforcement learning is about the sequential decision making of an agent that takes actions to maximize cumulative reward by interacting with an environment (the problem).
<img src="image/rl.png" width="200">
After observing the current state $s_t$ and reward $r_t$, agent will decide which action to take $a_t$.
For clarity, here we assume that actions are discrete, and reward is a scalar value. Multi-dimensional state $s_t$ can be either discrete or continuous.
### Action-value function approximation
There are multiple strategies in RL on how to represent the policy, the behavior mechanism inside the agent and how to optimize it.
In most of this hands-on session, we just consider action-value function approximation, in which the expected cumulative reward for all future steps is represented as a function $Q(s, a)$ for pairs of state $s$ and following action $a$.
By learning a good approximation of the optimal action-value function $Q(s, a)$ and then, at each time step $t$, taking the action $a_t$ that maximizes the learned Q-value $Q(s_t, a_t)$ given $s_t$, the agent realizes the optimal policy.
# 2. Example - CartPole <a name='cp' />
In this section, we introduce a control problem named CartPole and describe how it is handled as an RL problem. Before looking closer at the CartPole environment, we'll import the packages we need and define a visualization function.
### Preparation: Import packages
```
# install packages in Udacity Workspaces
# xvfb-run -s "-screen 0 1400x900x24" bash && export DISPLAY=:0
!python -m pip install pyvirtualdisplay
# uncomment for Udacity server using xvfb
from pyvirtualdisplay import Display
display = Display(visible=0, size=(1400, 900))
display.start()
# The typical imports
import gym
import numpy as np
from tqdm import tqdm
# visualization helpers
from IPython import display
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
# Set logger level
import logging
logging.basicConfig(level=logging.ERROR)
```
### CartPole Environment
As an example of a classical RL problem, we use CartPole-v0 from [OpenAI Gym](https://gym.openai.com/), which contains a two-dimensional physics simulation of a black cart and a yellow pole, a kind of inverted pendulum.
<img src="image/cartpole.png?" width="450">
The following overview of CartPole-v0 is taken from [Gym's Wiki page](https://github.com/openai/gym/wiki/CartPole-v0)
#### Description
A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The pendulum starts upright, and the goal is to prevent it from falling over by increasing and reducing the cart's velocity.
#### Observation
Type: Box(4)
Num | Observation | Min | Max
---|---|---|---
0 | Cart Position | -2.4 | 2.4
1 | Cart Velocity | -Inf | Inf
2 | Pole Angle | ~ -41.8° | ~ 41.8°
3 | Pole Velocity At Tip | -Inf | Inf
#### Actions
Type: Discrete(2)
Num | Action
--- | ---
0 | Push cart to the left
1 | Push cart to the right
Note: The amount the velocity is reduced or increased is not fixed as it depends on the angle the pole is pointing. This is because the center of gravity of the pole increases the amount of energy needed to move the cart underneath it
#### Conditions for episode termination
1. Pole Angle is more than ±20.9°
2. Cart Position is more than ±2.4 (center of the cart reaches the edge of the display)
3. Episode length is greater than 200
#### Reward
Reward is 1 for every step taken, including the termination step
#### Starting state
All observations are assigned a uniform random value between ±0.05
### How Gym's environment works
Gym's environment (problem) has a unified interface with the following three steps.
#### 1. Create environment specified by name
```python
env = gym.make('ENV_NAME')
```
#### 2. Initialize environment
```python
env.reset()
```
#### 3. Take action and observe reward & next state
```python
observation, reward, is_finished, info = env.step(action)
```
where
- observation: the next observation
- reward: a scalar reward
- is_finished: a boolean indicating whether the episode has terminated
- info: additional diagnostic information
By interacting with the environment, a reinforcement learning agent learns how to optimize its strategy for maximizing cumulative reward.
### Execution: How the environment works
Create the CartPole environment and observe the initial state and the result of an action. In this case a random action has been selected by sampling the action space.
```
# Create environment of CartPole-v0
env = gym.make('CartPole-v0')
print('observation space:', env.observation_space)
print('action space:', env.action_space)
observation = env.reset()
# env.render(mode='rgb_array', close=True)
print('initial observation:', observation)
action = env.action_space.sample() # Select random action
print('random action:', action)
observation, reward, is_finished, info = env.step(action)
print('next observation:', observation)
print('reward:', reward)
print('is_finished:', is_finished)
print('info:', info)
```
# 3. CartPole with the Random agent <a name='random' />
### Class: Random agent
```
# copied from gym examples - a model for our agent
class RandAgent(object):
"""The world's simplest agent!"""
def __init__(self, action_space):
self.action_space = action_space
def act(self, observation, reward, done=None, mode=None):
return self.action_space.sample()
def init_episode(self, observation):
# provided for compatibility with general learner
return self.action_space.sample()
```
### Definition: Learner for agents
```
# This learner provides a general interface for agents and environments
# optionally visualize within a Jupyter Notebook when visualize_plt is True
def learner(agent=None, env_id='CartPole-v0', episodes=100, max_length = 100, init_reward=0,
ignore_done=False, visualize_plt=True, mode=None):
# load the environment
env = gym.make(env_id)
# set the agent to random if none provided
if agent is None:
agent = RandAgent(env.action_space)
# each episode runs until it is observed as finished, or exceeds max_length in time steps
episode_count = episodes
done = False
n_steps = np.zeros((episode_count,))
# run the episodes - use tqdm to track in the notebook
for i in tqdm(range(episode_count), disable=visualize_plt):
# Initialize environment for each episode
ob = env.reset()
reward = init_reward
if visualize_plt:
img = plt.imshow(env.render(mode='rgb_array')) # only call this once, only for jupyter
# initialize the agent
agent.init_episode(ob)
n_steps[i]=max_length
# run the steps in each epsisode
for t in range(max_length):
# render the environment
if visualize_plt:
img.set_data(env.render(mode='rgb_array')) # just update the data
plt.axis('off')
display.display(plt.gcf())
display.clear_output(wait=True)
else:
env.render()
# get agent's action
action = agent.act(ob, reward, mode=mode)
# take the action and get reward and updated observation
ob, reward, done, _ = env.step(action)
# terminate the steps if the problem is done
if done and not ignore_done:
n_steps[i] = t
break
if done and ignore_done and env_id=='MountainCar-v0' and ob[0]>= 0.5:
# special case MountainCar
# if have achieved the goal, then quit but otherwise keep going
print("Episode {} done at step {}".format(i,t))
print("Observations {}, Reward {}".format(ob, reward))
n_steps[i] = t
break
env.render(close=True)
return n_steps # stats
```
### Execution: Run the experiment with the random agent
Before introducing an RL agent, you can just repeat random actions to see how the environment changes over time. Here you repeat the episode for 10 times. The number of steps before the cart fails are recorded for each episode.
```
num_episodes = 10
max_length = 200
steps = learner(episodes=num_episodes, max_length=max_length)
print("Minimum step count in {} episodes: {}".format(num_episodes, np.min(steps)))
print("Average step count in {} episodes: {}".format(num_episodes, np.mean(steps)))
print("Maximum step count in {} episodes: {}".format(num_episodes, np.max(steps)))
```
# 4. Traditional RL = Q-learning <a name='qlcode' />
Q-learning is one of the most popular approaches in reinforcement learning, to determine which action is optimal on each state.
At each time step $t$, after taking an action $a_t$ and given reward $r_{t+1}$ and next state $s_{t+1}$, the old Q-value $Q(s_t, a_t)$ is updated by the following rule.
<img src="image/q-learning.svg" width="800">
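In case the image does not render, the standard Q-learning update it depicts can be written as

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha_t \left[ r_{t+1} + \gamma \max_{a} Q(s_{t+1}, a) - Q(s_t, a_t) \right]$$

where $\alpha_t$ is the learning rate and $\gamma$ is the discount factor.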
It is not necessary to follow the mathematical details; the intuition is that the Q-value is updated to account for the difference between the old value and the sum of the actual reward and the discounted estimate of future value from the next action, scaled by the learning rate $\alpha_t$.
Q-learning is an iterative algorithm, so Q-values are used to take actions and then gradually improved over time.
To strike a good balance between exploration (random action) and exploitation (estimated best action), the $\epsilon$-greedy strategy is typically used with Q-learning: the agent takes a random action with probability $\epsilon$ and the current best action otherwise.
The simplest way to implement Q-Learning is to build a table, named Q-table, to store the Q-value for each discrete state and action combination.
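As a minimal sketch of this idea, independent of the agent class defined below (the table sizes here are toy values chosen for illustration):

```python
import numpy as np

n_states, n_actions = 16, 2            # toy sizes, for illustration only
alpha, gamma, epsilon = 0.2, 1.0, 0.5  # learning rate, discount, exploration
q_table = np.zeros((n_states, n_actions))

def choose_action(state):
    # Epsilon-greedy: explore with probability epsilon, otherwise exploit.
    if np.random.uniform(0, 1) < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(q_table[state]))

def update(state, action, reward, next_state):
    # Q-learning update: move Q(s, a) toward the bootstrapped target.
    target = reward + gamma * np.max(q_table[next_state])
    q_table[state, action] += alpha * (target - q_table[state, action])

update(state=0, action=1, reward=1.0, next_state=3)
print(q_table[0, 1])  # 0.2 after one update of an all-zero table
```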
### Class: Q-learning agent
Here we define the QLearningAgent class, in which the observation space is discretized into a Q-table of 20,000 cells: (number of bins (9) + 1) raised to the number of observation dimensions (4), times the number of actions (2). Each cell stores the current Q-value for one combination of discretized state and possible action.
By calling the `act()` method with the current observation (and reward), the agent updates the Q-table and returns the next action.
The hyper-parameters for training have been set at default values, but may be modified when called:
**`learning_rate` = 0.2**
* the new information replaces 20% of the old information at each step
**`discount_factor` = 1.0**
* the agent will strive for a long-term high reward because the value includes expected future rewards
**`exploration_rate` = 0.5**
* the agent will choose a random action, i.e. "explore", 50% of the time
**`exploration_decay_rate` = 0.99**
* the probability of exploration will be reduced by 1% at the start of each episode
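The state encoding used by the agent's `set_state` method can be sketched on its own; the bin edges below mirror the CartPole defaults, while the observation values are hypothetical:

```python
import numpy as np

n_bins = 9
# The same CartPole bin edges used by the agent's defaults.
splits = [
    np.linspace(-2.4, 2.4, n_bins)[1:-1],  # position
    np.linspace(-3.5, 3.5, n_bins)[1:-1],  # velocity
    np.linspace(-0.5, 0.5, n_bins)[1:-1],  # angle
    np.linspace(-2.0, 2.0, n_bins)[1:-1],  # tip velocity
]

def to_state(observation):
    # Digitize each dimension into a bin index, then combine the indices
    # as digits of a base-(n_bins + 1) number to get one integer state.
    state = 0
    for i, value in enumerate(observation):
        state += np.digitize(x=value, bins=splits[i]) * ((n_bins + 1) ** i)
    return state

s = to_state([0.1, -0.2, 0.05, 0.3])  # hypothetical observation
print(s)  # one integer in [0, (n_bins + 1) ** 4)
```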
```
class QLearningAgent:
def __init__(self,
learning_rate = 0.2, discount_factor = 1.0,
exploration_rate = 0.5, exploration_decay_rate = 0.99,
n_bins = 9, n_actions = 2, splits=None):
self.learning_rate = learning_rate
self.discount_factor = discount_factor
self.exploration_rate = exploration_rate # initial epsilon
self.exploration_decay_rate = exploration_decay_rate # decay factor for epsilon
self.n_bins = n_bins
self.n_actions = n_actions
self.splits = splits
self.state = None
self.action = None
if self.splits is None: #CartPole default
self.splits = [
# Position
np.linspace(-2.4, 2.4, self.n_bins)[1:-1],
# Velocity
np.linspace(-3.5, 3.5, self.n_bins)[1:-1],
# Angle.
np.linspace(-0.5, 0.5, self.n_bins)[1:-1],
# Tip velocity
np.linspace(-2.0, 2.0, self.n_bins)[1:-1]
]
# Create Q-Table
num_states = (self.n_bins+1) ** len(self.splits)
self.q_table = np.zeros(shape=(num_states, self.n_actions))
# Turn the observation into integer state
def set_state(self, observation):
state = 0
for i, column in enumerate(observation):
state += np.digitize(x=column, bins=self.splits[i]) * ((self.n_bins + 1) ** i)
return state
# Initialize for each episode
def init_episode(self, observation):
# Gradually decrease exploration rate
self.exploration_rate *= self.exploration_decay_rate
# Decide initial action
self.state = self.set_state(observation)
return np.argmax(self.q_table[self.state])
# Select action and update
def act(self, observation, reward=None, done=None, mode='train'):
next_state = self.set_state(observation)
if mode == 'test':
# Test mode
next_action = np.argmax(self.q_table[next_state])
else:
# Train mode by default
# Train by updating Q-Table based on current reward and 'last' action.
self.q_table[self.state, self.action] += self.learning_rate * \
(reward + self.discount_factor * max(self.q_table[next_state, :]) - self.q_table[self.state, self.action])
# Exploration or exploitation
do_exploration = (1 - self.exploration_rate) < np.random.uniform(0, 1)
if do_exploration:
# Exploration
next_action = np.random.randint(0, self.n_actions)
else:
# Exploitation
next_action = np.argmax(self.q_table[next_state])
self.state = next_state
self.action = next_action
return next_action
```
# 5. CartPole with the Q-Learning agent <a name='qlagent' />
### Preparation: Initialize the Q-learning agent and training parameters
Beginning from an empty Q-table, QLearningAgent tries to learn $Q(s_t, a_t)$ by Q-learning with the $\epsilon$-greedy strategy over 50 episodes. The resulting visualization shows the agent gradually learning from trial and error.
```
# Instantiate the agent
q_agent = QLearningAgent()
num_episodes = 50
max_length = 200
initial_reward = 1
```
### Execution: Train the Q-learning agent
Each time the learner cell is executed, the agent improves its policy. To start over from scratch with the agent, go back and execute the cell that instantiates it with `q_agent = QLearningAgent()`.
The statistics of average and maximum steps achieved during the episodes show that the agent improves its behavior over time. We can also see this in the visualization, as the CartPole stays vertical for longer periods after more training episodes.
```
# train the agent - execute this cell as many times as you wish
# set the visualize_plt flag to True to see the cart in the notebook.
# note that this will run slower if visualized
steps = learner(agent=q_agent, episodes=num_episodes, max_length = max_length,
init_reward=initial_reward, visualize_plt=False)
print("Minimum step count in {} episodes: {}".format(num_episodes, np.min(steps)))
print("Average step count in {} episodes: {}".format(num_episodes, np.mean(steps)))
print("Maximum step count in {} episodes: {}".format(num_episodes, np.max(steps)))
print("Q-table size: ", q_agent.q_table.size)
print("Q-table nonzero count: ", np.count_nonzero(q_agent.q_table))
```
### Execution: Test the trained agent
```
# Testing the agent - run this smaller sampling after the agent is achieving success and NOT exploring
num_episodes = 5
max_length = 200
initial_reward = 1
steps = learner(agent=q_agent, episodes=num_episodes, max_length = max_length,
init_reward=initial_reward, mode='test') # set mode to 'test' to avoid exploration
print("Minimum step count in {} episodes: {}".format(num_episodes, np.min(steps)))
print("Average step count in {} episodes: {}".format(num_episodes, np.mean(steps)))
print("Maximum step count in {} episodes: {}".format(num_episodes, np.max(steps)))
print("Q-table size: ", q_agent.q_table.size)
print("Q-table nonzero count: ", np.count_nonzero(q_agent.q_table))
```
# 6. Example: MountainCar-v0 <a name='mtn' />
OpenAI Gym is full of environments to try with varying difficulties and challenges. Start by reviewing the information found in the [wiki](https://github.com/openai/gym/wiki/Table-of-environments). Here's some information provided about the [MountainCar-v0 environment](https://github.com/openai/gym/wiki/MountainCar-v0):
<img src="image/mtncar.png?" width="450">
#### Description
Get an underpowered car to the top of a hill (the top is at position 0.5).
#### Observation
Type: Box(2)
Num | Observation | Min | Max
---|---|---|---
0 | Position | -1.2 | 0.6
1 | Velocity | -0.07 | 0.07
#### Actions
Type: Discrete(3)
Num | Action
--- | ---
0 | push left
1 | no push
2 | push right
#### Reward
-1 for each time step, until the goal position of 0.5 is reached. As with MountainCarContinuous-v0, there is no penalty for climbing the left hill, which, once reached, acts as a wall.
#### Starting state
Random position from -0.6 to -0.4 with no velocity.
#### Episode termination
The episode ends when you reach 0.5 position, or if 200 iterations are reached.
### Execution: MountainCar with the Random agent
```
# try mountain car with the Random agent (default)
steps = learner(env_id='MountainCar-v0', episodes=3, max_length = 200, init_reward=0)
```
### Preparation: MountainCar with the Q-Learning agent
Before running the Q-Learning agent, we need to consider how the environment setup will affect learning. MountainCar-v0 is a hard problem because the agent can't learn which actions are helpful until it actually climbs the hill successfully. Until that goal is met, every episode accumulates a reward of -1 per timestep, i.e. -200, since the number of allowed steps is capped. It is therefore important that the car explore as many avenues as possible. To help with this in the notebook, we can extend the number of timesteps by ignoring the "done" signal and setting a higher `max_length` parameter in the learner; a special-case provision for exactly this has been included in the learner for MountainCar-v0.
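As a refresher, the tabular update the Q-Learning agent applies at each step looks like the following sketch (a generic implementation on a toy state grid; the notebook's `QLearningAgent` may organize its table differently):

``` python
import numpy as np

def q_update(q_table, state, action, reward, next_state, lr=0.2, gamma=0.9):
    # standard tabular Q-learning target: r + gamma * max_a' Q(s', a')
    best_next = np.max(q_table[next_state])
    td_target = reward + gamma * best_next
    q_table[state + (action,)] += lr * (td_target - q_table[state + (action,)])

# toy 3x3 discretized state grid with 2 actions
q = np.zeros((3, 3, 2))
q_update(q, state=(0, 0), action=1, reward=-1.0, next_state=(0, 1))
print(q[0, 0, 1])
```

With an all-zero table, the target is just the reward, so the updated entry moves a fraction `lr` of the way toward -1.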
The Q-Learning agent also needs a way to discretize the observations for MountainCar, just as it did for the CartPole problem. Although the agent itself will use the same basic iterative algorithm, we need to change the setup a bit. This can be accomplished by setting up a different table of "splits".
Whereas in CartPole, the split was:
``` python
splits = [
# Position
np.linspace(-2.4, 2.4, n_bins)[1:-1],
# Velocity
np.linspace(-3.5, 3.5, n_bins)[1:-1],
# Angle.
np.linspace(-0.5, 0.5, n_bins)[1:-1],
# Tip velocity
np.linspace(-2.0, 2.0, n_bins)[1:-1]
]
```
MountainCar only has two observation values:
``` python
splits = [
# Position
np.linspace(-1.2, 0.6, n_bins)[1:-1],
# Velocity
np.linspace(-0.07, 0.07, n_bins)[1:-1],
]
```
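To see how these edges are used, here is a sketch of mapping a raw MountainCar observation to a discrete state tuple with `np.digitize` (the agent's own discretization method may differ in detail):

``` python
import numpy as np

n_bins = 20
splits = [
    np.linspace(-1.2, 0.6, n_bins)[1:-1],    # position bin edges
    np.linspace(-0.07, 0.07, n_bins)[1:-1],  # velocity bin edges
]

def discretize(observation, splits):
    # each component falls into the bin index returned by np.digitize
    return tuple(int(np.digitize(value, edges)) for value, edges in zip(observation, splits))

state = discretize([-0.5, 0.0], splits)
print(state)
```

The resulting tuple indexes directly into the agent's Q-table.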
In addition, the MountainCar problem has three actions whereas CartPole only had two, so the agent will need to know that. The Q-Learning agent defined previously has a provision for passing all the parameters it uses, including the number of actions and the splits. Go ahead and create a split table now for the MountainCar Q-Learning agent and define the agent.
```
# TODO define the number of bins `n_bins` and a list named `splits_mtncar` for use by the Q-Learning agent.
# The n_bins used for CartPole was 9, but feel free to experiment with this number
# n_bins =
# splits_mtncar =
# ANSWER
n_bins = 20
splits_mtncar = [
# Position
np.linspace(-1.2, 0.6, n_bins)[1:-1],
# Velocity
np.linspace(-0.07, 0.07, n_bins)[1:-1],
]
# TODO instantiate a QLearningAgent named q_agent_mtncar
# you may want to tweak the hyper-parameters, such as the exploration rate, to increase
# the agent's chances of climbing the hill
# q_agent_mtncar =
# ANSWER
q_agent_mtncar = QLearningAgent(learning_rate = 0.2, discount_factor = 0.9,
exploration_rate = 0.8,
n_actions = 3, n_bins = n_bins, splits = splits_mtncar)
# Run the agent long enough for it to achieve success within 500 steps
# Feel free to modify any of these parameters
num_episodes = 50
max_length = 2000
steps = learner(agent=q_agent_mtncar, env_id='MountainCar-v0',
episodes=num_episodes, max_length = max_length, init_reward=0,
ignore_done=True, visualize_plt=False)
print("Minimum step count in {} episodes: {}".format(num_episodes, np.min(steps)))
print("Average step count in {} episodes: {}".format(num_episodes, np.mean(steps)))
print("Maximum step count in {} episodes: {}".format(num_episodes, np.max(steps)))
print("Q-table size: ", q_agent_mtncar.q_table.size)
print("Q-table nonzero count: ", np.count_nonzero(q_agent_mtncar.q_table))
```
### Execution: Test the trained agent
```
# Testing the agent - run this smaller sampling after the agent is achieving success
num_episodes = 3
max_length = 500
steps = learner(agent=q_agent_mtncar, env_id='MountainCar-v0',
episodes=num_episodes, max_length = max_length, init_reward=0,
ignore_done=True, visualize_plt=True, mode='test')
print("Minimum step count in {} episodes: {}".format(num_episodes, np.min(steps)))
print("Average step count in {} episodes: {}".format(num_episodes, np.mean(steps)))
print("Maximum step count in {} episodes: {}".format(num_episodes, np.max(steps)))
print("Q-table size: ", q_agent_mtncar.q_table.size)
print("Q-table nonzero count: ", np.count_nonzero(q_agent_mtncar.q_table))
```
# Congratulations!
Now you are on your way to trying other environment problems and RL algorithms. Feel free to add additional cells here in order to try additional environments in OpenAI Gym.
| github_jupyter |
# EDA on a full page
## Imports
```
%run "../config/local.ipynb"
from random import randint
import numpy as np
import matplotlib.pyplot as plt
import cv2
import tqdm.notebook as tq
import math
import xml.etree.ElementTree as ET
import pandas as pd
```
## Features
```
features = os.listdir(ORIGINAL_FEATURES_DIR)
```
## Display some features
```
grid_size = (2,2)
plt.figure(figsize=(16,16))
indexes = np.random.choice(range(len(features)), size=grid_size[0]*grid_size[1], replace=False)
for i in range(grid_size[0]*grid_size[1]):
plt.subplot(grid_size[0],grid_size[1],i+1)
idx = indexes[i]
img_name = features[idx]
img_path = os.path.join(ORIGINAL_FEATURES_DIR, img_name)
img = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
plt.imshow(img, cmap='gray', vmin=0, vmax=255)
plt.title(img_name)
plt.axis('off')
plt.show()
```
The features are images of pages containing handwritten text.
### About the files
```
idx = randint(0, len(features) - 1)  # randint is inclusive on both ends
img_name = features[idx]
img_path = os.path.join(ORIGINAL_FEATURES_DIR, img_name)
img = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
print("img shape: ", img.shape)
```
The images are at quite a high resolution. To make learning on them feasible, they must be cropped or downscaled.
### Colors
#### Colors levels
```
colors_quantities = np.zeros(256)  # one bin per gray level 0-255
img_flattened = img.flatten()
for i in tq.tqdm(range(len(img_flattened)), total=len(img_flattened)):
    value = img_flattened[i]
    colors_quantities[value] += 1
colors_quantities = colors_quantities.astype(int)
```
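The per-pixel Python loop above is slow; `np.bincount` computes the same gray-level histogram in one vectorized call. A sketch on synthetic data:

``` python
import numpy as np

# synthetic 8-bit grayscale image standing in for a page scan
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
counts = np.bincount(img.ravel(), minlength=256)  # one bin per gray level 0-255
print(counts.shape, counts.sum())
```

`minlength=256` guarantees a bin for every possible gray level even if some never occur.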
#### One image analysis
```
plt.figure(figsize=(16,6))
grid_size = (1,6)
range_length = math.floor(len(colors_quantities) / (grid_size[0]*grid_size[1]))
total = colors_quantities.sum()
m = np.zeros((10,10))
ratios = np.zeros((len(features)))
for i in range(grid_size[0]*grid_size[1]):
plt.subplot(grid_size[0],grid_size[1],i+1)
from_idx = i * range_length
to_idx = (i+1) * range_length
sample = colors_quantities[from_idx:to_idx]
img = np.full((32,32), math.floor((from_idx+to_idx)/2))
ratio = round(sample.sum()*100/total,1)
plt.imshow(img, cmap='gray', vmin=0, vmax=255)
plt.title(ratio)
```
#### Whole features analysis
```
colors_quantities = np.zeros(256)  # one bin per gray level 0-255
# accumulate the gray-level histogram over the first 100 features
for i in tq.tqdm(range(len(features[:100])), total=len(features[:100])):
    # read the image and add its histogram
    img = cv2.imread(os.path.join(ORIGINAL_FEATURES_DIR, features[i]), cv2.IMREAD_GRAYSCALE)
    colors_quantities += np.bincount(img.flatten(), minlength=256)
colors_quantities = colors_quantities.astype(int)
plt.figure(figsize=(16,6))
grid_size = (1,6)
range_length = math.floor(len(colors_quantities) / (grid_size[0]*grid_size[1]))
total = colors_quantities.sum()
m = np.zeros((10,10))
for i in range(grid_size[0]*grid_size[1]):
plt.subplot(grid_size[0],grid_size[1],i+1)
from_idx = i * range_length
to_idx = (i+1) * range_length
sample = colors_quantities[from_idx:to_idx]
img = np.full((32,32), math.floor((from_idx+to_idx)/2))
ratio = round(sample.sum()*100/total,1)
plt.imshow(img, cmap='gray', vmin=0, vmax=255)
plt.title(ratio)
```
### Downscale the image
```
grid_size = (1,10)
plt.figure(figsize=(16,16))
indexes = np.random.choice(range(len(features)), size=grid_size[0]*grid_size[1], replace=False)
idx = randint(0, len(features) - 1)  # randint is inclusive on both ends
img_name = features[idx]
img_path = os.path.join(ORIGINAL_FEATURES_DIR, img_name)
img = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
for i in range(grid_size[0]*grid_size[1]):
plt.subplot(grid_size[0],grid_size[1],i+1)
    # cv2.resize expects (width, height)
    dim = (int(img.shape[1]/(i+1)), int(img.shape[0]/(i+1)))
    resized = cv2.resize(img, dim, interpolation = cv2.INTER_NEAREST)
    cv2.imwrite(os.path.join("/tmp", "{}_{}x{}.png".format(img_name[:-4], dim[0], dim[1])), resized)
    plt.imshow(resized, cmap='gray', vmin=0, vmax=255)
plt.title("({},{})".format(dim[0],dim[1]))
plt.axis('off')
plt.show()
```
#### Resize all images
```
def resize_image(src, size=(512,512)):
    img = cv2.imread(src, cv2.IMREAD_GRAYSCALE)
    try:
        # scale by height, preserving the aspect ratio
        scale_factor = size[0] / img.shape[0]
        new_w = int(img.shape[1] * scale_factor)
        resized = cv2.resize(img, (new_w, size[0]), interpolation=cv2.INTER_NEAREST)
        # pad horizontally with the background color up to the target width
        patched = np.full(size, resized[0,0], dtype=resized.dtype)
        margin_left = (patched.shape[1] - resized.shape[1]) // 2
        patched[:, margin_left:margin_left + resized.shape[1]] = resized
        return patched
    except Exception as e:
        print("ERROR on src: ", src, e)
        return None
def save_image(image, dest):
cv2.imwrite(dest, image)
def resize_all(src_dir, dest_dir, size=(512,512)):
images = os.listdir(src_dir)
for i in tq.tqdm(range(len(images)), total=len(images)):
src = os.path.join(src_dir, images[i])
dest = os.path.join(dest_dir, images[i])
img = resize_image(src, size)
save_image(img, dest)
resize_all(ORIGINAL_FEATURES_DIR, RESIZED_512x512_FEATURES_DIR)
```
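The horizontal patching inside `resize_image` is a letterbox-style pad; a numpy-only sketch of the idea (the `pad_to_square` helper is hypothetical, not part of the notebook):

``` python
import numpy as np

def pad_to_square(img, fill=0):
    # center a grayscale array on a square canvas filled with `fill`
    h, w = img.shape
    side = max(h, w)
    out = np.full((side, side), fill, dtype=img.dtype)
    top, left = (side - h) // 2, (side - w) // 2
    out[top:top + h, left:left + w] = img
    return out

sq = pad_to_square(np.full((2, 4), 7, dtype=np.uint8))
print(sq.shape)
```

Padding rather than stretching keeps the handwriting's aspect ratio intact for the downstream model.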
## Labels
```
idx = randint(0, len(features) - 1)
tree = ET.parse(os.path.join(XML_LABELS_DIR, features[idx].replace(".png", ".xml")))
root = tree.getroot()
for child in root:
print(child.tag, child.attrib)
chars = []
for word in root.iter('word'):
for char in word.iter('cmp'):
if int(char.attrib['width']) < 60:
chars.append(char.attrib)
df = pd.DataFrame(chars)
img_path = os.path.join(ORIGINAL_FEATURES_DIR, features[idx])
img = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)
for idx, row in df.iterrows():
start_point = (int(row['x']),int(row['y']))
end_point = (int(row['x'])+int(row['width']), int(row['y']) + int(row['height']))
color = (0, 0, 0)
# img = cv2.rectangle(img, (10,10), (20,20), (0,0,0), 2)
img = cv2.rectangle(img, start_point, end_point, color, 2)
plt.figure(figsize=(16,16))
plt.imshow(img, cmap='gray', vmin=0, vmax=255)
```
| github_jupyter |
```
import sys
import importlib
sys.path.insert(0, '/cndd/fangming/CEMBA/snmcseq_dev')
from __init__ import *
from __init__jupyterlab import *
from matplotlib.ticker import MaxNLocator
from matplotlib.patches import Rectangle
import collections
import itertools
import tables
#from adjustText import adjust_text
from scipy import sparse
from scipy import stats
from scipy import optimize
import scipy.cluster.hierarchy as sch
import fbpca
# import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests
import snmcseq_utils
importlib.reload(snmcseq_utils)
import CEMBA_clst_utils
importlib.reload(CEMBA_clst_utils)
import enhancer_gene_utils
```
# config
```
import datetime
today = datetime.date.today().strftime("%y%m%d")
output_figures = 'figures/corr_and_linked{}_{{}}.{{}}'.format(today)
output_results = 'results_new/corr_and_linked{}_{{}}.{{}}'.format(today)
```
# read data
```
f = '/cndd/Public_Datasets/CEMBA/snmCSeq/References/Annotation/gencode.vM16.annotation_genes.tsv'
df_genes = pd.read_csv(f, sep='\t')
df_genes['chrom'] = df_genes['chr'].apply(lambda x: x[3:])
df_genes = df_genes[df_genes['chrom'].isin(snmcseq_utils.get_mouse_chromosomes())]
df_genes['gid'] = df_genes['gene_id'].apply(lambda x: x.split('.')[0])
df_genes['length'] = df_genes['end'] - df_genes['start']
print(df_genes.shape)
df_genes.head()
df_genes_v2 = df_genes.groupby('gene_name').first()
df_genes_v2['chrom'] = df_genes_v2['chr'].apply(lambda x: (x[3:]))
gid_to_name = df_genes.set_index('gid')['gene_name']
df_genes_v2.head()
gid_to_name.head()
data_dir = '/cndd2/fangming/projects/scf_enhancers/enhancer_ethan38_200520/'
# list of enhancers
f = os.path.join(data_dir, 'results/enhancers.bed')
df_enhs = pd.read_csv(f, sep='\t', header=None, names=['chr', 'start', 'end', 'clsts'])
df_enhs['length'] = df_enhs['end'] - df_enhs['start']
df_enhs['index'] = df_enhs.index.values
df_enhs['chrom'] = df_enhs['chr'].apply(lambda x: x[3:])
print(df_enhs.shape)
df_enhs.head()
# list of clusters
f = os.path.join(data_dir, 'ref/annotations_order.tsv')
clst_annot = pd.read_csv(f, sep='\t').set_index('cluster')['annotation']
print(clst_annot.shape)
clst_annot.head()
```
### get features
```
# global mC levels
import pickle as pkl
fs = [
os.path.join(data_dir, 'results/cluster_global_mcg_round2.pkl'),
os.path.join(data_dir, 'results/cluster_global_mcg_round3.pkl'),
]
global_mean_mcg = []
for f in fs:
with open(f, 'rb') as fh:
global_mean_mcg.append(pkl.load(fh))
global_mean_mcg = pd.concat(global_mean_mcg)
fs = [
os.path.join(data_dir, 'results/cluster_global_mch_round2.pkl'),
os.path.join(data_dir, 'results/cluster_global_mch_round3.pkl'),
]
global_mean_mch = []
for f in fs:
with open(f, 'rb') as fh:
global_mean_mch.append(pkl.load(fh))
global_mean_mch = pd.concat(global_mean_mch)
print(global_mean_mcg.shape, global_mean_mch.shape)
global_mean_mcg.head()
```
## 4 matrices
```
# gene rna
f = os.path.join(data_dir, 'results/gene_counts_10x_cells_v3_ethan38.tsv')
gene_rna_clsts = pd.read_csv(f, sep='\t', index_col=0)
nclsts = gene_rna_clsts.shape[1]
print(gene_rna_clsts.shape)
# remove genes with coverage in less than half of clusters
cov_mat = (gene_rna_clsts > 50)
gene_conds = (cov_mat.sum(axis=1) > int(nclsts*0.5))
clsts_conds = (cov_mat.sum(axis=0) > 1000) # coverage in more than 1000 genes
gene_rna_clsts = gene_rna_clsts.loc[gene_conds, clsts_conds]
print(gene_rna_clsts.shape)
# logcpm normalization
# gene_lengths = df_genes.set_index('gid').reindex(gene_rna_clsts.index)['length'].dropna()
# gene_lengths = gene_lengths.fillna(np.nanmean(gene_lengths))
# gene_rna_clsts = snmcseq_utils.logcpm(gene_rna_clsts.loc[gene_lengths.index])
gene_rna_clsts = snmcseq_utils.logcpm(gene_rna_clsts) # this should be the same
print(gene_rna_clsts.shape)
gene_rna_clsts.head()
# gene mch
f = os.path.join(data_dir, 'results/gene_counts_mch_mch_ethan38.tsv')
mc_table = pd.read_csv(f, sep='\t', index_col=0)
f = os.path.join(data_dir, 'results/gene_counts_mch_ch_ethan38.tsv')
c_table = pd.read_csv(f, sep='\t', index_col=0)
nclsts = c_table.shape[1]
print(mc_table.shape, c_table.shape)
mc_table.head()
# remove low coverage genes
# remove low coverage clusters
base_call_cutoff = 1e3
_cov_mat = (c_table >= base_call_cutoff)
clst_cond = (_cov_mat.sum(axis=0) > 1000) # more than 1000 genes are covered in that cell type
gene_cond = (_cov_mat.sum(axis=1) > int(0.5*nclsts)) # the gene is covered in more than half of the clusters
gene_mch_c_clsts = c_table.loc[gene_cond, clst_cond]
gene_mch_mc_clsts = mc_table.loc[gene_cond, clst_cond]
print(gene_mch_c_clsts.shape)
print(gene_mch_mc_clsts.shape)
# get mcc
gene_mch_clsts = snmcseq_utils.get_mcc_lite_v2(gene_mch_c_clsts, gene_mch_mc_clsts, base_call_cutoff=base_call_cutoff)
gene_mch_clsts = gene_mch_clsts.divide(global_mean_mch.loc[gene_mch_clsts.columns.values], axis=1)
del gene_mch_c_clsts, gene_mch_mc_clsts
print(gene_mch_clsts.shape)
gene_mch_clsts.head()
# enhancer mcg
f = os.path.join(data_dir, 'results/enhancer_cluster_mcg.tsv')
mc_table = pd.read_csv(f, sep='\t', index_col=[0, 1, 2], dtype={'chr': str})
f = os.path.join(data_dir, 'results/enhancer_cluster_cg.tsv')
c_table = pd.read_csv(f, sep='\t', index_col=[0, 1, 2], dtype={'chr': str})
nclsts = c_table.shape[1]
print(mc_table.shape, c_table.shape)
mc_table.head()
# remove low coverage enhs
# remove low coverage clusters
base_call_cutoff = 20
_cov_mat = (c_table >= base_call_cutoff)
clst_cond = (_cov_mat.sum(axis=0) > 1000) # more than 1000 enhs are covered in that cell type
enh_cond = (_cov_mat.sum(axis=1) > int(0.5*nclsts)) # the enhancer is covered in more than half of the clusters
enh_mcg_c_clsts = c_table.loc[enh_cond, clst_cond]
enh_mcg_mc_clsts = mc_table.loc[enh_cond, clst_cond]
print(enh_mcg_c_clsts.shape, enh_mcg_mc_clsts.shape)
# get mcc
enh_mcg_clsts = snmcseq_utils.get_mcc_lite_v2(enh_mcg_c_clsts, enh_mcg_mc_clsts, base_call_cutoff=base_call_cutoff)
enh_mcg_clsts = enh_mcg_clsts.divide(global_mean_mcg.loc[enh_mcg_clsts.columns.values], axis=1)
print(enh_mcg_clsts.shape)
# index
enh_mcg_clsts.index = df_enhs.set_index(['chrom', 'start', 'end']).reindex(enh_mcg_clsts.index)['index']
print(enh_mcg_clsts.shape)
del enh_mcg_c_clsts, enh_mcg_mc_clsts
print(enh_mcg_clsts.shape)
enh_mcg_clsts.head()
# enhancer atac
f = os.path.join(data_dir, 'results/enhancer_cluster_atac.tsv')
enh_atac_clsts = pd.read_csv(f, sep='\t', index_col=[0, 1, 2])
print(enh_atac_clsts.shape)
enh_atac_clsts.head()
nclsts = enh_atac_clsts.shape[1]
print(enh_atac_clsts.shape)
# remove enhs with coverage in less than half of clusters
# remove clusters with low coverage
cov_mat = (enh_atac_clsts > 0)
enh_conds = (cov_mat.sum(axis=1) > int(nclsts*0.5))
clsts_conds = (cov_mat.sum(axis=0) > 1000) # coverage in more than 1000 enhancers
enh_atac_clsts = enh_atac_clsts.loc[enh_conds, clsts_conds]
print(enh_atac_clsts.shape)
# logtpm normalization
enh_lengths = df_enhs.copy()
enh_lengths['start'] = df_enhs['start'] - 1000
enh_lengths['end'] = df_enhs['end'] + 1000
enh_lengths['length'] = df_enhs['length'] + 2*1000 # 1kb flanking
enh_lengths = enh_lengths.set_index(['chr', 'start', 'end']).reindex(enh_atac_clsts.index)
enh_atac_clsts = snmcseq_utils.logtpm(enh_atac_clsts, enh_lengths['length'])
enh_atac_clsts.index = enh_lengths['index']
print(enh_atac_clsts.shape)
enh_atac_clsts.head()
```
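The recurring pattern above — mask entries below a coverage cutoff, then take the methylated/total ratio — can be sketched as follows (a guess at what `get_mcc_lite_v2` computes; the real utility may smooth or impute differently):

``` python
import numpy as np
import pandas as pd

def mcc_ratio(mc, c, base_call_cutoff):
    # methylated / total base calls where coverage passes the cutoff; NaN elsewhere
    return (mc / c).where(c >= base_call_cutoff)

mc = pd.DataFrame({'clst1': [5, 1], 'clst2': [2, 0]})
c  = pd.DataFrame({'clst1': [10, 2], 'clst2': [20, 1]})
ratio = mcc_ratio(mc, c, base_call_cutoff=5)
print(ratio)
```

Low-coverage entries become NaN rather than noisy ratios, which is why the coverage filters above are applied first.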
# subset to variable
```
# most variable genes
gene_var = gene_rna_clsts.std(axis=1)
var_genes = gene_var > np.percentile(gene_var, 70)
np.sum(var_genes)
```
### Shared enhancers, shared genes, shared clusters, and nan check
```
print(gene_rna_clsts.shape)
print(gene_mch_clsts.shape)
print(enh_mcg_clsts.shape)
print(enh_atac_clsts.shape)
shared_genes = np.intersect1d(gene_rna_clsts.index.values, gene_mch_clsts.index.values)
shared_enhs = np.intersect1d(enh_mcg_clsts.index.values, enh_atac_clsts.index.values)
shared_clusters = np.array(list(set(gene_rna_clsts.columns.tolist())
& set(gene_mch_clsts.columns.tolist())
& set(enh_mcg_clsts.columns.tolist())
& set(enh_atac_clsts.columns.tolist())
))
print(shared_genes.shape, shared_enhs.shape, shared_clusters.shape)
print(shared_genes[:2])
print(shared_enhs[:2])
print(shared_clusters[:2])
gene_rna_clsts = gene_rna_clsts.loc[shared_genes, shared_clusters]
gene_mch_clsts = gene_mch_clsts.loc[shared_genes, shared_clusters]
enh_mcg_clsts = enh_mcg_clsts.loc[shared_enhs, shared_clusters]
enh_atac_clsts = enh_atac_clsts.loc[shared_enhs, shared_clusters]
print(gene_rna_clsts.shape)
print(gene_mch_clsts.shape)
print(enh_mcg_clsts.shape)
print(enh_atac_clsts.shape)
print(gene_rna_clsts.isnull().sum().sum())
print(gene_mch_clsts.isnull().sum().sum())
print(enh_mcg_clsts.isnull().sum().sum())
print(enh_atac_clsts.isnull().sum().sum())
```
### Get nearest gene info for each region
```
f = os.path.join(data_dir, 'results/enhancer_nearest_genes.bed')
regions_info = pd.read_csv(f, sep='\t', header=None, dtype={0: str, 4: str})
regions_info.head()
# regions_info = regions_info.iloc[:, [0,1,2,8,9,11]]
regions_info = regions_info.iloc[:, [0,1,2,7,8,9,10,11]]
regions_info.columns = ['chr', 'start', 'end', 'transcript_id', 'transcript_name', 'gene_id', 'gene_name', 'distance']
regions_info = pd.merge(regions_info, df_enhs, on=['chr', 'start', 'end']).sort_values('index')
# regions_info = regions_info.groupby('index').first().reset_index()
print(regions_info.shape)
regions_info.head()
```
## Correlation
#### Length of enhancers
```
lens = df_enhs.loc[shared_enhs, 'length']
fig, ax = plt.subplots()
sns.distplot(lens.values/1000, ax=ax)
ax.set_xlabel('Length in kb')
ax.set_title('Distribution of enhancer length')
output_name = 'Enhancer length distribution'
fig.savefig(output_figures.format(output_name, 'pdf'), bbox_inches='tight')
plt.show()
# full correlation
# gene
_x_features = shared_genes
_X = gene_rna_clsts.values
# # enhancer
_y_features = shared_enhs
_Y = enh_mcg_clsts.values
# # enhancer
_y2_features = shared_enhs
_Y2 = enh_atac_clsts.values
print(_X.shape, _Y.shape, _Y2.shape)
# row genes_list
def get_tss(row):
if row['strand'] == '+':
return row['start']
elif row['strand'] == '-':
return row['end']
genes_list = df_genes.set_index('gid').reindex(shared_genes).reset_index().copy()
genes_list['chrom'] = genes_list['chr'].apply(lambda x: x[len('chr'):])
genes_list['tss'] = genes_list.apply(get_tss, axis=1)
# row ens_list
ens_list = df_enhs.set_index('index').loc[shared_enhs].reset_index()
ens_list['center'] = ens_list[['start', 'end']].mean(axis=1).astype(int)
print(ens_list.shape, genes_list.shape)
genes_list.head()
_X_ranks = snmcseq_utils.rank_rows(_X)
_Y_ranks = snmcseq_utils.rank_rows(_Y)
_Y2_ranks = snmcseq_utils.rank_rows(_Y2)
# this should be imported from a list
# more than 15 mins for ~500,000 enhancers
# pair each gene TSS with enhancer centers within +/- 1 Mb, excluding the inner +/- 2 kb
# to_evals - ens, gene, val, dist
KB = 1000
window_size = 2000*KB # (+/- 1Mb)
inner_window_size = 4*KB #(+/- 2kb)
ti = time.time()
# ens, gene
toeval_genes = []
toeval_enhs = []
toeval_dists = []
toeval_isingenebody = []
for idx, gene in genes_list.iterrows():
if idx % 1000 == 0:
print(idx, time.time()-ti)
chrom, pos, start, end = gene['chrom'], gene['tss'], gene['start'], gene['end']
if chrom in ['Y']:
continue
chrom_size = snmcseq_utils.get_chrom_lengths_mouse().loc[chrom]
window = [max(0, pos-window_size/2),
min(chrom_size, pos+window_size/2)]
window_exclude = [max(0, pos-inner_window_size/2),
min(chrom_size, pos+inner_window_size/2)
]
in_gene = [start, end]
# get ens
pos_enh = ens_list['center']
cond = ((ens_list['chrom'] == chrom)
& (pos_enh >= window[0])
& (pos_enh <= window[1])
& ((pos_enh <= window_exclude[0]) | (pos_enh >= window_exclude[1]))
)
in_gene = ((ens_list['chrom'] == chrom)
& (pos_enh >= in_gene[0])
& (pos_enh <= in_gene[1])
)
ens_include = ens_list[cond]['index'].values
in_gene = in_gene[cond].values
dist = np.abs(ens_list[cond]['center'] - pos)
toeval_genes.append([gene['gid']]*len(ens_include))
toeval_enhs.append(ens_include)
toeval_dists.append(dist)
toeval_isingenebody.append(in_gene)
# if idx > 10:
# break
to_evals = pd.DataFrame(np.array([
np.hstack(toeval_genes),
np.hstack(toeval_enhs),
np.hstack(toeval_dists),
np.hstack(toeval_isingenebody),
]).T, columns=['gene', 'enh', 'dist', 'is_in_genebody'])
print(to_evals.shape)
to_evals.head()
# save 4 matrices
"""
genes
enhancers
pairs
clusters
gene_clst_rna
gene_clst_mch
enhancer_clst_mcg
enhancer_clst_atac
"""
to_save = [
genes_list, # genes
ens_list,
to_evals[['gene', 'enh', 'dist', 'is_in_genebody']],
shared_clusters,
gene_rna_clsts,
gene_mch_clsts,
enh_mcg_clsts,
enh_atac_clsts,
]
for i in to_save:
print(i.shape)
import pickle
to_save_filename = '/sphere/fangming/enhancers/scripts/data_organized/enhancer_gene_analysis_processed_data_201026.pkl'
with open(to_save_filename, 'wb') as fh:
pickle.dump(to_save, fh)
```
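Row-ranking the matrices before correlating (as with `rank_rows` above) makes a subsequent Pearson computation equivalent to Spearman correlation; a quick check of that equivalence on random data, using `scipy.stats` which the notebook already imports:

``` python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x, y = rng.normal(size=50), rng.normal(size=50)
# Pearson on ranks equals Spearman on the raw values
ranked_pearson = stats.pearsonr(stats.rankdata(x), stats.rankdata(y))[0]
spearman = stats.spearmanr(x, y)[0]
print(ranked_pearson, spearman)
```

Pre-ranking once is much cheaper than calling `spearmanr` per enhancer-gene pair.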
| github_jupyter |
<a href="https://colab.research.google.com/github/zangell44/DS-Unit-2-Sprint-3-Advanced-Regression/blob/master/DS_Unit_2_Sprint_Challenge_3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Data Science Unit 2 Sprint Challenge 3
## Logistic Regression and Beyond
In this sprint challenge you will fit a logistic regression modeling the probability of an adult having an income above 50K. The dataset is available at UCI:
https://archive.ics.uci.edu/ml/datasets/adult
Your goal is to:
1. Load, validate, and clean/prepare the data.
2. Fit a logistic regression model
3. Answer questions based on the results (as well as a few extra questions about the other modules)
Don't let the perfect be the enemy of the good! Manage your time, and make sure to get to all parts. If you get stuck wrestling with the data, simplify it (if necessary, drop features or rows) so you're able to move on. If you have time at the end, you can go back and try to fix/improve.
### Hints
It has a variety of features - some are continuous, but many are categorical. You may find [pandas.get_dummies](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html) (a method to one-hot encode) helpful!
The features have dramatically different ranges. You may find [sklearn.preprocessing.minmax_scale](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.minmax_scale.html#sklearn.preprocessing.minmax_scale) helpful!
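A minimal illustration of both hints together, on toy data rather than the census columns:

``` python
import pandas as pd
from sklearn.preprocessing import minmax_scale

df = pd.DataFrame({'color': ['red', 'blue', 'red'], 'hours': [40, 60, 20]})
dummies = pd.get_dummies(df, columns=['color'])    # one-hot encode the categorical
dummies['hours'] = minmax_scale(dummies['hours'])  # squash the numeric feature to [0, 1]
print(dummies)
```

`get_dummies` expands `color` into `color_blue`/`color_red` indicator columns, and `minmax_scale` maps 20/40/60 hours to 0.0/0.5/1.0.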
## Part 1 - Load, validate, and prepare data
The data is available at: https://archive.ics.uci.edu/ml/datasets/adult
Load it, name the columns, and make sure that you've loaded the data successfully. Note that missing values for categorical variables can essentially be considered another category ("unknown"), and may not need to be dropped.
You should also prepare the data for logistic regression - one-hot encode categorical features as appropriate.
```
# imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
# import plotly
# import plotly.plotly as py
# import cufflinks as cf
# import plotly.graph_objs as go
# plotly.tools.set_credentials_file(username='zangell', api_key='bs2CJxqOA2hlrJXKyeM9')
```
### Load Data
```
headers = ['age','workclass', 'fnlwgt', 'education', 'education-num', 'marital-status', 'occupation', 'relationship', 'race', 'sex',
'capital-gain','capital-loss', 'hours-per-week','native-country', 'over50k']
income_train = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data',
header=None,
names=headers,
index_col=False)
income_train.head()
income_test = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test',
header=None,
index_col=False,
names=headers)
# there's an annoying header row we'll need to drop
income_test.drop(0, axis=0, inplace=True)
income_test.head()
```
### Data Cleaning
First, we need to run some sanity checks and do some basic cleaning
- Ensure we have the right number of values
- Ensure missing values are dealt with
Additionally, we need to make some changes to the data, including, but not limited to:
- Encoding target as binary
- Encoding sex as binary
- One hot encoding categorical variables
```
# ensure we have the correct number of values
# based on UCI, we should have 14 features + 1 target, and 48842 observations
assert income_train.shape[0] + income_test.shape[0] == 48842, 'Wrong number of observations'
assert income_train.shape[1] == 15 and income_test.shape[1] == 15, 'Wrong number of features'
# check for null values, UCI says there should be some
# but we don't know how they're encoded, assuming ' ?'
# based on raw data exploration
income_train.replace(' ?', np.nan, inplace=True)
income_train.isnull().sum()
```
All of our null values occur in large categorical variables. Let's check the value counts for each to see if there's an obvious mode that we could fill with.
If that is not the case, it is likely best to create a new 'other' category for each variable.
```
income_train.workclass.value_counts(normalize=True)
income_train.occupation.value_counts(normalize=True)
income_train['native-country'].value_counts(normalize=True)
```
For both 'workclass' and 'native-country', the majority of values come from one class (Private and United-States, respectively). I feel comfortable filling the missing values for these columns with the most frequently occurring value.
For occupation, the distribution is more evenly split. I'm going to choose to encode missing values as 'Other-service', since this category already seems like a catch all for non-standardized professions.
```
# replace ' ?' with nulls in test data
income_test.replace(' ?', np.nan, inplace=True)
# replace workclass nulls with Private
income_train['workclass'].fillna('Private', inplace=True)
income_test['workclass'].fillna('Private', inplace=True)
# replace native-country nulls with United-States
income_train['native-country'].fillna('United-States', inplace=True)
income_test['native-country'].fillna('United-States', inplace=True)
# replace occupation nulls with Other-service
income_train['occupation'].fillna('Other-service', inplace=True)
income_test['occupation'].fillna('Other-service', inplace=True)
assert income_train.isnull().sum().sum() == 0, 'Remaining nulls in train data'
assert income_test.isnull().sum().sum() == 0, 'Remaining nulls in test data'
# encode target variable as binary
# note: labels in adult.test carry a trailing period (' >50K.')
income_train['over50k'] = income_train['over50k'] == ' >50K'
income_test['over50k'] = income_test['over50k'] == ' >50K.'
# encode sex as binary
income_train['sex'] = income_train['sex'] == ' Male'
income_test['sex'] = income_test['sex'] == ' Male'
income_train.head()
# one hot encode categorical variables
# first, let's check the data types to make sure numeric variables are really numeric
income_train.dtypes
# looks fair enough, let's one hot encode the categorical features
income_train = pd.get_dummies(income_train, prefix_sep="__",
columns=['workclass', 'education', 'marital-status',
'occupation', 'relationship', 'race', 'native-country'])
income_test = pd.get_dummies(income_test, prefix_sep="__",
columns=['workclass', 'education', 'marital-status',
'occupation', 'relationship', 'race', 'native-country'])
# let's quickly make sure both of these still have the same number of columns
print ('Training features', income_train.shape[1] - 1)
print ('Testing features', income_test.shape[1] - 1)
# whoops, looks like a category was unique to training and not found in test
# let's try to isolate that
for col in income_train.columns:
if col not in income_test.columns:
print (col)
# simple enough, we just didn't have anyone from Holand in the test data
# let's append that column as all zeroes
income_test['native-country__ Holand-Netherlands'] = 0
assert (income_train.shape[1] == income_test.shape[1])
```
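Manually appending the missing `Holand-Netherlands` column works here, but `DataFrame.align` handles any number of mismatched dummy columns in one step; a sketch on toy frames:

``` python
import pandas as pd

train = pd.DataFrame({'a': [1], 'b': [2]})
test = pd.DataFrame({'a': [3]})
# give test every column train has, filling the new ones with 0
train, test = train.align(test, join='left', axis=1, fill_value=0)
print(list(test.columns))
```

`join='left'` keeps the training frame's column set as the reference, which is what a fitted model expects at prediction time.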
## Part 2 - Fit and present a Logistic Regression
Your data should now be in a state to fit a logistic regression. Use scikit-learn, define your `X` (independent variable) and `y`, and fit a model.
Then, present results - display coefficients in as interpretable a way as you can (hint - scaling the numeric features will help, as it will at least make coefficients more comparable to each other). If you find it helpful for interpretation, you can also generate predictions for cases (like our 5 year old rich kid on the Titanic) or make visualizations - but the goal is your exploration to be able to answer the question, not any particular plot (i.e. don't worry about polishing it).
It is *optional* to use `train_test_split` or validate your model more generally - that is not the core focus for this week. So, it is suggested you focus on fitting a model first, and if you have time at the end you can do further validation.
### Logistic Regression
We're going to use all features except fnlwgt. UCI gives an explanation of this variable, and it appears to relate to the generalization of the findings here to the US population overall based on demographic characteristics.
Since I'm not quite sure what it means, I would prefer to leave it out.
```
target = 'over50k'
features = income_train.columns.drop([target, 'fnlwgt'])
X_train, y_train = income_train[features], income_train[target]
X_test, y_test = income_test[features], income_test[target]
```
First, let's fit a regression without standardizing, allowing maximum interpretability. We can analyze afterwards how much standardizing helps.
```
# first try, to get a baseline for score without standardizing
lr = LogisticRegression(solver='newton-cg')
lr.fit(X_train, y_train)
print ('Training Score (Unstandardized)', lr.score(X_train, y_train))
print ('Testing Score (Unstandardized)', lr.score(X_test, y_test))
```
Next, let's fit a regression scaling the features beforehand. This hinders interpretability a bit, but convergence should be quicker.
```
std_scale = StandardScaler()
X_train_std = std_scale.fit_transform(X_train)
X_test_std = std_scale.transform(X_test)
# second try, using standardized features
lr_std = LogisticRegression(solver='newton-cg')
lr_std.fit(X_train_std, y_train)
print ('Training Score (Standardized)', lr_std.score(X_train_std, y_train))
print ('Testing Score (Standardized)', lr_std.score(X_test_std, y_test))
```
Not bad, let's see what happens if we adjust the regularization strength.
```
c_list = np.arange(0, 10, 0.5)
scores = []
for c in c_list:
lr_temp = LogisticRegression(C=10**c, solver='newton-cg')
lr_temp.fit(X_train_std, y_train)
scores.append(lr_temp.score(X_test_std, y_test))
# plotting the effect of regularization strength
# note: sklearn's C is the *inverse* regularization strength, and c_list holds its exponent
fig, ax = plt.subplots()
ax.plot(c_list, scores)
ax.set_xlabel('log10(C) (inverse regularization strength)')
ax.set_ylabel('Test Accuracy')
ax.set_title('Logistic Regression Regularization Optimization')
plt.show()
```
The regularization parameter doesn't seem to help us much; overfitting does not seem to be an issue.
Further, in part because of the above, standardizing the features does not help us much. Let's report the coefficients of the unstandardized features, allowing easier interpretation.
```
print ('-' * 80)
print('{:<50s}{:>15s}{:>15s}'.format('Feature','Coefficient','Odds Ratio'))
print ('-' * 80)
for feat, coef in (zip(income_train[features].columns, lr.coef_[0])):
    print('{:<50s}{:>15.2f}{:>15.2f}'.format(feat, coef, np.exp(coef)))
```
## Part 3 - Analysis, Interpretation, and Questions
### Based on your above model, answer the following questions
1. What are 3 features positively correlated with income above 50k?
2. What are 3 features negatively correlated with income above 50k?
3. Overall, how well does the model explain the data and what insights do you derive from it?
*These answers count* - that is, make sure to spend some time on them, connecting to your analysis above. There is no single right answer, but as long as you support your reasoning with evidence you are on the right track.
Note - scikit-learn logistic regression does *not* automatically perform a hypothesis test on coefficients. That is OK - if you scale the data they are more comparable in weight.
### Match the following situation descriptions with the model most appropriate to addressing them
In addition to logistic regression, a number of other approaches were covered this week. Pair them with the situations they are most appropriate for, and briefly explain why.
Situations:
1. You are given data on academic performance of primary school students, and asked to fit a model to help predict "at-risk" students who are likely to receive the bottom tier of grades.
2. You are studying tech companies and their patterns in releasing new products, and would like to be able to model and predict when a new product is likely to be launched.
3. You are working on modeling expected plant size and yield with a laboratory that is able to capture fantastically detailed physical data about plants, but only of a few dozen plants at a time.
Approaches:
1. Ridge Regression
2. Quantile Regression
3. Survival Analysis
### Logistic Regression Questions
*A general comment on interpretation: most of the features in my model are binary because of one-hot encoding (OHE) of categorical variables. As such, they are not directly comparable to numeric coefficients. I only comment on the coefficients/odds ratios of the binary features, which are easily interpreted when not standardized.*
***1. What are 3 features positively correlated with income above 50k?***
The table below details all regression features that are positively correlated with income above 50k. The magnitude of the relationship varies widely.
From the table below, we can examine a few that stand out.
**Sex**
Males are about 2.35 times as likely to earn more than 50k.
**Marriage**
Being married to a civilian or armed forces member (Married-civ-spouse or Married-AF-spouse), or having a relationship status of 'Wife', are strong indicators of making over 50k. Presence of these attributes increases the likelihood of income over 50k by 3.45-4.93 times.
**Working for the Federal Govt**
A federal government worker is 2.08 times as likely to make over 50k as those working in other sectors.
```
# print out features positively correlated with income
print ('-' * 80)
print('{:<50s}{:>15s}{:>15s}'.format('Feature - Positively Correlated','Coefficient','Odds Ratio'))
print ('-' * 80)
for feat, coef in (zip(income_train[features].columns, lr.coef_[0])):
if coef > 0.0:
        print('{:<50s}{:>15.2f}{:>15.2f}'.format(feat, coef, np.exp(coef)))
```
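As a quick sanity check on how these tables are read, a logistic-regression coefficient converts to an odds ratio by exponentiation. A minimal sketch using a made-up coefficient value (not one taken from the fitted model):

```python
import numpy as np

# hypothetical coefficient for a binary feature (illustrative only)
coef = 0.854
odds_ratio = np.exp(coef)
# a positive coefficient means presence of the feature multiplies
# the odds of income over 50k by exp(coef)
print(round(odds_ratio, 2))  # 2.35
```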
***2. What are 3 features negatively correlated with income above 50k?***
The table below details all regression features that are negatively correlated with income above 50k. The magnitude of the relationship varies widely.
From the table below, we can examine a few that stand out.
**Low Education Levels**
Although the chances of making over 50k do not vary monotonically with education level, low education levels are generally associated with lower income, particularly in the extreme cases.
Someone with only a preschool education has 0.31 times the chance of making over 50k, all else equal.
**Immigration From Poorer Countries**
People who immigrated from especially poor countries like Colombia, the Dominican Republic, and Vietnam tend not to have income exceeding 50k.
**Occupation**
There seems to be a large disparity in pay between occupations. People who work in private house service (likely performing manual labor tasks) are 0.2 times as likely to have income exceeding 50k.
```
# print out features negatively correlated with income
print ('-' * 80)
print('{:<50s}{:>15s}{:>15s}'.format('Feature - Negatively Correlated','Coefficient','Odds Ratio'))
print ('-' * 80)
for feat, coef in (zip(income_train[features].columns, lr.coef_[0])):
if coef < 0.0:
        print('{:<50s}{:>15.2f}{:>15.2f}'.format(feat, coef, np.exp(coef)))
```
***3. Overall, how well does the model explain the data and what insights do you derive from it?***
The classification accuracy on our test data was around 80%, a fairly high accuracy given the situation.
That said, the degree of accuracy achieved by a simple logistic regression is concerning. Our features mainly consist of structural factors (education, race, sex, industry).
In an ideal world, almost none of these structural factors should predict income. We would prefer that *sex*, for example, has no predictive power, and that men and women have equal earning power.
The question is incredibly complex, and socioeconomic relationships are constantly changing. It would be interesting to see how model accuracy changes as applied to new years of census data.
### Model Matching
1. You are given data on academic performance of primary school students, and asked to fit a model to help predict "at-risk" students who are likely to receive the bottom tier of grades.
**Quantile Regression**
Quantile regression creates a line of best fit through a given output quantile (0-100%), instead of through the mean output values.
In this case, we would establish a quantile that we would consider the bottom 'tier' of grades. Let's say the bottom 10%.
From there, we would use quantile regression to fit a line such that, conditional on X, only 10% of true values lie below our predicted value.
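A hedged sketch of that idea on synthetic data (the column names and numbers here are made up, and `statsmodels` is assumed to be available):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# synthetic grades: roughly linear in study hours, with noise
rng = np.random.default_rng(0)
df = pd.DataFrame({'study_hours': rng.uniform(0, 10, 200)})
df['grade'] = 50 + 4 * df['study_hours'] + rng.normal(0, 5, 200)

# q=0.1 targets the 10th percentile of grade conditional on study_hours
model = smf.quantreg('grade ~ study_hours', df).fit(q=0.1)
below = (df['grade'] < model.predict(df)).mean()
print(below)  # roughly 0.10: about 10% of grades fall below the fitted line
```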
2. You are studying tech companies and their patterns in releasing new products, and would like to be able to model and predict when a new product is likely to be launched.
**Survival Analysis**
In this case, we can think of the release of a new product as a 'death' event in survival analysis.
If the description references the replacement of an existing product, we would treat the release of the existing product as the 'birth' event. If the description means a generally innovative new product, we would treat the release of the last innovative product as the 'birth' event.
Either way, survival analysis allows us to fit a model when many of the data points are censored. For example, Google will almost certainly release a new version of GMail. However, since we haven't been able to observe that yet, all we know is "it has been X months since the last version of GMail was released".
Survival analysis allows us to correct for this censorship in our data.
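A hedged sketch of that correction with made-up numbers: the Kaplan-Meier estimator computes the probability that no release has happened by a given time, counting censored observations as "still at risk" rather than discarding them.

```python
import numpy as np

durations = np.array([3, 5, 5, 8, 12, 12, 15])  # months since previous release
observed = np.array([1, 1, 0, 1, 1, 0, 0])      # 1 = release seen, 0 = censored

# Kaplan-Meier: S(t) = product over event times t of (1 - d_t / n_t),
# where n_t counts everything still at risk at t (including later-censored points)
surv = 1.0
for t in np.unique(durations[observed == 1]):
    n_at_risk = np.sum(durations >= t)
    d_events = np.sum((durations == t) & (observed == 1))
    surv *= 1 - d_events / n_at_risk
print(round(surv, 3))  # 0.357: survival probability past the last event time
```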
3. You are working on modeling expected plant size and yield with a laboratory that is able to capture fantastically detailed physical data about plants, but only of a few dozen plants at a time.
**Ridge Regression**
In this case, we have *a lot* of features captured from the physical data, but very few observations. Normal regression techniques are prone to overfitting in these circumstances.
Ridge regression places a cost on the size of regression coefficients. This bias introduced by ridge regression mitigates overfitting, and will allow our model to generalize better from the few dozen observations in our data.
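A hedged sketch with synthetic data mimicking the situation (many detailed measurements, few plants; the numbers are arbitrary): with more features than observations, ordinary least squares interpolates the data, while the ridge penalty shrinks the coefficients.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(1)
n_plants, n_features = 30, 200
X = rng.normal(size=(n_plants, n_features))
true_coef = np.zeros(n_features)
true_coef[:5] = 1.0  # only a handful of measurements actually drive yield
y = X @ true_coef + rng.normal(0, 0.1, n_plants)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)
# the penalty shrinks coefficients, trading a little bias for lower variance
print(np.linalg.norm(ridge.coef_) < np.linalg.norm(ols.coef_))  # True
```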
```
test_index = 0
```
#### testing
```
from load_data import *
# load_data()
```
## Loading the data
```
from load_data import *
X_train,X_test,y_train,y_test = load_data()
len(X_train),len(y_train)
len(X_test),len(y_test)
```
## Test Modelling
```
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
class Test_Model(nn.Module):
def __init__(self) -> None:
super().__init__()
self.c1 = nn.Conv2d(1,64,5)
self.c2 = nn.Conv2d(64,128,5)
self.c3 = nn.Conv2d(128,256,5)
self.fc4 = nn.Linear(256*10*10,256)
self.fc6 = nn.Linear(256,128)
self.fc5 = nn.Linear(128,4)
def forward(self,X):
preds = F.max_pool2d(F.relu(self.c1(X)),(2,2))
preds = F.max_pool2d(F.relu(self.c2(preds)),(2,2))
preds = F.max_pool2d(F.relu(self.c3(preds)),(2,2))
# print(preds.shape)
preds = preds.view(-1,256*10*10)
preds = F.relu(self.fc4(preds))
preds = F.relu(self.fc6(preds))
preds = self.fc5(preds)
return preds
device = torch.device('cuda')
BATCH_SIZE = 32
IMG_SIZE = 112
model = Test_Model().to(device)
optimizer = optim.SGD(model.parameters(),lr=0.1)
criterion = nn.CrossEntropyLoss()
EPOCHS = 12
from tqdm import tqdm
PROJECT_NAME = 'Weather-Clf'
import wandb
# test_index += 1
# wandb.init(project=PROJECT_NAME,name=f'test-{test_index}')
# for _ in tqdm(range(EPOCHS)):
# for i in range(0,len(X_train),BATCH_SIZE):
# X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device)
# y_batch = y_train[i:i+BATCH_SIZE].to(device)
# model.to(device)
# preds = model(X_batch.float())
# preds.to(device)
# loss = criterion(preds,torch.tensor(y_batch,dtype=torch.long))
# optimizer.zero_grad()
# loss.backward()
# optimizer.step()
# wandb.log({'loss':loss.item()})
# wandb.finish()
# for index in range(10):
# print(torch.argmax(preds[index]))
# print(y_batch[index])
# print('\n')
class Test_Model(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1,64,5)
self.conv2 = nn.Conv2d(64,128,5)
self.conv3 = nn.Conv2d(128,256,5)
self.fc1 = nn.Linear(256*10*10,64)
self.fc2 = nn.Linear(64,128)
self.fc3 = nn.Linear(128,256)
self.fc4 = nn.Linear(256,128)
self.fc5 = nn.Linear(128,6)
def forward(self,X):
preds = F.max_pool2d(F.relu(self.conv1(X)),(2,2))
preds = F.max_pool2d(F.relu(self.conv2(preds)),(2,2))
preds = F.max_pool2d(F.relu(self.conv3(preds)),(2,2))
# print(preds.shape)
preds = preds.view(-1,256*10*10)
preds = F.relu(self.fc1(preds))
preds = F.relu(self.fc2(preds))
preds = F.relu(self.fc3(preds))
preds = F.relu(self.fc4(preds))
        preds = self.fc5(preds) # no activation on the output layer; CrossEntropyLoss expects raw logits
return preds
model = Test_Model().to(device)
optimizer = optim.SGD(model.parameters(),lr=0.1)
criterion = nn.CrossEntropyLoss()
test_index += 1
wandb.init(project=PROJECT_NAME,name=f'test-{test_index}')
for _ in tqdm(range(EPOCHS)):
for i in range(0,len(X_train),BATCH_SIZE):
X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device)
y_batch = y_train[i:i+BATCH_SIZE].to(device)
model.to(device)
preds = model(X_batch.float())
preds.to(device)
        loss = criterion(preds, y_batch.long()) # y_batch is already a tensor; just cast to long
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item()})
wandb.finish()
for index in range(10):
print(torch.argmax(preds[index]))
print(y_batch[index])
print('\n')
```
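A quick sanity check (a sketch, not part of the original notebook) of the `256*10*10` flatten size used in both models above: each 5x5 convolution with no padding trims 4 from each side, and each 2x2 max-pool halves the side with floor division.

```python
def conv_pool_out(side, n_blocks, k=5, pool=2):
    # each block: conv with kernel k (no padding) followed by a 2x2 max-pool
    for _ in range(n_blocks):
        side = (side - (k - 1)) // pool
    return side

print(conv_pool_out(112, 3))  # 10, so the flattened size is 256 * 10 * 10
```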
# Quantum Machine Learning with Amazon Braket: Binary Classifiers
This post details an approach taken by Aioi to build an exploratory
quantum machine learning application using Amazon Braket. Quantum
machine learning has been defined as "a research area that explores the
interplay of ideas from quantum computing and machine learning." Specifically, we explore how to use quantum computers to build a proof-of-principle classifier for risk assessment in a hypothetical car insurance use case. We use a hybrid quantum-classical approach and train a so-called quantum neural network to perform binary classification.
## Background
This demonstration is a result of collaboration with Aioi USA -
subsidiary of Aioi Nissay Dowa Insurance which is a member of MS&AD
Insurance Group Holdings - a major worldwide insurance organization
with close ties to the Toyota group, offering Toyota Insurance in 37
countries. Aioi USA is a full-service "insurtech" insurance agency
that develops data science-based products and services for the
transportation industry. Aioi was one of the first insurance companies
to work with Amazon Braket.
Aioi analyzes telematics data from self-driving vehicles to predict
driving risks. The vehicles are equipped with a multitude of sensors and
the goal is to use the sensor data to assign each vehicle a binary score
(safe or fail) that indicates the health of the vehicle. The problem can
be formalized computationally as a binary classification task in which
the driving risk score is a binary label assigned to each vehicle's sensor data.
To learn label assignments for each data point, classical machine learning
techniques such as linear regression (LR) or deep learning (DL)
can be applied. LR is a popular approach when the data-label mapping
is described by a linear function. For large and complex data structures, DL offers a way to capture
nonlinear behavior in data-label mapping.
So, we have powerful classical methods to perform classification tasks; how can quantum computers help here? The short answer is, we don't quite know yet. There are results ([arXiv:1204.5242](https://arxiv.org/abs/1204.5242), [arXiv:1601.07823](https://arxiv.org/abs/1601.07823)) indicating that quantum LR algorithms applied to quantum data under specific assumptions can be exponentially faster than their classical counterparts operating on classical data. The flip side is that these quantum algorithms output a solution in the form of a quantum state, which may not be immediately useful for further processing on a classical computer. On the DL front, quantum neural networks (QNNs) emerged as a potential replacement for classical neural nets ([arXiv:quant-ph/0201144](https://arxiv.org/abs/quant-ph/0201144)). QNN designs to perform binary classification tasks were proposed recently (see e.g., [arXiv:1802.06002](https://arxiv.org/abs/1802.06002)) as well. An advantage of QNNs is that they can directly output a classical label value, though one still has to input data in the form of a quantum state. Whether or not QNNs have a practical computational advantage over classical neural nets in DL tasks is very much an area of active research, and the jury is still out. This motivated us to explore how QNNs can be utilized for the driving risk
assignment in the case of binary sensor data, with an eye towards near-term hardware implementation, which constrains the QNN's circuit depth due to decoherence.
In this post we build quantum machine learning applications using [Amazon Braket](https://aws.amazon.com/braket/). To run the example applications developed here, you need access to the [Amazon Braket SDK](https://github.com/aws/amazon-braket-sdk-python). You can either install the Braket SDK locally from the [Amazon Braket GitHub repo](https://github.com/aws/amazon-braket-sdk-python) or, alternatively, create a managed notebook in the [Amazon Braket console](https://aws.amazon.com/console/). (Please note that you need an AWS account, if you would like to run this demo on one of the quantum hardware backends offered by Amazon Braket.)
## Problem Setting
Binary classification is an example of supervised machine learning. It
requires a training data set to build a model that can be used to predict
labels (driving risk scores). We assume that we are given a training set
$T$ that consists of $M$ data-label pairs ${\bf x}, {\bf y}$
($T=\{({\bf x}_i, {\bf y}_i)\}$, $i=1,\dots,M$). Here, ${\bf x}_i$ represents vehicle sensor data as an $N$-bit string
${\bf x}_i=\{x_{i0},\cdots,x_{iN-1}\}$ ($x_{ij}=\{0,1\}$). A label
${\bf y}_i=\{0,1\}$ represents the driving risk score associated with ${\bf x}_i$.
Before we proceed with a quantum solution, it is instructive to recall
the main steps of constructing a classical neural net (NN) based
solution. A classical NN takes data ${\bf x}$ and a set of
parameters $\vec{\theta}$ (so-called weights) as an input and transforms it into an output
label $\hat{{\bf y}}$ such that $\hat{{\bf y}} = f({\bf x},\vec{\theta})$, where
$f$ is determined by the NN. The goal is then
to use a training set to train the NN, i.e. to determine the values of
$\vec{\theta}$ for which the discrepancy between the output labels and
the training set labels is minimized. You achieve this by minimizing a
suitably chosen loss function $L(\hat{{\bf y}},{\bf y})$ over the NN
parameters $\vec{\theta}$ using e.g., a gradient-based optimizer.
To construct a quantum binary classifier we follow a similar procedure
with a couple of modifications
- We map our classical $N$-bit data $\{{\bf x}_i\}$ onto $N$-qubit quantum states $\{|\psi_i\rangle \}$. For example, a classical bit string $\{{\bf x}_i\}=0010$ maps onto $|\psi_i\rangle = |0010\rangle$
- Instead of a classical NN we construct a QNN - a $N+1$-qubit circuit $\mathcal{C}(\{\vec{\theta}\})$ (a sequence of elementary single- and two-qubit gates) that transforms the input states $\{|\psi_i\rangle|0\rangle \}$ into output states $\{|\phi_i \rangle \}$ $|\phi_i\rangle = \mathcal{C}|\psi_i\rangle $. The QNN circuit $\mathcal{C}(\{\vec{\theta}\})$ depends on classical parameters $\{\vec{\theta}\}$ that can be adjusted to change the output $\{|\phi_i\rangle \}$
- We use the $N+1$-th qubit to read out labels after the QNN acted on the input state. Every time we run the QNN with the same input state and parameters $\{\vec{\theta}\}$, we measure in what quantum state the $N+1$-th qubit ends up ($|0\rangle$ or $|1\rangle$). We denote the frequency of observing the state $|0\rangle$ ($|1\rangle$ ) as $p_0$ ($p_1$). We define the observed label $\hat{{\bf y}}$ as $\hat{{\bf y}} = \frac{1 - (p_0-p_1)}{2}$. (Note: in the language of quantum computing the difference $p_0-p_1$ equals the expected value of the Pauli $\hat{Z}$ operator measured on the $N+1$-th qubit.) By definition, $p_0-p_1$ is a function of the QNN parameters $\{\vec{\theta}\}$ in the range $ [-1,1] $ and, thus, $\hat{{\bf y}}$ has the range $ [0,1] $ .
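A made-up numeric example of that readout: suppose in 1000 shots the label qubit is measured in $|0\rangle$ 800 times and in $|1\rangle$ 200 times.

```python
shots = 1000
p0, p1 = 800 / shots, 200 / shots  # hypothetical measurement frequencies
y_hat = (1 - (p0 - p1)) / 2        # since p0 + p1 = 1, this equals p1
print(y_hat)  # 0.2
```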
In the training of the QNN circuit $\mathcal{C}$ our goal is to find a set of parameters $\{\vec{\theta}_o\}$ such that for each data point in the training set $T$ the label value ${\bf y}_i$ is close
to $\hat{{\bf y}}_i$.
To achieve this, we minimize the log loss function $L(\{\vec{\theta}\})$ defined as,
$L(\{\vec{\theta}\})=-(\sum\limits_{i=1}^{M}{\bf y}_i\log(\hat{{\bf y}}_i)+(1-{\bf y}_i)\log(1-\hat{{\bf y}}_i))$.
We use the Amazon Braket local simulator to evaluate $L(\{\vec{\theta}\})$ and a classical optimizer from $\verb+scipy.optimize+$ to minimize it.
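For reference, the loss itself is just a few lines of NumPy (a sketch using the natural log as in the formula; the Braket code further below uses log2, which only rescales the loss and does not move its minimum):

```python
import numpy as np

def log_loss(y_true, y_hat):
    # L = -sum_i [ y_i log(y_hat_i) + (1 - y_i) log(1 - y_hat_i) ]
    y_true = np.asarray(y_true, dtype=float)
    y_hat = np.asarray(y_hat, dtype=float)
    return -np.sum(y_true * np.log(y_hat) + (1 - y_true) * np.log(1 - y_hat))

print(round(log_loss([1, 0], [0.9, 0.1]), 3))  # 0.211
```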
## Mapping classical data onto quantum states.
The first step in the implementation of a quantum binary classifier is to specify a quantum circuit that maps classical data onto quantum states. We map classical bit values "0" and "1" onto quantum states
$|0\rangle$ and $|1\rangle$, respectively. By convention, the
initial state of a qubit is always assumed to be $|0\rangle$. If the
input quantum state is $|1\rangle$ then we obtain it from
$|0\rangle$ by applying a qubit flip gate $X$ i.e.
$|1\rangle = X|0\rangle$. Similarly, a quantum circuit to prepare an
input state, corresponding to classical data, consists of $X$
gates acting on qubits that are in state $|1\rangle$. For example, a
quantum circuit to prepare $|\psi_i\rangle =|101\rangle$ will consist
of two $X$ gates acting on qubits 0 and 2. Below we provide code that
generates a quantum circuit for preparing an arbitrary computational basis state
$|\psi_i\rangle$ using Amazon Braket.
```
# Import Braket libraries
from braket.circuits import Circuit
from braket.aws import AwsDevice
# A function that converts a bit string bitStr into a quantum circuit
def bit_string_to_circuit(bitStr):
circuit = Circuit()
for ind in range(len(bitStr)):
if bitStr[ind]=='1':
circuit.x(ind)
return circuit
# provide a feature string to test the function above
feature = '00101010'
# print quantum circuit that prepares corresponding quantum state
print(bit_string_to_circuit(feature))
```
## Designing Quantum Neural Networks and Training
Now that we know how to prepare input quantum states that correspond to classical data, the next step is to define and construct a QNN circuit $\mathcal{C}(\{\vec{\theta}\})$ that we will train to
perform binary classification. We use the QNN design layout depicted in
the figure below. It has $2N+1$ classical parameters defining:
$N$ two-qubit gates
$XX(\theta_k) = e^{-i\frac{\theta_k}{2} \hat{X}_j\hat{X}_{N+1}}$, $N$
single-qubit gates $R_{y}(\theta_m) = e^{-i\frac{\theta_m}{2}\hat{Y}_j}$, and one single-qubit gate $R_{x}(\theta) = e^{-i\frac{\theta}{2}\hat{X}_N}$ acting on the $N+1$-th qubit.

The code below implements this QNN, applies it to an arbitrary input state defined by a classical bit string, and measures the values of the label qubit using Amazon Braket.
```
# import standard numpy libraries and optimizers
import numpy as np
from scipy.optimize import minimize
# Braket imports
from braket.circuits import Circuit, Gate, Instruction, circuit, Observable
from braket.aws import AwsDevice, AwsQuantumTask
from braket.devices import LocalSimulator
# set Braket backend to local simulator (can be changed to other backends)
device = LocalSimulator()
# Quantum Neural Net from the QNN figure implemented in Braket
# Inputs: bitStr - data bit string (e.g. '01010101')
# pars - array of parameters theta (see the QNN figure for more details)
def QNN(bitStr,pars):
## size of the quantum neural net circuit
nQbts = len(bitStr) + 1 # extra qubit is allocated for the label
## initialize the circuit
qnn = Circuit()
## add single-qubit X rotation to the label qubit,
## initialize the input state to the one specified by bitStr
## add single-qubit Y rotations to data qubits,
## add XX gate between qubit i and the label qubit,
qnn.rx(nQbts-1, pars[0])
for ind in range(nQbts-1):
angles = pars[2*ind + 1:2*ind+1+2]
if bitStr[ind] == '1': # by default Braket sets input states to '0',
            # qnn.x(ind) flips qubit number ind to state |1>
qnn.x(ind)
qnn.ry(ind, angles[0]).xx(ind, nQbts-1, angles[1])
## add Z observable to the label qubit
observZ = Observable.Z()
qnn.expectation(observZ, target=[nQbts-1])
return qnn
```
With the QNN defined, we need to code up the loss function $L(\{\vec{\theta}\})$ that we minimize in order to train
the QNN to perform binary classification. Below is the code that computes $L(\{\vec{\theta}\})$ using the local simulator in Amazon Braket.
```
## Function that computes the label of a given feature bit sting bitStr
def parity(bitStr):
return bitStr.count('1') % 2
## Log loss function L(theta) for a given training set trainSet
## inputs: trainSet - array of feature bit strings e.g. ['0101','1110','0000']
## pars - quantum neural net parameters theta (See the QNN figure)
## device - Braket backend that will compute the log loss
def loss(trainSet, pars, device):
loss = 0.0
for ind in range(np.size(trainSet)):
## run QNN on Braket device
task = device.run(QNN(trainSet[ind], pars), shots=0)
## retrieve the run results <Z>
result = task.result()
if parity(trainSet[ind])==0:
loss += -np.log2(1.0-0.5*(1.0-result.values[0]))
else:
loss += -np.log2(0.5*(1.0-result.values[0]))
print ("Current value of the loss function: ", loss)
return loss
```
Putting it all together we are now ready to train our QNN circuit to reproduce binary classification of a training set $T$. For the example below, we assume that labels ${\bf y}_i$ are generated by a Boolean function $\hat{f}({\bf x}_i) = (\sum\limits_{j=0}^{N-1}x_{ij})\ {\rm mod}\ 2$. To emulate data in the training set $T$, we generated $11$ random $10$-bit strings (data) and assigned them labels according to $\hat{f}$.
```
## Training the QNN using gradient-based optimizer
nBits = 10 # number of bits per feature
## Random training set consisting of 11 10-bit features
## Please explore other training sets
trainSet = ['1101011010',
'1000110011',
'0101001001',
'0010000110',
'0101111010',
'0000100010',
'1001010000',
'1100110001',
'1000010001',
'0000111101',
'0000000001']
## Initial assignment of QNN parameters theta (random angles in [-pi,pi])
pars0 = 2 * np.pi * np.random.rand(2*nBits+1) - np.pi
## Run minimization
res = minimize(lambda pars: loss(trainSet, pars, device), pars0, method='BFGS', options={'disp':True})
```
Run the code and wait for the optimizer to converge. It outputs a message that looks like this when the optimizer finishes.
```
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 55
Function evaluations: 1430
Gradient evaluations: 65
```
We note that our QNN circuit is designed to compute the parity of input data exactly for an appropriate choice of the parameters $\{\vec{\theta}\}$. Thus, the global minimum of the loss function using this QNN is zero. This is generally not the case in DL applications, however. Note also that $L(\{\vec{\theta}\})$ is not convex
with respect to the parameters $\{\vec{\theta}\}$. This means that if the final value of the loss function is not zero, the optimizer got stuck in a local minimum. Do not panic. Try running the optimizer with a
different set of initial parameters $\verb+pars0+$. You can also explore various minimization algorithms by
specifying the $\verb+method+$ argument in the $\verb+minimize+$ function.
Calling $\verb+res.x+$ outputs the optimal values of the parameters $\{\vec{\theta}\}$
and you can use them to run the "optimal" QNN and perform binary classification on the data that is not a part of the training set. Try that and compute the mean square error of the classifier.
For our 10-bit data example there are $2^{10}=1024$ possible
10-bit strings; we chose a training set that has only 11 data points. Yet it is
sufficiently large to train the QNN to act as a perfect
binary classifier for all 1024 possible features. Can you demonstrate
that?
```
## Print the predicted label values for all N-bit data points using the optimal QNN parameters res.x
for ind in range(2**nBits):
data = format(ind, '0'+str(nBits)+'b')
task = device.run(QNN(data, res.x), shots=100)
result = task.result()
if (data in trainSet):
inSet = 'in the training set'
else:
inSet = 'NOT in the training set'
print('Feature:', data, '| QNN predicted parity: ', 0.5*(1-result.values[0]), ' | ', inSet)
print('---------------------------------------------------')
```
As an exercise, use the optimal QNN parameters in $\verb+res.x+$ and apply the
resulting QNN to all 10-bit strings that are not in the training set.
Record the mean square error between the predicted and computed label
values.
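The error computation in that exercise can be sketched as follows (the predicted and true labels here are made up; in practice they would be collected from QNN runs like the loop above and from the parity function):

```python
import numpy as np

y_true = np.array([0, 1, 1, 0])             # labels from the parity function
y_hat = np.array([0.02, 0.97, 0.88, 0.10])  # hypothetical QNN label estimates
mse = np.mean((y_hat - y_true) ** 2)
print(round(mse, 4))  # 0.0064
```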
### Conclusion
This post explored the use case of binary classification to analyze
binary (telematic) data by combining QNNs with Amazon Braket. The QNN binary classifier designed in this post
requires a number of two-qubit gates that scales linearly with the
feature size. This is advantageous for Noisy Intermediate Scale Quantum
(NISQ) devices that are limited in the circuit depth due to noise. A
future area of investigation for the team is to apply more complex
feature sets and to construct QNNs to classify them. You can download and play with the code from this post here.
# WeatherPy
----
#### Note
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
from scipy.stats import linregress
# Import API key
from api_keys import weather_api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_cities_df = "../output_data/cities_df.csv"
output_weather_df = "../output_data/weather_df.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
```
## Generate Cities List
```
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(low = -90.000, high = 90.000, size = 1500)
lngs = np.random.uniform(low = -180.000, high = 180.000, size = 1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
    # If the city is unique, then add it to our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
len(cities)
```
### Perform API Calls
* Perform a weather check on each city using a series of successive API calls.
* Include a print log of each city as it's being processed (with the city number and city name).
```
# Counters
city_counter = 1
set_counter = 1
# Create the lists to hold relative data
cities_list= []
cloudiness = []
country = []
date = []
humidity = []
lat = []
lng = []
max_temp = []
wind_speed = []
# Print Starting Log Statement
print(f"-------------------------------")
print("Beginning Data Retrieval")
print(f"-------------------------------")
# Create a query url for each city in the cities list to get json response
for i, city in enumerate(cities):
# Group cities as sets of 50s
if (i % 50 == 0 and i >= 50):
set_counter += 1
city_counter = 1
api_key = weather_api_key
base_url = "http://api.openweathermap.org/data/2.5/weather?units=imperial&appid=" + api_key
query_url = base_url + "&q=" + city
# Get json respose for each city
response = requests.get(query_url).json()
# Print the results
print(f"Processing Record {city_counter} of Set {set_counter} | {city}")
# Increase city counter
city_counter += 1
# Add the values to the lists
try:
cloudiness.append(response["clouds"]["all"])
country.append(response["sys"]["country"])
date.append(response["dt"])
humidity.append(response["main"]["humidity"])
lat.append(response["coord"]["lat"])
lng.append(response["coord"]["lon"])
max_temp.append(response["main"]["temp_max"])
wind_speed.append(response["wind"]["speed"])
cities_list.append(response["name"])
# Wait within loop in order to not exceed rate limit of API
# time.sleep(0.5)
# Exception handling
except Exception:
print("City not found. Skipping...")
pass
# Print Ending Log Statement
print(f"-------------------------------")
print(f"Data Retrieval Complete")
print(f"-------------------------------")
```
### Convert Raw Data to DataFrame
* Export the city data into a .csv.
* Display the DataFrame
```
# Create a dictionary to keep data
weather_data = {
"City": cities_list,
"Cloudiness": cloudiness,
"Country": country,
"Date": date,
"Humidity": humidity,
"Lat": lat,
"Lng": lng,
"Max Temp": max_temp,
"Wind Speed": wind_speed
}
# Create the data frame and count the values in each column
weather_df = pd.DataFrame(weather_data)
weather_df.count()
# Exporting to CSV file
weather_df.to_csv(output_cities_df)
# Display the data frame
weather_df.head()
```
### Plotting the Data
* Use proper labeling of the plots: plot titles (including the date of analysis) and axis labels.
* Save the plotted figures as .pngs.
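The four plots that follow all share the same scatter recipe. A small helper (hypothetical name `plot_vs_latitude`, assuming `matplotlib` and `time` are imported as earlier in the notebook) sketches how they could be produced uniformly:

```python
import time
import matplotlib.pyplot as plt

def plot_vs_latitude(df, column, ylabel, filename=None):
    """Scatter one weather variable against latitude; optionally save a .png."""
    plt.scatter(df["Lat"], df[column], marker="o",
                facecolor="blue", edgecolor="black")
    plt.title("City Latitude vs. %s (%s)" % (column, time.strftime("%x")))
    plt.xlabel("Latitude")
    plt.ylabel(ylabel)
    plt.grid()
    if filename:
        plt.savefig(filename)
    plt.show()
```

For example, `plot_vs_latitude(weather_df, "Humidity", "Humidity (%)")` would reproduce the second plot.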
#### Latitude vs. Temperature Plot
```
# Create a scatter plot for latitude and temperature
plt.scatter(weather_df["Lat"], weather_df["Max Temp"], marker = "o", facecolor = "blue", edgecolor = "black")
plt.title("City Latitude vs. Max Temperature (%s)" % time.strftime("%x"))
plt.xlabel("Latitude")
plt.ylabel("Max Temperature (F)")
plt.grid()
plt.show()
```
#### Latitude vs. Humidity Plot
```
# Create a scatter plot for latitude and humidity
plt.scatter(weather_df["Lat"], weather_df["Humidity"], marker = "o", facecolor = "blue", edgecolor = "black")
plt.xlabel("Latitude")
plt.ylabel("Humidity (%)")
plt.title("City Latitude vs Humidity (%s)" % time.strftime("%x"))
plt.grid()
plt.show()
```
#### Latitude vs. Cloudiness Plot
```
# Create a scatter plot for latitude and cloudiness
plt.scatter(weather_df["Lat"], weather_df["Cloudiness"], marker = "o", facecolor = "blue", edgecolor = "black")
plt.xlabel("Latitude")
plt.ylabel("Cloudiness (%)")
plt.title("City Latitude vs Cloudiness (%s)" % time.strftime("%x"))
plt.grid()
plt.show()
```
#### Latitude vs. Wind Speed Plot
```
# Create a scatter plot for latitude and wind speed
plt.scatter(weather_df["Lat"], weather_df["Wind Speed"], marker = "o", facecolor = "blue", edgecolor = "black")
plt.xlabel("Latitude")
plt.ylabel("Speed (mph)")
plt.title("City Latitude vs Wind Speed (%s)" % time.strftime("%x"))
plt.grid()
plt.show()
```
## Linear Regression
```
# OPTIONAL: Create a function to create Linear Regression plots
# Create Northern and Southern Hemisphere DataFrames
northern_hemisphere = weather_df.loc[weather_df["Lat"] > 0.01]
southern_hemisphere = weather_df.loc[weather_df["Lat"] < -0.01]
```
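As the `OPTIONAL` comment above suggests, the eight regression cells that follow repeat a single recipe. A sketch of such a helper (hypothetical name `regression_plot`, assuming `linregress` and `matplotlib` are already imported):

```python
from scipy.stats import linregress
import matplotlib.pyplot as plt

def regression_plot(x_values, y_values, ylabel, annotate_xy, filename=None):
    """Scatter plot plus fitted regression line; returns the linregress result."""
    fit = linregress(x_values, y_values)
    regress_values = x_values * fit.slope + fit.intercept
    line_eq = "y = %sx + %s" % (round(fit.slope, 2), round(fit.intercept, 2))
    plt.scatter(x_values, y_values)
    plt.plot(x_values, regress_values, "r-")
    plt.annotate(line_eq, annotate_xy, fontsize=15, color="red")
    plt.xlabel("Latitude")
    plt.ylabel(ylabel)
    if filename:
        plt.savefig(filename)
    plt.show()
    return fit
```

Each cell below would then reduce to a single call such as `regression_plot(northern_hemisphere["Lat"], northern_hemisphere["Max Temp"], "Max. Temp", (5, 10))`.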
#### Northern Hemisphere - Max Temp vs. Latitude Linear Regression
```
x_values = northern_hemisphere['Lat']
y_values = northern_hemisphere['Max Temp']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope, 2)) + "x + " + str(round(intercept, 2))
plt.scatter(x_values, y_values)
plt.plot(x_values, regress_values, "r-")
plt.annotate(line_eq, (5, 10), fontsize=15, color = "red")
plt.ylim(-10, 100)
plt.xlim(-10, 90)
plt.ylabel("Max. Temp")
plt.xlabel("Latitude")
plt.savefig("../output_data/NORTH MAX TEMP VS LAT.png")
plt.show()
```
#### Southern Hemisphere - Max Temp vs. Latitude Linear Regression
```
x_values = southern_hemisphere['Lat']
y_values = southern_hemisphere['Max Temp']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope, 2)) + "x + " + str(round(intercept, 2))
plt.scatter(x_values, y_values)
plt.plot(x_values, regress_values, "r-")
plt.annotate(line_eq, (6, 10), fontsize = 15, color= "red")
plt.ylim(0, 100)
plt.xlim(-60, 60)
plt.ylabel("Max. Temp")
plt.xlabel("Latitude")
plt.savefig("../output_data/SOUTH MAX TEMP VS LAT.png")
plt.show()
```
#### Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression
```
x_values = northern_hemisphere['Lat']
y_values = northern_hemisphere['Humidity']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope, 2)) + "x + " + str(round(intercept, 2))
plt.scatter(x_values, y_values)
plt.plot(x_values, regress_values, "r-")
plt.annotate(line_eq, (6, 10),fontsize = 15,color="red")
plt.ylim(0, 120)
plt.xlim(-10, 90)
plt.ylabel("Humidity")
plt.xlabel("Latitude")
plt.savefig("../output_data/NORTH HUM VS LAT.png")
plt.show()
```
#### Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression
```
x_values = southern_hemisphere['Lat']
y_values = southern_hemisphere['Humidity']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope, 2)) + "x + " + str(round(intercept, 2))
plt.scatter(x_values,y_values)
plt.plot(x_values, regress_values, "r-")
plt.annotate(line_eq, (-25, 10), fontsize = 15, color = "red")
plt.ylim(0, 110)
plt.xlim(-90, 30)
plt.ylabel("Humidity")
plt.xlabel("Latitude")
plt.savefig("../output_data/SOUTH HUM VS LAT.png")
plt.show()
```
#### Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
```
x_values = northern_hemisphere['Lat']
y_values = northern_hemisphere['Cloudiness']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope, 2)) + "x + " + str(round(intercept, 2))
plt.scatter(x_values, y_values)
plt.plot(x_values, regress_values, "r-")
plt.annotate(line_eq, (6, 10), fontsize = 15, color = "red")
plt.ylim(-10, 110)
plt.xlim(-10, 90)
plt.ylabel("Cloudiness")
plt.xlabel("Latitude")
plt.savefig("../output_data/NORTH CLOUD VS LAT.png")
plt.show()
```
#### Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
```
x_values = southern_hemisphere['Lat']
y_values = southern_hemisphere['Cloudiness']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope, 2)) + "x + " + str(round(intercept, 2))
plt.scatter(x_values, y_values)
plt.plot(x_values, regress_values, "r-")
plt.annotate(line_eq, (-25, 10),fontsize = 15,color = "red")
plt.ylim(-10, 120)
plt.xlim(-60, 10)
plt.ylabel("Cloudiness")
plt.xlabel("Latitude")
plt.savefig("../output_data/SOUTH CLOUD VS LAT.png")
plt.show()
```
#### Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
```
x_values = northern_hemisphere['Lat']
y_values = northern_hemisphere['Wind Speed']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope, 2)) + "x + " + str(round(intercept, 2))
plt.scatter(x_values, y_values)
plt.plot(x_values, regress_values, "r-")
plt.annotate(line_eq, (45, 22), fontsize = 15, color = "red")
plt.ylim(-10, 90)
plt.xlim(-10, 90)
plt.ylabel("Wind Speed (mph)")
plt.xlabel("Latitude")
plt.savefig("../output_data/NORTH WIND VS LAT.png")
plt.show()
```
#### Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
```
x_values = southern_hemisphere['Lat']
y_values = southern_hemisphere['Wind Speed']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope, 2)) + "x + " + str(round(intercept, 2))
plt.scatter(x_values, y_values)
plt.plot(x_values, regress_values, "r-")
plt.annotate(line_eq, (-25, 25), fontsize = 15,color = "red")
plt.ylim(-10, 90)
plt.xlim(-90, 20)
plt.ylabel("Wind Speed (mph)")
plt.xlabel("Latitude")
plt.savefig("../output_data/SOUTH WIND VS LAT.png")
plt.show()
```
```
from quchem.Hamiltonian_Generator_Functions import *
from quchem.Graph import *
### HAMILTONIAN start
Molecule = 'LiH'
geometry = [('Li', (0., 0., 0.)), ('H', (0., 0., 1.45))]
basis = 'sto-3g'
### Get Hamiltonian
Hamilt = Hamiltonian(Molecule,
run_scf=1, run_mp2=1, run_cisd=1, run_ccsd=1, run_fci=1,
basis=basis,
multiplicity=1,
geometry=geometry) # normally None!
QubitHamiltonian = Hamilt.Get_Qubit_Hamiltonian(threshold=None, transformation='JW')
### HAMILTONIAN end
#####################################
Hamiltonian_graph_obj = Openfermion_Hamiltonian_Graph(QubitHamiltonian)
commutativity_flag = 'AC' ## <- defines relationship between sets!!!
plot_graph = False
Graph_colouring_strategy='largest_first'
anti_commuting_sets = Hamiltonian_graph_obj.Get_Clique_Cover_as_QubitOp(commutativity_flag, Graph_colouring_strategy=Graph_colouring_strategy, plot_graph=plot_graph)
anti_commuting_sets
from quchem.Ansatz_Generator_Functions import *
##
NOON_spins_combined, NMO_basis = Hamilt.Get_NOON()
##
Hamilt.Get_CCSD_Amplitudes()
ansatz_obj = Ansatz(Hamilt.molecule.n_electrons, Hamilt.molecule.n_qubits)
reduced_Sec_Quant_CC_ops_ia, reduced_Sec_Quant_CC_ops_ijab, reduced_theta_parameters_ia, reduced_theta_parameters_ijab =ansatz_obj.Remove_NOON_terms(
NOON=NOON_spins_combined,
occ_threshold= 1.999,
unocc_threshold=1e-4,
indices_to_remove_list_manual=None,
single_cc_amplitudes=Hamilt.molecule.single_cc_amplitudes,
double_cc_amplitudes=Hamilt.molecule.double_cc_amplitudes,
singles_hamiltonian=Hamilt.singles_hamiltonian,
doubles_hamiltonian=Hamilt.doubles_hamiltonian,
tol_filter_small_terms=None)
reduced_Sec_Quant_CC_ops_ijab
ia_terms, ijab_terms, ia_theta, ijab_theta = ansatz_obj.Get_ia_and_ijab_terms()
print('REDUCTION')
print('ia_terms', len(ia_terms), 'TO', len(reduced_Sec_Quant_CC_ops_ia))
print('ijab_terms', len(ijab_terms), 'TO', len(reduced_Sec_Quant_CC_ops_ijab))
Qubit_Op_list_Second_Quant_CC_Ops_ia, Qubit_Op_list_Second_Quant_CC_Ops_ijab = ansatz_obj.UCCSD_single_trotter_step(reduced_Sec_Quant_CC_ops_ia,
reduced_Sec_Quant_CC_ops_ijab)
full_ansatz_Q_Circ = Ansatz_Circuit(Qubit_Op_list_Second_Quant_CC_Ops_ia, Qubit_Op_list_Second_Quant_CC_Ops_ijab,
Hamilt.molecule.n_qubits, Hamilt.molecule.n_electrons)
ansatz_cirq_circuit = full_ansatz_Q_Circ.Get_Full_HF_UCCSD_QC(reduced_theta_parameters_ia, reduced_theta_parameters_ijab)
print(ansatz_cirq_circuit.to_text_diagram(transpose=True))
simulator = cirq.Simulator()
result = simulator.compute_amplitudes(ansatz_cirq_circuit, bitstrings=[i for i in range(2 ** len(ansatz_cirq_circuit.all_qubits()))])
result=np.around(result, 5)
print(result.reshape([(2 ** len(ansatz_cirq_circuit.all_qubits())), 1]))
from quchem.LCU_method import *
from quchem.Simulating_Quantum_Circuit import *
R_uncorrected, Pn, gamma_l = Get_R_linear_combination(anti_commuting_sets[10], 1)
R_corrected_Op_list, R_corr_list, ancilla_amplitudes, l1 = absorb_complex_phases(R_uncorrected)
full_circ = Full_Q_Circuit(Pn, R_corrected_Op_list, R_corr_list, ancilla_amplitudes, Hamilt.molecule.n_qubits, ansatz_cirq_circuit)
output_bin_dict = Get_binary_dict_project(full_circ, Pn, 100000, Hamilt.molecule.n_qubits, ancilla_amplitudes, l1)
expectation_value_by_parity(output_bin_dict) * gamma_l
# from timeit import default_timer as timer
# n_shots = 1000
# start = timer()
# simulator = cirq.Simulator()
# raw_result = simulator.run(full_circ, repetitions=10*n_shots * int(np.ceil(1 / (1/l1)**2)))
# end = timer()
# print('time taken = ', end - start ,'seconds')
from timeit import default_timer as timer
n_shots=100
###
start = timer()
testing = VQE_Experiment_LCU_UP(anti_commuting_sets,
ansatz_cirq_circuit,
n_shots,
Hamilt.molecule.n_qubits,
N_indices_dict=None)
print(testing.Calc_Energy())
end = timer()
###
print('time taken = ', end - start ,'seconds')
Hamilt.molecule.fci_energy
n_shots=100
def GIVE_ENERGY(theta_ia_theta_jab_list):
theta_ia = theta_ia_theta_jab_list[:len(reduced_theta_parameters_ia)]
theta_ijab = theta_ia_theta_jab_list[len(reduced_theta_parameters_ia):]
ansatz_cirq_circuit = full_ansatz_Q_Circ.Get_Full_HF_UCCSD_QC(theta_ia, theta_ijab)
VQE_exp_LCU = VQE_Experiment_LCU_UP(anti_commuting_sets,
ansatz_cirq_circuit,
n_shots,
Hamilt.molecule.n_qubits,
N_indices_dict=None)#{7:0, 8:1, 9:0, 10:1})
return VQE_exp_LCU.Calc_Energy().real
### optimizer
from quchem.Scipy_Optimizer import *
THETA_params=[0 for _ in range(len(reduced_theta_parameters_ia)+len(reduced_theta_parameters_ijab))]
GG = Optimizer(GIVE_ENERGY, THETA_params, 'Nelder-Mead', store_values=True, display_iter_steps=True,
tol=1e-5,
display_convergence_message=True)
GG.get_env(50)
GG.plot_convergence()
plt.show()
```
### Tensorflow
```
from quchem.TensorFlow_Opt import *
```
**The gradient is given by** (see https://arxiv.org/pdf/1906.08728.pdf):
$$\frac{\partial O(\theta)}{\partial \theta}=\left\langle\overrightarrow{0}\left|\hat{U}^{\dagger} \hat{R}_{y}^{C \dagger}(\theta+\pi / 4) \hat{V}^{\dagger} \hat{O} \hat{V} \hat{R}_{y}^{C}(\theta+\pi / 4) \hat{U}\right| \overrightarrow{0}\right\rangle -\left\langle\overrightarrow{0}\left|\hat{U}^{\dagger} \hat{R}_{y}^{C \dagger}(\theta-\pi / 4) \hat{V}^{\dagger} \hat{O} \hat{V} \hat{R}_{y}^{C}(\theta-\pi / 4) \hat{U}\right| \overrightarrow{0}\right\rangle$$
$$\frac{\partial O(\theta)}{\partial \theta} =O(\theta+\pi / 4)-O(\theta-\pi / 4)$$
```
def calc_gradient(theta_ia_theta_jab_list):
grad_list=[]
for index, theta in enumerate(theta_ia_theta_jab_list):
new_theta_list = theta_ia_theta_jab_list.copy()
new_theta_list[index] = theta + np.pi/4
Obs_PLUS = GIVE_ENERGY(new_theta_list)
new_theta_list[index] = theta - np.pi/4
Obs_MINUS = GIVE_ENERGY(new_theta_list)
gradient = Obs_PLUS - Obs_MINUS
grad_list.append((gradient, theta))
return grad_list
```
Note: this is very slow, since it must run two separate experiments for each parameter before taking a single optimization step.
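The π/4 shift rule can be sanity-checked on a toy single-parameter cost function. For a gate R_y(θ) = exp(−iθY) acting on |0⟩ and measured in Z, the expectation is O(θ) = cos 2θ, and the shifted difference reproduces the analytic derivative exactly. A stand-alone sketch, independent of quchem:

```python
import numpy as np

def O(theta):
    # Expectation <0| Ry(theta)^† Z Ry(theta) |0> with Ry(theta) = exp(-i*theta*Y):
    # Ry(theta)|0> = cos(theta)|0> + sin(theta)|1>, so <Z> = cos(2*theta).
    return np.cos(2 * theta)

theta = 0.7
analytic = -2 * np.sin(2 * theta)                       # dO/dtheta
shifted = O(theta + np.pi / 4) - O(theta - np.pi / 4)   # the pi/4 shift rule
assert np.isclose(analytic, shifted)
```

This also makes the cost explicit: two full evaluations of O per parameter, per optimization step.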
```
import random
X0 = [random.uniform(0, 2*np.pi) for _ in range(len(reduced_Sec_Quant_CC_ops_ia) + len(reduced_Sec_Quant_CC_ops_ijab))]
GG = Tensor_Flow_Optimizer(GIVE_ENERGY, X0, 'Adam', calc_gradient, learning_rate=0.1, beta1=0.9,
beta2=0.999, store_values=True, display_iter_steps=True)
GG.optimize(50)
GG.plot_convergence()
from quchem.Adam_Optimizer import *
import random
def calc_gradient_ADAM(theta_ia_theta_jab_list):
grad_list=[]
for index, theta in enumerate(theta_ia_theta_jab_list):
new_theta_list = theta_ia_theta_jab_list.copy()
new_theta_list[index] = theta + np.pi/4
Obs_PLUS = GIVE_ENERGY(new_theta_list)
new_theta_list[index] = theta - np.pi/4
Obs_MINUS = GIVE_ENERGY(new_theta_list)
gradient = Obs_PLUS - Obs_MINUS
grad_list.append(gradient)
return np.array(grad_list)
X0 = np.array([random.uniform(0, 2*np.pi) for _ in range(len(reduced_Sec_Quant_CC_ops_ia) + len(reduced_Sec_Quant_CC_ops_ijab))])
opt_params, list_of_inputs, list_of_outputs = Adam_Opt(X0, GIVE_ENERGY,
calc_gradient_ADAM,
learning_rate=0.1,
beta_1=0.9,
beta_2=0.999,
epsilon=1e-8,
max_iter=50,
disp=True,
tolerance=1e-3,
store_steps=True)
GIVE_ENERGY(opt_params)
def LinAlgEnergy(theta_ia_theta_jab_list):
theta_ia = theta_ia_theta_jab_list[:len(reduced_theta_parameters_ia)]
theta_ijab = theta_ia_theta_jab_list[len(reduced_theta_parameters_ia):]
Qubit_Ham_matrix_JW = Hamilt.Get_sparse_Qubit_Hamiltonian_matrix(QubitHamiltonian)
ansatz_lin_alg_obj = Ansatz_MATRIX(Hamilt.molecule.n_electrons, Hamilt.molecule.n_qubits, reduced_Sec_Quant_CC_ops_ia, reduced_Sec_Quant_CC_ops_ijab)
# state_ket = ansatz_lin_alg_obj.Calc_ansatz_state_withOUT_trot(reduced_theta_parameters_ia, reduced_theta_parameters_ijab, 'JW')
state_ket = ansatz_lin_alg_obj.Calc_ansatz_state_WITH_trot_SINGLE_STEP(theta_ia, theta_ijab, 'JW')
Energy = ansatz_lin_alg_obj.Calc_energy_of_state(state_ket, Qubit_Ham_matrix_JW)
return Energy
X0 = np.array([random.uniform(0, 2*np.pi) for _ in range(len(reduced_Sec_Quant_CC_ops_ia) + len(reduced_Sec_Quant_CC_ops_ijab))])
LinAlgEnergy(X0)
def calc_gradient_ADAM_linAlg(theta_ia_theta_jab_list):
grad_list=[]
for index, theta in enumerate(theta_ia_theta_jab_list):
new_theta_list = theta_ia_theta_jab_list.copy()
new_theta_list[index] = theta + np.pi/4
Obs_PLUS = LinAlgEnergy(new_theta_list)
new_theta_list[index] = theta - np.pi/4
Obs_MINUS = LinAlgEnergy(new_theta_list)
gradient = Obs_PLUS - Obs_MINUS
grad_list.append(gradient)
return np.array(grad_list)
calc_gradient_ADAM_linAlg(X0)
opt_params, list_of_inputs, list_of_outputs = Adam_Opt(X0, LinAlgEnergy,
calc_gradient_ADAM_linAlg,
learning_rate=0.1,
beta_1=0.9,
beta_2=0.999,
epsilon=1e-8,
max_iter=50,
disp=True,
tolerance=1e-3,
store_steps=True)
LinAlgEnergy(opt_params)
INPUT = np.array([4.52776195, 4.42573835, 1.72722763, 3.14817708, 0.28249851,
1.43945444, 3.90949061, 7.84202741, 4.38961051, 6.35661061,
3.28588646, 6.25964932])
opt_params, list_of_inputs, list_of_outputs = Adam_Opt(INPUT, LinAlgEnergy,
calc_gradient_ADAM_linAlg,
learning_rate=0.01,
beta_1=0.9,
beta_2=0.999,
epsilon=1e-8,
max_iter=50,
disp=True,
tolerance=1e-5,
store_steps=True)
LinAlgEnergy(opt_params)
INPUT = np.array([4.55383355, 4.55428763, 1.56073891, 3.16001904, 0.30119385,
1.55034305, 3.83016092, 7.89648636, 4.41184851, 6.29877879,
3.2903192 , 6.29409434])
opt_params, list_of_inputs, list_of_outputs = Adam_Opt(INPUT, LinAlgEnergy,
calc_gradient_ADAM_linAlg,
learning_rate=0.1,
beta_1=0.9,
beta_2=0.999,
epsilon=1e-8,
max_iter=50,
disp=True,
tolerance=1e-5,
store_steps=True)
LinAlgEnergy(opt_params)
Hamilt.molecule.ccsd_energy
```
```
from scipy.spatial.distance import squareform, cdist
from scipy.spatial.distance import pdist
from scipy.integrate import quad
from itertools import combinations, product, combinations_with_replacement
from functools import partial
from goatools import obo_parser
import matplotlib.pyplot as plt
import networkx as nx
import pandas as pd
import numpy as np
import random
import scipy
import graco
import time
import os
```
# BioGRID
```
df = pd.read_csv("/media/clusterduck123/joe/data/raw-data/BIOGRID-IDENTIFIERS-3.5.181.tab.txt",
delimiter='\t',
header=20)
df = df[df.ORGANISM_OFFICIAL_NAME == 'Saccharomyces cerevisiae']
df.to_csv("/media/clusterduck123/joe/data/raw-data/BIOGRID_SC_IDENTIFIERS-3.5.181.csv")
ID = df.BIOGRID_ID.unique()
min(ID), max(ID)
df = pd.read_csv("/media/clusterduck123/joe/data/raw-data/BIOGRID-IDENTIFIERS-3.5.181.tab.txt",
delimiter='\t',
skiprows=set(range(511837)) - {28},
nrows=116022,)
df.to_csv("/media/clusterduck123/joe/data/raw-data/BIOGRID_SC_IDENTIFIERS-3.5.181.csv")
t1 = time.time()
organism_dict = {}
organisms = set()
df = pd.read_csv("/media/clusterduck123/joe/data/raw-data/BIOGRID-IDENTIFIERS-3.5.181.tab.txt",
delimiter='\t',
header=20,
iterator=True)
for nr in range(10000000):
organism, = df.get_chunk(1).ORGANISM_OFFICIAL_NAME
if not organism in organisms:
print(organism, nr)
organisms.add(organism)
organism_dict[organism] = nr
t2 = time.time()
print(t2-t1)
```
# Fill NaNs
```
all_metrics = {'euclidean', 'cityblock', 'sqeuclidean',
'cosine', 'correlation', 'chebyshev',
'canberra', 'braycurtis'}
GCV_NULL = pd.DataFrame(np.nan,
columns=graco.coefficients(nx.Graph()).columns,
index=['self'])
graco.fill_nan(GCV_NULL, 'barycenter')
GDV = np.array([
# 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,10,11,12,13,14
[2, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
[2, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
[3, 0, 2, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
[1, 2, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
])
GCV = graco.coefficients(GDV)
for metric in all_metrics - {'correlation'}:
D1 = graco.GCV_distance_matrix(GCV, metric)
D2 = squareform([graco.GCV_distance(GCV.loc[i], GCV.loc[j], metric) for i,j in combinations(range(4), 2)])
print(np.isclose(D1,D2).all())
D = graco.GCV_distance_matrix(GCV, 'cityblock')
nx.barabasi_albert_graph
for t in GCV.iterrows():
break
t[1]
np.isclose(D,
squareform([graco.GCV_distance(u, v, 'cityblock') for (_,u),(_,v) in combinations(GCV.iterrows(), 2)]))
np.isclose(D,
squareform([graco.GCV_distance(GCV.loc[i], GCV.loc[j], 'cityblock') for i,j in combinations(range(4), 2)]))
squareform(graco.GCV_distance(GCV.loc[i], GCV.loc[j], 'cityblock') for i,j in combinations(range(4), 2))
u = GCV.loc[0]
v = GCV.loc[3]
df = pd.concat([u, v], axis=1).T.dropna(axis=1)
d = np.mean(list(graco.convex_distance(u[eq], v[eq], 'cityblock') for eq in graco.iter_equations(df)))
d
for eq,coeffs in graco.iter_equation_coefficients(df):
break
coeffs
for eq in graco.iter_equations(GCV_NULL):
break
for eq in graco.iter_equations(GCV_NULL):
GCV_NULL[eq]
D = graco.GCV_distance2(GCV_NULL, 'canberra')
GCV_NULL.dropna(axis=1)
GCV_NULL.loc['other'] = coeffs
u = GCV.loc[2]
v = GCV.loc[3]
df = pd.concat([u, v], axis=1).T.dropna(axis=1)
d = sum(graco.distance(coeffs.iloc[0], coeffs.iloc[1], metric) for eq, coeffs in graco.iter_equation_coefficients(df))
assert sum(u['A']['0']) == 1.
for eq, coeffs in graco.iter_equation_coefficients(df):
print(eq, len(coeffs.T))
graco.distance(coeffs.iloc[0], coeffs.iloc[1], dist='canberra')
i = 10
def foo():
i = 1
print(i)
print(i)
foo()
print(i)
enclosed
eval(_i54)
pd.concat([u, v], axis=1).T
u
GCV_NULL
pd.DataFrame(coeffs).T
GCV_NULL = pd.DataFrame(np.nan, columns=GCV.columns, index=['NULL'])
graco.fill_nan(GCV_NULL, 'barycenter')
GCV_NULL
GCV_NULL = pd.DataFrame(np.nan, columns=GCV.columns)
graco.fill_nan(pd.DataFrame(GCV.loc['NULL']).T, 'barycenter')
GCV
pd.DataFrame(GCV.loc['NULL']).T
graco.fill_nan(GCV, 'barycenter')
GCV.T.head(60)
lowest_two_levels = list(range(GCV.columns.nlevels-1))
for eq, coeffs in GCV.groupby(level = lowest_two_levels,
axis = 1):
GCV.loc[:,eq] = coeffs.fillna(coeffs.mean())
GCV
coeffs
GCV.loc[:,eq]
```
----
```
DATA_DIRECTORY = "/media/clusterduck123/joe/data"
YEAST_DIRECTORY = f"{DATA_DIRECTORY}/processed-data/yeast"
MATRIX_DIRECTORY = f"{YEAST_DIRECTORY}/distance-matrices"
RAW_DATA_DIRECTORY = f"{DATA_DIRECTORY}/raw-data"
NETWORK_DIRECTORY = f"{YEAST_DIRECTORY}/networks"
go_dag = obo_parser.GODag(f"{RAW_DATA_DIRECTORY}/go-basic.obo")
DEG_FILENAME = "degannotation-e.dat"
DEG_FILEPATH = f"{RAW_DATA_DIRECTORY}/{DEG_FILENAME}"
# load DEG dat-file as dataframe and extract yeast data (DEG2001)
DEG_df = pd.read_csv(f"{DEG_FILEPATH}", delimiter='\t', encoding='latin1').loc['DEG2001'].reset_index()
# load PPI with official symbol gene names
PPI_nx = nx.read_edgelist(f"{NETWORK_DIRECTORY}/PPI_BioGRID_official.txt")
e_genes = set(DEG_df.level_1) & set(PPI_nx)
n_genes = set(PPI_nx) - e_genes
assert len(PPI_nx) == len(e_genes) + len(n_genes)
GDV = graco.orbits(PPI_nx)
GCV = graco.coefficients(PPI_nx)
GDV.loc['SIGNATURE'] = GDV.loc[e_genes].mean()
GCV.loc['SIGNATURE'] = GCV.loc[e_genes].mean()
GCV
GCV['O'].T
plt.plot(pts1[0], pts1[1], '*')
plt.plot(pts2[0], pts2[1], 'o')
plt.plot(pts3[0], pts3[1], 'x')
G = nx.Graph()
G.add_edges_from([('A','B'), ('B','C'),
('D','E'), ('E','F'),
('A','D'), ('C','F'),
('B','E'), ('B','F')])
GDV = graco.orbits(G)
GCV = graco.coefficients(G)
GDV[['0','1','2','3']]
pd.DataFrame(graco.distance_matrix(GDV[['0','1','2','3']], 'GDV_similarity'))
pd.DataFrame(graco.GCV_distance(GCV['D']['0'], 'canberra'))
GCV['D']
{go_term.name for go_term in go_dag.values() if go_term.namespace == 'biological_process' and go_term.depth == 3 and go_term.level == 2}
[go_term.id for go_term in go_dag.values() if go_term.name == 'cell cycle']
[go_term.name for go_term in go_dag['GO:0009566'].children if go_term.depth==2]
go_dag['GO:0007049']
[[go_term.name for go_term in path] for path in go_dag.paths_to_top('GO:0051321')]
go_id, = random.sample(list(go_dag.keys()), 1)
level = go_dag[go_id].level
depth = go_dag[go_id].depth
level, depth
sorted(len(path)-1 for path in go_dag.paths_to_top(go_id))
G = nx.DiGraph()
G.add_edges_from(term.get_all_parent_edges())
go_dag['GO:0008150']
nx.path_graph
nx.shortest_path_length(G,'GO:0000001', 'GO:0008150')
G['GO:0000001']
```
# Network models
```
PPI_nx = nx.read_edgelist(f"{NETWORK_DIRECTORY}/PPI_BioGRID.txt")
PPI_GDV = graco.orbits(PPI_nx)
PPI_GCV = graco.coefficients(PPI_nx)
nx.density(PPI_nx)
N = 2**12
p = 0.01
#G = nx.erdos_renyi_graph(N, p)
#G = nx.barabasi_albert_graph(N, 2)
#G = nx.random_geometric_graph(N, radius=0.64, dim=10)
G = nx.random_internet_as_graph(N)
print(nx.density(G))
GDV = graco.orbits(G)
GCV = graco.coefficients(G)
feature = pd.DataFrame({'x':GCV['D']['0']['3'],
'y':GCV['A']['0']['3']})
PPI_feature = pd.DataFrame({'x':PPI_GCV['D']['0']['3'],
'y':PPI_GCV['A']['0']['3']})
fig, ax = plt.subplots(figsize=(9,7))
ax.plot(PPI_feature['x'], PPI_feature['y'], '*', alpha=0.5);
ax.plot( feature['x'], feature['y'], '*', alpha=0.1);
ax.set_xlim(0,1)
ax.set_ylim(0,1)
x = GDV['0']
y = GCV['D']['0']['3']
plt.loglog(x,y, '*');
plt.loglog(x,x.astype(float)**(-2), '*');
```
# Density calculation
```
@np.vectorize
def f_Z(t):
return np.where(0<t<1, 1/np.sqrt(t)-1, 0)
@np.vectorize
def f_Z2(t):
return quad(lambda tau:f_Z(t-tau)*f_Z(tau),0,2)[0]
def kernel(t, tau):
return (1/np.sqrt(t-tau)-1)*(1/np.sqrt(tau)-1)
@np.vectorize
def f_ZZ2(t):
return quad(lambda tau : kernel(t,tau), max(0,t-1), min(1,t))[0]
@np.vectorize
def f_Z3(t):
return quad(lambda tau:f_ZZ2(t-tau)*f_Z(tau),0,3)[0]
X1 = np.random.uniform(size = 5000)
Y1 = np.random.uniform(size = 5000)
X2 = np.random.uniform(size = 5000)
Y2 = np.random.uniform(size = 5000)
X3 = np.random.uniform(size = 5000)
Y3 = np.random.uniform(size = 5000)
Z1 = np.abs(X1-Y1)**2
Z2 = np.abs(X2-Y2)**2
Z3 = np.abs(X3-Y3)**2
Z = Z1+Z2+Z3
x = np.linspace(0,3,200)
y = f_Z3(x)
plt.hist(Z, bins=50, density=True);
plt.plot(x,y)
```
# Nan-control
```
feature = 'GDV'
MIN_CLUSTERS = 2
MAX_CLUSTERS = 100
all_distances = sorted('_'.join(filename.split('_')[:-1])
for filename in os.listdir(f"{MATRIX_DIRECTORY}/{feature}"))
for distance in all_distances:
df = pd.read_csv(f"{MATRIX_DIRECTORY}/{feature}/{distance}_BioGRID.txt", delimiter=' ')
print(distance, df.isna().any().any())
```
# Matrix preparation
```
G = nx.erdos_renyi_graph(100,0.1)
GDV = graco.orbits(G)
deg = GDV['0'].values
A = nx.to_numpy_array(G)
Asq = A@A
T = Asq*A
E = Asq-T
np.fill_diagonal(E,0)
B1 = A*(deg-1)-T
B2 = B1.T
(1*GDV['1'] + 2*GDV['3'] == A@(GDV['0']-1)).all(), \
(1*GDV['4'] + 2*GDV['8'] + 2*GDV['9'] + 2*GDV['12'] == E@(GDV['0']-1)).all(), \
(1*GDV['10'] + 2*GDV['12'] + 6*GDV['14'] + 2*GDV['13'] == T@(GDV['0']-2)).all(), \
(2*GDV['6'] + 1*GDV['10'] + 2*GDV['9'] + 2*GDV['12'] == B1@(GDV['0']-2)).all(), \
(1*GDV['5'] + 2*GDV['11'] + 2*GDV['8'] + 2*GDV['13'] == B2@(GDV['0']-1)).all()
matrices = [A, Asq, T, E, B1, B2]
for i in range(4):
D1 = np.diag(GDV[str(i)])
D2 = D1*D1
matrices.append(D1.copy())
matrices.append(D2.copy())
```
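The first of the identities verified above (orbit 1 plus twice orbit 3 equals A·(deg−1)) can be cross-checked without graco by brute-force counting on a small graph. A stand-alone sketch using only numpy and networkx:

```python
import numpy as np
import networkx as nx

# Brute-force check of: orbit1 + 2*orbit3 == A @ (deg - 1), where orbit 1
# counts the open 2-paths a node terminates and orbit 3 counts the
# triangles it sits in.
G = nx.erdos_renyi_graph(30, 0.2, seed=1)
A = nx.to_numpy_array(G)
deg = A.sum(axis=1)

triangles = np.diag(A @ A @ A) / 2          # orbit 3 per node
orbit1 = np.zeros(G.number_of_nodes())      # orbit 1 per node
for u in G:
    for v in G[u]:
        for w in G[v]:
            if w != u and not G.has_edge(u, w):
                orbit1[u] += 1              # u--v--w is an open 2-path

assert np.allclose(orbit1 + 2 * triangles, A @ (deg - 1))
```

Every 2-path starting at a node either stays open (orbit 1) or closes into a triangle, and each triangle contributes two such 2-paths, which is exactly the identity.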
# One matrix
```
for nr,M in enumerate(matrices):
for tmp1 in range(4):
GDV['tmp1'] = M @ GDV[str(tmp1)]
for i in range(4):
for comb in combinations(range(4),i):
orbits = sorted(map(str,set(range(4)) - set(comb))) + ['tmp1']
test = GDV[orbits]
rank = np.linalg.matrix_rank(test)
if rank == len(orbits)-1:
k = scipy.linalg.null_space(test)
assert k.shape == (len(orbits),1)
if (np.abs(k) < 1e-10).any():
continue
else:
print("YASS!!")
print(orbits, nr)
```
# Two matrices
```
name2matrix = {
'A':A,
'Asq':Asq,
'T':T,
'E':E,
'B1':B1,
'B2':B2,
'D01': np.diag(GDV['0']),
'D02': np.diag(GDV['0'])*np.diag(GDV['0']),
'D11': np.diag(GDV['1']),
'D12': np.diag(GDV['1'])*np.diag(GDV['1']),
'D21': np.diag(GDV['2']),
'D22': np.diag(GDV['2'])*np.diag(GDV['2']),
'D31': np.diag(GDV['3']),
'D32': np.diag(GDV['3'])*np.diag(GDV['3'])
}
for M_name, N_name in combinations_with_replacement(name2matrix, 2):
M = name2matrix[M_name]
N = name2matrix[N_name]
for tmp1,tmp2 in product(range(4), repeat=2):
GDV['tmp1'] = M @ GDV[str(tmp1)]
GDV['tmp2'] = N @ GDV[str(tmp2)]
for i in range(4):
for comb in combinations(range(4),i):
num_orbits = set(range(4)) - set(comb)
orbits = sorted(map(str,num_orbits)) + ['tmp1', 'tmp2']
test = GDV[orbits]
rank = np.linalg.matrix_rank(test)
if rank == len(orbits)-1:
k = scipy.linalg.null_space(test)
assert k.shape == (len(orbits),1)
if (np.abs(k) < 1e-10).any():
continue
else:
print(orbits, M_name, N_name)
```
# Three matrices
```
for M_name, N_name, O_name in combinations_with_replacement(name2matrix, 3):
M = name2matrix[M_name]
N = name2matrix[N_name]
O = name2matrix[O_name]
for tmp1,tmp2,tmp3 in product(range(4), repeat=3):
GDV['tmp1'] = M @ GDV[str(tmp1)]
GDV['tmp2'] = N @ GDV[str(tmp2)]
GDV['tmp3'] = O @ GDV[str(tmp3)]
for i in range(4):
for comb in combinations(range(4),i):
num_orbits = set(range(4)) - set(comb)
orbits = sorted(map(str,num_orbits)) + ['tmp1', 'tmp2']
test = GDV[orbits]
rank = np.linalg.matrix_rank(test)
if rank == len(orbits)-1:
k = scipy.linalg.null_space(test)
assert k.shape == (len(orbits),1)
if (np.abs(k) < 1e-10).any():
continue
else:
print(orbits, M_name, N_name, O_name)
```
# Here we GO
```
G = nx.erdos_renyi_graph(100,0.1)
A = nx.to_numpy_array(G)
Asq = A@A
T = Asq*A
GDV = graco.orbits(G)
GCV = graco.coefficients(GDV).sort_index(axis=1)
for tmp1,tmp2 in product(range(4), repeat=2):
print(tmp1,tmp2)
GDV['tmp1'] = GDV['0'] * GDV[str(tmp1)]
GDV['tmp2'] = A @ GDV[str(tmp2)]
for i in range(15):
for comb in combinations(range(15),i):
orbits = sorted(map(str,set(range(15)) - set(comb))) + ['tmp1', 'tmp2']
test = GDV[orbits]
rank = np.linalg.matrix_rank(test)
if rank == len(orbits)-1:
k = scipy.linalg.null_space(test)
assert k.shape == (len(orbits),1)
if (np.abs(k) < 1e-10).any():
continue
else:
print("YASS!!")
print(orbits)
print(orbits)
(k < 1e-10).any()
np.linalg.matrix_rank(test)
k = scipy.linalg.null_space(test)
k
k/np.min(np.abs(k))
def GCV_distance(GCV, distance, nan='include'):
D_all = pd.DataFrame(0, index=GCV.index, columns=GCV.index)
Divisor = pd.DataFrame(0, index=GCV.index, columns=GCV.index)
if nan == 'include':
if type(GCV.columns) == pd.MultiIndex:
depth = len(GCV.columns.levels)
for eq in set(GCV.columns.droplevel([depth-1])):
length = len(GCV[eq].T)
D_i = graco.distance_matrix(GCV[eq].dropna(), distance) / normalizer(distance,length)
not_nan_indices = GCV.index[~GCV[eq].isna().any(axis=1)]
D_all.loc[ not_nan_indices,not_nan_indices] += D_i
Divisor.loc[not_nan_indices,not_nan_indices] += 1
return D_all / Divisor
else:
raise Exception
else:
raise Exception
GCV_distance(GCV,distance)
distance = 'normalized1_linf'
D_all = pd.DataFrame(0, index=GCV.index, columns=GCV.index)
Divisor = pd.DataFrame(0, index=GCV.index, columns=GCV.index)
depth = len(GCV.columns.levels)
for eq in set(GCV.columns.droplevel([depth-1])):
length = len(GCV[eq].T)
D_i = graco.distance_matrix(GCV[eq].dropna(), distance) / normalizer(distance,length)
not_nan_indices = GCV.index[~GCV[eq].isna().any(axis=1)]
D_all.loc[ not_nan_indices,not_nan_indices] += D_i
Divisor.loc[not_nan_indices,not_nan_indices] += 1
D = D_all / Divisor
D
gcv = GCV.droplevel(0,axis=1)
GCV.columns.levels[-2]
GCV.columns.levels[0:2]
GCV.xs('0-0', axis=1, level=-2)
? pd.IndexSlice
T = nx.Graph()
T.add_edges_from(('o',i) for i in range(4))
graco.orbits(T)
T.add_edges_from([(0,1)])
graco.orbits(T)
T.add_edges_from([(1,2)])
graco.orbits(T)
T.add_edges_from([(0,3)])
graco.orbits(T)
nan_indices = GCV.index[ GCV[eq].isna().any(axis=1)]
not_nan_indices = GCV.index[~GCV[eq].isna().any(axis=1)]
Divisor.loc[not_nan_indices,not_nan_indices] += 1
multi = GCV.columns
depth = len(GCV.columns.levels)
set(GCV.columns.droplevel([depth-1]))
set(GCV.columns.droplevel([depth-1]))
GCV
GCV[a[-2:]]
for b in product(*gcv.columns.levels[:-1]):
break
b
gcv = GCV_distance(GCV, 3)
distance = 'normalized1_linf'
D = pd.DataFrame(0, index=gcv.index, columns=gcv.index)
for group in gcv.columns.levels[0]:
D = graco.distance_matrix(gcv[group], distance)
break
D = pd.DataFrame(0, index=gcv.index, columns=gcv.index)
D
type(D.columns) == pd.MultiIndex
pd.MultiIndex.from_product([D.columns, ['C']])
type(D.columns)
GCV.columns.droplevel([0,2])
range(-3)
```
# M² Experimental Design
**Scott Prahl**
**Mar 2021**
The basic idea behind measuring M² is simple: use a CCD imager to capture the changing beam profile at different points along the direction of propagation. Doing this accurately is a challenge, because the beam must always fit within the camera sensor and the measurement locations should include points both near the focus and far from it. Moreover, in most situations the focus is not accessible; in that case a lens is used to create an artificial focus that can be measured.
One of the nice properties of M² is that it is unaffected by refocusing: the artificially focused beam will have a different beam waist and Rayleigh distance, but its M² value will be the same as that of the original beam.
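This invariance can be illustrated numerically. A real beam with quality factor M² propagates like a Gaussian with effective wavelength M²λ, so the complex-beam-parameter (ABCD) rules apply directly: an ideal thin lens changes the waist and Rayleigh distance but not the beam parameter product w₀θ = M²λ/π. The equality follows from the embedded-Gaussian model, so this is a consistency check rather than an independent derivation. A sketch using the same numbers as the plot below:

```python
import numpy as np

lambda0 = 632.8e-9     # wavelength [m]
M2 = 2                 # beam quality factor
w0 = 450e-6            # waist radius before the lens [m]
s = 500e-3             # distance from the waist to the lens [m]
f = 300e-3             # lens focal length [m]

zR = np.pi * w0**2 / (M2 * lambda0)   # generalized Rayleigh distance
q = s + 1j * zR                       # complex beam parameter at the lens
q2 = 1 / (1 / q - 1 / f)              # thin-lens transformation of q

zR2 = q2.imag                                  # new Rayleigh distance
w0_new = np.sqrt(M2 * lambda0 * zR2 / np.pi)   # new waist radius
theta = w0 / zR                                # far-field half-angles
theta_new = w0_new / zR2

# The waist and Rayleigh distance both shrink...
print(w0, zR, "->", w0_new, zR2)
# ...but the beam parameter product w0*theta = M^2*lambda/pi does not change
assert np.isclose(w0 * theta, w0_new * theta_new)
```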
This notebook describes a set of constraints for selecting an imaging lens, and then gives examples of a successful and an unsuccessful measurement.
---
*If* `` laserbeamsize `` *is not installed, uncomment the following cell (i.e., delete the initial #) and execute it with* `` shift-enter ``. *Afterwards, you may need to restart the kernel/runtime before the module will import successfully.*
```
#!pip install --user laserbeamsize
import numpy as np
import matplotlib.pyplot as plt
try:
import laserbeamsize as lbs
except ModuleNotFoundError:
print('laserbeamsize is not installed. To install, uncomment and run the cell above.')
print('Once installation is successful, rerun this cell again.')
pixel_size = 3.75e-6 # pixel size in m
pixel_size_mm = pixel_size * 1e3
pixel_size_µm = pixel_size * 1e6
```
## Designing an M² measurement
We first need to figure out the focal length of the lens that will be used. The design example that we will use is for a low divergence beam. (Highly divergent lasers, e.g. laser diodes, are better suited to other techniques.)
Obviously, we do not want to introduce experimental artifacts into the measurement and therefore we want to minimize introducing wavefront aberrations with the lens. In general, to avoid spherical aberrations the f-number (the focal length divided by the beam diameter) of the lens should be over 20. For a low divergence beam the beam diameter will be about 1mm at the lens and, as we will see below, the allowed f-numbers will all be much greater than 20 and we don't need to worry about it further (as long as a plano-convex lens or doublet is used in the right orientation).
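As a quick sanity check of that rule of thumb (the ~1mm beam diameter is the assumption from the text; the candidate focal lengths are illustrative):

```python
# f-number rule of thumb: f/# = focal length / beam diameter at the lens
beam_diameter = 1e-3                  # ~1 mm beam at the lens (assumed, per the text)
for f in (100e-3, 190e-3, 250e-3):    # candidate focal lengths [m]
    f_number = f / beam_diameter
    print(f"f = {f*1e3:.0f} mm -> f/# = {f_number:.0f}")
# every candidate gives f/# well above 20, so spherical aberration is
# negligible for a correctly oriented plano-convex lens or doublet
```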
### Creating an artificial focus
An example of beam propagation is shown below. The beam waist is at -500mm and a lens is located at 0mm. The beam cross section is exaggerated because the aspect ratio on the axes is 1000:1.
```
lambda0 = 632.8e-9 # wavelength of light [m]
w0 = 450e-6 # radius at beam waist [m]
f = 300e-3 # focal length of lens [m]
lbs.M2_focus_plot(w0, lambda0, f, z0=-500e-3, M2=2)
plt.show()
```
### Axial measurement positions
The ISO 11146-1 document, [Lasers and laser-related equipment - Test methods for laser beam widths, divergence angles and beam propagation, Part 1: Stigmatic and simple astigmatic beams](https://www.iso.org/obp/ui/#iso:std:iso:11146:-1:ed-1:v1:en) gives specific instructions for how to measure the M² value.
> If the beam waist is accessible for direct measurement, the beam waist location, beam widths, divergence angles and beam propagation ratios shall be determined by a hyperbolic fit to different measurements of the beam width along the propagation axis $z$. Hence, measurements at at least 10 different $z$ positions shall be taken. Approximately half of the measurements shall be distributed within one Rayleigh length on either side of the beam waist, and approximately half of them shall be distributed beyond two Rayleigh lengths from the beam waist. For simple astigmatic beams this procedure shall be applied separately for both principal directions.
In the picture above, the artificial beam waist is at 362mm and the Rayleigh distance for the artificial beam is 155mm. Therefore, to comply with the requirements above, five measurements should be made between 207 and 517mm from the lens, and five more at distances greater than 672mm. One possibility is the ten measurements shown below.
```
lambda0 = 632.8e-9 # wavelength of light [m]
w0 = 450e-6 # radius at beam waist [m]
f = 300e-3 # focal length of lens [m]
z = np.array([250, 300, 350, 400, 450, 675, 725, 775, 825, 875])*1e-3
lbs.M2_focus_plot(w0, lambda0, f, z0=-500e-3, M2=2)
r = lbs.beam_radius(250e-6, lambda0, z, z0=362e-3, M2=2)
plt.plot(z*1e3,r*1e6,'or')
plt.show()
```
### Camera sensor size constraints
If the beam is centered on the camera sensor, then its image should be larger than 20 pixels across and smaller than 1/4 of the narrower sensor dimension. The first constraint is critical for weakly divergent beams (e.g., HeNe) and the second for strongly divergent beams (e.g., diode lasers).
For a HeNe, the first constraint means the focal length of the lens should be greater than 100mm; if we want a 40-pixel diameter, the focal length must be more than 190mm.
(Use M²=1 so that the beam size is the smallest possible.)
```
w0 = (1e-3)/2
lambda0 = 632.8e-9
f = np.linspace(10,250)*1e-3
s = -400e-3
max_size = 960 * 0.25 * pixel_size_µm
min_size = 20 * pixel_size_µm
w0_artificial = w0 * lbs.magnification(w0,lambda0,s,f,M2=1)
plt.plot(f*1e3, w0_artificial*1e6)
plt.axhspan(min_size, 0, color='blue', alpha=0.1)
plt.text(70, 20, "Image too small")
plt.xlabel("Focal Length (mm)")
plt.ylabel("Beam Radius (µm)")
plt.axvline(190,color='black')
plt.show()
```
### Working size constraints (i.e., the optical table is only so big)
The measurements must be made on an optical table. While mirrors could be used to bounce the light around the table, this makes accurate measurement of the lens-to-sensor distance difficult. Thus we would like the distance from the lens to the focus plus 4 Rayleigh distances to be less than a meter.
Longer focal length lenses reduce the relative error in the positioning of the camera sensor relative to the lens. If one is doing these measurements by hand then ±1mm might be a typical positioning error. A motorized stage could minimize such errors, but who has the money for a stage that moves half of a meter!
This means the focal length needs to be less than 320mm. However, before that limit is reached the beam becomes too large for the sensor, so the longest usable focal length is about 275mm.
```
w0 = 1e-3 / 2
lambda0 = 632.8e-9
f = np.linspace(50,500)*1e-3
s = -400e-3
M2 = 2
w0_artificial = w0 * lbs.magnification(w0,lambda0,s,f,M2=M2)
z0_artificial = lbs.image_distance(w0,lambda0,s,f,M2=M2)
zR_artificial = lbs.z_rayleigh(w0_artificial, lambda0, M2=M2)
lens_to_4zr_distance = z0_artificial + 4 * zR_artificial
plt.plot(f*1e3, lens_to_4zr_distance*1e3)
plt.axhspan(1000, lens_to_4zr_distance[-1]*1e3, color='blue', alpha=0.1)
plt.text(350, 1050, "Axial distance too far")
plt.xlabel("Focal Length (mm)")
plt.ylabel("$z_0+4z_R$ (mm)")
plt.axvline(320,color='black')
plt.show()
radius_at_4zr = lbs.beam_radius(w0_artificial, lambda0, lens_to_4zr_distance, z0=z0_artificial, M2=M2)
max_size = 960 * 0.25 * pixel_size_µm
plt.plot(f*1e3, radius_at_4zr*1e6)
plt.axhspan(1600, max_size, color='blue', alpha=0.1)
plt.text(350, 1000, "Beam too big")
plt.axvline(275,color='black')
plt.xlabel("Focal Length (mm)")
plt.ylabel("Beam Radius (mm)")
plt.show()
```
### Putting it all together
The focal length of the lens used to measure a multimode HeNe beam should therefore be between 190 and 275mm. Here is what a reasonable set of measurements looks like for an f=250mm lens.
```
lambda0 = 632.8e-9 # wavelength of light [m]
w0 = 500e-6 # radius at beam waist [m]
f = 250e-3 # focal length of lens [m]
s = -400e-3 # beam waist in laser to lens distance [m]
M2 = 2
lbs.M2_focus_plot(w0, lambda0, f, z0=s, M2=M2)
z0_after = lbs.image_distance(w0,lambda0,s,f,M2=M2)
w0_after = w0 * lbs.magnification(w0,lambda0,s,f,M2=M2)
zR_after = lbs.z_rayleigh(w0_after,lambda0,M2=M2)
zn = np.linspace(z0_after-zR_after,z0_after+zR_after,5)
zf = np.linspace(z0_after+2*zR_after,z0_after+4*zR_after,5)
rn = lbs.beam_radius(w0_after, lambda0, zn, z0=z0_after, M2=2)
rf = lbs.beam_radius(w0_after, lambda0, zf, z0=z0_after, M2=2)
plt.plot(zn*1e3,rn*1e6,'or')
plt.plot(zf*1e3,rf*1e6,'ob')
plt.show()
```
## Good spacing of beam size measurements
```
# datapoints digitized by hand from the graph at https://www.rp-photonics.com/beam_quality.html
lambda1=308e-9
z1_all=np.array([-200,-180,-160,-140,-120,-100,-80,-60,-40,-20,0,20,40,60,80,99,120,140,160,180,200])*1e-3
d1_all=2*np.array([416,384,366,311,279,245,216,176,151,120,101,93,102,120,147,177,217,256,291,316,348])*1e-6
lbs.M2_radius_plot(z1_all, d1_all, lambda1, strict=True)
```
## Poor spacing of beam size measurements
A nice fit to the data is achieved; however, the fitted value is M²<1, which is impossible. The problem boils down to the fact that measurements near the beam waist are poor at determining the actual divergence of the beam. The fit then severely underestimates the divergence and claims that the beam diverges more slowly than an ideal Gaussian beam!
```
# example of poorly spaced measurements
f=500e-3 # m
lambda2 = 632.8e-9 # m
z2_all = np.array([168, 210, 280, 348, 414, 480, 495, 510, 520, 580, 666, 770]) * 1e-3 # [m]
d2_all = 2*np.array([597, 572, 547, 554, 479, 404, 415, 399, 377, 391, 326, 397]) * 1e-6 # [m]
lbs.M2_radius_plot(z2_all, d2_all, lambda2, strict=True)
plt.show()
```
<a href="https://colab.research.google.com/github/darshvaghasia12/Awesome-Web-Art/blob/master/Music_Genre_Classification.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install python_speech_features
from python_speech_features import mfcc
import scipy.io.wavfile as wav
import numpy as np
from tempfile import TemporaryFile
import os
import pickle
import random
import operator
import math
#To get the distance between feature vectors and get neighbours
def getNeighbors(trainingSet, instance, k):
distances = []
for x in range (len(trainingSet)):
dist = distance(trainingSet[x], instance, k )+ distance(instance, trainingSet[x], k)
distances.append((trainingSet[x][2], dist))
distances.sort(key=operator.itemgetter(1))
neighbors = []
for x in range(k):
neighbors.append(distances[x][0])
return neighbors
def nearestClass(neighbors):
classVote = {}
for x in range(len(neighbors)):
response = neighbors[x]
if response in classVote:
classVote[response]+=1
else:
classVote[response]=1
sorter = sorted(classVote.items(), key = operator.itemgetter(1), reverse=True)
return sorter[0][0]
#Model Evaluation
def getAccuracy(testSet, predictions):
correct = 0
for x in range (len(testSet)):
if testSet[x][-1]==predictions[x]:
correct+=1
return 1.0*correct/len(testSet)
#Extraction of Features
directory="/content/drive/MyDrive/genres"
f=open("my.dat",'wb')
i=0
for folder in os.listdir(directory):
i+=1
if i==11:
break
for file in os.listdir(directory+"/"+folder):
(rate,sig)=wav.read(directory+"/"+folder+"/"+file)
mfcc_feat=mfcc(sig,rate,winlen=0.020,appendEnergy=False)
covariance=np.cov(np.matrix.transpose(mfcc_feat))
mean_matrix=mfcc_feat.mean(0)
feature=(mean_matrix,covariance,i)
pickle.dump(feature,f)
f.close()
dataset=[]
def loadDataset(filename,split,trSet,teSet):
    with open(filename, 'rb') as f:
while True:
try:
dataset.append(pickle.load(f))
            except EOFError:
f.close()
break
for i in range(len(dataset)):
if random.random()<split:
trSet.append(dataset[i])
else:
teSet.append(dataset[i])
trainingSet=[]
testSet=[]
loadDataset("my.dat",0.66,trainingSet,testSet)
def distance(instance1 , instance2 , k ):
distance =0
mm1 = instance1[0]
cm1 = instance1[1]
mm2 = instance2[0]
cm2 = instance2[1]
distance = np.trace(np.dot(np.linalg.inv(cm2), cm1))
distance+=(np.dot(np.dot((mm2-mm1).transpose() , np.linalg.inv(cm2)) , mm2-mm1 ))
distance+= np.log(np.linalg.det(cm2)) - np.log(np.linalg.det(cm1))
distance-= k
return distance
#prediction on Accuracy
leng = len(testSet)
predictions = []
for x in range (leng):
predictions.append(nearestClass(getNeighbors(trainingSet ,testSet[x] , 5)))
accuracy1 = getAccuracy(testSet , predictions)
print(accuracy1)
from python_speech_features import mfcc
import scipy.io.wavfile as wav
import numpy as np
from tempfile import TemporaryFile
import os
import pickle
import random
import operator
import math
import numpy as np
from collections import defaultdict
dataset = []
def loadDataset(filename):
    with open(filename, 'rb') as f:
while True:
try:
dataset.append(pickle.load(f))
except EOFError:
f.close()
break
loadDataset("my.dat")
def distance(instance1 , instance2 , k ):
distance =0
mm1 = instance1[0]
cm1 = instance1[1]
mm2 = instance2[0]
cm2 = instance2[1]
distance = np.trace(np.dot(np.linalg.inv(cm2), cm1))
distance+=(np.dot(np.dot((mm2-mm1).transpose() , np.linalg.inv(cm2)) , mm2-mm1 ))
distance+= np.log(np.linalg.det(cm2)) - np.log(np.linalg.det(cm1))
distance-= k
return distance
def getNeighbors(trainingSet , instance , k):
distances =[]
for x in range (len(trainingSet)):
dist = distance(trainingSet[x], instance, k )+ distance(instance, trainingSet[x], k)
distances.append((trainingSet[x][2], dist))
distances.sort(key=operator.itemgetter(1))
neighbors = []
for x in range(k):
neighbors.append(distances[x][0])
return neighbors
def nearestClass(neighbors):
classVote ={}
for x in range(len(neighbors)):
response = neighbors[x]
if response in classVote:
classVote[response]+=1
else:
classVote[response]=1
sorter = sorted(classVote.items(), key = operator.itemgetter(1), reverse=True)
return sorter[0][0]
results=defaultdict(int)
i=1
for folder in os.listdir("/content/drive/MyDrive/genres"):
results[i]=folder
i+=1
(rate,sig)=wav.read("/content/drive/MyDrive/genres/Baarishein (DARSH MUSIC).wav")
mfcc_feat=mfcc(sig,rate,winlen=0.020,appendEnergy=False)
covariance = np.cov(np.matrix.transpose(mfcc_feat))
mean_matrix = mfcc_feat.mean(0)
feature=(mean_matrix,covariance,0)
pred=nearestClass(getNeighbors(dataset ,feature , 5))
print(results[pred])
```
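For context, the `distance` used by `getNeighbors` above is essentially a symmetrized Kullback-Leibler divergence (up to additive constants) between the two Gaussians — each described by a mean vector and covariance matrix — fitted to a track's MFCC frames. A small self-contained sketch of that divergence on toy statistics (the variable names mirror the code above but the numbers are illustrative):

```python
import numpy as np

def gaussian_kl(mm1, cm1, mm2, cm2):
    # KL( N(mm1, cm1) || N(mm2, cm2) ) for d-dimensional Gaussians
    d = len(mm1)
    diff = mm2 - mm1
    inv2 = np.linalg.inv(cm2)
    return 0.5 * (np.trace(inv2 @ cm1)
                  + diff @ inv2 @ diff
                  - d
                  + np.log(np.linalg.det(cm2) / np.linalg.det(cm1)))

mm = np.zeros(3)
cm = np.eye(3)
# divergence of a distribution with itself is zero
print(gaussian_kl(mm, cm, mm, cm))        # 0.0
# shifting the mean increases the divergence
print(gaussian_kl(mm, cm, mm + 1.0, cm))  # 1.5
```

Summing the divergence in both directions, as `getNeighbors` does, makes the resulting distance symmetric, which is a common trick since KL itself is not.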
# Convolutional Layer
In this notebook, we visualize four filtered outputs (a.k.a. activation maps) of a convolutional layer.
In this example, *we* are defining four filters that are applied to an input image by initializing the **weights** of a convolutional layer, but a trained CNN will learn the values of these weights.
<img src='notebook_ims/conv_layer.gif' height=60% width=60% />
### Import the image
```
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
```
### Define and visualize the filters
```
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
# visualize all four filters
fig = plt.figure(figsize=(10, 5))
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
width, height = filters[i].shape
for x in range(width):
for y in range(height):
ax.annotate(str(filters[i][x][y]), xy=(y,x),
horizontalalignment='center',
verticalalignment='center',
color='white' if filters[i][x][y]<0 else 'black')
```
## Define a convolutional layer
The various layers that make up any neural network are documented, [here](http://pytorch.org/docs/stable/nn.html). For a convolutional neural network, we'll start by defining a:
* Convolutional layer
Initialize a single convolutional layer so that it contains all your created filters. Note that you are not training this network; you are initializing the weights in a convolutional layer so that you can visualize what happens after a forward pass through this network!
#### `__init__` and `forward`
To define a neural network in PyTorch, you define the layers of a model in the function `__init__` and define the forward behavior of the network, which applies those initialized layers to an input (`x`), in the function `forward`. In PyTorch we convert all inputs into the Tensor datatype, which is similar to a NumPy array.
Below, I define the structure of a class called `Net` that has a convolutional layer that can contain four 4x4 grayscale filters.
```
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a single convolutional layer with four filters
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# returns both layers
return conv_x, activated_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
```
### Visualize the output of each filter
First, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
```
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1, xticks=[], yticks=[])
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
```
Let's look at the output of a convolutional layer, before and after a ReLU activation function is applied.
```
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get the convolutional layer (pre and post activation)
conv_layer, activated_layer = model(gray_img_tensor)
# visualize the output of a conv layer
viz_layer(conv_layer)
```
#### ReLU activation
In this model, we've used an activation function that scales the output of the convolutional layer. We've chosen a ReLU function to do this; it simply turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
<img src='notebook_ims/relu_ex.png' height=50% width=50% />
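In NumPy terms the same operation is just an element-wise maximum with zero, equivalent to what `F.relu` does to a tensor:

```python
import numpy as np

# ReLU clips every negative value to zero: f(x) = max(0, x)
x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
relu_x = np.maximum(x, 0)
print(relu_x)  # negatives become 0; positives pass through unchanged
```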
```
# after a ReLu is applied
# visualize the output of an activated conv layer
viz_layer(activated_layer)
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Train Your Own Model and Convert It to TFLite
# 0. Setup
Uninstall the version of TensorFlow bundled with Colab, install the nightly version, then restart the runtime.
```
# !pip3 uninstall tensorflow
# !pip3 install tf-nightly
```
In this episode we will use TensorFlow 2 via the magic command `%tensorflow_version 2.x` (for Google Colab).
```
try:
%tensorflow_version 2.x
except:
pass
```
# 1. Import
Import the relevant libraries and print the version numbers.
```
import pathlib
import numpy as np
import matplotlib.pylab as plt
import tensorflow as tf
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
print("\u2022 Using TensorFlow Version:", tf.__version__)
print('\u2022 GPU Device Found.' if tf.test.is_gpu_available() else '\u2022 GPU Device Not Found. Running on CPU')
```
# 2. Dataset
## 2.1 Split Data to Training / Validation / Test Set
We will use TensorFlow Datasets (`tfds`) to load the [Fashion MNIST Dataset](https://www.bualabs.com/archives/3398/what-is-fashion-mnist-dataset/) and then [split it into Training / Validation / Test Sets](https://www.bualabs.com/archives/532/what-is-training-set-why-train-test-split-training-set-validation-set-test-set/) with an 80/10/10 ratio.
```
splits = tfds.Split.ALL.subsplit(weighted=(80, 10, 10))
splits, info = tfds.load('fashion_mnist', with_info=True, as_supervised=True, split = splits)
(train_examples, validation_examples, test_examples) = splits
num_examples = info.splits['train'].num_examples
num_classes = info.features['label'].num_classes
num_examples, num_classes
```
This gives a [Dataset](https://www.bualabs.com/archives/1994/dataset-dataloader-feed-data-x-y-batch-to-neural-network-refactor-training-loop-neural-network-ep-5/) with 60,000 training examples in 10 classes. Since the class names are not included in the dataset, we declare them in a list as shown below.
```
class_names = ['T-shirt_top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
```
Create a file named labels.txt to store the labels.
```
# Create a labels.txt file with the class names
with open('labels.txt', 'w') as f:
f.write('\n'.join(class_names))
```
The images in Fashion MNIST are the same size as in MNIST: 28 x 28 pixels.
```
# The images in the dataset are 28 by 28 pixels.
IMG_SIZE = 28
```
# 2.2 Preprocessing data
Declare a function to convert the images in the dataset into the format the model expects: resize to the specified size, and rescale the pixel values from 0-255 to floats in 0-1 by dividing by 255.
```
def format_example(image, label):
# Cast image to float32
image = tf.dtypes.cast(image, tf.float32)
# Normalize the image in the range [0, 1]
image = image / 255.0
return image, label
```
Set the [Batch Size](https://www.bualabs.com/archives/729/what-is-batch-size-in-deep-neural-networks-how-to-adjust-machine-learning-model-accuracy-deep-learning-hyperparameter-tuning-ep-2/) for the [DataLoader](https://www.bualabs.com/archives/1994/dataset-dataloader-feed-data-x-y-batch-to-neural-network-refactor-training-loop-neural-network-ep-5/).
```
# Specify the batch size
BATCH_SIZE = 256
```
Shuffle the data and split it into batches of the batch size specified above.
```
# Create Datasets
train_batches = train_examples.cache().shuffle(num_examples//4).batch(BATCH_SIZE).map(format_example).prefetch(1)
validation_batches = validation_examples.cache().batch(BATCH_SIZE).map(format_example)
test_batches = test_examples.batch(1).map(format_example)
```
# 3. Building the Model
Build a Convolutional Neural Network model for the 10 classes.
```
model = tf.keras.Sequential([
# Set the input shape to (28, 28, 1), kernel size=3, filters=16 and use ReLU activation,
tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
# Set the number of filters to 32, kernel size to 3 and use ReLU activation
tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
# Flatten the output layer to 1 dimension
tf.keras.layers.Flatten(),
# Add a fully connected layer with 64 hidden units and ReLU activation
tf.keras.layers.Dense(64, activation='relu'),
# Attach a final softmax classification head
tf.keras.layers.Dense(10, activation='softmax')])
```
Use the Adam [Optimizer](https://www.bualabs.com/archives/2042/refactor-parameter-optimizer-neural-network-train-deep-learning-machine-learning-neural-network-ep-6/), categorical [Cross Entropy Loss](https://www.bualabs.com/archives/1945/what-is-cross-entropy-loss-logistic-regression-log-loss-loss-function-ep-3/) for [Multi-class Classification](https://www.bualabs.com/archives/3396/tensorflow-js-fashion-mnist-dataset-convolutional-neural-network-convnet-cnn-visualization-tfvis-tfjs-ep-5/), and Accuracy as the evaluation [Metric](https://www.bualabs.com/archives/1968/what-is-metrics-confusion-matrix-accuracy-precision-recall-f1-score-difference-metrics-ep-1/).
```
# Set the appropriate loss function and use accuracy as your metric
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
```
Inspect the [Model Architecture](https://www.bualabs.com/archives/2703/how-to-read-model-convolutional-neural-network-shape-activation-map-model-architecture-convnet-ep-7/) we built above.
```
model.summary()
```
# 4. Training the Model
```
model.fit(train_batches,
epochs=10,
validation_data=validation_batches)
```
# 5. Export the Model
Export the trained model as a file in the SavedModel format.
```
export_dir = 'saved_model/1'
tf.saved_model.save(model, export_dir)
```
# 6. Convert the SavedModel File with the TFLite Converter
Use the [TFLite Converter](https://www.bualabs.com/archives/3595/what-is-tensorflow-lite-converter-convert-mobilenet-transfer-learning-classifier-head-deploy-mobile-iot-edge-device-microcontroller-tflite-ep-3/) to load the SavedModel file we exported above.
```
#@title Choose the optimization mode
mode = "Speed" #@param ["Default", "Storage", "Speed"]
if mode == 'Storage':
optimization = tf.lite.Optimize.OPTIMIZE_FOR_SIZE
elif mode == 'Speed':
optimization = tf.lite.Optimize.OPTIMIZE_FOR_LATENCY
else:
optimization = tf.lite.Optimize.DEFAULT
```
Convert the model to a tflite file and save it to disk.
```
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
converter.experimental_new_converter = True
# Set the optimizations (a list of tf.lite.Optimize values, not the mode string)
converter.optimizations = [optimization]
# Invoke the converter to finally generate the TFLite model
tflite_model = converter.convert()
```
This produces a model file of 1015768 bytes.
```
tflite_model_file = pathlib.Path('./model.tflite')
tflite_model_file.write_bytes(tflite_model)
```
# 7. Test the tflite Model with the TFLite Interpreter
Use the TFLite Interpreter to load the tflite file.
```
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
```
Randomly pick 50 images from the test set and have the model run inference on them.
```
# Gather results for the randomly sampled test images
predictions = []
test_labels = []
test_images = []
for img, label in test_batches.take(50):
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions.append(interpreter.get_tensor(output_index))
test_labels.append(label[0])
test_images.append(np.array(img))
```
Helper functions for plotting the results.
```
#@title Utility functions for plotting
# Utilities for plotting
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array[i], true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
img = np.squeeze(img)
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label.numpy():
color = 'green'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]), color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array[i], true_label[i]
plt.grid(False)
plt.xticks(list(range(10)), class_names, rotation='vertical')
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array[0], color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array[0])
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('green')
```
Plot the results to compare labels against predictions; the slider lets us browse through all 50 sampled examples.
```
#@title Visualize the outputs { run: "auto" }
index = 44 #@param {type:"slider", min:1, max:50, step:1}
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(index, predictions, test_labels, test_images)
plt.show()
plot_value_array(index, predictions, test_labels)
plt.show()
```
# 8. Save and Download the tflite File
Save and download the model and label files to local disk, ready to be deployed to the target device.
Note: you may need to allow the web browser to download multiple files at once.
```
try:
from google.colab import files
files.download(tflite_model_file)
files.download('labels.txt')
except:
pass
```
# 9. Credit
* https://www.coursera.org/learn/device-based-models-tensorflow/
* https://github.com/lmoroney/dlaicourse/tree/master/TensorFlow%20Deployment
* https://www.tensorflow.org/lite/convert
```
a = 'this is a book'
a.split()
a = '那酸民婉君也可以報名嗎?'
! pip install jieba
import jieba
list(jieba.cut(a))
article = '''
貿易戰升溫對中國經濟的衝擊可能大於美國,原因是出口對中國經濟的影響遠大於美國。經濟學家預測,如果美國祭出的關稅明年元旦升至25%,2019年中國經濟成長率將下滑0.5%至0.6個百分點。
'''
import re
re.split(',|。', article)
seg_list = jieba.cut("大巨蛋案對市府同仁下封口令? 柯P否認", cut_all=True)
#for w in seg_list:
# print(w)
'/'.join(seg_list)
seg_list = jieba.cut("大巨蛋案對市府同仁下封口令? 柯P否認")
'/'.join(seg_list)
import jieba
jieba.load_userdict('userdict.txt')
seg_list = jieba.cut("大巨蛋案對市府同仁下封口令? 柯P否認")
'/'.join(seg_list)
```
## Jieba Zh_TW
- https://github.com/ldkrsi/jieba-zh_TW
```
import jieba.posseg as pseg
words = pseg.cut("大巨蛋案對市府同仁下封口令? 柯P否認")
for w in words:
print(w.word, w.flag)
```
## Download Jieba
- https://pypi.org/project/jieba/#files
- https://files.pythonhosted.org/packages/71/46/c6f9179f73b818d5827202ad1c4a94e371a29473b7f043b736b4dab6b8cd/jieba-0.39.zip
- pip install ~/Downloads/jieba-0.39.zip
```
! pip install /Users/davidchiu/Downloads/jieba-0.39.zip
```
## Getting Synonyms
```
import requests
from bs4 import BeautifulSoup
res = requests.get('https://zh.wikipedia.org/wiki/{}'.format('賴清德'))
soup = BeautifulSoup(res.text, 'lxml')
soup.select_one('#bodyContent p').select('b')
import requests
res = requests.get('http://ec.ltn.com.tw/article/breakingnews/2556711')
from bs4 import BeautifulSoup
soup = BeautifulSoup(res.text, 'lxml')
with open('userdict.txt', 'a') as f:
for w in soup.select('.keyword .boxText'):
f.write(w.text.strip()+'\n')
```
## Quantitative Analysis
### Term Frequency
```
article = '''
周遊與老公李朝永將在10月8日補辦婚禮,前天一同出席關懷演藝人員中秋節餐會,同場的「勇伯」陳松勇卻大酸這對愛侶,周遊夫婦氣得想提告。
《聯合報》報導,陳松勇直批周遊「她厚臉皮的故事,可以寫成一本書,不是『厚黑學』,叫『厚臉皮』!」還語出驚人表示,「她現在要結婚的這個人是考古學家,每天抱著殭屍在考古。」
《壹電親》報導,周遊夫婦氣得大罵陳松勇,周遊說,「怎麼會這麼沒有水平」,「白冰冰也很生氣,替我生氣」、「我的律師說可以告他」。周遊老公李朝永不滿,說我是「考古學家」、「我覺得他這是人身攻擊」、「怎麼可以這樣子講話」、「我覺得一個人要留有口德」。(即時新聞中心/綜合報導)
'''
import jieba
jieba.load_userdict('userdict.txt')
words = list(jieba.cut(article))
dic = {}
for w in words:
if w not in dic:
dic[w] = 1
else:
dic[w] = dic[w] + 1
import operator
dic.items()
swd=sorted(dic.items(), key = operator.itemgetter(1), reverse=True)
for k, v in swd[0:20]:
if len(k) >=2:
print(k,v)
import jieba
from collections import Counter
jieba.load_userdict('userdict.txt')
words = list(jieba.cut(article))
c = Counter(words)
for k,v in c.most_common(100):
if len(k) >=2:
print(k,v)
article = '''中秋佳節烤肉幾乎已成「全民運動」,但醫界多項研究證實,食物在高溫長時間燒烤下會產生致癌物質有害健康,醫生表示,中秋過後門診常出現,大吃大喝導致血糖升高、膽固醇失控就診病人,尤其患有糖尿病、高膽固醇、腎臟病、痛風、甲狀腺亢進族群居多。
烏日林新醫院營養科姜秋月主任建議,中秋佳節烤肉、食用月餅需淺嚐即可,更別忘了要搭配高纖蔬果,吃完大餐後可與家人一起運動散步,不僅能顧到健康更能增進家人間的感情。如果在過完中秋節後,以上族群若發現身體出現不適,建議可盡快至新陳代謝科門診,以免延誤病情。(王煌忠/台中報導)
烤肉應注意五大族群
●《糖尿病族群》
中秋月餅與烤肉時搭配的吐司、白飯,充滿高糖精緻澱粉,如果沒有注意攝取份量,恐怕會造成血糖飆高!另外,糖尿病患者容易有眼睛和腎臟方面的問題,有時在戶外烤肉天色昏暗,容易吃到沒有烤熟的肉,尤其有糖尿病足的患者更要小心,烤肉時要注意不要被燙到,恐會造成傷口潰瘍惡化。
●《高膽固醇族群》
由於烤肉時會搭配烤蝦子、花枝或其他海鮮,或是油脂含量高的三層肉,甚至是食品加工物如:培根、貢丸等等,無形中都會讓膽固醇爆表!建議在食材上可均衡選擇新鮮低脂食材,如:雞胸肉、去皮的雞腿或豬里肌肉片等瘦肉。
●《腎臟病患者》
烤肉時為了讓鮮肉食材入味,會事先醃漬肉品及沾烤肉醬食用,容易產生過多鹽分,腎臟病患者在食用上要格外注意;烏日林新醫院營養科姜秋月主任表示,烤肉醬料可利用天然食材來自製DIY,選擇薑、蔥、蒜搭配奇異果等食材,就可以降低鹹度,增添自然風味。另外,蔬菜須燙過再食用,未洗腎病患須限制肉類攝取量。
●《痛風族群》
現在許多燒烤店都會提供精心熬製的雞湯或大骨湯,如果熬煮的時間越久、湯頭濃郁,代表普林的含量也很高,痛風患者最好避免!烤肉方面如果有帶殼海鮮、肉類也要注意攝取量。另外攝取糖分也會引起痛風,建議烤肉時避免喝含糖飲料、汽水或可樂,如果仍想喝飲料可以喝無糖茶飲或檸檬水。
●《甲狀腺亢進的年輕男性族群》
烏日林新醫院新陳代謝科林瑋涵醫師表示,曾遇過甲狀腺亢進的年輕男性族群,因為平常沒有好好吃藥控制,碰到前一天晚上無節制地吃大餐,又喝太多高糖分的飲料,因受到電解質影響,早上起來竟發現自己像被鬼壓床,手腳癱軟使不上力,原來是甲狀腺毒性周期麻痺引起,好發於20~40歲的男性,必須坐救護車來醫院,為了吃得不償失。(資料來源:烏日林新醫院新陳代謝科醫師林瑋涵)
更多「 養生寶典」內容,請點此:http://bit.ly/2vrMGNZ
烏日林新醫院新陳代謝科醫師林瑋涵,針對五大族群患者,在中秋節烤肉應注意事項,希望民眾開心健康過中秋。翻攝畫面'''
import jieba
from collections import Counter
jieba.load_userdict('userdict.txt')
words = list(jieba.cut(article))
c = Counter(words)
for k, v in c.most_common(100):
    if len(k) >= 2:
        print(k, v)
```
### tfidf
```
a,abb,abc = ['a'], ['a','b', 'b'], ['a', 'b', 'c']
D = [a,abb,abc]
import math
#tfidf('a', a, D)
tf = 1/1
idf = math.log(3/3)
tf * idf
import math
#tfidf('a', abb, D)
tf = 1/3
idf = math.log(3/3)
tf * idf
#tfidf('a', abc, D)
tf = 1/3
idf = math.log(3/3)
tf * idf
#tfidf('b', abb, D)
tf = 2/3
idf = math.log(3/2)
tf * idf
#tfidf('b', abc, D)
tf = 1/3
idf = math.log(3/2)
tf * idf
#tfidf('c', abc, D)
tf = 1/3
idf = math.log(3/1)
tf * idf
D
ary = []
for doc in D:
    if 'c' in doc:
        ary.append(doc)
len(ary)
[doc for doc in D if 'c' in doc]
def tfidf(t, d, D):
    tf = d.count(t) / len(d)
    idf = math.log(len(D) / len([doc for doc in D if t in doc]))
    return tf * idf
tfidf('c', abc, D)
```
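The hand calculations above can be cross-checked against a self-contained version of the `tfidf` function:

```python
import math

# TF-IDF: term frequency in the document times log inverse document frequency
def tfidf(t, d, D):
    tf = d.count(t) / len(d)                                      # term frequency in d
    idf = math.log(len(D) / len([doc for doc in D if t in doc]))  # inverse document frequency
    return tf * idf

a, abb, abc = ['a'], ['a', 'b', 'b'], ['a', 'b', 'c']
D = [a, abb, abc]

print(tfidf('a', a, D))    # 0.0 -- 'a' appears in every document, so idf = log(1) = 0
print(tfidf('c', abc, D))  # (1/3) * log(3) ≈ 0.366
```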
## Set Root Directory and Out Directory
```
import os
import time
ROOT_DIR = os.path.abspath('')
OUT_DIR = os.path.join(ROOT_DIR, 'out')
if not os.path.exists(OUT_DIR):
    os.makedirs(OUT_DIR)
```
## Load WebDriver for Chrome
https://sites.google.com/a/chromium.org/chromedriver/downloads
```
DRIVER = os.path.join(ROOT_DIR, 'chromedriver')
from selenium import webdriver
driver = webdriver.Chrome(DRIVER)
driver.implicitly_wait(5)
window = {}
window['main'] = driver.window_handles[-1]
```
## Open Page
```
ECAMPUS = 'https://ecampus.ut.ac.kr'
driver.get(ECAMPUS)
```
## Load Authentication Info
```
import json
SECRET_JSON = os.path.join(ROOT_DIR, 'secrets.json')
with open(SECRET_JSON) as f:
    secrets = json.load(f)
login_id = secrets["ID"]
login_pw = secrets["PW"]
```
## Login
```
def logout():
    btn_logout = driver.find_element_by_id('btn_logout')
    btn_logout.click()

def login(_id, _pw):
    try:
        logout()
    except:
        pass
    # Enter Info
    input_id = driver.find_element_by_id('id')
    input_pw = driver.find_element_by_id('pass')
    input_id.send_keys(_id)
    input_pw.send_keys(_pw)
    # Login
    driver.execute_script('login_proc()')
    time.sleep(5)
login(login_id, login_pw)
login_id, login_pw = (None, None)
time.sleep(10)
panel = driver.find_element_by_id('selfInfoAfter')
lecture_list = panel.find_element_by_class_name('lecInfo')
print(lecture_list)
lectures = lecture_list.find_elements_by_xpath("//a[contains(., '2020')]")
print(lectures)
```
## Enter The Lecture
```
sample_lecture = lectures[0]
sample_lecture.click()
```
You can enter the 'lecture room' with the URL below.
It will lead you to the lecture room of the last lecture you entered.
```
lecture_room_url = "https://ecampus.ut.ac.kr/lms/class/courseSchedule/doListView.dunet"
driver.get(lecture_room_url)
time.sleep(2)
```
## Get courses list
```
# not_progressed_list = driver.find_elements_by_xpath("//td[contains(., 'not progressed')]")
current_courses_link = driver.find_elements_by_xpath("//a[contains(., 'Lecture view')]")
current_courses = [course_link.find_element_by_xpath("../..") for course_link in current_courses_link]
current_courses_data = []
titles = []
links = []
mins = []
for course in current_courses:
    datas = course.find_elements_by_tag_name('td')
    title = datas[1].text
    lecture_time = datas[2].text
    period = datas[3].text
    status = datas[4].text
    link = datas[5].find_element_by_class_name('lectureWindow')
    # link = datas[5]
    print(title, lecture_time, period, status, link)
    print()
    if status != "Complete":
        titles.append(title)
        links.append(link)
        mins.append(int(lecture_time[:-6]))
print(titles)
```
## Check Study Time and Open Lecture Window
```
import tqdm
print("{} courses.".format(len(links)))
seconds = [minute * 60 for minute in mins]
for sec, title, link in tqdm.tqdm(zip(seconds, titles, links)):
    print("{} for {} minutes...".format(title, sec//60))
    link.click()
    window_lecture = driver.window_handles[-1]
    time.sleep(sec + 100)
    driver.switch_to.window(window_lecture)
    driver.close()
    driver.switch_to.window(window['main'])  # return focus to the main window before the next course
    print("Course End")
print("Finished.")
# sample_link = links[-1]
# sample_course = sample_link.find_element_by_xpath("../..")
# tds = sample_course.find_elements_by_tag_name('td')
# # sample_lecture_time = abcs[-1].find_element_by_name('td')[2][:-7]
# print(tds[1].text)
#TODO: "not progressed and lecture VIEW'
# sample_link.click()
# window['lecture'] = driver.window_handles[-1]
# time.sleep(5)
window['lecture']
```
# Scene Classification
## 3. Build Model-InceptionV3 BatchTrain Top2Layer
- Import pkg
- Load sample data, only first 1000 objects
Reference:
- https://challenger.ai/competitions
- https://github.com/jupyter/notebook/issues/2287
**Tensorboard**
1. Run at the command line: **tensorboard --logdir=./log**
2. Open in the browser: **http://127.0.0.1:6006**
### Import pkg
```
import numpy as np
import pandas as pd
# import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import seaborn as sns
%matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from keras.utils.np_utils import to_categorical # convert to one-hot-encoding
from keras.models import Sequential, load_model
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D, BatchNormalization
from keras.optimizers import Adam, SGD
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import LearningRateScheduler, TensorBoard
# import zipfile
import os
import zipfile
import math
import time
from IPython.display import display
import pdb
import json
from PIL import Image
import glob
import pickle
```
### Load sample data, only first 1000 objects
```
input_path = './input'
datasetName = 'train'
date = '20170904'
zip_path = input_path + '/ai_challenger_scene_{0}_{1}.zip'.format(datasetName, date)
extract_path = input_path + '/ai_challenger_scene_{0}_{1}'.format(datasetName, date)
image_path = extract_path + '/scene_{0}_images_{1}'.format(datasetName, date)
scene_classes_path = extract_path + '/scene_classes.csv'
scene_annotations_path = extract_path + '/scene_{0}_annotations_{1}.json'.format(datasetName, date)
print(input_path)
print(zip_path)
print(extract_path)
print(image_path)
print(scene_classes_path)
print(scene_annotations_path)
scene_classes = pd.read_csv(scene_classes_path, header=None)
display(scene_classes.head())
def get_scene_name(label_number, scene_classes_path):
    scene_classes = pd.read_csv(scene_classes_path, header=None)
    return scene_classes.loc[label_number, 2]
print(get_scene_name(0, scene_classes_path))
def load_pickle_data(dataset, index):
    pickleFolder = './input/pickle_{0}'.format(dataset)
    print(pickleFolder)
    x_path = pickleFolder + '/x_data{0}.p'.format(index)
    y_path = pickleFolder + '/y_data{0}.p'.format(index)
    print(x_path)
    print(y_path)
    if not os.path.exists(x_path):
        print(x_path + ' does not exist!')
        return
    if not os.path.exists(y_path):
        print(y_path + ' does not exist!')
        return
    x_data = pickle.load(open(x_path, mode='rb'))
    y_data = pickle.load(open(y_path, mode='rb'))
    # y_data = to_categorical(y_train)
    print(x_data.shape)
    print(y_data.shape)
    return (x_data, y_data)
x_train, y_train = load_pickle_data("train", 0)
print(x_train.shape)
print(y_train.shape)
del x_train
del y_train
%%time
x_train, y_train = load_pickle_data("train", 0)
fig, ax = plt.subplots(1, 2, figsize=(12, 6))
ax[0].imshow(x_train[0])
ax[0].set_title(get_scene_name(y_train[0], scene_classes_path))
ax[1].imshow(x_train[1])
ax[1].set_title(get_scene_name(y_train[1], scene_classes_path))
del x_train
del y_train
%%time
x_val, y_val = load_pickle_data("validation", 0)
fig, ax = plt.subplots(1, 2, figsize=(12, 6))
ax[0].imshow(x_val[0])
ax[0].set_title(get_scene_name(y_val[0], scene_classes_path))
ax[1].imshow(x_val[1])
ax[1].set_title(get_scene_name(y_val[1], scene_classes_path))
del x_val
del y_val
```
### Load model
```
from keras.preprocessing import image
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D
from keras import backend as K
from keras.applications.inception_v3 import InceptionV3
base_model = InceptionV3(weights='imagenet', include_top=False)
# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
x = Dense(1024, activation='relu')(x)
# and a logistic layer -- let's say we have 200 classes
predictions = Dense(80, activation='softmax')(x)
# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)
# first: train only the top layers (which were randomly initialized)
# i.e. freeze all convolutional InceptionV3 layers
for layer in base_model.layers:
    layer.trainable = False
model.compile(loss='categorical_crossentropy', optimizer = Adam(lr=1e-4), metrics=["accuracy"])
for i, layer in enumerate(model.layers):
    print(i, layer.name)
def saveModel(model, middleName):
    modelPath = './model'
    if not os.path.isdir(modelPath):
        os.mkdir(modelPath)
    fileName = middleName + time.strftime("%Y-%m-%d_%H-%M-%S", time.localtime())
    jsonFileName = modelPath + '/' + fileName + '.json'
    yamlFileName = modelPath + '/' + fileName + '.yaml'
    json_string = model.to_json()
    with open(jsonFileName, 'w') as file:
        file.write(json_string)
    yaml_string = model.to_yaml()
    with open(yamlFileName, 'w') as file:
        file.write(yaml_string)
    weightsFile = modelPath + '/' + fileName + '.h5'
    model.save(weightsFile)
# saveModel(model, 'ModelSaveTest')
```
**Train top 2 inception**
```
train_datagen = ImageDataGenerator(rescale=1./255,
rotation_range = 20,
zoom_range = 0.1,
width_shift_range = 0.1,
height_shift_range = 0.1,
horizontal_flip = True,
vertical_flip = True)
train_generator = train_datagen.flow_from_directory('./input/data_train',
target_size=(224, 224),
batch_size=64,
class_mode = "categorical")
print(train_generator.classes[0:1000])
# annealer = LearningRateScheduler(lambda x: 1e-3 * 0.9 ** x)
tensorBoard = TensorBoard(log_dir='./log_Top2Inc_171001')
x_val, y_val = load_pickle_data("validation", 0)
y_val = to_categorical(y_val)
# model.compile(loss='categorical_crossentropy', optimizer = Adam(lr=1e-4), metrics=["accuracy"])
hist = model.fit_generator(train_generator,
steps_per_epoch=128,
epochs=32, #Increase this when not on Kaggle kernel
verbose=2, #1 for ETA, 0 for silent
validation_data=(x_val, y_val),
callbacks=[tensorBoard])
saveModel(model, 'TrainImageFolder')
final_loss, final_acc = model.evaluate(x_val, y_val, verbose=0)
print("Final loss: {0:.4f}, final accuracy: {1:.4f}".format(final_loss, final_acc))
plt.plot(hist.history['loss'], color='b')
plt.plot(hist.history['val_loss'], color='r')
plt.show()
plt.plot(hist.history['acc'], color='b')
plt.plot(hist.history['val_acc'], color='r')
plt.show()
print('Done!')
```
In August, Open Science collaboration published a [summary](http://www.sciencemag.org/content/349/6251/aac4716) of its replication project. From the abstract:
> Aarts et al. describe the replication of 100 experiments reported in papers published in 2008 in three high-ranking psychology journals. Assessing whether the replication and the original experiment yielded the same result according to several criteria, they find that about one-third to one-half of the original findings were also observed in the replication study.
They got one-third success when they used significance (reject $H_0$) as a criterion for replication success and around one-half when meta-analytic methods were used to assess the replication success. All replication studies had good power and one would expect around 95% replication success. So hypothesis testing does not work in psychology. What do we do now?
One popular excuse for doing nothing is the [following provided by Lisa Feldman Barrett](http://www.nytimes.com/2015/09/01/opinion/psychology-is-not-in-crisis.html?_r=0):
> Suppose you have two well-designed, carefully run studies, A and B, that investigate the same phenomenon. They perform what appear to be identical experiments, and yet they reach opposite conclusions. Study A produces the predicted phenomenon, whereas Study B does not. We have a failure to replicate.
Does this mean that the phenomenon in question is necessarily illusory? Absolutely not. If the studies were well designed and executed, it is more likely that the phenomenon from Study A is true only under certain conditions. The scientist’s job now is to figure out what those conditions are, in order to form new and better hypotheses to test.
This explanation reminds me of a joke about a hedgehog:
> Hedgehog goes to physician and complains that he and his wife want a child but his wife is not getting pregnant.
>Physician: "Do you and your wife eat carrots?"
>Hedgehog: "No we don't."
>Physician: "But you have to. Without eating carrots you can't conceive a child."
>Hedgehog leaves, but returns a week later.
>Hedgehog: "Doctor, it doesn't work. My wife is still not pregnant."
>Physician: "Do you and your wife drink mint tea?"
>Hedgehog: "No we don't."
>Physician: "But you have to. Without drinking mint tea you can't conceive a child."
>Hedgehog leaves, but once again, returns a week later.
>Hedgehog: "Doctor, it doesn't work. My wife is still not pregnant."
>Physician: "Are you and your wife having sex?"
>Hedgehog: "No we don't."
>Physician: "But you have to. Without having sex with your wife you can't conceive a child."
If we accept Feldman Barrett's view of replication, we are giving researchers a license to make up spurious claims without having to care about providing solid evidence. This is a situation similar to that of the physician in the above joke - the physician is allowed to make up explanations without providing solid evidence. The hedgehog - just like the replicating researchers - is asked to test the claims, and if he fails he is simply asked to try something else until he succeeds. The set of propositions about getting pregnant is not infinite, so the physician will, at some point, hit upon a solution to the hedgehog's problem. To suppress such guessing behavior, the hedgehog should ask the physician what all the necessary conditions are that need to be satisfied in order to conceive a child. This is what the Open Science Collaboration does before each replication. They contact the authors of the original study and ask them what potential moderators would alter the results. The researchers then attempt to control the mentioned moderators in the replication.
The joke illustrates another aspect. The physician's suggestions are surprising and counter-intuitive, because carrots and mint tea are easily obtainable, yet no one would assume that they affect the probability of conception. Similarly, most of the studies in the high-profile psychology journals (and those included in the replication attempt) present surprising and counter-intuitive findings. After an iterative addition of moderators, the findings become less robust, more context-dependent and even trivial. In the end, the physician hits upon a suggestion that solves the hedgehog's problem. Unfortunately, one doesn't necessarily need to ask a physician to get this kind of suggestion.
The psychological phenomena may be context-dependent and fragile, but in that case one should argue against making claims about robust, context-independent effects and against using experiments designed to test such effects. Unfortunately, many of the studies that fail to replicate were conceived to test hypotheses that posit robust and context-independent effects.
## Introduction to regression with tensorflow
```
import tensorflow as tf
print(tf.__version__)
```
### Creating some data to view and fit the model
```
import numpy as np
import matplotlib.pyplot as plt
# Create the features
X = np.array([-7.0, -4.0, -1.0, 2.0, 5.0, 8.0, 11.0, 14.0])
# Create the labels
y = np.array([3.0, 6.0, 9.0, 12.0 ,15.0, 18.0, 21.0, 24.0])
# Visualize the data
plt.scatter(X, y)
# The relation b/w X and y
y == X + 10
```
### Input and output features
```
house_info = tf.constant(['bedroom', 'bathroom', 'garage'])
house_price = tf.constant([900000])
# Turn any numpy arrays into tensors
X = tf.constant(X, dtype=tf.float32)
y = tf.constant(y, dtype=tf.float32)
```
### Steps for modelling in tensorflow
1. **Creating the model** - define the input and output layers, as well as the hidden layers of the deep learning model.
2. **Compiling a model** - define the loss function (in other words, the function which tells our model how wrong it is), the optimizer (tells our model how to improve the patterns it is learning) and the evaluation metrics (what we can use to interpret the performance of our model)
3. **Fitting the model** - letting the model try to find patterns between X and y
```
# Set the seed
tf.random.set_seed(42)
# 1. Create the model using the sequential API
model = tf.keras.Sequential([
tf.keras.layers.Dense(1)
])
# 2. Compile the model
model.compile(loss=tf.keras.losses.mae,
optimizer=tf.keras.optimizers.SGD(),
metrics=["mae"])
# 3. Fit the model
model.fit(X, y, epochs=5)
X, y
# Making a prediction on X and y
model.predict([17.0])
```
### Improving the model
We can improve the model by revisiting the 3 steps we took to build it:
1. Creating the model - we can increase the number of layers, increase the number of neurons in each of those layers, or change the activation functions of each layer.
2. Compiling the model - here we might change the optimizer, or perhaps the learning rate of the optimizer.
3. Fitting the model - here we can fit the model for more epochs (leave it training for longer) or on more data (give the model more examples to learn from).
```
# rebuilding the model
# 1. Creating the model
model = tf.keras.Sequential([
tf.keras.layers.Dense(1),
tf.keras.layers.Dense(1)
])
# 2. Compile the model
model.compile(loss=tf.keras.losses.mae,
optimizer=tf.keras.optimizers.SGD(),
metrics=['mae'])
# 3. Fit the model
model.fit(X, y, epochs=100)
# Predicting
model.predict([17.0])
# Let's see if we can make another change to improve our model
# 1. Create the model with an extra hidden layer of 50 units
model = tf.keras.Sequential([
tf.keras.layers.Dense(50, activation=None),
tf.keras.layers.Dense(1)
])
# 2. Complie the model
model.compile(loss=tf.keras.losses.mae,
optimizer=tf.keras.optimizers.Adam(lr=0.01),
metrics=["mae"])
# 3. Fit the model
model.fit(X, y, epochs=100)
model.predict([17.0])
```
The learning rate is the most important hyperparameter there is to improve the model
### Evaluating the model
```
Build a model -> fit it -> evaluate it -> tweak it -> fit it -> evaluate it and so on
```
> When it comes to evaluation, we should visualize the data
It's a good idea to visualize:
* The data - what data are we working with? What does it look like?
* The model itself - what does our model look like?
* The training of the model - how does the model perform while it learns?
* The predictions of the model - how do the predictions line up against the ground truth?
```
# Making a bigger data set
X = tf.range(-100, 100, 4)
X
# Making the labels
y = X + 10
y
# Visualize the data
plt.scatter(X, y)
```
### The 3 sets...
* Training set - the model learns from this data, which is usually 70-80% of the data
* Validation set - the model gets tuned on this data, which is 10-15% of the data available
* Testing set - the model gets evaluated on this data to see how well it has learned; this is usually 10-15% of the data
In analogy of a real life example -
* The course material (used to prepare for the exam) is the training set
* The practice exam is the validation set
* The final exam is the test set
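The proportions above can be sketched with plain slicing; a minimal example (the 70/15/15 split fractions here are illustrative, not taken from the notebook):

```python
# Illustrative 70/15/15 split of a dataset by index
data = list(range(200))
n = len(data)
train_end = int(0.70 * n)
val_end = int(0.85 * n)

train, val, test = data[:train_end], data[train_end:val_end], data[val_end:]
print(len(train), len(val), len(test))  # 140 30 30
```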
```
# Spliting the data into train and test
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X.numpy(), y.numpy(), test_size=0.2)
# Shape of the data
X_train.shape, X_test.shape, y_train.shape, y_test.shape
# Using regular indexing to split the data
X_train = X[:40]
y_train = y[:40]
X_test = X[40:]
y_test = y[40:]
len(X_train), len(y_train), len(X_test), len(y_test)
```
### Visualizing the data
Now we've got our data, lets viz the data
```
plt.figure(figsize=(7, 7))
plt.scatter(X_train, y_train, c="b", label="training data")
plt.scatter(X_test, y_test, c="r", label="testing data")
plt.legend();
# Let's see how this data works with the model
# 1. Create the model
model = tf.keras.Sequential([
tf.keras.layers.Dense(1)
# tf.keras.layers.Dense(100, activation='relu')
])
# 2. Compile the model
model.compile(loss=tf.keras.losses.mae,
optimizer=tf.keras.optimizers.SGD(),
metrics=["mae"])
# 3. Fit the model
# model.fit(X_train, y_train, epochs=100)
model.summary()
```
Let's build a model whose weights are created automatically by defining the `input_shape` argument
```
# 0. Setting the random seed
tf.random.set_seed(42)
# 1. Create a model
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, input_shape=[1], name="input_layer"),
tf.keras.layers.Dense(1, name="output_layer")
], name="another_model")
# 2. Compile the model
model.compile(loss=tf.keras.losses.mae,
optimizer=tf.keras.optimizers.SGD(),
metrics=['mae'])
model.summary()
```
* Total params - total number of parameters in the model.
* Trainable params - these are the parameters (patterns) the model can update as it trains.
* Non-trainable params - these parameters aren't updated during training; this is typical when you bring in parameters from other trained models (transfer learning)
[Article for Weights and Biases](https://wandb.ai/site/articles/fundamentals-of-neural-networks)
[Learnable Parameters ("Trainable Params") In A Keras Convolutional Neural Network
](https://deeplizard.com/learn/video/8d-9SnGt5E0)
[Deep Learning Course by MIT](http://introtodeeplearning.com/)
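The trainable-parameter counts reported by `model.summary()` can be reproduced by hand; a small pure-Python sketch, assuming the two-layer model defined above (10 units on a single input feature, then 1 output unit):

```python
# Trainable parameter count for a fully-connected (Dense) layer:
# params = units * input_dim + units   (one weight per input per unit, plus one bias per unit)
def dense_params(input_dim, units):
    return units * input_dim + units

# The two-layer model above: Dense(10) on 1 input feature, then Dense(1) on 10 features
hidden = dense_params(1, 10)   # 20
output = dense_params(10, 1)   # 11
print(hidden + output)         # 31 trainable params in total
```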
```
# Let's fit the model to the training data
model.fit(X_train, y_train, epochs=100, verbose=1)
# Get the models summary
model.summary()
from tensorflow.keras.utils import plot_model
# from keras.utils.vis_utils import plot_model
plot_model(model=model, show_shapes=True)
```
### How to visualize the model predictions
To visualize the predictions, it is a good idea to plot them.
Often you'll plot `y_test` (or `y_true`) vs `y_pred`
```
# Predicting the output
y_pred = model.predict(X_test)
y_pred
y_test.numpy()
# Creating a plotting function
def plot_predictions(train_data=X_train,
                     train_label=y_train,
                     test_data=X_test,
                     test_label=y_test,
                     predictions=y_pred):
    """
    Plot training data, test data and compare to ground truth
    """
    plt.figure(figsize=(10, 7))
    plt.scatter(train_data, train_label, c="b", label="training data")
    plt.scatter(test_data, test_label, c="g", label="test data")
    plt.scatter(test_data, predictions, c="r", label="predictions")
    plt.legend();
plot_predictions(train_data=X_train,
train_label=y_train,
test_data=X_test,
test_label=y_test,
predictions=y_pred)
```
### Evaluation metrics
These methods tell us in a more quantitative way how close our `y_test` and `y_pred` are.
The evaluation metrics depend on what kind of model we are running.
Since we are running a regression, two common metrics are
* MAE - the average of the absolute values of the errors
* MSE - the average of the squared errors
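Both metrics can be reproduced by hand on toy numbers (the values below are illustrative, not taken from the model):

```python
# Hand computation of MAE and MSE (illustrative values, not model outputs)
y_true = [3.0, 5.0, 7.0]
y_hat = [2.0, 5.0, 10.0]

errors = [t - p for t, p in zip(y_true, y_hat)]
mae_value = sum(abs(e) for e in errors) / len(errors)  # (1 + 0 + 3) / 3
mse_value = sum(e * e for e in errors) / len(errors)   # (1 + 0 + 9) / 3
print(mae_value, mse_value)  # ≈ 1.33 and ≈ 3.33 -- MSE punishes the large error more
```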
```
model.evaluate(X_test, y_test)
# Calculating the Mean Absolute Error
mae = tf.metrics.mean_absolute_error(y_test, tf.squeeze(y_pred))
mae
# Calculating the mean squared error - this makes larger errors more significant
mse = tf.metrics.mean_squared_error(y_test, tf.squeeze(y_pred))
mse
# Function for evaluations
def mae(y_test=y_test,
        y_pred=y_pred):
    return tf.metrics.mean_absolute_error(y_test, tf.squeeze(y_pred))

# Function to create MSE evaluation
def mse(y_test=y_test,
        y_pred=y_pred):
    return tf.metrics.mean_squared_error(y_test, tf.squeeze(y_pred))
print(mae())
print(mse())
```
### Improving the model
This is how we aim to achieve it
1. Get more data for the model to train on
2. Make your model larger
3. Train for longer, give the model more epochs
Let's do 3 experiments
1. `model_1` - same as the original model, 1 layer, trained for 100 epochs
2. `model_2` - 2 layers, trained for 100 epochs
3. `model_3` - 2 layers, trained on 500 epochs
**Build `model_1`**
```
X_train, y_train
# This is the model_1
# Set random seed
tf.random.set_seed(42)
# 1. Create the model
model_1 = tf.keras.Sequential([
tf.keras.layers.Dense(1)
])
# 2. Compile the model
model_1.compile(loss=tf.keras.losses.mae,
optimizer=tf.keras.optimizers.SGD(),
metrics=["mae"])
# 3. Fit the model
model_1.fit(X_train, y_train, epochs=100)
# Make and plot predictions for model_1
y_pred_1 = model_1.predict(X_test)
y_pred_1
plot_predictions(predictions=y_pred_1)
# Calculate model_1 evaluation metrics
mae_1 = mae(y_test, y_pred_1)
mse_1 = mse(y_test, y_pred_1)
mae_1, mse_1
```
**Build `model_2`**
* 2 dense layers trained on 100 epochs
```
# 1. Create the model
model_2 = tf.keras.Sequential([
tf.keras.layers.Dense(10),
tf.keras.layers.Dense(1)
])
# 2. Compile the model
model_2.compile(loss=tf.keras.losses.mae,
optimizer=tf.keras.optimizers.SGD(),
metrics=["mae"])
# 3. Fit the model
model_2.fit(X_train, y_train, epochs=200)
# Make and plot predictions
y_pred_2 = model_2.predict(X_test)
plot_predictions(predictions=y_pred_2)
mae_2 = mae(y_test, y_pred_2)
mae_2
mse_2 = mse(y_test, y_pred_2)
mse_2
```
**Building `model_3`**
```
tf.random.set_seed(42)
# 1. Build the model
model_3 = tf.keras.Sequential([
tf.keras.layers.Dense(10),
tf.keras.layers.Dense(1)
])
# 2. Compile the model
model_3.compile(loss=tf.keras.losses.mae,
optimizer=tf.keras.optimizers.SGD(),
metrics=["mae"])
# 3. Fit the model
model_3.fit(X_train, y_train, epochs=500)
y_preds_3 = model_3.predict(X_test)
plot_predictions(predictions=y_preds_3)
# Calculate the evaluation metrics for model_3
mae_3 = mae(y_test, y_preds_3)
mse_3 = mse(y_test, y_preds_3)
mae_3, mse_3
```
### Compare the results of the experiments
```
# Let's create the pandas data frame
import pandas as pd
model_results = [['model_1', mae_1.numpy(), mse_1.numpy()],
['model_2', mae_2.numpy(), mse_2.numpy()],
['model_3', mae_3.numpy(), mse_3.numpy()]]
all_results = pd.DataFrame(model_results, columns=["model", "mae", "mse"])
all_results
model_2.summary()
```
**Note:** The time taken between experiments should be kept to a minimum, to allow for a maximum number of experiments in a short period of time.
## Tracking the experiments
* Tensorboard - TensorFlow's own tool for model tracking
* Weights & Biases - a tool for tracking all kinds of machine learning experiments (plugs straight into TensorFlow).
### Saving our model
Saving our model allows us to use it outside of Colab, e.g. in a web or mobile app.
There are 2 main formats to save a model in:
1. The SavedModel format
2. The HDF5 format
```
# Save the model in SavedModel format
model_2.save("best_model_SavedModel_format")
# Save the model in HDF5 format
model_2.save('best_model_HDF5.h5')
model_hdf5 = tf.keras.models.load_model("/content/best_model_HDF5.h5")
model_hdf5.summary()
# Save from the SavedModel
model_savedmodel = tf.keras.models.load_model("/content/best_model_SavedModel_format")
model_savedmodel.summary()
# Save the model from HDF5
model_hdf5 = tf.keras.models.load_model("/content/best_model_HDF5.h5")
model_hdf5.summary()
# Compare the model_2 predictions with the SavedModel and the HDF5 format
model_2_preds = model_2.predict(X_test)
savedmodel_2_preds = model_savedmodel.predict(X_test)
hdf5model_2_preds = model_hdf5.predict(X_test)
model_2_preds == savedmodel_2_preds
model_2_preds == hdf5model_2_preds
```
### Download a model (or any other file) from Google Colab
There are a number of ways you can do it:
1. Download from the Files pane of the Colab interface
2. Use code to download from Colab
3. Save it to Google Drive and copy it wherever it is needed
```
# from google.colab import files
# files.download("/content/best_model_HDF5.h5")
# Save a file from Google Colab to Google Drive (replace `source` and `dest` with real paths)
# !cp source dest
```
## A larger example
```
# Import the libs
import tensorflow as tf
import pandas as pd
import matplotlib.pyplot as plt
# Reading the dataset
insurance = pd.read_csv("https://raw.githubusercontent.com/stedy/Machine-Learning-with-R-datasets/master/insurance.csv")
insurance.head()
insurance_one_hot = pd.get_dummies(insurance)
insurance_one_hot.head()
# Get all the columns of the main dataframe
insurance_one_hot.columns
# Create X and y values (features and labels)
y = insurance_one_hot[['charges']]
X = insurance_one_hot.drop(['charges'], axis=1)
# Observe all the X and y dataframe columns
X.columns, y.columns
X.head()
y.head()
# Create the training and test set
from sklearn.model_selection import train_test_split
# Splitting the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Get all the shapes
X.shape, y.shape, X_train.shape, X_test.shape, y_train.shape, y_test.shape
# Set the random seed
tf.random.set_seed(42)
# Building the neural network
insurance_model = tf.keras.Sequential([
tf.keras.layers.Dense(10),
tf.keras.layers.Dense(1)
])
# Compile the model
insurance_model.compile(loss=tf.keras.losses.mae,
optimizer=tf.keras.optimizers.SGD(),
metrics=["mae"])
# Fit the model
insurance_model.fit(X_train, y_train, epochs=100)
# Evaluate the model with the test_data
insurance_model.evaluate(X_test, y_test)
y_train.median(), y_train.mean()
```
This model doesn't seem to be working as expected, so we need to run a few experiments.
We will be doing 2:
1. Add an extra layer with more hidden units
2. Train for longer
```
# Set random seed
tf.random.set_seed(42)
# 1. Create the model
insurance_model_2 = tf.keras.Sequential([
tf.keras.layers.Dense(100),
tf.keras.layers.Dense(10),
tf.keras.layers.Dense(1)
])
# 2. Compile the model
insurance_model_2.compile(
loss=tf.keras.losses.mae,
optimizer=tf.keras.optimizers.Adam(),
metrics=["mae"]
)
# 3. Fit the model
insurance_model_2.fit(X_train, y_train, epochs=100)
insurance_model_2.evaluate(X_test, y_test)
# Set random seed
tf.random.set_seed(42)
# 1. Create the model
insurance_model_3 = tf.keras.Sequential([
tf.keras.layers.Dense(100),
tf.keras.layers.Dense(10),
tf.keras.layers.Dense(1)
])
# 2. Compile the model
insurance_model_3.compile(loss=tf.keras.losses.mae,
optimizer=tf.keras.optimizers.Adam(),
metrics=["mae"])
# 3. Fit the model
history = insurance_model_3.fit(X_train, y_train, epochs=200)
# Evaluating the model
insurance_model_3.evaluate(X_test, y_test)
# Viewing the results of the previous model
insurance_model.evaluate(X_test, y_test)
# Plot history (loss curve or a training curve)
pd.DataFrame(history.history).plot()
plt.xlabel("epochs")
plt.ylabel("loss")
```
> How long should you train for?
It depends on the model and the problem. One solution provided by TensorFlow is the [EarlyStopping Callback](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping), which stops training once a monitored metric stops improving.
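The patience logic behind such a callback can be sketched in plain Python (this is a minimal illustration of the idea, not the Keras API itself):

```python
# Sketch of "early stopping with patience": stop when the loss has not improved
# for `patience` consecutive epochs. `losses` is a per-epoch loss history.
def train_with_early_stopping(losses, patience=3):
    """Return the epoch at which training would stop."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(losses):
        if loss < best:
            best, wait = loss, 0   # improvement: reset the patience counter
        else:
            wait += 1
            if wait >= patience:
                return epoch       # no improvement for `patience` epochs: stop
    return len(losses) - 1

# Loss stops improving after epoch 2; with patience=3 we stop at epoch 5
print(train_with_early_stopping([1.0, 0.8, 0.7, 0.7, 0.75, 0.72, 0.9]))  # 5
```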
### Preprocessing of data (normalization and standardization)
- Article: [Scale, Standardize, or Normalize with Scikit-Learn](https://towardsdatascience.com/scale-standardize-or-normalize-with-scikit-learn-6ccc7d176a02)
```
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
# Reading the data
insurance = pd.read_csv("https://raw.githubusercontent.com/stedy/Machine-Learning-with-R-datasets/master/insurance.csv")
insurance.head()
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
from sklearn.model_selection import train_test_split
# Make column transformer
ct = make_column_transformer(
(MinMaxScaler(), ["age", "bmi", "children"]),
(OneHotEncoder(handle_unknown="ignore"), ["sex", "smoker", "region"])
)
# Creating X and Y
X = insurance.drop(["charges"], axis=1)
y = insurance[["charges"]]
# Split the data into train and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Fit the column transformer to our training data
ct.fit(X_train)
# Transform the train data and the test data with Normalization and OneHotEncoder
X_train_normal = ct.transform(X_train)
X_test_normal = ct.transform(X_test)
X_train_normal[0]
```
The data is now pre-processed and ready to be used in the neural network.
```
# Set the random seed
tf.random.set_seed(42)
# 1. Create the model
insurance_model_4 = tf.keras.Sequential([
tf.keras.layers.Dense(100),
tf.keras.layers.Dense(10),
tf.keras.layers.Dense(1)
])
# 2. Compile the model
insurance_model_4.compile(loss=tf.keras.losses.mae,
optimizer=tf.keras.optimizers.Adam(),
metrics=["mae"])
# 3. Fit the model
insurance_model_4.fit(X_train_normal, y_train, epochs=100)
insurance_model_4.evaluate(X_test_normal, y_test)
```
# Deep Learning & Art: Neural Style Transfer
Welcome to the second assignment of this week. In this assignment, you will learn about Neural Style Transfer. This algorithm was created by Gatys et al. (2015) (https://arxiv.org/abs/1508.06576).
**In this assignment, you will:**
- Implement the neural style transfer algorithm
- Generate novel artistic images using your algorithm
Most of the algorithms you've studied optimize a cost function to get a set of parameter values. In Neural Style Transfer, you'll optimize a cost function to get pixel values!
```
import os
import sys
import scipy.io
import scipy.misc
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
from PIL import Image
from nst_utils import *
import numpy as np
import tensorflow as tf
%matplotlib inline
```
## 1 - Problem Statement
Neural Style Transfer (NST) is one of the most fun techniques in deep learning. As seen below, it merges two images, namely, a "content" image (C) and a "style" image (S), to create a "generated" image (G). The generated image G combines the "content" of the image C with the "style" of image S.
In this example, you are going to generate an image of the Louvre museum in Paris (content image C), mixed with a painting by Claude Monet, a leader of the impressionist movement (style image S).
<img src="images/louvre_generated.png" style="width:750px;height:200px;">
Let's see how you can do this.
## 2 - Transfer Learning
Neural Style Transfer (NST) uses a previously trained convolutional network, and builds on top of that. The idea of using a network trained on a different task and applying it to a new task is called transfer learning.
Following the original NST paper (https://arxiv.org/abs/1508.06576), we will use the VGG network. Specifically, we'll use VGG-19, a 19-layer version of the VGG network. This model has already been trained on the very large ImageNet database, and thus has learned to recognize a variety of low level features (at the earlier layers) and high level features (at the deeper layers).
Run the following code to load parameters from the VGG model. This may take a few seconds.
```
model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
print(model)
```
The model is stored in a python dictionary where each variable name is the key and the corresponding value is a tensor containing that variable's value. To run an image through this network, you just have to feed the image to the model. In TensorFlow, you can do so using the [tf.assign](https://www.tensorflow.org/api_docs/python/tf/assign) function. In particular, you will use the assign function like this:
```python
model["input"].assign(image)
```
This assigns the image as an input to the model. After this, if you want to access the activations of a particular layer, say layer `4_2` when the network is run on this image, you would run a TensorFlow session on the correct tensor `conv4_2`, as follows:
```python
sess.run(model["conv4_2"])
```
## 3 - Neural Style Transfer
We will build the NST algorithm in three steps:
- Build the content cost function $J_{content}(C,G)$
- Build the style cost function $J_{style}(S,G)$
- Put it together to get $J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$.
### 3.1 - Computing the content cost
In our running example, the content image C will be the picture of the Louvre Museum in Paris. Run the code below to see a picture of the Louvre.
```
content_image = scipy.misc.imread("images/louvre.jpg")
imshow(content_image)
```
The content image (C) shows the Louvre museum's pyramid surrounded by old Paris buildings, against a sunny sky with a few clouds.
**3.1.1 - How do you ensure the generated image G matches the content of the image C?**
As we saw in lecture, the earlier (shallower) layers of a ConvNet tend to detect lower-level features such as edges and simple textures, and the later (deeper) layers tend to detect higher-level features such as more complex textures as well as object classes.
We would like the "generated" image G to have similar content as the input image C. Suppose you have chosen some layer's activations to represent the content of an image. In practice, you'll get the most visually pleasing results if you choose a layer in the middle of the network--neither too shallow nor too deep. (After you have finished this exercise, feel free to come back and experiment with using different layers, to see how the results vary.)
So, suppose you have picked one particular hidden layer to use. Now, set the image C as the input to the pretrained VGG network, and run forward propagation. Let $a^{(C)}$ be the hidden layer activations in the layer you had chosen. (In lecture, we had written this as $a^{[l](C)}$, but here we'll drop the superscript $[l]$ to simplify the notation.) This will be an $n_H \times n_W \times n_C$ tensor. Repeat this process with the image G: set G as the input, and run forward propagation. Let $a^{(G)}$ be the corresponding hidden layer activation. We will define the content cost function as:
$$J_{content}(C,G) = \frac{1}{4 \times n_H \times n_W \times n_C}\sum _{ \text{all entries}} (a^{(C)} - a^{(G)})^2\tag{1} $$
Here, $n_H, n_W$ and $n_C$ are the height, width and number of channels of the hidden layer you have chosen, and appear in a normalization term in the cost. For clarity, note that $a^{(C)}$ and $a^{(G)}$ are the volumes corresponding to a hidden layer's activations. In order to compute the cost $J_{content}(C,G)$, it might also be convenient to unroll these 3D volumes into a 2D matrix, as shown below. (Technically this unrolling step isn't needed to compute $J_{content}$, but it will be good practice for when you do need to carry out a similar operation later for computing the style cost $J_{style}$.)
<img src="images/NST_LOSS.png" style="width:800px;height:400px;">
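To make equation (1) concrete before implementing it in TensorFlow, here is a small NumPy sketch on toy activations (the shapes are illustrative, and no unrolling is needed for the raw formula):

```python
import numpy as np

np.random.seed(0)
n_H, n_W, n_C = 4, 4, 3
a_C = np.random.randn(1, n_H, n_W, n_C)  # toy "content" activations
a_G = np.random.randn(1, n_H, n_W, n_C)  # toy "generated" activations

# Equation (1): sum of squared differences over all entries,
# normalized by 4 * n_H * n_W * n_C
J_content = np.sum((a_C - a_G) ** 2) / (4 * n_H * n_W * n_C)
print(J_content)
```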
**Exercise:** Compute the "content cost" using TensorFlow.
**Instructions**: The 3 steps to implement this function are:
1. Retrieve dimensions from a_G:
- To retrieve dimensions from a tensor X, use: `X.get_shape().as_list()`
2. Unroll a_C and a_G as explained in the picture above
- If you are stuck, take a look at [Hint1](https://www.tensorflow.org/versions/r1.3/api_docs/python/tf/transpose) and [Hint2](https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/reshape).
3. Compute the content cost:
- If you are stuck, take a look at [Hint3](https://www.tensorflow.org/api_docs/python/tf/reduce_sum), [Hint4](https://www.tensorflow.org/api_docs/python/tf/square) and [Hint5](https://www.tensorflow.org/api_docs/python/tf/subtract).
```
# GRADED FUNCTION: compute_content_cost
def compute_content_cost(a_C, a_G):
"""
Computes the content cost
Arguments:
a_C -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image C
a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image G
Returns:
J_content -- scalar that you compute using equation 1 above.
"""
### START CODE HERE ###
# Retrieve dimensions from a_G (≈1 line)
m, n_H, n_W, n_C = a_G.get_shape().as_list()
# Reshape a_C and a_G (≈2 lines)
a_C_unrolled = tf.transpose(tf.reshape(a_C, [n_H * n_W, n_C]))
a_G_unrolled = tf.transpose(tf.reshape(a_G, [n_H * n_W, n_C]))
# compute the cost with tensorflow (≈1 line)
J_content = tf.reduce_sum(tf.square(tf.subtract(a_C_unrolled,a_G_unrolled))) / (4 * n_H * n_W * n_C)
### END CODE HERE ###
return J_content
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
a_C = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
J_content = compute_content_cost(a_C, a_G)
print("J_content = " + str(J_content.eval()))
```
**Expected Output**:
<table>
<tr>
<td>
**J_content**
</td>
<td>
6.76559
</td>
</tr>
</table>
<font color='blue'>
**What you should remember**:
- The content cost takes a hidden layer activation of the neural network, and measures how different $a^{(C)}$ and $a^{(G)}$ are.
- When we minimize the content cost later, this will help make sure $G$ has similar content as $C$.
### 3.2 - Computing the style cost
For our running example, we will use the following style image:
```
style_image = scipy.misc.imread("images/monet_800600.jpg")
imshow(style_image)
```
This painting was painted in the style of *[impressionism](https://en.wikipedia.org/wiki/Impressionism)*.
Let's see how you can now define a "style" cost function $J_{style}(S,G)$.
### 3.2.1 - Style matrix
The style matrix is also called a "Gram matrix." In linear algebra, the Gram matrix G of a set of vectors $(v_{1},\dots ,v_{n})$ is the matrix of dot products, whose entries are ${\displaystyle G_{ij} = v_{i}^T v_{j} = np.dot(v_{i}, v_{j}) }$. In other words, $G_{ij}$ compares how similar $v_i$ is to $v_j$: If they are highly similar, you would expect them to have a large dot product, and thus for $G_{ij}$ to be large.
Note that there is an unfortunate collision in the variable names used here. We are following common terminology used in the literature, but $G$ is used to denote the Style matrix (or Gram matrix) as well as to denote the generated image $G$. We will try to make sure which $G$ we are referring to is always clear from the context.
In NST, you can compute the Style matrix by multiplying the "unrolled" filter matrix with their transpose:
<img src="images/NST_GM.png" style="width:900px;height:300px;">
The result is a matrix of dimension $(n_C,n_C)$ where $n_C$ is the number of filters. The value $G_{ij}$ measures how similar the activations of filter $i$ are to the activations of filter $j$.
One important property of the Gram matrix is that the diagonal elements $G_{ii}$ also measure how active filter $i$ is. For example, suppose filter $i$ is detecting vertical textures in the image. Then $G_{ii}$ measures how common vertical textures are in the image as a whole: if $G_{ii}$ is large, this means that the image has a lot of vertical texture.
By capturing the prevalence of different types of features ($G_{ii}$), as well as how much different features occur together ($G_{ij}$), the Style matrix $G$ measures the style of an image.
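As a quick numeric illustration (not part of the graded code), the Gram matrix of a tiny unrolled filter matrix can be computed with NumPy:

```python
import numpy as np

# Two filters (rows), each unrolled over three spatial positions (columns)
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 0.0]])

G = A @ A.T  # Gram matrix, shape (n_C, n_C) = (2, 2)
print(G)
# G[0, 0] = 1 + 4 + 9 = 14  -> how "active" filter 0 is
# G[0, 1] = 0 + 2 + 0 = 2   -> how correlated filters 0 and 1 are
```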
**Exercise**:
Using TensorFlow, implement a function that computes the Gram matrix of a matrix A. The formula is: The gram matrix of A is $G_A = AA^T$. If you are stuck, take a look at [Hint 1](https://www.tensorflow.org/api_docs/python/tf/matmul) and [Hint 2](https://www.tensorflow.org/api_docs/python/tf/transpose).
```
# GRADED FUNCTION: gram_matrix
def gram_matrix(A):
"""
Argument:
A -- matrix of shape (n_C, n_H*n_W)
Returns:
GA -- Gram matrix of A, of shape (n_C, n_C)
"""
### START CODE HERE ### (≈1 line)
GA = tf.matmul(A, tf.transpose(A))
### END CODE HERE ###
return GA
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
A = tf.random_normal([3, 2*1], mean=1, stddev=4)
GA = gram_matrix(A)
print("GA = " + str(GA.eval()))
```
**Expected Output**:
<table>
<tr>
<td>
**GA**
</td>
<td>
[[ 6.42230511 -4.42912197 -2.09668207] <br>
[ -4.42912197 19.46583748 19.56387138] <br>
[ -2.09668207 19.56387138 20.6864624 ]]
</td>
</tr>
</table>
### 3.2.2 - Style cost
After generating the Style matrix (Gram matrix), your goal will be to minimize the distance between the Gram matrix of the "style" image S and that of the "generated" image G. For now, we are using only a single hidden layer $a^{[l]}$, and the corresponding style cost for this layer is defined as:
$$J_{style}^{[l]}(S,G) = \frac{1}{4 \times {n_C}^2 \times (n_H \times n_W)^2} \sum _{i=1}^{n_C}\sum_{j=1}^{n_C}(G^{(S)}_{ij} - G^{(G)}_{ij})^2\tag{2} $$
where $G^{(S)}$ and $G^{(G)}$ are respectively the Gram matrices of the "style" image and the "generated" image, computed using the hidden layer activations for a particular hidden layer in the network.
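Before the TensorFlow version, a small NumPy sketch of equation (2) on toy Gram matrices (shapes illustrative) may help:

```python
import numpy as np

np.random.seed(1)
n_H, n_W, n_C = 4, 4, 3
GS = np.random.randn(n_C, n_C)  # toy Gram matrix of the "style" activations
GG = np.random.randn(n_C, n_C)  # toy Gram matrix of the "generated" activations

# Equation (2): squared Frobenius distance between the Gram matrices,
# normalized by 4 * n_C^2 * (n_H * n_W)^2
J_style_layer = np.sum((GS - GG) ** 2) / (4 * n_C**2 * (n_H * n_W)**2)
print(J_style_layer)
```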
**Exercise**: Compute the style cost for a single layer.
**Instructions**: The 3 steps to implement this function are:
1. Retrieve dimensions from the hidden layer activations a_G:
- To retrieve dimensions from a tensor X, use: `X.get_shape().as_list()`
2. Unroll the hidden layer activations a_S and a_G into 2D matrices, as explained in the picture above.
- You may find [Hint1](https://www.tensorflow.org/versions/r1.3/api_docs/python/tf/transpose) and [Hint2](https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/reshape) useful.
3. Compute the Style matrix of the images S and G. (Use the function you had previously written.)
4. Compute the Style cost:
- You may find [Hint3](https://www.tensorflow.org/api_docs/python/tf/reduce_sum), [Hint4](https://www.tensorflow.org/api_docs/python/tf/square) and [Hint5](https://www.tensorflow.org/api_docs/python/tf/subtract) useful.
```
# GRADED FUNCTION: compute_layer_style_cost
def compute_layer_style_cost(a_S, a_G):
"""
Arguments:
a_S -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image S
a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image G
Returns:
J_style_layer -- tensor representing a scalar value, style cost defined above by equation (2)
"""
### START CODE HERE ###
# Retrieve dimensions from a_G (≈1 line)
m, n_H, n_W, n_C = a_G.get_shape().as_list()
# Reshape the images to have them of shape (n_C, n_H*n_W) (≈2 lines)
a_S = tf.transpose(tf.reshape(a_S, [n_H * n_W, n_C]))
a_G = tf.transpose(tf.reshape(a_G, [n_H * n_W, n_C]))
# Computing gram_matrices for both images S and G (≈2 lines)
GS = gram_matrix(a_S)
GG = gram_matrix(a_G)
# Computing the loss (≈1 line)
J_style_layer = tf.reduce_sum(tf.square(tf.subtract(GS, GG))) / (4 * n_C **2 * (n_W * n_H) ** 2)
### END CODE HERE ###
return J_style_layer
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
a_S = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
J_style_layer = compute_layer_style_cost(a_S, a_G)
print("J_style_layer = " + str(J_style_layer.eval()))
```
**Expected Output**:
<table>
<tr>
<td>
**J_style_layer**
</td>
<td>
9.19028
</td>
</tr>
</table>
### 3.2.3 - Style weights
So far you have captured the style from only one layer. We'll get better results if we "merge" style costs from several different layers. After completing this exercise, feel free to come back and experiment with different weights to see how it changes the generated image $G$. But for now, this is a pretty reasonable default:
```
STYLE_LAYERS = [
('conv1_1', 0.2),
('conv2_1', 0.2),
('conv3_1', 0.2),
('conv4_1', 0.2),
('conv5_1', 0.2)]
```
You can combine the style costs for different layers as follows:
$$J_{style}(S,G) = \sum_{l} \lambda^{[l]} J^{[l]}_{style}(S,G)$$
where the values for $\lambda^{[l]}$ are given in `STYLE_LAYERS`.
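In plain Python, this weighted combination is just a sum of coefficient-times-cost terms; the per-layer cost values below are made up for the example:

```python
# Hypothetical per-layer style costs, as if returned by compute_layer_style_cost
layer_costs = {"conv1_1": 2.0, "conv2_1": 1.5}

STYLE_LAYERS_EXAMPLE = [("conv1_1", 0.2), ("conv2_1", 0.2)]

# J_style = sum over layers of lambda^[l] * J_style^[l]
J_style = sum(coeff * layer_costs[name] for name, coeff in STYLE_LAYERS_EXAMPLE)
print(J_style)
```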
We've implemented a `compute_style_cost(...)` function. It simply calls your `compute_layer_style_cost(...)` several times, and weights their results using the values in `STYLE_LAYERS`. Read over it to make sure you understand what it's doing.
<!--
2. Loop over (layer_name, coeff) from STYLE_LAYERS:
a. Select the output tensor of the current layer. As an example, to call the tensor from the "conv1_1" layer you would do: out = model["conv1_1"]
b. Get the style of the style image from the current layer by running the session on the tensor "out"
c. Get a tensor representing the style of the generated image from the current layer. It is just "out".
d. Now that you have both styles. Use the function you've implemented above to compute the style_cost for the current layer
e. Add (style_cost x coeff) of the current layer to overall style cost (J_style)
3. Return J_style, which should now be the sum of the (style_cost x coeff) for each layer.
-->
```
def compute_style_cost(model, STYLE_LAYERS):
"""
Computes the overall style cost from several chosen layers
Arguments:
model -- our tensorflow model
STYLE_LAYERS -- A python list containing:
- the names of the layers we would like to extract style from
- a coefficient for each of them
Returns:
J_style -- tensor representing a scalar value, style cost defined above by equation (2)
"""
# initialize the overall style cost
J_style = 0
for layer_name, coeff in STYLE_LAYERS:
# Select the output tensor of the currently selected layer
out = model[layer_name]
# Set a_S to be the hidden layer activation from the layer we have selected, by running the session on out
a_S = sess.run(out)
# Set a_G to be the hidden layer activation from same layer. Here, a_G references model[layer_name]
# and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
# when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
a_G = out
# Compute style_cost for the current layer
J_style_layer = compute_layer_style_cost(a_S, a_G)
# Add coeff * J_style_layer of this layer to overall style cost
J_style += coeff * J_style_layer
return J_style
```
**Note**: In the inner-loop of the for-loop above, `a_G` is a tensor and hasn't been evaluated yet. It will be evaluated and updated at each iteration when we run the TensorFlow graph in model_nn() below.
<!--
How do you choose the coefficients for each layer? The deeper layers capture higher-level concepts, and the features in the deeper layers are less localized in the image relative to each other. So if you want the generated image to softly follow the style image, try choosing larger weights for deeper layers and smaller weights for the first layers. In contrast, if you want the generated image to strongly follow the style image, try choosing smaller weights for deeper layers and larger weights for the first layers
-->
<font color='blue'>
**What you should remember**:
- The style of an image can be represented using the Gram matrix of a hidden layer's activations. However, we get even better results combining this representation from multiple different layers. This is in contrast to the content representation, where usually using just a single hidden layer is sufficient.
- Minimizing the style cost will cause the image $G$ to follow the style of the image $S$.
</font>
### 3.3 - Defining the total cost to optimize
Finally, let's create a cost function that minimizes both the style and the content cost. The formula is:
$$J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$$
**Exercise**: Implement the total cost function which includes both the content cost and the style cost.
```
# GRADED FUNCTION: total_cost
def total_cost(J_content, J_style, alpha = 10, beta = 40):
"""
Computes the total cost function
Arguments:
J_content -- content cost coded above
J_style -- style cost coded above
alpha -- hyperparameter weighting the importance of the content cost
beta -- hyperparameter weighting the importance of the style cost
Returns:
J -- total cost as defined by the formula above.
"""
### START CODE HERE ### (≈1 line)
J = alpha * J_content + beta*J_style
### END CODE HERE ###
return J
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(3)
J_content = np.random.randn()
J_style = np.random.randn()
J = total_cost(J_content, J_style)
print("J = " + str(J))
```
**Expected Output**:
<table>
<tr>
<td>
**J**
</td>
<td>
35.34667875478276
</td>
</tr>
</table>
<font color='blue'>
**What you should remember**:
- The total cost is a linear combination of the content cost $J_{content}(C,G)$ and the style cost $J_{style}(S,G)$
- $\alpha$ and $\beta$ are hyperparameters that control the relative weighting between content and style
## 4 - Solving the optimization problem
Finally, let's put everything together to implement Neural Style Transfer!
Here's what the program will have to do:
<font color='purple'>
1. Create an Interactive Session
2. Load the content image
3. Load the style image
4. Randomly initialize the image to be generated
5. Load the VGG-19 model
6. Build the TensorFlow graph:
- Run the content image through the VGG-19 model and compute the content cost
- Run the style image through the VGG-19 model and compute the style cost
- Compute the total cost
- Define the optimizer and the learning rate
7. Initialize the TensorFlow graph and run it for a large number of iterations, updating the generated image at every step.
</font>
Let's go through the individual steps in detail.
You've previously implemented the overall cost $J(G)$. We'll now set up TensorFlow to optimize this with respect to $G$. To do so, your program has to reset the graph and use an "[Interactive Session](https://www.tensorflow.org/api_docs/python/tf/InteractiveSession)". Unlike a regular session, the "Interactive Session" installs itself as the default session to build a graph. This allows you to run variables without constantly needing to refer to the session object, which simplifies the code.
Let's start the interactive session.
```
# Reset the graph
tf.reset_default_graph()
# Start interactive session
sess = tf.InteractiveSession()
```
Let's load, reshape, and normalize our "content" image (the Louvre museum picture):
```
content_image = scipy.misc.imread("images/louvre_small.jpg")
content_image = reshape_and_normalize_image(content_image)
```
Let's load, reshape and normalize our "style" image (Claude Monet's painting):
```
style_image = scipy.misc.imread("images/monet.jpg")
style_image = reshape_and_normalize_image(style_image)
```
Now, we initialize the "generated" image as a noisy image created from the content_image. Initializing the pixels of the generated image to be mostly noise but still slightly correlated with the content image helps the content of the "generated" image more rapidly match the content of the "content" image. (Feel free to look in `nst_utils.py` to see the details of `generate_noise_image(...)`; to do so, click "File-->Open..." at the upper-left corner of this Jupyter notebook.)
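The details live in `nst_utils.py`, but a plausible sketch of such a helper (the `noise_ratio` value and the pixel range here are assumptions for illustration, not a copy of the provided code) looks like this:

```python
import numpy as np

def generate_noise_image_sketch(content_image, noise_ratio=0.6):
    """Blend uniform noise with the content image, so the result is
    mostly noise but still weakly correlated with the content."""
    noise_image = np.random.uniform(-20, 20, content_image.shape).astype("float32")
    return noise_image * noise_ratio + content_image * (1 - noise_ratio)
```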
```
generated_image = generate_noise_image(content_image)
imshow(generated_image[0])
```
Next, as explained in part (2), let's load the VGG-19 model.
```
model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
```
To get the program to compute the content cost, we will now assign `a_C` and `a_G` to be the appropriate hidden layer activations. We will use layer `conv4_2` to compute the content cost. The code below does the following:
1. Assign the content image to be the input to the VGG model.
2. Set a_C to be the tensor giving the hidden layer activation for layer "conv4_2".
3. Set a_G to be the tensor giving the hidden layer activation for the same layer.
4. Compute the content cost using a_C and a_G.
```
# Assign the content image to be the input of the VGG model.
sess.run(model['input'].assign(content_image))
# Select the output tensor of layer conv4_2
out = model['conv4_2']
# Set a_C to be the hidden layer activation from the layer we have selected
a_C = sess.run(out)
# Set a_G to be the hidden layer activation from same layer. Here, a_G references model['conv4_2']
# and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
# when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
a_G = out
# Compute the content cost
J_content = compute_content_cost(a_C, a_G)
```
**Note**: At this point, a_G is a tensor and hasn't been evaluated. It will be evaluated and updated at each iteration when we run the Tensorflow graph in model_nn() below.
```
# Assign the input of the model to be the "style" image
sess.run(model['input'].assign(style_image))
# Compute the style cost
J_style = compute_style_cost(model, STYLE_LAYERS)
```
**Exercise**: Now that you have J_content and J_style, compute the total cost J by calling `total_cost()`. Use `alpha = 10` and `beta = 40`.
```
### START CODE HERE ### (1 line)
J = total_cost(J_content, J_style, 10, 40)
### END CODE HERE ###
```
You've previously learned how to set up the Adam optimizer in TensorFlow. Let's do that here, using a learning rate of 2.0. [See reference](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer)
```
# define optimizer (1 line)
optimizer = tf.train.AdamOptimizer(2.0)
# define train_step (1 line)
train_step = optimizer.minimize(J)
```
**Exercise**: Implement the model_nn() function which initializes the variables of the tensorflow graph, assigns the input image (initial generated image) as the input of the VGG-19 model and runs the train_step for a large number of steps.
```
def model_nn(sess, input_image, num_iterations = 200):
# Initialize global variables (you need to run the session on the initializer)
### START CODE HERE ### (1 line)
sess.run(tf.global_variables_initializer())
### END CODE HERE ###
# Run the noisy input image (initial generated image) through the model. Use assign().
### START CODE HERE ### (1 line)
sess.run(model['input'].assign(input_image))
### END CODE HERE ###
for i in range(num_iterations):
# Run the session on the train_step to minimize the total cost
### START CODE HERE ### (1 line)
_ = sess.run(train_step)
### END CODE HERE ###
# Compute the generated image by running the session on the current model['input']
### START CODE HERE ### (1 line)
generated_image = sess.run(model['input'])
### END CODE HERE ###
# Print every 20 iterations.
if i%20 == 0:
Jt, Jc, Js = sess.run([J, J_content, J_style])
print("Iteration " + str(i) + " :")
print("total cost = " + str(Jt))
print("content cost = " + str(Jc))
print("style cost = " + str(Js))
# save current generated image in the "/output" directory
save_image("output/" + str(i) + ".png", generated_image)
# save last generated image
save_image('output/generated_image.jpg', generated_image)
return generated_image
```
Run the following cell to generate an artistic image. It should take about 3 minutes on CPU for every 20 iterations, but you should start observing attractive results after ≈140 iterations. Neural Style Transfer is generally trained using GPUs.
```
model_nn(sess, generated_image)
```
**Expected Output**:
<table>
<tr>
<td>
**Iteration 0:**
</td>
<td>
total cost = 5.05035e+09 <br>
content cost = 7877.67 <br>
style cost = 1.26257e+08
</td>
</tr>
</table>
You're done! After running this, in the upper bar of the notebook click on "File" and then "Open". Go to the "/output" directory to see all the saved images. Open "generated_image" to see the generated image! :)
You should see something like the image presented below on the right:
<img src="images/louvre_generated.png" style="width:800px;height:300px;">
We didn't want you to wait too long to see an initial result, and so had set the hyperparameters accordingly. To get the best looking results, running the optimization algorithm longer (and perhaps with a smaller learning rate) might work better. After completing and submitting this assignment, we encourage you to come back and play more with this notebook, and see if you can generate even better looking images.
Here are few other examples:
- The beautiful ruins of the ancient city of Persepolis (Iran) with the style of Van Gogh (The Starry Night)
<img src="images/perspolis_vangogh.png" style="width:750px;height:300px;">
- The tomb of Cyrus the Great in Pasargadae with the style of a Ceramic Kashi from Ispahan.
<img src="images/pasargad_kashi.png" style="width:750px;height:300px;">
- A scientific study of a turbulent fluid with the style of an abstract blue fluid painting.
<img src="images/circle_abstract.png" style="width:750px;height:300px;">
## 5 - Test with your own image (Optional/Ungraded)
Finally, you can also rerun the algorithm on your own images!
To do so, go back to part 4 and change the content image and style image with your own pictures. In detail, here's what you should do:
1. Click on "File -> Open" in the upper tab of the notebook
2. Go to "/images" and upload your images (requirement: WIDTH = 300, HEIGHT = 225), and rename them, for example, "my_content.jpg" and "my_style.jpg".
3. Change the code in part (4) from:
```python
content_image = scipy.misc.imread("images/louvre.jpg")
style_image = scipy.misc.imread("images/claude-monet.jpg")
```
to:
```python
content_image = scipy.misc.imread("images/my_content.jpg")
style_image = scipy.misc.imread("images/my_style.jpg")
```
4. Rerun the cells (you may need to restart the Kernel in the upper tab of the notebook).
You can also tune your hyperparameters:
- Which layers are responsible for representing the style? STYLE_LAYERS
- How many iterations do you want to run the algorithm? num_iterations
- What is the relative weighting between content and style? alpha/beta
## 6 - Conclusion
Great job on completing this assignment! You are now able to use Neural Style Transfer to generate artistic images. This is also your first time building a model in which the optimization algorithm updates the pixel values rather than the neural network's parameters. Deep learning has many different types of models and this is only one of them!
<font color='blue'>
What you should remember:
- Neural Style Transfer is an algorithm that given a content image C and a style image S can generate an artistic image
- It uses representations (hidden layer activations) based on a pretrained ConvNet.
- The content cost function is computed using one hidden layer's activations.
- The style cost function for one layer is computed using the Gram matrix of that layer's activations. The overall style cost function is obtained using several hidden layers.
- Optimizing the total cost function results in synthesizing new images.
This was the final programming exercise of this course. Congratulations--you've finished all the programming exercises of this course on Convolutional Networks! We hope to also see you in Course 5, on Sequence models!
### References:
The Neural Style Transfer algorithm was due to Gatys et al. (2015). Harish Narayanan and GitHub user "log0" also have highly readable write-ups from which we drew inspiration. The pre-trained network used in this implementation is a VGG network, which is due to Simonyan and Zisserman (2015). Pre-trained weights were from the work of the MatConvNet team.
- Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, (2015). A Neural Algorithm of Artistic Style (https://arxiv.org/abs/1508.06576)
- Harish Narayanan, Convolutional neural networks for artistic style transfer. https://harishnarayanan.org/writing/artistic-style-transfer/
- Log0, TensorFlow Implementation of "A Neural Algorithm of Artistic Style". http://www.chioka.in/tensorflow-implementation-neural-algorithm-of-artistic-style
- Karen Simonyan and Andrew Zisserman (2015). Very deep convolutional networks for large-scale image recognition (https://arxiv.org/pdf/1409.1556.pdf)
- MatConvNet. http://www.vlfeat.org/matconvnet/pretrained/
## <center>Computer Science Intensive Course - MindX</center>

# <center>LAB 1. PYTHON REVIEW</center>
```
# run this cell FIRST
%run test_cases_1.ipynb
```
## Exercise 1. Example
**Input**: A string of length > 1.
**Output**: The string converted to uppercase.
**Instructions**: Implement the function uppercase(), which takes one string parameter: the input string.
**Example**:
- Input: This is a string
- Output: result = "THIS IS A STRING"
```
def uppercase(input_str):
return input_str.upper()
# do some test
uppercase("This is a string")
```
Check the result:
```
# !!! DO NOT MODIFY THIS CELL
# Check result on test cases
test1(uppercase)
```
## Exercise 2. Square Diagonal
**Input**: A real number *a*, the side length of a square.
**Output**: The length of that square's diagonal.
Given:
- The Pythagorean theorem for a triangle ABC with a right angle at A:
$\begin{equation} AB^2 + AC^2 = BC^2 \end{equation}$
- The square root function in the *math* library: *sqrt()*
**Example**:
- Input: a = 2
- Output: 2.8284271247461903
```
import math
def cal_diagonal(edge_length):
pass
# !!! DO NOT MODIFY THIS CELL
# Check result on test cases
test2(cal_diagonal)
```
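If you want to check your work after attempting the exercise, one possible solution sketch (not the lab's official answer) applies the Pythagorean theorem directly:

```python
import math

def cal_diagonal(edge_length):
    # For a square with side a, the diagonal d satisfies a^2 + a^2 = d^2,
    # so d = a * sqrt(2).
    return edge_length * math.sqrt(2)

print(cal_diagonal(2))  # 2.8284271247461903
```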
## Exercise 3. Digit Sum
**Input**: A 4-digit integer *n*
**Output**: The sum of the digits of *n*
**Example**:
- Input: n = 1234
- Output: result = 10
- Explanation: result = 1+2+3+4 = 10
```
def sum_digits(num):
pass
sum_digits(1234)
# !!! DO NOT MODIFY THIS CELL
# Check result on test cases
test3(sum_digits)
```
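One possible solution sketch for reference (attempt the exercise first): peel the digits off with modulo and integer division.

```python
def sum_digits(num):
    # Repeatedly strip the last digit with % 10, then drop it with // 10.
    total = 0
    num = abs(num)
    while num > 0:
        total += num % 10
        num //= 10
    return total

print(sum_digits(1234))  # 10
```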
## Exercise 4. Day of the Week
**Input**: An integer *2 <= n <= 8*.
**Output**: The name of the corresponding day of the week.
**Example**:
Example 1:
- Input: n = 2
- Output: Monday
Example 2:
- Input: n = 8
- Output: Sunday
```
def day_in_week(day_int):
pass
# !!! DO NOT MODIFY THIS CELL
# Check result on test cases
test4(day_in_week)
```
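One possible solution sketch (try your own version first): map the range 2..8 onto a list of day names by offsetting the index.

```python
def day_in_week(day_int):
    # day_int = 2 maps to index 0 (Monday), day_int = 8 to index 6 (Sunday).
    days = ["Monday", "Tuesday", "Wednesday", "Thursday",
            "Friday", "Saturday", "Sunday"]
    return days[day_int - 2]

print(day_in_week(2))  # Monday
print(day_in_week(8))  # Sunday
```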
## Exercise 5. Substring Search
A substring is a contiguous sequence of characters within a string.
**Input**: A string S and a string sub_s with length >= 1.
**Output**: The index of the first occurrence of sub_s in S. Return -1 if not found.
**Example**:
Example 1:
- Input: S = "abcdef", s = "bc"
- Output: result = 1
- Explanation: The substring "bc" is found at position S[1]
Example 2:
- Input: S = "abcdefabcdef", s = "bc"
- Output: result = 1
- Explanation: The substring "bc" is found at positions S[1] and S[7], but only the first position found is returned.
Example 3:
- Input: S = "ABCDEF", s = "bc"
- Output: result = -1
```
def find_substring(string, substring):
pass
# !!! DO NOT MODIFY THIS CELL
# Check result on test cases
test5(find_substring)
```
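One possible solution sketch (attempt the exercise first): slide a window of the substring's length across the string. The built-in `str.find()` has the same return convention.

```python
def find_substring(string, substring):
    # Compare every length-m slice of string against substring;
    # return the first matching start index, or -1 if none matches.
    n, m = len(string), len(substring)
    for i in range(n - m + 1):
        if string[i:i + m] == substring:
            return i
    return -1

print(find_substring("abcdef", "bc"))  # 1
print(find_substring("ABCDEF", "bc"))  # -1
```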
```
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/feature_store/gapic-feature-store.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/feature_store/gapic-feature-store.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
## Overview
This Colab introduces Feature Store, a managed cloud service for machine learning engineers and data scientists to store, serve, manage and share machine learning features at a large scale.
This Colab assumes that you understand basic Google Cloud concepts such as [Project](https://cloud.google.com/storage/docs/projects), [Storage](https://cloud.google.com/storage) and [Vertex AI](https://cloud.google.com/vertex-ai/docs). Some machine learning knowledge is also helpful but not required.
### Dataset
This Colab uses a movie recommendation dataset as an example throughout all the sessions. The task is to train a model to predict if a user is going to watch a movie and serve this model online.
### Objective
In this notebook, you will learn how to:
* How to import your features into Feature Store.
* How to serve online prediction requests using the imported features.
* How to access imported features in offline jobs, such as training jobs.
### Costs
This tutorial uses billable components of Google Cloud:
* Vertex AI
* Cloud Storage
* Cloud Bigtable
Learn about [Vertex AI
pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage
pricing](https://cloud.google.com/storage/pricing), and use the [Pricing
Calculator](https://cloud.google.com/products/calculator/)
to generate a cost estimate based on your projected usage.
### Set up your local development environment
**If you are using Colab or Google Cloud Notebooks**, your environment already meets
all the requirements to run this notebook. You can skip this step.
**Otherwise**, make sure your environment meets this notebook's requirements.
You need the following:
* The Google Cloud SDK
* Git
* Python 3
* virtualenv
* Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to [Setting up a Python development
environment](https://cloud.google.com/python/setup) and the [Jupyter
installation guide](https://jupyter.org/install) provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)
1. [Install Python 3.](https://cloud.google.com/python/setup#installing_python)
1. [Install
virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv)
and create a virtual environment that uses Python 3. Activate the virtual environment.
1. To install Jupyter, run `pip install jupyter` on the
command-line in a terminal shell.
1. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.
1. Open this notebook in the Jupyter Notebook Dashboard.
### Install additional packages
For this Colab, you need the Vertex SDK for Python.
```
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
! pip install {USER_FLAG} --upgrade git+https://github.com/googleapis/python-aiplatform.git@main-test
```
### Restart the kernel
After you install the SDK, you need to restart the notebook kernel so it can find the packages. You can restart the kernel from *Kernel -> Restart Kernel*, or by running the following:
```
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
```
## Before you begin
### Select a GPU runtime
**Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime > Change runtime type > GPU".**
### Set up your Google Cloud project
**The following steps are required, regardless of your notebook environment.**
1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).
1. [Enable the Vertex AI API and Compute Engine API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com,compute_component).
1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).
1. Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
#### Set your project ID
**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
```
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
```
Otherwise, set your project ID here.
```
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "python-docs-samples-tests" # @param {type:"string"}
```
### Authenticate your Google Cloud account
**If you are using Google Cloud Notebooks**, your environment is already
authenticated. Skip this step.
**If you are using Colab**, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
**Otherwise**, follow these steps:
1. In the Cloud Console, go to the [**Create service account key**
page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).
2. Click **Create service account**.
3. In the **Service account name** field, enter a name, and
click **Create**.
4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type "Vertex AI"
into the filter box, and select
**Vertex AI Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.
5. Click *Create*. A JSON file that contains your key downloads to your
local environment.
6. Enter the path to your service account key as the
`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
```
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
```
## Prepare for output
### Step 1. Create dataset for output
You need a BigQuery dataset in `us-central1` to host the output data. Enter the name of the dataset you want to create and the name of the table that will store the output; both are used later in the notebook.
**Make sure that the table name does NOT already exist**.
```
from datetime import datetime
from google.cloud import bigquery
# Output dataset
DESTINATION_DATA_SET = "movie_predictions" # @param {type:"string"}
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
DESTINATION_DATA_SET = "{prefix}_{timestamp}".format(
prefix=DESTINATION_DATA_SET, timestamp=TIMESTAMP
)
# Output table. Make sure that the table does NOT already exist; the BatchReadFeatureValues API cannot overwrite an existing table
DESTINATION_TABLE_NAME = "training_data" # @param {type:"string"}
DESTINATION_PATTERN = "bq://{project}.{dataset}.{table}"
DESTINATION_TABLE_URI = DESTINATION_PATTERN.format(
project=PROJECT_ID, dataset=DESTINATION_DATA_SET, table=DESTINATION_TABLE_NAME
)
# Create dataset
REGION = "us-central1" # @param {type:"string"}
client = bigquery.Client(project=PROJECT_ID)
dataset_id = "{}.{}".format(client.project, DESTINATION_DATA_SET)
dataset = bigquery.Dataset(dataset_id)
dataset.location = REGION
dataset = client.create_dataset(dataset)
print("Created dataset {}.{}".format(client.project, dataset.dataset_id))
```
### Import libraries and define constants
```
# In addition to the project ID and region, the featurestore ID and API endpoint need to be set
API_ENDPOINT = "us-central1-aiplatform.googleapis.com" # @param {type:"string"}
INPUT_CSV_FILE = "gs://cloud-samples-data-us-central1/vertex-ai/feature-store/datasets/movie_prediction.csv"
from google.cloud.aiplatform_v1beta1 import (
FeaturestoreOnlineServingServiceClient, FeaturestoreServiceClient)
from google.cloud.aiplatform_v1beta1.types import FeatureSelector, IdMatcher
from google.cloud.aiplatform_v1beta1.types import \
entity_type as entity_type_pb2
from google.cloud.aiplatform_v1beta1.types import feature as feature_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore as featurestore_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_monitoring as featurestore_monitoring_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_online_service as featurestore_online_service_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_service as featurestore_service_pb2
from google.cloud.aiplatform_v1beta1.types import io as io_pb2
from google.protobuf.duration_pb2 import Duration
# Create admin_client for CRUD and data_client for reading feature values.
admin_client = FeaturestoreServiceClient(client_options={"api_endpoint": API_ENDPOINT})
data_client = FeaturestoreOnlineServingServiceClient(
client_options={"api_endpoint": API_ENDPOINT}
)
# Represents featurestore resource path.
BASE_RESOURCE_PATH = admin_client.common_location_path(PROJECT_ID, REGION)
```
## Terminology and Concept
### Featurestore Data model
Feature Store organizes data with the following 3 important hierarchical concepts:
```
Featurestore -> EntityType -> Feature
```
* **Featurestore**: the place to store your features
* **EntityType**: under a Featurestore, an *EntityType* describes an object to be modeled, whether real or virtual.
* **Feature**: under an EntityType, a *feature* describes an attribute of the EntityType
In the movie prediction example, you will create a featurestore called *movie_prediction*. This store has 2 entity types: *Users* and *Movies*. The Users entity type has the age, gender, and liked genres features. The Movies entity type has the genres and average rating features.
## Create Featurestore and Define Schemas
### Create Featurestore
The method to create a featurestore returns a
[long-running operation](https://google.aip.dev/151) (LRO). An LRO starts an asynchronous job. LROs are returned for other API
methods too, such as updating or deleting a featurestore. Calling
`create_lro.result()` waits for the LRO to complete.
```
FEATURESTORE_ID = "movie_prediction"
create_lro = admin_client.create_featurestore(
featurestore_service_pb2.CreateFeaturestoreRequest(
parent=BASE_RESOURCE_PATH,
featurestore_id=FEATURESTORE_ID,
featurestore=featurestore_pb2.Featurestore(
online_serving_config=featurestore_pb2.Featurestore.OnlineServingConfig(
fixed_node_count=1
),
),
)
)
# Wait for LRO to finish and get the LRO result.
print(create_lro.result())
```
You can use [GetFeaturestore](https://cloud.google.com/vertex-ai/docs/reference/rpc/google.cloud.aiplatform.v1beta1#google.cloud.aiplatform.v1beta1.FeaturestoreService.GetFeaturestore) or [ListFeaturestores](https://cloud.google.com/vertex-ai/docs/reference/rpc/google.cloud.aiplatform.v1beta1#google.cloud.aiplatform.v1beta1.FeaturestoreService.ListFeaturestores) to check if the Featurestore was successfully created. The following example gets the details of the Featurestore.
```
admin_client.get_featurestore(
name=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID)
)
```
### Create Entity Type
You can specify a monitoring config which will by default be inherited by all Features under this EntityType.
```
# Create users entity type with monitoring enabled.
# All Features belonging to this EntityType will by default inherit the monitoring config.
users_entity_type_lro = admin_client.create_entity_type(
featurestore_service_pb2.CreateEntityTypeRequest(
parent=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
entity_type_id="users",
entity_type=entity_type_pb2.EntityType(
description="Users entity",
monitoring_config=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
monitoring_interval=Duration(seconds=86400), # 1 day
),
),
),
)
)
# Similarly, wait for EntityType creation operation.
print(users_entity_type_lro.result())
# Create movies entity type without a monitoring configuration.
movies_entity_type_lro = admin_client.create_entity_type(
featurestore_service_pb2.CreateEntityTypeRequest(
parent=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
entity_type_id="movies",
entity_type=entity_type_pb2.EntityType(description="Movies entity"),
)
)
# Similarly, wait for EntityType creation operation.
print(movies_entity_type_lro.result())
```
### Create Feature
You can also set a custom monitoring configuration at the Feature level, and view the properties and metrics in the console: sample [properties](https://storage.googleapis.com/cloud-samples-data/ai-platform-unified/datasets/featurestore/Feature%20Properties.png), sample [metrics](https://storage.googleapis.com/cloud-samples-data/ai-platform-unified/datasets/featurestore/Feature%20Snapshot%20Distribution.png).
```
# Create features for the 'users' entity.
# 'age' Feature leaves the monitoring config unset, which means it'll inherit the config from EntityType.
# 'gender' Feature explicitly disables monitoring.
# 'liked_genres' Feature is a STRING_ARRAY type, so it is automatically excluded from monitoring.
# For Features with monitoring enabled, distribution statistics are updated periodically in the console.
admin_client.batch_create_features(
parent=admin_client.entity_type_path(PROJECT_ID, REGION, FEATURESTORE_ID, "users"),
requests=[
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.INT64,
description="User age",
),
feature_id="age",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING,
description="User gender",
monitoring_config=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
disabled=True,
),
),
),
feature_id="gender",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING_ARRAY,
description="An array of genres that this user liked",
),
feature_id="liked_genres",
),
],
).result()
# Create features for movies type.
# 'title' Feature enables monitoring.
admin_client.batch_create_features(
parent=admin_client.entity_type_path(PROJECT_ID, REGION, FEATURESTORE_ID, "movies"),
requests=[
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING,
description="The title of the movie",
monitoring_config=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
monitoring_interval=Duration(seconds=172800), # 2 days
),
),
),
feature_id="title",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING,
description="The genres of the movie",
),
feature_id="genres",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.DOUBLE,
description="The average rating for the movie, range is [1.0-5.0]",
),
feature_id="average_rating",
),
],
).result()
```
## Search created features
While the [ListFeatures](https://cloud.google.com/vertex-ai/docs/reference/rpc/google.cloud.aiplatform.v1beta1#google.cloud.aiplatform.v1beta1.FeaturestoreService.ListFeatures) method allows you to easily view all features of a single
entity type, the [SearchFeatures](https://cloud.google.com/vertex-ai/docs/reference/rpc/google.cloud.aiplatform.v1beta1#google.cloud.aiplatform.v1beta1.FeaturestoreService.SearchFeatures) method searches across all featurestores
and entity types in a given location (such as `us-central1`). This can help you discover features that were created by someone else.
You can query based on feature properties including feature ID, entity type ID,
and feature description. You can also limit results by filtering on a specific
featurestore, feature value type, and/or labels.
```
# Search for all features across all featurestores.
list(admin_client.search_features(location=BASE_RESOURCE_PATH))
```
Now, narrow down the search to features of type `DOUBLE`.
```
# Search for all features with value type `DOUBLE`
list(
admin_client.search_features(
featurestore_service_pb2.SearchFeaturesRequest(
location=BASE_RESOURCE_PATH, query="value_type=DOUBLE"
)
)
)
```
Or, limit the search results to features with specific keywords in their ID and type.
```
# Filter on feature value type and keywords.
list(
admin_client.search_features(
featurestore_service_pb2.SearchFeaturesRequest(
location=BASE_RESOURCE_PATH, query="feature_id:title AND value_type=STRING"
)
)
)
```
## Import Feature Values
You need to import feature values before you can use them for online/offline serving. In this step, you will learn how to import feature values by calling the ImportFeatureValues API using the Python SDK.
### Source Data Format and Layout
As mentioned above, BigQuery table/Avro/CSV are supported. No matter what format you are using, each imported entity *must* have an ID; also, each entity can *optionally* have a timestamp, specifying when the feature values were generated. This Colab uses Avro as an input, located at this public [bucket](https://pantheon.corp.google.com/storage/browser/cloud-samples-data/ai-platform-unified/datasets/featurestore;tab=objects?project=storage-samples&prefix=&forceOnObjectsSortingFiltering=false). The Avro schemas are as follows:
**For the Users entity**:
```
schema = {
"type": "record",
"name": "User",
"fields": [
{
"name":"user_id",
"type":["null","string"]
},
{
"name":"age",
"type":["null","long"]
},
{
"name":"gender",
"type":["null","string"]
},
{
"name":"liked_genres",
"type":{"type":"array","items":"string"}
},
{
"name":"update_time",
"type":["null",{"type":"long","logicalType":"timestamp-micros"}]
},
]
}
```
**For the Movies entity**
```
schema = {
"type": "record",
"name": "Movie",
"fields": [
{
"name":"movie_id",
"type":["null","string"]
},
{
"name":"average_rating",
"type":["null","double"]
},
{
"name":"title",
"type":["null","string"]
},
{
"name":"genres",
"type":["null","string"]
},
{
"name":"update_time",
"type":["null",{"type":"long","logicalType":"timestamp-micros"}]
},
]
}
```
### Import feature values for Users
When importing, specify the following in your request:
* Data source format: BigQuery Table/Avro/CSV
* Data source URL
* Destination: featurestore/entity types/features to be imported
```
import_users_request = featurestore_service_pb2.ImportFeatureValuesRequest(
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "users"
),
avro_source=io_pb2.AvroSource(
# Source
gcs_source=io_pb2.GcsSource(
uris=[
"gs://cloud-samples-data-us-central1/vertex-ai/feature-store/datasets/users.avro"
]
)
),
entity_id_field="user_id",
feature_specs=[
# Features
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="age"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="gender"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(
id="liked_genres"
),
],
feature_time_field="update_time",
worker_count=10,
)
# Start to import, will take a couple of minutes
ingestion_lro = admin_client.import_feature_values(import_users_request)
# Polls for the LRO status and prints when the LRO has completed
ingestion_lro.result()
```
### Import feature values for Movies
Similarly, import feature values for 'movies' into the featurestore.
```
import_movie_request = featurestore_service_pb2.ImportFeatureValuesRequest(
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "movies"
),
avro_source=io_pb2.AvroSource(
gcs_source=io_pb2.GcsSource(
uris=[
"gs://cloud-samples-data-us-central1/vertex-ai/feature-store/datasets/movies.avro"
]
)
),
entity_id_field="movie_id",
feature_specs=[
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="title"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="genres"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(
id="average_rating"
),
],
feature_time_field="update_time",
worker_count=10,
)
# Start to import, will take a couple of minutes
ingestion_lro = admin_client.import_feature_values(import_movie_request)
# Polls for the LRO status and prints when the LRO has completed
ingestion_lro.result()
```
## Online serving
The
[Online Serving APIs](https://cloud.google.com/vertex-ai/docs/reference/rpc/google.cloud.aiplatform.v1beta1#featurestoreonlineservingservice)
let you serve feature values for small batches of entities. They are designed for latency-sensitive serving, such as online model prediction. For example, for a movie service, you might want to quickly show movies that the current user is most likely to watch by using online predictions.
### Read one entity per request
The ReadFeatureValues API is used to read feature values of one entity; hence
its custom HTTP verb is `readFeatureValues`. By default, the API will return the latest value of each feature, meaning the feature values with the most recent timestamp.
To read feature values, specify the entity ID and features to read. The response
contains a `header` and an `entity_view`. Each row of data in the `entity_view`
contains one feature value, in the same order of features as listed in the response header.
```
# Fetch the following 3 features.
feature_selector = FeatureSelector(
id_matcher=IdMatcher(ids=["age", "gender", "liked_genres"])
)
data_client.read_feature_values(
featurestore_online_service_pb2.ReadFeatureValuesRequest(
# Fetch from the following feature store/entity type
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "users"
),
# Fetch the user features whose ID is "alice"
entity_id="alice",
feature_selector=feature_selector,
)
)
```
### Read multiple entities per request
To read feature values from multiple entities, use the
StreamingReadFeatureValues API, which is almost identical to the previous
ReadFeatureValues API. Note that fetching only a small number of entities is recommended when using this API due to its latency-sensitive nature.
```
# Read the same set of features as above, but for multiple entities.
response_stream = data_client.streaming_read_feature_values(
featurestore_online_service_pb2.StreamingReadFeatureValuesRequest(
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "users"
),
entity_ids=["alice", "bob"],
feature_selector=feature_selector,
)
)
# Iterate and process response. Note the first one is always the header only.
for response in response_stream:
print(response)
```
Now that you have learned how to fetch imported feature values for online serving, the next step is learning how to use imported feature values for offline use cases.
## Batch Serving
Batch serving is used to fetch a large batch of feature values with high throughput, typically to feed model training or batch prediction. In this section, you will learn how to prepare training examples by calling the BatchReadFeatureValues API.
### Use case
**The task** is to prepare a training dataset to train a model, which predicts if a given user will watch a given movie. To achieve this, you need 2 sets of input:
* Features: you already imported into the featurestore.
* Labels: the ground-truth data recording that user X watched movie Y.
To be more specific, the ground-truth observation is described in Table 1 and the desired training dataset is described in Table 2. Each row in Table 2 is a result of joining the imported feature values from Feature Store according to the entity IDs and timestamps in Table 1. In this example, the `age`, `gender` and `liked_genres` features from `users` and
the `genres` and `average_rating` features from `movies` are chosen to train the model. Note that only positive examples are shown in these 2 tables, i.e., you can imagine there is a label column whose values are all `True`.
BatchReadFeatureValues API takes Table 1 as
input, joins all required feature values from the featurestore, and returns Table 2 for training.
<h4 align="center">Table 1. Ground-truth Data</h4>
users | movies | timestamp
----- | -------- | --------------------
alice | Cinema Paradiso | 2019-11-01T00:00:00Z
bob | The Shining | 2019-11-15T18:09:43Z
... | ... | ...
<h4 align="center">Table 2. Expected Training Data Generated by Batch Read API (Positive Samples)</h4>
timestamp | entity_type_users | age | gender | liked_genres | entity_type_movies | genres | average_rating
-------------------- | ----------------- | --------------- | ---------------- | -------------------- | -------- | --------- | -----
2019-11-15T18:09:43Z | bob | 35 | M | [Action, Crime] | The Shining | Horror | 4.8
2019-11-01T00:00:00Z | alice | 55 | F | [Drama, Comedy] | Cinema Paradiso | Romance | 4.5
... | ... | ... | ... | ... | ... | ... | ...
#### Why timestamp?
Note that there is a `timestamp` column in Table 2. This indicates the time when the ground-truth was observed. This is to avoid data inconsistency.
For example, the `alice` row of Table 2 indicates that user `alice` watched movie `Cinema Paradiso` on `2019-11-01T00:00:00Z`. The featurestore keeps feature values for all timestamps but fetches feature values *only* at the given timestamp during batch serving. On 2019-11-01 alice might have been 55 years old, but today alice might be 56; the featurestore returns `age=55` as alice's age, instead of `age=56`, because that was the value of the feature at the observation time. Similarly, other features, such as `liked_genres`, might be time-variant as well.
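Conceptually, batch serving performs a point-in-time lookup: for each observation, it picks the latest feature value at or before the observation timestamp. A minimal pure-Python sketch of that idea (hypothetical data and function, not the Feature Store implementation):

```python
# Hypothetical feature history for one user: (snapshot_time, age), oldest first.
age_history = [("2019-01-01", 55), ("2021-01-01", 56)]

def value_at(history, observation_time):
    """Return the latest feature value at or before observation_time."""
    result = None
    for snapshot_time, value in history:
        # ISO-8601 date strings compare correctly as plain strings.
        if snapshot_time <= observation_time:
            result = value
    return result

# For a label observed on 2019-11-01, the 2019 snapshot is returned,
# even though a newer value exists.
print(value_at(age_history, "2019-11-01"))  # 55
```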
### Batch Read Feature Values
Assemble the request, which specifies the following:
* Where is the label data, i.e., Table 1.
* Which features are read, i.e., the column names in Table 2.
The output is stored in a BigQuery table.
```
batch_serving_request = featurestore_service_pb2.BatchReadFeatureValuesRequest(
# featurestore info
featurestore=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
# URL for the label data, i.e., Table 1.
csv_read_instances=io_pb2.CsvSource(
gcs_source=io_pb2.GcsSource(uris=[INPUT_CSV_FILE])
),
destination=featurestore_service_pb2.FeatureValueDestination(
bigquery_destination=io_pb2.BigQueryDestination(
# Output to BigQuery table created earlier
output_uri=DESTINATION_TABLE_URI
)
),
entity_type_specs=[
featurestore_service_pb2.BatchReadFeatureValuesRequest.EntityTypeSpec(
# Read the 'age', 'gender' and 'liked_genres' features from the 'users' entity
entity_type_id="users",
feature_selector=FeatureSelector(
id_matcher=IdMatcher(
ids=[
# features, use "*" if you want to select all features within this entity type
"age",
"gender",
"liked_genres",
]
)
),
),
featurestore_service_pb2.BatchReadFeatureValuesRequest.EntityTypeSpec(
# Read the 'average_rating' and 'genres' feature values of the 'movies' entity
entity_type_id="movies",
feature_selector=FeatureSelector(
id_matcher=IdMatcher(ids=["average_rating", "genres"])
),
),
],
)
# Execute the batch read
batch_serving_lro = admin_client.batch_read_feature_values(batch_serving_request)
# This long-running operation will poll until the batch read finishes.
batch_serving_lro.result()
```
After the LRO finishes, you should be able to see the result from the [BigQuery console](https://console.cloud.google.com/bigquery), in the dataset created earlier.
## Cleaning up
To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud
project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
You can also keep the project but delete the featurestore:
```
admin_client.delete_featurestore(
    request=featurestore_service_pb2.DeleteFeaturestoreRequest(
        name=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
        force=True,
    )
).result()
client.delete_dataset(
    DESTINATION_DATA_SET, delete_contents=True, not_found_ok=True
)  # Make an API request.
print("Deleted dataset '{}'.".format(DESTINATION_DATA_SET))
```
| github_jupyter |
# Recap
We started by learning about permutation importance and partial dependence plots for an overview of what the model has learned.
We then learned about SHAP values to break down the components of individual predictions.
Now we'll expand on SHAP values, seeing how aggregating many SHAP values can give more detailed alternatives to permutation importance and partial dependence plots.
# SHAP Values Review
SHAP values show how much a given feature changed our prediction (compared to the prediction we would have made at some baseline value of that feature).
For example, consider an ultra-simple model:
$$y = 4x_1 + 2x_2$$
If $x_1$ takes the value 2, instead of a baseline value of 0, then our SHAP value for $x_1$ would be 8 (4 times 2).
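For a linear model like this one, the SHAP value of each feature can be computed directly as the coefficient times the feature's deviation from its baseline. A minimal plain-Python sketch, using the coefficients and baseline from the toy equation above:

```python
# SHAP values for the toy linear model y = 4*x1 + 2*x2.
# For a linear model, each feature's SHAP value is simply
# coefficient * (feature value - baseline value).
def linear_shap(coefs, x, baseline):
    return [c * (xi - bi) for c, xi, bi in zip(coefs, x, baseline)]

coefs = [4, 2]
baseline = [0, 0]
x = [2, 3]

shap_vals = linear_shap(coefs, x, baseline)
print(shap_vals)  # [8, 6]

# The prediction decomposes as the baseline prediction plus the sum of SHAP values:
assert sum(shap_vals) + (4 * 0 + 2 * 0) == 4 * 2 + 2 * 3
```

With sophisticated models this decomposition still holds, but computing the values requires the algorithmic cleverness mentioned below.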
These are harder to calculate with the sophisticated models we use in practice. But through some algorithmic cleverness, SHAP values allow us to decompose any prediction into the sum of effects of each feature value, yielding a graph like this:

[Link to larger view](https://i.imgur.com/JVD2U7k.png)
In addition to this nice breakdown for each prediction, the [SHAP library](https://github.com/slundberg/shap) offers great visualizations of groups of SHAP values. We will focus on two of these visualizations. They are conceptually similar to permutation importance and partial dependence plots, so multiple threads from the previous exercises will come together here.
# Summary Plots
[Permutation importance](https://www.kaggle.com/dansbecker/permutation-importance) is great because it creates simple numeric measures to see which features matter to a model. This makes it easy to compare features, and you can present the resulting graphs to non-technical audiences.
But it doesn't tell you *how* each feature matters. If a feature has medium permutation importance, that could mean it has
- a large effect for a few predictions, but no effect in general, or
- a medium effect for all predictions.
SHAP summary plots give us a bird's-eye view of feature importance and what is driving it. We'll walk through an example plot for the soccer data:

This plot is made of many dots. Each dot has three characteristics:
- Vertical location shows what feature it is depicting
- Color shows whether that feature was high or low for that row of the dataset
- Horizontal location shows whether the effect of that value caused a higher or lower prediction.
For example, the point in the upper left was for a team that scored few goals, reducing the prediction by 0.25.
Some things you should be able to easily pick out:
- The model ignored the `Red` and `Yellow & Red` features.
- Usually `Yellow Card` doesn't affect the prediction, but there is an extreme case where a high value caused a much lower prediction.
- High values of `Goal Scored` caused higher predictions, and low values caused lower predictions.
If you look for long enough, there's a lot of information in this graph. You'll face some questions in the exercise that test how well you can read it.
# Summary Plots in Code
You have already seen the code to load the soccer/football data:
```
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
data = pd.read_csv('../input/fifa-2018-match-statistics/FIFA 2018 Statistics.csv')
y = (data['Man of the Match'] == "Yes") # Convert from string "Yes"/"No" to binary
feature_names = [i for i in data.columns if data[i].dtype in [np.int64]]
X = data[feature_names]
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)
my_model = RandomForestClassifier(random_state=0).fit(train_X, train_y)
```
We get the SHAP values for all validation data with the following code. It is short enough that we explain it in the comments.
```
import shap # package used to calculate Shap values
# Create object that can calculate shap values
explainer = shap.TreeExplainer(my_model)
# calculate shap values. This is what we will plot.
# Calculate shap_values for all of val_X rather than a single row, to have more data for plot.
shap_values = explainer.shap_values(val_X)
# Make plot. Index of [1] is explained in text below.
shap.summary_plot(shap_values[1], val_X)
```
The code isn't too complex. But there are a few caveats.
- When plotting, we call `shap_values[1]`. For classification problems, there is a separate array of SHAP values for each possible outcome. In this case, we index in to get the SHAP values for the prediction of "True".
- Calculating SHAP values can be slow. It isn't a problem here, because this dataset is small. But you'll want to be careful when running these to plot with reasonably sized datasets. The exception is when using an `xgboost` model, which SHAP has some optimizations for and which is thus much faster.
This provides a great overview of the model, but we might want to delve into a single feature. That's where SHAP dependence contribution plots come into play.
# SHAP Dependence Contribution Plots
We've previously used Partial Dependence Plots to show how a single feature impacts predictions. These are insightful and relevant for many real-world use cases. Plus, with a little effort, they can be explained to a non-technical audience.
But there's a lot they don't show. For instance, what is the distribution of effects? Is the effect of having a certain value pretty constant, or does it vary a lot depending on the values of other features? SHAP dependence contribution plots provide insight similar to PDPs, but with much more detail.

Start by focusing on the shape, and we'll come back to color in a minute. Each dot represents a row of the data. The horizontal location is the actual value from the dataset, and the vertical location shows what having that value did to the prediction. The fact this slopes upward says that the more you possess the ball, the higher the model's prediction is for winning the *Man of the Match* award.
The spread suggests that other features must interact with Ball Possession %. For example, here we have highlighted two points with similar ball possession values. That value caused one prediction to increase, and it caused the other prediction to decrease.

For comparison, a simple linear regression would produce plots that are perfect lines, without this spread.
This suggests we delve into the interactions, and the plots include color coding to help do that. While the primary trend is upward, you can visually inspect whether that varies by dot color.
Consider the following very narrow example for concreteness.

These two points stand out spatially as being far away from the upward trend. They are both colored purple, indicating the team scored one goal. You can interpret this to say **In general, having the ball increases a team's chance of having their player win the award. But if they only score one goal, that trend reverses and the award judges may penalize them for having the ball so much if they score that little.**
Outside of those few outliers, the interaction indicated by color isn't very dramatic here. But sometimes it will jump out at you.
# Dependence Contribution Plots in Code
We get the dependence contribution plot with the following code. The only line that's different from the `summary_plot` is the last line.
```
import shap # package used to calculate Shap values
# Create object that can calculate shap values
explainer = shap.TreeExplainer(my_model)
# calculate shap values. This is what we will plot.
shap_values = explainer.shap_values(X)
# make plot.
shap.dependence_plot('Ball Possession %', shap_values[1], X, interaction_index="Goal Scored")
```
If you don't supply an argument for `interaction_index`, the SHAP library uses some logic to pick one that may be interesting.
This didn't require writing a lot of code. But the trick with these techniques is in thinking critically about the results, rather than in writing the code itself.
# Your Turn
**[Test yourself](#$NEXT_NOTEBOOK_URL$)** with some questions to develop your skill with these techniques.
### Coupang Web Scraping II - Fetching Multiple Pages
Shopping for a data-analysis laptop: lots of reviews, a high rating, and no sponsored listings, please...
```
# Import library
from IPython.display import Image
# Load image from local storage
Image(filename = "쿠팡.png", width = 600, height = 300)

import numpy as np
import re
import requests
from bs4 import BeautifulSoup

headers = {"User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.135 Safari/537.36"}

# for i in (1,2,3,4,5):  # building the page URL with {} and format
for i in range(1, 6):
    url = "https://www.coupang.com/np/search?q=%EB%85%B8%ED%8A%B8%EB%B6%81&channel=user&component=&eventCategory=SRP&trcid=&traid=&sorter=scoreDesc&minPrice=&maxPrice=&priceRange=&filterType=&listSize=36&filter=&isPriceRange=false&brand=&offerCondition=&rating=0&page={}&rocketAll=false&searchIndexingToken=1=4&backgroundColor=".format(i)
    print(url)

for i in range(1, 6):
    url = "https://www.coupang.com/np/search?q=%EB%85%B8%ED%8A%B8%EB%B6%81&channel=user&component=&eventCategory=SRP&trcid=&traid=&sorter=scoreDesc&minPrice=&maxPrice=&priceRange=&filterType=&listSize=36&filter=&isPriceRange=false&brand=&offerCondition=&rating=0&page={}&rocketAll=false&searchIndexingToken=1=4&backgroundColor=".format(i)
    res = requests.get(url, headers=headers)
    res.raise_for_status() # stop the program if the request failed
    soup = BeautifulSoup(res.text, "lxml")
    items = soup.find_all("li", attrs = {"class":re.compile("^search-product")})
    for item in items:
        ad_badge = item.find("span", attrs={"class":"ad-badge-text"}) # sponsored (ad) item
        if ad_badge:
            continue
        rate = item.find("em", attrs={"class":"rating"}) # rating
        if rate:
            rate = rate.get_text()
        else:
            # rate = "no rating"
            continue
        rate_cnt = item.find("span", attrs={"class":"rating-total-count"}) # number of reviews
        if rate_cnt:
            rate_cnt = rate_cnt.get_text() # e.g. "(26)"
            rate_cnt = rate_cnt[1:-1]
        else:
            # rate_cnt = "no review count"
            continue
        name = item.find("div", attrs={"class":"name"}).get_text() # product name
        if "Apple" in name:
            continue
        price = item.find("strong", attrs={"class":"price-value"}).get_text() # price
        link = item.find("a", attrs={"class":"search-product-link"})["href"] # link
        if float(rate) > 4.5 and int(rate_cnt) > 50: # rating above 4.5 and more than 50 reviews
            print(name, price, rate, rate_cnt)
            print(f"Product : {name}")
            print(f"Price : {price}")
            print(f"Rating : {rate} ({rate_cnt} reviews)")
            print("Link : {}".format("https://www.coupang.com" + link))
            print("-"*100) # separator line
```
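The filtering logic above can be isolated and tested without hitting the site. A small sketch of the `rate_cnt[1:-1]` parsing and the rating/review-count filter (the sample values here are made up):

```python
def passes_filter(rate_str, rate_cnt_str, min_rate=4.5, min_count=50):
    """Parse Coupang-style rating strings and apply the filter used above."""
    rate = float(rate_str)              # e.g. "4.7"
    rate_cnt = int(rate_cnt_str[1:-1])  # e.g. "(26)" -> 26, stripping the parentheses
    return rate > min_rate and rate_cnt > min_count

print(passes_filter("4.7", "(120)"))  # True
print(passes_filter("4.7", "(26)"))   # False: too few reviews
print(passes_filter("4.2", "(300)"))  # False: rating too low
```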
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Working with structured data
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/structured_data/feature_columns">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/structured_data/feature_columns.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/structured_data/feature_columns.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
</table>
Note: This document was translated by the TensorFlow community. Because community translations are best-effort, there is no guarantee that they exactly match the [official English documentation](https://www.tensorflow.org/?hl=en) or reflect its latest content.
If you have improvements for this translation, please send a pull request to the [tensorflow/docs-l10n](https://github.com/tensorflow/docs-l10n/) GitHub repository.
To volunteer to write or review translations, email [docs-ko@tensorflow.org](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ko).
This tutorial demonstrates how to work with structured data (for example, tabular data read from a CSV file). We will use [Keras](https://www.tensorflow.org/guide/keras) to define the model, and [feature columns](https://www.tensorflow.org/guide/feature_columns) to map the CSV columns to the features used to train the model. This tutorial covers how to:
* Load a CSV file using [pandas](https://pandas.pydata.org/)
* Build an input pipeline with [tf.data](https://www.tensorflow.org/guide/datasets) to shuffle the rows and split them into batches
* Map the CSV columns to model-training features using feature_column
* Build, train, and evaluate a model using Keras
## The dataset
We will use a small [dataset](https://archive.ics.uci.edu/ml/datasets/heart+Disease) provided by the Cleveland Clinic Foundation for Heart Disease. The CSV file has a few hundred rows. Each row describes one patient, and each column is an attribute of that patient. We will use this information to predict whether a patient has heart disease, which makes this dataset a binary classification task.
Following is a [description](https://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/heart-disease.names) of this dataset. Notice that there are both numeric and categorical columns.
>Column | Description | Feature Type | Data Type
>------------|--------------------|----------------------|-----------------
>Age | Age in years | Numerical | integer
>Sex | (1 = male; 0 = female) | Categorical | integer
>CP | Chest pain type (0, 1, 2, 3, 4) | Categorical | integer
>Trestbpd | Resting blood pressure (mm Hg on admission to the hospital) | Numerical | integer
>Chol | Serum cholesterol (mg/dl) | Numerical | integer
>FBS | (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false) | Categorical | integer
>RestECG | Resting electrocardiographic results (0, 1, 2) | Categorical | integer
>Thalach | Maximum heart rate achieved | Numerical | integer
>Exang | Exercise-induced angina (1 = yes; 0 = no) | Categorical | integer
>Oldpeak | ST depression induced by exercise relative to rest | Numerical | integer
>Slope | Slope of the peak exercise ST segment | Numerical | float
>CA | Number of major vessels colored by fluoroscopy (0-3) | Numerical | integer
>Thal | 3 = normal; 6 = fixed defect; 7 = reversible defect | Categorical | string
>Target | Diagnosis of heart disease (1 = true; 0 = false) | Classification | integer
## Import TensorFlow and other libraries
```
!pip install sklearn
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import pandas as pd
try:
    # %tensorflow_version only exists in Colab.
    !pip install tf-nightly-2.0-preview
except Exception:
    pass
import tensorflow as tf
from tensorflow import feature_column
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
```
## Create a dataframe with pandas
[Pandas](https://pandas.pydata.org/) is a Python library with many helpful utilities for loading and working with structured data. We will use pandas to download the dataset from a URL and load it into a dataframe.
```
URL = 'https://storage.googleapis.com/applied-dl/heart.csv'
dataframe = pd.read_csv(URL)
dataframe.head()
```
## Split the dataframe into train, validation, and test sets
The dataset we downloaded was a single CSV file. We will split this into train, validation, and test sets.
```
train, test = train_test_split(dataframe, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
```
## Create an input pipeline using tf.data
Next, we will wrap the dataframes with [tf.data](https://www.tensorflow.org/guide/datasets). This enables us to use feature columns as a bridge from the columns in the pandas dataframe to the features used to train the model. If we were working with a very large CSV file (so large that it does not fit into memory), we would use tf.data to read it from disk directly. That is not covered in this tutorial.
```
# A utility function to create a tf.data dataset from a pandas dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
    dataframe = dataframe.copy()
    labels = dataframe.pop('target')
    ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
    if shuffle:
        ds = ds.shuffle(buffer_size=len(dataframe))
    ds = ds.batch(batch_size)
    return ds
batch_size = 5 # a small batch size is used for demonstration purposes
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
```
## Understand the input pipeline
Now that we have created the input pipeline, let's call it to see the format of the data it returns. We have used a small batch size to keep the output readable.
```
for feature_batch, label_batch in train_ds.take(1):
    print('Every feature:', list(feature_batch.keys()))
    print('A batch of ages:', feature_batch['age'])
    print('A batch of targets:', label_batch)
```
We can see that the dataset returns a dictionary of column names (from the dataframe) that maps to column values from rows in the dataframe.
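To build intuition for what the shuffle-and-batch step does, it can be mimicked without TensorFlow. A rough plain-Python analogy (the real `tf.data` pipeline shuffles with a buffer and streams batches lazily, so this is only a sketch):

```python
import random

def to_batches(rows, batch_size, shuffle=True, seed=0):
    """Roughly what ds.shuffle(...).batch(batch_size) does to a list of rows."""
    rows = list(rows)
    if shuffle:
        random.Random(seed).shuffle(rows)
    # split the shuffled rows into consecutive fixed-size chunks
    return [rows[i:i + batch_size] for i in range(0, len(rows), batch_size)]

batches = to_batches(range(10), batch_size=4)
print([len(b) for b in batches])  # [4, 4, 2] -- the last batch may be smaller
```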
## Demonstrate several types of feature columns
TensorFlow provides many types of feature columns. In this section, we will create several types of feature columns and demonstrate how they transform a column from the dataframe.
```
# We will use this batch to demonstrate several types of feature columns
example_batch = next(iter(train_ds))[0]

# A utility method to create a feature column and transform a batch of data
def demo(feature_column):
    feature_layer = layers.DenseFeatures(feature_column)
    print(feature_layer(example_batch).numpy())
```
### Numeric columns
The output of a feature column becomes the input to the model (using the demo function defined above, we will be able to see exactly how each column from the dataframe is transformed). A [numeric column](https://www.tensorflow.org/api_docs/python/tf/feature_column/numeric_column) is the simplest type of column. It is used to represent real-valued features. When using this column, your model will receive the column value from the dataframe unchanged.
```
age = feature_column.numeric_column("age")
demo(age)
```
In the heart disease dataset, most columns from the dataframe are numeric.
### Bucketized columns
Often, you don't want to feed a number directly into the model, but instead split its value into ranges and treat the range as a categorical feature. Consider raw data that represents a person's age. Instead of representing age as a numeric column, we could split the age into several buckets using a [bucketized column](https://www.tensorflow.org/api_docs/python/tf/feature_column/bucketized_column). Notice that the one-hot encoded values below describe which age range each row matches.
```
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
demo(age_buckets)
```
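What a bucketized column does can be sketched in plain Python: find which bucket a value falls into given the boundaries, then one-hot encode that bucket index. This hypothetical stand-in uses the same boundaries as the code above; note that n boundaries produce n+1 buckets:

```python
import bisect

def bucketize_one_hot(value, boundaries):
    """Map a numeric value to a one-hot vector over len(boundaries)+1 buckets."""
    # bisect_right puts a value equal to a boundary into the bucket on its right,
    # matching the [boundary, next_boundary) convention.
    idx = bisect.bisect_right(boundaries, value)
    one_hot = [0] * (len(boundaries) + 1)
    one_hot[idx] = 1
    return one_hot

boundaries = [18, 25, 30, 35, 40, 45, 50, 55, 60, 65]
print(bucketize_one_hot(17, boundaries))  # first slot: age below 18
print(bucketize_one_hot(42, boundaries))  # slot for the [40, 45) range
```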
### Categorical columns
In this dataset, thal is represented as a string (e.g. 'fixed', 'normal', or 'reversible'). We cannot feed strings directly to a model. Instead, we must first map them to numeric values. Categorical columns provide a way to represent strings as one-hot vectors. The list of strings can be passed directly using [categorical_column_with_vocabulary_list](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list), or loaded from a file using [categorical_column_with_vocabulary_file](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_file).
```
thal = feature_column.categorical_column_with_vocabulary_list(
    'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = feature_column.indicator_column(thal)
demo(thal_one_hot)
```
In a more complex dataset, many columns would be categorical (e.g. strings). Feature columns are most valuable when working with categorical data. Although there is only one categorical column in this dataset, we will use it to demonstrate several other types of feature columns you could use with other datasets.
### Embedding columns
Suppose that, instead of having just a few possible strings, we had thousands (or more) values per category. For a number of reasons, as the number of categories grows large, it becomes infeasible to train a neural network using one-hot encodings. Embedding columns overcome this limitation. Instead of representing the data as a high-dimensional one-hot vector, an [embedding column](https://www.tensorflow.org/api_docs/python/tf/feature_column/embedding_column) represents it as a lower-dimensional dense vector, in which each cell can contain any number, not just 0 or 1. The size of the embedding (8 in the example below) is a parameter that must be tuned.
Key point: using an embedding column is best when a categorical column has many possible values. We are using one here for demonstration purposes, so you have a complete example you can adapt to a different dataset in the future.
```
# Notice that the input to the embedding column is the categorical column we created earlier
thal_embedding = feature_column.embedding_column(thal, dimension=8)
demo(thal_embedding)
```
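Under the hood, an embedding column is essentially a lookup into a matrix with one row per category. A hypothetical NumPy sketch (the 8-dimensional size matches the example above; the matrix here is random, whereas in the real layer it is learned during training):

```python
import numpy as np

vocab = ['fixed', 'normal', 'reversible']
embedding_dim = 8

# One row per vocabulary entry; in a real model these values are trainable.
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(len(vocab), embedding_dim))

def embed(category):
    """Look up the dense vector for a category (a row of the embedding matrix)."""
    return embedding_matrix[vocab.index(category)]

print(embed('normal').shape)  # (8,)
```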
### Hashed feature columns
Another way to represent a categorical column with a large number of values is to use [categorical_column_with_hash_bucket](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_hash_bucket). This feature column calculates a hash value of the input, then selects one of `hash_bucket_size` buckets to encode the string. When using this column, you do not need to provide the vocabulary, and you can choose to make the number of hash buckets significantly smaller than the number of actual categories to save space.
Key point: an important downside of this technique is that different strings may be mapped to the same bucket. In practice, this can still work well for some datasets.
```
thal_hashed = feature_column.categorical_column_with_hash_bucket(
    'thal', hash_bucket_size=1000)
demo(feature_column.indicator_column(thal_hashed))
```
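The hashing trick itself can be sketched in a few lines. A deterministic hash is used here (Python's built-in `hash` is salted per process for strings, and TensorFlow's internal hash function differs); this only illustrates the idea and the collision trade-off:

```python
import hashlib

def hash_bucket(value, hash_bucket_size=1000):
    """Map a string to one of hash_bucket_size buckets via a stable hash."""
    digest = hashlib.md5(value.encode('utf-8')).hexdigest()
    return int(digest, 16) % hash_bucket_size

# No vocabulary is needed: any string maps to some bucket in [0, 1000).
print(hash_bucket('fixed'), hash_bucket('normal'), hash_bucket('reversible'))
# Different strings *can* land in the same bucket -- that is the trade-off.
```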
### Crossed feature columns
Combining features into a single feature, better known as a [feature cross](https://developers.google.com/machine-learning/glossary/#feature_cross), enables a model to learn separate weights for each combination of features. Here, we will create a new feature that is the cross of age and thal. Note that `crossed_column` does not build the full table of all possible combinations; it is backed by a `hashed_column`, so you can choose how large the table is via `hash_bucket_size`.
```
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
demo(feature_column.indicator_column(crossed_feature))
```
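A feature cross can be sketched as concatenating the two categorical values and hashing the pair into a fixed number of buckets. This is a simplified stand-in for `crossed_column`, reusing the stable-hash idea (the key format is an arbitrary choice for illustration):

```python
import hashlib

def crossed_bucket(age_bucket, thal, hash_bucket_size=1000):
    """Hash the (age_bucket, thal) pair into one of hash_bucket_size buckets."""
    key = f'{age_bucket}_x_{thal}'.encode('utf-8')
    return int(hashlib.md5(key).hexdigest(), 16) % hash_bucket_size

# Each combination gets its own bucket, so the model can learn
# a separate weight per (age range, thal value) pair.
print(crossed_bucket(3, 'normal'))
print(crossed_bucket(3, 'reversible'))  # usually a different bucket
```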
## Choose which columns to use
We have seen how to use several types of feature columns. Now we will use them to train a model. The goal of this tutorial is to show you the complete code (e.g. the mechanics) needed to work with feature columns, so a few columns have been selected arbitrarily to train the model.
Key point: if your aim is to build an accurate model, use a larger dataset, and think carefully about which features are the most meaningful to include and how they should be represented.
```
feature_columns = []

# numeric columns
for header in ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'slope', 'ca']:
    feature_columns.append(feature_column.numeric_column(header))

# bucketized columns
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
feature_columns.append(age_buckets)

# categorical (indicator) columns
thal = feature_column.categorical_column_with_vocabulary_list(
    'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = feature_column.indicator_column(thal)
feature_columns.append(thal_one_hot)

# embedding columns
thal_embedding = feature_column.embedding_column(thal, dimension=8)
feature_columns.append(thal_embedding)

# crossed columns
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
crossed_feature = feature_column.indicator_column(crossed_feature)
feature_columns.append(crossed_feature)
```
### Create a feature layer
Now that we have defined our feature columns, we will use a [DenseFeatures](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/DenseFeatures) layer to feed them into our Keras model.
```
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
```
Earlier, we used a small batch size to demonstrate how feature columns work. Here we create a new input pipeline with a somewhat larger batch size.
```
batch_size = 32
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
```
## Create, compile, and train the model
```
model = tf.keras.Sequential([
    feature_layer,
    layers.Dense(128, activation='relu'),
    layers.Dense(128, activation='relu'),
    layers.Dense(1, activation='sigmoid')
])

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

model.fit(train_ds,
          validation_data=val_ds,
          epochs=5)

loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
```
Key point: you will typically see the best results from deep learning with much larger and more complex datasets. When working with a small dataset like this one, we recommend using a decision tree or random forest as a strong baseline. The goal of this tutorial is not to train an accurate model, but to demonstrate the mechanics of working with structured data, so you have code to use as a starting point with your own datasets.
## Next steps
The best way to learn more about classifying structured data is to try it yourself. Find another dataset to experiment with, and train a model to classify it using code similar to the above. To improve accuracy, think carefully about which features to include in your model and how they should be represented.
# A Quick Python Tutorial for Mathematicians
© Ricardo Miranda Martins, 2022 - http://www.ime.unicamp.br/~rmiranda/
## Contents
1. [Introduction](1-intro.html)
2. [Python is a good calculator!](2-calculadora.html) [(source code)](2-calculadora.ipynb)
3. [Solving equations](3-resolvendo-eqs.html) [(source code)](3-resolvendo-eqs.ipynb)
4. **[Graphs](4-graficos.html)** [(source code)](4-graficos.ipynb)
5. [Linear systems and matrices](5-lineares-e-matrizes.html) [(source code)](5-lineares-e-matrizes.ipynb)
6. [Limits, derivatives, and integrals](6-limites-derivadas-integrais.html) [(source code)](6-limites-derivadas-integrais.ipynb)
7. [Differential equations](7-equacoes-diferenciais.html) [(source code)](7-equacoes-diferenciais.ipynb)
# Graphs
Plotting in Python is synonymous with ```matplotlib```. This package is fantastic and very powerful, and it produces really nice graphs when used well. The [Matplotlib site](https://matplotlib.org/) has many nice examples; here we will cover only the basics.
We will also use NumPy, the numerical "cousin" of SymPy. For a NumPy tutorial, see the documentation on [their site](https://numpy.org/doc/stable/index.html).
The simplest computer-drawn graphs are just collections of well-placed points. Remember when you tried to sketch the graph of $y=x^2$, marked 2 points, connected them with a ruler, and got a zero on the question? Well, if you had marked 5000 well-placed points, your teacher might have given you some credit. That is exactly where NumPy comes in: it builds these "meshes" of points.
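The discretization that `np.linspace` performs can be written out by hand. A hand-rolled, plain-Python stand-in for it, generating evenly spaced points with both endpoints included:

```python
def linspace(start, stop, num):
    """Evenly spaced points from start to stop, endpoints included."""
    step = (stop - start) / (num - 1)
    return [start + i * step for i in range(num)]

pts = linspace(-3, 3, 5)
print(pts)  # [-3.0, -1.5, 0.0, 1.5, 3.0]
```

With 50 or more points, joining consecutive points with straight segments is what makes the plotted curve look smooth.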
Let's start with the graph of $f(x)=x^2+1$ for $x\in[-3,3]$.
```
# import the libraries
import numpy as np
import matplotlib.pyplot as plt

# define the function
def f(x):
    return x**2+1

# define the interval of x and how many points will be created.
# below we specify that the graph will be built from 50 points
# whose x coordinates are evenly spaced between -3 and 3.
# the variable x will hold a list of those 50 points.
# this process is known as "discretization".
x = np.linspace(-3, 3, 50)

# definition of y=f(x). what this command actually does is create a
# list by applying the function f(x) defined above to the points in the
# list holding the x values (the 50 values between -3 and 3)
y = f(x)

# plot the function y=f(x).
plt.plot(x, y)
```
Just out of curiosity, what happens if we use fewer than 50 points?
Below we plot the same graph as above, but based on only 4 points:
```
x = np.linspace(-3, 3, 4)
y=f(x)
plt.plot(x, y)
```
See? That's why we get a zero when we only mark a few points! :)
A good strategy with ```matplotlib``` is to build the graph step by step, so you can add axis labels and other details along the way. This can be done as in the example below.
```
# let's call "a" an initial empty plot, with just the axes.
a = plt.axes()

# now we name the axes of the empty graph.
a.set_xlabel('x');
a.set_ylabel('y');

# restrict the viewing window to
# x between -2 and 2 and y between -4 and 4
a.set(xlim=(-2,2), ylim=(-4,4))

# and finally we add the graph.
# note that we now even enlarge the range of x, but this
# has no impact on the viewing window, which we already
# defined above. we use 100 points so the graph looks smooth.
x = np.linspace(-10, 10, 100)
y = f(x)

# the plot must now be called with the prefix a, to use the
# settings defined above.
a.plot(x, y)
```
We can also use Python to plot parametric curves. The command is similar:
```
t = np.linspace(0, 2*np.pi, 200)
x = np.cos(t)
y = np.sin(2*t)
plt.plot(x, y)
```
Note that the command is practically the same in both cases above: we plot many points of a parametric curve (in the first case, the curve has the form $(t,f(t))$).
A third type of two-dimensional graph arises when we want to plot the level curves of a function. For this there is another command, ```contour```. See many more options for level-curve plots [on this site](https://matplotlib.org/stable/gallery/images_contours_and_fields/contour_demo.html).
```
import numpy as np
import matplotlib.cm as cm
import matplotlib.pyplot as plt

# the command below is very similar to the linspace we used before.
# the difference is that here, instead of specifying the number of
# points to plot, we specify the start point, the end point, and the
# increment, in this case delta=0.025.
# it is just another way to do the discretization.
delta = 0.025
x = np.arange(-3.0, 3.0, delta)
y = np.arange(-2.0, 2.0, delta)

# here is the first trick: we turn the two "intervals" of x and y
# into a two-dimensional mesh, taking a kind of cartesian product
# of the points, and rename the first coordinate of those points x
# and the second y. this is just a technical device.
x, y = np.meshgrid(x, y)

# here we define the function/equation whose level curves we will plot.
# the array z stores the value of this function at each point
# of the mesh we created.
z = x**2-2*y**2-1

fig, ax = plt.subplots()

# the contour syntax is: X range, Y range, function, and number of
# contours. below, we plot 20 level curves.
contornos = ax.contour(x, y, z, 20)
ax.clabel(contornos, inline=True, fontsize=10)
ax.set_title('Level curves')
plt.contour(x, y, z)
```
Matplotlib also makes three-dimensional graphs. The idea of the plot stays the same: the graph is obtained as the union of many points.
```
import numpy as np
import matplotlib.pyplot as plt

# define the function
def f(x, y):
    return np.sqrt(9-x**2+y**2)

# discretize x and y
x = np.linspace(-3, 3, 50)
y = np.linspace(-3, 3, 50)

# cartesian-product trick
x, y = np.meshgrid(x, y)

# define z as z=f(x,y)
z = f(x, y)

# set up a 3d graph and name the axes
ax = plt.axes(projection='3d')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')

# finally, add the function (here we plot in wireframe
# mode, i.e., you will see the mesh over the surface).
ax.plot_wireframe(x, y, z)
```
We can also plot parametric surfaces. The process is similar to before: we discretize the domain and then evaluate the function on the resulting mesh. Let's do this to plot a torus, using its usual parametrization. The code below was inspired by [this site](https://subscription.packtpub.com/book/big_data_and_business_intelligence/9781849513265/7/ch07lvl1sec76/plotting-a-parametric-3d-surface).
```
# discretize the two angle variables
angle = np.linspace(0, 2 * np.pi, 32)
theta, phi = np.meshgrid(angle, angle)

# pick typical values for the inner and outer radii
r, R = 0.3, 1

# parametrization
x = (R + r * np.cos(phi)) * np.cos(theta)
y = (R + r * np.cos(phi)) * np.sin(theta)
z = r * np.sin(phi)

# create the 3d environment
fig = plt.figure()
ax = plt.axes(projection='3d')

# limit the axes
ax.set_xlim3d(-1, 1)
ax.set_ylim3d(-1, 1)
ax.set_zlim3d(-1, 1)

# display the figure
ax.plot_surface(x, y, z, rstride=1, cstride=1)
plt.show()
```
Finally, we can also plot parametric curves in 3D:
```
# create the 3d environment
ax = plt.axes(projection='3d')

# discretize t
t = np.linspace(0, 2*np.pi, 200)

# define the coordinates of the parametrization
x = np.cos(t)
y = np.sin(2*t)
z = 2*t

# plot
ax.plot(x, y, z)
```
# NESTS algorithm **Kopuru Vespa Velutina Competition**
Purpose: Bring together weather data, geographic data, food availability data, and identified nests in each municipality of Biscay in order to have a dataset suitable for analysis and potential predictions in a Machine Learning model.
Outputs: QUEENtrain and QUEENpredict datasets *(WBds03_QUEENtrain.csv & WBds03_QUEENpredict.csv)*
@authors:
* mario.bejar@student.ie.edu
* pedro.geirinhas@student.ie.edu
* a.berrizbeitia@student.ie.edu
* pcasaverde@student.ie.edu
## Libraries
```
import pandas as pd
import numpy as np
import math
from plotnine import *
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn import preprocessing
```
## Functions
```
def silueta(iterations_int, features_df):
    silhouettes = []
    for i in range(2, iterations_int, 1):
        model = KMeans(n_clusters=i)
        aux = features_df
        model.fit(aux)
        labels = model.labels_
        sol = silhouette_score(aux, labels)
        silhouettes.append(sol)
    silhouette = pd.DataFrame()
    silhouette['Labels'] = silhouettes
    silhouette['NumberOfClusters'] = range(2, iterations_int, 1)
    return silhouette

def codos(numClusters_int, features_df):
    inertias = []
    for i in range(1, numClusters_int, 1):
        model = KMeans(n_clusters=i)
        aux = features_df
        model.fit(aux)
        inertias.append(model.inertia_)
    elbow = pd.DataFrame()
    elbow['Inertia'] = inertias
    elbow['NumberOfClusters'] = range(1, numClusters_int, 1)
    return elbow

def kmedias(numClusters_int, features_df):
    model = KMeans(n_clusters=numClusters_int)
    aux = features_df
    model.fit(aux)
    modelLabels = model.labels_
    modelCenters = model.cluster_centers_
    return pd.Series(modelLabels, index=features_df.index)
```
## Get the data
```
df01 = pd.read_csv('../../../Input_open_data/ds01_PLANTILLA-RETO-AVISPAS-KOPURU.csv', sep=";")
df02 = pd.read_csv('../../../Input_open_data/ds02_datos-nidos-avispa-asiatica.csv', sep=",")
df03 = pd.read_csv('../../../Input_open_data/ds03_APICULTURA_COLMENAS_KOPURU.csv', sep=";")
df04 = pd.read_csv('../../../Input_open_data/ds04_FRUTALES-DECLARADOS-KOPURU.csv', sep=";")
WBdf01 = pd.read_csv('./WBds01_GEO.csv', sep=',')
WBdf02 = pd.read_csv('./WBds02_METEO.csv', sep=',')
df_population = pd.read_csv('../../../Other_open_data/population.csv', sep=',')
```
## Data cleanup
### Getting the names right
```
# Dropping and Renaming columns in accordance to the DataMap
# DataMap's URL: https://docs.google.com/spreadsheets/d/1Ad7s4IOmj9Tn2WcEOz4ArwedTzDs9Y0_EaUSm6uRHMQ/edit#gid=0
df01.columns = ['municip_code', 'municip_name', 'nests_2020']
df01.drop(columns=['nests_2020'], inplace=True) # just note that this is the final variable to predict in the competition
df02.drop(columns=['JARDUERA_ZENBAKIA/NUM_ACTUACION', 'ERABILTZAILEA_EU/USUARIO_EU', 'ERABILTZAILEA_CAS/USUARIO_CAS', 'HELBIDEA/DIRECCION', 'EGOERA_EU/ESTADO_EU', 'ITXIERA_DATA/FECHA CIERRE', 'ITXIERAKO AGENTEA_EU/AGENTE CIERRE_EU', 'ITXIERAKO AGENTEA_CAS/AGENTE CIERRE_CAS'], inplace=True)
df02.columns = ['waspbust_id', 'year', 'nest_foundDate', 'municip_name', 'species', 'nest_locType', 'nest_hight', 'nest_diameter', 'nest_longitude', 'nest_latitude', 'nest_status']
df03.drop(columns=['CP'], inplace=True)
df03.columns = ['municip_name','municip_code','colonies_amount']
df04.columns = ['agriculture_type','municip_code','municip_name']
# We don't have the "months" specified for any of the records in 2017 ('nest_foundDate' is incorrect for this year), so we'll drop those records
df02 = df02.drop(df02[df02['year'] == 2017].index, inplace = False)
# Cleaning municipality names in ds02 with names from ds01
df02_wrong_mun = ['ABADIÑO' ,'ABANTO Y CIERVANA' ,'ABANTO Y CIERVANA-ABANTO ZIERBENA' ,'AJANGIZ' ,'ALONSOTEGI' ,'AMOREBIETA-ETXANO' ,'AMOROTO' ,'ARAKALDO' ,'ARANTZAZU' ,'AREATZA' ,'ARRANKUDIAGA' ,'ARRATZU' ,'ARRIETA' ,'ARRIGORRIAGA' ,'ARTEA' ,'ARTZENTALES' ,'ATXONDO' ,'AULESTI' ,'BAKIO' ,'BALMASEDA' ,'BARAKALDO' ,'BARRIKA' ,'BASAURI' ,'BEDIA' ,'BERANGO' ,'BERMEO' ,'BERRIATUA' ,'BERRIZ' ,'BUSTURIA' ,'DERIO' ,'DIMA' ,'DURANGO' ,'EA' ,'ELANTXOBE' ,'ELORRIO' ,'ERANDIO' ,'EREÑO' ,'ERMUA' ,'ERRIGOITI' ,'ETXEBARRI' ,'ETXEBARRIA', 'ETXEBARRIa','FORUA' ,'FRUIZ' ,'GALDAKAO' ,'GALDAMES' ,'GAMIZ-FIKA' ,'GARAI' ,'GATIKA' ,'GAUTEGIZ ARTEAGA' ,'GERNIKA-LUMO' ,'GETXO' ,'GETXO ' ,'GIZABURUAGA' ,'GORDEXOLA' ,'GORLIZ' ,'GUEÑES' ,'IBARRANGELU' ,'IGORRE' ,'ISPASTER' ,'IURRETA' ,'IZURTZA' ,'KARRANTZA HARANA/VALLE DE CARRANZA' ,'KARRANTZA HARANA-VALLE DE CARRANZA' ,'KORTEZUBI' ,'LANESTOSA' ,'LARRABETZU' ,'LAUKIZ' ,'LEIOA' ,'LEKEITIO' ,'LEMOA' ,'LEMOIZ' ,'LEZAMA' ,'LOIU' ,'MALLABIA' ,'MAÑARIA' ,'MARKINA-XEMEIN' ,'MARURI-JATABE' ,'MEÑAKA' ,'MENDATA' ,'MENDEXA' ,'MORGA' ,'MUNDAKA' ,'MUNGIA' ,'MUNITIBAR-ARBATZEGI' ,'MUNITIBAR-ARBATZEGI GERRIKAITZ' ,'MURUETA' ,'MUSKIZ' ,'MUXIKA' ,'NABARNIZ' ,'ONDARROA' ,'OROZKO' ,'ORTUELLA' ,'OTXANDIO' ,'PLENTZIA' ,'PORTUGALETE' ,'SANTURTZI' ,'SESTAO' ,'SONDIKA' ,'SOPELA' ,'SOPUERTA' ,'SUKARRIETA' ,'TRUCIOS-TURTZIOZ' ,'UBIDE' ,'UGAO-MIRABALLES' ,'URDULIZ' ,'URDUÑA/ORDUÑA' ,'URDUÑA-ORDUÑA' ,'VALLE DE TRAPAGA' ,'VALLE DE TRAPAGA-TRAPAGARAN' ,'ZALDIBAR' ,'ZALLA' ,'ZAMUDIO' ,'ZARATAMO' ,'ZEANURI' ,'ZEBERIO' ,'ZIERBENA' ,'ZIORTZA-BOLIBAR' ]
df02_correct_mun = ['Abadiño' ,'Abanto y Ciérvana-Abanto Zierbena' ,'Abanto y Ciérvana-Abanto Zierbena' ,'Ajangiz' ,'Alonsotegi' ,'Amorebieta-Etxano' ,'Amoroto' ,'Arakaldo' ,'Arantzazu' ,'Areatza' ,'Arrankudiaga' ,'Arratzu' ,'Arrieta' ,'Arrigorriaga' ,'Artea' ,'Artzentales' ,'Atxondo' ,'Aulesti' ,'Bakio' ,'Balmaseda' ,'Barakaldo' ,'Barrika' ,'Basauri' ,'Bedia' ,'Berango' ,'Bermeo' ,'Berriatua' ,'Berriz' ,'Busturia' ,'Derio' ,'Dima' ,'Durango' ,'Ea' ,'Elantxobe' ,'Elorrio' ,'Erandio' ,'Ereño' ,'Ermua' ,'Errigoiti' ,'Etxebarri' , 'Etxebarria', 'Etxebarria','Forua' ,'Fruiz' ,'Galdakao' ,'Galdames' ,'Gamiz-Fika' ,'Garai' ,'Gatika' ,'Gautegiz Arteaga' ,'Gernika-Lumo' ,'Getxo' ,'Getxo' ,'Gizaburuaga' ,'Gordexola' ,'Gorliz' ,'Güeñes' ,'Ibarrangelu' ,'Igorre' ,'Ispaster' ,'Iurreta' ,'Izurtza' ,'Karrantza Harana/Valle de Carranza' ,'Karrantza Harana/Valle de Carranza' ,'Kortezubi' ,'Lanestosa' ,'Larrabetzu' ,'Laukiz' ,'Leioa' ,'Lekeitio' ,'Lemoa' ,'Lemoiz' ,'Lezama' ,'Loiu' ,'Mallabia' ,'Mañaria' ,'Markina-Xemein' ,'Maruri-Jatabe' ,'Meñaka' ,'Mendata' ,'Mendexa' ,'Morga' ,'Mundaka' ,'Mungia' ,'Munitibar-Arbatzegi Gerrikaitz' ,'Munitibar-Arbatzegi Gerrikaitz' ,'Murueta' ,'Muskiz' ,'Muxika' ,'Nabarniz' ,'Ondarroa' ,'Orozko' ,'Ortuella' ,'Otxandio' ,'Plentzia' ,'Portugalete' ,'Santurtzi' ,'Sestao' ,'Sondika' ,'Sopela' ,'Sopuerta' ,'Sukarrieta' ,'Trucios-Turtzioz' ,'Ubide' ,'Ugao-Miraballes' ,'Urduliz' ,'Urduña/Orduña' ,'Urduña/Orduña' ,'Valle de Trápaga-Trapagaran' ,'Valle de Trápaga-Trapagaran' ,'Zaldibar' ,'Zalla' ,'Zamudio' ,'Zaratamo' ,'Zeanuri' ,'Zeberio' ,'Zierbena' ,'Ziortza-Bolibar',]
df02.municip_name.replace(to_replace = df02_wrong_mun, value = df02_correct_mun, inplace = True)
df02.shape
# Translate the `species` variable contents to English
df02.species.replace(to_replace=['AVISPA ASIÁTICA', 'AVISPA COMÚN', 'ABEJA'], value=['Vespa Velutina', 'Common Wasp', 'Wild Bee'], inplace=True)
# Translate the contents of the `nest_locType` and `nest_status` variables to English
# But note that this data will eventually be of no use from a "forecasting" standpoint, since we will predict with a one-year offset (and thus mostly use things like weather)
df02.nest_locType.replace(to_replace=['CONSTRUCCIÓN', 'ARBOLADO'], value=['Urban Environment', 'Natural Environment'], inplace=True)
df02.nest_status.replace(to_replace=['CERRADA - ELIMINADO', 'CERRADA - NO ELIMINABLE', 'PENDIENTE DE GRUPO'], value=['Nest Terminated', 'Cannot Terminate', 'Pending classification'], inplace=True)
```
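The two parallel lists above are easy to let drift out of alignment. An equivalent pattern (sketched here on a hypothetical three-entry mapping; the real one pairs all the spellings listed above) keeps each wrong/correct pair together in a single dict passed to `Series.replace`:

```
import pandas as pd

# hypothetical mini-mapping; note the trailing space in 'GETXO ' as found in the raw data
fix_names = {
    'ABADIÑO': 'Abadiño',
    'GETXO ': 'Getxo',
    'GETXO': 'Getxo',
}
names = pd.Series(['ABADIÑO', 'GETXO ', 'Abadiño'])
cleaned = names.replace(fix_names)   # values not in the dict pass through unchanged
```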
### Getting the dates right
Including the addition of a `year_offset` variable to comply with the competition's rules
```
# Changing 'nest_foundDate' to the "datetime" format
df02['nest_foundDate'] = pd.to_datetime(df02['nest_foundDate'])
# Create a "month" variable in the main dataframe
df02['month'] = pd.DatetimeIndex(df02['nest_foundDate']).month
# Create a "year_offset" variable in the main dataframe
# IMPORTANT: THIS REFLECTS OUR ASSUMPTION THAT `YEAR-1` DATA CAN BE USED TO PREDICT `YEAR` DATA, AS MANDATED BY THE COMPETITION'S BASE REQUIREMENTS
df02['year_offset'] = pd.DatetimeIndex(df02['nest_foundDate']).year - 1
df02.columns
```
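The one-year offset can be verified on a couple of sample dates (the dates below are made up for illustration):

```
import pandas as pd

dates = pd.Series(pd.to_datetime(['2018-05-03', '2019-11-20']))
month = dates.dt.month
year_offset = dates.dt.year - 1   # year-1 data is used to predict year data
```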
### Creating distinct dataFrames for each `species`
```
df02.species.value_counts()
df02_vespas = df02.loc[df02.species == 'Vespa Velutina', :]
df02_wasps = df02.loc[df02.species == 'Common Wasp', :]
df02_bees = df02.loc[df02.species == 'Wild Bee', :]
df02_vespas.shape
```
## Create a TEMPLATE dataframe with the missing municipalities and months
```
template = pd.read_csv('../../../Input_open_data/ds01_PLANTILLA-RETO-AVISPAS-KOPURU.csv', sep=";")
template.drop(columns='NIDOS 2020', inplace=True)
template.columns = ['municip_code', 'municip_name']
template['year2019'] = 2019
template['year2018'] = 2018
template['year2017'] = 2017
template = pd.melt(template, id_vars=['municip_code', 'municip_name'], value_vars=['year2019', 'year2018', 'year2017'], value_name = 'year_offset')
template.drop(columns='variable', inplace=True)
for i in range(1, 13):
    template[i] = i
template = pd.melt(template, id_vars=['municip_code', 'municip_name', 'year_offset'],\
value_vars=[1,2,3,4,5,6,7,8,9,10,11,12], value_name = 'month')
template.drop(columns='variable', inplace=True)
template.shape
112*12*3 == template.shape[0]
template.columns
```
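The double-melt above builds the municipality × year × month cross-product. As a cross-check, `pd.MultiIndex.from_product` builds the same shape more directly (toy municipality codes shown; they are hypothetical):

```
import pandas as pd

municipalities = ['48001', '48002']          # hypothetical codes
years = [2017, 2018, 2019]
months = range(1, 13)
template = (pd.MultiIndex
              .from_product([municipalities, years, months],
                            names=['municip_code', 'year_offset', 'month'])
              .to_frame(index=False))
# 2 municipalities x 3 years x 12 months = 72 rows, mirroring the 112*12*3 check above
```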
## Merge the datasets
### Match each `municip_name` to its `municip_code` as per the competition's official template (i.e. `df01`)
```
# Merge dataFrames df01 and df02 by 'municip_name', in order to identify every wasp nest with its 'municip_code'
# The intention is that 'all_the_queens-wasps' will be the final dataFrame to use in the ML model eventually
all_the_queens_wasps = pd.merge(df02_vespas, df01, how = 'left', on = 'municip_name')
# check if there are any municipalities missing from the df02 dataframe, and add them if necessary
df01.municip_code[~df01.municip_code.isin(all_the_queens_wasps.municip_code.unique())]
```
### Add the municipalities and months missing from the dataset
```
all_the_queens_wasps = pd.merge(all_the_queens_wasps, template,\
how = 'outer', left_on = ['municip_code', 'municip_name', 'year_offset', 'month'],\
right_on = ['municip_code', 'municip_name', 'year_offset', 'month'])
all_the_queens_wasps.isnull().sum()
all_the_queens_wasps.year.fillna(value='no registers', inplace=True)
all_the_queens_wasps.shape
```
### Discarding some variables
Namely: **species** (since by now they are all Vespa Velutina only), **nest_foundDate**, **nest_longitude**, and **nest_latitude**.
```
all_the_queens_wasps.drop(columns=['nest_foundDate', 'nest_longitude', 'nest_latitude', 'species'], inplace=True)
```
### Creating a new categorical variable for Nest Size
[Formula for nest volume](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6723431/)
[Example calculation in cubic meters](https://www.easycalculation.com/shapes/volume-of-prolate-spheroid.php)
```
#ggplot(aes(x='nest_hight', y='nest_diameter'), all_the_queens_wasps) + geom_point(stat='identity')
#all_the_queens_wasps['nest_volume_l'] = 4/3 * math.pi * (all_the_queens_wasps['nest_hight']/100/2)**2 * (all_the_queens_wasps['nest_diameter']/100/2) * 1000
#all_the_queens_wasps['nest_volume_l'].fillna(0, inplace=True)
all_the_queens_wasps['nest_size'] = all_the_queens_wasps['nest_hight'] * all_the_queens_wasps['nest_diameter']
all_the_queens_wasps['nest_size'].fillna(0, inplace=True)
all_the_queens_wasps['nest_size'].describe()
vespaVoluminous = all_the_queens_wasps.loc[:, ['municip_code', 'nest_size']].groupby(by='municip_code', as_index=False).mean()
ggplot(aes(x='nest_size'), vespaVoluminous) + geom_histogram()
#vespaVoluminous['nest_size_equals'] = pd.qcut(vespaVoluminous['nest_size'], 3, labels=['small', 'mid', 'large'])
#vespaVoluminous['nest_size_equals'].value_counts()
vespaVoluminous['nest_size'] = pd.cut(vespaVoluminous['nest_size'], bins=3, labels=['small', 'mid', 'large'])
vespaVoluminous['nest_size'].value_counts()
all_the_queens_wasps = pd.merge(all_the_queens_wasps, vespaVoluminous, how = 'left', on= 'municip_code')
#all_the_queens_wasps
```
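`pd.cut` (used above) splits the value *range* into equal-width bins, whereas the commented-out `pd.qcut` would split into equal-*frequency* bins. On skewed data, such as nest sizes with a few very large nests, the two behave very differently (toy values for illustration):

```
import pandas as pd

sizes = pd.Series([1, 2, 3, 4, 5, 100])      # one large outlier
# equal-width bins: the outlier stretches the range, so most values land in 'small'
width_bins = pd.cut(sizes, bins=3, labels=['small', 'mid', 'large'])
# equal-frequency bins: each bin gets the same number of observations
freq_bins = pd.qcut(sizes, 3, labels=['small', 'mid', 'large'])
```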
### Converting categoricals to dummy variables
... and dropping the original variables once encoded. Namely: **nest_locType**, **nest_hight**, **nest_diameter**, **nest_size_x**, **nest_size_y**, and **nest_status**.
```
queen_big = pd.get_dummies(all_the_queens_wasps.nest_size_y)
all_the_queens_wasps = pd.concat([all_the_queens_wasps, queen_big], axis=1)
queen_cosmo = pd.get_dummies(all_the_queens_wasps.nest_locType)
all_the_queens_wasps = pd.concat([all_the_queens_wasps, queen_cosmo], axis=1)
queen_hastalavista = pd.get_dummies(all_the_queens_wasps.nest_status)
all_the_queens_wasps = pd.concat([all_the_queens_wasps, queen_hastalavista], axis=1)
all_the_queens_wasps.drop(columns=['nest_locType', 'nest_hight', 'nest_diameter', 'nest_size_y', 'nest_size_x', 'nest_status'], inplace=True)
all_the_queens_wasps.rename(columns = {"small":"fv_size_small", "mid":"fv_size_mid", "large":"fv_size_large",\
"Natural Environment":"fv_type_natural", "Urban Environment":"fv_type_urban",\
"Cannot Terminate":"fv_status_cantkill", "Nest Terminated":"fv_status_dead", "Pending classification":"fv_status_pending"}, inplace = True)
#all_the_queens_wasps
#all_the_queens_wasps.isnull().sum()
```
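A note on the manual `rename` above: `get_dummies` can attach a prefix to the generated columns directly, which reduces (though does not eliminate) the renaming work, since the original category text is kept after the prefix. A small sketch (category strings taken from the data above):

```
import pandas as pd

loc = pd.Series(['Urban Environment', 'Natural Environment', 'Urban Environment'])
dummies = pd.get_dummies(loc, prefix='fv_type', prefix_sep='_')
# columns come out alphabetically: 'fv_type_Natural Environment', 'fv_type_Urban Environment'
```

A rename is still needed to reach fully custom names like `fv_type_urban`, but the prefix comes for free.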
### Counting the number of wasp nests in each municipality, for each year (not per month)
```
all_the_queens_wasps = all_the_queens_wasps.loc[:, ['waspbust_id', 'fv_size_small', 'fv_size_mid', 'fv_size_large', 'fv_type_natural', 'fv_type_urban',\
'fv_status_cantkill', 'fv_status_dead', 'fv_status_pending',\
'year', 'municip_name', 'municip_code', 'year_offset']]\
.groupby(by =['year', 'municip_name', 'municip_code', 'year_offset'], as_index=False)\
.agg({'waspbust_id':'count', 'fv_size_small':'sum', 'fv_size_mid':'sum', 'fv_size_large':'sum', 'fv_type_natural':'sum', 'fv_type_urban':'sum',\
'fv_status_cantkill':'sum', 'fv_status_dead':'sum', 'fv_status_pending':'sum'})
# let's rename the id to NESTS, now that it has been counted
all_the_queens_wasps.rename(columns = {"waspbust_id":"NESTS"}, inplace = True)
all_the_queens_wasps.columns
# for all those "outer merge" rows with no associated year, set their NESTS to zero
all_the_queens_wasps.loc[all_the_queens_wasps.year == 'no registers', ['NESTS']] = 0
all_the_queens_wasps.NESTS.sum() == df02_vespas.shape[0]
# grouping by 'year_offset' and making the former 'year' variable disappear
all_the_queens_wasps = all_the_queens_wasps.loc[:, ['municip_name', 'municip_code', 'year_offset', 'NESTS', 'fv_size_small', 'fv_size_mid', 'fv_size_large', 'fv_type_natural', 'fv_type_urban', 'fv_status_cantkill', 'fv_status_dead', 'fv_status_pending']]\
.groupby(by =['municip_name', 'municip_code', 'year_offset'], as_index = False).sum()
# verifying that the DataFrame has the right number of rows
all_the_queens_wasps.shape[0] == 112*3
#all_the_queens_wasps.isnull().sum()
```
### Food sources
```
# Group df03 by 'municip_code' because there are multiple rows for each municipality (and we need a 1:1 relationship)
df03 = df03.groupby(by = 'municip_code', as_index= False).colonies_amount.sum()
# Now merge df03 to add number of bee hives (which is a food source for the wasp) in each municipality
# Note that NaNs (unknown amount of hives) are replaced with zeroes for the 'colonies_amount' variable
all_the_queens_wasps = pd.merge(all_the_queens_wasps, df03, how = 'left', on = 'municip_code')
all_the_queens_wasps.colonies_amount.fillna(value=0, inplace=True)
all_the_queens_wasps.shape
#all_the_queens_wasps.isnull().sum()
# Group df04 (agricultural food sources) by municipality code, after appending variables with the amount of each type of agricultural product
aux = df04.copy(deep=True)
aux.drop(columns=['municip_name'], inplace=True)
aux['food_fruit'] = np.where(aux['agriculture_type'] == 'FRUTALES', '1', '0')
aux['food_fruit'] = aux['food_fruit'].astype('int')
aux['food_apple'] = np.where(aux['agriculture_type'] == 'MANZANO', '1', '0')
aux['food_apple'] = aux['food_apple'].astype('int')
txakoli_string = df04.agriculture_type[45]
aux['food_txakoli'] = np.where(aux['agriculture_type'] == txakoli_string, '1', '0')
aux['food_txakoli'] = aux['food_txakoli'].astype('int')
aux['food_kiwi'] = np.where(aux['agriculture_type'] == 'AKTINIDIA (KIWI)', '1', '0')
aux['food_kiwi'] = aux['food_kiwi'].astype('int')
aux['food_pear'] = np.where(aux['agriculture_type'] == 'PERAL', '1', '0')
aux['food_pear'] = aux['food_pear'].astype('int')
aux['food_blueberry'] = np.where(aux['agriculture_type'] == 'ARANDANOS', '1', '0')
aux['food_blueberry'] = aux['food_blueberry'].astype('int')
aux['food_raspberry'] = np.where(aux['agriculture_type'] == 'FRAMBUESAS', '1', '0')
aux['food_raspberry'] = aux['food_raspberry'].astype('int')
aux = aux.groupby(by='municip_code', as_index=False).sum()
df04 = aux.copy(deep=True)
# Now merge df04 to add number of each type of food source ('agriculture_type') present in each municipality
# Any municipality not present in df04 will get assigned 'zero' food sources for any given type of fruit
all_the_queens_wasps = pd.merge(all_the_queens_wasps, df04, how = 'left', on= 'municip_code')
all_the_queens_wasps.food_fruit.fillna(value=0, inplace=True)
all_the_queens_wasps.food_apple.fillna(value=0, inplace=True)
all_the_queens_wasps.food_txakoli.fillna(value=0, inplace=True)
all_the_queens_wasps.food_kiwi.fillna(value=0, inplace=True)
all_the_queens_wasps.food_pear.fillna(value=0, inplace=True)
all_the_queens_wasps.food_blueberry.fillna(value=0, inplace=True)
all_the_queens_wasps.food_raspberry.fillna(value=0, inplace=True)
all_the_queens_wasps.shape
#all_the_queens_wasps.isnull().sum()
```
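The repeated `np.where` blocks above build one indicator column per crop type and then sum per municipality. The same count-per-type-per-municipality table can be sketched in a single call with `pd.crosstab` (toy data and hypothetical codes):

```
import pandas as pd

df = pd.DataFrame({'municip_code': [1, 1, 2, 2, 2],
                   'agriculture_type': ['FRUTALES', 'MANZANO',
                                        'FRUTALES', 'FRUTALES', 'PERAL']})
# rows = municipalities, columns = crop types, cells = counts
counts = pd.crosstab(df['municip_code'], df['agriculture_type'])
```

The explicit `np.where` version does have one advantage: it lets you pick and name exactly the crop types you care about, which is what the `food_...` columns do.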
### Geographic
Here we apply a very important assumption, carried over from the HONEYCOMB script, about which weather station corresponds to each municipality.
```
# Adding weather station code to each municipality in all_the_queens_wasps. "No municipality left behind!"
all_the_queens_wasps = pd.merge(all_the_queens_wasps, WBdf01, how = 'left', on= 'municip_code')
all_the_queens_wasps.shape
#all_the_queens_wasps.isnull().sum()
all_the_queens_wasps.year_offset.value_counts()
```
### Weather
MANDATORY ASSUMPTION: as per the competition's rules, 2020 weather data cannot be used to predict 2020's number of wasp nests.
Therefore, **this merge links 2018's wasp nests to 2017's weather data** (all of which falls under the $2017$ value for `year_offset`).
Likewise, **2019's wasp nests are linked to 2018's weather data** (all of which falls under the $2018$ value for `year_offset`).
Finally, the $2019$ value for `year_offset` contains zero NESTS and 2019's weather data, which we will use to predict 2020's number of NESTS (the target variable of the competition).
```
# Now, merge the Main 'all_the_queens_wasps' dataFrame with the weather data 'WBdf02' dataFrame
all_the_queens_wasps = pd.merge(all_the_queens_wasps, WBdf02, how = 'left',\
left_on = ['station_code', 'year_offset'],\
right_on = ['codigo', 'year'])
all_the_queens_wasps.columns
all_the_queens_wasps_TRAIN = all_the_queens_wasps.loc[all_the_queens_wasps.year_offset.isin([2017, 2018]),:]
all_the_queens_wasps_PREDICT = all_the_queens_wasps.loc[all_the_queens_wasps.year_offset.isin([2019]),:]
```
### Adding `Population`, a publicly available dataset
```
# Adding population by municipality
all_the_queens_wasps_TRAIN = pd.merge(all_the_queens_wasps_TRAIN, df_population, how = 'left',\
left_on= ['municip_code', 'year_offset'],\
right_on = ['municip_code', 'year'])
all_the_queens_wasps_PREDICT = pd.merge(all_the_queens_wasps_PREDICT, df_population, how = 'left',\
left_on= ['municip_code', 'year_offset'],\
right_on = ['municip_code', 'year'])
all_the_queens_wasps_TRAIN.shape
all_the_queens_wasps_PREDICT.shape
```
## Further cleanup
```
#dropping unnecessary/duplicate columns
all_the_queens_wasps_TRAIN.drop(columns=['year_x', 'year_y', 'codigo'], inplace=True)
all_the_queens_wasps_TRAIN.columns
# this step shouldn't be necessary
all_the_queens_wasps_TRAIN.columns = ['municip_name', 'municip_code', 'year_offset', 'NESTS',\
'fv_size_small', 'fv_size_mid', 'fv_size_large', 'fv_type_natural',\
'fv_type_urban', 'fv_status_cantkill', 'fv_status_dead',\
'fv_status_pending', 'colonies_amount', 'food_fruit', 'food_apple',\
'food_txakoli', 'food_kiwi', 'food_pear', 'food_blueberry',\
'food_raspberry', 'station_code', 'weath_days_frost',\
'weath_humidity', 'weath_maxLevel', 'weath_midLevel', 'weath_minLevel',\
'weath_days_rain', 'weath_days_rain1mm', 'weath_accuRainfall',\
'weath_10minRainfall', 'weath_1dayRainfall', 'weath_solar',\
'weath_meanTemp', 'weath_maxTemp', 'weath_maxMeanTemp', 'weath_minTemp',\
'weath_meanWindM', 'weath_maxWindM', 'weath_meanDayMaxWind',\
'population']
all_the_queens_wasps_TRAIN.columns
all_the_queens_wasps_PREDICT.drop(columns=['year_x', 'year_y', 'codigo'], inplace=True)
all_the_queens_wasps_PREDICT.columns
# this step shouldn't be necessary
all_the_queens_wasps_PREDICT.columns = ['municip_name', 'municip_code', 'year_offset', 'NESTS',\
'fv_size_small', 'fv_size_mid', 'fv_size_large', 'fv_type_natural',\
'fv_type_urban', 'fv_status_cantkill', 'fv_status_dead',\
'fv_status_pending', 'colonies_amount', 'food_fruit', 'food_apple',\
'food_txakoli', 'food_kiwi', 'food_pear', 'food_blueberry',\
'food_raspberry', 'station_code', 'weath_days_frost',\
'weath_humidity', 'weath_maxLevel', 'weath_midLevel', 'weath_minLevel',\
'weath_days_rain', 'weath_days_rain1mm', 'weath_accuRainfall',\
'weath_10minRainfall', 'weath_1dayRainfall', 'weath_solar',\
'weath_meanTemp', 'weath_maxTemp', 'weath_maxMeanTemp', 'weath_minTemp',\
'weath_meanWindM', 'weath_maxWindM', 'weath_meanDayMaxWind',\
'population']
all_the_queens_wasps_TRAIN.NESTS.sum() == df02_vespas.shape[0]
all_the_queens_wasps_PREDICT.NESTS.sum() == 0
```
## Clustering municipalities
### by the size of its Vespa Velutina nests (`fv_...`)
```
sizeMatters = all_the_queens_wasps_TRAIN.loc[:, ['municip_code', 'fv_size_small', 'fv_size_mid', 'fv_size_large']].groupby(by='municip_code', as_index=True).mean()
sizeSilhouette = silueta(15, sizeMatters)
ggplot(aes(x='NumberOfClusters', y='Labels'), sizeSilhouette) + geom_line() + geom_point()
clustersby_size = 5
sizeClusters = pd.DataFrame()
sizeClusters['cluster_size'] = kmedias(clustersby_size, sizeMatters)
sizeClusters['cluster_size'].reset_index()
sizeClusters['cluster_size'].value_counts()
all_the_queens_wasps_TRAIN = pd.merge(all_the_queens_wasps_TRAIN, sizeClusters['cluster_size'], how = 'left', on= 'municip_code')
all_the_queens_wasps_PREDICT = pd.merge(all_the_queens_wasps_PREDICT, sizeClusters['cluster_size'], how = 'left', on= 'municip_code')
```
### by the usual environment of its wasp nests (`fv_...`)
```
cosmopolitan = all_the_queens_wasps_TRAIN.loc[:, ['municip_code', 'fv_type_natural', 'fv_type_urban']].groupby(by='municip_code', as_index=True).mean()
cosmoSilhouette = silueta(10, cosmopolitan)
ggplot(aes(x='NumberOfClusters', y='Labels'), cosmoSilhouette) + geom_line() + geom_point()
clustersby_cosmo = 2
cosmoClusters = pd.DataFrame()
cosmoClusters['cluster_cosmo'] = kmedias(clustersby_cosmo, cosmopolitan)
cosmoClusters['cluster_cosmo'].reset_index()
cosmoClusters['cluster_cosmo'].value_counts()
all_the_queens_wasps_TRAIN = pd.merge(all_the_queens_wasps_TRAIN, cosmoClusters['cluster_cosmo'], how = 'left', on= 'municip_code')
all_the_queens_wasps_PREDICT = pd.merge(all_the_queens_wasps_PREDICT, cosmoClusters['cluster_cosmo'], how = 'left', on= 'municip_code')
```
### by the usual status its wasp nests are left in (`fv_...`)
```
survivalists = all_the_queens_wasps_TRAIN.loc[:, ['municip_code', 'fv_status_cantkill', 'fv_status_dead', 'fv_status_pending']].groupby(by='municip_code', as_index=True).mean()
surviveSilhouette = silueta(10, survivalists)
ggplot(aes(x='NumberOfClusters', y='Labels'), surviveSilhouette) + geom_line() + geom_point()
clustersby_survive = 2
surviveClusters = pd.DataFrame()
surviveClusters['cluster_survive'] = kmedias(clustersby_survive, survivalists)
surviveClusters['cluster_survive'].reset_index()
surviveClusters['cluster_survive'].value_counts()
all_the_queens_wasps_TRAIN = pd.merge(all_the_queens_wasps_TRAIN, surviveClusters['cluster_survive'], how = 'left', on= 'municip_code')
all_the_queens_wasps_PREDICT = pd.merge(all_the_queens_wasps_PREDICT, surviveClusters['cluster_survive'], how = 'left', on= 'municip_code')
```
### Dropping all that future information (aka, future variables (`fv_...`)) from the dataset
```
all_the_queens_wasps_TRAIN.drop(columns=['fv_size_small', 'fv_size_mid', 'fv_size_large', 'fv_type_natural', 'fv_type_urban', 'fv_status_cantkill', 'fv_status_dead', 'fv_status_pending'], inplace=True)
all_the_queens_wasps_PREDICT.drop(columns=['fv_size_small', 'fv_size_mid', 'fv_size_large', 'fv_type_natural', 'fv_type_urban', 'fv_status_cantkill', 'fv_status_dead', 'fv_status_pending'], inplace=True)
```
### by the availability of food sources (`food_`)
```
foodies = all_the_queens_wasps_TRAIN.loc[:, ['municip_code', 'colonies_amount', 'food_fruit', 'food_apple', 'food_txakoli', 'food_kiwi', 'food_pear', 'food_blueberry', 'food_raspberry']].groupby(by='municip_code', as_index=True).mean()
slimSilhouette = silueta(10, foodies)
ggplot(aes(x='NumberOfClusters', y='Labels'), slimSilhouette) + geom_line() + geom_point()
clustersby_foodie = 2
foodieClusters = pd.DataFrame()
foodieClusters['cluster_food'] = kmedias(clustersby_foodie, foodies)
foodieClusters['cluster_food'].reset_index()
foodieClusters['cluster_food'].value_counts()
all_the_queens_wasps_TRAIN = pd.merge(all_the_queens_wasps_TRAIN, foodieClusters['cluster_food'], how = 'left', on= 'municip_code')
all_the_queens_wasps_PREDICT = pd.merge(all_the_queens_wasps_PREDICT, foodieClusters['cluster_food'], how = 'left', on= 'municip_code')
```
### Exploring clustering of weather variables (`weath_...`)
#### Humidity-related variables
```
# scale the dataset using MinMaxScaler, the most common approach
#scalators = ['weath_days_frost', 'weath_humidity', 'weath_maxLevel', 'weath_midLevel', 'weath_minLevel', 'weath_days_rain', 'weath_days_rain1mm', 'weath_accuRainfall', 'weath_10minRainfall', 'weath_1dayRainfall', 'weath_solar', 'weath_meanTemp', 'weath_maxTemp', 'weath_maxMeanTemp', 'weath_minTemp', 'weath_meanWindM', 'weath_maxWindM', 'weath_meanDayMaxWind']
scalators_wet = ['municip_code', 'weath_days_frost', 'weath_humidity', 'weath_days_rain', 'weath_days_rain1mm', 'weath_accuRainfall', 'weath_10minRainfall', 'weath_1dayRainfall', 'weath_solar']
weathercock_water = all_the_queens_wasps_TRAIN[scalators_wet].copy()
weathercock_water.iloc[:,1:] = preprocessing.minmax_scale(weathercock_water.iloc[:,1:])
weathercock_water = weathercock_water.groupby(by='municip_code', as_index=True).mean()
wetSilhouette = silueta(15, weathercock_water)
ggplot(aes(x='NumberOfClusters', y='Labels'), wetSilhouette) + geom_line() + geom_point()
clustersby_weather_humid = 2
weatherWetClusters = pd.DataFrame()
weatherWetClusters['cluster_weather_wet'] = kmedias(clustersby_weather_humid, weathercock_water)
weatherWetClusters['cluster_weather_wet'].reset_index()
weatherWetClusters['cluster_weather_wet'].value_counts()
all_the_queens_wasps_TRAIN = pd.merge(all_the_queens_wasps_TRAIN, weatherWetClusters['cluster_weather_wet'], how = 'left', on= 'municip_code')
all_the_queens_wasps_PREDICT = pd.merge(all_the_queens_wasps_PREDICT, weatherWetClusters['cluster_weather_wet'], how = 'left', on= 'municip_code')
```
#### Temperature-related variables
```
# scale the dataset using MinMaxScaler, the most common approach
#scalators = ['weath_days_frost', 'weath_humidity', 'weath_maxLevel', 'weath_midLevel', 'weath_minLevel', 'weath_days_rain', 'weath_days_rain1mm', 'weath_accuRainfall', 'weath_10minRainfall', 'weath_1dayRainfall', 'weath_solar', 'weath_meanTemp', 'weath_maxTemp', 'weath_maxMeanTemp', 'weath_minTemp', 'weath_meanWindM', 'weath_maxWindM', 'weath_meanDayMaxWind']
scalators_temp = ['municip_code', 'weath_meanTemp', 'weath_maxTemp', 'weath_maxMeanTemp', 'weath_minTemp']
weathercock_temp = all_the_queens_wasps_TRAIN[scalators_temp].copy()
weathercock_temp.iloc[:,1:] = preprocessing.minmax_scale(weathercock_temp.iloc[:,1:])
weathercock_temp = weathercock_temp.groupby(by='municip_code', as_index=True).mean()
tempSilhouette = silueta(10, weathercock_temp)
ggplot(aes(x='NumberOfClusters', y='Labels'), tempSilhouette) + geom_line() + geom_point()
clustersby_weather_temp = 2
weatherTempClusters = pd.DataFrame()
weatherTempClusters['cluster_weather_temp'] = kmedias(clustersby_weather_temp, weathercock_temp)
weatherTempClusters['cluster_weather_temp'].reset_index()
weatherTempClusters['cluster_weather_temp'].value_counts()
all_the_queens_wasps_TRAIN = pd.merge(all_the_queens_wasps_TRAIN, weatherTempClusters['cluster_weather_temp'], how = 'left', on= 'municip_code')
all_the_queens_wasps_PREDICT = pd.merge(all_the_queens_wasps_PREDICT, weatherTempClusters['cluster_weather_temp'], how = 'left', on= 'municip_code')
```
#### Wind-related variables
```
# scale the dataset using MinMaxScaler, the most common approach
#scalators = ['weath_days_frost', 'weath_humidity', 'weath_maxLevel', 'weath_midLevel', 'weath_minLevel', 'weath_days_rain', 'weath_days_rain1mm', 'weath_accuRainfall', 'weath_10minRainfall', 'weath_1dayRainfall', 'weath_solar', 'weath_meanTemp', 'weath_maxTemp', 'weath_maxMeanTemp', 'weath_minTemp', 'weath_meanWindM', 'weath_maxWindM', 'weath_meanDayMaxWind']
scalators_wind = ['municip_code', 'weath_meanWindM', 'weath_maxWindM', 'weath_meanDayMaxWind']
weathercock_wind = all_the_queens_wasps_TRAIN[scalators_wind].copy()
weathercock_wind.iloc[:,1:] = preprocessing.minmax_scale(weathercock_wind.iloc[:,1:])
weathercock_wind = weathercock_wind.groupby(by='municip_code', as_index=True).mean()
windSilhouette = silueta(15, weathercock_wind)
ggplot(aes(x='NumberOfClusters', y='Labels'), windSilhouette) + geom_line() + geom_point()
clustersby_weather_wind = 2
weatherWindClusters = pd.DataFrame()
weatherWindClusters['cluster_weather_wind'] = kmedias(clustersby_weather_wind, weathercock_wind)
weatherWindClusters['cluster_weather_wind'].reset_index()
weatherWindClusters['cluster_weather_wind'].value_counts()
all_the_queens_wasps_TRAIN = pd.merge(all_the_queens_wasps_TRAIN, weatherWindClusters['cluster_weather_wind'], how = 'left', on= 'municip_code')
all_the_queens_wasps_PREDICT = pd.merge(all_the_queens_wasps_PREDICT, weatherWindClusters['cluster_weather_wind'], how = 'left', on= 'municip_code')
```
#### Other weather variables (`weath_...Level`)
```
# scale the dataset using MinMaxScaler, the most common approach
#scalators = ['weath_days_frost', 'weath_humidity', 'weath_maxLevel', 'weath_midLevel', 'weath_minLevel', 'weath_days_rain', 'weath_days_rain1mm', 'weath_accuRainfall', 'weath_10minRainfall', 'weath_1dayRainfall', 'weath_solar', 'weath_meanTemp', 'weath_maxTemp', 'weath_maxMeanTemp', 'weath_minTemp', 'weath_meanWindM', 'weath_maxWindM', 'weath_meanDayMaxWind']
scalators_level = ['municip_code', 'weath_maxLevel', 'weath_midLevel', 'weath_minLevel']
weathercock_level = all_the_queens_wasps_TRAIN[scalators_level].copy()
weathercock_level.iloc[:,1:] = preprocessing.minmax_scale(weathercock_level.iloc[:,1:])
weathercock_level = weathercock_level.groupby(by='municip_code', as_index=True).mean()
levelSilhouette = silueta(10, weathercock_level)
ggplot(aes(x='NumberOfClusters', y='Labels'), levelSilhouette) + geom_line() + geom_point()
clustersby_weather_level = 2
weatherLevelClusters = pd.DataFrame()
weatherLevelClusters['cluster_weather_level'] = kmedias(clustersby_weather_level, weathercock_level)
weatherLevelClusters['cluster_weather_level'].reset_index()
weatherLevelClusters['cluster_weather_level'].value_counts()
all_the_queens_wasps_TRAIN = pd.merge(all_the_queens_wasps_TRAIN, weatherLevelClusters['cluster_weather_level'], how = 'left', on= 'municip_code')
all_the_queens_wasps_PREDICT = pd.merge(all_the_queens_wasps_PREDICT, weatherLevelClusters['cluster_weather_level'], how = 'left', on= 'municip_code')
```
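Each weather block above repeats the same scale → group → silhouette → cluster → merge pipeline. The scaling step itself is just $(x - \min)/(\max - \min)$ applied per column, which can be sketched without scikit-learn (toy values; column names borrowed from the dataset above):

```
import pandas as pd

df = pd.DataFrame({'municip_code': [1, 1, 2, 2],
                   'weath_meanTemp': [10.0, 14.0, 18.0, 22.0],
                   'weath_maxTemp': [15.0, 20.0, 25.0, 30.0]})
num = df.columns[1:]                         # every column except the key
scaled = df.copy()
# column-wise min-max scaling, equivalent to preprocessing.minmax_scale
scaled[num] = (df[num] - df[num].min()) / (df[num].max() - df[num].min())
per_municip = scaled.groupby('municip_code', as_index=True).mean()
```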
### Cluster table
## Final check
```
all_the_queens_wasps_TRAIN.isnull().sum()
# check how many rows (municipalities) are there in the dataframe for each year/month combination
pd.crosstab(all_the_queens_wasps.municip_code, all_the_queens_wasps.year_offset)
all_the_queens_wasps_TRAIN.NESTS.sum() == df02_vespas.shape[0]
all_the_queens_wasps_PREDICT.NESTS.sum() == 0
```
## Export the TRAINING dataset for the model
A dataset that relates the weather from the previous year (12 months earlier) to the number of NESTS in any given year (and month).
```
#all_the_queens_wasps.to_csv('WBds03_QUEEN.csv', index=False)
all_the_queens_wasps_TRAIN.to_csv('WBds03_QUEENtrainYEARS.csv', index=False)
```
## Export the PREDICTION dataset for the model
```
all_the_queens_wasps_PREDICT.to_csv('WBds03_QUEENpredictYEARS.csv', index=False)
```
| github_jupyter |
# Training Models
We will practice training machine learning models for both regression and classification problems.
# 1) Regression Models
We will start by fitting regression models. We will download the time series of the GPS station deployed on Montague Island.
<img src="AC29_map.png" alt="AC29 GPS stations on Montague Island" width="600"/>
```
import requests, zipfile, io, gzip, glob, os
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn
%matplotlib inline
# Download data
sta="AC29"
file_url="http://geodesy.unr.edu/gps_timeseries/tenv/IGS14/"+ sta + ".tenv"
r = requests.get(file_url).text.splitlines() # download, read text, split lines into a list
ue=[];un=[];uv=[];se=[];sn=[];sv=[];date=[];date_year=[];df=[]
for iday in r:  # this loops through the days of data
    crap = iday.split()
    if len(crap) < 10:
        continue
    date.append(crap[1])
    date_year.append(float(crap[2]))
    ue.append(float(crap[6])*1000)
    un.append(float(crap[7])*1000)
    uv.append(float(crap[8])*1000)
    # errors
    se.append(float(crap[10])*1000)
    sn.append(float(crap[11])*1000)
    sv.append(float(crap[12])*1000)
# make dataframe
crap = {'station': sta, 'date': date, 'date_year': date_year, 'east': ue, 'north': un, 'up': uv}
if len(df) == 0:
    df = pd.DataFrame(crap, columns=['station', 'date', 'date_year', 'east', 'north', 'up'])
else:
    df = pd.concat([df, pd.DataFrame(crap, columns=['station', 'date', 'date_year', 'east', 'north', 'up'])])
df.head()
# select the first 2 years of data from the east component
y = ue[0:365*2]
plt.plot(df['date_year'],df['east']);plt.grid(True)
```
## 1.1 Linear regression
Let $y$ be the data, and $\hat{y}$ be the predicted value of the data. A general linear regression can be formulated as
$\hat{y} = w_0 + w_1 x_1 + ... + w_n x_n = h_w (\mathbf{x})$.
or, in matrix form, $\mathbf{\hat{y}} = \mathbf{G} \mathbf{w}$, where $\mathbf{G}$ is the design matrix whose columns contain the features.
$y$ is a data vector of length $m$, $\mathbf{x}$ is a feature vector of length $n$, and $\mathbf{w}$ is a vector of model parameters. $h_w$ is referred to as the *hypothesis function* or the *model* using the model parameters $w$. In the simplest case of a linear regression in time, the formulation becomes:
$\hat{y} = w_0 + w_1 t$,
where $x_1 = t$ the time feature.
To evaluate how well the model performs, we will compute a *loss score*, or a *residual*. It is the result of applying a *loss* or *cost* or *objective* function to the prediction and the data. The most basic *cost function* is the **Mean Square Error (MSE)**:
$MSE(\mathbf{x},h_w) = \frac{1}{m} \sum_{i=1}^{m} \left( h_w(\mathbf{x})_i - y_i \right)^2 = \frac{1}{m} \sum_{i=1}^{m} \left( \hat{y}_i - y_i \right)^2 $, in the case of a linear regression.
The *Normal Equation* is the solution to the linear regression that minimizes the MSE.
$\mathbf{w} = \left( \mathbf{x}^T\mathbf{x} \right)^{-1} \mathbf{x}^T \mathbf{y}$
This compares with the classic inverse problem framed by $\mathbf{d} = \mathbf{G} \mathbf{m}$.
$\mathbf{m} = \left( \mathbf{G}^T\mathbf{G} \right)^{-1} \mathbf{G}^T \mathbf{d} $
It can be solved using NumPy's linear algebra module. If $\left( \mathbf{x}^T\mathbf{x} \right)$ is singular and cannot be inverted, a lower-rank matrix called the *pseudoinverse* can be calculated using singular value decomposition. In a previous class we also used the Scikit-learn class ``sklearn.linear_model.LinearRegression``, which is implemented using the *pseudoinverse*. We practice below how to use these standard inversions:
```
x = np.asarray(date_year[0:2*365])
x = x-np.min(x)
y = np.asarray(ue[0:2*365])
G = np.c_[np.ones((2*365,1)),x]
m = len(y)
print(G)
#normal equation
w1 = np.linalg.inv(G.T.dot(G)).dot(G.T).dot(y)
# Pseudoinverse
w2 = np.linalg.pinv(G).dot(y)
# scikitlearn LinearRegression
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(x.reshape(-1,1),y)  # samples along the first axis, one feature column
print(lin_reg)
w3 = [lin_reg.intercept_, lin_reg.coef_[0]]
y_predict1=G.dot(w1)
y_predict2=G.dot(w2)
y_predict3=lin_reg.predict(x.reshape(-1,1))
plt.plot(x,y);plt.grid(True)
plt.plot(x,y_predict1,'r',linewidth=3);
plt.plot(x,y_predict2,'g--');
plt.plot(x,y_predict3,'k');
plt.xlabel("Time (years)")
plt.ylabel('East displacement (mm) at AC29')
print("modeled parameters. Normal equation")
print(w1)
print("modeled parameters. pseudoinverse")
print(w2)
```
## 1.2 Loss functions for regressions
Loss functions are used to measure the difference between the data and the predictions. Loss functions $\mathcal{L}(\mathbf{w})$ are differentiable with respect to the model parameters.
In the previous example, we used the MSE as a loss function:
$ MSE(\mathbf{x},h_w) = \frac{1}{m} \sum_{i=1}^m \left( \hat{y}_i - y_i \right) ^2 $
The regression aims to find $h_w$ that minimizes the loss function $\mathcal{L}(\mathbf{w}) $. Other examples of loss functions are:
$MAE(\mathbf{x},h_w) = \frac{1}{m} \sum_{i=1}^m |\hat{y}_i - y_i|$
You can find interesting comparisons of Loss functions for regression problems here: https://heartbeat.fritz.ai/5-regression-loss-functions-all-machine-learners-should-know-4fb140e9d4b0
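As a quick numerical sketch of the two loss functions (the predictions and observations below are invented for illustration):

```python
import numpy as np

# Invented predictions and observations, for illustration only.
y_obs = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = np.array([1.1, 1.9, 3.5, 3.6])

mse = np.mean((y_hat - y_obs) ** 2)   # Mean Square Error: quadratic penalty
mae = np.mean(np.abs(y_hat - y_obs))  # Mean Absolute Error: more robust to outliers
print(mse, mae)
```

Note how the single large error (0.5) dominates the MSE much more than the MAE.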
# 2) Gradient Descent
Gradient Descent is used to *train* machine learning models.
Gradient Descent marches down the misfit function through the parameter space: it evaluates the loss function and attempts to find its global minimum. The model $\mathbf{w}$ is updated iteratively in the direction that reduces the loss/misfit:
$w_j^{(k + 1)} = w_j^{(k)} - \alpha \frac{\partial \mathcal{L}}{\partial w_j}$ for $j = 1 , \cdots , n ,$
where $\alpha$ is the **learning rate**.
<table><tr>
<td> <img src="GD_cartoon.jpeg" alt="Gradient Descent" style="width: 400px;"/> </td>
<td> <img src="GD_non_global.png" alt="Gradient Descent non convex" style="width: 400px;"/> </td>
</tr>
<tr>
<td>Gradient descent for a convex, well behaved loss function. </td>
<td> Gradient descent in a poorly behaved loss function with local minima. </td>
</tr>
</table>
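The update rule above can be illustrated on a toy one-parameter convex loss (the loss and values below are invented for illustration, not taken from the GPS data):

```python
# Toy illustration of the update rule w <- w - alpha * dL/dw on the convex
# loss L(w) = (w - 3)^2, whose minimum is at w = 3.
lr_toy = 0.1  # learning rate alpha
w_toy = 0.0   # initial model parameter
for _ in range(200):
    grad = 2 * (w_toy - 3)        # analytical derivative of (w - 3)^2
    w_toy = w_toy - lr_toy * grad  # step down the gradient

print(w_toy)  # converges very close to 3
```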
## 2.1 Batch Gradient Descent
Batch GD performs GD over the entire dataset, taking steps down the gradient using an appropriate learning rate $\alpha$.
<table><tr>
<td> <img src="GD_AlphaTooSmall.png" alt="Learning rate too small" style="width: 400px;"/> </td>
<td> <img src="GD_AlphaTooLarge.png" alt="Learning rate too large" style="width: 400px;"/> </td>
</tr>
<tr>
<td>Learning rate $\alpha$ is too small. It will take longer to converge. </td>
<td> Learning rate $\alpha$ is too large. It may overshoot the minimum and fail to converge. </td>
</tr>
</table>
The iteration in GD can be stopped by imposing a convergence tolerance: a threshold under which the error is no longer reduced and the iteration stops. Gradient Descent requires re-scaling the data.
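A sketch of how such a tolerance can stop the iteration, on a hypothetical one-parameter loss $(w-3)^2$ (names and values are illustrative only):

```python
# Iterate until the update size falls below the threshold tol_toy.
lr_toy, tol_toy = 0.1, 1e-8
w_toy, n_iter_toy = 0.0, 0
while True:
    step = lr_toy * 2 * (w_toy - 3)  # alpha times the gradient of (w - 3)^2
    if abs(step) < tol_toy:          # stop: further updates are negligible
        break
    w_toy -= step
    n_iter_toy += 1

print(w_toy, n_iter_toy)
```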
```
# normalize the data. Without normalization this will fail!
x = np.asarray(date_year[0:3*365]).reshape(-1,1)
y = np.asarray(ue[0:3*365]).reshape(-1,1)
x = x-np.min(x)
G = np.c_[np.ones((len(x),1)),x]
scale = (np.max(y)-np.min(y)) # minmax scaling
newy = y / scale
plt.plot(x,newy*scale);plt.grid(True)
alpha = 0.1
n_iterations =1000
m = len(x)  # number of instances in this cell's data
for k in range(100): # perform 100 times the random initialization
w = np.random.rand(2,1) # initialize the model parameters.
for iteration in range(n_iterations):
gradients = 2/m *G.T.dot(G.dot(w)-newy.reshape(-1,1))
w = w - alpha * gradients
plt.plot(x,G.dot(w)*scale,'r')
# Now let's vary the learning rate
n_iterations =1000
for alpha in [0.001,0.01,0.1]:
fig,ax=plt.subplots(1,1)
ax.plot(x,newy*scale);ax.grid(True)
for k in range(100): # perform 100 times the random initialization
w = np.random.rand(2,1) # initialize the model parameters.
for iteration in range(n_iterations):
gradients = 2/m *G.T.dot(G.dot(w)-newy.reshape(-1,1))
w = w - alpha * gradients
ax.plot(x,G.dot(w)*scale,'r')
ax.set_title("alpha = "+str(alpha))
```
## 2.2 Stochastic Gradient Descent
SGD takes the gradient for each single instance. By default, SGD in Scikit-learn will minimize the MSE cost function. The advantages of SGD are:
* Efficiency.
* Ease of implementation (lots of opportunities for code tuning).
The disadvantages of Stochastic Gradient Descent include:
* SGD requires a number of hyperparameters such as the regularization parameter and the number of iterations.
* SGD is sensitive to feature scaling.
```
from sklearn.linear_model import SGDRegressor
alpha = 0.01 # learning rate
sgd_reg = SGDRegressor(max_iter=1000,tol=1e-2,penalty=None,eta0=alpha)
sgd_reg.fit(x,y)
w=[sgd_reg.intercept_[0],sgd_reg.coef_[0]]
print(w)
fig,ax=plt.subplots(1,1)
ax.plot(x,y);ax.grid(True)
ax.plot(x,G.dot(w),'r')
```
## 2.3 Mini Batch Gradient Descent
It is a combination of Batch GD and SGD. Mini-batch GD computes the gradient over a subset of instances (as opposed to a single one in SGD or the full set in Batch GD). At each step, using one minibatch randomly drawn from the dataset, we estimate the gradient of the loss with respect to the parameters, then update the parameters in the direction that reduces the loss.
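The loop above can be sketched as follows on toy linear data (invented for illustration; variable names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear data y = 2 + 3x + small noise (invented for illustration).
x_toy = rng.uniform(0, 1, 200)
G_toy = np.c_[np.ones_like(x_toy), x_toy]
y_toy = G_toy @ np.array([2.0, 3.0]) + 0.01 * rng.standard_normal(200)

w_mb = rng.random(2)        # random initialization of the model parameters
lr_mb, batch_size = 0.1, 20
for epoch in range(500):
    idx = rng.permutation(len(y_toy))  # reshuffle, then walk through minibatches
    for start in range(0, len(y_toy), batch_size):
        batch = idx[start:start + batch_size]
        Gb, yb = G_toy[batch], y_toy[batch]
        grad = 2 / batch_size * Gb.T @ (Gb @ w_mb - yb)  # minibatch gradient
        w_mb -= lr_mb * grad

print(w_mb)  # approximately [2, 3]
```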
# 3) Under-fitting and Overfitting
**Bias**
This part of the generalization error is due to wrong assumptions, such as assuming that the data is linear when it is actually quadratic. A high-bias model is most likely to underfit the training data. Bias is reduced by adjusting and optimizing the model to get the best possible performance on the training data.
**Variance**
This part is due to the model’s excessive sensitivity to small variations in the training data. A model with many degrees of freedom (such as a high-degree polynomial model) is likely to have high variance, and thus to overfit the training data. Variance is reduced by simplifying or regularizing the model, or by gathering more training data.
**Irreducible error**
This part is due to the noisiness of the data itself. The only way to reduce this part of the error is to clean up the data (e.g., fix the data sources, such as broken sensors, or detect and remove outliers).
**Underfitting**: the model is too simple; the bias is high but the model variance is low. This occurs in most cases at the beginning of training, when the model has not yet learned to fit the data. With iterative training, the algorithm starts by underfitting the data (high loss for both validation and training data) and progressively "learns" and improves the fit. It remains a problem when the loss stays high for both training and validation.
The solution is to increase the complexity of the model, to design better features from the data (feature engineering), and to reduce the constraints on the model (such as the parameterization of model regularization). Underfitting is identified by having a high bias and low variance of the residuals. It is usually obvious and rarely a problem because the training and validation errors are high.
**Overfitting**: the model is too complex, the bias is low but the model variance is high. Data may contain noise that should not be fit by the algorithm. It happens when the model is too complex relative to the amount and the noisiness of the training data. Overfitting is a common problem in geoscience machine learning problems. Overfitting can be detected when the model performs perfectly on the training data, but poorly on the validation and test data. It can also be detected using **cross-validation metrics** and **learning curves**.
Some solutions are to reduce the model size, reduce the number of attributes in the training data, gather more training data, to reduce the noise in the training data (fix data errors and remove outliers). Another way to keep the model complexity but constrain its variance is called **regularization**.
***You do not know if you overfit until you do.*** The model may not be complex enough until you reach overfitting. Once reached, back off a little to find the best tradeoff between optimization and generalization.
**Assessing Overfitting**
To evaluate the model's ability to generalize to other data sets, and to check that it has an appropriate level of variance, we plot **learning curves**. These plot the model performance on the training and validation sets as a function of the training set size.
```
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
def plot_learning_curves(model, X, y,c1="b+",c2="b"):
    # Setting the random_state variable in the train_test_split is necessary to reproduce the results.
    # When tuning parameters such as test_size, you need to set the random state otherwise too many parameters change.
    X = np.asarray(X).reshape(len(y), -1)  # accept flattened input; use the X argument, not a global
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2,random_state=42)
train_errors, val_errors = [], []
for m in range(1, len(X_train)):
model.fit(X_train[:m], y_train[:m])
y_train_predict = model.predict(X_train[:m])
y_val_predict = model.predict(X_val)
train_errors.append(mean_squared_error(y_train[:m], y_train_predict))
val_errors.append(mean_squared_error(y_val, y_val_predict))
plt.plot(np.sqrt(train_errors), c1, linewidth=2, label="train")
plt.plot(np.sqrt(val_errors),c2, linewidth=3, label="val")
plt.legend(['training','validation'])
plt.grid(True)
plt.title("Learning curve")
plt.ylabel("RMSE")
plt.xlabel('Training size')
x = np.asarray(date_year[0:2*365]).reshape(-1,1)
y = np.asarray(ue[0:2*365]).reshape(-1,1)
x = x-np.min(x)
# G = np.c_[np.ones((len(x),1)),x]
scale = (np.max(y)-np.min(y)) # minmax scaling
newy = y / scale
alpha = 0.01 # learning rate
sgd_reg = SGDRegressor(max_iter=1000,tol=1e-2,penalty=None,eta0=alpha)
sgd_reg.fit(x,newy)
y_predict=sgd_reg.predict(x)
plt.plot(x,y);plt.grid(True)
plt.plot(x,y_predict*scale,"m",linewidth=3)
plt.ylabel('East displacement (mm) at AC29')
plot_learning_curves(sgd_reg, x.ravel(), y.ravel())
plt.ylim([1,2])
```
Let's read and interpret these curves.
You will notice that when you re-run the cell with ``plot_learning_curves`` you will get different answers: this is because the random initialization of the SGD gives different results. This is one reason why one should run these multiple times and then average over the curves.
* **The good signs**:
Loss curves plateau at low value for both training and validation. Training loss should be smaller, but not by much, than the validation loss. Low loss values are signs of good fit and good generalization.
* **The bad signs: underfitting**:
RMSE are high for both training and validation.
* **The bad signs: overfitting**:
RMSE is low for training but high for validation.
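One way to follow the advice above of running the fit multiple times is to repeat it over several random seeds and average the resulting validation scores. A minimal sketch, on hypothetical toy data rather than the GPS series:

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Hypothetical toy data standing in for the GPS series (invented for illustration).
rng = np.random.default_rng(0)
X_toy = rng.uniform(0, 1, (300, 1))
y_toy = 2 + 3 * X_toy.ravel() + 0.1 * rng.standard_normal(300)

X_tr, X_val, y_tr, y_val = train_test_split(X_toy, y_toy, test_size=0.2,
                                            random_state=42)

# Repeat the fit with different random initializations and average the scores.
rmses = []
for seed in range(10):
    reg = SGDRegressor(max_iter=1000, tol=1e-3, random_state=seed)
    reg.fit(X_tr, y_tr)
    rmses.append(np.sqrt(mean_squared_error(y_val, reg.predict(X_val))))

print(np.mean(rmses), np.std(rmses))
```

The standard deviation across seeds quantifies the run-to-run variability of the SGD.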
# 4) Regularization
Constraining a model of a given complexity to make it simpler is called **regularization**.
## 4.1 Ridge Regression
To regularize the model, we can reduce model parameter variance by imposing that the norm of the model parameters is small. Assuming that the model parameters follow a normal (Gaussian) distribution, we want to minimize the L2 norm (equivalent to the mean square of the model parameters):
$\mathcal{L}(\mathbf{w}) = MSE(\mathbf{w}) + \lambda \frac{1}{2} || \mathbf{w} ||_2^2$,
where $|| \mathbf{w} ||_2^2 = \sum_{i=1}^n w_i^2$ is the squared L2 norm of the model parameters, and $\lambda$ is a hyperparameter to tune to balance the contribution of the model norm against the residual norm. The L2 norm is sensitive to outliers in the distributions.
Ridge Regression is sensitive to data scale, so do not forget to scale input data.
## 4.2 Lasso Regression
Lasso Regression is, like Ridge Regression, a way to reduce model variance. Instead of minimizing the L2 norm, we minimize the L1 norm:
$\mathcal{L}(\mathbf{w}) = MSE(\mathbf{w}) + \lambda || \mathbf{w} ||_1$,
The L1 norm $|| \mathbf{w} ||_1 = \sum_{i=1}^n | w_i |$ is appropriate for exponential (Laplace) distributions and is less sensitive to outliers. It tends to eliminate the weights of the least important features: it effectively performs *feature reduction* and outputs a *sparse model*. It can be used in SGD with the argument ``penalty="l1"``.
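A minimal sketch of that sparsifying effect of ``penalty="l1"``, on hypothetical data where only the first of three features matters (data and parameter values are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

# Hypothetical data: only the first feature carries signal;
# the L1 penalty should drive the useless weights toward zero.
rng = np.random.default_rng(1)
X_toy = rng.standard_normal((500, 3))
y_toy = 4.0 * X_toy[:, 0] + 0.1 * rng.standard_normal(500)

sgd_l1 = SGDRegressor(penalty='l1', alpha=0.1, max_iter=2000, random_state=0)
sgd_l1.fit(X_toy, y_toy)
print(sgd_l1.coef_)  # first weight stays large, the others shrink toward zero
```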
## 4.3 Elastic Net
Elastic Net combines Ridge and Lasso, weighing the contribution of each norm (L1 and L2) with the hyperparameter $r$, and the contribution of the regularization to the loss function with $\lambda$:
$\mathcal{L}(\mathbf{w}) = MSE(\mathbf{w}) + r \lambda|| \mathbf{w} ||_1 + \frac{1-r}{2} \lambda|| \mathbf{w} ||_2^2$,
```
from sklearn.linear_model import SGDRegressor, ElasticNet, Lasso, Ridge
sgd_reg = SGDRegressor()
ridge_reg = Ridge(alpha=0.1)
lasso_reg = Lasso(alpha=0.1)
ela_reg = ElasticNet(alpha=0.1,l1_ratio=0.5)
# prep the data again
x = np.asarray(date_year[0:4*365]).reshape(-1,1)
y = np.asarray(ue[0:4*365]).reshape(-1,1)
x = x-np.min(x)
G = np.c_[np.ones((len(x),1)),x]
scale = (np.max(y)-np.min(y)) # minmax scaling
# Fit
sgd_reg.fit(x,y)
ridge_reg.fit(x,y)
lasso_reg.fit(x,y)
ela_reg.fit(x,y)
# make prediction
y_sgd=sgd_reg.predict(x)
y_sridge=ridge_reg.predict(x)
y_lasso=lasso_reg.predict(x)
y_ela=ela_reg.predict(x)
w_sgd=[sgd_reg.intercept_[0],sgd_reg.coef_[0]]
w_ridge=[ridge_reg.intercept_[0],ridge_reg.coef_[0]]
w_lasso=[lasso_reg.intercept_[0],lasso_reg.coef_[0]]
w_ela=[ela_reg.intercept_[0],ela_reg.coef_[0]]
print(w_sgd,w_ridge,w_lasso,w_ela)
fig,ax=plt.subplots(1,1)
ax.plot(x,y);ax.grid(True)
ax.plot(x,G.dot(w_sgd))
ax.plot(x,G.dot(w_ridge))
ax.plot(x,G.dot(w_lasso))
ax.plot(x,G.dot(w_ela))
ax.legend(['data','SGD','Ridge','Lasso','Elastic'])
# perform the regressions
plot_learning_curves(sgd_reg, x.ravel(), y.ravel(),"r-+","r")
plot_learning_curves(ridge_reg, x.ravel(), y.ravel(),"g-+","g")
plot_learning_curves(lasso_reg, x.ravel(), y.ravel(),"m-+","m")
plot_learning_curves(ela_reg, x.ravel(), y.ravel(),"y-+","y")
plt.ylim([0,6])
plt.xlim([0,30])
```
We see that at least 10 samples are needed in the training set for the models to generalize reasonably well. We also see that all of the regularization mechanisms yield seemingly similar behavior during training. After a sufficient number of samples, the validation loss drops below the training loss.
**model complexity**
Now we will try and fit the step in the data.
```
x = np.asarray(date_year[3*365:4*365]).reshape(-1,1)
y = np.asarray(ue[3*365:4*365]).reshape(-1,1)
x = x-np.min(x)
# G = np.c_[np.ones((len(x),1)),x]
scale = (np.max(y)-np.min(y)) # minmax scaling
newy = y / scale
plt.plot(x,newy*scale);plt.grid(True)
```
The data looks complex, with the superposition of a linear trend and oscillatory signals. Let's fit a general polynomial form. We will start with a simple model.
```
from sklearn.preprocessing import PolynomialFeatures
#Let's start with a simple model
poly_features = PolynomialFeatures(degree=2)
G = poly_features.fit_transform(x) # G now contains the original feature of X plus the power of the features.
ridge_reg = Ridge(alpha=0.1)
ridge_reg.fit(G,y)
y_ridge=ridge_reg.predict(G)
print(G.shape)
plt.plot(x,y);plt.grid(True)
plt.plot(x,y_ridge)
plot_learning_curves(ridge_reg, G.ravel(), y.ravel(),"b-+","b");plt.xlim([0,100])
# Let's make it complex
poly_features = PolynomialFeatures(degree=400)
G2 = poly_features.fit_transform(x) # G now contains the original feature of X plus the power of the features.
ridge_reg2 = Ridge(alpha=0.001)
ridge_reg2.fit(G2,y)
y_ridge2=ridge_reg2.predict(G2)
fix,ax=plt.subplots(1,2,figsize=(20,8))
ax[0].plot(x,y);ax[0].grid(True)
ax[0].plot(x,y_ridge,"m",linewidth=3)
ax[0].plot(x,y_ridge2,"y",linewidth=3)
# ax[0].set_ylim([-10,20])
ax[0].set_ylabel('East displacement (mm) at AC29')
plot_learning_curves(ridge_reg, G.ravel(), y.ravel(),"m-+","m");#plt.xlim([0,200])
plot_learning_curves(ridge_reg2, G2.ravel(), y.ravel(),"y-+","y");#plt.xlim([0,200])
plt.ylim([2,4])
```
# 5) Early stopping
In gradient descent, learning means that we "train" the algorithm iteratively. As we keep training, the model fits the training data better and better, and can eventually start to overfit it.
Another strategy to regularize the learning is to stop training as soon as the validation error reaches a minimum. Now instead of looking at the errors as a function of training size, we look at them as a function of epoch.
```
from sklearn.base import clone
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
x = np.asarray(date_year[:]).reshape(-1,1)
x=x-np.min(x)
y = np.asarray(uv[:]).reshape(-1,1)
X_train, X_val, y_train, y_val = train_test_split(x, y, test_size=0.3,random_state=42)
# use the Pipeline function from sklearn to get prepare your data.
poly_scaler = Pipeline([
("poly_features", PolynomialFeatures(degree=50)),
("std_scaler", StandardScaler()) ])
X_train_poly_scaled = poly_scaler.fit_transform(X_train)
X_poly = poly_scaler.fit_transform(x)
X_val_poly_scaled = poly_scaler.transform(X_val)
# set the gradient with a single iteration since we will iterate over epochs.
# warm_start=True says that you should keep the previous state of the model to retrain.
sgd_reg = SGDRegressor(max_iter=1, tol=-np.inf, warm_start=True,
penalty=None, learning_rate="constant", eta0=0.0005)
minimum_val_error = float("inf")
best_epoch = None
best_model = None
val_error=np.zeros(1000)
train_error=np.zeros(1000)
for epoch in range(1000):
sgd_reg.fit(X_train_poly_scaled, y_train.ravel()) # continues where it left off
y_val_predict = sgd_reg.predict(X_val_poly_scaled)
y_train_predict = sgd_reg.predict(X_train_poly_scaled)
val_error[epoch] = mean_squared_error(y_val, y_val_predict)
train_error[epoch] = mean_squared_error(y_train, y_train_predict)
if val_error[epoch] < minimum_val_error: # you will stop and save the best model
minimum_val_error = val_error[epoch]
best_epoch = epoch
best_model = clone(sgd_reg)
best_y = sgd_reg.predict(X_poly)
fig,ax=plt.subplots(1,2,figsize=(16,6))
ax[0].plot(x,y);
ax[0].plot(x,best_y)
ax[1].plot(np.arange(1000),val_error)
ax[1].plot(np.arange(1000),train_error)
plt.legend(["validation error","training error"])
plt.xlim([0,100]);plt.ylim([0,30])
```
You may also consider the parameter ``early_stopping=True`` in SGD to automatically implement early stopping and deal with overfitting.
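A minimal sketch of that built-in option, on hypothetical toy data (parameter values are illustrative only):

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

# Hypothetical toy data; the parameter values below are illustrative only.
rng = np.random.default_rng(0)
X_toy = rng.uniform(0, 1, (400, 1))
y_toy = 2 + 3 * X_toy.ravel() + 0.1 * rng.standard_normal(400)

# early_stopping=True holds out validation_fraction of the training data and
# stops once the validation score stops improving for n_iter_no_change epochs.
sgd_es = SGDRegressor(early_stopping=True, validation_fraction=0.2,
                      n_iter_no_change=5, max_iter=1000, random_state=42)
sgd_es.fit(X_toy, y_toy)
print(sgd_es.n_iter_)  # epochs actually run, typically well below max_iter
```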
# 6) Training Classification algorithms
Last week, we explored the ***logistic regression***, a classification method to estimate the probability that an instance belongs to a particular class. Here we take the example of a binary classification. The logistic regression estimates the probability that an instance belongs to the positive class; if that probability is above a threshold, the instance is classified in the positive class. The probability is estimated using a **logistic sigmoid function**:
$\sigma(x) = \frac{1}{1+ \exp(-x)}$
Training a logistic regression means tuning the model such that the output score is low for a negative instance and high for a positive instance. The loss function associated with logistic regression is based on $-\log(x)$, which is very large for values of $x$ close to 0 and close to zero for values of $x$ near 1. The cost function over a batch of $m$ instances is the average of the individual instance cost functions, which is called the **Log Loss**:
$ \mathcal{L}(\mathbf{w}) = - \frac{1}{m} \sum_{i=1}^m \left[ y_i \log(\hat{p}_i(\mathbf{w})) + (1 - y_i) \log(1-\hat{p}_i(\mathbf{w}))\right] $,
where $m$ is the number of instances, $\hat{p}_i = \sigma(\mathbf{w}^T\mathbf{x}_i)$ is the probability output by the model for instance $\mathbf{x}_i$, and $y_i$ is the class of the instance. The log loss is differentiable with respect to the model parameters, so one can use Gradient Descent to optimize them.
In Scikit-learn, ``LogisticRegression`` is equivalent to training a logistic regression with a log loss, ``SGDClassifier(loss='log')``.
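A small numerical sketch of the sigmoid and the Log Loss formula above, on a hypothetical batch (labels and scores are invented for illustration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical batch of binary labels and model scores.
y_batch = np.array([1, 0, 1, 1])
scores = np.array([2.0, -1.5, 0.3, 3.0])
p_hat = sigmoid(scores)

# Log Loss: average of the per-instance costs, as in the formula above.
log_loss = -np.mean(y_batch * np.log(p_hat) + (1 - y_batch) * np.log(1 - p_hat))
print(log_loss)
```

Confident, correct predictions (e.g. score 3.0 for a positive instance) contribute little to the loss; the uncertain prediction (score 0.3) dominates it.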
The K-class version of logistic regression is the ***softmax or multinomial regression***. The softmax regression model first computes scores $s_k$ for each class, which are computed using a simple linear regression prediction. The probabilities are calculated using the softmax function:
$\hat{p}_k = \sigma(s_k) = \frac{\exp(s_k)}{ \sum_{i=1}^K \exp(s_i)}$
An appropriate loss function to use is called ***Cross Entropy*** cost function:
$ \mathcal{L}(\mathbf{w}) = - \frac{1}{m} \sum_{i=1}^m \sum_{k=1}^K y_{i,k} \log(\hat{p}_{i,k}(\mathbf{w})) $.
The rest of the training uses similar tricks to regression model training. The performance metrics are precision, recall, F1 score, etc., as seen in previous notes.
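A quick numerical sketch of the softmax function above, with hypothetical class scores:

```python
import numpy as np

def softmax(s):
    e = np.exp(s - np.max(s))  # subtract the max for numerical stability
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # hypothetical per-class scores s_k
p = softmax(scores)
print(p, p.sum())  # class probabilities summing to 1
```

The highest score always receives the highest probability, and subtracting the maximum score before exponentiating leaves the result unchanged while avoiding overflow.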
# Checklist for training an ML model
1. Set the test set aside.
2. Initialize model parameters for optimizer (e.g. SGD)
3. Identify and define machine learning methods
4. Define the Loss Function
There are loss functions for classification (most of them use logs) and for regressions (they may use exponentials). Follow the documentation of your ML API: https://keras.io/api/losses/, https://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics, https://pytorch.org/docs/stable/nn.html#loss-functions
5. Define the optimization algorithm
The most popular optimizer algorithms compute the first derivative (gradient) of the loss functions. They include Gradient Descent, Momentum, Adagrad, RMSProp, Adam.
6. Model training
Prepare the folds for K-fold cross validation. Scale the data.
Define the model parameters in a dictionary. Define the number of epochs, learning rate, batch size.
For each fold:
Initialize the model parameters.
for each epoch (iteration), train the algorithm on a minibatch of training examples. Training consists in 1) passing the training data through our model to obtain a set of predictions, 2) calculating the loss, 3) computing the gradient (either known, or using backward passes in neural networks), and 4) updating the model parameters using an optimization algorithm (e.g. Stochastic Gradient Descent).
7. Fine tune the training
Compute learning rate as a function of training size to get a sense for the batch size desired to properly train.
Compute the validation and training error as a function of epochs. Find the minimum of the validation error and stop the training there.
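A minimal sketch of steps 5 and 6 above (a K-fold loop with per-fold scaling and re-initialization), using hypothetical toy data and an SGD regressor; names and hyperparameters are illustrative only:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import SGDRegressor
from sklearn.metrics import mean_squared_error

# Hypothetical toy data; model choice and hyperparameters are illustrative.
rng = np.random.default_rng(0)
X_toy = rng.uniform(0, 1, (300, 1))
y_toy = 2 + 3 * X_toy.ravel() + 0.1 * rng.standard_normal(300)

# Model parameters gathered in a dictionary, as suggested in step 6.
params = {'max_iter': 1000, 'tol': 1e-3, 'eta0': 0.01}

fold_errors = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True,
                                random_state=42).split(X_toy):
    scaler = StandardScaler().fit(X_toy[train_idx])        # scale inside each fold
    X_tr = scaler.transform(X_toy[train_idx])
    X_val = scaler.transform(X_toy[val_idx])
    model = SGDRegressor(**params, random_state=0)         # re-initialize per fold
    model.fit(X_tr, y_toy[train_idx])
    fold_errors.append(mean_squared_error(y_toy[val_idx], model.predict(X_val)))

print(np.mean(fold_errors))
```

Fitting the scaler only on each fold's training split avoids leaking validation data into the preprocessing.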
# Autoregressive models using a feedforward neural network
## PART 2: Applying the methods to health care time series
In this notebook we will use a feedforward neural network to fit a single and ensemble linear and non-linear models to real time series data.
<div class="alert alert-info">
1. Most of the work we will do is data manipulation: preprocessing data and making sure it is the right shape for the neural networks.
2. The ensemble learning method can be computationally expensive. We have included some pre-trained models that can be loaded from file if needed.
</div>
---
**LEARNING OBJECTIVES**
* Learn how to apply feedforward neural networks to real health data.
* Methods to preprocess nn input data.
* Recognise the stochastic nature of neural network training
* Use an ensemble of neural networks to provide a more reliable point forecast
---
# Python dependencies
It is recommended that you use the forecasting course conda environment provided. We are again going to implement neural networks using `tensorflow` and `keras`. You should be using at least `tensorflow` version `2.1.0`.
```
import statsmodels.api as sm
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
# tensorflow imports
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Input, Dense, Flatten, Dropout
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.callbacks import EarlyStopping
from statsmodels.tools.eval_measures import rmse
tf.__version__
```
# Forecasting emergency admissions in England
We will now use feedforward neural networks to predict the number of monthly emergency admissions in England.
## Load the data
**Task**:
* Execute the code below to read the emergency admissions data into pandas
```
url = 'https://raw.githubusercontent.com/health-data-science-OR/data/master' \
+ '/em_admits_ts.csv'
em_admits = pd.read_csv(url)
em_admits.head(3)
em_admits.shape
```
## 2.3. Preprocessing
### 2.3.1 Datetime format
Notice that the `month_year` column in `em_admits` holds a string in an ambiguous date format, e.g. 'Aug-10'. Pandas cannot handle this as-is because '10' could refer to any century! So let's do a bit of preprocessing to get it into a valid datetime format.
*Optional Task:*
* Take some time to understand the code that preprocesses the dates. This is real health data and it is likely you will need to deal with formatting issues as experienced here.
First we will format the string to something pandas can parse i.e. 'Aug 2010'. Then we will call the `pd.to_datetime()` function to parse the string and return a `datetime`. We will assign the result to our dataframe's index and set the freq to monthly start 'MS'
```
date_str = em_admits['month_year'].str[:3] + ' 20' \
+ em_admits['month_year'].str[-2:]
date_str.name = 'date'
em_admits = em_admits.set_index(pd.to_datetime(date_str))
em_admits.index.freq = 'MS'
em_admits = em_admits.drop(columns=['month_year'])
em_admits.head()
```
## Visualise the training data
We will be forecasting the last 12 months of the series. Let's take a look at the training data (being careful to exclude the last 12 months)
```
holdout_length = 12
em_admits[:len(em_admits)-holdout_length].plot(figsize=(12,4));
```
## Calendar adjustment
This is monthly data, so a useful preprocessing step is to transform the data into a daily rate by dividing by the number of days in the month. When we plot this, the troughs we saw each February disappear.
**Execute the code below, which:**
* Calculates the average admissions per day series
* Plots the training data (holding back 12 months for testing)
```
admit_rate = em_admits['em_admits'] / em_admits.index.days_in_month
admit_rate[:len(admit_rate)-12].plot(figsize=(12,4));
```
# **Exercise 1**: Convert the time series to format suitable for supervised learning.
The function `sliding_window` has been provided below for you to create your training data.
**Task**:
* Using a sliding window approach convert the time series into a tabular format.
* Use a window size of 12 and assume you are predicting a scalar value of y (1-step ahead).
* Conduct a train test split holding back 12 windows as a test set.
```
def sliding_window(train, window_size=2, horizon=1):
'''
sliding window.
Parameters:
--------
train: array-like
training data for time series method
window_size: int, optional (default=2)
lookback - how much lagged data to include.
horizon: int, optional (default=1)
number of observations ahead to predict
Returns:
array-like, array-like
preprocessed X, preprocessed Y
'''
tabular_X = []
tabular_y = []
for i in range(0, len(train) - window_size - horizon):
X_train = train[i:window_size+i]
y_train = train[i+window_size+horizon-1]
tabular_X.append(X_train)
tabular_y.append(y_train)
return np.asarray(tabular_X), np.asarray(tabular_y).reshape(-1, )
# your code here...
# get data in tabular format for NN
#X_data, y_data = sliding_window(...)
#you will need to use these variable names for the next section.
#X_train, y_train, X_test, y_test = ... train test split code
# example solution
def ts_train_test_split(*arrays, train_size):
    '''
    time series train test split
    Parameters:
    arrays: one or more array-likes
        arrays to split in the same way (e.g. X and y)
    train_size: int
        number of observations to place in the training set
    Returns:
        tuple of (train, test) pairs, one pair per input array
    '''
    results = ()
    for a in arrays:
        results += a[:train_size], a[train_size:]
    return results
WINDOW_SIZE = 12
X_data, y_data = sliding_window(admit_rate,
window_size=WINDOW_SIZE)
#train test split
train_size = len(y_data) - 12
X_train, X_test, y_train, y_test = ts_train_test_split(X_data,
y_data,
train_size=train_size)
```
# Scaling the features and target to be between -1 and 1
In many machine learning applications data are scaled to be between 0 and 1. For neural network forecasting, *Ord, Fildes and Kourentzes (2017)* recommend scaling to be between -1 and 1. This is what we will do here. To do the scaling we will use
```python
sklearn.preprocessing.MinMaxScaler
```
> Execute the code below to transform the data.
```
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(-1, 1))
# scale on training data
scaler.fit(admit_rate.iloc[:-12].to_numpy().reshape(-1, 1))
y_train = scaler.transform(y_train.reshape(-1, 1))
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
y_test = scaler.transform(y_test.reshape(-1, 1))
```
# **Exercise 2**: A Linear regression model benchmark
The first model we will try is the linear model. It will serve as our neural network baseline. (In practice we would also check that it is better than a naive method such as the seasonal naive.)
## Exercise 2a Train the model
**Task:**
* Using `Keras`, construct a neural network that mimics a simple linear regression model (see previous notebook).
* Optional: To get comparable results, set the tensorflow random number seed to 1234
* Train the model for 100 epochs.
* Optionally you can use an early stopping callback with patience set to 10.
```
# your code here ...
def get_linear_model(ws):
'''
Sequential Keras model that minics
AR linear model.
'''
pass
# example answer
def get_linear_model(ws, lr=0.01, metrics=None):
'''
Sequential Keras model that minics
AR linear model.
'''
if metrics is None:
metrics = ['mae', 'mse']
model = Sequential([Dense(1, input_shape=(ws,))])
model.compile(loss='mse',
optimizer=Adam(learning_rate=lr),
metrics=metrics)
return model
# set tensorflow random seed for repeatability
tf.random.set_seed(1234)
N_EPOCHS = 100
es = EarlyStopping(monitor='val_loss', patience=10)
#call the linear model function create earlier.
model_lm = get_linear_model(ws=12, metrics=['mae'])
#fit model silently (verbose=0)
results = model_lm.fit(x=X_train,
y=y_train,
epochs=N_EPOCHS,
validation_data=(X_test, y_test),
verbose=0,
callbacks=[es])
plt.plot(results.history['loss'], label='loss')
plt.plot(results.history['val_loss'], label='val_loss')
plt.legend()
```
## Extra: Plot the fitted values
```
plt.plot(scaler.inverse_transform(y_train), label='ground truth')
plt.plot(scaler.inverse_transform(model_lm.predict(X_train)), label='NN fitted')
plt.legend();
```
## Exercise 2b. Generate and evaluate a multi-step forecast
**Task:**
* Using the iterative method produce a 12 step forecast. Save the predictions in a variable called `y_preds_lm`
* Calculate the RMSE
* Optional: Plot the results -> predictions versus validation.
**Hints:**
* A function `autoregressive_iterative_forecast` is provided below. (you could use this function or write your own if you prefer!)
```
def autoregressive_iterative_forecast(model, exog, h):
'''
h-step forecast for an autoregressive
model using the iterative prediction method.
Conduct h one-step forecasts gradually
replacing ground truth autoregressive X
values with predictions.
Parameters:
------
model: forecast object
model that has a .predict(h) interface
exog: array-like
initial vector of lagged values (X)
h: int
forecast horizon. assumed to be > 0
Returns:
------
numpy.ndarray
y_predictions
'''
y_preds = []
current_X = exog
for i in range(h):
y_pred = model.predict(current_X.reshape(1, -1))[0,0]
y_preds.append(y_pred)
current_X = np.roll(current_X, shift=-1)
current_X[-1] = y_pred
return np.array(y_preds)
##### your code here ...
# example solution
def plot_nn_prediction_results(model, X_train, y_train, y_test, y_preds):
'''
utility function to plot the results of the prediction
'''
#create series
fitted_values = scaler.inverse_transform(model.predict(X_train))
ground_truth = scaler.inverse_transform(y_train)
ground_truth_val = scaler.inverse_transform(y_test)
padding = np.full(len(fitted_values), np.nan)
validation = np.concatenate([padding.reshape(-1, 1), ground_truth_val])
forecast = np.concatenate([padding.reshape(-1, 1), y_preds])
plt.plot(ground_truth, label='ground truth')
plt.plot(validation, label='test')
plt.plot(fitted_values, label='in-sample', linestyle='-.')
plt.plot(forecast, label='out-of-sample', linestyle='-.')
plt.plot(admit_rate.to_numpy()[12:])
plt.legend();
# predict next 12 months and plot
H = 12
y_preds_lm = autoregressive_iterative_forecast(model_lm, X_test[0], h=H)
y_preds_lm = scaler.inverse_transform(y_preds_lm.reshape(-1, 1))
plot_nn_prediction_results(model_lm, X_train, y_train, y_test, y_preds_lm)
rmse(y_preds_lm, scaler.inverse_transform(y_test))[0]
```
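The `rmse` helper called above is assumed to have been defined earlier in the notebook; judging from the `[0]` indexing, it returns an array-like. A minimal sketch consistent with that usage (for flat sequences; the notebook's version presumably handles 2-D arrays):

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error, returned as a one-element list so that
    rmse(...)[0] yields the scalar, matching the usage above."""
    sq_errors = [(t - p) ** 2 for t, p in zip(y_true, y_pred)]
    return [math.sqrt(sum(sq_errors) / len(sq_errors))]

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # [0.0]
```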
# **Exercise 3:** Training a non-linear deep network
Now that you have the basic structure and mechanics of the code you need for forecasting, let's build a more complex model and compare its RMSE on the validation set with that of your simple linear model.
**Task:**
* Create a new neural network model with 2 hidden layers
* Try 32 and 64 neurons for layer 1 and 2 respectively
* Use a ReLU activation function.
* Use the Adam optimiser with a learning rate of 0.01
* Predict the next 12 months ahead
* Calculate the RMSE
**Hints:**
* Feel free to experiment with the number of hidden layers, neurons and learning rate.
* Perhaps try a dropout layer(s) if you feel your model is overfitting.
* Set a tensorflow random seed if you want to be able to reproduce your results e.g. 45676
```
# your code here ...
def get_network_model(ws, n_neurons_l1=32, n_neurons_l2=64,
include_layer_two=False, include_drop_out=False,
drop_out_rate=0.2, lr=0.01, metrics=None):
'''
A function to allow quick changing of network parameters
'''
if metrics is None:
metrics = ['mse']
model = Sequential()
model.add(Flatten(input_shape=(ws,)))
model.add(Dense(n_neurons_l1, activation='relu'))
if include_layer_two:
model.add(Dense(n_neurons_l2, activation='relu'))
if include_drop_out:
model.add(Dropout(drop_out_rate))
model.add(Dense(1))
model.compile(loss='mse',
optimizer=Adam(learning_rate=lr),
metrics=metrics)
return model
# set tensorflow random seed
tf.random.set_seed(45676)
N_EPOCHS = 100
es = EarlyStopping(monitor='loss', patience=10)
#single layer nn
mlp = get_network_model(ws=12, n_neurons_l1=5, include_layer_two=True,
n_neurons_l2=32)
#fit model silently
results_mlp = mlp.fit(x=X_train,
y=y_train,
epochs=N_EPOCHS,
verbose=0, callbacks=[es])
# predict next 12 months and plot
H = 12
y_preds_mlp = autoregressive_iterative_forecast(mlp, X_test[0], h=H)
y_preds_mlp = scaler.inverse_transform(y_preds_mlp.reshape(-1, 1))
plot_nn_prediction_results(mlp, X_train, y_train, y_test, y_preds_mlp)
rmse_lm = rmse(scaler.inverse_transform(y_test), y_preds_lm)[0]
rmse_mlp = rmse(scaler.inverse_transform(y_test), y_preds_mlp)[0]
print(f'rmse lm: {rmse_lm:.2f}\nrmse mlp: {rmse_mlp:.2f}')
# try changing the network parameters to see the impact on the
# rmse relative to the linear model
# set tensorflow random seed
tf.random.set_seed(45676)
#tf.random.set_seed(1234)
N_EPOCHS = 100
H = 12
es = EarlyStopping(monitor='loss', patience=10)
#single layer nn
mlp = get_network_model(ws=12,
n_neurons_l1=5,
include_layer_two=True,
n_neurons_l2=32,
lr=0.1)
#fit model silently
history = mlp.fit(x=X_train,
y=y_train,
epochs=N_EPOCHS,
verbose=0, callbacks=[es])
y_preds_mlp = autoregressive_iterative_forecast(mlp, X_test[0], h=H)
y_preds_mlp = scaler.inverse_transform(y_preds_mlp.reshape(-1, 1))
rmse_lm = rmse(scaler.inverse_transform(y_test), y_preds_lm)[0]
rmse_mlp = rmse(scaler.inverse_transform(y_test), y_preds_mlp)[0]
print(f'rmse lm: {rmse_lm:.2f}\nrmse mlp: {rmse_mlp:.2f}')
plot_nn_prediction_results(mlp, X_train, y_train, y_test, y_preds_mlp)
```
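The iterative strategy itself does not depend on Keras. A pure-Python toy that mimics the roll-and-replace loop of `autoregressive_iterative_forecast`, using a stand-in 'model' that predicts the mean of its lag window (all names here are illustrative):

```python
def iterative_forecast(predict_one, lags, h):
    """h-step forecast: predict one step, then shift the lag window
    left and append the prediction (the np.roll + assign step above)."""
    window = list(lags)
    preds = []
    for _ in range(h):
        y_hat = predict_one(window)
        preds.append(y_hat)
        window = window[1:] + [y_hat]  # drop oldest lag, append newest prediction
    return preds

# stand-in model: next value = mean of the current window
mean_model = lambda w: sum(w) / len(w)
print(iterative_forecast(mean_model, [1.0, 2.0, 3.0], h=3))
```

Each later step is predicted partly from earlier predictions, which is why multi-step errors can compound with this method.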
# Ensemble Learning
In all of the examples above we have been setting a random seed for tensorflow. This 'suggests' that if we used a different random number seed we would get a slightly different result (due to both the random initialisation of weights/biases and stochastic gradient descent). Neural networks are extremely flexible and have many parameters. This leads to one of the key challenges with neural networks: overfitting. There are multiple ways to deal with overfitting. In forecasting, a common approach is to use an **ensemble** of models.
In an ensemble we train multiple models and combine their individual forecasts.
## Training an ensemble
We will train an ensemble of neural networks that mimic a linear model.
The code below has been provided for you to work through.
* We set some parameters, e.g. the number of models in the ensemble: 20 to 30 should be plenty.
* We use a python loop to create and train each model and store the model in a python list.
* Optionally we can save the models to file and load pre-trained versions at a later date.
* To predict, we then need to loop through the collection of models.
```
def load_pretrained_ensemble(n_models):
'''
Load the pre-trained ensemble models (only use if they exist!)
'''
models = []
url = '/input'
for n in range(n_models):
model_n = tf.keras.models.load_model(f'{url}/ensemble_model_{n}.h5')
models.append(model_n)
return models
# script to train the models.
################# Parameters for the ensemble #################################
# set random seed so that ensemble can be repeated.
tf.random.set_seed(1085)
# number of models to create...
N_MODELS = 20
# max no. of epochs for training of each model.
N_EPOCHS = 100
# no. of autoregressive lags
WINDOW_SIZE = 12
# early stopping regularization
es = EarlyStopping(monitor='loss', patience=10)
# I've pre-trained 50 models; you can load them from file if you want.
LOAD_FROM_FILE = False
###############################################################################
if LOAD_FROM_FILE:
#it will take a few seconds to load.
models = load_pretrained_ensemble(N_MODELS)
else:
models = []
for n in range(N_MODELS):
#single layer nn
model_n = get_linear_model(WINDOW_SIZE)
#fit model silently (verbose=0)
history = model_n.fit(x=X_train,
y=y_train,
epochs=N_EPOCHS,
verbose=0,
callbacks=[es],
batch_size=32)
#this will overwrite pre-trained models.
model_n.save(f'input/ensemble_model_{n}.h5')
models.append(model_n)
```
### Predictions in an ensemble
In an ensemble, we predict in a loop. In python this is straightforward: we simply loop through the models we have trained and call `autoregressive_iterative_forecast`. We will store the predictions of each forecast in a python `list` called `e_preds`
<div class="alert alert-info">
In an ensemble we end up with a distribution of forecasts! For point forecasts we could then take the median of the forecasts. We can also get a measure of variability in the forecasts by calculating the quantiles.
</div>
```
# create the forecasts
# this code will take a few seconds to execute
H = 12
e_preds = []
for model in models:
y_preds = autoregressive_iterative_forecast(model, X_test[0], h=H)
e_preds.append(y_preds)
e_preds = np.array(e_preds)
```
Inverse transform the data and calculate the median and the 2.5 and 97.5 percentiles of the point forecasts.
Remember we can use `scaler.inverse_transform()`
```
e_preds = np.asarray(e_preds)
e_preds_tran = scaler.inverse_transform(e_preds).T
y_preds_mdn = np.percentile(e_preds_tran.T, 50, axis=0)
y_preds_2_5 = np.percentile(e_preds_tran.T, 2.5, axis=0)
y_preds_97_5 = np.percentile(e_preds_tran.T, 97.5, axis=0)
y_preds_mdn.shape
# plot the individual forecasts and the median
fig,ax = plt.subplots(1, 2, sharey=True, figsize=(12, 4))
ax[0].plot(e_preds_tran)
ax[0].plot(scaler.inverse_transform(y_test), label='test', linestyle='--',
color='red')
ax[0].plot(y_preds_mdn, label='median', linestyle='-', color='black')
ax[0].legend()
ax[0].set_title(f'Point forecasts: {N_MODELS} models')
ax[1].plot(scaler.inverse_transform(y_test), label='test', linestyle='--',
color='red')
ax[1].plot(y_preds_mdn, label='median', linestyle='-', color='black')
ax[1].plot(y_preds_2_5, label='0.025 percentile', linestyle='-.', color='black')
ax[1].plot(y_preds_97_5, label='0.975 percentile', linestyle='--', color='black')
#ax[1].plot(y_preds_lm, label='original lmforecast', linestyle='--', color='green')
ax[1].set_title(f'Middle 95% of point forecasts ')
ax[1].legend();
rmse_lm = rmse(scaler.inverse_transform(y_test), y_preds_lm)[0]
rmse_mdn = rmse(scaler.inverse_transform(y_test), y_preds_mdn)[0]
print(f'rmse lm: {rmse_lm:.2f}\nrmse ensemble: {rmse_mdn:.2f}')
rmse_25 = rmse(scaler.inverse_transform(y_test), y_preds_2_5)[0]
rmse_75 = rmse(scaler.inverse_transform(y_test), y_preds_97_5)[0]
print(f'95% of linear models will have rmse between: {rmse_75:.2f} - {rmse_25:.2f}')
```
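To see why the median is a sensible combination rule, here is a tiny illustration with made-up numbers (not the admissions data): an outlier member pulls the mean around, while the median stays close to the bulk of the forecasts.

```python
import statistics

truth = 10.0
# forecasts of one horizon step from five hypothetical ensemble members,
# one of which has badly overfitted
member_forecasts = [9.8, 10.1, 9.9, 10.2, 14.0]

mean_error = abs(statistics.mean(member_forecasts) - truth)
median_error = abs(statistics.median(member_forecasts) - truth)
print(mean_error, median_error)  # the median error is much smaller
```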
**Question**: Is the ensemble approach useful? What does it tell us about our original linear model?
## Exercise 4: Create an ensemble of non-linear models.
Is the two layer model more accurate than the simple linear regression model and its ensemble counterpart?
**Task:**
* Create an ensemble of 20 models.
* Each model should be based on your solution to exercise 2 (e.g. a neural network with 2 hidden layers)
* Optional: save your models to file. (recommended)
* Forecast the next 12 periods.
* Calculate the RMSE of the forecast.
**Hints:**
* You have **all of the code** you need to complete this task!
* Remember to back transform your forecasts
* Use the median of the ensemble.
* Look carefully at the previous ensemble example.
**Questions**
* Which out of the simple linear, multi-layer and ensemble models do you think is best in this instance?
```
# your code here ...
# set tensorflow random seed for repeatability
tf.random.set_seed(1066)
N_MODELS = 20
N_EPOCHS = 100
H = 12
es = EarlyStopping(monitor='loss', patience=10)
BATCH_SIZE = 32
models = []
for n in range(N_MODELS):
#multi-layer model
model_n = get_network_model(ws=12,
n_neurons_l1=5,
include_layer_two=True,
n_neurons_l2=32,
lr=0.1)
#fit model silently
history = model_n.fit(x=X_train,
y=y_train,
epochs=N_EPOCHS,
verbose=0,
batch_size=BATCH_SIZE)
#this will overwrite pre-trained models.
model_n.save(f'output/mlp_ensemble_{n}.h5')
models.append(model_n)
# this code will take a few seconds to execute
H = 12
e_preds = []
for model in models:
y_preds = autoregressive_iterative_forecast(model, X_test[0], h=H)
e_preds.append(y_preds)
e_preds = np.array(e_preds)
e_preds = np.asarray(e_preds)
e_preds_tran = scaler.inverse_transform(e_preds).T
y_preds_mdn = np.percentile(e_preds_tran.T, 50, axis=0)
y_preds_2_5 = np.percentile(e_preds_tran.T, 2.5, axis=0)
y_preds_97_5 = np.percentile(e_preds_tran.T, 97.5, axis=0)
y_preds_mdn.shape
# plot the individual forecasts and the median
fig,ax = plt.subplots(1, 2, sharey=True, figsize=(12, 4))
ax[0].plot(e_preds_tran)
ax[0].plot(scaler.inverse_transform(y_test), label='test', linestyle='--',
color='red')
ax[0].plot(y_preds_mdn, label='median', linestyle='-', color='black')
ax[0].legend()
ax[0].set_title(f'Point forecasts: {N_MODELS} models')
ax[1].plot(scaler.inverse_transform(y_test), label='test', linestyle='--',
color='red')
ax[1].plot(y_preds_mdn, label='median', linestyle='-', color='black')
ax[1].plot(y_preds_2_5, label='0.025 percentile', linestyle='-.', color='black')
ax[1].plot(y_preds_97_5, label='0.975 percentile', linestyle='--', color='black')
#ax[1].plot(y_preds_lm, label='original lmforecast', linestyle='--', color='green')
ax[1].set_title(f'Middle 95% of point forecasts ')
ax[1].legend();
rmse_mlp = rmse(scaler.inverse_transform(y_test), y_preds_mlp)[0]
rmse_mdn = rmse(scaler.inverse_transform(y_test), y_preds_mdn)[0]
print(f'rmse mlp: {rmse_mlp:.2f}\nrmse ensemble: {rmse_mdn:.2f}')
```
## Optional Extra exercise for you to think about.
* How would you use an ensemble method with a model that predicts a vector?
# End of lab
<a href="https://colab.research.google.com/github/Dmitri9149/Transformer_From_Scratch/blob/main/Final_Working_Transformer_MXNet_76800_128_22_10_20.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install -U mxnet-cu101==1.7.0
!pip install d2l==0.14.4
### !pip install ipython-autotime
### %load_ext autotime
import math
import collections
import os
import time
from d2l import mxnet as d2l
from mxnet import np, npx, init, gluon, autograd
from mxnet.gluon import nn
npx.set_np()
```
The code for the Transformer from scratch is collected here. The code is mostly from http://d2l.ai/chapter_attention-mechanisms/transformer.html . I have added comments at the most difficult points. I hope the additional code and comments will help with a better understanding of the Transformer.
This is the original article for the Transformer :
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., … Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems (pp. 5998–6008).
Future work:
1. Train the Transformer on a big data set.
2. Translation from (to) English to Finnish.
3. Modify the architecture of the Transformer.
4. Better tokenization and preprocessing.
### Attention Mechanism
#### Masked softmax
This is an important auxiliary function.
""" The masked softmax takes a 3-dimensional input and enables us to filter out some elements by specifying a valid length for the last dimension.... As a result, any value outside the valid length will be masked as 0.""" (citation from d2l.ai).
The notion of valid length comes from the need to add a special <pad> token if a sentence is shorter than the length we use for all sentences in a batch. The <pad> tokens do not participate in prediction.
My comments start with ###;
comments with a single # are from the original d2l.ai code.
Some functions for plotting and for downloading specific files from specific places are still taken from the d2l.ai library on GitHub: https://github.com/d2l-ai/d2l-en/blob/master/d2l/mxnet.py But the biggest part of the code is collected here (and commented).
```
### from d2l.ai
def masked_softmax(X, valid_len):
"""Perform softmax by filtering out some elements."""
# X: 3-D tensor, valid_len: 1-D or 2-D tensor
### why a 3-D tensor?
### first dimension: the number of samples in the batch
###
### second dimension: the number of queries --
### we may have several queries
###
### we may set the valid lengths to be the same for every sample in the
### batch, i.e. a 1-D valid_len of shape (batch_size,),
### meaning: independent of the queries.
### On the contrary, we may set the valid lengths individually for every
### sample in the batch and for every query;
### in this case it is a 2-D valid_len
### of shape (batch_size, number of queries)
###
### the third dimension corresponds to the number of key/value pairs
###
### We need the valid_len when: 1. we <pad> the end of a sentence that is
### too short, i.e. shorter than num_steps; 2. in the decoder during
### training, where every word of the target sentence is used as a query:
### the query may see all the words to its left, but not to the right (see
### the encoder/decoder code below). To handle that case we use valid_len
### too.
###
if valid_len is None:
return npx.softmax(X)
else:
shape = X.shape
if valid_len.ndim == 1:
valid_len = valid_len.repeat(shape[1], axis=0)
else:
valid_len = valid_len.reshape(-1)
# Fill masked elements with a large negative, whose exp is 0
X = npx.sequence_mask(X.reshape(-1, shape[-1]), valid_len, True,
axis=1, value=-1e6)
return npx.softmax(X).reshape(shape)
### from d2l.ai
masked_softmax(np.random.uniform(size=(2, 2, 4)), np.array([2, 3]))
### 2 - number of samples in the batch
### 2 - we deal with 2 queries
### 4 - four key/value pairs
### for the first sample in our batch, of the 4 pairs we take into account
### only the results from the first 2 pairs; the rest are multiplied by 0,
### because those pairs correspond to <pad> tokens
### for the second sample (4 key/value pairs) we take into account
### only the results from the first 3 key/value pairs (the rest are masked
### with 0, because they correspond to <pad> tokens)
### this is the meaning of np.array([2, 3]) as the valid length
### the valid length does not depend on the queries in this case
### from d2l.ai
npx.batch_dot(np.ones((2, 1, 3)), np.ones((2, 3, 2)))
### one more example with 1-D valid length
valid_length = np.array([2,3])
### the shape is (2,) : one-dimensional length
print('valid_length shape= ', valid_length.shape)
masked_softmax(np.random.uniform (size =(2, 3, 5)), valid_length )
### if we declare 2-D valid_length
valid_length = np.array([[3, 5, 4], [2,4, 1], [1,4, 3],[1,2,3]])
print('valid_length shape= ', valid_length.shape)
masked_softmax(np.random.uniform(size = (4, 3, 5)), valid_length)
### Let us consider the first sample in our batch
### [[0.21225105, 0.31475353, 0.4729953 , 0. , 0. ,
### 0. ],
### [0.19417836, 0.20596693, 0.16711308, 0.15453914, 0.27820238,
### 0. ],
### [0.2753876 , 0.21671425, 0.30811197, 0.19978616, 0. ,
### 0. ]],
### the third dimension in np.random.uniform(size = (4, 3, 5)) corresponds
### to 5 key/value pairs (that is why each line has length 5)
### the second dimension in np.random.uniform(size = (4, 3, 5)) means the
### results are obtained from 3 queries, which is why there are 3 lines per
### sample
###
### Below we can see there are 4 groups, because the first dimension, the
### number of samples (the batch size), is 4
###
### np.array([[3, 5, 4], [2,4, 1], [1,4, 3],[1,2,3]])
### is a 2-D array (of size 4 * 3 in our case):
### 4 is the batch size, 3 is the number of queries; we have 4 groups with
### 3 lines each. The [3,5,4] subarray corresponds to the first sample in
### the batch: in the first group, the first line has its first 3 elements
### non-zero, the second line its first 5, and the third line its first 4.
```
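For readers without MXNet, the same masking logic can be sketched in plain NumPy (1-D valid lengths only; this is an illustrative re-implementation, not the d2l.ai function):

```python
import numpy as np

def masked_softmax_np(X, valid_len):
    """Softmax over the last axis, zeroing out positions >= valid_len.
    X: (batch, n_queries, n_kv); valid_len: (batch,)"""
    n_kv = X.shape[-1]
    # mask[i, 0, j] is True where key/value pair j is within sample i's valid length
    mask = np.arange(n_kv)[None, None, :] < valid_len[:, None, None]
    scores = np.where(mask, X, -1e6)  # large negative -> exp underflows to 0
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

out = masked_softmax_np(np.random.uniform(size=(2, 2, 4)), np.array([2, 3]))
print(out)  # last 2 columns of sample 0 and last column of sample 1 are 0
```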
### Dot product attention
#### Why we need it, how it is calculated
We have a query with dimension `d`.
We have #kv_pairs key/value pairs. Every key is a vector of dimension `d` and every value a vector of dimension `dim_v`. We pass the query through the 'grid' of #kv_pairs keys and get #kv_pairs scores: within the pass, we take the dot product of the query with each of the #kv_pairs keys. We also normalize the scores by dividing by $\sqrt{d}$.
If we have a batch of size batch_size and #queries queries, we get a tensor of scores of size (batch_size, #queries, #kv_pairs).
In this way we obtain the attention_weights tensor.
We also have a 'value' tensor of size (batch_size, #kv_pairs, dim_v).
Finally, using npx.batch_dot(attention_weights, value) we get a tensor of size (batch_size, #queries, dim_v), which corresponds to 'passing' our queries through the 'grid' of key/value pairs: for every query and every sample in the batch we get a transformed vector of size dim_v.
```
### from d2l.ai book
class DotProductAttention(nn.Block):
def __init__(self, dropout, **kwargs):
super(DotProductAttention, self).__init__(**kwargs)
self.dropout = nn.Dropout(dropout)
# `query`: (`batch_size`, #queries, `d`)
# `key`: (`batch_size`, #kv_pairs, `d`)
# `value`: (`batch_size`, #kv_pairs, `dim_v`)
# `valid_len`: either (`batch_size`, ) or (`batch_size`, xx)
def forward(self, query, key, value, valid_len=None):
d = query.shape[-1]
# Set transpose_b=True to swap the last two dimensions of key
scores = npx.batch_dot(query, key, transpose_b=True) / math.sqrt(d)
attention_weights = self.dropout(masked_softmax(scores, valid_len))
return npx.batch_dot(attention_weights, value)
if False:
### the code from d2l.ai
atten = DotProductAttention(dropout=0.5)
atten.initialize()
### batch size of 2, #kv_pairs = 10, every key is vector of size 2 with
### ones : (1.,1.)
keys = np.ones((2, 10, 2))
### we start with vector which keep float numbers from 0 to 39;
### reshape it to tensor which model one sample batch with 10 key/value pairs and
### dimension of values dim_v = 4; finally we repeat the construction to get 2
### similar samples (batch with 2 samples).
values = np.arange(40).reshape(1, 10, 4).repeat(2, axis=0)
atten(np.ones((2, 1, 2)), keys, values, np.array([2, 6]))
if False:
atten = DotProductAttention(dropout=0.5)
atten.initialize()
keys = np.ones((3,10,5)) # keys in a batch of size 3; for every sample we have
### 10 key/value pairs, where every key is a 5-dimensional vector (and every
### value will be a 7-dimensional vector); each key forms a pair with a
### value, and there are 10 such pairs
values = np.arange(70).reshape(1,10,7).repeat(3, axis =0) # values in a batch of
### size 3; 10 values, each a 7-dimensional vector;
### the 3 samples in our batch are identical by construction
queries = np.ones((3,4,5)) # queries in a batch of size 3; there are 4 queries,
### where every query is a vector of size 5 (same size as a key)
atten(queries, keys, values, np.array([3, 8, 6])) # batch of size 3;
### 4 queries per sample, where every query is a vector of size 5
### the valid_len is 1-D:
### for the 3 samples the valid lengths are 3, 8 and 6;
### length 3 for the first sample, ..., length 6 for the last sample
### the outputs are:
### for every entry in the batch (for every one of the 3 samples),
### for every one of the 4 queries --
### a total of 3*4 = 12 final values: vectors of size 7
### the values differ between samples in the batch,
### because we used different valid lengths,
### but within each sample group (same sample, different queries)
### all 4 final values are the same:
### even though we use 4 queries, all the queries are equal in our case
```
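The batched computation itself is only a few lines. A framework-free NumPy sketch of the scaled dot product (no masking or dropout; the shapes are illustrative):

```python
import math
import numpy as np

def dot_product_attention_np(Q, K, V):
    """Q: (b, n_q, d), K: (b, n_kv, d), V: (b, n_kv, dim_v) -> (b, n_q, dim_v)"""
    d = Q.shape[-1]
    scores = Q @ K.transpose(0, 2, 1) / math.sqrt(d)  # (b, n_q, n_kv)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = e / e.sum(axis=-1, keepdims=True)       # softmax over the key axis
    return weights @ V

b, n_q, n_kv, d, dim_v = 2, 1, 10, 2, 4
V = np.arange(b * n_kv * dim_v, dtype=float).reshape(b, n_kv, dim_v)
out = dot_product_attention_np(np.ones((b, n_q, d)), np.ones((b, n_kv, d)), V)
print(out.shape)  # (2, 1, 4)
```

With identical queries and keys the attention weights are uniform, so each output row is simply the mean of the values over the key/value pairs.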
### Multihead Attention
""" The *multi-head attention* layer consists of $h$ parallel self-attention layers, each one is called a *head*. For each head, before feeding into the attention layer, we project the queries, keys, and values with three dense layers with hidden sizes $p_q$, $p_k$, and $p_v$, respectively. The outputs of these $h$ attention heads are concatenated and then processed by a final dense layer.

Assume that the dimension for a query, a key, and a value are $d_q$, $d_k$, and $d_v$, respectively. Then, for each head $i=1,\ldots, h$, we can train learnable parameters
$\mathbf W_q^{(i)}\in\mathbb R^{p_q\times d_q}$,
$\mathbf W_k^{(i)}\in\mathbb R^{p_k\times d_k}$,
and $\mathbf W_v^{(i)}\in\mathbb R^{p_v\times d_v}$. Therefore, the output for each head is
$$\mathbf o^{(i)} = \mathrm{attention}(\mathbf W_q^{(i)}\mathbf q, \mathbf W_k^{(i)}\mathbf k,\mathbf W_v^{(i)}\mathbf v),$$
where $\textrm{attention}$ can be any attention layer, such as the `DotProductAttention` and `MLPAttention` as we introduced in :numref:`sec_attention`.
After that, the output with length $p_v$ from each of the $h$ attention heads are concatenated to be an output of length $h p_v$, which is then passed the final dense layer with $d_o$ hidden units. The weights of this dense layer can be denoted by $\mathbf W_o\in\mathbb R^{d_o\times h p_v}$. As a result, the multi-head attention output will be
$$\mathbf o = \mathbf W_o \begin{bmatrix}\mathbf o^{(1)}\\\vdots\\\mathbf o^{(h)}\end{bmatrix}.$$
Now we can implement the multi-head attention. Assume that the multi-head attention contain the number heads `num_heads` $=h$, the hidden size `num_hiddens` $=p_q=p_k=p_v$ are the same for the query, key, and value dense layers. In addition, since the multi-head attention keeps the same dimensionality between its input and its output, we have the output feature size $d_o =$ `num_hiddens` as well. """ (citation from d2l.ai book).
There are some problems in the d2l.ai text, where it is stated:
$p_q$ = $p_k$ = $p_v$ = num_hiddens,
and
$d_o =$ `num_hiddens` as well.
So, we have $W_o$ transformation from input of size (num_heads * num_hiddens) to output of size (num_hiddens). If h > 1, the input size and output size can not be equal. But in the PyTorch code in the d2l.ai we have:
self.W_o = nn.Linear(num_hiddens, num_hiddens, bias=bias)
with equal input and output. This is hidden in the d2l.ai
MXNet code: self.W_o = nn.Dense(num_hiddens, use_bias=use_bias, flatten=False), because in the
case of a Gluon Dense layer we state only the output dimension (num_hiddens here); the input dimension is inferred.
The code below (from the d2l.ai book) also assumes that num_hiddens is a multiple of num_heads. No such assumption appears in the main text of the book, but the d2l.ai code relies on it.
The only interpretation of the code below I can give now:
$p_v$ * num_heads = num_hiddens (and similarly for $p_q$ = $p_k$ = $p_v$),
but not $p_v$ = num_hiddens.
I will interpret the code under that assumption.
```
### from d2l.ai
class MultiHeadAttention(nn.Block):
def __init__(self, num_hiddens, num_heads, dropout, use_bias=False, **kwargs):
super(MultiHeadAttention, self).__init__(**kwargs)
self.num_heads = num_heads
self.attention = d2l.DotProductAttention(dropout)
### here, as I understand it, num_hiddens = num_heads * p_v,
### where p_v (see the text above) is the dimension of the vector
### to which a query is transformed by a single head;
### p_v is therefore (num_hiddens / num_heads),
### which explains what the code below does
self.W_q = nn.Dense(num_hiddens, use_bias=use_bias, flatten=False)
self.W_k = nn.Dense(num_hiddens, use_bias=use_bias, flatten=False)
self.W_v = nn.Dense(num_hiddens, use_bias=use_bias, flatten=False)
### if every head transforms a query of size `dim` = num_hiddens to
### p_v = p_q = p_k = (num_hiddens / num_heads), then when we
### concatenate num_heads of such outputs we get a
### vector of size num_hiddens again;
### this explains the input / output dimensions for W_o:
### input and output have the same dimension = num_hiddens
self.W_o = nn.Dense(num_hiddens, use_bias=use_bias, flatten=False)
### every query generates num_heads outputs, which we concatenate into
### one vector of dimension num_hiddens: so the combined output of every
### query has size num_hiddens;
### to apply self-attention we de-concatenate the combined output
### into num_heads separate outputs per query,
### each of size (num_hiddens / num_heads), and
### simultaneously recombine them into a single batch (num_heads entries
### per sample), which increases the total batch size to
### (batch_size * num_heads).
### We have to adjust valid_len to take into account that the num_heads
### query transformations are now combined in a single batch.
### After applying self-attention, we perform the reverse operation:
### locate the batch entries that correspond to the outputs of the same
### query in different heads, and concatenate them again into one combined
### output. The number of batch entries decreases and the length of the
### output increases by the same factor num_heads.
### These are the roles of the transpose_qkv and transpose_output functions below:
def forward(self, query, key, value, valid_len):
# For self-attention, `query`, `key`, and `value` shape:
# (`batch_size`, `seq_len`, `dim`), where `seq_len` is the length of
# input sequence. `valid_len` shape is either (`batch_size`, ) or
# (`batch_size`, `seq_len`).
# Project and transpose `query`, `key`, and `value` from
# (`batch_size`, `seq_len`, `num_hiddens`) to
# (`batch_size` * `num_heads`, `seq_len`, `num_hiddens` / `num_heads`)
query = transpose_qkv(self.W_q(query), self.num_heads)
key = transpose_qkv(self.W_k(key), self.num_heads)
value = transpose_qkv(self.W_v(value), self.num_heads)
if valid_len is not None:
# Copy `valid_len` by `num_heads` times
if valid_len.ndim == 1:
valid_len = np.tile(valid_len, self.num_heads)
else:
valid_len = np.tile(valid_len, (self.num_heads, 1))
# For self-attention, `output` shape:
# (`batch_size` * `num_heads`, `seq_len`, `num_hiddens` / `num_heads`)
output = self.attention(query, key, value, valid_len)
# `output_concat` shape: (`batch_size`, `seq_len`, `num_hiddens`)
output_concat = transpose_output(output, self.num_heads)
return self.W_o(output_concat)
### from d2l.ai
def transpose_qkv(X, num_heads):
# Input `X` shape: (`batch_size`, `seq_len`, `num_hiddens`).
# Output `X` shape:
# (`batch_size`, `seq_len`, `num_heads`, `num_hiddens` / `num_heads`)
X = X.reshape(X.shape[0], X.shape[1], num_heads, -1)
# `X` shape:
# (`batch_size`, `num_heads`, `seq_len`, `num_hiddens` / `num_heads`)
X = X.transpose(0, 2, 1, 3)
# `output` shape:
# (`batch_size` * `num_heads`, `seq_len`, `num_hiddens` / `num_heads`)
output = X.reshape(-1, X.shape[2], X.shape[3])
return output
### from d2l.ai
def transpose_output(X, num_heads):
# A reversed version of `transpose_qkv`
X = X.reshape(-1, num_heads, X.shape[1], X.shape[2])
X = X.transpose(0, 2, 1, 3)
return X.reshape(X.shape[0], X.shape[1], -1)
if False:
### from d2l.ai
### num_hiddens = 100, num_heads=10
cell = MultiHeadAttention(100, 10, 0.5)
cell.initialize()
X = np.ones((2, 4, 5))
valid_len = np.array([2, 3])
cell(X, X, X, valid_len).shape
if False:
### this corresponds to the scenario: embedding size 512; num_heads = 8;
### num_hiddens = 512
cell = MultiHeadAttention(512, 8, 0.5)
cell.initialize()
# num of batches is 3 ; seq_len is 20 ; size of embedding is 512
X = np.ones((3, 20, 512))
valid_len = np.array([15,17,12])
cell(X, X, X, valid_len).shape
```
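The reshape gymnastics in `transpose_qkv`/`transpose_output` can be checked in plain NumPy: splitting the hidden dimension across heads and merging it back must be a lossless round trip (an illustrative re-implementation with hypothetical shapes):

```python
import numpy as np

def split_heads(X, num_heads):
    """(batch, seq_len, num_hiddens) ->
    (batch * num_heads, seq_len, num_hiddens // num_heads)"""
    b, s, h = X.shape
    X = X.reshape(b, s, num_heads, h // num_heads).transpose(0, 2, 1, 3)
    return X.reshape(b * num_heads, s, h // num_heads)

def merge_heads(X, num_heads):
    """Reverse of split_heads."""
    bh, s, hd = X.shape
    X = X.reshape(bh // num_heads, num_heads, s, hd).transpose(0, 2, 1, 3)
    return X.reshape(bh // num_heads, s, num_heads * hd)

X = np.arange(2 * 4 * 8, dtype=float).reshape(2, 4, 8)
print(split_heads(X, num_heads=2).shape)  # (4, 4, 4)
```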
### Position-wise Feed-Forward Network
This is equivalent to applying two 1 * 1 convolutional layers: it extracts
position-independent features of the word representations (in the same way convolution layers are applied in image recognition networks).
""" Similar to the multi-head attention, the position-wise feed-forward network will only change the last dimension size of the input—the feature dimension. In addition, if two items in the input sequence are identical, the according outputs will be identical as well. """ (citation from d2l.ai)
```
### from d2l.ai
class PositionWiseFFN(nn.Block):
def __init__(self, ffn_num_hiddens, pw_num_outputs, **kwargs):
super(PositionWiseFFN, self).__init__(**kwargs)
self.dense1 = nn.Dense(ffn_num_hiddens, flatten=False,
activation='relu')
self.dense2 = nn.Dense(pw_num_outputs, flatten=False)
def forward(self, X):
return self.dense2(self.dense1(X))
if False:
ffn = PositionWiseFFN(4, 8)
ffn.initialize()
ffn(np.ones((2, 3, 4)))[0]
```
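The "identical inputs give identical outputs" property quoted above can be checked concretely with a plain-NumPy stand-in for `PositionWiseFFN` (my own sketch; the random weights are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8)); b1 = rng.normal(size=8)
W2 = rng.normal(size=(8, 3)); b2 = rng.normal(size=3)

def position_wise_ffn(X):
    # applied independently at every position: only the last dimension changes
    H = np.maximum(X @ W1 + b1, 0)   # ReLU
    return H @ W2 + b2

X = rng.normal(size=(2, 5, 4))
X[:, 1, :] = X[:, 0, :]              # make two positions identical
Y = position_wise_ffn(X)
assert Y.shape == (2, 5, 3)          # only the feature dimension changed
assert np.allclose(Y[:, 0, :], Y[:, 1, :])  # identical inputs -> identical outputs
```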
### Add and Norm
""" we add a layer that contains a residual structure and a layer normalization after both the multi-head attention layer and the position-wise FFN network. Layer normalization is similar to batch normalization ........ One difference is that the mean and variances for the layer normalization are calculated along the last dimension, e.g X.mean(axis=-1) instead of the first batch dimension, e.g., X.mean(axis=0). Layer normalization prevents the range of values in the layers from changing too much, which allows faster training and better generalization ability. """ (citation from d2l.ai)
```
if False:
### from d2l.ai
layer = nn.LayerNorm()
layer.initialize()
batch = nn.BatchNorm()
batch.initialize()
X = np.array([[1, 2], [2, 3]])
# Compute mean and variance from `X` in the training mode
with autograd.record():
print('layer norm:', layer(X), '\nbatch norm:', batch(X))
```
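The axis difference between the two normalizations can also be verified directly in NumPy (my own sketch; training-mode statistics only, with no learned scale/shift parameters):

```python
import numpy as np

X = np.array([[1., 2.], [2., 3.]])

def layer_norm(X, eps=1e-5):
    # per-example statistics: mean/variance along the last (feature) axis
    mu = X.mean(axis=-1, keepdims=True)
    var = X.var(axis=-1, keepdims=True)
    return (X - mu) / np.sqrt(var + eps)

def batch_norm(X, eps=1e-5):
    # per-feature statistics: mean/variance along the batch axis
    mu = X.mean(axis=0, keepdims=True)
    var = X.var(axis=0, keepdims=True)
    return (X - mu) / np.sqrt(var + eps)

# each row of layer_norm(X) is zero-mean; each column of batch_norm(X) is
assert np.allclose(layer_norm(X).mean(axis=-1), 0, atol=1e-6)
assert np.allclose(batch_norm(X).mean(axis=0), 0, atol=1e-6)
```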
"""AddNorm accepts two inputs X and Y. We can deem X as the original input in the residual network, and Y as the outputs from either the multi-head attention layer or the position-wise FFN network. In addition, we apply dropout on Y for regularization.""" citation from d2l.ai
```
### from d2l.ai
class AddNorm(nn.Block):
def __init__(self, dropout, **kwargs):
super(AddNorm, self).__init__(**kwargs)
self.dropout = nn.Dropout(dropout)
self.ln = nn.LayerNorm()
def forward(self, X, Y):
return self.ln(self.dropout(Y) + X)
if False:
### d2l.ai
add_norm = AddNorm(0.5)
add_norm.initialize()
add_norm(np.ones((2, 3, 4)), np.ones((2, 3, 4))).shape
```
### Positional Encoding
```
### I used this code as an alternative to the original positional encoding;
### it simply encodes the position of words (tokens) in the sentence.
### It changes the results, but they are still quite good.
if False:
### from d2l.ai
class PositionalEncoding(nn.Block):
def __init__(self, num_hiddens, dropout, max_len=100):
super(PositionalEncoding, self).__init__()
self.dropout = nn.Dropout(dropout)
# Create a long enough `P`
        ### max_len corresponds to the sequence length ;
        ### num_hiddens corresponds to the embedding size
        ###
self.P = np.zeros((1, max_len, num_hiddens))
### X = np.arange(0, max_len).reshape(-1, 1) / np.power(
### 10000, np.arange(0, num_hiddens, 2) / num_hiddens)
### self.P[:, :, 0::2] = np.sin(X)
### self.P[:, :, 1::2] = np.cos(X)
        ###################### my code, be careful !!!!!
X = np.arange(0, max_len).reshape(-1, 1) / max_len
### 10000, np.arange(0, num_hiddens, 2) / num_hiddens)
self.P[:, :, 0::1] = np.sin(X)
### self.P[:, :, 1::2] = np.cos(X)
################################
def forward(self, X):
X = X + self.P[:, :X.shape[1], :].as_in_ctx(X.ctx)
return self.dropout(X)
### from d2l.ai
class PositionalEncoding(nn.Block):
def __init__(self, num_hiddens, dropout, max_len=1000):
super(PositionalEncoding, self).__init__()
self.dropout = nn.Dropout(dropout)
# Create a long enough `P`
        ### max_len corresponds to the sequence length ;
        ### num_hiddens corresponds to the embedding size
self.P = np.zeros((1, max_len, num_hiddens))
X = np.arange(0, max_len).reshape(-1, 1) / np.power(
10000, np.arange(0, num_hiddens, 2) / num_hiddens)
self.P[:, :, 0::2] = np.sin(X)
self.P[:, :, 1::2] = np.cos(X)
def forward(self, X):
X = X + self.P[:, :X.shape[1], :].as_in_ctx(X.ctx)
return self.dropout(X)
if False:
### from d2l.ai
### num_hiddens = 20 , dropout = 0
pe = PositionalEncoding(20, 0)
pe.initialize()
    ### we assume batch_size = 1; max_length = 100 corresponds to the number of tokens (here words) in our line;
### num_hiddens = 20 (embedding size)
###
Y = pe(np.zeros((1, 100, 20)))
    ### dim corresponds to a coordinate in the embedding vector of our tokens (words)
d2l.plot(np.arange(100), Y[0, :, 4:8].T, figsize=(6, 2.5),
legend=["dim %d" % p for p in [4, 5, 6, 7]])
```
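Outside of MXNet, the original sinusoidal scheme is easy to state in plain NumPy (my own sketch of the same formula used in the d2l class above):

```python
import numpy as np

def sinusoidal_pe(max_len, num_hiddens):
    # P[pos, 2j]   = sin(pos / 10000^(2j / d))
    # P[pos, 2j+1] = cos(pos / 10000^(2j / d))
    P = np.zeros((max_len, num_hiddens))
    X = np.arange(max_len).reshape(-1, 1) / np.power(
        10000, np.arange(0, num_hiddens, 2) / num_hiddens)
    P[:, 0::2] = np.sin(X)
    P[:, 1::2] = np.cos(X)
    return P

P = sinusoidal_pe(100, 20)
assert P.shape == (100, 20)
# each (sin, cos) pair has unit norm, so every position is encoded by a
# vector of the same length, only rotated
assert np.allclose((P ** 2).reshape(100, 10, 2).sum(-1), 1.0)
```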
### Encoder
"""Armed with all the essential components of Transformer, let us first build a Transformer encoder block. This encoder contains a multi-head attention layer, a position-wise feed-forward network, and two “add and norm” connection blocks. As shown in the code, for both of the attention model and the positional FFN model in the EncoderBlock, their outputs’ dimension are equal to the num_hiddens. This is due to the nature of the residual block, as we need to add these outputs back to the original value during “add and norm”. """ (citation from d2l.ai)
```
### from d2l.ai
### this block will not change the input shape
class EncoderBlock(nn.Block):
def __init__(self, num_hiddens, ffn_num_hiddens, num_heads, dropout,
use_bias=False, **kwargs):
super(EncoderBlock, self).__init__(**kwargs)
self.attention = MultiHeadAttention(num_hiddens, num_heads, dropout,
use_bias)
self.addnorm1 = AddNorm(dropout)
self.ffn = PositionWiseFFN(ffn_num_hiddens, num_hiddens)
self.addnorm2 = AddNorm(dropout)
def forward(self, X, valid_len):
        ### we add the original input of the attention block to the output
        ### of the block + we normalize the result using AddNorm
Y = self.addnorm1(X, self.attention(X, X, X, valid_len))
return self.addnorm2(Y, self.ffn(Y))
```
""" Now it comes to the implementation of the entire Transformer encoder. With the Transformer encoder, $n$ blocks of `EncoderBlock` stack up one after another. Because of the residual connection, the embedding layer size $d$ is same as the Transformer block output size. Also note that we multiply the embedding output by $\sqrt{d}$ to prevent its values from being too small. """ (citation from d2l.ai)
```
### from d2l.ai
class Encoder(nn.Block):
"""The base encoder interface for the encoder-decoder architecture."""
def __init__(self, **kwargs):
super(Encoder, self).__init__(**kwargs)
def forward(self, X, *args):
raise NotImplementedError
### from d2l.ai
class TransformerEncoder(Encoder):
def __init__(self, vocab_size, num_hiddens, ffn_num_hiddens,
num_heads, num_layers, dropout, use_bias=False, **kwargs):
super(TransformerEncoder, self).__init__(**kwargs)
self.num_hiddens = num_hiddens
self.embedding = nn.Embedding(vocab_size, num_hiddens)
self.pos_encoding = PositionalEncoding(num_hiddens, dropout)
self.blks = nn.Sequential()
for _ in range(num_layers):
self.blks.add(
EncoderBlock(num_hiddens, ffn_num_hiddens, num_heads, dropout,
use_bias))
    ### the order of steps:
    ### first we apply Positional Encoding to the initial word vectors
    ### FROM HERE: then, several times, do:
    ###     apply Multi-Head Attention
    ###     apply AddNorm
    ###     apply the PositionWise transformation
    ###     apply AddNorm
    ### and again... go to FROM HERE
def forward(self, X, valid_len, *args):
X = self.pos_encoding(self.embedding(X) * math.sqrt(self.num_hiddens))
for blk in self.blks:
X = blk(X, valid_len)
return X
```
### Decoder
""" During training, the output for the $t$-query could observe all the previous key-value pairs. It results in an different behavior from prediction. Thus, during prediction we can eliminate the unnecessary information by specifying the valid length to be $t$ for the $t^\textrm{th}$ query. """
(citation from d2l.ai)
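The masking trick described above can be reproduced in plain NumPy (my own sketch; `DecoderBlock` below builds exactly this matrix with `np.tile` during training):

```python
import numpy as np

batch_size, seq_len = 2, 5
# during training, query t may only attend to key/value pairs 1..t,
# so its valid length is simply its (1-based) position in the sentence:
valid_len = np.tile(np.arange(1, seq_len + 1), (batch_size, 1))
assert valid_len.shape == (2, 5)
assert valid_len.tolist() == [[1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]
```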
```
### from d2l.ai
class DecoderBlock(nn.Block):
# `i` means it is the i-th block in the decoder
    ### the i will be initialized from the TransformerDecoder block
    ### the block will be used in TransformerDecoder in a stack:
    ### several blocks are arranged in sequence, and the output from
    ### one block is the input to the next block
def __init__(self, num_hiddens, ffn_num_hiddens, num_heads,
dropout, i, **kwargs):
super(DecoderBlock, self).__init__(**kwargs)
self.i = i
        ### in this block we will apply (MultiHeadAttention + AddNorm)
        ### and again (MultiHeadAttention + AddNorm) ;
        ### then we will apply PositionWiseFFN
self.attention1 = MultiHeadAttention(num_hiddens, num_heads, dropout)
self.addnorm1 = AddNorm(dropout)
self.attention2 = MultiHeadAttention(num_hiddens, num_heads, dropout)
self.addnorm2 = AddNorm(dropout)
self.ffn = PositionWiseFFN(ffn_num_hiddens, num_hiddens)
self.addnorm3 = AddNorm(dropout)
def forward(self, X, state):
        ### we use state[0] and state[1] to keep the output from TransformerEncoder:
        ### enc_outputs and enc_valid_len,
        ### which correspond to the sentences we are translating (sentences in the
        ### language FROM which we translate);
        ### state[0] and state[1] are received from the enclosing
        ### TransformerDecoder block as shared parameters
enc_outputs, enc_valid_len = state[0], state[1]
# `state[2][i]` contains the past queries for this block
        ### in the first block (i = 0), at this place in the code,
        ### the queries = None; see the code in TransformerDecoder:
        ###
        ### def init_state(self, enc_outputs, enc_valid_len, *args):
        ###     return [enc_outputs, enc_valid_len, [None]*self.num_layers]
        ###
        ### TransformerDecoder is initialized from EncoderDecoder
        ### using the 'init_state' function (see above); as
        ### we can see, the third element of the list holds None per layer;
        ### 'init_state' determines the 'state' in TransformerDecoder,
        ### and in the code above we use state[0] and state[1] to set
        ### 'enc_outputs' and 'enc_valid_len' in this block
if state[2][self.i] is None:
key_values = X
else:
            ### the queries from the previous decoding steps are concatenated
            ### with X and used as the new grid of key/value pairs
key_values = np.concatenate((state[2][self.i], X), axis=1)
state[2][self.i] = key_values
if autograd.is_training():
            ### here we are in training mode;
            ### below, in the 'attention' call, we will use X as queries;
            ### X corresponds to all the words in the target sentence during training;
            ### seq_len corresponds to the length of the whole target sentence;
            ### we will use seq_len queries for every sample in the batch;
            ### what matters for us is the following:
            ### the first query in the sentence has to be constrained
            ### to the first key/value pair; the second to the first two key/value pairs,
            ### etc...
            ### that is why valid_len is generated in the following way:
batch_size, seq_len, _ = X.shape
# Shape: (batch_size, seq_len), the values in the j-th column
# are j+1
            ### while training, we take into account the result of passing a query
            ### in the target sentence through the 'grid' of key/value pairs to the
            ### left of the query;
            ### every query in the target sequence has its own valid_len, and
            ### the valid_len corresponds to the position of the query in the
            ### sentence
valid_len = np.tile(np.arange(1, seq_len + 1, ctx=X.ctx),
(batch_size, 1))
else:
valid_len = None
        ### the attention mechanism is applied to the key_values
        ### of the target sentence (then AddNorm is applied)
X2 = self.attention1(X, key_values, key_values, valid_len)
Y = self.addnorm1(X, X2)
        ### the attention mechanism is applied with the TransformerEncoder outputs
        ### as the key/value 'grid' (then AddNorm is applied);
        ### these key/values are the learned pairs
        ### that originate from the source sentence
Y2 = self.attention2(Y, enc_outputs, enc_outputs, enc_valid_len)
Z = self.addnorm2(Y, Y2)
return self.addnorm3(Z, self.ffn(Z)), state
### from d2l.ai
class Decoder(nn.Block):
"""The base decoder interface for the encoder-decoder architecture."""
def __init__(self, **kwargs):
super(Decoder, self).__init__(**kwargs)
def init_state(self, enc_outputs, *args):
raise NotImplementedError
def forward(self, X, state):
raise NotImplementedError
### from d2l.ai
class TransformerDecoder(Decoder):
def __init__(self, vocab_size, num_hiddens, ffn_num_hiddens,
num_heads, num_layers, dropout, **kwargs):
super(TransformerDecoder, self).__init__(**kwargs)
self.num_hiddens = num_hiddens
self.num_layers = num_layers
self.embedding = nn.Embedding(vocab_size, num_hiddens)
self.pos_encoding = PositionalEncoding(num_hiddens, dropout)
### sequential application of several DecoderBlock's
self.blks = nn.Sequential()
for i in range(num_layers):
self.blks.add(
DecoderBlock(num_hiddens, ffn_num_hiddens, num_heads,
dropout, i))
self.dense = nn.Dense(vocab_size, flatten=False)
    def init_state(self, enc_outputs, enc_valid_len, *args):
        return [enc_outputs, enc_valid_len, [None]*self.num_layers]
def forward(self, X, state):
X = self.pos_encoding(self.embedding(X) * math.sqrt(self.num_hiddens))
for blk in self.blks:
X, state = blk(X, state)
return self.dense(X), state
### from d2l.ai
### this block couples together TransformerEncoder and TransformerDecoder
###
class EncoderDecoder(nn.Block):
"""The base class for the encoder-decoder architecture."""
def __init__(self, encoder, decoder, **kwargs):
super(EncoderDecoder, self).__init__(**kwargs)
self.encoder = encoder
self.decoder = decoder
def forward(self, enc_X, dec_X, *args):
        ### enc_outputs are passed from the encoder to the decoder;
        ### the coupling happens at this point in the code
enc_outputs = self.encoder(enc_X, *args)
        ### the initial decoder state dec_state is calculated using enc_outputs
        ### and used as 'state' in TransformerDecoder
dec_state = self.decoder.init_state(enc_outputs, *args)
### use initial state + input dec_X to the decoder to calculate
### the decoder output
return self.decoder(dec_X, dec_state)
```
### Training
```
### from d2l.ai
### because of the padding (and valid_length) we have to filter out some entries
class MaskedSoftmaxCELoss(gluon.loss.SoftmaxCELoss):
# `pred` shape: (`batch_size`, `seq_len`, `vocab_size`)
# `label` shape: (`batch_size`, `seq_len`)
# `valid_len` shape: (`batch_size`, )
def forward(self, pred, label, valid_len):
# weights shape: (batch_size, seq_len, 1)
weights = np.expand_dims(np.ones_like(label), axis=-1)
weights = npx.sequence_mask(weights, valid_len, True, axis=1)
return super(MaskedSoftmaxCELoss, self).forward(pred, label, weights)
if False:
### from d2l.ai
loss = MaskedSoftmaxCELoss()
loss(np.ones((3, 4, 10)), np.ones((3, 4)), np.array([4, 2, 0]))
### from d2l.ai
### prevents gradients from becoming too large
def grad_clipping(model, theta):
"""Clip the gradient."""
if isinstance(model, gluon.Block):
params = [p.data() for p in model.collect_params().values()]
else:
params = model.params
norm = math.sqrt(sum((p.grad ** 2).sum() for p in params))
if norm > theta:
for param in params:
param.grad[:] *= theta / norm
### from d2l.ai
### accumulate results in one array, auxiliary function
class Accumulator:
"""For accumulating sums over `n` variables."""
def __init__(self, n):
self.data = [0.0] * n
def add(self, *args):
self.data = [a + float(b) for a, b in zip(self.data, args)]
def reset(self):
self.data = [0.0] * len(self.data)
def __getitem__(self, idx):
return self.data[idx]
### from d2l.ai
def train_s2s_ch9(model, data_iter, lr, num_epochs, device):
model.initialize(init.Xavier(), force_reinit=True, ctx=device)
trainer = gluon.Trainer(model.collect_params(),
'adam', {'learning_rate': lr})
loss = MaskedSoftmaxCELoss()
animator = d2l.Animator(xlabel='epoch', ylabel='loss',
xlim=[1, num_epochs], ylim=[0, 1.00])
for epoch in range(1, num_epochs + 1):
timer = d2l.Timer()
metric = d2l.Accumulator(2) # loss_sum, num_tokens
        ### use data_iter from load_data_nmt to get X and Y, which include:
        ### the source and target
        ### sentence representations + X_vlen and Y_vlen : the valid lengths of
        ### the sentences
for batch in data_iter:
X, X_vlen, Y, Y_vlen = [x.as_in_ctx(device) for x in batch]
Y_input, Y_label, Y_vlen = Y[:, :-1], Y[:, 1:], Y_vlen-1
with autograd.record():
Y_hat, _ = model(X, Y_input, X_vlen, Y_vlen)
l = loss(Y_hat, Y_label, Y_vlen)
l.backward()
grad_clipping(model, 1)
num_tokens = Y_vlen.sum()
trainer.step(num_tokens)
metric.add(l.sum(), num_tokens)
if epoch % 10 == 0:
animator.add(epoch, (metric[0]/metric[1],))
print(f'loss {metric[0] / metric[1]:.3f}, {metric[1] / timer.stop():.1f} '
f'tokens/sec on {str(device)}')
```
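A minimal NumPy sketch of the masking idea behind `MaskedSoftmaxCELoss` (my own stand-in for `npx.sequence_mask`, assuming axis 1 holds the sequence dimension):

```python
import numpy as np

def sequence_mask(weights, valid_len):
    # zero out weights at positions >= valid_len along axis 1
    seq_len = weights.shape[1]
    mask = np.arange(seq_len)[None, :] < valid_len[:, None]
    return weights * mask

w = np.ones((3, 4))
masked = sequence_mask(w, np.array([4, 2, 0]))
assert masked.tolist() == [[1, 1, 1, 1], [1, 1, 0, 0], [0, 0, 0, 0]]
```

A sample with valid length 0 (the third row) contributes nothing to the loss, which matches the `np.array([4, 2, 0])` example above.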
### Reading and Processing the Text
```
### from d2l.ai
def download_extract(name, folder=None):
"""Download and extract a zip/tar file."""
fname = download(name)
base_dir = os.path.dirname(fname)
data_dir, ext = os.path.splitext(fname)
if ext == '.zip':
fp = zipfile.ZipFile(fname, 'r')
elif ext in ('.tar', '.gz'):
fp = tarfile.open(fname, 'r')
else:
assert False, 'Only zip/tar files can be extracted.'
fp.extractall(base_dir)
return os.path.join(base_dir, folder) if folder else data_dir
```
""" ... a dataset that contains a set of English sentences with the corresponding French translations. As can be seen that each line contains an English sentence with its French translation, which are separated by a TAB.""" (citation from d2l.ai)
```
### d2l.ai
### the data for the translation are prepared by the d2l.ai project (book)
d2l.DATA_HUB['fra-eng'] = (d2l.DATA_URL + 'fra-eng.zip',
'94646ad1522d915e7b0f9296181140edcf86a4f5')
def read_data_nmt():
data_dir = d2l.download_extract('fra-eng')
with open(os.path.join(data_dir, 'fra.txt'), 'r') as f:
return f.read()
raw_text = read_data_nmt()
print(raw_text[0:106])
### from d2l.ai
def preprocess_nmt(text):
def no_space(char, prev_char):
return char in set(',.!') and prev_char != ' '
text = text.replace('\u202f', ' ').replace('\xa0', ' ').lower()
out = [' ' + char if i > 0 and no_space(char, text[i-1]) else char
for i, char in enumerate(text)]
return ''.join(out)
### from d2l.ai
text = preprocess_nmt(raw_text)
print(text[0:95])
### from d2l.ai
def tokenize_nmt(text, num_examples=None):
source, target = [], []
for i, line in enumerate(text.split('\n')):
if num_examples and i > num_examples:
break
parts = line.split('\t')
if len(parts) == 2:
source.append(parts[0].split(' '))
target.append(parts[1].split(' '))
return source, target
### from d2l.ai
source, target = tokenize_nmt(text)
source[0:3], target[0:3]
```
#### Histogram of the number of tokens per sentence
Most sentences have around 5 tokens, and the number of tokens per sentence is usually below 10 to 15.
```
### from d2l.ai
d2l.set_figsize()
d2l.plt.hist([[len(l) for l in source], [len(l) for l in target]],
label=['source', 'target'])
d2l.plt.legend(loc='upper right');
```
### Vocabulary
```
### from d2l.ai
def count_corpus(tokens):
"""Count token frequencies."""
# Here `tokens` is a 1D list or 2D list
if len(tokens) == 0 or isinstance(tokens[0], list):
# Flatten a list of token lists into a list of tokens
tokens = [token for line in tokens for token in line]
return collections.Counter(tokens)
### from d2l.ai
class Vocab:
"""Vocabulary for text."""
def __init__(self, tokens=None, min_freq=0, reserved_tokens=None):
if tokens is None:
tokens = []
if reserved_tokens is None:
reserved_tokens = []
# Sort according to frequencies
counter = count_corpus(tokens)
self.token_freqs = sorted(counter.items(), key=lambda x: x[0])
self.token_freqs.sort(key=lambda x: x[1], reverse=True)
# The index for the unknown token is 0
self.unk, uniq_tokens = 0, ['<unk>'] + reserved_tokens
uniq_tokens += [token for token, freq in self.token_freqs
if freq >= min_freq and token not in uniq_tokens]
self.idx_to_token, self.token_to_idx = [], dict()
for token in uniq_tokens:
self.idx_to_token.append(token)
self.token_to_idx[token] = len(self.idx_to_token) - 1
def __len__(self):
return len(self.idx_to_token)
def __getitem__(self, tokens):
if not isinstance(tokens, (list, tuple)):
return self.token_to_idx.get(tokens, self.unk)
return [self.__getitem__(token) for token in tokens]
def to_tokens(self, indices):
if not isinstance(indices, (list, tuple)):
return self.idx_to_token[indices]
return [self.idx_to_token[index] for index in indices]
### from d2l.ai
src_vocab = Vocab(source, min_freq=3,
reserved_tokens=['<pad>', '<bos>', '<eos>'])
len(src_vocab)
```
### Loading the dataset
```
### from d2l.ai
def truncate_pad(line, num_steps, padding_token):
if len(line) > num_steps:
return line[:num_steps] # Trim
return line + [padding_token] * (num_steps - len(line)) # Pad
### the <pad> token is represented by the number 1 in the Vocabulary
### from d2l.ai
truncate_pad(src_vocab[source[0]], 10, src_vocab['<pad>'])
### from d2l.ai
def build_array(lines, vocab, num_steps, is_source):
lines = [vocab[l] for l in lines]
if not is_source:
lines = [[vocab['<bos>']] + l + [vocab['<eos>']] for l in lines]
array = np.array([truncate_pad(
l, num_steps, vocab['<pad>']) for l in lines])
valid_len = (array != vocab['<pad>']).sum(axis=1)
return array, valid_len
### from d2l.ai
def load_array(data_arrays, batch_size, is_train=True):
"""Construct a Gluon data iterator."""
dataset = gluon.data.ArrayDataset(*data_arrays)
return gluon.data.DataLoader(dataset, batch_size, shuffle=is_train)
### from d2l.ai
### quite important function: constructs the dataset for training (data_iter)
### from the original data
def load_data_nmt(batch_size, num_steps, num_examples=76800):
text = preprocess_nmt(read_data_nmt())
source, target = tokenize_nmt(text, num_examples)
src_vocab = Vocab(source, min_freq=3,
reserved_tokens=['<pad>', '<bos>', '<eos>'])
tgt_vocab = Vocab(target, min_freq=3,
reserved_tokens=['<pad>', '<bos>', '<eos>'])
src_array, src_valid_len = build_array(
source, src_vocab, num_steps, True)
tgt_array, tgt_valid_len = build_array(
target, tgt_vocab, num_steps, False)
data_arrays = (src_array, src_valid_len, tgt_array, tgt_valid_len)
data_iter = load_array(data_arrays, batch_size)
return src_vocab, tgt_vocab, data_iter
### from d2l.ai
def try_gpu(i=0):
"""Return gpu(i) if exists, otherwise return cpu()."""
return npx.gpu(i) if npx.num_gpus() >= i + 1 else npx.cpu()
```
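`truncate_pad` above is pure Python, so it can be checked standalone (here 1 stands for the `<pad>` index, as in the cell above):

```python
def truncate_pad(line, num_steps, padding_token):
    if len(line) > num_steps:
        return line[:num_steps]   # trim
    return line + [padding_token] * (num_steps - len(line))  # pad

assert truncate_pad([3, 7, 5], 5, 1) == [3, 7, 5, 1, 1]        # padded
assert truncate_pad([3, 7, 5, 2, 9, 4], 5, 1) == [3, 7, 5, 2, 9]  # trimmed
```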
### Model: training and prediction
```
### the code from d2l.ai
### estimate the execution time for the cell in seconds
start = time.time()
num_hiddens, num_layers, dropout, batch_size, num_steps = 32, 2, 0.0, 128, 15
lr, num_epochs, device = 0.001, 350, try_gpu()
ffn_num_hiddens, num_heads = 64, 4 ### num_hiddens must be a multiple of num_heads !!
src_vocab, tgt_vocab, train_iter = load_data_nmt(batch_size, num_steps,76800)
encoder = TransformerEncoder(
len(src_vocab), num_hiddens, ffn_num_hiddens, num_heads, num_layers,
dropout)
decoder = TransformerDecoder(
    len(tgt_vocab), num_hiddens, ffn_num_hiddens, num_heads, num_layers,
    dropout)
model = EncoderDecoder(encoder, decoder)
train_s2s_ch9(model, train_iter, lr, num_epochs, device)
### estimate the execution time for the cell
end = time.time()
print(end - start)
### from d2l.ai
def predict_s2s_ch9(model, src_sentence, src_vocab, tgt_vocab, num_steps,
device):
src_tokens = src_vocab[src_sentence.lower().split(' ')]
enc_valid_len = np.array([len(src_tokens)], ctx=device)
src_tokens = truncate_pad(src_tokens, num_steps, src_vocab['<pad>'])
enc_X = np.array(src_tokens, ctx=device)
# Add the batch size dimension
enc_outputs = model.encoder(np.expand_dims(enc_X, axis=0),
enc_valid_len)
dec_state = model.decoder.init_state(enc_outputs, enc_valid_len)
dec_X = np.expand_dims(np.array([tgt_vocab['<bos>']], ctx=device), axis=0)
predict_tokens = []
for _ in range(num_steps):
Y, dec_state = model.decoder(dec_X, dec_state)
# The token with highest score is used as the next time step input
dec_X = Y.argmax(axis=2)
py = dec_X.squeeze(axis=0).astype('int32').item()
if py == tgt_vocab['<eos>']:
break
predict_tokens.append(py)
return ' '.join(tgt_vocab.to_tokens(predict_tokens))
for sentence in ['Go .', 'Wow !', "I'm OK .", 'I won !',
'Let it be !', 'How are you ?', 'How old are you ?',
'Cats are cats, dogs are dogs .', 'My friend lives in US .',
'He is fifty nine years old .', 'I like music and science .',
'I love you .', 'The dog is chasing the cat .',
'Somewhere on the earth .', 'Do not worry !',
'Sit down, please !', 'Not at all !', 'It is very very strange .',
'Take it into account .', 'The dark side of the moon .',
'Come on !', 'We are the champions, my friends .']:
print(sentence + ' => ' + predict_s2s_ch9(
model, sentence, src_vocab, tgt_vocab, num_steps, device))
```
```
# Always PySpark first!
ErhvervsPath = "/home/svanhmic/workspace/Python/Erhvervs"
from pyspark.sql import functions as F, Window, WindowSpec
from pyspark.sql import Row
from pyspark.sql.types import StringType,ArrayType,IntegerType,DoubleType,StructField,StructType
sc.addPyFile(ErhvervsPath+"/src/RegnSkabData/ImportRegnskabData.py")
sc.addPyFile(ErhvervsPath+'/src/RegnSkabData/RegnskabsClass.py')
sc.addPyFile(ErhvervsPath+'/src/cvr/Fstat.py')
sc.addPyFile(ErhvervsPath+'/src/cvr/GetNextJsonLayer.py')
import sys
import re
import os
import ImportRegnskabData
import GetNextJsonLayer
import itertools
import functools
%matplotlib inline
import seaborn as sb
import pandas as pan
import matplotlib.pyplot as plt
import numpy as np
import Fstat
import scipy as sp
import IPython
from IPython.display import display, Markdown, Latex
from pandas.tools.plotting import scatter_matrix
regnskabPath = ErhvervsPath+'/data/regnskabsdata/sparkdata/parquet/regnskaber.parquet'
csvPath = ErhvervsPath+'/data/regnskabsdata/cleanCSV'
taxPath = ErhvervsPath+'/data/regnskabsdata/cleanTaxLists'
lenUdf = F.udf(lambda x: ImportRegnskabData.lend(x),IntegerType())
convertedUdf = F.udf(lambda x: str(ImportRegnskabData.convertToSym(x)),StringType())
strs ="Anvendt regnskabspraksis Den anvendte regnskabspraksis er uændret i forhold til sidste år.                Generelt om indregning og måling        Regnskabet er udarbejdet med udgangspunkt i det historiske kostprisprincip.                Indtægter indregnes i resultatopgørelsen i takt med, at de indtjenes. Herudover indregnes værdireguleringer af finansielle aktiver og forpligtelser, der måles til dagsværdi eller amortiseret kostpris. Endvidere indregnes i resultatopgørelsen alle omkostninger, der er afholdt for at opnå årets indtjening, herunder afskrivninger, nedskrivninger og hensatte forpligtelser samt tilbageførsler som følge af ændrede regnskabsmæssige skøn"
def pivotOnText(df,**kvargs):
    '''
    pivots on the text columns and removes the excess counts
    input: df - dataframe
    kvargs - optional arguments; included are:
        pivotCol - the column that should be pivoted, default "type"
        valueCol - the column that should be aggregated, default "vaerdi"
        expectedList - the expected values in the pivoted column, default ["KAPITAL"]
    '''
#sets some of the optional parameters
pivotCol = kvargs.get("pivotCol","type")
expectedList = kvargs.get("expectedList",["KAPITAL"])
valueCol = kvargs.get("valueCol","vaerdi")
holdOutsCols = [pivotCol,valueCol]
nonHoldOutCols = [i for i in df.columns if i not in holdOutsCols]
newDf = (df
.groupBy(df.columns)
.count()
.groupBy(*nonHoldOutCols)
.pivot(pivotCol,expectedList)
.agg(F.max(F.struct("count",valueCol)))
)
expandedDf = GetNextJsonLayer.expandSubCols(newDf,*expectedList)
newCols = [i for i in expandedDf.columns if i not in [v+"_count" for v in expectedList] ]
return expandedDf.select(newCols)
def showScatterMatrix(df,cols):
featuresDf = df.select(*cols).distinct().drop("cvrNummer").toPandas()
axes = scatter_matrix(featuresDf,alpha=0.5,figsize=[9,9])
[plt.setp(item.yaxis.get_majorticklabels(), 'size', 6) for item in axes.ravel()]
#x ticklabels
[plt.setp(item.xaxis.get_majorticklabels(), 'size', 6) for item in axes.ravel()]
[plt.setp(item.yaxis.get_label(), 'size', 6) for item in axes.ravel()]
#x labels
[plt.setp(item.xaxis.get_label(), 'size', 6) for item in axes.ravel()]
plt.show()
cvrPath = "/home/svanhmic/workspace/Python/Erhvervs/data/cdata/parquet"
namePath = "/home/svanhmic/workspace/Python/Erhvervs/data/cdata/"
cvrfiles = os.listdir(cvrPath)
print(cvrfiles)
# import cvr data
cvrDf = (sqlContext
.read
.parquet(cvrPath+"/"+cvrfiles[1])
)
#cvrDf.show(1)
print(cvrDf.select("cvrNummer").distinct().count())
cvrDf.printSchema()
#Extract all Aps and A/S companies
companyByAsApsDf = sqlContext.read.parquet(cvrPath+"/AllApsAs.parquet")
companyByAsApsDf.drop("rank").drop("ansvarligDataleverandoer").drop("virksomhedsformkode").show(10)
```
## Hypothesis:
* To what extent do capital increases correlate with growth in the companies? In that connection, the (share) premium at the capital increase must be taken into account. The calculation uses the employee-count interval code (antal ansatte) and the number of full-time equivalents (årsværk).
```
display(Markdown("#### Import medarbejdstal"))
medarbejdsDf = sqlContext.read.parquet(cvrPath+"/TotalAarsVaerker.parquet")
medarbejdsDf.limit(10).toPandas()#.show(10)
# we are only interested in kapital after 1997
mainKapitalDf = (sqlContext
.read
.parquet(cvrPath+"/KaptialDataFrame.parquet")
.drop("KAPITALKLASSER_vaerdi")
.drop("KAPITAL_DELVIST_vaerdi")
.withColumn(col=F.coalesce(F.col("gyldigTil"),F.lit(F.current_date())),colName="gyldigTil")
.withColumn(col=F.datediff(F.col("GyldigTil"),F.col("gyldigFra")),colName="datediff")
.withColumn(col=F.col("KAPITAL_vaerdi").cast("double"),colName="KAPITAL_vaerdi")
.filter(F.year("gyldigFra") >= 1997)
)
mainKapitalDf.show(5)
mainKapitalDf.printSchema()
```
The following cell divides the attributes into two data frames in order to sample medarbejdstal (employee figures) properly against years.
Each kapital entry is examined with respect to the number of days it is current. Entries that are current for more than a year are joined to medarbejdstal as the secondary table; the remaining entries are joined as the primary table.
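As a toy sketch of the inner join performed below on `cvrNummer`, `aar` and `maaned` (plain pandas, with hypothetical numbers, not the real CVR data):

```python
import pandas as pd

# hypothetical capital records and employee counts keyed by company/year/month
capital = pd.DataFrame({'cvrNummer': [1, 1, 2], 'aar': [2000, 2001, 2000],
                        'maaned': [1, 1, 1],
                        'KAPITAL_vaerdi': [125e3, 250e3, 500e3]})
employees = pd.DataFrame({'cvrNummer': [1, 2], 'aar': [2001, 2000],
                          'maaned': [1, 1], 'AntalAnsatte': [5, 12]})

# inner join keeps only company/year/month combinations present in both
joined = employees.merge(capital, on=['cvrNummer', 'aar', 'maaned'], how='inner')
assert len(joined) == 2
assert {'KAPITAL_vaerdi', 'AntalAnsatte'} <= set(joined.columns)
```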
```
display(Markdown("### Hvornår opdateres kapitalværdierne?"))
#How does the duration look for posting kapitals?
datediffs = mainKapitalDf.select(["cvrNummer","datediff"]).distinct().na.drop("any").toPandas()
plt.hist(datediffs["datediff"],bins=100,range=[0,8000])
plt.title("Histogram of durration of submissions for kapital")
plt.xlabel("Days")
plt.ylabel("Count")
plt.axis()
plt.show()
#datediffs
avgKapital = (mainKapitalDf
.filter(F.col("KAPITALVALUTA_vaerdi") == "DKK")
.select("cvrNummer","KAPITAL_vaerdi","gyldigFra")
.distinct()
.groupBy("cvrNummer")
.mean("KAPITAL_vaerdi")
.withColumnRenamed(existing="avg(KAPITAL_vaerdi)",new="avgkapital")
.na
.drop("any")
.toPandas())
p1 = plt.hist(avgKapital["avgkapital"],bins=150,range=[125000,1000000000])
plt.yscale('log')
plt.title("Average kapital for each Company in DKK")
plt.ylabel("Count")
plt.xlabel("Kroner")
display(Markdown("### Hvad er den gennemsnitlig kapital i virksomhederne?"))
plt.show()
```
The joined medarbejdstal (employee figures) data frame is created here.
```
#the kapital gets joined with years in mainKap over
kapOverDf = (medarbejdsDf
.join(other=mainKapitalDf,on=((medarbejdsDf["cvrNummer"] == mainKapitalDf["cvrNummer"])
& (medarbejdsDf["aar"] == mainKapitalDf["aar"])
& (medarbejdsDf["maaned"] == mainKapitalDf["maaned"])),how="inner")
.drop(mainKapitalDf["cvrNummer"])
.drop(mainKapitalDf["aar"])
.drop(mainKapitalDf["maaned"])
.filter(F.col("KAPITALVALUTA_vaerdi")=="DKK")
)
desckapOverDf = kapOverDf.describe()
kapOverDf.orderBy("cvrNummer","aar","maaned").show()
#totalDf.printSchema()
#totalDf.orderBy("cvrNummer","aar").show()
describeKapMedDf = (kapOverDf
.filter(F.col("KAPITALVALUTA_vaerdi")=="DKK")
.withColumnRenamed(existing="lower_intervalKodeAntalAarsvaerk",new="AntalAarsvaerk")
.withColumnRenamed(existing="lower_intervalKodeAntalAnsatte",new="AntalAnsatte")
.drop("cvrNummer")
.drop("timeStampFra")
.drop("timeStampTil")
.drop("gyldigFra")
.drop("gyldigTil")
.drop("ts"))
describeKapMedDf.show()
```
OK, let's try the correlation (Pearson's) between kapital and the two workforce figures...
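The same Pearson computation can be sketched with NumPy on made-up numbers (`np.corrcoef`; the values are purely illustrative, not CVR data):

```python
import numpy as np

# hypothetical capital values (DKK) and full-time-equivalent counts
kapital = np.array([1e5, 2e5, 4e5, 8e5, 1.6e6])
aarsvaerk = np.array([1.0, 2.0, 5.0, 9.0, 20.0])
# Pearson's r between log-capital and workforce size
r = np.corrcoef(np.log1p(kapital), aarsvaerk)[0, 1]
assert 0.8 < r < 1.0   # strongly positively correlated in this toy sample
```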
```
#The three beskæftigelses numbers are joined together and re-sampled
display(Markdown("### Standard korrelations koeficienter."))
print("Korrelationen imellem kapital og årsværker: "+str(kapOverDf.corr("KAPITAL_vaerdi","lower_intervalKodeAntalAarsvaerk"))[:5])
print("Korrelationen imellem kapital og årsværker: "+str(kapOverDf.corr("KAPITAL_vaerdi","lower_intervalKodeAntalAnsatte"))[:5])
#scale the feature columns to zero mean / unit variance using the dataframe's own statistics
def scaleEm(df,labelCol,featCols):
    meanAndStd = (df.describe().filter( (F.col("summary") == "mean")|(F.col("summary") == "stddev") )
                  .rdd
                  .map(lambda x: (x["summary"],x.asDict())).collectAsMap())
    mstdBroadcast = sc.broadcast(meanAndStd)
    #build one scaled column expression, (x - mean)/stddev, per feature
    scaleCol = [((F.col(i) - F.lit(mstdBroadcast.value["mean"][i]))/F.lit(mstdBroadcast.value["stddev"][i])).alias(i) for i in featCols]
    featuresDf = (df
                  .select(labelCol+scaleCol)
                  .distinct()
                  )
    return featuresDf
# OK so we're taking the log1p first; if that doesn't work then we'll scale 'em
labelsCol = ["cvrNummer","lower_intervalKodeAntalAarsvaerk","lower_intervalKodeAntalAnsatte","aar"]
featcols = ["KAPITAL_vaerdi"]
# alternative feature set: log1p-transform the capital instead of scaling it
onlyLogKapCols = [F.log1p("KAPITAL_vaerdi").alias("KAPITAL_vaerdi"),"lower_intervalKodeAntalAarsvaerk","lower_intervalKodeAntalAnsatte","aar"]
featuresDf = (scaleEm(kapOverDf,labelsCol,featcols)
              .withColumnRenamed(existing="lower_intervalKodeAntalAarsvaerk",new="pAntalAarsvaerk")
              .withColumnRenamed(existing="lower_intervalKodeAntalAnsatte",new="pAntalAnsatte"))
showScatterMatrix(featuresDf,labelsCol+featcols)
def translateCols(df,months):
    '''
    Shift the employment columns `months` rows forward per company (ordered by year
    and month), so that capital at time t is paired with employment at t+months.
    NOTE: needs to be more general!
    '''
    windowYearLag = (Window
                     .partitionBy(F.col("cvrNummer"))
                     .orderBy(F.col("aar"),F.col("maaned")))
    return (df
            .withColumn(col=F.lead(F.col("lower_intervalKodeAntalAarsvaerk"),count=months).over(windowYearLag),colName="pAntalAarsvaerk")
            .withColumn(col=F.lead(F.col("lower_intervalKodeAntalAnsatte"),count=months).over(windowYearLag),colName="pAntalAnsatte")
            .na
            .drop("all",subset=["pAntalAarsvaerk","pAntalAnsatte"])
            .select(["cvrNummer","aar","maaned","ts","KAPITAL_vaerdi","pAntalAarsvaerk","pAntalAnsatte"])
            )
oneYearDf = translateCols(kapOverDf,12).cache()
twoYearsDf = translateCols(kapOverDf,24).cache()
threeYearsDf = translateCols(kapOverDf,36).cache()
allDfs = [featuresDf,oneYearDf,twoYearsDf,threeYearsDf]
allDfs[0].show()
showScatterMatrix(oneYearDf,["aar","cvrNummer",F.log1p("KAPITAL_vaerdi"),"pAntalAarsvaerk","pAntalAnsatte"])
#oneYearDf.show()
display(Markdown("### Correlation with time-shifted capital"))
print("Correlation between FTEs and capital after a 1-year shift: "+str(oneYearDf.select(F.log1p("KAPITAL_vaerdi").alias("vaerdi"),"pAntalAarsvaerk").corr("vaerdi","pAntalAarsvaerk"))[:5])
print("Correlation between employees and capital after a 1-year shift: "+str(oneYearDf.select(F.log1p("KAPITAL_vaerdi").alias("vaerdi"),"pAntalAnsatte").corr("vaerdi","pAntalAnsatte"))[:5])
print("FTEs and capital after 2 years: "+str(twoYearsDf.select(F.log1p("KAPITAL_vaerdi").alias("vaerdi"),"pAntalAarsvaerk").corr("vaerdi","pAntalAarsvaerk"))[:5])
print("Employees and capital after 2 years: "+str(twoYearsDf.select(F.log1p("KAPITAL_vaerdi").alias("vaerdi"),"pAntalAnsatte").corr("vaerdi","pAntalAnsatte"))[:5])
print("FTEs and capital after 3 years: "+str(threeYearsDf.select(F.log1p("KAPITAL_vaerdi").alias("vaerdi"),"pAntalAarsvaerk").corr("vaerdi","pAntalAarsvaerk"))[:5])
print("Employees and capital after 3 years: "+str(threeYearsDf.select(F.log1p("KAPITAL_vaerdi").alias("vaerdi"),"pAntalAnsatte").corr("vaerdi","pAntalAnsatte"))[:5])
display(Markdown("No big surprise..."))
#twoYearsDf.show()
print(oneYearDf.count())
print(twoYearsDf.count())
print(threeYearsDf.count())
import time
def quantile(rdd, p, sample=None, seed=None):
    """Compute a quantile of order p ∈ [0, 1] by linear interpolation
    :rdd a numeric rdd
    :p quantile (between 0 and 1)
    :sample fraction of the rdd to use. If not provided, the whole dataset is used
    :seed random number generator seed to be used with sample
    """
    assert 0 <= p <= 1
    assert sample is None or 0 < sample <= 1
    seed = seed if seed is not None else time.time()
    rdd = rdd if sample is None else rdd.sample(False, sample, seed)
    rddSortedWithIndex = (rdd
                          .sortBy(lambda x: x)
                          .zipWithIndex()
                          .map(lambda x: (x[1], x[0]))
                          .cache())
    n = rddSortedWithIndex.count()
    h = (n - 1) * p
    # the two neighbouring order statistics around position h
    rddX, rddXPlusOne = (
        rddSortedWithIndex.lookup(x)[0]
        for x in int(np.floor(h)) + np.array([0, 1]))
    return rddX + (h - np.floor(h)) * (rddXPlusOne - rddX)
# IQR-based outliers: filter on a single employee-count group and flag companies
# whose capital falls outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]
def getQuantileOutliers(df,group=0,subset=["cvrNummer","aar","KAPITAL_vaerdi","pAntalAarsvaerk","pAntalAnsatte"],valueCol="KAPITAL_vaerdi",groupCol="pAntalAnsatte"):
    groupPdf = (df
                .dropDuplicates(subset)
                .filter((F.col(groupCol)==group))
                .toPandas())
    q1 = groupPdf.quantile(0.25)
    q3 = groupPdf.quantile(0.75)
    iQR = q3 - q1
    return (df
            .dropDuplicates(subset)
            .filter((~F.col(valueCol).between(q1[valueCol]-1.5*iQR[valueCol],q3[valueCol]+1.5*iQR[valueCol]))
                    & (F.col(groupCol)==group))
            )
#quantile(oneYearDf.select("KAPITAL_vaerdi").na.drop().rdd.map(lambda x: x[0]),0.75)
```
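The outlier helper above applies the standard 1.5×IQR rule. A minimal pandas sketch of the same rule on toy numbers (not the CVR data):

```python
import pandas as pd

# Ten ordinary values plus one obvious outlier
values = pd.Series([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 100])
q1, q3 = values.quantile(0.25), values.quantile(0.75)  # 3.5 and 8.5
iqr = q3 - q1

# Anything outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] is flagged
outliers = values[~values.between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)]
print(outliers.tolist())  # → [100]
```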
Box plots of FTE (årsværk) and employee counts against time-shifted capital
```
plotLength = len(allDfs)
# display names for the two employment measures (used in the plot titles)
years = ["Full-time equivalents", "Number of employees"]
# x-columns, ordered to match the titles above
funCols = ["pAntalAarsvaerk","pAntalAnsatte"]
fig, axes = plt.subplots(1,2,figsize=(10,5))
#allDfs[i].printSchema()
df = (allDfs[0]
.filter(F.col("aar")==2012)
.select(F.log1p("KAPITAL_vaerdi").alias("log_kapital"),"pAntalAnsatte","pAntalAarsvaerk")
.toPandas())
#allDfs[0].show()
for i in range(2):
    axes[i].set_title("Capital compared with "+years[i])
    sb.boxplot(x=funCols[i],y="log_kapital",data=df,ax=axes[i])
#sb.boxplot(x="pAntalAarsvaerk",y="log_kapital",data=df,ax=aarsAx)
display(Markdown("### Boxplots of FTEs and number of employees against capital, 2012"))
[plt.setp(item.yaxis.get_majorticklabels(), 'size', 5) for item in axes.ravel()]
#x ticklabels
[plt.setp(item.xaxis.get_majorticklabels(), 'size', 5) for item in axes.ravel()]
[plt.setp(item.yaxis.get_label(), 'size', 5) for item in axes.ravel()]
#x labels
[plt.setp(item.xaxis.get_label(), 'size', 5) for item in axes.ravel()]
plt.show()
#display(Markdown("Boxplot for Årsværk og antal ansatte kombineret med 1 forskudt kapital"))
df = (allDfs[1]
.filter(F.col("aar") == 2012)
.select(F.log1p("KAPITAL_vaerdi").alias("log_kapital"),"pAntalAnsatte","pAntalAarsvaerk")
.toPandas())
fig, axes = plt.subplots(1,2,figsize=(10,5))
for i in range(2,4):
    #allDfs[i].printSchema()
    axes[i-2].set_title("Shifted capital compared with "+years[i-2])
    sb.boxplot(x=funCols[i-2],y="log_kapital",data=df,ax=axes[i-2])
#sb.boxplot(x="pAntalAarsvaerk",y="log_kapital",data=df,ax=aarsAx)
[plt.setp(item.yaxis.get_majorticklabels(), 'size', 5) for item in axes.ravel()]
#x ticklabels
[plt.setp(item.xaxis.get_majorticklabels(), 'size', 5) for item in axes.ravel()]
[plt.setp(item.yaxis.get_label(), 'size', 5) for item in axes.ravel()]
#x labels
[plt.setp(item.xaxis.get_label(), 'size', 5) for item in axes.ravel()]
display(Markdown("### Boxplots of FTEs and number of employees against capital shifted by 1 year, 2012"))
plt.show()
df = (allDfs[2]
.filter(F.col("aar") == 2012)
.select(F.log1p("KAPITAL_vaerdi").alias("log_kapital"),"pAntalAnsatte","pAntalAarsvaerk")
.toPandas())
fig, axes = plt.subplots(1,2,figsize=(10,5))
for i in range(4,6):
    #allDfs[i].printSchema()
    axes[i-4].set_title("Shifted capital compared with "+years[i-4])
    sb.boxplot(x=funCols[i-4],y="log_kapital",data=df,ax=axes[i-4])
#sb.boxplot(x="pAntalAarsvaerk",y="log_kapital",data=df,ax=aarsAx)
[plt.setp(item.yaxis.get_majorticklabels(), 'size', 5) for item in axes.ravel()]
#x ticklabels
[plt.setp(item.xaxis.get_majorticklabels(), 'size', 5) for item in axes.ravel()]
[plt.setp(item.yaxis.get_label(), 'size', 5) for item in axes.ravel()]
#x labels
[plt.setp(item.xaxis.get_label(), 'size', 5) for item in axes.ravel()]
display(Markdown("### Boxplots of FTEs and number of employees against capital shifted by 2 years, 2012"))
plt.show()
windowSpecRank =(Window.partitionBy(F.col("cvrNummer"))).orderBy(F.col("periode_gyldigFra").desc())
groupCols = ["cvrNummer","vaerdi"]
companyNameDf = (sqlContext
.read
.parquet(namePath+"companyCvrData")
.withColumn(colName="rank",col=F.rank().over(windowSpecRank))
.filter((F.col("rank")==1) & (F.col("sekvensnr")==0))
.select([F.col(i) for i in groupCols])
.withColumnRenamed(existing="vaerdi",new="navn")
.orderBy(F.col("cvrNummer"))
.cache()
)
qOutliersDf = getQuantileOutliers(allDfs[1].filter(F.col("aar")==2012),group=1)
withCompanies = (qOutliersDf
.join(other=companyNameDf,on=(qOutliersDf["cvrNummer"]==companyNameDf["cvrNummer"]),how="left")
.select("navn","KAPITAL_vaerdi")
.groupBy("navn")
.agg(F.mean("KAPITAL_vaerdi"))
.orderBy(F.col("avg(KAPITAL_vaerdi)").desc())
)
display(Markdown("### Top 20 outliers by average capital for companies with 1 employee, capital shifted by 1 year"))
withCompanies.show(truncate=False)
print( qOutliersDf.count())
qOutliersDf = getQuantileOutliers(allDfs[1].filter(F.col("aar")==2012),group=50)
withCompanies = (qOutliersDf
.join(other=companyNameDf,on=(qOutliersDf["cvrNummer"]==companyNameDf["cvrNummer"]),how="left")
.select("navn","KAPITAL_vaerdi")
.groupBy("navn")
.agg(F.mean("KAPITAL_vaerdi"))
.orderBy(F.col("avg(KAPITAL_vaerdi)").desc())
)
display(Markdown("### Top 20 outliers by average capital for companies with 50 employees, capital shifted by 1 year"))
withCompanies.show(truncate=False)
qOutliersDf = getQuantileOutliers(allDfs[2].filter(F.col("aar")==2012),group=50)
withCompanies = (qOutliersDf
.join(other=companyNameDf,on=(qOutliersDf["cvrNummer"]==companyNameDf["cvrNummer"]),how="left")
.select("navn","KAPITAL_vaerdi")
.groupBy("navn")
.agg(F.mean("KAPITAL_vaerdi"))
.orderBy(F.col("avg(KAPITAL_vaerdi)").desc())
)
display(Markdown("### Top 20 outliers by average capital for companies with 50 employees, capital shifted by 2 years"))
withCompanies.show(truncate=False)
```
### Summary
* Employee and FTE counts are divided into categories, while capital is entered more freely.
* Changes in capital are reported rather irregularly, while FTEs and employee counts are reported on a yearly, quarterly, or monthly basis.
* There are many outliers, i.e. companies with few employees or FTEs relative to their capital. However, several companies line up "nicely" when capital is shifted by 1 or 2 years.
* Further investigation could examine the outliers in the different groups, to see whether companies migrate from group to group.
```
qOutliersArr = [getQuantileOutliers(allDfs[i].filter(F.col("aar")==2012),group=1) for i in range(1,4)]
withCompanies = [(qOutliersArr[i]
.join(other=companyNameDf,on=(qOutliersArr[i]["cvrNummer"]==companyNameDf["cvrNummer"]),how="left")
.select("navn","KAPITAL_vaerdi")
.groupBy("navn")
.agg(F.mean("KAPITAL_vaerdi"))
.orderBy(F.col("avg(KAPITAL_vaerdi)").desc())
) for i in range(0,3)]
display(Markdown("Average capital of the group-1 outliers for each capital shift"))
withCompanies[0].show(truncate=False)
#companies that are outliers after 2 (resp. 3) years but not after 1 year
(withCompanies[1].subtract(withCompanies[0])).show(truncate=False)
withCompanies[2].subtract(withCompanies[0]).show(truncate=False)
```
#### ANOVA test
```
def computeExplainedVar(df,groupCol,summationCol):
    '''
    This method computes the explained variance, also called the between-group
    variability, which is the numerator in the F-test computation
    '''
    funcCols = [F.count,F.avg]
    exprsCols = [f(summationCol) for f in funcCols]
    secondFuncCols = [F.count,F.sum]
    secondExpsCols = [f("avgKapital") for f in secondFuncCols]
    totalMean = df.na.drop().groupBy().mean(summationCol).collect()[0]
    groupMeanDf = (df
                   .na
                   .drop()
                   .select(groupCol,summationCol)
                   .groupBy(groupCol)
                   .agg(*exprsCols)
                   #n_i * (group mean - grand mean)^2 for every group i
                   .withColumn(col=
                       F.col("count("+summationCol+")")*(F.col("avg("+summationCol+")")-totalMean[0])**2
                       ,colName="avgKapital")
                   .groupBy()
                   .agg(*secondExpsCols)
                   #degrees of freedom: number of groups minus 1
                   .withColumn(col=F.col("count(avgKapital)")-F.lit(1),colName="DegreeOFExplained")
                   .withColumn(col=F.col("sum(avgKapital)")/(F.col("DegreeOFExplained")),colName="ExplainedVar")
                   )
    return groupMeanDf
computeExplainedVar(twoYearsDf,"pAntalAnsatte","KAPITAL_VAERDI").show()
def computeUnexplainedVar(df,groupCol,summationCol):
    '''
    This method computes the unexplained variance, or within-group variability, which
    is the denominator in the F-test computation
    Input:
     - df spark data frame containing the data. It should at least contain a group
       column and the column that is subjected to variance
     - groupCol string that holds the name of the column listing the group variables
     - summationCol string that holds the name of the column with the variability
    Output:
     - subtractMeanDf spark data frame that contains the unexplained variance.
    '''
    noMissingDf = (df
                   .select(groupCol,summationCol)
                   .na
                   .drop())
    funcCols = [F.mean]
    exprsCols = [f(summationCol) for f in funcCols]
    groupMeanRdd = (noMissingDf
                    .groupBy(groupCol)
                    .agg(*exprsCols)
                    .rdd
                    )
    meanMap = groupMeanRdd.collectAsMap()
    subtractMeanRdd = (noMissingDf
                       .rdd
                       .map(lambda x: (x[0],x[1],meanMap[x[0]]))
                       )
    NminusK = noMissingDf.count()-groupMeanRdd.count()
    schema = StructType([StructField(groupCol,IntegerType()),StructField(summationCol,DoubleType()),StructField("groupMean",DoubleType())])
    meanFuncUdf = F.udf(lambda x,y: float(((x-y)**2)/(NminusK)),DoubleType())
    subtractMeanDf = (sqlContext
                      .createDataFrame(subtractMeanRdd,schema=schema)
                      .withColumn(col=meanFuncUdf(F.col(summationCol),F.col("groupMean")),colName="subSums")
                      .groupBy()
                      .sum()
                      .withColumn(col=F.lit(NminusK),colName="DegreeOFunexplained")
                      )
    #subtractMeanDf.show()
    return subtractMeanDf
#twoYearsDf.show()
computeUnexplainedVar(twoYearsDf,"pAntalAnsatte","KAPITAL_VAERDI").show()
def computeF(df,groupCol,summationCol):
    explainedVar = computeExplainedVar(df,groupCol,summationCol).collect()[0]
    unExplainedVar = computeUnexplainedVar(df,groupCol,summationCol).collect()[0]
    F_val = float(explainedVar["ExplainedVar"]/unExplainedVar["sum(subSums)"])
    return [F_val,explainedVar["DegreeOFExplained"],unExplainedVar["DegreeOFunexplained"]]
F1 = computeF(oneYearDf,"pAntalAnsatte","KAPITAL_VAERDI")
F2 = computeF(twoYearsDf,"pAntalAnsatte","KAPITAL_VAERDI")
sp.stats.f.sf(F2[0], float(F2[1]), float(F2[2]))
```
#### Debugging the F-test
```
sp.stats.f.sf(F1[0], float(F1[1]), float(F1[2]))
#print(sp.stats.f.sf(F2[0], float(F2[1]), float(F2[2])))
t1 = [164, 172, 168, 177, 156, 195]
t2 = [178, 191, 197, 182, 185, 177]
t3 = [175, 193, 178, 171, 163, 176]
t4 = [155, 166, 149, 164, 170, 168]
val = pan.DataFrame([t1,t2,t3,t4],index=['type1', 'type2', 'type3', 'type4'],columns=["ex0","ex1","ex2","ex3","ex4","ex5"])
val["label"] = [1, 2, 3, 4]
fxUdf = F.udf(lambda x,y,z,v,w,a: [float(x),float(y),float(z),float(v),float(w),float(a)],ArrayType(DoubleType()))
dftestF = (sqlContext
.createDataFrame(data=val)
.withColumn(col=fxUdf(F.col("ex0"),F.col("ex1"),F.col("ex2"),F.col("ex3"),F.col("ex4"),F.col("ex5")),colName="vector")
.select("label",F.explode("vector").alias("KAPITAL_vaerdi"))
)
dftestF.printSchema()
#dftestF.show()
Ft = computeF(dftestF,"label","KAPITAL_vaerdi")
sp.stats.f.sf(Ft[0], float(Ft[1]), float(Ft[2])) # sanity check of our own F-test implementation against a textbook example
```
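The same four toy groups can be fed to scipy's built-in one-way ANOVA as an independent reference for the hand-rolled F-test (here with plain lists, no Spark needed):

```python
import scipy.stats as sp_stats

# The same four toy groups as in the debug cell above
t1 = [164, 172, 168, 177, 156, 195]
t2 = [178, 191, 197, 182, 185, 177]
t3 = [175, 193, 178, 171, 163, 176]
t4 = [155, 166, 149, 164, 170, 168]

# One-way ANOVA: F = between-group MS / within-group MS, df = (3, 20)
F_stat, p_value = sp_stats.f_oneway(t1, t2, t3, t4)
print(F_stat, p_value)
```

Any implementation of `computeF` on this data should reproduce scipy's F statistic.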
# Support Vector Machine
```
!pip install six
!pip install pandas
!pip install numpy
!pip install sklearn
!pip install matplotlib
!pip install imbalanced-learn
import pandas as pd
import numpy as np
import sklearn
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
import matplotlib.pyplot as plt
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from imblearn.under_sampling import RandomUnderSampler
train_set = pd.read_csv('train_set_with_features.csv')
```
## Data Prep
```
# Random undersampler to reduce the number of majority class instances to match number of minority class instance.
undersample = RandomUnderSampler(sampling_strategy='majority')
# Extract only engineered features into x and y
x = train_set.drop(['id', 'qid1', 'qid2', 'question1', 'question2', 'is_duplicate', 'Unnamed: 0'], axis=1)
y = train_set[['is_duplicate']]
# Because gridSearch parameter tuning is slow, only use 50% of model data for training the gridSearch model while searching for best parameters for final SVM model.
x_grid_train, x_grid_test, y_grid_train, y_grid_test = train_test_split(x, y, test_size = 0.5, random_state = 42)
# Split 80% of data for the final model training and 20% for testing.
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.2, random_state = 42)
# Normalize then undersample data used by final model
scaler = StandardScaler()
x_train = scaler.fit_transform(x_train)
x_test = scaler.transform(x_test)
x_train, y_train = undersample.fit_resample(x_train, y_train)
# Normalize then undersample data used by gridSearch model
x_grid_train = scaler.fit_transform(x_grid_train)
x_grid_test = scaler.transform(x_grid_test)
x_grid_train, y_grid_train = undersample.fit_resample(x_grid_train, y_grid_train)
# gridSearch requires labels to be of a particular shape.
y_grid_train = y_grid_train.to_numpy().reshape(-1)
y_grid_test = y_grid_test.to_numpy().reshape(-1)
```
## Parameter tuning
```
# Execute gridSearch to try these parameters for SVM.
param_grid = {'C': [0.1,1, 10, 100], 'gamma': [1,0.1,0.01,0.001],'kernel': ['rbf', 'sigmoid']}
grid = GridSearchCV(SVC(),param_grid,refit=True,verbose=2, n_jobs=3)
grid.fit(x_grid_train ,y_grid_train)
# Best parameters for SVM, but best kernel is not shown
print(grid.best_estimator_)
# Print out the performance of the SVM model trained by gridSearch using the best parameters.
grid_predictions = grid.predict(x_test)
print(confusion_matrix(y_test,grid_predictions))
print(classification_report(y_test,grid_predictions))
```
## Fitting model based on tuned parameters
```
# Use the parameters found by gridSearch to train the final SVM model with more data (80% instead of 50%).
# gridSearch did not reveal the best kernel type, so multiple kernels were tried manually; 'rbf' performed best.
SVM = SVC(C=10, kernel='rbf', degree=3, gamma=0.01)
clf = SVM.fit(x_train,y_train)
predictions_SVM = SVM.predict(x_test)
# Print out the performance of the SVM trained using the best parameters and more training data.
print(classification_report(y_test,predictions_SVM))
```
### Process:
1. Normalize feature engineered training data
2. Parameter tuning using GridSearchCV which fits the SVM model using several values of each parameter and evaluating it with a 5-fold cross validation. (10000 rows)
3. Resulting parameters are C = 100, gamma = 0.01.
4. Upon testing, best kernel for those parameters is rbf.
Results suggest that the model is better used to predict that a question is NOT a duplicate.
### Advantages:
1. By using a kernel, there can be separation of the classes even if the data provided is not linearly separable. (https://core.ac.uk/reader/6302770)
2. SVM provides good out of sample generalization as it makes use of regularization which helps to prevent overfitting on the dataset.
3. SVM can classify data points faster than some other models because it only relies on the support vectors to decide the decision boundary and not all of the data points used to train the model (like kNN).
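A small illustration of advantage 1 (toy `make_circles` data, not our dataset): concentric circles are not linearly separable, so a linear kernel fails where the RBF kernel succeeds.

```python
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two concentric circles: no straight line can separate the classes
X, y = make_circles(n_samples=300, factor=0.3, noise=0.05, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

linear_acc = SVC(kernel='linear').fit(X_tr, y_tr).score(X_te, y_te)
rbf_acc = SVC(kernel='rbf').fit(X_tr, y_tr).score(X_te, y_te)
print(linear_acc, rbf_acc)  # the RBF kernel separates the circles; linear cannot
```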
### Disadvantages:
1. Does not perform too well with skewed dataset, as in our case. There would be high variance of the decision boundary as the under represented class can skew the decision boundary by a lot.
https://www.quora.com/Why-does-SVM-not-perform-well-for-imbalanced-data
2. Takes a long time to train the model if the data set is large. "As you mention, storing the kernel matrix requires memory that scales quadratically with the number of data points. Training time for traditional SVM algorithms also scales superlinearly with the number of data points. So, these algorithms aren't feasible for large data sets."
https://stats.stackexchange.com/questions/314329/can-support-vector-machine-be-used-in-large-data
# Problem Statement:
Given profiles representing fictional customers of an e-commerce company. The profiles contain information about the customer, their orders, their transactions, what payment methods they used, and whether the customer is fraudulent or not. We need to predict whether a given customer is fraudulent based on the above factors.
## The data given below represents fictional customers from an e-commerce website
The data contains information about the customerEmail, the orders and transactions they have made, what payment method they used, which card the payment was made with, and whether the customer is fraudulent or not.
1) The first thing is loading the dataset
```
import pandas as pd
import numpy as np
data1 = pd.read_csv('Customer_DF (1).csv')
data2 = pd.read_csv('cust_transaction_details (1).csv')
#this code just makes a copy of the datasets
data_copy_1 = pd.read_csv('Customer_DF (1).csv')
data_copy_2 = pd.read_csv('cust_transaction_details (1).csv')
data1.head()
data2.head()
```
Checking whether there are any null values
```
data1.isnull().sum()
data2.isnull().sum()
```
Printing the columns of both tables
```
data1.columns
data2.columns
```
Shape of the datasets
```
data1.shape
data2.shape
print('total customers records',data1.shape[0], 'and total unique customers',len(data1.customerEmail.unique()))
```
Duplicate customerEmail IDs:
```
data1[data1['customerEmail'].duplicated()]
```
We see that all the duplicated records are fraudulent.
So our job now is to remove all the duplicate entries from the dataset
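The cells below collapse duplicate customerEmail rows by grouping and averaging the numeric columns. A minimal pandas sketch of that dedupe strategy on toy rows (hypothetical values):

```python
import pandas as pd

# Two rows for the same email, one for another
tx = pd.DataFrame({
    "customerEmail": ["a@x.com", "a@x.com", "b@y.com"],
    "transactionAmount": [10.0, 30.0, 50.0],
    "transactionFailed": [0, 1, 0],
})

# One row per email: numeric columns are averaged, then truncated to int,
# mirroring the groupby(...).mean().astype(int) pattern used below
deduped = tx.groupby("customerEmail").mean().astype(int)
print(deduped)
```

Note that `astype(int)` truncates, so a 0/1 flag averaged to 0.5 becomes 0.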
```
data2=data2.drop(["transactionId","paymentMethodId","orderId","Unnamed: 0"],axis=1)
#keep only emails that contain a "." (the rest are marked "f" and dropped below)
data2["customerEmail"]=data2.apply(lambda x:x.customerEmail if("." in x.customerEmail) else "f",axis=1)
#setting customerEmail as the index of the dataframe
data2 = data2.set_index("customerEmail")
#dropping the email which does not have '.' in them
data2=data2.drop("f",axis=0)
#average the numeric columns per customerEmail to collapse duplicate rows
n1=data2.groupby("customerEmail")["paymentMethodRegistrationFailure"].mean().astype(int)
n2=data2.groupby("customerEmail")["transactionAmount"].mean().astype(int)
n3=data2.groupby("customerEmail")["transactionFailed"].mean().astype(int)
data2=data2.drop(["transactionFailed","transactionAmount","paymentMethodRegistrationFailure"],axis=1)
data2=data2.drop(["paymentMethodProvider"],axis=1)
#creating dummy variables for the dataset
data2= pd.get_dummies(data2)
data2
m1=data2.groupby("customerEmail")["orderState_failed"].mean().astype(int)
m2=data2.groupby("customerEmail")["orderState_fulfilled"].mean().astype(int)
m3=data2.groupby("customerEmail")["orderState_pending"].mean().astype(int)
l1=data2.groupby("customerEmail")["paymentMethodType_card"].mean().astype(int)
l2=data2.groupby("customerEmail")["paymentMethodType_paypal"].mean().astype(int)
l3=data2.groupby("customerEmail")["paymentMethodType_apple pay"].mean().astype(int)
l4=data2.groupby("customerEmail")["paymentMethodType_bitcoin"].mean().astype(int)
#concatenating the variables after removing duplicates
nresult = pd.concat([m1,m2,m3,l1,l2,l3,l4,n1,n2,n3], axis=1, join='inner')
data1=data1.drop(["customerPhone","customerDevice","customerIPAddress","customerBillingAddress","Unnamed: 0"],axis=1)
#converting the target variable from bool to int for the creation of dummy variable
data1['Fraud'] = data1['Fraud'].astype(int)
#merging both the datasets into single object called result
result = pd.merge(data1,nresult, on='customerEmail')
result.isnull().sum()
#unique email id's in result dataset
len(result["customerEmail"].unique())
#dropping the email id as it is of no use now
result=result.drop(["customerEmail"],axis=1)
result.columns
#creating the dummies for the merged dataset
result2= pd.get_dummies(result)
result2
```
Now exploring the data and analysing it
```
#customer with the maximum number of transactions
data1[data1['No_Transactions']==data1['No_Transactions'].max()]
#customer with the maximum number of orders
data1[data1['No_Orders']==data1['No_Orders'].max()]
#customer with the maximum number of payments
data1[data1['No_Payments']==data1['No_Payments'].max()]
data_copy_2['paymentMethodRegistrationFailure'].value_counts()
import seaborn as sns
import matplotlib.pyplot as plt
sns.countplot(x='paymentMethodRegistrationFailure',data=data_copy_2,palette='hls')
plt.show()
```
Count of payment method registration failures
INFERENCE --> There is a very low probability of a payment registration failing
```
data_copy_2['paymentMethodType'].value_counts()
sns.countplot(x='paymentMethodType',data=data_copy_2,palette='hls')
plt.show()
```
PREFERRED PAYMENT METHOD
INFERENCE --> People prefer card over the other payment method types
```
data_copy_2['paymentMethodProvider'].value_counts()
sns.countplot(y="paymentMethodProvider",data=data_copy_2)
```
Payment Method Provider
INFERENCE --> JCB 16 DIGIT is widely used, followed by VISA 16 DIGIT and the rest
```
data_copy_2['transactionFailed'].value_counts()
sns.countplot(x='transactionFailed',data=data_copy_2,orient='vertical',palette='hls')
plt.show()
```
Transaction failed
INFERENCE --> after the payment is completed, the probability of the transaction failing is low
```
data_copy_2['orderState'].value_counts()
sns.countplot(x='orderState',data=data_copy_2,orient='vertical',palette='hls')
plt.show()
```
Order State
INFERENCE --> it is found that most of the orders were fulfilled
```
result['Fraud'].value_counts()
sns.countplot(x='Fraud',data=data1,orient='vertical',palette='hls')
plt.show()
```
FRAUD
INFERENCE --> it is seen that the number of fraud cases is nearly half the number of non-fraud cases
```
result
#number of transaction that went fraud and not fraud
plt.scatter(result['No_Transactions'],result['Fraud'],color='#2ca02c')
#number of orders that went fraud and not fraud
plt.scatter(result['No_Orders'],result['Fraud'],color='#2ca02c')
#number of payments that went fraud and not fraud
plt.scatter(result['No_Payments'],result['Fraud'],color='#2ca02c')
sns.catplot(x="No_Payments",y="No_Transactions",data=result,kind="box")
```
INFERENCE --> although there is no particular trend, it seems that as the no. of payments increases, the no. of transactions tends to decrease
```
sns.barplot(y="No_Payments",x="No_Orders",data=result)
```
INFERENCE --> as the no. of orders increases, the no. of payments tends to increase
```
data1[data1['No_Payments']==0]
# number of fulfilled orders
len(result[result['orderState_fulfilled'] == 1])
# number of pending orders
len(result[result['orderState_pending'] == 1])
# number of failed orders
len(result[result['orderState_failed'] == 1])
%matplotlib inline
pd.crosstab(result['paymentMethodType_card'],result['Fraud']).plot(kind='bar')
plt.title('paymentMethodProvider for card vs fraud')
plt.xlabel('paymentMethodProvider')
plt.ylabel('Fraud')
```
INFERENCE --> when the payment method is not card, the fraud and non-fraud cases are nearly equal; when card is used, the non-fraud cases outnumber the fraud cases
```
%matplotlib inline
pd.crosstab(result['paymentMethodType_paypal'],result['Fraud']).plot(kind='bar')
plt.title('paymentMethodProvider for paypal vs fraud')
plt.xlabel('paymentMethodProvider')
plt.ylabel('Fraud')
```
INFERENCE --> when the payment method is not paypal, the non-fraud cases outnumber the fraud cases; when paypal is used, there are no fraud cases at all
```
%matplotlib inline
pd.crosstab(result['paymentMethodType_bitcoin'],result['Fraud']).plot(kind='bar')
plt.title('paymentMethodProvider for bitcoin vs fraud')
plt.xlabel('paymentMethodProvider')
plt.ylabel('Fraud')
```
INFERENCE --> when the payment type is not bitcoin, the non-fraud cases outnumber the fraud cases; when bitcoin is used, the fraud and non-fraud cases are almost equal
So far we have done some EDA on our datasets.
Now we have to construct our model to predict whether a customer is fraudulent or not
```
result.describe(include='all')
#creating dependent and independent variables
features = result2.drop('Fraud',axis=1) #->independent variables
labels = result2['Fraud'] #->dependent variable
#splitting the data into training and testing
#and performing Logistic Regression and fitting the training dataset in the Logistic model
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(features,labels,test_size=0.20,random_state=0)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(X_train,y_train)
#predicting the output from the test data
ypred = lr.predict(X_test)
ypred
from sklearn.metrics import confusion_matrix
#creating a confusion matrix to check whether the variable is predicted correctly or not
confusion_matrix(y_test,ypred)
#normalizing the data and plotting the confusion matrix
from sklearn.metrics import confusion_matrix
import itertools
def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Oranges):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`
    """
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')
    print(cm)
    plt.figure(figsize = (10, 10))
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title, size = 24)
    plt.colorbar(aspect=4)
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45, size = 14)
    plt.yticks(tick_marks, classes, size = 14)
    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    # Labeling the plot
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt), fontsize = 20,
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
    plt.grid(None)
    plt.tight_layout()
    plt.ylabel('True label', size = 18)
    plt.xlabel('Predicted label', size = 18)
cm = confusion_matrix(y_test, ypred)
plot_confusion_matrix(cm, classes = ['fraud', 'not fraud'],
title = 'FRAUD DETECTION CONFUSION MATRIX')
#finding out the accuracy_score for the model
#the accuracy below for this model is 68%; hyperparameters are yet to be applied
from sklearn.metrics import accuracy_score
print("Logistic regression")
accuracy_score(y_test,ypred) * 100
from sklearn.metrics import classification_report
print(classification_report(y_test,ypred))
#build a pipeline so the model itself will normalize the data and perform PCA
from sklearn import linear_model, decomposition, datasets
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
pca = decomposition.PCA(n_components=2)
logistic = linear_model.LogisticRegression()
pipe = Pipeline(steps=[('sc', sc),
('pca', pca),
('logistic', logistic)])
n_components = list(range(1,features.shape[1]+1,1))
# Create a list of values of the regularization parameter
C = np.logspace(-4, 4, 15)
# Create a list of options for the regularization penalty
penalty = ['l1', 'l2']
# Create a dictionary of all the parameter options
# accessing the parameters of the steps of a pipeline by using '__'
parameters = dict(pca__n_components=n_components,
logistic__C=C,
logistic__penalty=penalty)
clf = GridSearchCV(pipe, parameters,verbose = True,n_jobs=-1,scoring='accuracy')
# Fit the grid search
clf.fit(features, labels)
print('Best Penalty:', clf.best_estimator_.get_params()['logistic__penalty'])
print('Best C:', clf.best_estimator_.get_params()['logistic__C'])
print('Best Number Of Components:', clf.best_estimator_.get_params()['pca__n_components'])
#this will return the best parameters that are required for our model
clf.best_params_
#this will return the mean accuracy of the model
clf.best_score_
clf.best_estimator_
clf.best_estimator_.get_params()['logistic']
#here cross_val_score will split the whole data into training and testing and perform cross validation
cross_val = cross_val_score(clf,features,labels,cv=3,scoring='accuracy',n_jobs=-1)
#this will return the accuracy for each split, based on the 'cv' value which is 3
cross_val * 100
print('the mean accuracy of our model is',(cross_val * 100).mean())
print('the maximum accuracy of our model is',max(cross_val * 100))
ypred_new = clf.predict(X_test)
ypred_new
accuracy_score(y_test,ypred_new) * 100
```
# INFERENCES
* There is a very low probability of a payment failing
* People prefer cards over other payment method types
* JCB 16 DIGIT is the most widely used, followed by VISA 16 DIGIT and the rest
* After the payment is completed, the probability of the transaction failing is low
* Most of the orders were fulfilled
* The number of fraud cases is nearly half the number of non-fraud cases
* As the number of payments increases, the number of transactions tends to decrease
* As the number of orders increases, the number of payments tends to increase
* When the payment method is not a card, the fraud and non-fraud cases are nearly equal; when a card is used, non-fraud cases outnumber fraud cases
* When the payment method is not PayPal, non-fraud cases outnumber fraud cases; when PayPal is used, there are no fraud cases at all
* When the payment method is not Bitcoin, non-fraud cases outnumber fraud cases; when Bitcoin is used, fraud and non-fraud cases are almost equal
# Model Selection
* Initially, we chose Logistic Regression, since this appears to be a binary classification problem, and achieved an accuracy of 75%
* Implemented a pipeline that standardizes the data and performs PCA
* Applied cross-validation and fitted the model via grid search
* The mean accuracy of our model is 77.14%
* The highest accuracy of our model is 82.98%
* The testing accuracy of our model is 78.57%
```
# applying an ANN here
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)  # use the scaler fitted on the training data; do not refit on the test set
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
from keras.models import Sequential
from keras.layers import Dense, Activation, Embedding, Flatten, LeakyReLU, BatchNormalization, Dropout
from keras.activations import relu, sigmoid
def create_model(layers, activation):
    model = Sequential()
    for i, node in enumerate(layers):
        if i == 0:
            model.add(Dense(node, input_dim=X_train.shape[1]))
            model.add(Activation(activation))
            model.add(Dropout(0.3))
        else:
            model.add(Dense(node))
            model.add(Activation(activation))
            model.add(Dropout(0.3))
    model.add(Dense(units=1, activation='sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model
model = KerasClassifier(build_fn=create_model,verbose=0)
layers = [[20,15],[40,20,15],[45,25,20,15]]
activation = ['sigmoid','relu']
param_grid = dict(layers=layers, activation=activation, batch_size = [64,128,150,175], epochs=[35])
grid = GridSearchCV(estimator=model, param_grid=param_grid,cv=3)
grid_result = grid.fit(X_train, y_train)
ypred_new = grid.predict(X_test)
ypred_new
accuracy_score(ypred_new,y_test)
grid_result.best_params_
grid_result.best_score_
```
```
# importing required modules
from zipfile import ZipFile
# specifying the zip file name
file_name = "/content/drive/MyDrive/Datasets/parlerData.zip"
# opening the zip file in READ mode
with ZipFile(file_name, 'r') as zip:
    # printing all the contents of the zip file
    zip.printdir()
    # extracting all the files
    print('Extracting all the files now...')
    zip.extractall('/content/drive/MyDrive/Datasets/parlerData')
    # zip.extractall()
    print('Done!')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import json
import nltk
nltk.download('punkt')
from wordcloud import WordCloud, STOPWORDS
from queue import PriorityQueue
import plotly.express as px
```
# Posts data
CreatedAt : only <b>int</b> data type<br>
Creator : 101945 - 101938 = 7 null
-----
# Users data
Score, Joined, Interactions : only <b>int</b> data type<br>
Name : 22326 - 22325 = 1 null<br>
Bio : 22326 - 17015 = 5311 null<br>
-----
# Post data
`<class 'pandas.core.frame.DataFrame'>`<br>
RangeIndex: 101945 entries, 0 to 101944<br>
Data columns (total 9 columns):<br>
| # | Column | Non-Null Count | Dtype
|----|-------------|-----------------|-------
| 0 | Impressions | 101945 non-null | object
| 1 | Id | 101945 non-null | object
| 2 | Upvotes | 101945 non-null | object
| 3 | At | 101945 non-null | object
| 4 | Comments | 101945 non-null | object
| 5 | Reposts | 101945 non-null | object
| 6 | CreatedAt | 101945 non-null | int64
| 7 | Body | 101944 non-null | object
| 8 | Creator | 101938 non-null | object
dtypes: int64(1), object(8)<br>
memory usage: 7.0+ MB<br>
-----
# User data
`<class 'pandas.core.frame.DataFrame'>`<br>
RangeIndex: 22326 entries, 0 to 22325<br>
Data columns (total 9 columns):
| # | Column | Non-Null Count | Dtype
|----|--------------|----------------|-------
| 0 | Name | 22325 non-null | object
| 1 | Score | 22326 non-null | int64
| 2 | Id | 22326 non-null | object
| 3 | Bio | 17015 non-null | object
| 4 | Joined | 22326 non-null | int64
| 5 | Username | 22326 non-null | object
| 6 | Interactions | 22326 non-null | int64
| 7 | Human | 22326 non-null | bool
| 8 | Verified | 22326 non-null | bool
dtypes: bool(2), int64(3), object(4)<br>
memory usage: 1.2+ MB
```
working_directory = "/content/drive/MyDrive/Datasets/parlerData"
# working_directory = '/content'
class preprocessing :
    def __init__(self,working_dir) :
        self.post_df = pd.read_csv(working_dir + '/parler_postsData.csv')
        self.user_df = pd.read_csv(working_dir + '/parler_userData.csv')
        self.post_df['Upvotes'] = self.post_df['Upvotes'].apply(lambda x : int(self.str_2_float(x)) )
        self.user_df['Interactions'] = pd.to_numeric(self.user_df['Interactions'])
        # self.post_df.info()
        # self.user_df.info()

    def get_UserName_verified_details(self, df) :
        mappings = self.user_df.set_index('Id').T.to_dict('records')
        UID = df.index
        df['Username'] = UID.map(mappings[4]) # UserName
        df['Human'] = UID.map(mappings[6]) # Human
        df['Verified'] = UID.map(mappings[7]) # Verified
        return df

    def str_2_float(self, x) :
        if('k' in x) :
            num = float(x[:x.index('k')]) * 1000
        else :
            num = float(x)
        return num

    def plot_bar_graph_MultiColor(self, df , y_label , y_header) :
        df['color'] = (df['Human'] * 10) + (df['Verified'] * 1)
        color_dict = {0 : (0, 0, 0) , 1 : (1, 0, 0) , 10 : (0, 0, 1) , 11 : (0, 1, 0)}
        fig = plt.figure(figsize=(df.shape[0], 5))
        plt.bar(df['Username'], df[y_header], color = df['color'].map(color_dict).to_list() , width = 0.7)
        plt.xlabel("Usernames")
        plt.ylabel(y_label)
        plt.title("Bar graph of " + y_label + " per user for top " + str(df.shape[0]) + " users.")
        plt.xticks(rotation=45)
        handle = [plt.Rectangle((0,0),1,1, color = color_dict[i]) for i in color_dict]
        labels = ['Neither verified nor human' , 'Verified but not human' , 'Human but not verified' , 'Verified Human']
        plt.legend(handle , labels)
        plt.show()

    def plot_bar_graph(self, X , Y , x_label , y_label ) :
        fig = plt.figure(figsize=(len(X), 5))
        plt.bar(X, Y, width = 0.7)
        plt.xlabel(x_label)
        plt.ylabel(y_label)
        plt.title("Bar graph of top "+ str(len(X)) + " " + x_label + " based upon " + y_label)
        plt.xticks(rotation=45)
        plt.show()

    def question_1(self, n = 10) :
        UIDrank_numContent = self.post_df['Creator'].value_counts()[:n] # UID ranking based upon amount of content
        UID_post_df = UIDrank_numContent.to_frame(name = 'postCounts') * 100 / self.post_df.shape[0]
        top_n_content_df = self.get_UserName_verified_details(UID_post_df)
        print('There are' , top_n_content_df['postCounts'].sum() , '% of posts posted by top' , n , 'users')
        self.plot_bar_graph_MultiColor(top_n_content_df , 'Percentage of content posted' , 'postCounts')

    def question_2a(self, n) :
        #------------------------------ bar graph based on number of upvotes per username ------------------------------
        # UIDrank_Upvotes = self.post_df[['Creator' , 'Upvotes']].groupby(['Creator']).agg(['sum'])
        # UIDrank_Upvotes.columns = ['Upvotes_sum'] # reducing multi_index to columns
        # UIDrank_Upvotes = UIDrank_Upvotes.sort_values(by=['Upvotes_sum'] , ascending = False )[:n]
        # top_n_upvotes_df = self.get_UserName_verified_details(UIDrank_Upvotes)
        # self.plot_bar_graph_MultiColor(top_n_upvotes_df , 'Upvotes posted' , 'Upvotes_sum')
        #------------------------------ bar graph based on score per username ------------------------------
        UIDrank_Upvotes = self.user_df[['Id' , 'Score']].groupby(['Id']).agg(['sum'])
        UIDrank_Upvotes.columns = ['Upvotes_sum'] # reducing multi_index to columns
        UIDrank_Upvotes = UIDrank_Upvotes.sort_values(by=['Upvotes_sum'] , ascending = False )[:n]
        top_n_upvotes_df = self.get_UserName_verified_details(UIDrank_Upvotes)
        # print(top_n_upvotes_df)
        self.plot_bar_graph_MultiColor(top_n_upvotes_df , 'Number of Upvotes' , 'Upvotes_sum')

    def question_2b(self, n) :
        #------------------------------ bar graph based on number of interactions per username ------------------------------
        UIDrank_Interactions = self.user_df[['Id' , 'Interactions']].groupby(['Id']).agg(['sum'])
        UIDrank_Interactions.columns = ['Interaction_sum'] # reducing multi_index to columns
        UIDrank_Interactions = UIDrank_Interactions.sort_values(by=['Interaction_sum'] , ascending = False )[:n]
        top_n_Interactions_df = self.get_UserName_verified_details(UIDrank_Interactions)
        self.plot_bar_graph_MultiColor(top_n_Interactions_df , 'Number of Interactions' , 'Interaction_sum')
        return top_n_Interactions_df

    def question_2c(self, n) :
        # ------------------------------ bar graph based on number of mentions per username ------------------------------
        UID_mentioned_freq = {}
        for datapoint in self.post_df['At'] :
            for UID in json.loads(datapoint.replace("'" , '"')).values() :
                if(UID in UID_mentioned_freq) :
                    UID_mentioned_freq[UID] += 1
                else :
                    UID_mentioned_freq[UID] = 1
        UIDrank_freq_dict = {k : UID_mentioned_freq[k] for k in sorted(UID_mentioned_freq , key = UID_mentioned_freq.get , reverse = True)[:n]}
        UIDrank_mentions = pd.DataFrame.from_dict(UIDrank_freq_dict , orient = 'index' , columns = ['Mentions_count'])
        top_n_mentions_df = self.get_UserName_verified_details(UIDrank_mentions)
        self.plot_bar_graph_MultiColor(top_n_mentions_df , 'Number of Mentions' , 'Mentions_count')

    def word_cloud_plot(self, word_cloud_sentence) :
        # wordcloud = WordCloud(background_color ='white', stopwords = set(STOPWORDS)).generate(word_cloud_sentence)
        wordcloud = WordCloud(width = 1200, height = 600, background_color ='white', stopwords = set(STOPWORDS)).generate(word_cloud_sentence)
        # wordcloud = WordCloud(width = 1200, height = 600, background_color ='white', stopwords = set(STOPWORDS), min_font_size = 10).generate(word_cloud_sentence)
        plt.figure()
        plt.imshow(wordcloud)
        plt.axis("off")
        plt.tight_layout(pad = 0)
        plt.show()

    def question_2d(self, df) :
        mappings = self.user_df.set_index('Id').T.to_dict('records')
        UID = df.index
        df['Bio'] = UID.map(mappings[2]) # Bio
        word_cloud_sentence = ""
        punctuations = '!"$%&\'()*+,-./:;<=>?[\\]^_`{|}~' # string.punctuation minus {'#' , '@'}
        for bio_txt in df['Bio'] :
            if(bio_txt is not np.nan) :
                # print(bio_txt)
                bio_txt = bio_txt.replace("\\n" , " " ).translate(str.maketrans("","",punctuations))
                # tokens = nltk.tokenize.word_tokenize(bio_txt)
                cleaned_tokens = []
                for token in bio_txt.split() :
                    if(token[0] in ['#' , '@']) :
                        continue
                    if(token[:4] == 'http') :
                        continue
                    cleaned_tokens.append(token)
                # print(cleaned_tokens)
                word_cloud_sentence += " ".join([x.lower() for x in cleaned_tokens])
        self.word_cloud_plot(word_cloud_sentence)

    def question_2(self, n = 10) :
        self.question_2a(n)
        # -------------- TODO
        # UIDrank_Upvotes = self.user_df[['Id' , 'Score']].sort_values(by=['Score'] , ascending = False )[:n]
        # top_n_upvotes_df = self.get_UserName_verified_details(UIDrank_Upvotes)
        # print(top_n_upvotes_df)
        # self.plot_bar_graph_MultiColor(top_n_upvotes_df , 'Upvotes score' , 'Score')
        top_n_Interactions_df = self.question_2b(n)
        self.question_2c(n)
        self.question_2d(top_n_Interactions_df)

    def get_top_n(self , n , pq) :
        top_n_X = []
        top_n_Y = []
        while not pq.empty() :
            if(len(top_n_X) >= n) :
                break
            word_freq = pq.get()
            top_n_X.append(word_freq[1])
            top_n_Y.append(-1 * word_freq[0])
        return top_n_X , top_n_Y

    def question_3a(self) :
        word_cloud_sentence = ""
        punctuations = '!"$%&\'()*+,-./:;<=>?[\\]^_`{|}~' # string.punctuation minus {'#' , '@'}
        total_words = {}
        for bio_txt in self.post_df['Body'] :
            if(bio_txt is not np.nan) :
                bio_txt = bio_txt.replace("\\n" , " " ).translate(str.maketrans("","",punctuations))
                # tokens = nltk.tokenize.word_tokenize(bio_txt)
                # cleaned_tokens = []
                txt_word = {}
                for token in bio_txt.split() :
                    token = token.lower()
                    if('#' in token or '@' in token or token[:4] == 'http') :
                        continue
                    if(token.isalpha()) :
                        if(token not in txt_word) :
                            txt_word[token] = 1
                        else :
                            txt_word[token] += 1
                        # cleaned_tokens.append(token)
                for word in txt_word :
                    if(word not in total_words) :
                        total_words[word] = txt_word[word]
                    else :
                        total_words[word] += txt_word[word]
                word_cloud_sentence += " ".join(txt_word.keys())
        self.word_cloud_plot(word_cloud_sentence)
        # global_token = nltk.tokenize.word_tokenize(word_cloud_sentence)
        # global_token = word_cloud_sentence.split()
        word_rank = PriorityQueue()
        for word in total_words :
            if(len(word) > 1 and word not in set(STOPWORDS)) :
                word_rank.put((-1 * total_words[word] , word))
        return word_rank

    def question_3b(self) :
        punctuations = '!"$%&\'()*+,-./:;<=>?[\\]^_`{|}~' # string.punctuation minus {'#' , '@'}
        hastags = {}
        for bio_txt in self.post_df['Body'] :
            if(bio_txt is not np.nan) :
                # print(bio_txt)
                bio_txt_list = bio_txt.replace("\\n" , " " ).translate(str.maketrans("","",punctuations)).split()
                # tokens = nltk.tokenize.word_tokenize(bio_txt)
                cleaned_tokens = []
                txt_hastag = {}
                for token in bio_txt_list :
                    token = token.lower()
                    if(token[:4] == 'http' or token[0] == '@') :
                        continue
                    if(token[0] in '#') :
                        # print(token)
                        if(token[1:] not in txt_hastag) :
                            txt_hastag[token[1:]] = 1
                        else :
                            txt_hastag[token[1:]] += 1
                    else :
                        cleaned_tokens.append(token)
                word_len = len(cleaned_tokens)
                cleaned_tokens = []
                for i in txt_hastag :
                    if(i not in hastags) :
                        hastags[i] = np.array([txt_hastag[i] , txt_hastag[i] * word_len]) # TODO : check this out
                    else :
                        hastags[i] += np.array([txt_hastag[i] , txt_hastag[i] * word_len]) # TODO : check this out
        # print(hastags)
        hastag_rank_freq = PriorityQueue()
        hastag_rank_post = PriorityQueue()
        for hastag in hastags :
            hastag_rank_freq.put((-1 * hastags[hastag][0] , hastag))
            hastag_rank_post.put((-1 * hastags[hastag][1] // hastags[hastag][0] , hastag))
        return hastag_rank_freq , hastag_rank_post

    def question_3(self, n = 10) :
        #------------------------------ Word cloud ------------------------------
        top_words = self.question_3a()
        #------------------------------ bar graph based on number of occurrences per word ------------------------------
        top_n_words , top_n_freq = self.get_top_n(n,top_words)
        self.plot_bar_graph(top_n_words , top_n_freq , "Words" , "Number of occurrences")
        #------------------------------ bar graph based on hashtags ------------------------------
        hastag_rank_freq , hastag_rank_post = self.question_3b()
        # ------------------------------- ranking top n hashtags based upon number of occurrences -------------------------------
        top_n_hastag , top_n_freq = self.get_top_n(n,hastag_rank_freq)
        self.plot_bar_graph(top_n_hastag , top_n_freq , "Hashtags" , "Number of occurrences")
        # ------------------------------- ranking top n hashtags based upon average post length -------------------------------
        top_n_hastag , top_n_avg_post_len = self.get_top_n(n , hastag_rank_post)
        self.plot_bar_graph(top_n_hastag , top_n_avg_post_len , "Hashtags" , "Average post length")

    def set_precision_args(self , label) :
        if(label == 'year') :
            return 10 ** 10 , '%Y'
        if(label == 'month') :
            return 10 ** 8 , '%Y%m'
        if(label == 'date') :
            return 10 ** 6 , '%Y%m%d'
        if(label == 'hour') :
            return 10 ** 4 , '%Y%m%d%H'
        if(label == 'minute') :
            return 10 ** 2 , '%Y%m%d%H%M'
        if(label == 'second') :
            return 1 , '%Y%m%d%H%M%S'

    def question_4a(self , precision_label = 'date') :
        if(precision_label not in ['year' , 'month' , 'date' , 'hour' , 'minute' , 'second']) :
            print('Not an adequate precision')
            return
        div , frmt = self.set_precision_args(precision_label)
        self.post_df['datetime'] = (pd.to_datetime(self.post_df['CreatedAt'] // div , format= frmt, errors='ignore'))
        df = self.post_df[['datetime' , 'Id']].groupby(['datetime']).agg(['count'])
        df = df.reset_index()
        df.columns = ['datetime' , 'new_post_count']
        fig = px.line(df, x='datetime', y='new_post_count')
        fig.show()

    def question_4b(self , precision_label = 'date') :
        if(precision_label not in ['year' , 'month' , 'date' , 'hour' , 'minute' , 'second']) :
            print('Not an adequate precision')
            return
        div , frmt = self.set_precision_args(precision_label)
        self.user_df['datetime'] = pd.to_datetime(self.user_df['Joined'] // div , format= frmt, errors='ignore')
        df = self.user_df[['datetime' , 'Id']].groupby(['datetime']).agg(['count'])
        df = df.reset_index()
        df.columns = ['datetime' , 'new_user_count']
        fig = px.line(df, x='datetime', y='new_user_count')
        fig.show()

    def question_4(self , precision_label = 'date') :
        self.question_4a(precision_label)
        self.question_4b(precision_label)
preprocessor = preprocessing(working_directory)
```
# Question 1
```
preprocessor.question_1()
```
# Question 2
If you want to run the whole question 2 in one go, use<br>
```
preprocessor.question_2()
```
However, each cell below solves one sub-part of the question separately.<br>
The value of n is hard-coded as 10; you can change it to a different value.
```
preprocessor.question_2a(10)
top_n_Interactions_df = preprocessor.question_2b(10)
preprocessor.question_2c(10)
preprocessor.question_2d(top_n_Interactions_df)
```
# Question 3
If you want to run the whole question 3 in one go, use<br>
```
preprocessor.question_3()
```
However, each cell below solves one sub-part of the question separately.<br>
The value of n is hard-coded as 10; you can change it to a different value.
Assumptions:<br>
• Both words and hashtags are lowercased.<br>
• Emoticons (i.e., tokens of length 1) are ignored.
```
top_words = preprocessor.question_3a()
top_n_words , top_n_freq = preprocessor.get_top_n(10,top_words)
preprocessor.plot_bar_graph(top_n_words , top_n_freq , "Words" , "Number of occurrences")
hastag_rank_freq , hastag_rank_post = preprocessor.question_3b()
top_n_hastag , top_n_freq = preprocessor.get_top_n(10 , hastag_rank_freq)
preprocessor.plot_bar_graph(top_n_hastag , top_n_freq , "Hashtags" , "Number of occurrences")
top_n_hastag , top_n_avg_post_len = preprocessor.get_top_n(10 , hastag_rank_post)
preprocessor.plot_bar_graph(top_n_hastag , top_n_avg_post_len , "Hashtags" , "Average post length")
```
# Question 4
If you want to run the whole question 4 in one go, use<br>
```
preprocessor.question_4()
```
However, each cell below solves one sub-part of the question separately.<br>
The precision is hard-coded as 'date'; you can change it to a different precision value.
Here precision_label = 'date' means the counts are aggregated per date, i.e., the graph shows the number of posts created (or users who joined) on each date.
For different precision levels, use the code below:
```
preprocessor.question_4('year')
preprocessor.question_4('month')
preprocessor.question_4('date') # default case
preprocessor.question_4('hour')
preprocessor.question_4('minute')
preprocessor.question_4('second')
```
```
preprocessor.question_4a()
preprocessor.question_4b()
```
```
!rm -Rf HMP_Dataset
!git clone https://github.com/wchill/HMP_Dataset
#!ls HMP_Dataset/Brush_teeth
import os
#get list of folders/files in folder HMP_Dataset
file_list = os.listdir('HMP_Dataset')
#filter list for folders containing data
file_list_filtered = [s for s in file_list if '_' in s]
import pandas as pd
#create pandas data frame for all the data
data_frames = []
for category in file_list_filtered:
    data_files = os.listdir('HMP_Dataset/'+category)
    #create a temporary pandas data frame for each data file
    for data_file in data_files:
        print(data_file)
        temp_df = pd.read_csv('HMP_Dataset/'+category+'/'+data_file, header=None, names=['x','y','z'], sep=' ')
        #getting number of records by checking length of 1st column
        length = len(temp_df.iloc[:,0])
        #create a column called "source" storing the current CSV file
        temp_df['source'] = pd.Series([data_file]*length, index=temp_df.index)
        #create a column called "class" storing the current data folder
        temp_df['class'] = pd.Series([category]*length, index=temp_df.index)
        #append to existing data frame list
        data_frames = data_frames + [temp_df]
#create big dataframe from all small ones
df = pd.concat(data_frames)
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit(df['class'].values)  # .as_matrix() was removed in newer pandas; .values is equivalent
encoded_classes = le.transform(df['class'].values)
encoded_classes
df_encoded = df.join(pd.DataFrame(encoded_classes,columns=['class_factorized']))
df_encoded
from sklearn.preprocessing import OneHotEncoder
ohe = OneHotEncoder()
one_hot_encoded_classes = ohe.fit_transform(encoded_classes.reshape(-1, 1)).toarray()
one_hot_encoded_classes
display(pd.DataFrame(one_hot_encoded_classes))
df_one_hot = pd.DataFrame(one_hot_encoded_classes,dtype=int)
df_one_hot
#df = pd.concat([df, dfOneHot], axis=1)
df_final = df_encoded.join(df_one_hot)
from sklearn import preprocessing
from pandas import Series
x_scaled = preprocessing.scale(df_encoded['x'])
y_scaled = preprocessing.scale(df_encoded['y'])
z_scaled = preprocessing.scale(df_encoded['z'])
df_encoded['x_scaled'] = Series(x_scaled, index=df_encoded.index)
df_encoded['y_scaled'] = Series(y_scaled, index=df_encoded.index)
df_encoded['z_scaled'] = Series(z_scaled, index=df_encoded.index)
df_encoded
from sklearn_pandas import DataFrameMapper
from sklearn.preprocessing import StandardScaler
mapper = DataFrameMapper([
(['x','y','z'], StandardScaler())
])
df_scaled = mapper.fit_transform(df)
df_scaled
!pip install ibex
from ibex.sklearn.preprocessing import StandardScaler
from ibex.sklearn.preprocessing import LabelEncoder
from ibex.sklearn.preprocessing import OneHotEncoder
from ibex import trans
pipeline = (trans(LabelEncoder(), in_cols='class') +
trans(StandardScaler(), in_cols=['x', 'y', 'z']) +
trans(OneHotEncoder(), in_cols=['functiontransformer_0'][0]) +
trans(None, in_cols='source')
)
df_scaled = pipeline.fit_transform(df)
df_scaled
%matplotlib inline
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(df_encoded['x'], df_encoded['y'], df_encoded['z'], c=df_encoded['class_factorized'])
ax.view_init(30, 130)
plt.show()
#for angle in range(0, 360):
# ax.view_init(30, angle)
# plt.draw()
# plt.pause(.001)
%matplotlib inline
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
df_encoded_filtered1 = df_encoded[df_encoded['class']=='Standup_chair']
df_encoded_filtered2 = df_encoded[df_encoded['class']=='Pour_water']
df_encoded_filtered = pd.concat([df_encoded_filtered1, df_encoded_filtered2])
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(df_encoded_filtered['x'], df_encoded_filtered['y'], df_encoded_filtered['z'], c=df_encoded_filtered['class_factorized'])
ax.view_init(30, 130)
plt.show()
```
## Divide and conquer
This is an algorithm design method based on *subdividing* the problem into sub-problems, solving them *recursively*, and then *combining* the solutions of the sub-problems to build the solution of the original problem. The sub-problems must have the same structure as the original problem, so that recursion can be applied.
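As a minimal sketch of the three phases (subdivide, solve recursively, combine), consider the following illustrative example, which computes the sum of a list by divide and conquer; the function `dc_sum` is hypothetical and not part of the original notebook:

```python
def dc_sum(a, lo, hi):
    """Sum of a[lo:hi], computed by divide and conquer."""
    if hi - lo == 1:                 # base case: a single element
        return a[lo]
    mid = (lo + hi) // 2             # divide
    left = dc_sum(a, lo, mid)        # solve the left sub-problem recursively
    right = dc_sum(a, mid, hi)       # solve the right sub-problem recursively
    return left + right              # combine the two partial solutions

print(dc_sum([4, -3, 5, -2, -1, 2, 6, -2], 0, 8))  # → 9
```

Each sub-problem (a half of the list) has the same structure as the original, which is exactly what makes the recursion possible.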
## Example: Finding the maximum-sum subsequence
Given a sequence of integers $$a_{1}, a_{2}, \ldots, a_{n}$$
find and identify the maximum value of the sum of a contiguous portion of the sequence.
When all the integers are negative, we take the maximum-sum subsequence to be the empty one, whose sum is zero.
Examples:
1. { -2, 11, -4, 13, -5, 2}
2. {1, -3, 4, -2, -1, 6}
If we examine the problem intuitively, for instance with the help of diagrams, the maximum-sum subsequences for 1 and 2 are marked in bold:
1. { -2, **11, -4, 13**, -5, 2}
whose sum is 20.
2. {1, -3, **4, -2, -1, 6**}
whose sum is 7.
This intuitive solution uses on the order of $n^2$ comparisons: $\Theta(n^2)$.
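The intuitive quadratic approach can be sketched as follows (the function name is illustrative): it examines every contiguous subsequence by accumulating running sums, performing $\Theta(n^2)$ work.

```python
def max_subseq_sum_bruteforce(a):
    """Maximum sum over all contiguous subsequences; the empty one (sum 0) is allowed."""
    best = 0                       # sum of the empty subsequence
    for i in range(len(a)):
        s = 0
        for j in range(i, len(a)):
            s += a[j]              # s is now the sum of a[i..j]
            best = max(best, s)
    return best

print(max_subseq_sum_bruteforce([-2, 11, -4, 13, -5, 2]))  # → 20
print(max_subseq_sum_bruteforce([1, -3, 4, -2, -1, 6]))    # → 7
```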
## Applying divide and conquer
Suppose now that the given sequence is {4, -3, 5, -2, -1, 2, 6, -2}. We divide this sequence into two equal halves, as shown in the figure below.

Then the maximum-sum subsequence can appear in one of three forms:
* *Case 1*: it is entirely contained in the first half.
* *Case 2*: it is entirely contained in the second half.
* *Case 3*: it begins in the first half but ends in the second.
The figure above shows that we can compute, for each element of the first half, the sum of the contiguous subsequence that ends at the rightmost element of that half. We do this with a right-to-left pass, starting from the element next to the dividing line. Analogously, we can compute the sum of every contiguous subsequence that begins with the first element of the second half. These two subsequences can then be combined to form the maximum-sum subsequence that crosses the dividing line. In the example of the figure, the resulting sequence runs from the first element of the first half to the next-to-last element of the second half, and the total is the sum of the two partial sums, 4 + 7 = 11. This shows that case 3 can be solved in linear time.
For both case 1 and case 2 we have the same problem as the original, but on a sequence of size $n/2$ — that is, still $\Theta(n^2)$ if solved with the intuitive method (recall that constant coefficients are ignored).
However, we can apply the same halving strategy to cases 1 and 2, dividing until it is impossible to divide further; concretely, this amounts to solving cases 1 and 2 recursively. It can be shown that this reduces the running time below quadratic — to $\Theta(n \log n)$, by the recurrence $T(n) = 2T(n/2) + \Theta(n)$ — since the savings accumulate throughout the execution of the algorithm. An outline of the algorithm follows:
1. Recursively compute the maximum-sum subsequence entirely contained in the first half.
2. Recursively compute the maximum-sum subsequence entirely contained in the second half.
3. Compute, using two consecutive loops, the maximum-sum subsequence that begins in the first half but ends in the second.
4. Choose the largest of the three sums.
The resulting method appears below. A recursive algorithm requires us to define a base case; naturally, when the input is a single element, we do not use recursion.
The recursive call receives the input array together with the left and right bounds, which delimit the portion of the array being operated on. A one-line driver routine initializes the bound parameters to 0 and N - 1.
```
def max3(a, b, c):
    aux = []
    aux.append(a)
    aux.append(b)
    aux.append(c)
    best = float('-inf')  # avoid shadowing the built-in max
    for e in aux:
        if e > best:
            best = e
    return best
def maxSumaRec(a, izq, der): # Any sub-problem
    # a: [][ ][ ] | [][][]   (n=6)
    #    izq          der
    #
    # Initial sub-problem
    # a: [ ][ ][ ] | [][][ ]
    # izq=0, centro=2, der=5
    maxSumIzqBorde = 0; maxSumDerBorde = 0
    sumIzqBorde = 0; sumDerBorde = 0
    centro = int((izq+der)/2)
    # BASE CASE:
    if izq == der:
        if a[izq] > 0:
            return a[izq]
        else:
            return 0
    # Recursively find the maximum sum in the left half
    maxSumIzq = maxSumaRec(a, izq, centro)
    # Recursively find the maximum sum in the right half
    maxSumDer = maxSumaRec(a, centro+1, der)
    for i in range(centro, izq-1, -1): # initially 3, 2, 1, 0 for the example
        sumIzqBorde += a[i]
        if sumIzqBorde > maxSumIzqBorde:
            maxSumIzqBorde = sumIzqBorde
    for j in range(centro+1, der+1): # initially 4, 5, 6, 7
        sumDerBorde += a[j]
        if sumDerBorde > maxSumDerBorde:
            maxSumDerBorde = sumDerBorde
    return max3(maxSumIzq, maxSumDer, maxSumIzqBorde+maxSumDerBorde)
a = [4,-3,5,-2,-1,2,6,-2]
print(maxSumaRec(a,0,len(a)-1))
b = [-2, 11, -4, 13, -5, 2]
print(maxSumaRec(b,0,len(b)-1))
c = [1, -3, 4, -2, -1, 6]
print(maxSumaRec(c,0,len(c)-1))
```
### Divide-and-conquer exercise
Given two complex numbers
$$
\begin{align}
u&=a+bi\\
v&=c+di
\end{align}
$$
we can compute their product
$$
uv=(ac-bd)+(ad+bc)i
$$
using 4 multiplications of real numbers.
Find a way to perform this computation using only 3 multiplications of real numbers.
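For reference, one known answer (Gauss's trick, the same idea behind Karatsuba multiplication) is sketched below; the function name is illustrative. It trades the fourth multiplication for a few extra additions and subtractions:

```python
def complex_product_3mult(a, b, c, d):
    """Compute (a+bi)(c+di) using only 3 real multiplications."""
    t1 = a * c
    t2 = b * d
    t3 = (a + b) * (c + d)     # = ac + ad + bc + bd
    real = t1 - t2             # ac - bd
    imag = t3 - t1 - t2        # ad + bc
    return real, imag

print(complex_product_3mult(1, 2, 3, 4))  # → (-5, 10), i.e. (1+2i)(3+4i) = -5+10i
```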
## References
1. Weiss, M. A., & Marroquín, O. (2000). Estructuras de datos en Java. Addison-Wesley.
2. Lecture notes by Patricio Poblete (U. de Chile), available at: https://github.com/ivansipiran/AED-Apuntes#Algoritmos-y-Estructuras-de-Datos (accessed May 2021)
# Sequence classification model for IMDB Sentiment Analysis
(c) Deniz Yuret, 2019
* Objectives: Learn the structure of the IMDB dataset and train a simple RNN model.
* Prerequisites: [RNN models](60.rnn.ipynb)
```
# Set display width, load packages, import symbols
ENV["COLUMNS"] = 72
using Statistics: mean
using IterTools: ncycle
using Knet: Knet, AutoGrad, RNN, param, dropout, minibatch, nll, accuracy, progress!, adam, save, load, gc
# Set constants for the model and training
EPOCHS=3 # Number of training epochs
BATCHSIZE=64 # Number of instances in a minibatch
EMBEDSIZE=125 # Word embedding size
NUMHIDDEN=100 # Hidden layer size
MAXLEN=150 # maximum size of the word sequence, pad shorter sequences, truncate longer ones
VOCABSIZE=30000 # maximum vocabulary size, keep the most frequent 30K, map the rest to UNK token
NUMCLASS=2 # number of output classes
DROPOUT=0.5 # Dropout rate
LR=0.001 # Learning rate
BETA_1=0.9 # Adam optimization parameter
BETA_2=0.999 # Adam optimization parameter
EPS=1e-08 # Adam optimization parameter
```
## Load and view data
```
include(Knet.dir("data","imdb.jl")) # defines imdb loader
@doc imdb
@time (xtrn,ytrn,xtst,ytst,imdbdict)=imdb(maxlen=MAXLEN,maxval=VOCABSIZE);
println.(summary.((xtrn,ytrn,xtst,ytst,imdbdict)));
# Words are encoded with integers
rand(xtrn)'
# Each word sequence is padded or truncated to length 150
length.(xtrn)'
# Define a function that can print the actual words:
imdbvocab = Array{String}(undef,length(imdbdict))
for (k,v) in imdbdict; imdbvocab[v]=k; end
imdbvocab[VOCABSIZE-2:VOCABSIZE] = ["<unk>","<s>","<pad>"]
function reviewstring(x,y=0)
    x = x[x.!=VOCABSIZE] # remove pads
    """$(("Sample","Negative","Positive")[y+1]) review:\n$(join(imdbvocab[x]," "))"""
end
# Hit Ctrl-Enter to see random reviews:
r = rand(1:length(xtrn))
println(reviewstring(xtrn[r],ytrn[r]))
# Here are the labels: 1=negative, 2=positive
ytrn'
```
## Define the model
```
struct SequenceClassifier; input; rnn; output; pdrop; end
SequenceClassifier(input::Int, embed::Int, hidden::Int, output::Int; pdrop=0) =
SequenceClassifier(param(embed,input), RNN(embed,hidden,rnnType=:gru), param(output,hidden), pdrop)
function (sc::SequenceClassifier)(input)
    embed = sc.input[:, permutedims(hcat(input...))]
    embed = dropout(embed,sc.pdrop)
    hidden = sc.rnn(embed)
    hidden = dropout(hidden,sc.pdrop)
    return sc.output * hidden[:,:,end]
end
(sc::SequenceClassifier)(input,output) = nll(sc(input),output)
```
## Experiment
```
dtrn = minibatch(xtrn,ytrn,BATCHSIZE;shuffle=true)
dtst = minibatch(xtst,ytst,BATCHSIZE)
length.((dtrn,dtst))
# For running experiments
function trainresults(file,maker; o...)
    if (print("Train from scratch? "); readline()[1]=='y')
        model = maker()
        progress!(adam(model,ncycle(dtrn,EPOCHS);lr=LR,beta1=BETA_1,beta2=BETA_2,eps=EPS))
        Knet.save(file,"model",model)
        GC.gc(true) # To save gpu memory
    else
        isfile(file) || download("http://people.csail.mit.edu/deniz/models/tutorial/$file",file)
        model = Knet.load(file,"model")
    end
    return model
end
maker() = SequenceClassifier(VOCABSIZE,EMBEDSIZE,NUMHIDDEN,NUMCLASS,pdrop=DROPOUT)
# model = maker()
# nll(model,dtrn), nll(model,dtst), accuracy(model,dtrn), accuracy(model,dtst)
# (0.69312066f0, 0.69312423f0, 0.5135817307692307, 0.5096153846153846)
model = trainresults("imdbmodel132.jld2",maker);
# ┣████████████████████┫ [100.00%, 1170/1170, 00:15/00:15, 76.09i/s]
# nll(model,dtrn), nll(model,dtst), accuracy(model,dtrn), accuracy(model,dtst)
# (0.05217469f0, 0.3827392f0, 0.9865785256410257, 0.8576121794871795)
```
## Playground
```
predictstring(x)="\nPrediction: " * ("Negative","Positive")[argmax(Array(vec(model([x]))))]
UNK = VOCABSIZE-2
str2ids(s::String)=[(i=get(imdbdict,w,UNK); i>=UNK ? UNK : i) for w in split(lowercase(s))]
# Here we can see predictions for random reviews from the test set; hit Ctrl-Enter to sample:
r = rand(1:length(xtst))
println(reviewstring(xtst[r],ytst[r]))
println(predictstring(xtst[r]))
# Here the user can enter their own reviews and classify them:
println(predictstring(str2ids(readline(stdin))))
```
# Probability Mass Function of the Sum (and Difference) of Independent Discrete Random Variables
______________________
2017 Pedro Cruz up200506513@fc.up.pt
```
import numpy as np
import matplotlib.pyplot as plt
import time
plt.style.use('ggplot')
np.random.seed(1)
```
## An Example with Two Discrete RVs
### Method 1
Let $X_1$ and $X_2$ be independent discrete random variables (VADs, the Portuguese acronym kept throughout the code) with sample spaces $S_{X_1}=\left\{0,1,3,4,8,11\right\}$ and $S_{X_2}=\left\{1,3,7,9\right\}$, respectively. The absolute frequencies recorded for each elementary event of each VAD are, in the same order, $f_{X_1} = \left\{15, 28, 48, 14, 3, 7\right\}$ and $f_{X_2} = \left\{12, 8, 28, 34\right\}$. The probability mass function of each VAD is thus given by:
```
# Random variable X1
S_X1 = np.array([0., 1., 3., 4., 8., 11.]) # Sample space of X1
fr_X1 = np.array([15., 28., 48., 14., 3., 7.]) # Absolute frequencies of X1
P_X1 = fr_X1 / fr_X1.sum() # Relative frequencies of X1
# Random variable X2
S_X2 = np.array([1., 3., 7., 9.]) # Sample space of X2
fr_X2 = np.array([12., 8., 28., 34.]) # Absolute frequencies of X2
P_X2 = fr_X2 / fr_X2.sum() # Relative frequencies of X2
print("P_X1 = ", P_X1)
print("P_X2 = ", P_X2)
```
To compute the joint probability function $F\left(\mathbf{W}\right)$ of the bivariate VAD $\mathbf{W}=(X_1=x, X_2=y)$ we can do:
```
F_W = np.multiply.outer(P_X1, P_X2)
print("F_W = ", F_W, "\n")
print("F_X1(x) = P_X1 = ", F_W.sum(axis=1))
print("F_X2(y) = P_X2 = ", F_W.sum(axis=0))
print("F_(X1=3, X2=9) = ", F_W[2,3])
```
That is,
\begin{array}{rr} \hline
& X_{2}=1 & X_{2}=3 & X_{2}=7 & X_{2}=9 & & F_{X_{1}}\left(x\right) \\ \hline
X_{1}=0 & 0.01908802 & 0.01272534 & 0.04453871 & 0.05408271 & & 0.13043478 \\ \hline
X_{1}=1 & 0.03563097 & 0.02375398 & 0.08313892 & 0.1009544 & & 0.24347826 \\ \hline
X_{1}=3 & 0.06108165 & 0.0407211 & 0.14252386 & 0.17306469 & & 0.4173913 \\ \hline
X_{1}=4 & 0.01781548 & 0.01187699 & 0.04156946 & 0.0504772 & & 0.12173913 \\ \hline
X_{1}=8 & 0.0038176 & 0.00254507 & 0.00890774 & 0.01081654 & & 0.02608696 \\ \hline
X_{1}=11 & 0.00890774 & 0.00593849 & 0.02078473 & 0.0252386 & & 0.06086957 \\ \hline
& & & & & & \\ \hline
F_{X_{2}}\left(y\right) & 0.14634146 & 0.09756098 & 0.34146341 & 0.41463415 & \\ \hline
\end{array}
If to each ordered pair $\mathbf{W}=(X_1=x, X_2=y)$ we associate the sum $x+y$, the sums for each pair are given by:
```
somas = np.add.outer(S_X1, S_X2)
print(somas)
```
Therefore, defining a new variable $Z = X_1 + X_2$, its sample space is $S_Z= \left\{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,17,18,20\right\}$ and the associated probability function, in the same order, is given by:
```
# PMF of Z - sample space in the *keys* and probabilities in the *values*
P_Z = {}
for soma, prob in zip(somas.ravel(), F_W.ravel()):
if soma in P_Z:
P_Z[soma] += prob
else:
P_Z[soma] = prob
S_Z = list(P_Z.keys())
P_Z = list(P_Z.values())
print("S_Z = ", S_Z, "\n")
print("P_Z = ", P_Z)
```
### Method 2
We can obtain $P_Z$ more directly by convolving $P_{X_1}$ with $P_{X_2}$, taking care to supply the probability vector of each VAD $X_i$ with all values defined on a regularly spaced grid with step $\delta$ over $\left[\min\left(S_{X_i}\right), \max\left(S_{X_i}\right)\right]$:
$$P\left(Z=z\right)=\sum_{k=-\infty}^{\infty}P_{X_{1}}\left(X_{1}=k\cdot \delta\right)P_{X_{2}}\left(X_{2}=z-k\cdot\delta\right)$$
```
# Sample spaces of X1, X2, and Z=X1+X2
S_X1b = np.array([0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11.])
S_X2b = np.array([1., 2., 3., 4., 5., 6., 7., 8., 9.])
S_Zb = np.arange(S_X1b.min() + S_X2b.min(), S_X1b.max() + S_X2b.max() + 1)
# Probability mass functions
P_X1b = np.array([0.13043478, 0.24347826, 0., 0.4173913, 0.12173913, 0., 0., 0., 0.02608696, 0., 0., 0.06086957])
P_X2b = np.array([0.14634146, 0., 0.09756098, 0., 0., 0., 0.34146341, 0., 0.41463415])
P_Zb = np.convolve(P_X1b, P_X2b, mode="full")
# Column 0: value of Z ; Column 1: corresponding probability
par_Z_PZ = np.vstack([S_Zb, P_Zb]).T
print(par_Z_PZ)
```
## General Cases
The module **somaVADs.py** contains the functions presented here. We begin with **somaVADs_biv** and **somaVADs_conv**, which generalize methods 1 and 2, respectively, to the sum of $n$ independent VADs. Note that **somaVADs_biv** is the only available function that does not require the input VADs to be defined on regular grids; the other methods not only require this, but also require the spacing $\delta$ to be the same across all the VADs being summed.
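For intuition, here is a hedged sketch of how method 1 can extend to $n$ variables by folding the sum two at a time (an illustration only, not the module's actual `somaVADs_biv` code): combine two PMFs via outer products, merge probabilities of equal sums, and repeat.

```python
import numpy as np

def pmf_sum_pairwise(vads):
    """PMF of the sum of independent discrete RVs, folded two at a time
    (the bivariate outer-product construction of method 1)."""
    vals, probs = vads[0]
    for v2, p2 in vads[1:]:
        sums = np.add.outer(vals, v2).ravel()         # every pairwise sum x + y
        joint = np.multiply.outer(probs, p2).ravel()  # independence => product of PMFs
        acc = {}
        for s, pr in zip(sums, joint):                # merge probabilities of equal sums
            acc[s] = acc.get(s, 0.0) + pr
        vals = np.array(sorted(acc))
        probs = np.array([acc[s] for s in vals])
    return vals, probs

# two fair coins on {0, 1}: their sum is Binomial(2, 0.5)
coin = (np.array([0.0, 1.0]), np.array([0.5, 0.5]))
v, p = pmf_sum_pairwise([coin, coin])
```

Each fold costs $O(|S_A|\cdot|S_B|)$ operations, which is why the convolution- and FFT-based routines are preferable when many variables are summed; on the other hand, no regular grid is needed here.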
```
from fmp_soma_vadis import somaVADs_biv, somaVADs_conv
```
### Example 1 - Sum of two VADs
We reuse the same VADs introduced above to illustrate the methods and confirm that they reproduce the earlier results.
```
# Build the (X_i, PMF_i) elements for the input list
VAD1 = np.vstack([S_X1, P_X1]).T
VAD2 = np.vstack([S_X2, P_X2]).T
print("Sum VAD:\n", somaVADs_biv([VAD1,VAD2]))
```
When using **somaVADs_conv**, the probability mass function must be supplied with all values defined on a regular grid with spacing $\delta$ over $\left[\min\left(S_{X_i}\right), \max\left(S_{X_i}\right)\right]$. We reuse the variables defined above when method 2 was introduced.
```
# Build the (X_i, PMF_i) elements for the input list
VAD1b = np.array([S_X1b, P_X1b]).T
VAD2b = np.array([S_X2b, P_X2b]).T
print("Sum VAD:\n", somaVADs_conv([VAD1b, VAD2b], delta=1))
```
### Example 2 - Difference of two VADs
To compute the difference of VADs it suffices to feed the sum function the VAD to be subtracted in a suitably transformed form. With **somaVADs_biv**, it is enough to negate the sample-space column of the VAD being subtracted; with **somaVADs_conv**, besides negating that column, the rows of the array must be reordered so that they remain in increasing order. For example, using the same two VADs from *Example 1* to compute the PMF of $Y=X_1-X_2$ we do
```
# Build the (X_i, PMF_i) elements for the input list of *somaVADs_biv*
VAD1c = np.vstack([S_X1, P_X1]).T
VAD2c = np.vstack([-S_X2, P_X2]).T # Note the sign
# Build the (X_i, PMF_i) elements for the input list of *somaVADs_conv*
VAD1d = VAD1b
VAD2d = np.flipud(VAD2b.copy()) # Vertical flip
VAD2d[:,0] = - VAD2d[:,0] # Negate the values in column 0
Y_res1 = somaVADs_biv([VAD1c, VAD2c])
Y_res2 = somaVADs_conv([VAD1d, VAD2d], delta=1)
# Step not needed in general - used here only to drop the rows where the PMF is zero,
# leaving the array in the form returned by *somaVADs_biv* so that the results of the
# two methods can be compared.
Y_res2 = Y_res2[Y_res2[:,1] != 0.]
print("Same result with both methods? ", np.allclose(Y_res1, Y_res2))
```
### Example 3 - Sum of several VADs (with *somaVADs_conv*)
Example with 3 VADs, checking the commutativity of the sum of VADs and of the convolution.
```
print("\nSum of three VADs:\n", somaVADs_conv([VAD1b, VAD2b, VAD1b]))
print("\nCommutativity:\n", np.allclose(somaVADs_conv([VAD1b, VAD2b, VAD1b]), somaVADs_conv([VAD1b, VAD1b, VAD2b])))
```
### Example 4 (with *somaVADs_conv*)
We sum 3 Gaussian VADs $X_{10}$, $X_{11}$ and $X_{12}$ with means $0,10,-5$ and standard deviations $13,11,9$, respectively, and confirm the Gaussian shape of the resulting VAD $Z=X_{10}+X_{11}+X_{12}$. The PMF of each VAD is generated as a normalized histogram with 119 bins over 100,000 samples. These 3 PMFs are then used to obtain the PMF of the sum of the VADs.
```
namostras = 100000
bins_edges = np.arange(-60, 60, 1)
# VAD 10
mu_X10, sigma_X10 = 0, 13 # mean and standard deviation
X10 = np.random.normal(mu_X10, sigma_X10, namostras)
hist_X10, edges_X10 = np.histogram(X10, bins=bins_edges, density=True)
S_X10 = (edges_X10[:-1] + edges_X10[1:])/2. # use the center of each bin as the abscissa
# VAD 11
mu_X11, sigma_X11 = 10, 11 # mean and standard deviation
X11 = np.random.normal(mu_X11, sigma_X11, namostras)
hist_X11, edges_X11 = np.histogram(X11, bins=bins_edges, density=True)
S_X11 = (edges_X11[:-1] + edges_X11[1:])/2.
# VAD 12
mu_X12, sigma_X12 = -5, 9 # mean and standard deviation
X12 = np.random.normal(mu_X12, sigma_X12, namostras)
hist_X12, edges_X12 = np.histogram(X12, bins=bins_edges, density=True)
S_X12 = (edges_X12[:-1] + edges_X12[1:])/2.
# Build the (X_i, PMF_i) elements for the input list
VAD10 = np.array([S_X10, hist_X10]).T
VAD11 = np.array([S_X11, hist_X11]).T
VAD12 = np.array([S_X12, hist_X12]).T
t1 = time.time()
Z_PZ = somaVADs_conv([VAD10, VAD11, VAD12]) # Sum of the VADs
t2 = time.time()
print("Sum execution time: %.5f s\n" % (t2-t1))
# Plot
fig1 = plt.figure(figsize=(14,7))
ax1 = fig1.add_subplot(111)
ax1.plot(VAD10[:,0], VAD10[:,1], label='PMF-X10')
ax1.plot(VAD11[:,0], VAD11[:,1], label='PMF-X11')
ax1.plot(VAD12[:,0], VAD12[:,1], label='PMF-X12')
ax1.plot(Z_PZ[:,0], Z_PZ[:,1], '--', label='PMF-Z by convolution')
ax1.legend()
plt.show()
```
The above used the individual **probability mass functions** to obtain the PMF of the sum of the VADs; but when we have access to the samples themselves (as in this example) we can also **sum them directly** (with **random** pairing) to get a set of samples of the sum VAD, and then obtain its histogram (and PMF).
```
# Each of the variables X10, X11, X12 is a vector of 100000 samples drawn at random
# from the respective probability functions, so random samples of Z are obtained
# by summing the elements of these vectors (in the same order).
Z_amostras = X10 + X11 + X12
print("Mean(X10)+Mean(X11)+Mean(X12) = ", X10.mean()+X11.mean()+X12.mean())
print("Mean(Z_amostras) = ", Z_amostras.mean())
print("\nVar(X10)+Var(X11)+Var(X12) = ", X10.var()+X11.var()+X12.var())
print("Var(Z_amostras) = ", Z_amostras.var())
# Histogram of the Z_amostras samples
hist_Z_amostras, edges_Z_amostras = np.histogram(Z_amostras, bins=bins_edges, density=True)
S_Z_amostras = (edges_Z_amostras[:-1] + edges_Z_amostras[1:])/2.
# Plot
fig2 = plt.figure(figsize=(14,7))
ax2 = fig2.add_subplot(111)
ax2.plot(S_Z_amostras, hist_Z_amostras, label='PMF by summing the samples')
ax2.plot(Z_PZ[:,0], Z_PZ[:,1], '--', label='PMF-Z by convolution')
ax2.legend()
plt.show()
```
Obtaining the PMF this way is much simpler and more direct, but it depends on the sampling technique and on the random seed used.
The plot below merely illustrates that if we sort the sample vectors of each VAD before summing them, we obtain a different PMF for Z; in other words, care must be taken to ensure that the pairing is random.
```
# Sum of the random-sample vectors after sorting them
Z_amostras_b = np.sort(X10) + np.sort(X11) + np.sort(X12)
# Histogram of the Z_amostras_b samples
hist_Z_amostras_b, edges_Z_amostras_b = np.histogram(Z_amostras_b, bins=bins_edges, density=True)
S_Z_amostras_b = (edges_Z_amostras_b[:-1] + edges_Z_amostras_b[1:])/2.
# Plot
fig3 = plt.figure(figsize=(14,7))
ax3 = fig3.add_subplot(111)
ax3.plot(S_Z_amostras_b, hist_Z_amostras_b, label='PMF-Z by summing the SORTED samples')
ax3.plot(Z_PZ[:,0], Z_PZ[:,1], '--', label='PMF-Z by convolution')
ax3.legend()
plt.show()
```
## Optimized Scripts
The module **somaVADs.py** also provides two other methods for obtaining the PMF of a sum of VADs from the individual PMFs: **somaVADs_FFT** and **somaVADs_hib**. These scripts aim to optimize method 2 and to simplify the input of several identical PMFs (each VAD to be summed is entered only once, followed by the number $n$ of times it is to be summed).
- **somaVADs_FFT** performs all the required convolutions in the frequency domain, using the FFT;
- **somaVADs_hib** is a hybrid method: it first convolves all identical PMFs by moving to the frequency domain (FFT), raising the transform to the power $n$ and taking the inverse Fourier transform; it then proceeds with plain convolution (method 2) over all the distinct PMFs obtained this way.
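To make the frequency-domain idea concrete, here is a minimal numpy sketch (an illustration only, not the module's actual `somaVADs_FFT`/`somaVADs_hib` code): for a PMF on a regular grid, the $n$-fold convolution is the inverse DFT of the $n$-th power of its DFT.

```python
import numpy as np

def pmf_sum_fft(pmf, n):
    """PMF of the sum of n i.i.d. copies of `pmf` (regular grid assumed),
    computed by raising the DFT of the PMF to the n-th power."""
    out_len = n * (len(pmf) - 1) + 1        # support length of the n-fold convolution
    F = np.fft.rfft(pmf, out_len)           # zero-pad so circular == linear convolution
    z = np.fft.irfft(F ** n, out_len)
    return np.clip(z, 0.0, None)            # clip tiny negative FFT round-off

die = np.full(6, 1/6)                       # fair die on {1,...,6}
z = pmf_sum_fft(die, 2)                     # sum of two dice: 11 outcomes, k/36 weights
print(np.round(z, 4))
```

Raising the transform to the $n$-th power replaces $n-1$ explicit convolutions with a single forward/inverse transform pair, which is the speed-up the hybrid routine exploits for repeated summands.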
```
from fmp_soma_vadis import somaVADs_FFT, somaVADs_hib
```
### Example 5
The sums performed in *Example 4* are solved again with *somaVADs_biv*, *somaVADs_conv*, *somaVADs_FFT* and *somaVADs_hib* to confirm that all results match.
```
# PMFs from Example 4
res_ex2_biv = somaVADs_biv([VAD10, VAD11, VAD12])
res_ex2_conv = somaVADs_conv([VAD10, VAD11, VAD12])
res_ex2_FFT = somaVADs_FFT([(VAD10,1), (VAD11,1), (VAD12,1)])
res_ex2_hib = somaVADs_hib([(VAD10,1), (VAD11,1), (VAD12,1)])
eq1 = np.allclose(res_ex2_biv, res_ex2_conv)
eq2 = np.allclose(res_ex2_biv, res_ex2_FFT)
eq3 = np.allclose(res_ex2_biv, res_ex2_hib)
print("All equal to the Example 4 results? ", all([eq1, eq2, eq3]))
```
### Example 6 - Execution-time tests for each routine
**Test 1**
```
# We use the VADs defined in Example 4: VAD10, VAD11 and VAD12
t1 = time.time()
resultado_1 = somaVADs_biv([VAD10,VAD10,VAD10,VAD10,VAD10,VAD10,VAD10,VAD10,VAD10,VAD10,
VAD11,VAD11,VAD11,VAD11,VAD11,VAD11,VAD11,VAD11,VAD11,VAD11,
VAD11,VAD11,VAD11,VAD11,VAD11,VAD11,VAD11,VAD11,VAD11,VAD11,
VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,
VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,
VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,VAD12])
t2 = time.time()
resultado_2 = somaVADs_conv([VAD10,VAD10,VAD10,VAD10,VAD10,VAD10,VAD10,VAD10,VAD10,VAD10,
VAD11,VAD11,VAD11,VAD11,VAD11,VAD11,VAD11,VAD11,VAD11,VAD11,
VAD11,VAD11,VAD11,VAD11,VAD11,VAD11,VAD11,VAD11,VAD11,VAD11,
VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,
VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,
VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,VAD12,VAD12],
delta=1)
t3 = time.time()
resultado_3 = somaVADs_FFT([(VAD10,10), (VAD11,20), (VAD12,30)], delta=1)
t4 = time.time()
resultado_4 = somaVADs_hib([(VAD10,10), (VAD11,20), (VAD12,30)], delta=1)
t5 = time.time()
eq1 = np.allclose(resultado_1, resultado_2)
eq2 = np.allclose(resultado_1, resultado_3)
eq3 = np.allclose(resultado_1, resultado_4)
print("All results equal? ", all([eq1, eq2, eq3]))
print("somaVADs_biv time : ", t2-t1)
print("somaVADs_conv time : ", t3-t2)
print("somaVADs_FFT time : ", t4-t3)
print("somaVADs_hib time : ", t5-t4)
```
**Test 2**
```
# We use the VADs defined in Example 1 (VAD1b, VAD2b) and in Example 4 (VAD10, VAD11, VAD12)
t1 = time.time()
resultado_1 = somaVADs_FFT([(VAD10,330),(VAD11,190),(VAD12,260),(VAD1b,244),(VAD2b,133)], delta=1)
t2 = time.time()
resultado_2 = somaVADs_hib([(VAD10,330),(VAD11,190),(VAD12,260),(VAD1b,244),(VAD2b,133)], delta=1)
t3 = time.time()
#resultado_2 = somaVADs_biv([VAD10]*330 + [VAD11]*190 + [VAD12]*260 + [VAD1b]*244 + [VAD2b]*133)
t4 = time.time()
print("All results equal? ", np.allclose(resultado_1, resultado_2))
print("somaVADs_FFT time : ", t2-t1)
print("somaVADs_hib time : ", t3-t2)
#print("somaVADs_biv time : ", t4-t3)
```
# REINFORCE
---
In this notebook, we will train REINFORCE with OpenAI Gym's CartPole environment.
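As a quick refresher (standard REINFORCE background, not specific to this implementation): the policy parameters $\theta$ are updated with a Monte Carlo estimate of the policy gradient,

$$\nabla_\theta J(\theta) \approx \left( \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \right) R(\tau), \qquad R(\tau) = \sum_{t=0}^{T-1} \gamma^t r_t,$$

which the training loop realizes by summing `-log_prob * R` over an episode and taking a gradient step on that loss.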
### 1. Import the Necessary Packages
```
import gym
gym.logger.set_level(40) # suppress warnings (please remove if gives error)
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
%matplotlib inline
import torch
torch.manual_seed(0) # set random seed
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.distributions import Categorical
# Udacity add-on from the online workspace
!python -m pip install pyvirtualdisplay
from pyvirtualdisplay import Display
display = Display(visible=0, size=(1400, 900))
display.start()
is_ipython = 'inline' in plt.get_backend()
if is_ipython:
from IPython import display
plt.ion()
```
### 2. Define the Architecture of the Policy
```
env = gym.make('CartPole-v0')
env.seed(0)
print('observation space:', env.observation_space)
print('action space:', env.action_space)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
class Policy(nn.Module):
def __init__(self, s_size=4, h_size=16, a_size=2):
super(Policy, self).__init__()
self.fc1 = nn.Linear(s_size, h_size)
self.fc2 = nn.Linear(h_size, a_size)
def forward(self, x):
x = F.relu(self.fc1(x))
x = self.fc2(x)
return F.softmax(x, dim=1)
def act(self, state):
state = torch.from_numpy(state).float().unsqueeze(0).to(device)
probs = self.forward(state).cpu()
m = Categorical(probs)
action = m.sample()
return action.item(), m.log_prob(action)
```
### 3. Train the Agent with REINFORCE
```
policy = Policy().to(device)
optimizer = optim.Adam(policy.parameters(), lr=1e-2)
def reinforce(n_episodes=1000, max_t=1000, gamma=1.0, print_every=100):
scores_deque = deque(maxlen=100)
scores = []
for i_episode in range(1, n_episodes+1):
saved_log_probs = []
rewards = []
state = env.reset()
for t in range(max_t):
action, log_prob = policy.act(state)
saved_log_probs.append(log_prob)
state, reward, done, _ = env.step(action)
rewards.append(reward)
if done:
break
scores_deque.append(sum(rewards))
scores.append(sum(rewards))
discounts = [gamma**i for i in range(len(rewards)+1)]
R = sum([a*b for a,b in zip(discounts, rewards)])
policy_loss = []
for log_prob in saved_log_probs:
policy_loss.append(-log_prob * R)
policy_loss = torch.cat(policy_loss).sum()
optimizer.zero_grad()
policy_loss.backward()
optimizer.step()
if i_episode % print_every == 0:
print('Episode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))
if np.mean(scores_deque)>=195.0:
print('Environment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_deque)))
break
return scores
#scores = reinforce()
scores = reinforce(gamma=0.99995, max_t=2000)
```
### 4. Plot the Scores
```
fig = plt.figure()
ax = fig.add_subplot(111)
ax.tick_params(axis='x', colors='blue')
ax.tick_params(axis='y', colors='blue')
plt.plot(np.arange(1, len(scores)+1), scores)
plt.ylabel('Score', color='blue')
plt.xlabel('Episode #', color='blue')
plt.show()
```
### 5. Watch a Smart Agent!
```
# Updated with add-on from Udacity online workspace
env = gym.make('CartPole-v0')
state = env.reset()
img = plt.imshow(env.render(mode='rgb_array'))
for t in range(1000):
action, _ = policy.act(state)
img.set_data(env.render(mode='rgb_array'))
plt.axis('off')
display.display(plt.gcf())
display.clear_output(wait=True)
state, reward, done, _ = env.step(action)
if done:
break
env.close()
```
```
#remotes::install_github("coolbutuseless/ggpattern")
list.of.packages <- c("ggpubr","magick","ggpattern","tidyverse","stargazer","dplyr","ggplot2")
new.packages <- list.of.packages[!(list.of.packages %in% installed.packages()[,"Package"])]
if(length(new.packages)) install.packages(new.packages, repos = "http://cran.us.r-project.org")
invisible(lapply(list.of.packages, library, character.only = TRUE))
select <- dplyr::select
options(repr.matrix.max.rows=50, repr.matrix.max.cols=500)
```
### Publication year distribution
```
path <- "../2_Treatment_database/output/database_one_row_each_paper.csv"
df <- read_csv(path)
sprintf("%i x %i dataframe", nrow(df), ncol(df))
head(df,1)
df_pub <- df %>%
select(publication_year,horizon_year)%>%
#for easier reading, aggregate all publication years up to 2002 into 2002
mutate(pub_year = ifelse(publication_year <= 2002, 2002, publication_year),
hor_year = ifelse(horizon_year <= 2030, "[2025;2030]",
ifelse(horizon_year > 2050, "]2050;2100]", "]2030;2050]")))%>%
#calculate the sum of publi by region
group_by(pub_year,hor_year) %>%
summarise('number'=n()) %>%
ungroup() %>%
#calculate percentage for column labels
mutate('relative'=unlist(by(data = number, INDICES = pub_year,
FUN = function(x) round(x/sum(x)*100, digits = 0)))) %>%
mutate(Estimated = ifelse(pub_year == 2002, "aggregated", "yearly"))
plot_pub <- ggplot(data = df_pub,aes(x=pub_year,y=number,fill=hor_year, pattern = Estimated)) +
ggtitle('a) Horizon year per publication year')+
geom_bar_pattern(stat="identity",
color = "black",
pattern_fill = "black",
pattern_angle = 45,
pattern_density = 0.1,
pattern_spacing = 0.02,
pattern_key_scale_factor = 0.2) +
scale_pattern_manual(values = c(aggregated = "stripe", yearly = "none")) +
labs(x = " \n Publication Year", y = "Number of Papers \n ", fill = "Horizon Year",
pattern = "Level"
) +
guides(pattern = FALSE, fill = guide_legend(override.aes = list(pattern = "none")))+
geom_vline(xintercept= 2002.5, linetype="dashed", size=0.5)+
annotate("text", x = 2002, y = 200, label = "Until 2002",angle = 90) +
xlab(" \n Publication Year")+
ylab("Number of Papers \n ")+
geom_text(data = subset(df_pub,pub_year ==2002 & relative >=15),
aes(x = pub_year, label = paste0(relative,'%')),
colour = 'black', position=position_stack(vjust=0.5))+
geom_text(data = subset(df_pub,pub_year >=2007 & pub_year <=2013 & relative >=15),
aes(x = pub_year, label = paste0(relative,'%')),
colour = 'black', position=position_stack(vjust=0.5))+
geom_text(data = subset(df_pub,pub_year >=2014 & pub_year <=2016 & relative >=5),
aes(x = pub_year, label = paste0(relative,'%')),
colour = 'black', position=position_stack(vjust=0.5))+
geom_text(data = subset(df_pub,pub_year >=2017 & pub_year <=2020 & relative >=2),
aes(x = pub_year, label = paste0(relative,'%')),
colour = 'black', position=position_stack(vjust=0.5))+
theme_minimal()+
theme(
plot.title = element_text(size = rel(2)),
legend.title = element_text(size = 16,face ="bold"),
legend.text = element_text(size = 16),
legend.position = 'top',
axis.text.x = element_text(size = 16),
axis.text.y = element_text(size = 16),
axis.title.x = element_text(size = 16, hjust = 0.5,face ="bold"),
axis.title.y = element_text(size = 16, hjust = 0.5,face ="bold")
)
```
### Horizon year distribution
```
df_hor <- df %>%
select(horizon_year,Region)%>%
mutate(hor_year = ifelse(horizon_year >=2026 & horizon_year <= 2029, 2027,
ifelse(horizon_year >=2031 & horizon_year <= 2039, 2035,
ifelse(horizon_year >=2041 & horizon_year <= 2049, 2045,
ifelse(horizon_year >=2051 & horizon_year <= 2099, 2075,horizon_year)))),
hor_year = as.character(hor_year),
Region=factor(Region, levels = c('Antarctica','Oceania','Africa','Latin America',
'North America','European Union','Europe','Asia'))) %>%
#calculate the sum of publi by region
group_by(hor_year,Region) %>%
summarise('number'=n()) %>%
ungroup() %>%
#calculate percentage for column labels
mutate('relative'=unlist(by(data = number, INDICES = hor_year,
FUN = function(x) round(x/sum(x)*100, digits = 0))))%>%
mutate(Estimated = ifelse(hor_year == 2027 | hor_year == 2035 | hor_year == 2045 | hor_year == 2075, "aggregated", "yearly"))
head(df_hor,2)
options(repr.plot.width=12, repr.plot.height=10)
plot_hor <- ggplot(data = df_hor, aes(x=hor_year,y=number,fill=Region, pattern = Estimated)) +
ggtitle('b) Regional distribution per horizon year')+
geom_bar_pattern(stat="identity",
color = "black",
pattern_fill = "black",
pattern_angle = 45,
pattern_density = 0.1,
pattern_spacing = 0.02,
pattern_key_scale_factor = 0.2) +
scale_pattern_manual(values = c(aggregated = "stripe", yearly = "none")) +
labs(x = " \n Horizon Year", y = "Number of Papers \n ", fill = "Region", pattern = "Level") +
guides(pattern = FALSE, fill = guide_legend(override.aes = list(pattern = "none")))+
scale_x_discrete(labels = c("2025","","2030","","2040","","2050","","2100")) +
scale_fill_manual(values=c('Asia'='darkorange',
'European Union'='#7CAE00',
'Europe'='seagreen4',
'North America'='darkblue',
'Latin America'='dodgerblue2',
'Africa'='orchid',
'Oceania'='coral2',
'Antarctica'='#CAB2D6')) +
geom_text(data = subset(df_hor,hor_year != 2030 & hor_year !=2050 & hor_year !=2027 & hor_year !=2045 &
relative >=15),
aes(x = hor_year, label = paste0(relative,'%')),
colour = 'black', position=position_stack(vjust=0.5))+
geom_text(data = subset(df_hor,hor_year == 2030 | hor_year ==2050),
aes(x = hor_year, label = paste0(relative,'%')),
colour = 'black', position=position_stack(vjust=0.5))+
theme_minimal()+
theme(
plot.title = element_text(size = rel(2)),
legend.title = element_text(size = 16,face ="bold"),
legend.text = element_text(size = 16),
legend.position = 'top',
axis.text.x = element_text(size = 16),
axis.text.y = element_text(size = 16),
axis.title.x = element_text(size = 16, hjust = 0.5,face ="bold"),
axis.title.y = element_text(size = 16, hjust = 0.5,face ="bold")
)
plot_hor
options(repr.plot.width=20, repr.plot.height=10)
plot <- ggarrange(plot_pub, plot_hor,
widths = c(5,4),
#common.legend = FALSE,
#legend = "bottom",
ncol=2, nrow = 1
)
plot
ggsave('./output/Fig3_distribution_years.png', height=10, width=20, plot=plot)
```
$\newcommand{\mb}[1]{\mathbf{ #1 }}$
$\newcommand{\bb}[1]{\mathbb{ #1 }}$
$\newcommand{\bs}[1]{\boldsymbol{ #1 }}$
$\newcommand{\norm}[1]{\left\Vert #1 \right\Vert}$
$\newcommand{\der}[2]{\frac{ \mathrm{d} #1 }{ \mathrm{d} #2 }}$
$\newcommand{\derp}[2]{\frac{ \partial #1 }{ \partial #2 }}$
$\newcommand{\R}{\bb{R}}$
# Learning Dynamics
```
from matplotlib.pyplot import show, subplots
```
## Robotic Systems
Let $\mathcal{Q} \subseteq \R^n$ be a configuration space. Consider a robotic system governed by:
\begin{equation}
\mb{D}(\mb{q})\ddot{\mb{q}} + \mb{C}(\mb{q}, \dot{\mb{q}})\dot{\mb{q}} + \mb{G}(\mb{q}) = \mb{B}\mb{u},
\end{equation}
for generalized coordinates $\mb{q} \in \mathcal{Q}$, coordinate rates $\dot{\mb{q}} \in \R^n$, actions $\mb{u} \in \R^m$, inertia matrix function $\mb{D}: \mathcal{Q} \to \bb{S}^n_{++}$ (the space of $n \times n$ positive definite matrices), Coriolis terms $\mb{C}: \mathcal{Q} \times \R^n \to \R^{n \times n}$, potential terms $\mb{G}: \mathcal{Q} \to \R^n$, and static actuation matrix $\mb{B} \in \R^{n \times m}$. Assume $m \leq n$ and $\mb{B}$ is full rank.
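As a concrete (and deliberately simplified) illustration of these terms, here is a hedged scalar sketch, not the library's `InvertedPendulum` implementation: a point mass $m$ on a massless rod of length $l$, with the angle $q$ measured from the downward equilibrium (sign conventions may differ from the library's).

```python
import numpy as np

# Assumed illustrative parameters (not taken from the notebook's configuration).
m, l, g = 0.25, 0.5, 9.81

def forward_dynamics(q, q_dot, u):
    """Solve D(q) qdd + C(q, qd) qd + G(q) = B u for qdd (all terms scalar here)."""
    D = m * l**2               # inertia "matrix" (1x1)
    C = 0.0                    # a single pendulum has no Coriolis terms
    G = m * g * l * np.sin(q)  # potential (gravity) term
    B = 1.0                    # fully actuated, n = m = 1
    return (B * u - C * q_dot - G) / D

print(forward_dynamics(q=0.0, q_dot=0.0, u=1.0))  # 1 / (m l^2) = 16.0
```

For $n > 1$ coordinates the same computation becomes a linear solve, $\ddot{\mb{q}} = \mb{D}(\mb{q})^{-1}(\mb{B}\mb{u} - \mb{C}(\mb{q},\dot{\mb{q}})\dot{\mb{q}} - \mb{G}(\mb{q}))$, which is well defined because $\mb{D}(\mb{q})$ is positive definite.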
### Inverted Pendulum
```
from numpy import array, identity, linspace
from core.controllers import FBLinController, LQRController
from core.systems import InvertedPendulum
ip = InvertedPendulum(m=0.25, l=0.5)
Q_ip = identity(2)
R_ip = identity(1)
lqr_ip = LQRController.build(ip, Q_ip, R_ip)
fb_lin_ip = FBLinController(ip, lqr_ip)
x_0_ip = array([1, 0])
ts_ip = linspace(0, 10, 1000 + 1)
xs_ip, _ = ip.simulate(x_0_ip, fb_lin_ip, ts_ip)
fig_ip, ax_ip = subplots(figsize=(6, 4))
ax_ip.plot(ts_ip, xs_ip[:, 0], linewidth=3, label='Oracle')
ax_ip.grid()
ax_ip.legend(fontsize=16)
ax_ip.set_title('Inverted Pendulum', fontsize=16)
ax_ip.set_xlabel('$t$ (sec)', fontsize=16)
ax_ip.set_ylabel('$\\theta$ (rad)', fontsize=16)
show()
```
### Double Inverted Pendulum
```
from core.systems import DoubleInvertedPendulum
dip = DoubleInvertedPendulum(m_1=0.25, m_2=0.25, l_1=0.5, l_2=0.5)
Q_dip = identity(4)
R_dip = identity(2)
lqr_dip = LQRController.build(dip, Q_dip, R_dip)
fb_lin_dip = FBLinController(dip, lqr_dip)
x_0_dip = array([1, 0, 0, 0])
ts_dip = linspace(0, 10, 1000 + 1)
xs_dip, _ = dip.simulate(x_0_dip, fb_lin_dip, ts_dip)
fig_dip, (ax_dip_1, ax_dip_2) = subplots(2, figsize=(6, 8))
ax_dip_1.set_title('Double Inverted Pendulum', fontsize=16)
ax_dip_1.plot(ts_dip, xs_dip[:, 0], linewidth=3, label='Oracle')
ax_dip_1.grid()
ax_dip_1.legend(fontsize=16)
ax_dip_1.set_xlabel('$t$ (sec)', fontsize=16)
ax_dip_1.set_ylabel('$\\theta_1$ (rad)', fontsize=16)
ax_dip_2.plot(ts_dip, xs_dip[:, 1], linewidth=3, label='Oracle')
ax_dip_2.grid()
ax_dip_2.legend(fontsize=16)
ax_dip_2.set_xlabel('$t$ (sec)', fontsize=16)
ax_dip_2.set_ylabel('$\\theta_2$ (rad)', fontsize=16)
show()
```
## Uncertain Robotic Systems
Suppose $\mb{D}$, $\mb{C}$, $\mb{G}$, and $\mb{B}$ are unknown, and instead we have access to corresponding estimates $\hat{\mb{D}}$, $\hat{\mb{C}}$, $\hat{\mb{G}}$, and $\hat{\mb{B}}$ satisfying:
\begin{equation}
\hat{\mb{D}}(\mb{q})\ddot{\mb{q}} + \hat{\mb{C}}(\mb{q}, \dot{\mb{q}})\dot{\mb{q}} + \hat{\mb{G}}(\mb{q}) = \hat{\mb{B}}\mb{u}.
\end{equation}
Assume that $\hat{\mb{B}}$ is also full rank.
The system dynamics can be written in terms of the estimated terms as:
\begin{equation}
\der{}{t} \begin{bmatrix} \mb{q} \\ \dot{\mb{q}} \end{bmatrix} = \begin{bmatrix} \dot{\mb{q}} \\ -\hat{\mb{D}}(\mb{q})^{-1}(\hat{\mb{C}}(\mb{q}, \dot{\mb{q}})\dot{\mb{q}} + \hat{\mb{G}}(\mb{q})) \end{bmatrix} + \begin{bmatrix} \mb{0}_{n \times m} \\ \hat{\mb{D}}(\mb{q})^{-1}\hat{\mb{B}} \end{bmatrix} \mb{u} + \begin{bmatrix} \mb{0}_n \\ \hat{\mb{D}}(\mb{q})^{-1}(\hat{\mb{C}}(\mb{q}, \dot{\mb{q}})\dot{\mb{q}} + \hat{\mb{G}}(\mb{q}))-\mb{D}(\mb{q})^{-1}(\mb{C}(\mb{q}, \dot{\mb{q}})\dot{\mb{q}} + \mb{G}(\mb{q})) \end{bmatrix} + \begin{bmatrix} \mb{0}_{n \times m} \\ \mb{D}(\mb{q})^{-1}\mb{B} - \hat{\mb{D}}(\mb{q})^{-1}\hat{\mb{B}} \end{bmatrix} \mb{u}
\end{equation}
### Inverted Pendulum
```
ip_est = InvertedPendulum(m=0.24, l=0.48)
lqr_ip_est = LQRController.build(ip_est, Q_ip, R_ip)
fb_lin_ip_est = FBLinController(ip_est, lqr_ip_est)
xs_ip, us_ip = ip.simulate(x_0_ip, fb_lin_ip_est, ts_ip)
ax_ip.plot(ts_ip, xs_ip[:, 0], linewidth=3, label='Estimated')
ax_ip.legend(fontsize=16)
fig_ip
```
### Double Inverted Pendulum
```
dip_est = DoubleInvertedPendulum(m_1=0.24, m_2=0.24, l_1=0.48, l_2=0.48)
lqr_dip_est = LQRController.build(dip_est, Q_dip, R_dip)
fb_lin_dip_est = FBLinController(dip_est, lqr_dip_est)
xs_dip, us_dip = dip.simulate(x_0_dip, fb_lin_dip_est, ts_dip)
ax_dip_1.plot(ts_dip, xs_dip[:, 0], linewidth=3, label='Estimated')
ax_dip_1.legend(fontsize=16)
ax_dip_2.plot(ts_dip, xs_dip[:, 1], linewidth=3, label='Estimated')
ax_dip_2.legend(fontsize=16)
fig_dip
```
## Learning Dynamics
```
from tensorflow.logging import ERROR, set_verbosity
set_verbosity(ERROR)
```
### Inverted Pendulum
```
from core.dynamics import LearnedFBLinDynamics
from core.learning.keras import KerasResidualAffineModel
d_drift_in_ip = 3
d_act_in_ip = 3
d_hidden_ip = 20
d_out_ip = 1
res_model_ip = KerasResidualAffineModel(d_drift_in_ip, d_act_in_ip, d_hidden_ip, 1, d_out_ip)
ip_learned = LearnedFBLinDynamics(ip_est, res_model_ip)
data = ip_learned.process_episode(xs_ip, us_ip, ts_ip)
ip_learned.fit(data, num_epochs=10, validation_split=0.1)
x_dots = array([ip.eval_dot(x, u, t) for x, u, t in zip(xs_ip, us_ip, ts_ip)])
_, (ax_ip_1, ax_ip_2) = subplots(2, figsize=(6, 8))
ax_ip_1.set_title('Inverted Pendulum', fontsize=16)
ax_ip_1.plot(ts_ip[:-1], x_dots[:, 0], linewidth=3, label='True')
ax_ip_1.grid()
ax_ip_1.set_xlabel('$t$ (sec)', fontsize=16)
ax_ip_1.set_ylabel('$\\dot{\\theta}$ (rad / sec)', fontsize=16)
ax_ip_2.plot(ts_ip[:-1], x_dots[:, 1], linewidth=3, label='True')
ax_ip_2.grid()
ax_ip_2.set_xlabel('$t$ (sec)', fontsize=16)
ax_ip_2.set_ylabel('$\\ddot{\\theta}$ (rad / sec$^2$)', fontsize=16)
x_dots = array([ip_learned.eval_dot(x, u, t) for x, u, t in zip(xs_ip, us_ip, ts_ip)])
ax_ip_1.plot(ts_ip[:-1], x_dots[:, 0], linewidth=3, label='Learned')
ax_ip_1.legend(fontsize=16)
ax_ip_2.plot(ts_ip[:-1], x_dots[:, 1], linewidth=3, label='Learned')
ax_ip_2.legend(fontsize=16)
show()
```
### Double Inverted Pendulum
```
d_drift_in_dip = 5
d_act_in_dip = 5
d_hidden_dip = 40
d_out_dip = 2
res_model_dip = KerasResidualAffineModel(d_drift_in_dip, d_act_in_dip, d_hidden_dip, 2, d_out_dip)
dip_learned = LearnedFBLinDynamics(dip_est, res_model_dip)
data = dip_learned.process_episode(xs_dip, us_dip, ts_dip)
dip_learned.fit(data, num_epochs=10, validation_split=0.1)
x_dots = array([dip.eval_dot(x, u, t) for x, u, t in zip(xs_dip, us_dip, ts_dip)])
_, axs_dip = subplots(4, figsize=(6, 16))
axs_dip[0].set_title('Double Inverted Pendulum', fontsize=16)
ylabels = ['$\\dot{\\theta}_1$ (rad / sec)', '$\\dot{\\theta}_2$ (rad / sec)', '$\\ddot{\\theta}_1$ (rad / sec$^2$)', '$\\ddot{\\theta}_2$ (rad / sec$^2$)']
for ax, series, ylabel in zip(axs_dip, x_dots.T, ylabels):
    ax.plot(ts_dip[:-1], series, linewidth=3, label='True')
    ax.grid()
    ax.set_xlabel('$t$ (sec)', fontsize=16)
    ax.set_ylabel(ylabel, fontsize=16)
x_dots = array([dip_learned.eval_dot(x, u, t) for x, u, t in zip(xs_dip, us_dip, ts_dip)])
for ax, series in zip(axs_dip, x_dots.T):
    ax.plot(ts_dip[:-1], series, linewidth=3, label='Learned')
    ax.legend(fontsize=16)
show()
```
## Overfitting
### Inverted Pendulum
```
lqr_learned_ip = LQRController.build(ip_learned, Q_ip, R_ip)
fb_lin_learned_ip = FBLinController(ip_learned, lqr_learned_ip)
xs, _ = ip.simulate(x_0_ip, fb_lin_learned_ip, ts_ip)
_, ax = subplots(figsize=(6, 4))
ax.plot(ts_ip, xs[:, 0], linewidth=3)
ax.grid()
ax.set_title('Inverted Pendulum', fontsize=16)
ax.set_xlabel('$t$ (sec)', fontsize=16)
ax.set_ylabel('$\\theta$ (rad)', fontsize=16)
show()
```
### Double Inverted Pendulum
```
lqr_learned_dip = LQRController.build(dip_learned, Q_dip, R_dip)
fb_lin_learned_dip = FBLinController(dip_learned, lqr_learned_dip)
xs, _ = dip.simulate(x_0_dip, fb_lin_learned_dip, ts_dip)
_, (ax_1, ax_2) = subplots(2, figsize=(6, 8))
ax_1.set_title('Double Inverted Pendulum', fontsize=16)
ax_1.plot(ts_dip, xs[:, 0], linewidth=3)
ax_1.grid()
ax_1.set_xlabel('$t$ (sec)', fontsize=16)
ax_1.set_ylabel('$\\theta_1$ (rad)', fontsize=16)
ax_2.plot(ts_dip, xs[:, 1], linewidth=3)
ax_2.grid()
ax_2.set_xlabel('$t$ (sec)', fontsize=16)
ax_2.set_ylabel('$\\theta_2$ (rad)', fontsize=16)
show()
```
**Source of the materials**: Biopython Tutorial and Cookbook (adapted)
<img src="images/biopython.jpg">
# Introduction
## What is Biopython?
The Biopython Project is an international association of developers of freely available Python (http://www.python.org) tools for computational molecular biology. Python is an object-oriented, interpreted, flexible language that is becoming increasingly popular for scientific computing. Python is easy to learn, has a very clear syntax and can easily be extended with modules written in C, C++ or FORTRAN.
The Biopython web site (http://www.biopython.org) provides an online resource for modules, scripts, and web links for developers of Python-based software for bioinformatics use and research. Basically, the goal of Biopython is to make it as easy as possible to use Python for bioinformatics by creating high-quality, reusable modules and classes. Biopython features include parsers for various bioinformatics file formats (BLAST, Clustalw, FASTA, GenBank, ...), access to online services (NCBI, ExPASy, ...), interfaces to common and not-so-common programs (Clustalw, DSSP, MSMS, ...), a standard sequence class, various clustering modules, a KD tree data structure, and even documentation.
Basically, we just like to program in Python and want to make it as easy as possible to use Python for bioinformatics by creating high-quality, reusable modules and scripts.
## What can I find in the Biopython package
The main Biopython releases have lots of functionality, including:
- The ability to parse bioinformatics files into Python utilizable data structures, including support for the following formats:
- Blast output – both from standalone and WWW Blast
- Clustalw
- FASTA
- GenBank
- PubMed and Medline
- ExPASy files, like Enzyme and Prosite
- SCOP, including ‘dom’ and ‘lin’ files
- UniGene
- SwissProt
- Files in the supported formats can be iterated over record by record or indexed and accessed via a Dictionary interface.
- Code to deal with popular on-line bioinformatics destinations such as:
- NCBI – Blast, Entrez and PubMed services
- ExPASy – Swiss-Prot and Prosite entries, as well as Prosite searches
- Interfaces to common bioinformatics programs such as:
- Standalone Blast from NCBI
- Clustalw alignment program
- EMBOSS command line tools
- A standard sequence class that deals with sequences, ids on sequences, and sequence features.
- Tools for performing common operations on sequences, such as translation, transcription and weight calculations.
- Code to perform classification of data using k Nearest Neighbors, Naive Bayes or Support Vector Machines.
- Code for dealing with alignments, including a standard way to create and deal with substitution matrices.
- Code making it easy to split up parallelizable tasks into separate processes.
- GUI-based programs to do basic sequence manipulations, translations, BLASTing, etc.
- Extensive documentation and help with using the modules, including this file, on-line wiki documentation, the web site, and the mailing list.
- Integration with BioSQL, a sequence database schema also supported by the BioPerl and BioJava projects.
We hope this gives you plenty of reasons to download and start using Biopython!
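As a taste of the sequence operations mentioned above, here is a minimal pure-Python sketch of transcription and translation. Biopython's `Seq` objects provide these properly (`Seq.transcribe`, `Seq.translate`); this toy version covers only a handful of codons:

```python
# Tiny subset of the standard codon table, for illustration only.
CODON_TABLE = {
    "ATG": "M", "GCC": "A", "ATT": "I", "TAA": "*",
}

def transcribe(dna):
    # DNA coding strand -> mRNA: replace T with U.
    return dna.replace("T", "U")

def translate(dna):
    # Read codons in frame and map them through the (partial) table.
    codons = [dna[i:i + 3] for i in range(0, len(dna) - len(dna) % 3, 3)]
    return "".join(CODON_TABLE.get(c, "X") for c in codons)

print(transcribe("ATGGCCATT"))  # AUGGCCAUU
print(translate("ATGGCCATT"))  # MAI
```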
## About these notebooks
These notebooks were prepared with Python 3 for Project Jupyter 4+ (formerly IPython Notebook). Biopython should be installed and available (v1.66 or newer recommended).
You can check the basic installation and inspect the version by doing:
```
import Bio
print(Bio.__version__)
```
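The record-by-record iteration described earlier can be sketched in pure Python for the FASTA format. `Bio.SeqIO.parse` does this properly for many formats; the function below is a minimal, FASTA-only illustration:

```python
from io import StringIO

def parse_fasta(handle):
    # Yield (id, sequence) records one at a time: a minimal sketch of
    # what Bio.SeqIO.parse does for the FASTA format.
    name, chunks = None, []
    for line in handle:
        line = line.strip()
        if line.startswith(">"):
            if name is not None:
                yield name, "".join(chunks)
            name, chunks = line[1:].split()[0], []
        elif line:
            chunks.append(line)
    if name is not None:
        yield name, "".join(chunks)

records = dict(parse_fasta(StringIO(">seq1 demo\nATGC\nGGTA\n>seq2\nTTTT\n")))
print(records)  # {'seq1': 'ATGCGGTA', 'seq2': 'TTTT'}
```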
```
import numpy as np
import scipy.sparse as sp
import os
import scipy.io
import tensorflow as tf
import gnn.GNN as GNN
import gnn.gnn_utils as gnn_utils
import examples.Net_Subgraph as n
import gnn.load as ld
import networkx as nx
%load_ext autoreload
%autoreload 2
```
### Example
```
E_tot = [[0, 1, 0],
[0, 2, 0],
[0, 4, 0],
[1, 0, 0],
[1, 2, 0],
[1, 3, 0],
[2, 0, 0],
[2, 1, 0],
[2, 3, 0],
[2, 4, 0],
[3, 1, 0],
[3, 2, 0],
[4, 0, 0],
[4, 2, 0],
[5, 7, 1],
[5, 8, 1],
[6, 7, 1],
[6, 8, 1],
[7, 5, 1],
[7, 6, 1],
[7, 8, 1],
[8, 5, 1],
[8, 6, 1],
[8, 7, 1]]
N_tot = [[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 1., 0.]]
E_tot = np.array(E_tot)
N_tot = np.array(N_tot)
inp, arcnode, graphnode = gnn_utils.from_EN_to_GNN(E_tot, N_tot)
# random labels
labels = np.random.randint(2, size=(N_tot.shape[0]))
labels = np.eye(max(labels)+1, dtype=np.int32)[labels] # one-hot encoding of labels
E_tot.shape
arcnode
threshold = 0.01
learning_rate = 0.01
state_dim = 5
input_dim = inp.shape[1]
output_dim = labels.shape[1]
max_it = 50
num_epoch = 10000
# Create the state transition function, output function, loss function and metrics
net = n.Net(input_dim, state_dim, output_dim)
# Create the graph neural network model
g = GNN.GNN(net, input_dim, output_dim, state_dim)
```
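The one-hot label encoding used above (indexing `np.eye` with the label array) generalizes to any integer label array. A standalone sketch:

```python
import numpy as np

def one_hot(labels, num_classes=None):
    # Row i of the identity matrix is the one-hot vector for class i,
    # so indexing eye(k) with the label array encodes every label at once.
    labels = np.asarray(labels)
    if num_classes is None:
        num_classes = int(labels.max()) + 1
    return np.eye(num_classes, dtype=np.int32)[labels]

print(one_hot([0, 2, 1, 2]).tolist())
# [[1, 0, 0], [0, 0, 1], [0, 1, 0], [0, 0, 1]]
```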
### Graph
```
mat_dir = "./Data"
mat_fn = "sub_15_7_200"
mat = ld.loadmat(os.path.join(mat_dir, mat_fn))
train = mat['dataSet']['trainSet']
target = np.array(train['targets'])
train['connMatrix']
graph = nx.from_scipy_sparse_matrix(train['connMatrix'])
component_no = 4
component_size = 15
node_list = range(component_no * component_size, (component_no + 1) * component_size)
G = nx.subgraph(graph, node_list)
target_G = target[node_list]
pos_nodes = (target_G == 1).nonzero()[0] + component_no * component_size
G_sub = nx.subgraph(G, pos_nodes)
pos = nx.spring_layout(G_sub)
nx.draw_networkx_nodes(G_sub, pos, nodelist=pos_nodes, node_color='red')
nx.draw_networkx_edges(G_sub, pos, nodelist=pos_nodes)
graph = nx.from_scipy_sparse_matrix(train['connMatrix'])
component_no = 10
component_size = 15
node_list = range(component_no * component_size, (component_no + 1) * component_size)
G = nx.subgraph(graph, node_list)
target_G = target[node_list]
pos = nx.spring_layout(G)
pos_nodes = (target_G == 1).nonzero()[0] + component_no * component_size
neg_nodes = (target_G == -1).nonzero()[0] + component_no * component_size
nx.draw_networkx_nodes(G, pos, nodelist=pos_nodes, node_color='red')
nx.draw_networkx_nodes(G, pos, nodelist=neg_nodes, node_color='blue')
nx.draw_networkx_edges(G, pos)
```
### tf.sparse
```
st = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])
gpu_options = tf.GPUOptions(allow_growth=True)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
stv = sess.run(st)
st.op
```
### tf.while_loop
```
i = tf.constant(0)
c = lambda i: tf.less(i, 10)
b = lambda i: tf.add(i, 1)
r = tf.while_loop(c, b, [i])
sess.run(r)
import collections
Pair = collections.namedtuple('Pair', 'j, k')
ijk_0 = [tf.constant(0), Pair(tf.constant(1), tf.constant(2))]
c = lambda i, p: i < 10
b = lambda i, p: [i + 1, Pair((p.j + p.k), (p.j - p.k))]
ijk_final = tf.while_loop(c, b, ijk_0)
sess.run(ijk_final)
i0 = tf.constant(0)
m0 = tf.ones([2, 2])
c = lambda i, m: i < 10
b = lambda i, m: [i+1, tf.concat([m, m], axis=0)]
final = tf.while_loop(
    c, b, loop_vars=[i0, m0],
    shape_invariants=[i0.get_shape(), tf.TensorShape([None, 2])])
sess.run(final)
a = tf.placeholder(tf.float32, shape=[None, 2])
sess.run(tf.shape(a), feed_dict={a: np.array([[1, 2], [3, 2]])})
n = 10000
x = tf.constant(list(range(n)))
c = lambda i, x: i < n
b = lambda i, x: (tf.compat.v1.Print(i + 1, [i]), tf.compat.v1.Print(x + 1,
[i], "x:"))
i_final, x_final = tf.while_loop(c, b, (0, x))
with tf.compat.v1.Session() as sess:
print(sess.run(i_final)) # prints [0] ... [9999]
# The following line may increment the counter and x in parallel.
# The counter thread may get ahead of the other thread, but not the
# other way around. So you may see things like
# [9996] x:[9987]
# meaning that the counter thread is on iteration 9996,
# while the other thread is on iteration 9987
# print(sess.run(x_final).shape)
x = tf.constant(2)
y = tf.constant(-1)
def f1(): return [tf.multiply(x, 17)]
def f2(): return [tf.add(y, 23)]
r1 = tf.cond(tf.less(x, y), f1, f2, strict=True)
r2 = tf.cond(tf.less(x, y), f1, f2, strict=False)
with tf.Session() as sess:
print(sess.run(r1))
print(sess.run(r2))
```
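For intuition, `tf.while_loop`'s semantics can be mimicked in plain Python. This sketch ignores graph construction, parallel execution, and shape invariants; it only shows how the loop variables are threaded through `cond` and `body`:

```python
def while_loop(cond, body, loop_vars):
    # Eager, sequential analogue of tf.while_loop: repeatedly apply
    # body while cond holds, threading the loop variables through.
    while cond(*loop_vars):
        result = body(*loop_vars)
        loop_vars = result if isinstance(result, (list, tuple)) else [result]
    # Mirror tf.while_loop: a single loop variable is returned unwrapped.
    return loop_vars[0] if len(loop_vars) == 1 else loop_vars

print(while_loop(lambda i: i < 10, lambda i: i + 1, [0]))  # 10
print(while_loop(lambda i, j: i < 3, lambda i, j: [i + 1, j + i], [0, 0]))  # [3, 3]
```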