# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6 Data
# language: python
# name: py36data
# ---
# # A brief analysis of QQ messages
#
# A message collector bot was developed around October 2017, and it started collecting on 29/10/2017.
#
# Key findings:
# - Activity peaks at 11 AM and 3 PM each day
# - Wednesday is the most active day of the week
# - 林芝 is the most active member
# - 复读机 is 灵芝's best buddy, by an overwhelming margin
# +
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
chat_file = 'data/chat_hist_20180101.csv'
# -
# read all data, skipping malformed lines
# (error_bad_lines is deprecated since pandas 1.3 in favor of on_bad_lines='skip')
df = pd.read_csv(chat_file, error_bad_lines=False)
# ## Pre processing
# drop all null values
non_na_df = df.dropna()
# create timestamp index
index = pd.to_datetime(non_na_df['created_at'])
non_na_df.index = index
# overall info
print('Total records {}'.format(len(non_na_df)))
print('Start / End : {}, {}'.format(non_na_df.index.min(), non_na_df.index.max()))
non_na_df['hour'] = (non_na_df.index.hour + 10) % 24  # shift by 10 hours (UTC -> UTC+10, presumably local time)
non_na_df['dayofweek'] = non_na_df.index.dayofweek
# ## Chat activities by hours
non_na_df.groupby('hour').count()['sender_qq'].plot(kind='bar', figsize=(18,6))
plt.tight_layout()
plt.title('Activity by Hour')
plt.ylabel('Total Chat Count')
plt.xlabel('Hour (24h)')
# ## Chat activities by day of week
non_na_df.groupby('dayofweek').count()['sender_qq'].plot(kind='bar')
plt.tight_layout()
plt.title('Activity by Day of Week')
plt.ylabel('Total Chat Count')
plt.xlabel('Day of Week')
# ## Top chatter over time
# top chatter over time
top_chatter = non_na_df['sender_card'].value_counts()
top_chatter.nlargest(20).plot(kind='bar', figsize=(14,6))
# ## Finding best buddies...
# find all messages that start with @someone
# (.copy() avoids SettingWithCopyWarning when adding the 'who' column below)
at_messages = non_na_df[non_na_df.message_text.str.startswith('@')].copy()
# extract the name card from who has been mentioned
def get_atee(m):
    """Extract the leading @name token of a message, or 'N/A' if none."""
    if m.startswith('@'):
        try:
            return m.split()[0]
        except IndexError:
            return 'N/A'
    return 'N/A'
at_messages['who'] = at_messages.message_text.apply(get_atee)
at_messages.who.value_counts().nlargest(20).plot(kind='bar', figsize=(18,6))
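A quick sanity check of the mention-extraction logic on a few sample messages (restated here so it runs standalone):

```python
def get_atee(m):
    # return the leading "@name" token of a message, or 'N/A' if none
    if m.startswith('@'):
        try:
            return m.split()[0]
        except IndexError:
            return 'N/A'
    return 'N/A'

print(get_atee('@Ade-Lynch hello'))  # → @Ade-Lynch
print(get_atee('no mention here'))   # → N/A
```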
who_at_lynch = at_messages[at_messages['who'].str.startswith('@Ade-Lynch')]
who_at_lynch['sender_card'].value_counts().nlargest(5).plot(kind='bar', figsize=(10,10))
| Look Back 2017.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Color
#
# I can do no better than this blog post by Juice Analytics when it comes to introducing the promise and pitfalls of using color:
#
# > More often than not, dashboards [and charts] get lit up with color like an over-dressed Christmas tree. The color is applied indiscriminately and adds little to the meaning of the dashboard. Appropriate use of color requires restraint. In our dashboard designs, we typically start by using only grey, then we gradually add color where it conveys useful information. [Color had meaning](http://www.juiceanalytics.com/design-principles/color-has-meaning/)
#
# Why is color so problematic?
#
# For one thing, it is often overused, usually to add "interest" rather than consciously to convey information...to encode relationships and help readers decode those relationships. We are not usually taught how to use color effectively, and tend to think it is just a matter of "taste" or something left to designers and artists.
# ## Color has meaning
#
# Another obstacle with color is that it has meaning. We associate different colors with different emotions and feelings and these can change based on the context. Red is hot but it's also a warning and sometimes it conveys romantic love and passion. But not necessarily in every culture. White represents purity in the West but death in many Asian countries.
# 
# Saturation can also affect color meaning. Over-saturated, unnatural colors are jarring. Darker colors can be cooling whilst brighter colors can be warming.
# {width="50%"}
# Can you define a graphic that doesn't mean something different in *some* culture? Probably not. But, as we'll see when we get to the Guidelines, if you start with gray and introduce color sparingly *knowing your audience* then you should steer clear of most trouble.
#
# ## Color doesn't have meaning
#
# Unfortunately, color doesn't have the meaning we *think* it does and often use it for...value. Let's take the example of the problematic "rainbow" or "jet" color maps used for Heat Maps.
#
# When we discussed Ware's Preattentive Processing, we noted that color has no inherent value. This means we do not associate colors with numeric values or even an ordering of any kind. We cannot say that red is greater than blue in the same way we can say that a longer line is greater than a shorter line.
#
# Additionally, what we call "color" includes a few different characteristics such as hue, saturation, and "value" (this is value in the artist's sense of light or dark). The figure below shows the popular rainbow and jet color maps on the left and the *values* of those color maps expressed as grayscale images.
#
# We can see that the values are not perceptually uniform. In the case of the rainbow, we actually go from dark to light to dark when we want to go uniformly from dark to light (or light to dark). For jet, there are actually two peaks of light at cyan and yellow.
# {width="50%"}
# This is not true of color maps designed for perceptual uniformity:
# {width="50%"}
# If we want to encode sequential values, we should use a single-color gradient. If we have diverging values, or positive and negative values with a "natural" zero, we can use a diverging color gradient. For categorical variables, we use distinct colors. The last one is interesting because the default is normally, you guessed it, a rainbow...adjacent colors are definitely not distinct in a rainbow.
#
# If you do use a colormap, use a perceptually uniform one like Viridis pictured above.
# {width="50%"}
# Color can also be used to tie elements together in a collection of charts. For example, a set of charts can use green for financial charts and blue for personal/head count charts. Conversely, color used indiscriminately can be confusing. If two elements are the same color, we assume they're related and it's confusing if they are not.
# ## Color blindness
#
# In general, you should avoid including both red and green in your charts. 8% of men and 0.2% of women (4.5% of the total population) have some form of color blindness. Red/green color blindness is the most common. Hundreds of dashboards have green for good and red for bad...designed, I imagine, by designers who were not color blind.
| fundamentals_2018.9/visualization/color.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 1-2.1 Intro Python
# ## Strings: input, testing, formatting
# - **input() - gathering user input**
# - print() formatting
# - Quotes inside strings
# - Boolean string tests methods
# - String formatting methods
# - Formatting string input()
# - Boolean `in` keyword
#
# -----
#
# ><font size="5" color="#00A0B2" face="verdana"> <B>Student will be able to</B></font>
# - **gather, store and use string `input()`**
# - format `print()` output
# - test string characteristics
# - format string output
# - search for a string in a string
# #
# <font size="6" color="#00A0B2" face="verdana"> <B>Concepts</B></font>
#
# ## input()
# ### get information from users with `input()`
# the **`input()`** function prompts the user to supply data, returning that data as a string
#
# []( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/7a8881cb-0bdd-493c-b1a1-9849a95d05e6/Unit1_Section2-1-input-basic.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/7a8881cb-0bdd-493c-b1a1-9849a95d05e6/Unit1_Section2-1-input-basic.vtt","srclang":"en","kind":"subtitles","label":"english"}])
#
# <font size="6" color="#00A0B2" face="verdana"> <B>Examples</B></font>
# review and run code - enter a small integer in the text box
print("enter a small int: ")
small_int = input()
print("small int: ")
print(small_int)
# #
# <font size="6" color="#B24C00" face="verdana"> <B>Task 1</B></font>
# ## storing input in a variable
# - **[ ]** create code to store input in student_name variable
# an input box should appear when the cell is run
# - **[ ]** type a name in the input box and press **Enter**
# - **[ ]** determine the **`type()`** of **student_name**
# +
# [ ] get input for the variable student_name
print("Enter Student's Name:")
student_name = input()
print("Student Name:")
print(student_name)
# [ ] determine the type of student_name
type(student_name)
# -
# <font size="4" color="#B24C00" face="verdana"> <B>Task 1 continued...</B></font>
# > **note**: **`input()`** returns a string (type = str) regardless of entry
# - if a string is entered **`input()`** returns a string
# - if a number is entered **`input()`** returns a string
#
# - **[ ]** determine the **`type()`** of input below by entering
# - a name
# - an integer (a whole number, no decimal)
# - a float (a number with a decimal point)
# +
# [ ] run the cell several times, entering a name, an int, and a float, after adding code below
print("enter a name or number")
test_input = input()
# [ ] insert code below to check the type of test_input
type(test_input)
# -
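Since `input()` always returns a string, numeric entries must be converted explicitly before doing arithmetic. A small sketch (the sample strings below stand in for real user input):

```python
def to_number(text):
    # try int first, then float; return None if the text is not numeric
    try:
        return int(text)
    except ValueError:
        try:
            return float(text)
        except ValueError:
            return None

print(type(to_number("17")))    # int
print(type(to_number("3.5")))   # float
print(to_number("Alton"))       # None
```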
# #
# <font size="6" color="#00A0B2" face="verdana"> <B>Concepts</B></font>
# ### user prompts using `input()`
#
# the **`input()`** function has an optional string argument which displays the string intended to inform a user what to enter
# **`input()`** works similarly to **`print()`** in the way it displays arguments as output
#
# []( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c607aa57-b18b-4f29-a317-7b13db66d8e8/Unit1_Section2-1-input-prompt.ism/manifest","type":"application/vnd.ms-sstr+xml"}],[{"src":"http://jupyternootbookwams.streaming.mediaservices.windows.net/c607aa57-b18b-4f29-a317-7b13db66d8e8/Unit1_Section2-1-input-prompt.vtt","srclang":"en","kind":"subtitles","label":"english"}])
#
# #
# <font size="6" color="#00A0B2" face="verdana"> <B>Examples</B></font>
student_name = input("enter the student name: ")
print("Hi " + student_name)
# #
# <font size="6" color="#B24C00" face="verdana"> <B>Task 2</B></font>
# ## prompting the user for input
# - **[ ]** create a variable named **city** to store input, add a prompt for the name of a city
# - **[ ]** print "the city name is " followed by the value stored in **city**
# +
# [ ] get user input for a city name in the variable named city
city = input("Enter city name: ")
# [ ] print the city name
print("the city name is " + city)
# -
#
# <font size="4" color="#B24C00" face="verdana"> <B>Task 2 cont...</B></font>
# ## multiple prompts for user input
# often programs need information on multiple items
# - **[ ]** create variables to store input: **name**, **age**, **get_mail**
# - **[ ]** create prompts for name, age and yes/no to being on an email list
# - **[ ]** print description + input values
#
# >example print output:
# `name = Alton`
# `age = 17`
# `wants email = yes`
#
# **tip**: with multiple input statements, after each prompt, **click 'in' the input box** to continue entering input
# [ ] create variables to store input: name, age, get_mail with prompts
# for name, age and yes/no to being on an email list
name = input("Enter Name: ")
age = input("Enter Age: ")
get_mail = input("Would you like to get emails? (yes/no) ")
# [ ] print a description + variable value for each variable
print("name = " + name)
print("age = " + age)
print("wants email = " + get_mail)
# [Terms of use](http://go.microsoft.com/fwlink/?LinkID=206977) [Privacy & cookies](https://go.microsoft.com/fwlink/?LinkId=521839) © 2017 Microsoft
| Python Absolute Beginner/Module_1_2.1_Absolute_Beginner.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Scrapes Google Play store website for apps using search results
from string import ascii_lowercase
from bs4 import BeautifulSoup as bs
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
import time
search_url = "https://play.google.com/store/search?q="
search_urls = []
for c1 in ascii_lowercase:
for c2 in ascii_lowercase:
for c3 in ascii_lowercase:
search_urls.append(search_url + c1 + c2 + c3)
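The triple nested loop above enumerates every three-letter query, 26³ = 17,576 URLs in total; the same list can be built more compactly with `itertools.product`:

```python
from itertools import product
from string import ascii_lowercase

search_url = "https://play.google.com/store/search?q="
urls = [search_url + "".join(chars)
        for chars in product(ascii_lowercase, repeat=3)]

print(len(urls))  # 17576
print(urls[0])    # ends with q=aaa
```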
chrome_options = Options()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
driver = webdriver.Chrome(options = chrome_options)
search_result_pages = []
for url in search_urls:
print(url)
driver.get(url)
time.sleep(2)
# assert "Google Play" in driver.title
html = driver.page_source
soup = bs(html, 'html.parser')
link_found = False
for a_link in soup.find_all("a"):
if(a_link.get("class") is not None):
a_link_class_list = a_link.get("class")
if(len(a_link_class_list) > 4 and a_link_class_list[4] == "apps"):
link_found = True
search_result_pages.append("https://play.google.com" + a_link.get("href"))
if(link_found is False):
search_result_pages.append(url)
driver.quit()
with open("search_results_urls.txt", "w+") as search_urls_file:
    search_urls_file.write("\n".join(search_result_pages) + "\n")
| play_store_scraper.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: VPython
# language: python
# name: vpython
# ---
# # First Animation
# ---
#
# by <NAME>
#
# We draw a ball, defining the sphere's center, radius and color. The named colors are *red, blue, green, yellow, cyan, magenta, white and black*.
# You can also specify the color using *rgb* with ``color=vector(r,g,b)``.
# Run the following code snippet:
from vpython import *
from math import *
ball = sphere(pos=vector(0,0,0),radius=0.5,color=color.cyan)
wallL = box(pos=vector(-10,0,0),size=vector(0.1,10,5),color=color.blue)
wallR = box(pos=vector(10,0,0),size=vector(0.1,10,5),color=color.blue)
# We also entered two more commands that draw the walls at the left and right sides of the screen. You can now explore the various zoom and perspective functions with this image.
# You can also change the position of the ball this way:
ball.pos.x = ball.pos.x - 5
# Now add the following lines to your code:
v = 1
dt = 0.1
while ball.pos.x < 10:
rate(20)
ball.pos.x = ball.pos.x + v*dt
# And that’s your first 3-D animation!
# The rate(20) line just slows down the computer to make the animation rate about right. You can adjust this parameter.
# If you want to repeat the process, or run into a problem, select ``Kernel -> Restart`` or the *double arrows* button.
# Now try to modify the animation to have the ball bounce between the walls. Try this:
t = 0
while t<1000:
rate(100)
ball.pos.x = ball.pos.x + v*dt
t += dt
if ball.pos.x > 10:
v = -1
# Note that if we run this, the ball just goes off through the wall for a very long time. It knows nothing about walls!
# If you want to end the loop earlier, stop the process by selecting ``Kernel -> Restart & Clear Output``
# ## Assignment
# Get the program working with the ball bouncing off of both walls. For example, you can add another if statement to handle bounces off the second wall. If you have time you can think about improving and generalizing your script by letting v be a vector.
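The bounce logic itself is independent of the VPython rendering, so it can be sketched (and tested) in plain Python. A minimal version of the assignment, with the velocity sign flipped at both walls and the wall positions assumed at ±10 as above:

```python
x, v, dt = -5.0, 1.0, 0.1   # start position, velocity, time step
positions = []
for _ in range(10000):
    x += v * dt
    if x > 10 or x < -10:   # bounce off either wall
        v = -v
    positions.append(x)

# the ball stays (roughly) between the walls the whole time
print(min(positions), max(positions))
```

In the VPython version the same `if` goes inside the `while` loop, updating `ball.pos.x` instead of `x`; letting `v` be a `vector` generalizes this to bounces in all three axes.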
| BouncingBall_FirstAnimation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import tensorflow as tf
tf.__version__
# +
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = tf.keras.utils.normalize(x_train, axis=1)
x_test = tf.keras.utils.normalize(x_test, axis=1)
# -
len(x_train)
# +
import matplotlib.pyplot as plt
plt.imshow(x_train[0], cmap = plt.cm.binary)
# -
x_train[0]
# +
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))
model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy']
)
model.fit(x_train, y_train, epochs=4)
# -
val_loss, val_acc = model.evaluate(x_test, y_test)
print(val_loss, val_acc)
model.save('saved_mnist.model')
model_l = tf.keras.models.load_model('saved_mnist.model')
p = model_l.predict(x_test)
# Based on sentdex's https://www.youtube.com/watch?v=wQ8BIBpya2k&t=850s
import numpy as np
t = 9999
plt.imshow(x_test[t])
np.argmax(p[t])
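`np.argmax` just picks the index of the largest probability in the network's 10-way softmax output; in plain Python (with a hypothetical probability vector for one test image):

```python
def argmax(probs):
    # index of the largest value -- the predicted digit class
    return max(range(len(probs)), key=lambda i: probs[i])

# hypothetical softmax output for one test image
p_example = [0.01, 0.02, 0.05, 0.02, 0.01, 0.03, 0.01, 0.8, 0.03, 0.02]
print(argmax(p_example))  # → 7
```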
| Jupyter/mnist/.ipynb_checkpoints/Untitled-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <div class="alert alert-block alert-info" style="margin-top: 20px">
# <a href="https://cocl.us/corsera_da0101en_notebook_top">
# <img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/Images/TopAd.png" width="750" align="center">
# </a>
# </div>
#
# <a href="https://www.bigdatauniversity.com"><img src = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/Images/CCLog.png" width = 300, align = "center"></a>
#
# <h1 align=center><font size=5>Data Analysis with Python</font></h1>
# <h1>Module 4: Model Development</h1>
# <p>In this section, we will develop several models that will predict the price of the car using the variables or features. This is just an estimate but should give us an objective idea of how much the car should cost.</p>
# Some questions we want to ask in this module:
# <ul>
# <li>How do I know if the dealer is offering fair value for my trade-in?</li>
# <li>How do I know if I put a fair value on my car?</li>
# </ul>
# <p>In Data Analytics, we often use <b>Model Development</b> to help us predict future observations from the data we have.</p>
#
# <p>A Model will help us understand the exact relationship between different variables and how these variables are used to predict the result.</p>
# <h4>Setup</h4>
# Import libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# load data and store in dataframe df:
# This dataset was hosted on IBM Cloud object click <a href="https://cocl.us/DA101EN_object_storage">HERE</a> for free storage.
# path of data
path = 'https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/automobileEDA.csv'
df = pd.read_csv(path)
df.head()
# <h3>1. Linear Regression and Multiple Linear Regression</h3>
# <h4>Linear Regression</h4>
#
# <p>One example of a Data Model that we will be using is</p>
# <b>Simple Linear Regression</b>.
#
# <br>
# <p>Simple Linear Regression is a method to help us understand the relationship between two variables:</p>
# <ul>
# <li>The predictor/independent variable (X)</li>
# <li>The response/dependent variable (that we want to predict)(Y)</li>
# </ul>
#
# <p>The result of Linear Regression is a <b>linear function</b> that predicts the response (dependent) variable as a function of the predictor (independent) variable.</p>
#
#
# $$
# Y: Response \ Variable\\
# X: Predictor \ Variables
# $$
#
# <b>Linear function:</b>
# $$
# Yhat = a + b X
# $$
# <ul>
# <li>a refers to the <b>intercept</b> of the regression line, in other words: the value of Y when X is 0</li>
# <li>b refers to the <b>slope</b> of the regression line, in other words: the value with which Y changes when X increases by 1 unit</li>
# </ul>
# <h4>Let's load the modules for linear regression</h4>
from sklearn.linear_model import LinearRegression
# <h4>Create the linear regression object</h4>
lm = LinearRegression()
lm
# <h4>How could Highway-mpg help us predict car price?</h4>
# For this example, we want to look at how highway-mpg can help us predict car price.
# Using simple linear regression, we will create a linear function with "highway-mpg" as the predictor variable and the "price" as the response variable.
X = df[['highway-mpg']]
Y = df['price']
# Fit the linear model using highway-mpg.
lm.fit(X,Y)
# We can output a prediction
Yhat=lm.predict(X)
Yhat[0:5]
# <h4>What is the value of the intercept (a)?</h4>
lm.intercept_
# <h4>What is the value of the Slope (b)?</h4>
lm.coef_
# <h3>What is the final estimated linear model we get?</h3>
# As we saw above, we should get a final linear model with the structure:
# $$
# Yhat = a + b X
# $$
# Plugging in the actual values we get:
# <b>price</b> = 38423.31 - 821.73 x <b>highway-mpg</b>
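Plugging a value into the fitted equation gives a point prediction. For example, for a hypothetical car doing 30 highway-mpg, using the intercept and slope stated above:

```python
a = 38423.31   # intercept, from lm.intercept_
b = -821.73    # slope, from lm.coef_

def predict_price(highway_mpg):
    # Yhat = a + b * X
    return a + b * highway_mpg

print(round(predict_price(30), 2))  # → 13771.41
```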
# <div class="alert alert-danger alertdanger" style="margin-top: 20px">
# <h1>Question #1 a): </h1>
#
# <b>Create a linear regression object?</b>
# </div>
# +
# Write your code below and press Shift+Enter to execute
lm1 = LinearRegression()
lm1
# -
# Double-click <b>here</b> for the solution.
#
# <!-- The answer is below:
#
# lm1 = LinearRegression()
# lm1
#
# -->
# <div class="alert alert-danger alertdanger" style="margin-top: 20px">
# <h1> Question #1 b): </h1>
#
# <b>Train the model using 'engine-size' as the independent variable and 'price' as the dependent variable?</b>
# </div>
# +
# Write your code below and press Shift+Enter to execute
X = df[['engine-size']]
Y = df['price']
lm1.fit(X,Y)
lm1
# -
# Double-click <b>here</b> for the solution.
#
# <!-- The answer is below:
#
# lm1.fit(df[['engine-size']], df[['price']])
# lm1
#
# -->
#
# <div class="alert alert-danger alertdanger" style="margin-top: 20px">
# <h1>Question #1 c):</h1>
#
# <b>Find the slope and intercept of the model?</b>
# </div>
# <h4>Slope</h4>
# Write your code below and press Shift+Enter to execute
lm1.coef_
# <h4>Intercept</h4>
# Write your code below and press Shift+Enter to execute
lm1.intercept_
# Double-click <b>here</b> for the solution.
#
# <!-- The answer is below:
#
# # Slope
# lm1.coef_
# # Intercept
# lm1.intercept_
#
# -->
# <div class="alert alert-danger alertdanger" style="margin-top: 20px">
# <h1>Question #1 d): </h1>
#
# <b>What is the equation of the predicted line. You can use x and yhat or 'engine-size' or 'price'?</b>
# </div>
# # You can type your answer here
#
# Price(Y) = a + b x Engine size(X), where a = lm1.intercept_ and b = lm1.coef_
# Double-click <b>here</b> for the solution.
#
# <!-- The answer is below:
#
# # using X and Y
# Yhat=38423.31-821.733*X
#
# Price=38423.31-821.733*engine-size
#
# -->
# <h4>Multiple Linear Regression</h4>
# <p>What if we want to predict car price using more than one variable?</p>
#
# <p>If we want to use more variables in our model to predict car price, we can use <b>Multiple Linear Regression</b>.
# Multiple Linear Regression is very similar to Simple Linear Regression, but this method is used to explain the relationship between one continuous response (dependent) variable and <b>two or more</b> predictor (independent) variables.
# Most of the real-world regression models involve multiple predictors. We will illustrate the structure by using four predictor variables, but these results can generalize to any integer:</p>
# $$
# Y: Response \ Variable\\
# X_1 :Predictor\ Variable \ 1\\
# X_2: Predictor\ Variable \ 2\\
# X_3: Predictor\ Variable \ 3\\
# X_4: Predictor\ Variable \ 4\\
# $$
# $$
# a: intercept\\
# b_1 :coefficients \ of\ Variable \ 1\\
# b_2: coefficients \ of\ Variable \ 2\\
# b_3: coefficients \ of\ Variable \ 3\\
# b_4: coefficients \ of\ Variable \ 4\\
# $$
# The equation is given by
# $$
# Yhat = a + b_1 X_1 + b_2 X_2 + b_3 X_3 + b_4 X_4
# $$
# <p>From the previous section we know that other good predictors of price could be:</p>
# <ul>
# <li>Horsepower</li>
# <li>Curb-weight</li>
# <li>Engine-size</li>
# <li>Highway-mpg</li>
# </ul>
# Let's develop a model using these variables as the predictor variables.
Z = df[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']]
# Fit the linear model using the four above-mentioned variables.
lm.fit(Z, df['price'])
# What is the value of the intercept(a)?
lm.intercept_
# What are the values of the coefficients (b1, b2, b3, b4)?
lm.coef_
# What is the final estimated linear model that we get?
# As we saw above, we should get a final linear function with the structure:
#
# $$
# Yhat = a + b_1 X_1 + b_2 X_2 + b_3 X_3 + b_4 X_4
# $$
#
# What is the linear function we get in this example?
# <b>Price</b> = -15678.742628061467 + 52.65851272 x <b>horsepower</b> + 4.69878948 x <b>curb-weight</b> + 81.95906216 x <b>engine-size</b> + 33.58258185 x <b>highway-mpg</b>
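As with the simple model, a prediction is the intercept plus the weighted sum of the predictors. A sketch for one hypothetical car (100 hp, 2500 lb curb weight, engine size 120, 30 highway-mpg), using the coefficients stated above:

```python
a = -15678.742628061467          # intercept, from lm.intercept_
coefs = {'horsepower': 52.65851272, 'curb-weight': 4.69878948,
         'engine-size': 81.95906216, 'highway-mpg': 33.58258185}

# hypothetical car
car = {'horsepower': 100, 'curb-weight': 2500,
       'engine-size': 120, 'highway-mpg': 30}

# Yhat = a + b1*X1 + b2*X2 + b3*X3 + b4*X4
price = a + sum(coefs[k] * car[k] for k in coefs)
print(round(price, 2))
```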
# <div class="alert alert-danger alertdanger" style="margin-top: 20px">
# <h1> Question #2 a): </h1>
# Create and train a Multiple Linear Regression model "lm2" where the response variable is price, and the predictor variables are 'normalized-losses' and 'highway-mpg'.
# </div>
# +
# Write your code below and press Shift+Enter to execute
lm2 = LinearRegression()
lm2
# Assigning Y and X to the dependent and independent variables
Y = df[['price']]
X = df[['normalized-losses','highway-mpg']]
# Fitting the MLR model
lm2.fit(X,Y)
lm2
# -
# Double-click <b>here</b> for the solution.
#
# <!-- The answer is below:
#
# lm2 = LinearRegression()
# lm2.fit(df[['normalized-losses' , 'highway-mpg']],df['price'])
#
# -->
# <div class="alert alert-danger alertdanger" style="margin-top: 20px">
# <h1>Question #2 b): </h1>
# <b>Find the coefficient of the model?</b>
# </div>
# Write your code below and press Shift+Enter to execute
lm2.coef_
# Double-click <b>here</b> for the solution.
#
# <!-- The answer is below:
#
# lm2.coef_
#
# -->
# <h3>2) Model Evaluation using Visualization</h3>
# Now that we've developed some models, how do we evaluate our models and how do we choose the best one? One way to do this is by using visualization.
# import the visualization package: seaborn
import seaborn as sns
# %matplotlib inline
# <h3>Regression Plot</h3>
# <p>When it comes to simple linear regression, an excellent way to visualize the fit of our model is by using <b>regression plots</b>.</p>
#
# <p>This plot will show a combination of a scattered data points (a <b>scatter plot</b>), as well as the fitted <b>linear regression</b> line going through the data. This will give us a reasonable estimate of the relationship between the two variables, the strength of the correlation, as well as the direction (positive or negative correlation).</p>
# Let's visualize highway-mpg as a potential predictor variable of price:
width = 12
height = 10
plt.figure(figsize=(width, height))
sns.regplot(x="highway-mpg", y="price", data=df)
plt.ylim(0,)
# <p>We can see from this plot that price is negatively correlated to highway-mpg, since the regression slope is negative.
# One thing to keep in mind when looking at a regression plot is to pay attention to how scattered the data points are around the regression line. This will give you a good indication of the variance of the data, and whether a linear model would be the best fit or not. If the data is too far off from the line, this linear model might not be the best model for this data. Let's compare this plot to the regression plot of "peak-rpm".</p>
plt.figure(figsize=(width, height))
sns.regplot(x="peak-rpm", y="price", data=df)
plt.ylim(0,)
# <p>Comparing the regression plot of "peak-rpm" and "highway-mpg" we see that the points for "highway-mpg" are much closer to the generated line and on the average decrease. The points for "peak-rpm" have more spread around the predicted line, and it is much harder to determine if the points are decreasing or increasing as the "highway-mpg" increases.</p>
# <div class="alert alert-danger alertdanger" style="margin-top: 20px">
# <h1>Question #3:</h1>
# <b>Given the regression plots above is "peak-rpm" or "highway-mpg" more strongly correlated with "price". Use the method ".corr()" to verify your answer.</b>
# </div>
# Write your code below and press Shift+Enter to execute
df[["peak-rpm","highway-mpg","price"]].corr()
# Double-click <b>here</b> for the solution.
#
# <!-- The answer is below:
#
# The variable "peak-rpm" has a stronger correlation with "price", it is approximate -0.704692 compared to "highway-mpg" which is approximate -0.101616. You can verify it using the following command:
# df[["peak-rpm","highway-mpg","price"]].corr()
#
# -->
# <h3>Residual Plot</h3>
#
# <p>A good way to visualize the variance of the data is to use a residual plot.</p>
#
# <p>What is a <b>residual</b>?</p>
#
# <p>The difference between the observed value (y) and the predicted value (Yhat) is called the residual (e). When we look at a regression plot, the residual is the distance from the data point to the fitted regression line.</p>
#
# <p>So what is a <b>residual plot</b>?</p>
#
# <p>A residual plot is a graph that shows the residuals on the vertical y-axis and the independent variable on the horizontal x-axis.</p>
#
# <p>What do we pay attention to when looking at a residual plot?</p>
#
# <p>We look at the spread of the residuals:</p>
#
# <p>- If the points in a residual plot are <b>randomly spread out around the x-axis</b>, then a <b>linear model is appropriate</b> for the data. Why is that? Randomly spread out residuals means that the variance is constant, and thus the linear model is a good fit for this data.</p>
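The residual for each observation is simply `e = y - yhat`; a tiny sketch with made-up prices and predictions:

```python
# observed prices and model predictions (hypothetical values)
y    = [13000, 16500, 9800]
yhat = [12500, 17000, 10100]

residuals = [yi - yh for yi, yh in zip(y, yhat)]
print(residuals)  # → [500, -500, -300]
```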
width = 12
height = 10
plt.figure(figsize=(width, height))
sns.residplot(x='highway-mpg', y='price', data=df)
plt.show()
# <i>What is this plot telling us?</i>
#
# <p>We can see from this residual plot that the residuals are not randomly spread around the x-axis, which leads us to believe that maybe a non-linear model is more appropriate for this data.</p>
# <h3>Multiple Linear Regression</h3>
# <p>How do we visualize a model for Multiple Linear Regression? This gets a bit more complicated because you can't visualize it with regression or residual plot.</p>
#
# <p>One way to look at the fit of the model is by looking at the <b>distribution plot</b>: We can look at the distribution of the fitted values that result from the model and compare it to the distribution of the actual values.</p>
# First lets make a prediction
Y_hat = lm.predict(Z)
# +
plt.figure(figsize=(width, height))
ax1 = sns.distplot(df['price'], hist=False, color="r", label="Actual Value")
sns.distplot(Y_hat, hist=False, color="b", label="Fitted Values", ax=ax1)
plt.title('Actual vs Fitted Values for Price')
plt.xlabel('Price (in dollars)')
plt.ylabel('Proportion of Cars')
plt.show()
plt.close()
# -
# <p>We can see that the fitted values are reasonably close to the actual values, since the two distributions overlap a bit. However, there is definitely some room for improvement.</p>
# <h2>Part 3: Polynomial Regression and Pipelines</h2>
# <p><b>Polynomial regression</b> is a particular case of the general linear regression model or multiple linear regression models.</p>
# <p>We get non-linear relationships by squaring or setting higher-order terms of the predictor variables.</p>
#
# <p>There are different orders of polynomial regression:</p>
# <center><b>Quadratic - 2nd order</b></center>
# $$
# Yhat = a + b_1 X + b_2 X^2
# $$
#
#
# <center><b>Cubic - 3rd order</b></center>
# $$
# Yhat = a + b_1 X + b_2 X^2 + b_3 X^3
# $$
#
#
# <center><b>Higher order</b>:</center>
# $$
# Yhat = a + b_1 X + b_2 X^2 + b_3 X^3 + ...
# $$
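# The equations above are still *linear in the coefficients*: fitting a polynomial is equivalent to an ordinary least-squares fit on the powers of X. A small self-contained sketch with synthetic data (values illustrative only):

```python
import numpy as np

# Polynomial regression is linear regression on the powers of X:
# np.polyfit and a least-squares solve on [X^2, X, 1] give the same coefficients.
rng = np.random.default_rng(0)
X = rng.uniform(15, 55, size=50)
y = 2.0 - 0.5 * X + 0.03 * X**2 + rng.normal(0, 0.1, size=50)

coeffs = np.polyfit(X, y, 2)                        # highest power first
design = np.column_stack([X**2, X, np.ones_like(X)])
lstsq_coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
print(np.allclose(coeffs, lstsq_coeffs))            # True
```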
# <p>We saw earlier that a linear model did not provide the best fit while using highway-mpg as the predictor variable. Let's see if we can try fitting a polynomial model to the data instead.</p>
# <p>We will use the following function to plot the data:</p>
def PlotPolly(model, independent_variable, dependent_variable, Name):
    x_new = np.linspace(15, 55, 100)
    y_new = model(x_new)
    plt.plot(independent_variable, dependent_variable, '.', x_new, y_new, '-')
    plt.title('Polynomial Fit with Matplotlib for Price ~ ' + Name)
ax = plt.gca()
ax.set_facecolor((0.898, 0.898, 0.898))
fig = plt.gcf()
plt.xlabel(Name)
plt.ylabel('Price of Cars')
plt.show()
plt.close()
# let's get the variables
x = df['highway-mpg']
y = df['price']
# Let's fit the polynomial using the function <b>polyfit</b>, then use the function <b>poly1d</b> to display the polynomial function.
# Here we use a polynomial of the 3rd order (cubic)
f = np.polyfit(x, y, 3)
p = np.poly1d(f)
print(p)
# Let's plot the function
PlotPolly(p, x, y, 'highway-mpg')
np.polyfit(x, y, 3)
# <p>We can already see from plotting that this polynomial model performs better than the linear model. This is because the generated polynomial function "hits" more of the data points.</p>
# <div class="alert alert-danger alertdanger" style="margin-top: 20px">
# <h1>Question #4:</h1>
# <b>Create an 11th-order polynomial model with the variables x and y from above.</b>
# </div>
# +
# Write your code below and press Shift+Enter to execute
f1 = np.polyfit(x, y, 11)
p1 = np.poly1d(f1)
print(p1)
PlotPolly(p1, x, y, 'highway-mpg')
# -
# Double-click <b>here</b> for the solution.
#
# <!-- The answer is below:
#
# # calculate polynomial
# # Here we use a polynomial of the 11th order
# f1 = np.polyfit(x, y, 11)
# p1 = np.poly1d(f1)
# print(p1)
# PlotPolly(p1, x, y, 'highway-mpg')
#
# -->
# <p>The analytical expression for a multivariate polynomial function gets complicated. For example, the expression for a second-order (degree=2) polynomial with two variables is given by:</p>
# $$
# Yhat = a + b_1 X_1 +b_2 X_2 +b_3 X_1 X_2+b_4 X_1^2+b_5 X_2^2
# $$
# We can perform a polynomial transform on multiple features. First, we import the module:
from sklearn.preprocessing import PolynomialFeatures
# We create a <b>PolynomialFeatures</b> object of degree 2:
pr=PolynomialFeatures(degree=2)
pr
Z_pr=pr.fit_transform(Z)
# The original data has 201 samples and 4 features
Z.shape
# After the transformation, there are 201 samples and 15 features
Z_pr.shape
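# The count of 15 follows from the degree-2 combinations of 4 features: one bias term, 4 linear terms, 4 squares, and C(4,2) = 6 cross products. A self-contained check on a tiny array:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# 1 bias + 4 linear + 4 squared + 6 cross terms = 15 output features
demo = np.arange(8.0).reshape(2, 4)      # 2 samples, 4 features
demo_pr = PolynomialFeatures(degree=2)
demo_tr = demo_pr.fit_transform(demo)
print(demo_tr.shape)                     # (2, 15)
```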
# <h2>Pipeline</h2>
# <p>Data Pipelines simplify the steps of processing the data. We use the module <b>Pipeline</b> to create a pipeline. We also use <b>StandardScaler</b> as a step in our pipeline.</p>
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
# We create the pipeline by creating a list of tuples, each containing the name of the step and its corresponding estimator.
Input=[('scale',StandardScaler()), ('polynomial', PolynomialFeatures(include_bias=False)), ('model',LinearRegression())]
# we input the list as an argument to the pipeline constructor
pipe=Pipeline(Input)
pipe
# We can normalize the data, perform a transform and fit the model simultaneously.
pipe.fit(Z,y)
# Similarly, we can normalize the data, perform a transform and produce a prediction simultaneously
ypipe=pipe.predict(Z)
ypipe[0:4]
# <div class="alert alert-danger alertdanger" style="margin-top: 20px">
# <h1>Question #5:</h1>
# <b>Create a pipeline that standardizes the data, then performs a prediction using a linear regression model with the features Z and target y.</b>
# </div>
# +
# Write your code below and press Shift+Enter to execute
Input=[('scale',StandardScaler()),('model',LinearRegression())]
pipe=Pipeline(Input)
pipe.fit(Z,y)
ypipe=pipe.predict(Z)
ypipe[0:10]
# -
# Double-click <b>here</b> for the solution.
#
# <!-- The answer is below:
#
# Input=[('scale',StandardScaler()),('model',LinearRegression())]
#
# pipe=Pipeline(Input)
#
# pipe.fit(Z,y)
#
# ypipe=pipe.predict(Z)
# ypipe[0:10]
#
# -->
# <h2>Part 4: Measures for In-Sample Evaluation</h2>
# <p>When evaluating our models, not only do we want to visualize the results, but we also want a quantitative measure to determine how accurate the model is.</p>
#
# <p>Two very important measures that are often used in Statistics to determine the accuracy of a model are:</p>
# <ul>
# <li><b>R^2 / R-squared</b></li>
# <li><b>Mean Squared Error (MSE)</b></li>
# </ul>
#
# <b>R-squared</b>
#
# <p>R squared, also known as the coefficient of determination, is a measure to indicate how close the data is to the fitted regression line.</p>
#
# <p>The value of the R-squared is the percentage of variation of the response variable (y) that is explained by a linear model.</p>
#
#
#
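# R-squared can be computed by hand as 1 - SS_res/SS_tot and cross-checked against scikit-learn; the values below are tiny synthetic ones, for illustration only:

```python
import numpy as np
from sklearn.metrics import r2_score

# R^2 = 1 - SS_res / SS_tot, cross-checked against sklearn's r2_score
y_true = np.array([2.0, 4.0, 6.0, 8.0])
y_pred = np.array([2.5, 3.5, 6.5, 7.5])
ss_res = np.sum((y_true - y_pred) ** 2)            # 1.0
ss_tot = np.sum((y_true - y_true.mean()) ** 2)     # 20.0
manual_r2 = 1 - ss_res / ss_tot
print(manual_r2)                                   # 0.95
```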
# <b>Mean Squared Error (MSE)</b>
#
# <p>The Mean Squared Error measures the average of the squares of errors, that is, the difference between actual value (y) and the estimated value (ŷ).</p>
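# The MSE can likewise be computed by hand as the mean of the squared residuals and cross-checked against scikit-learn (synthetic values, for illustration only):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# MSE by hand: mean of squared residuals = (0.5^2 + 0^2 + 1^2) / 3
y_true = np.array([3.0, 5.0, 7.0])
y_pred = np.array([2.5, 5.0, 8.0])
manual_mse = np.mean((y_true - y_pred) ** 2)
print(manual_mse)
```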
# <h3>Model 1: Simple Linear Regression</h3>
# Let's calculate the R^2
#highway_mpg_fit
lm.fit(X, Y)
# Find the R^2
print('The R-square is: ', lm.score(X, Y))
# We can say that ~49.659% of the variation of the price is explained by this simple linear model "highway_mpg_fit".
# Let's calculate the MSE
# We can predict the output i.e., "yhat" using the predict method, where X is the input variable:
Yhat=lm.predict(X)
print('The output of the first four predicted value is: ', Yhat[0:4])
# let's import the function <b>mean_squared_error</b> from the module <b>metrics</b>
from sklearn.metrics import mean_squared_error
# we compare the predicted results with the actual results
mse = mean_squared_error(df['price'], Yhat)
print('The mean square error of price and predicted value is: ', mse)
# <h3>Model 2: Multiple Linear Regression</h3>
# Let's calculate the R^2
# fit the model
lm.fit(Z, df['price'])
# Find the R^2
print('The R-square is: ', lm.score(Z, df['price']))
# We can say that ~ 80.896 % of the variation of price is explained by this multiple linear regression "multi_fit".
# Let's calculate the MSE
# we produce a prediction
Y_predict_multifit = lm.predict(Z)
# we compare the predicted results with the actual results
print('The mean square error of price and predicted value using multifit is: ', \
mean_squared_error(df['price'], Y_predict_multifit))
# <h3>Model 3: Polynomial Fit</h3>
# Let's calculate the R^2
# let’s import the function <b>r2_score</b> from the module <b>metrics</b> as we are using a different function
from sklearn.metrics import r2_score
# We apply the function to get the value of r^2
r_squared = r2_score(y, p(x))
print('The R-square value is: ', r_squared)
# We can say that ~ 67.419 % of the variation of price is explained by this polynomial fit
# <h3>MSE</h3>
# We can also calculate the MSE:
mean_squared_error(df['price'], p(x))
# <h2>Part 5: Prediction and Decision Making</h2>
# <h3>Prediction</h3>
#
# <p>In the previous section, we trained the model using the method <b>fit</b>. Now we will use the method <b>predict</b> to produce a prediction. Let's import <b>pyplot</b> for plotting; we will also be using some functions from NumPy.</p>
# +
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib inline
# -
# Create a new input
new_input=np.arange(1, 100, 1).reshape(-1, 1)
# Fit the model
lm.fit(X, Y)
lm
# Produce a prediction
yhat=lm.predict(new_input)
yhat[0:5]
# we can plot the data
plt.plot(new_input, yhat)
plt.show()
# <h3>Decision Making: Determining a Good Model Fit</h3>
# <p>Now that we have visualized the different models, and generated the R-squared and MSE values for the fits, how do we determine a good model fit?
# <ul>
# <li><i>What is a good R-squared value?</i></li>
# </ul>
# </p>
#
# <p>When comparing models, <b>the model with the higher R-squared value is a better fit</b> for the data.
# <ul>
# <li><i>What is a good MSE?</i></li>
# </ul>
# </p>
#
# <p>When comparing models, <b>the model with the smallest MSE value is a better fit</b> for the data.</p>
#
#
# <h4>Let's take a look at the values for the different models.</h4>
# <p>Simple Linear Regression: Using Highway-mpg as a Predictor Variable of Price.
# <ul>
# <li>R-squared: 0.49659118843391759</li>
# <li>MSE: 3.16 x10^7</li>
# </ul>
# </p>
#
# <p>Multiple Linear Regression: Using Horsepower, Curb-weight, Engine-size, and Highway-mpg as Predictor Variables of Price.
# <ul>
# <li>R-squared: 0.80896354913783497</li>
# <li>MSE: 1.2 x10^7</li>
# </ul>
# </p>
#
# <p>Polynomial Fit: Using Highway-mpg as a Predictor Variable of Price.
# <ul>
# <li>R-squared: 0.6741946663906514</li>
# <li>MSE: 2.05 x 10^7</li>
# </ul>
# </p>
# <h3>Simple Linear Regression model (SLR) vs Multiple Linear Regression model (MLR)</h3>
# <p>Usually, the more variables you have, the better your model is at predicting, but this is not always true. Sometimes you may not have enough data, you may run into numerical problems, or many of the variables may not be useful and or even act as noise. As a result, you should always check the MSE and R^2.</p>
#
# <p>So to be able to compare the results of the MLR vs SLR models, we look at a combination of both the R-squared and MSE to make the best conclusion about the fit of the model.
# <ul>
# <li><b>MSE</b>: The MSE of the SLR is 3.16x10^7, while the MLR has an MSE of 1.2x10^7. The MSE of the MLR is much smaller.</li>
# <li><b>R-squared</b>: In this case, we can also see that there is a big difference between the R-squared of the SLR and the R-squared of the MLR. The R-squared for the SLR (~0.497) is very small compared to the R-squared for the MLR (~0.809).</li>
# </ul>
# </p>
#
# This R-squared in combination with the MSE show that MLR seems like the better model fit in this case, compared to SLR.
# <h3>Simple Linear Model (SLR) vs Polynomial Fit</h3>
# <ul>
# <li><b>MSE</b>: We can see that Polynomial Fit brought down the MSE, since this MSE is smaller than the one from the SLR.</li>
# <li><b>R-squared</b>: The R-squared for the Polyfit is larger than the R-squared for the SLR, so the Polynomial Fit also brought up the R-squared quite a bit.</li>
# </ul>
# <p>Since the Polynomial Fit resulted in a lower MSE and a higher R-squared, we can conclude that this was a better fit model than the simple linear regression for predicting Price with Highway-mpg as a predictor variable.</p>
# <h3>Multiple Linear Regression (MLR) vs Polynomial Fit</h3>
# <ul>
# <li><b>MSE</b>: The MSE for the MLR is smaller than the MSE for the Polynomial Fit.</li>
# <li><b>R-squared</b>: The R-squared for the MLR is also much larger than for the Polynomial Fit.</li>
# </ul>
# <h2>Conclusion:</h2>
# <p>Comparing these three models, we conclude that <b>the MLR model is the best model</b> to be able to predict price from our dataset. This result makes sense, since we have 27 variables in total, and we know that more than one of those variables are potential predictors of the final car price.</p>
# <h1>Thank you for completing this notebook</h1>
# <div class="alert alert-block alert-info" style="margin-top: 20px">
#
# <p><a href="https://cocl.us/corsera_da0101en_notebook_bottom"><img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/Images/BottomAd.png" width="750" align="center"></a></p>
# </div>
#
# <h3>About the Authors:</h3>
#
# This notebook was written by <a href="https://www.linkedin.com/in/mahdi-noorian-58219234/" target="_blank"><NAME> PhD</a>, <a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank"><NAME></a>, <NAME>, <NAME>, <NAME>, Parizad, <NAME> and <a href="https://www.linkedin.com/in/fiorellawever/" target="_blank"><NAME></a> and <a href=" https://www.linkedin.com/in/yi-leng-yao-84451275/ " target="_blank" >Yi Yao</a>.
#
# <p><a href="https://www.linkedin.com/in/joseph-s-50398b136/" target="_blank"><NAME></a> is a Data Scientist at IBM, and holds a PhD in Electrical Engineering. His research focused on using Machine Learning, Signal Processing, and Computer Vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.</p>
# <hr>
# <p>Copyright © 2018 IBM Developer Skills Network. This notebook and its source code are released under the terms of the <a href="https://cognitiveclass.ai/mit-license/">MIT License</a>.</p>
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#<NAME> - Programming with Data Project
#Prediction using KNN and SVM model
# -
#Importing Libraries
import sys
import scipy
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import cross_validate
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn import model_selection
from sklearn.metrics import classification_report, accuracy_score
from pandas.plotting import scatter_matrix
#Exploring Datasets
df_labels = pd.read_csv("label_data.csv")
df_labels
df_features = pd.read_csv("grabfeatures1.csv")
df_features
#Merging Dataset
df = pd.merge(df_features,df_labels,on="bookingID")
df
#Preprocessing the data
#Exploring data to pick which algo to use
print(df.axes)
#dropping bookingID as it may affect the machine learning algo
df.drop(['bookingID'], axis=1, inplace=True)
#print the shape of the dataset
print(df.shape)
# Dataset visualization
print(df.loc[88])
print(df.describe())
#Label 1 = Dangerous Driving
#Label 0 = Safe Driving
#Plot histograms for each variable to better understand the data
df.hist(figsize = (10,10))
plt.show()
# +
#Create X and y datasets for training and validation
X = np.array(df.drop(['label'], axis=1))
y = np.array(df['label'])
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size = 0.001)
# -
# Specify testing options
seed = 8
scoring = 'accuracy'
# +
#Define the models to train
models = []
models.append(('KNN', KNeighborsClassifier(n_neighbors = 5)))
models.append(('SVM', SVC()))
#Evaluate each model in turn
results = []
names = []
for name, model in models:
    kfold = model_selection.KFold(n_splits=2, shuffle=True, random_state=seed)
cv_results = model_selection.cross_val_score(model, X_train, y_train, cv=kfold, scoring=scoring)
results.append(cv_results)
names.append(name)
msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
print(msg)
# -
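# The imported classification_report and accuracy_score are not used above; a self-contained sketch (with synthetic data, names illustrative only) of the kind of hold-out evaluation they support:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, classification_report

# Synthetic stand-in for the safe/dangerous driving labels (0/1)
rng = np.random.default_rng(8)
X_demo = rng.normal(size=(200, 5))
y_demo = (X_demo[:, 0] + X_demo[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X_demo, y_demo, test_size=0.25, random_state=8)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
preds = knn.predict(X_te)
print(classification_report(y_te, preds))
print("accuracy:", accuracy_score(y_te, preds))
```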
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import sys
sys.path.append("..")
import numpy as np
import torch
from matplotlib import pyplot as plt
from inflation import BBI
import time
import json
from numpy import arange
from utils_synthetic import *
from mpl_toolkits.mplot3d import Axes3D
# #!{sys.executable} -m pip install hyperopt
# !mkdir -p results
from hyperopt import hp, tpe, Trials, fmin
from hyperopt.pyll.base import scope
# +
n=10
i_vec = torch.arange(1,n+1)
print(i_vec)
device = 'cpu'
def zakharov(xs):
return torch.sum(xs**2)+ (0.5*torch.sum(i_vec*xs))**2+(0.5*torch.sum(i_vec*xs))**4
global_min = torch.zeros(n)
print(zakharov(global_min))
xs = torch.tensor([1., 1., 1., 1., 1.,1.,1.,1.,1.,1.])
print("Initial value: ", zakharov(xs).item())
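# As a cross-check of the torch implementation above, the same Zakharov function in plain NumPy: its global minimum is 0 at the origin, and the all-ones start point evaluates to 572680.3125.

```python
import numpy as np

# Zakharov function for n = 10, matching the torch version above
n = 10
i_vec_np = np.arange(1, n + 1)

def zakharov_np(x):
    s = 0.5 * np.sum(i_vec_np * x)
    return np.sum(x ** 2) + s ** 2 + s ** 4

print(zakharov_np(np.zeros(n)))   # 0.0
print(zakharov_np(np.ones(n)))    # 572680.3125
```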
# +
#The hyperopt tuning function
potential = zakharov
def hyperopt_tuning(ranges, ranges_integer, optimizer, tune_iterations=1000, n_trials=100, **fixed_pars):
def optimizer_func(pars):
xs = optimizer(x0, potential, iterations=tune_iterations, **pars, **fixed_pars)
return potential(xs).item()
fspace = {}
for par, range in ranges.items(): fspace[par] = hp.uniform(par, *range)
for par, range in ranges_integer.items(): fspace[par] = scope.int(hp.uniform(par, *range))
trials = Trials()
best = fmin(fn=optimizer_func, space=fspace, algo=tpe.suggest, trials=trials, max_evals=n_trials)
return best
# +
#common parameters
tune_iterations = 2500
n_trials = 500
test_iterations = 4*tune_iterations
x0 =xs.tolist()
# +
#sgd
best_par_sgd = hyperopt_tuning({'lr': [1e-10,.5], 'momentum': [0,1.0]},{}, sgd_optimizer, tune_iterations=tune_iterations, n_trials=n_trials)
print("sgd:", best_par_sgd)
with open('sgd-param-zakharov.txt', 'w') as filehandle:
json.dump( best_par_sgd, filehandle)
# +
#Since it was not converging, we restrict the lr range. The result is acceptable because the tuned lr is not at the edge of the interval.
best_par_sgd = hyperopt_tuning({'lr': [1e-10,1e-5], 'momentum': [0,1.0]},{}, sgd_optimizer, tune_iterations=tune_iterations, n_trials=n_trials)
print("sgd:", best_par_sgd)
with open('sgd-param-zakharov.txt', 'w') as filehandle:
json.dump( best_par_sgd, filehandle)
# +
#sgd - gamma (tuning on -log of momentum) - it's not better
best_par_sgd_gamma = hyperopt_tuning({'lr': [1e-10,1e-5], 'gamma': [0,10000000000]},{}, sgd_optimizer_gamma, tune_iterations=tune_iterations, n_trials=n_trials)
print("sgd gamma:", best_par_sgd_gamma)
with open('sgd-gamma-param-zakharov.txt', 'w') as filehandle:
json.dump( best_par_sgd, filehandle)
# +
##BBI
#turning off bounces
threshold0 = 1e20
n_fixed_bounces = 0
threshold = 1e25
v0 = 1e-22
consEn = True
deltaEn = 0.0
best_par_BBI = hyperopt_tuning({'lr': [1e-6,1e-2]},{}, BBI_optimizer, tune_iterations=tune_iterations, n_trials=n_trials,
threshold0 = threshold0,threshold = threshold, deltaEn = deltaEn, v0 = v0, n_fixed_bounces = n_fixed_bounces , consEn = consEn)
best_par_BBI['deltaEn'] = deltaEn
best_par_BBI['v0'] = v0
best_par_BBI['threshold0'] = threshold0
best_par_BBI['threshold'] = threshold
best_par_BBI['n_fixed_bounces'] = n_fixed_bounces
best_par_BBI['consEn'] = True
print("BBI:", best_par_BBI)
with open('bbi-param-zakharov.txt', 'w') as filehandle:
json.dump( best_par_BBI, filehandle)
# +
x0 =xs.tolist()
xs_list_BBI = BBI_optimizer_fullhistory(x0, potential, iterations=test_iterations, **best_par_BBI)
min_temp = 10e20
for elem in xs_list_BBI:
elem_tens_val = zakharov(torch.tensor(elem))
if elem_tens_val < min_temp:
min_temp = elem_tens_val.item()
min_BBI = min_temp
print("Final loss: ", min_BBI )
plotting(zakharov, xs_list_BBI, "loss-bbi")
plotting(distance, xs_list_BBI, "distance from the origin")
# +
#sgd
x0 =xs.tolist()
xslist_sgd = sgd_optimizer_fullhistory(x0, potential, iterations=test_iterations, **best_par_sgd )
min_temp = 10e20
for elem in xslist_sgd:
elem_tens_val = zakharov(torch.tensor(elem))
if elem_tens_val < min_temp:
min_temp = elem_tens_val.item()
min_sgd = min_temp
print("Final loss: ", min_sgd )
plotting(zakharov, xslist_sgd, "loss-sgd")
plotting(distance, xslist_sgd, "distance from the origin")
# +
#sgd-gamma
#nchecks > 1 is useful for non-deterministic algorithms
x0 =xs.tolist()
xslist_sgd_gamma = sgd_optimizer_gamma_fullhistory(x0, potential, iterations=test_iterations, **best_par_sgd_gamma )
min_temp = 10e20
for elem in xslist_sgd_gamma:
elem_tens_val = zakharov(torch.tensor(elem))
if elem_tens_val < min_temp:
min_temp = elem_tens_val.item()
min_sgd_gamma = min_temp
print("Final loss: ", min_sgd_gamma )
plotting(zakharov, xslist_sgd_gamma, "loss-sgd-gamma")
plotting(distance, xslist_sgd_gamma, "distance from the origin")
# +
losses_sgd = []
for elem in xslist_sgd: losses_sgd.append(zakharov(torch.tensor(elem)))
losses_sgd_gamma = []
for elem in xslist_sgd_gamma: losses_sgd_gamma.append(zakharov(torch.tensor(elem)))
losses_bbi = []
for elem in xs_list_BBI: losses_bbi.append(zakharov(torch.tensor(elem)))
plt.plot(losses_sgd, label="sgd")
plt.plot(losses_sgd_gamma, label="sgd - gamma")
plt.plot(losses_bbi, label="bbi")
plt.yscale('log')
plt.legend(loc='upper center', shadow=False, fontsize='x-large')
# +
plt.figure(figsize=(12, 5))
plt.plot(losses_sgd, label="GDM", alpha=.8, linewidth=2, color='turquoise')
plt.plot(losses_bbi, label="BBI", alpha = .8, linewidth=2, color='orchid')
plt.xlabel('Iteration', fontsize = 20)
plt.ylabel('V', fontsize = 20)
plt.yscale('log')
#plt.legend(loc='best', shadow=False, fontsize='x-large')
plt.xticks(fontsize = 20)
plt.yticks(fontsize = 20)
plt.savefig('hyperopt-zakharov.pdf',bbox_inches='tight')
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import cv2 as cv
import numpy as np
img = cv.imread('filter_blur.jpg')
rx, ry = 12, 6
sx, sy = 6, 3
kernel = np.zeros((ry*2+1, rx*2+1), np.float32)
for i in range(kernel.shape[0]):
for j in range(kernel.shape[1]):
x = j - rx
y = i - ry
kernel[i,j] = np.exp(-(x*x)/(2*sx*sx)-(y*y)/(2*sy*sy))
cv.imshow('kernel', cv.resize(kernel, (400,200)))
kernel /= kernel.sum()
img_smoothed = cv.filter2D(img, -1, kernel)
img_blurred = cv.GaussianBlur(img, (rx*2+1, ry*2+1), sigmaX=sx, sigmaY=sy)
img_blurred_autosigma = cv.GaussianBlur(img, (rx*2+1, ry*2+1), 0)
img_blurred_autokernel = cv.GaussianBlur(img, (0,0), sigmaX=sx, sigmaY=sy)
cv.imshow('original', img)
cv.imshow('smoothed', img_smoothed)
cv.imshow('GaussianBlurred', img_blurred)
cv.imshow('GaussianBlurredAutoSigma', img_blurred_autosigma)
cv.imshow('GaussianBlurredAutoKernel', img_blurred_autokernel)
cv.waitKey()
cv.destroyAllWindows()
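# The double loop above can be vectorized; a NumPy-only sketch using the same rx, ry, sx, sy values, verifying that the normalized kernel sums to 1 (so overall image brightness is preserved):

```python
import numpy as np

# Vectorized construction of the same anisotropic Gaussian kernel
# built by the double loop above (same rx, ry, sx, sy values)
rx, ry = 12, 6
sx, sy = 6, 3
x = np.arange(-rx, rx + 1)
y = np.arange(-ry, ry + 1)
X, Y = np.meshgrid(x, y)                    # shape (2*ry+1, 2*rx+1)
kernel = np.exp(-(X ** 2) / (2 * sx * sx) - (Y ** 2) / (2 * sy * sy))
kernel /= kernel.sum()                      # normalize to unit sum
print(kernel.shape)                         # (13, 25)
```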
# +
import cv2 as cv
img = cv.imread('filter_blur.jpg')
img_blurred = cv.GaussianBlur(img, (5,5), 0)
img_sharpened = cv.addWeighted(img, 3.5, img_blurred, -2.5, 0)
cv.imshow('img', img)
cv.imshow('img_blurred', img_blurred)
cv.imshow('img_sharpened', img_sharpened)
cv.waitKey()
cv.destroyAllWindows()
# -
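# The addWeighted call implements unsharp masking: 3.5·img − 2.5·blurred equals img + 2.5·(img − blurred), i.e. the original plus an amplified copy of the detail the blur removed. A NumPy check of the identity (toy pixel values, illustrative only):

```python
import numpy as np

# addWeighted(img, 3.5, blurred, -2.5, 0) == img + amount*(img - blurred)
# with amount = 2.5: the original plus amplified high-frequency detail.
img = np.array([10.0, 50.0, 200.0])
blurred = np.array([20.0, 60.0, 150.0])
amount = 2.5
weighted = 3.5 * img - 2.5 * blurred
unsharp = img + amount * (img - blurred)
print(np.allclose(weighted, unsharp))  # True
```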
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
df = pd.read_csv('train.csv')
df_test = pd.read_csv('test.csv')
df.head()
# Columns have white spaces and we need to remove them if we want to use the columns' labels
df.columns = df.columns.str.strip()
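# A tiny illustration of the cleanup above (hypothetical column names): str.strip removes the stray spaces so labels like ' Age ' become 'Age' and can be referenced directly.

```python
import pandas as pd

# Column labels with stray whitespace, as in the raw CSV
demo = pd.DataFrame({' Age ': [1], ' Fare': [2]})
demo.columns = demo.columns.str.strip()
print(list(demo.columns))  # ['Age', 'Fare']
```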
X = np.asarray(df)[:,:-2]
y = np.asarray(df)[:,-1]
X_test = np.asarray(df_test)[:,:-1]
from sklearn.model_selection import train_test_split, KFold, cross_val_score
X_train, X_val, y_train, y_val = train_test_split(X, y, train_size=0.7, random_state=42)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler = scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_val_scaled = scaler.transform(X_val)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(X_train_scaled,y_train)
y_hat = lr.predict(X_val_scaled)
lr.score(X_val_scaled, y_val)
from sklearn.ensemble import RandomForestClassifier
rand_forest = RandomForestClassifier(class_weight='balanced')
rand_forest.fit(X_train_scaled, y_train)
rand_forest.score(X_val_scaled, y_val)
rand_forest.score(X_train_scaled, y_train)
from sklearn.svm import SVC
svm = SVC()
svm.fit(X_train, y_train)
svm.score(X_val, y_val)
df.corr()
# # Model selection
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size = 0.8, random_state=42)
# +
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, accuracy_score
lr = LogisticRegression()
sgd = SGDClassifier()
svc = SVC()
knn = KNeighborsClassifier()
rfc = RandomForestClassifier()
models = {lr: [], sgd: [], svc: [], knn: [], rfc: []}
KF = KFold(n_splits=5)
for train_index, val_index in KF.split(X):
X_train, X_val = X[train_index], X[val_index]
y_train, y_val = y[train_index], y[val_index]
for model in models.keys():
model.fit(X_train, y_train)
y_hat = model.predict(X_val)
roc = roc_auc_score(y_val, y_hat)
models[model] = np.append(models[model], roc)
# -
models
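# A hedged sketch of summarizing the per-fold ROC-AUC arrays collected in `models` above — mean ± standard deviation per model, then picking the best by mean (the scores below are synthetic, for illustration only):

```python
import numpy as np

# Synthetic per-fold ROC-AUC scores standing in for the real `models` dict
demo_scores = {"lr": np.array([0.71, 0.74, 0.69]),
               "rfc": np.array([0.80, 0.78, 0.82])}
for name, scores in demo_scores.items():
    print(f"{name}: mean ROC-AUC {scores.mean():.3f} (+/- {scores.std():.3f})")
best_model = max(demo_scores, key=lambda k: demo_scores[k].mean())
print("best:", best_model)  # rfc
```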
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler = scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
import lightgbm as lgb
import xgboost as xgb
from xgboost import XGBClassifier
# +
params_lgb = {'num_leaves': 127,
'min_data_in_leaf': 10,
'objective': 'binary',
'max_depth': -1,
'learning_rate': 0.01,
"boosting_type": "gbdt",
"bagging_seed": 11,
"metric": 'logloss',
"verbosity": 0
}
params_xgb = {'colsample_bytree': 0.8,
'learning_rate': 0.0003,
'max_depth': 31,
'subsample': 1,
'objective':'binary:logistic',
'eval_metric':'logloss',
'min_child_weight':3,
'gamma':0.25,
'n_estimators':5000,
'verbosity':0
}
# -
xmodel = xgb.XGBClassifier(objective='binary:logistic',learning_rate=0.1, max_depth=5, eval_metric='logloss')
xmodel.fit(X_train_scaled, y_train)
from sklearn.metrics import roc_auc_score, accuracy_score
y_hat = xmodel.predict(X_test_scaled)
roc_auc_score(y_test, y_hat)
X_train, X_valid, y_train, y_valid = train_test_split(X_train, y_train, train_size=0.7, random_state=42)
# +
train_set = xgb.DMatrix(X_train, y_train)
val_set = xgb.DMatrix(X_valid, y_valid)
test_set = xgb.DMatrix(X_test)
clf = xgb.train(params_xgb, train_set,num_boost_round=5000, evals=[(train_set, 'train'), (val_set, 'val')], early_stopping_rounds=100, verbose_eval=100)
# -
y_hat_x = clf.predict(test_set)
roc_auc_score(y_test, y_hat_x)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## `arcgis.mapping` module
# The `arcgis.mapping` module contains classes and functions to represent and interact with web maps, scenes, and certain layer types such as map image and vector tiles. On this page we will see how to visualize maps, scenes, and layers using the map widget in the Jupyter notebook environment.
#
# Contents of this page:
# - [Using the map widget](#Using-the-map-widget)
# - [Setting the map properties](#Setting-the-map-properties)
# - [Zoom level](#Zoom-level)
# - [Map center](#Map-center)
# - [Basemaps](#Basemaps)
# - [Using custom basemaps](#Using-custom-basemaps)
# - [Adding layers to the map](#Adding-layers-to-the-map)
# - [Adding Item objects to the map](#Adding-Item-objects-to-the-map)
# - [Adding layer objects to the map](#Adding-layer-objects-to-the-map)
# - [Adding layers with custom symbology](#Adding-layers-with-custom-symbology)
# - [Adding Imagery layers](#Adding-imagery-layers)
# - [Listing the layers added to the map](#Listing-the-layers-added-to-the-map)
# - [Removing layers from the map](#Removing-layers-from-the-map)
# - [Drawing graphics on the map](#Drawing-graphics-on-the-map)
# - [Drawing with custom symbols](#Drawing-with-custom-symbols)
# - [Clearing the drawn graphics](#Clearing-the-drawn-graphics)
# - [Saving the map as a web map](#Saving-the-map-as-a-web-map)
#
# ## Using the map widget
# The `GIS` object includes a map widget for displaying geographic locations, visualizing GIS content, as well as the results of your analysis. To use the map widget, call `gis.map()` and assign it to a variable, that you can then query to bring up the widget in the notebook:
import arcgis
from arcgis.gis import GIS
# Create a GIS object, as an anonymous user for this example
gis = GIS()
# Create a map widget
map1 = gis.map('Paris') # Passing a place name to the constructor
# will initialize the extent of the map.
map1
# 
# ## Setting the map properties
# ### Zoom level
# The map widget has several properties that you can query and set, such as its zoom level, basemap, height, etc:
map1.zoom
# Assigning a value to the `zoom` property will update the widget.
map1.zoom = 10
# Your notebook can have as many of these widgets as you wish. Let us create another map widget and modify some of its properties.
# ### Map center
# The center property reveals the coordinates of the center of the map.
map2 = gis.map() # creating a map object with default parameters
map2
map2.center
# If you know the latitude and longitude of your place of interest, you can assign it to the center property.
map2.center = [34,-118] # here we are setting the map's center to Los Angeles
# You can use geocoding to get the coordinates of place names and drive the widget. Geocoding converts place names to coordinates and can be used using `arcgis.geocoding.geocode()` function.
# Let us geocode `Times Square, NY` and set the map's extent to the geocoded location's extent.
location = arcgis.geocoding.geocode('Times Square, NY', max_locations=1)[0]
map2.extent = location['extent']
# ## Basemaps
# Basemaps are layers on your map over which all other operational layers that you add are displayed. Basemaps typically span the full extent of the world and provide context to your GIS layers. They help viewers understand where each feature is located as they pan and zoom to various extents.
#
# Your map can have a number of different basemaps. To see what basemaps are included with the widget, query the `basemaps` property
map3 = gis.map()
map3.basemaps
# You can assign any one of the supported basemaps to the `basemap` property to change the basemap. For instance, you can change the basemap to the dark gray vector basemap as below:
map3.basemap = 'dark-gray-vector'
map3
# Query the `basemap` property to find what the current basemap is
map3.basemap
# Let us animate a new map widget by cycling through basemaps and assigning it to the basemap property of the map widget.
map4 = gis.map('New York City, NY')
map4
# 
# +
import time
for basemap in map4.basemaps:
map4.basemap = basemap
time.sleep(3)
# -
# ### Using custom basemaps
# Basemaps are essentially web map items and the help [here](https://doc.arcgis.com/en/arcgis-online/create-maps/choose-basemap.htm) can walk you through the steps involved in creating your own basemaps. The administrator of a GIS has the ability to designate a particular group in the GIS as the [basemap gallery group](https://doc.arcgis.com/en/arcgis-online/administer/configure-map.htm). Web maps from this group are treated as custom basemaps for that GIS.
#
# To find the list of custom basemaps available from this group, use the `gallery_basemaps` property.
# Log in to a GIS that has the basemap gallery option enabled.
gis = GIS("https://www.arcgis.com", "arcgis_python", "<PASSWORD>")
map5 = gis.map('London, UK', zoomlevel=10)
map5.gallery_basemaps
# To create a map using the custom basemap, simply assign that to the `basemap` property of your map.
map5.basemap = 'os_open_carto'
map5
# 
# <blockquote><b>Note:</b> Gallery basemaps serve another important purpose. They allow you to publish basemaps with layers that can be used in disconnected environments where the notebooks cannot connect to ArcGIS Online to display the default basemap layers.
#
# If you are using the Python API and Jupyter notebooks in such environments, you can publish a few basemaps to the gallery basemaps of your GIS and make use of them in your map widgets.</blockquote>
# ## Adding layers to the map
# An important functionality of the map widget is its ability to add and render GIS layers. To add a layer, call the `add_layer()` method and pass the layer object as an argument.
# Log in to the GIS, as we will save the widget as a web map later
gis = GIS("https://www.arcgis.com", "arcgis_python", "<PASSWORD>")
usa_map = gis.map('USA', zoomlevel=4) # you can specify the zoom level when creating a map
usa_map
# 
# Next, search for some layers to add to the map
flayer_search_result = gis.content.search("owner:esri","Feature Layer", outside_org=True)
flayer_search_result
# ### Adding `Item` objects to the map
# You can add `Item` objects to a map by passing them to the `add_layer()` method.
world_timezones_item = flayer_search_result[5]
usa_map.add_layer(world_timezones_item)
# ### Adding layer objects to the map
# You can add a number of different layer objects such as `FeatureLayer`, `FeatureCollection`, `ImageryLayer`, `MapImageLayer` to the map. You can add a `FeatureLayer` as shown below:
world_countries_item = flayer_search_result[-2]
world_countries_layer = world_countries_item.layers[0]
world_countries_layer
usa_map.add_layer(world_countries_layer, options={'opacity':0.4})
# ### Adding layers with custom symbology
# While calling the `add_layer()` method, you can specify a set of renderer instructions as a dictionary to the `options` parameter. The previous cell shows how you can set the transparency for a layer. The `opacity` value ranges from `0 - 1`, with `0` being fully transparent and `1` being fully opaque.
#
# You can make use of the **"smart mapping"** capability to render feature layers with symbology that varies based on an attribute field of that layer. The cell below adds the 'USA Freeway System' layer to the map and changes the width of the line segments based on the length of the freeway.
usa_freeways = flayer_search_result[-3].layers[0]
usa_map.add_layer(usa_freeways, {'renderer':'ClassedSizeRenderer',
'field_name':'DIST_MILES'})
# Refer to the guide on [smart mapping](../smart-mapping/) to learn more about this capability.
# ### Adding imagery layers
# Similar to `FeatureLayer`s, you can also add `ImageryLayer`s and imagery layer items. You can also specify either a built-in raster function or a custom one for rendering.
world_terrain_item = gis.content.get('58a541efc59545e6b7137f961d7de883')
terrain_imagery_layer = world_terrain_item.layers[0]
type(terrain_imagery_layer)
usa_map.add_layer(terrain_imagery_layer)
# ## Listing the layers added to the map
# You can list the layers added to the map using the `layers` property.
usa_map.layers
# ## Removing layers from the map
# To remove one or more layers, call the `remove_layers()` method and pass a list of layers that you want removed. To get a list of valid layers that can be removed, call the `layers` property as shown in the previous cell.
#
# The code below shows how to remove the USA freeways layer
usa_map.remove_layers(layers=[usa_freeways])
# To remove all layers, call the `remove_layers()` method without any parameters.
# ## Drawing graphics on the map
# You can draw or sketch graphics on the map using the `draw()` method. For instance, you can draw and annotate rectangles, ellipses, arrow marks etc. as shown below:
usa_map.draw('rectangle')
# Now scroll to the map and draw a rectangle.
usa_map.draw('uparrow')
# Scroll back to the map again and place an 'up arrow' below Los Angeles. Refer to the API reference documentation for [draw](http://esri.github.io/arcgis-python-api/apidoc/html/arcgis.widgets.html#arcgis.widgets.MapView.draw) to get the list of supported shapes that you can sketch on the map.
# ### Drawing `FeatureSet` objects on the map
# In addition to sketches, you can send `FeatureSet` objects to the `draw()` method. This capability comes in handy because many operations in the Python API produce a `FeatureSet`. For instance, you can get the results of a geocoding operation, or of a `query()` operation on a `FeatureLayer`, as a `FeatureSet` that you can then visualize on the map using the `draw()` method.
#
# The snippet below geocodes the locations of a few capitol buildings in the USA.
from arcgis.geocoding import geocode
usa_extent = geocode('USA')[0]['extent']
usa_extent
usa_capitols_fset = geocode('Capitol', search_extent=usa_extent, max_locations=10, as_featureset=True)
usa_capitols_fset
# ### Drawing with custom symbols
# While drawing a graphic, you can specify a custom symbol. Users of the Python API can make use of a custom [symbol selector web app](https://esri.github.io/arcgis-python-api/tools/symbol.html) and pick a symbol for point layers. For instance, you can pick a business marker symbol for the capitol buildings as shown below:
# +
capitol_symbol = {"angle":0,"xoffset":0,"yoffset":0,"type":"esriPMS",
"url":"http://static.arcgis.com/images/Symbols/PeoplePlaces/esriBusinessMarker_57.png",
"contentType":"image/png","width":24,"height":24}
usa_map.draw(usa_capitols_fset, symbol=capitol_symbol)
# -
# ## Clearing the drawn graphics
# You can clear all drawn graphics from the map by calling the `clear_graphics()` method.
usa_map.clear_graphics()
# ## Saving the map as a web map
# Starting with the Python API version `1.3`, you can save the map widget as a web map in your GIS. This process persists the basemap, the layers you added (with or without custom symbology, including smart mapping), pop-ups, the extent, and any graphics drawn (with or without custom symbols) as layers in your web map.
#
# To save the map, call the `save()` method. This method creates and returns a new Web Map `Item` object. As parameters, you can specify all valid Item properties as shown below:
# +
webmap_properties = {'title':'USA time zones and capitols',
'snippet': 'Jupyter notebook widget saved as a web map',
'tags':['automation', 'python']}
webmap_item = usa_map.save(webmap_properties, thumbnail='./webmap_thumbnail.png', folder='webmaps')
webmap_item
# -
# You can use this web map back in the notebook, or in any ArcGIS app capable of rendering web maps. To learn how to read this web map using the Python API, refer to the guide titled [working with web maps and scenes](../working-with-web-maps-and-web-scenes/)
| guide/09-mapping-and-visualization/using-the-map-widget.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: TensorFlow-1.13.1
# language: python
# name: tensorflow-1.13.1
# ---
# # Text Sentiment Analysis
#
# Text sentiment analysis is an important research area within NLP (Natural Language Processing). It refers to identifying the emotional attitude a speaker expresses in a piece of text, usually labeled as "positive" or "negative". Text sentiment analysis is widely applied to social media mining, order-review mining on e-commerce platforms, movie review analysis, and similar fields.
#
# To quantify sentiment polarity, text is typically labeled with a floating-point number in [0, 1]: the closer to 1, the more positive the sentiment; the closer to 0, the more negative.
#
# This exercise performs BERT-based sentiment analysis on short Chinese texts.
#
#
# ## Dataset
#
# The dataset consists of hotel review data compiled by Prof. Tan Songbo from a hotel booking website: more than 7,000 reviews in total, over 5,000 of them positive and over 2,000 negative.
#
# Data format:
#
# | Field | label | review |
# | ---- | ------- | ---------- |
# | Meaning | sentiment label | review text |
#
#
# ## Pretrained model
#
# This exercise uses the **BERT-Base, Chinese** pretrained model, which can be downloaded and extracted from [BERT-Base, Chinese](https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip).
# ## A brief introduction to BERT
#
# BERT (Bidirectional Encoder Representations from Transformers) is a pretrained NLP model proposed by Google in the October 2018 paper ["BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"](https://arxiv.org/abs/1810.04805).
#
# BERT pretrains deep bidirectional representations by jointly conditioning on both directions with Transformers in every layer; adding a single output layer and fine-tuning is enough for a wide range of tasks, with no task-specific model changes needed. Its strength rests on two points. First, the novel pretraining tasks Masked Language Model (MLM) and Next Sentence Prediction (NSP), which capture word-level and sentence-level representations respectively. Second, the large amounts of data and compute its training demands: BERT was trained on the open English BooksCorpus and English Wikipedia, 3.3 billion words in total; the standard model has about 110M parameters and the large version over 300M; the team trained a pretrained model in 4 days on 64 TPU chips, and one TPU is roughly 7-8x faster than a mainstream GPU. Google open-sourced several pretrained models for various downstream tasks:
#
# - [BERT-Base, Uncased](https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip): 12-layer, 768-hidden, 12-heads, 110M parameters
# - [BERT-Large, Uncased](https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-24_H-1024_A-16.zip): 24-layer, 1024-hidden, 16-heads, 340M parameters
# - [BERT-Base, Cased](https://storage.googleapis.com/bert_models/2018_10_18/cased_L-12_H-768_A-12.zip): 12-layer, 768-hidden, 12-heads , 110M parameters
# - [BERT-Large, Cased](https://storage.googleapis.com/bert_models/2018_10_18/cased_L-24_H-1024_A-16.zip): 24-layer, 1024-hidden, 16-heads, 340M parameters
# - [BERT-Base, Multilingual Cased (New, recommended)](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip): 104 languages, 12-layer, 768-hidden, 12-heads, 110M parameters
# - [BERT-Base, Multilingual Uncased (Orig, not recommended)](https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip)(Not recommended, use Multilingual Cased instead): 102 languages, 12-layer, 768-hidden, 12-heads, 110M parameters
# - [BERT-Base, Chinese](https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip): Chinese Simplified and Traditional, 12-layer, 768-hidden, 12-heads, 110M parameters
#
# The first four are English models, the Multilingual ones are multilingual, and the last is the Chinese model (character-level only). "Uncased" means all letters were lowercased, while "Cased" preserves case. Here "layer" is the number of layers (i.e. Transformer blocks), "hidden" is the hidden vector size, and "heads" is the number of self-attention heads.
#
# ### Feature extractor
#
# As BERT's full name (Bidirectional Encoder Representations from Transformers) suggests, BERT uses the Transformer as its feature extractor, currently the strongest feature extractor in NLP.
#
#
# ### Two key pretraining tasks in BERT
#
# #### Masked language model
#
# To train deep bidirectional language representations, the authors take a very direct approach: mask some of the words in a sentence and have the encoder predict what they are.
#
# The training procedure is:
#
# 1) 80% of the selected words are replaced with the ***[MASK]*** token
#
# my dog is ***hairy*** → my dog is ***[MASK]***
#
# 2) 10% are replaced with an arbitrary word
#
# my dog is ***hairy*** → my dog is ***apple***
#
# 3) 10% are left unchanged
#
# my dog is ***hairy*** → my dog is ***hairy***
#
# The authors note that, as a result, the encoder does not know which words it will be asked to predict or which ones have been corrupted, so it is forced to learn a representation for every token. They also explain that only 15% of the words in each batch are masked because of the computational cost: a bidirectional encoder trains more slowly than a unidirectional one.
#
# #### Next Sentence Prediction (NSP)
#
# A binary classification model is pretrained to learn relationships between sentences; predicting the next sentence is very helpful for learning such relationships.
#
# Training setup: positive and negative samples are balanced 1:1, with 50% of the sentence pairs being true next sentences and the other 50% randomly selected negatives.
#
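The 80/10/10 rule above can be sketched in a few lines of plain Python. This is an illustrative re-implementation, not code from the BERT repository; `mask_tokens` and its arguments are made-up names.

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15, seed=None):
    """BERT-style masking: of the positions selected (mask_prob of them),
    80% become [MASK], 10% become a random word, 10% stay unchanged."""
    rng = random.Random(seed)
    output = list(tokens)
    targets = {}  # position -> original token the model must predict
    for i, token in enumerate(tokens):
        if rng.random() >= mask_prob:
            continue  # this position is not selected for prediction
        targets[i] = token
        roll = rng.random()
        if roll < 0.8:
            output[i] = "[MASK]"
        elif roll < 0.9:
            output[i] = rng.choice(vocab)
        # else: keep the original token
    return output, targets

# mask_prob=1.0 selects every position, making the 80/10/10 split easy to see
masked, targets = mask_tokens(["my", "dog", "is", "hairy"], vocab=["apple"], mask_prob=1.0, seed=0)
print(masked, targets)
```

With the default `mask_prob=0.15`, only about 15% of positions end up in `targets`, matching the paper's setup.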
# ## Preparing a development environment on Huawei Cloud ModelArts
#
# ### Open ModelArts
#
# Open the link https://www.huaweicloud.com/product/modelarts.html to reach the ModelArts home page. Click "Use Now", enter your username and password to log in, and enter the ModelArts console.
#
# ### Create a ModelArts notebook
#
# Next, we create a notebook development environment in ModelArts. ModelArts notebooks provide a web-based Python development environment where you can conveniently write and run code and view the results.
#
# Step 1: In the ModelArts console, click "DevEnviron" and then "Create"
#
# 
#
# Step 2: Fill in the parameters required for the notebook:
#
# | Parameter | Description |
# | - - - - - | - - - - - |
# | Billing mode | Pay-per-use |
# | Name | Notebook instance name, e.g. text_sentiment_analysis |
# | AI engine | This case uses tensorflow 1.13.1 from Multi-Engine, with Python 3.6 or later |
# | Resource pool | Select "Public resource pools" |
# | Type | This case trains a fairly complex deep neural network that needs substantial compute, so select "GPU" |
# | Specification | Select "8 vCPUs | 64 GiB | 1*p100" |
# | Storage | Select EVS with a 5 GB disk |
#
# Step 3: After configuring the notebook parameters, click Next to preview the notebook settings. Once confirmed, click "Create Now"
#
# 
#
# Step 4: After creation, return to the DevEnviron main page, wait for the notebook to finish provisioning, then open it to proceed to the next step.
# 
#
# ### Create a development environment in ModelArts
#
# Next, we create the actual development environment used for the following experiment steps.
#
# Step 1: Click the "Open" button shown below to enter the notebook you just created
# 
#
# Step 2: Create a notebook with a Python 3 environment. Click "New" in the upper right, then select the environment corresponding to the AI engine used in this case (i.e. TensorFlow 1.13.1).
#
# Step 3: Click the file name "Untitled" at the top left and enter a name related to this experiment, such as "text_sentiment_analysis"
# 
# 
#
#
# ### Write and run code in the notebook
#
# In the notebook, enter a simple print statement and click the Run button above to view the result of the statement:
# 
#
# ### Download the experiment dataset
# +
from modelarts.session import Session
import os
session = Session()
if not os.path.exists('./text_sentiment_analysis/data'):
print("start download data.")
session.download_data(bucket_path="ai-course-common-26/text_sentiment_analysis/text_sentiment_analysis.tar.gz"
, path="./text_sentiment_analysis.tar.gz")
# Extract the resource archive with the tar command
# !tar xf ./text_sentiment_analysis.tar.gz
# Delete the archive with the rm command
# !rm ./text_sentiment_analysis.tar.gz
# -
# ### Import dependencies
# +
import tensorflow as tf
from tensorflow import keras
import os
import re
from sklearn.model_selection import train_test_split
import pandas as pd
# Set the TensorFlow logging level to INFO
tf.logging.set_verbosity(tf.logging.INFO)
print('System dependencies imported successfully!')
# -
# Append the BERT source directory to the system path so the BERT source code can be imported.
os.sys.path.append('text_sentiment_analysis/bert')
# Import the BERT utility libraries
# +
import tokenization
import modeling
import optimization
print('BERT utility libraries imported successfully!')
# -
# ### Set model and data parameters
#
# Set the paths for the BERT model files, the training data, and the model output
# +
# BERT model configuration files
vocab_file = 'text_sentiment_analysis/model/chinese_L-12_H-768_A-12/vocab.txt'
bert_config_file = 'text_sentiment_analysis/model/chinese_L-12_H-768_A-12/bert_config.json'
init_checkpoint = 'text_sentiment_analysis/model/chinese_L-12_H-768_A-12/bert_model.ckpt'
# Dataset path
data_dir = 'text_sentiment_analysis/data/'
# Model training output location
output_dir = 'text_sentiment_analysis/output/'
print("Dataset path:", data_dir)
print("Output path:", output_dir)
print("Chinese vocabulary path:", vocab_file)
print("Pretrained model config path:", bert_config_file)
print("Pretrained model checkpoint path:", init_checkpoint)
# -
# ### Set model hyperparameters
# +
batch_size = 32  # batch size
learning_rate = 2e-5  # learning rate
num_train_epochs = 3  # number of training epochs
# Warmup proportion. At the start of training the learning rate is small and gradually increases, which helps training.
warmup_proportion = 0.1
# How often to save checkpoints / summaries during training
save_checkpoints_steps = 500
save_summary_steps = 100
print("Batch size:", batch_size)
print("Training epochs:", num_train_epochs)
print("Warmup proportion:", warmup_proportion)
print("Learning rate:", learning_rate)
print("Checkpoint save frequency (steps):", save_checkpoints_steps)
print("Summary save frequency (steps):", save_summary_steps)
# -
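As a concrete check of how `warmup_proportion` is used: later in the notebook the warmup step count is computed as that fraction of the total number of training steps. The numbers below assume the 3,200-example training set produced by the 80/20 split of the 4,000-review balanced corpus.

```python
num_examples = 3200        # assumed training-set size (80% of 4,000 reviews)
batch_size = 32
num_train_epochs = 3
warmup_proportion = 0.1

# Total optimizer steps, then the warmup fraction of them
num_train_steps = int(num_examples / batch_size * num_train_epochs)
num_warmup_steps = int(num_train_steps * warmup_proportion)
print(num_train_steps, num_warmup_steps)
```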
# ### Load the dataset
# +
# Build a balanced (non-skewed) dataset with roughly equal label proportions
def get_balance_corpus(corpus_size, corpus_pos, corpus_neg):
sample_size = corpus_size // 2
pd_corpus_balance = pd.concat([corpus_pos.sample(sample_size, replace=corpus_pos.shape[0]<sample_size), \
corpus_neg.sample(sample_size, replace=corpus_neg.shape[0]<sample_size)])
    print('Number of reviews (total): %d' % pd_corpus_balance.shape[0])
    print('Number of reviews (positive): %d' % pd_corpus_balance[pd_corpus_balance.label==1].shape[0])
    print('Number of reviews (negative): %d' % pd_corpus_balance[pd_corpus_balance.label==0].shape[0])
return pd_corpus_balance
# Read the dataset file
reviews_all = pd.read_csv(data_dir + 'ChnSentiCorp_htl_all.csv')
pd_positive = reviews_all[reviews_all.label==1]
pd_negative = reviews_all[reviews_all.label==0]
# Build a balanced dataset to keep the model from becoming biased
reviews_4000 = get_balance_corpus(4000, pd_positive, pd_negative)
# Split into training and test sets
train, test = train_test_split(reviews_4000, test_size=0.2)
print('Data loading and splitting complete!')
# -
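The balancing logic of `get_balance_corpus` can be illustrated without pandas: draw `corpus_size // 2` items per class, sampling with replacement only when a class has fewer items than the quota. This is a pure-Python sketch of the idea, not the notebook's code.

```python
import random

def balance_classes(pos, neg, corpus_size, seed=42):
    rng = random.Random(seed)
    quota = corpus_size // 2
    def sample(items):
        if len(items) < quota:               # too few items: sample with replacement
            return [rng.choice(items) for _ in range(quota)]
        return rng.sample(items, quota)      # enough items: sample without replacement
    return sample(pos) + sample(neg)

# 5,000 "positive" and 2,000 "negative" dummy ids, balanced down to 4,000 total
balanced = balance_classes(pos=list(range(5000)), neg=list(range(2000)), corpus_size=4000)
print(len(balanced))
```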
# Total dataset size
len(reviews_all)
# Show some training-set samples
train.sample(10)
# ### Load the Chinese vocabulary of the pretrained BERT model
tokenizer = tokenization.FullTokenizer(vocab_file=vocab_file, do_lower_case=False)
# The tokenizer splits a sentence into individual characters.
#
# Here is an example:
tokenizer.tokenize("今天的天气真好!")
# ### Define the helper classes
# +
# Input example class
class InputExample(object):
def __init__(self, guid, text_a, text_b=None, label=None):
self.guid = guid
self.text_a = text_a
self.text_b = text_b
self.label = label
# Input feature class, in the format BERT can consume
class InputFeatures(object):
def __init__(self,
input_ids,
input_mask,
segment_ids,
label_id,
is_real_example=True):
self.input_ids = input_ids
self.input_mask = input_mask
self.segment_ids = segment_ids
self.label_id = label_id
self.is_real_example = is_real_example
# Padding placeholder class
class PaddingInputExample(object):
pass
# -
# Extract the data and convert it into InputExample objects
# +
# Set the data column and the label column
DATA_COLUMN = 'review'
LABEL_COLUMN = 'label'
train_InputExamples = train.apply(lambda x: InputExample(guid=None, # globally unique ID
text_a = x[DATA_COLUMN],
text_b = None,
label = x[LABEL_COLUMN]), axis = 1)
test_InputExamples = test.apply(lambda x: InputExample(guid=None,
text_a = x[DATA_COLUMN],
text_b = None,
label = x[LABEL_COLUMN]), axis = 1)
# -
# Convert the InputExample objects into InputFeature format.
#
# The log prints five converted InputFeature samples for inspection. Each number in input_ids is the row index of the token in the vocabulary.
# +
# Sequence truncation
def truncate_seq_pair(tokens_a, tokens_b, max_length):
    """Truncate a sequence pair in place so that the total length does not exceed max_length."""
while True:
total_length = len(tokens_a) + len(tokens_b)
if total_length <= max_length:
break
if len(tokens_a) > len(tokens_b):
tokens_a.pop()
else:
tokens_b.pop()
# convert_single_example converts each character of a sentence into the BERT inputs (token, segment,
# and position representations) together with the label id, adding [CLS] and [SEP] at the start and end of the sentence.
def convert_single_example(ex_index, example, label_list, max_seq_length,
tokenizer):
    """Convert a single InputExample into a single InputFeatures."""
if isinstance(example, PaddingInputExample):
return InputFeatures(
input_ids=[0] * max_seq_length,
input_mask=[0] * max_seq_length,
segment_ids=[0] * max_seq_length,
label_id=0,
is_real_example=False)
    # Parse one example: convert characters to ids and the label to its id, then pack the result into an InputFeatures object
label_map = {}
    # Map each label to an integer index
for (i, label) in enumerate(label_list):
label_map[label] = i
    # Chinese text is split per character; characters missing from BERT's vocab.txt are handled by WordPiece
tokens_a = tokenizer.tokenize(example.text_a)
tokens_b = None
if example.text_b:
tokens_b = tokenizer.tokenize(example.text_b)
if tokens_b:
        # Truncate the tokens_a + tokens_b sequence so its length stays within the limit
        # "- 3" accounts for the [CLS], [SEP], [SEP] tokens
truncate_seq_pair(tokens_a, tokens_b, max_seq_length - 3)
else:
        # "- 2" accounts for the [CLS] and [SEP] tokens
if len(tokens_a) > max_seq_length - 2:
tokens_a = tokens_a[0:(max_seq_length - 2)]
tokens = []
segment_ids = []
    tokens.append("[CLS]")  # add the [CLS] token at the start of the sentence
segment_ids.append(0)
for token in tokens_a:
tokens.append(token)
segment_ids.append(0)
    tokens.append("[SEP]")  # add the [SEP] token at the end of the sentence
segment_ids.append(0)
if tokens_b:
for token in tokens_b:
tokens.append(token)
segment_ids.append(1)
tokens.append("[SEP]")
segment_ids.append(1)
    input_ids = tokenizer.convert_tokens_to_ids(tokens)  # convert the tokens in the sequence to ids
input_mask = [1] * len(input_ids)
    # Zero-pad the remaining positions of the sequence
while len(input_ids) < max_seq_length:
input_ids.append(0)
input_mask.append(0)
segment_ids.append(0)
assert len(input_ids) == max_seq_length
assert len(input_mask) == max_seq_length
assert len(segment_ids) == max_seq_length
label_id = label_map[example.label]
    # Print the first 5 examples
if ex_index < 5:
tf.logging.info("*** Example ***")
        tf.logging.info("guid: %s" % (example.guid))  # unique id of the sentence
        tf.logging.info("tokens: %s" % " ".join([tokenization.printable_text(x) for x in tokens]))  # one token per character
        tf.logging.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))  # token ids
        tf.logging.info("input_mask: %s" % " ".join([str(x) for x in input_mask]))  # mask marking real tokens vs padding
        tf.logging.info("segment_ids: %s" % " ".join([str(x) for x in segment_ids]))  # segment (sentence A/B) ids
        tf.logging.info("label: %s (id = %d)" % (example.label, label_id))  # label
    # Pack everything into an InputFeatures object
feature = InputFeatures(
input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids,
label_id=label_id,
is_real_example=True)
return feature
# Convert InputExamples into InputFeatures
def convert_examples_to_features(examples, label_list, max_seq_length, tokenizer):
features = []
for (ex_index, example) in enumerate(examples):
if ex_index % 10000 == 0:
tf.logging.info("Writing example %d of %d" % (ex_index, len(examples)))
feature = convert_single_example(ex_index, example, label_list,
max_seq_length, tokenizer)
features.append(feature)
return features
# The list of labels
label_list = [0, 1]
# Maximum token sequence length
max_seq_length = 128
# Convert the InputExample data into the InputFeature format that BERT can understand
train_features = convert_examples_to_features(train_InputExamples, label_list, max_seq_length, tokenizer)
test_features = convert_examples_to_features(test_InputExamples, label_list, max_seq_length, tokenizer)
# -
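To make the length bookkeeping above concrete, here is a standalone toy run of the same truncation and zero-padding steps (it re-implements the logic on dummy tokens rather than calling the notebook's functions):

```python
def truncate_pair(tokens_a, tokens_b, max_length):
    # Pop from the longer list until the combined length fits (mirrors truncate_seq_pair)
    while len(tokens_a) + len(tokens_b) > max_length:
        (tokens_a if len(tokens_a) > len(tokens_b) else tokens_b).pop()

max_seq_length = 10
a, b = list("abcdefgh"), list("xyz")
truncate_pair(a, b, max_seq_length - 3)      # "- 3" reserves room for [CLS], [SEP], [SEP]
tokens = ["[CLS]"] + a + ["[SEP]"] + b + ["[SEP]"]

input_ids = list(range(1, len(tokens) + 1))  # toy ids instead of a real vocabulary lookup
input_mask = [1] * len(input_ids)
while len(input_ids) < max_seq_length:       # zero-pad up to the fixed length
    input_ids.append(0)
    input_mask.append(0)
print(a, b, len(tokens))
```

Only the longer sequence is shortened, so the shorter sentence `b` survives intact.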
# ### Load the BERT model configuration
# +
bert_config = modeling.BertConfig.from_json_file(bert_config_file)
# !cat $bert_config_file # print the BERT network configuration
# -
# ### Build the model
# +
# Create a classification model
def create_model(bert_config, is_training, input_ids, input_mask, segment_ids,
labels, num_labels, use_one_hot_embeddings):
    # Load the pretrained BERT model to obtain the character embeddings
model = modeling.BertModel(
config=bert_config,
is_training=is_training,
input_ids=input_ids,
input_mask=input_mask,
token_type_ids=segment_ids,
use_one_hot_embeddings=use_one_hot_embeddings)
output_layer = model.get_pooled_output()
hidden_size = output_layer.shape[-1].value
output_weights = tf.get_variable(
"output_weights", [num_labels, hidden_size],
initializer=tf.truncated_normal_initializer(stddev=0.02))
output_bias = tf.get_variable(
"output_bias", [num_labels], initializer=tf.zeros_initializer())
with tf.variable_scope("loss"):
if is_training:
            # dropout with a rate of 0.1
output_layer = tf.nn.dropout(output_layer, keep_prob=0.9)
logits = tf.matmul(output_layer, output_weights, transpose_b=True)
logits = tf.nn.bias_add(logits, output_bias)
        # Compute softmax probabilities
probabilities = tf.nn.softmax(logits, axis=-1)
log_probs = tf.nn.log_softmax(logits, axis=-1)
one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32)
per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)
        # Compute the loss
loss = tf.reduce_mean(per_example_loss)
return (loss, per_example_loss, logits, probabilities)
# Return a `model_fn` closure for the TPUEstimator
def model_fn_builder(bert_config, num_labels, init_checkpoint, learning_rate,
num_train_steps, num_warmup_steps):
    # Build the model function
def model_fn(features, labels, mode, params):
tf.logging.info("*** Features ***")
for name in sorted(features.keys()):
tf.logging.info(" name = %s, shape = %s" % (name, features[name].shape))
input_ids = features["input_ids"]
input_mask = features["input_mask"]
segment_ids = features["segment_ids"]
label_ids = features["label_ids"]
is_real_example = None
if "is_real_example" in features:
is_real_example = tf.cast(features["is_real_example"], dtype=tf.float32)
else:
is_real_example = tf.ones(tf.shape(label_ids), dtype=tf.float32)
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
use_one_hot_embeddings = False
        # Build the model from the parameters; input_ids are the example ids and label_ids the label ids
(total_loss, per_example_loss, logits, probabilities) = create_model(
bert_config, is_training, input_ids, input_mask, segment_ids, label_ids,
num_labels, use_one_hot_embeddings)
tvars = tf.trainable_variables()
initialized_variable_names = {}
scaffold_fn = None
        # Load the pretrained BERT model
if init_checkpoint:
(assignment_map, initialized_variable_names
) = modeling.get_assignment_map_from_checkpoint(tvars, init_checkpoint)
tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
tf.logging.info("**** Trainable Variables ****")
for var in tvars:
init_string = ""
if var.name in initialized_variable_names:
init_string = ", *INIT_FROM_CKPT*"
tf.logging.info(" name = %s, shape = %s%s", var.name, var.shape, init_string)
output_spec = None
        # Training mode
if mode == tf.estimator.ModeKeys.TRAIN:
train_op = optimization.create_optimizer(
total_loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu=False)
output_spec = tf.estimator.EstimatorSpec(
mode=mode,
loss=total_loss,
train_op=train_op)
        # Evaluation mode
elif mode == tf.estimator.ModeKeys.EVAL:
def metric_fn(per_example_loss, label_ids, logits, is_real_example):
predictions = tf.argmax(logits, axis=-1, output_type=tf.int32)
accuracy = tf.metrics.accuracy(
labels=label_ids, predictions=predictions, weights=is_real_example)
loss = tf.metrics.mean(values=per_example_loss, weights=is_real_example)
return {
"eval_accuracy": accuracy,
"eval_loss": loss,
}
eval_metrics = metric_fn(per_example_loss, label_ids, logits, is_real_example)
output_spec = tf.estimator.EstimatorSpec(
mode=mode,
loss=total_loss,
eval_metric_ops=eval_metrics)
        # Prediction mode
else:
output_spec = tf.estimator.EstimatorSpec(
mode=mode,
predictions={"probabilities": probabilities})
return output_spec
return model_fn
# Compute the number of training steps and warmup steps
num_train_steps = int(len(train_features) / batch_size * num_train_epochs)
num_warmup_steps = int(num_train_steps * warmup_proportion)
model_fn = model_fn_builder(
bert_config=bert_config,
num_labels=len(label_list),
learning_rate=learning_rate,
init_checkpoint=init_checkpoint,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps)
print('Model built successfully.')
# -
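The loss inside `create_model` is a standard softmax cross-entropy over the two labels. Here is a dependency-free numeric sketch of the same computation, using toy logits rather than values from the real model:

```python
import math

def softmax(logits):
    m = max(logits)                          # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, label):
    # Equivalent to -sum(one_hot * log_softmax), as computed in create_model
    return -math.log(softmax(logits)[label])

logits = [2.0, 0.5]                          # toy scores for [negative, positive]
probs = softmax(logits)
loss = cross_entropy(logits, label=0)
print(probs, loss)
```

A confident, correct prediction yields a small loss; flipping the label to 1 here would make the loss much larger.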
# ### Train the model
#
# Training the model takes about 10 minutes
# +
# Build an input_fn for the TPUEstimator
def input_fn_builder(features, seq_length, is_training, drop_remainder):
all_input_ids = []
all_input_mask = []
all_segment_ids = []
all_label_ids = []
for feature in features:
all_input_ids.append(feature.input_ids)
all_input_mask.append(feature.input_mask)
all_segment_ids.append(feature.segment_ids)
all_label_ids.append(feature.label_id)
    # The actual input function
def input_fn(params):
batch_size = params["batch_size"]
num_examples = len(features)
d = tf.data.Dataset.from_tensor_slices({
"input_ids":
tf.constant(
all_input_ids, shape=[num_examples, seq_length],
dtype=tf.int32),
"input_mask":
tf.constant(
all_input_mask,
shape=[num_examples, seq_length],
dtype=tf.int32),
"segment_ids":
tf.constant(
all_segment_ids,
shape=[num_examples, seq_length],
dtype=tf.int32),
"label_ids":
tf.constant(all_label_ids, shape=[num_examples], dtype=tf.int32),
})
        # Shuffle when training
if is_training:
d = d.repeat()
d = d.shuffle(buffer_size=100)
d = d.batch(batch_size=batch_size, drop_remainder=drop_remainder)
return d
return input_fn
# Create the tf.estimator run configuration
run_config = tf.estimator.RunConfig(
model_dir=output_dir,
save_summary_steps=save_summary_steps,
save_checkpoints_steps=save_checkpoints_steps)
# Model estimator
estimator = tf.estimator.Estimator(
model_fn=model_fn,
config=run_config,
params={"batch_size": batch_size})
# Training input function
train_input_fn = input_fn_builder(
features=train_features,
seq_length=max_seq_length,
is_training=True,
    drop_remainder=False)  # whether to drop the remaining samples of the last batch
# Train
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
print("Training finished")
# -
# ### Evaluate model accuracy
# +
# Evaluation input function
eval_input_fn = input_fn_builder(
features=test_features,
seq_length=max_seq_length,
is_training=False,
drop_remainder=False)
# Evaluate
evaluate_info = estimator.evaluate(input_fn=eval_input_fn, steps=None)
# -
# Print the evaluation metrics; eval_accuracy is the accuracy.
print(evaluate_info)
# ### Demonstration
#
# Pick a few sample sentences and predict them to inspect the results directly.
# +
# Prediction helper
def getPrediction(in_sentences):
labels = ["Negative", "Positive"]
input_examples = [InputExample(guid="", text_a = x, text_b = None, label = 0) for x in in_sentences] # here, "" is just a dummy label
input_features = convert_examples_to_features(input_examples, label_list, max_seq_length, tokenizer)
predict_input_fn = input_fn_builder(features=input_features, seq_length=max_seq_length, is_training=False, drop_remainder=False)
predictions = estimator.predict(predict_input_fn)
return [(sentence, prediction['probabilities'], labels[int(round(prediction['probabilities'][1]))]) for sentence, prediction in zip(in_sentences, predictions)]
# The sample reviews
pred_sentences = [
"这家酒店实在太糟了",
"这家酒店的服务不友好",
"服务还行",
"房间外面的风景很好",
"前台的服务很周到"
]
predictions = getPrediction(pred_sentences)
# -
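The final label in `getPrediction` comes from rounding the positive-class probability; here is that decision rule in isolation, on toy probabilities (a standalone sketch, not the estimator's output):

```python
labels = ["Negative", "Positive"]

def decide(probabilities):
    # probabilities = [p_negative, p_positive]; round the positive score to pick a label
    return labels[int(round(probabilities[1]))]

print(decide([0.9, 0.1]), decide([0.2, 0.8]))
```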
# Print the prediction information.
#
# Each result is a 3-tuple: the original input sentence, the array of predicted probabilities, and the predicted label (Negative or Positive).
predictions
# The predictions show that this exercise can largely judge the sentiment of hotel reviews correctly.
#
# Readers can modify the review sentences in `pred_sentences` above to run their own sentiment analysis.
# ## Summary
#
# This experiment demonstrated text sentiment analysis as a downstream task of a pretrained BERT model, obtaining good results even with a modest amount of training data. The BERT pretrained model also generalizes well: with only small changes to its input and output it can handle most NLP tasks, including sequence labeling (word segmentation, NER, semantic labeling), classification (text classification, sentiment analysis), and sentence-pair tasks (question answering, natural language inference). BERT pushes most of the work into the pretraining stage, so very little work is needed at fine-tuning time to complete an NLP task.
| notebook/DL_text_sentiment_analysis/text_sentiment_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Objective
#
# Investigate ways to bound regions with many crowded spots. These bounds will allow us to effectively "zoom in" on these regions and generate crops of these regions.
#
# - **Input:** Array of spot coordinates.
# - **Output:** Bounding boxes for regions with many crowded spots.
#
# # Result
# This approach may have potential:
# 1. Identify all crowded spots. Crowded spots are less than a crosshair arm length from the nearest neighbor.
# 2. Separate regions with many crowded spots.
# 3. Define a bounding box around each region with many crowded spots.
#
# # Next Steps
#
# ### Some questions:
# (Refer to the plot 'Original coords with crops shown' in this notebook.)
# - Do we want crop boxes to be squares?
# - If yes, do we want bounds around coordinates to be squares or just resultant images to be squares?
# - If yes, do we want to shrink or grow the rectangles to make them squares?
# - In cases such as the blue box with stray but included points, do we want to try to exclude those stray points?
# - Some cyan (non-crowded) points are left out of the crop boxes. Do we want to extend the crop boxes to fit these spots or neglect them and stipulate that they must be found through a first pass annotating of the original image?
#
# ### To investigate:
# - Automatically setting the preference parameter for AffinityPropagation
import numpy as np
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
from sklearn.neighbors import KDTree
from sklearn.cluster import AffinityPropagation
coords = np.genfromtxt('smfish.csv', delimiter=',')
for coord in coords:
plt.scatter(coord[0], coord[1], facecolors = 'c')
plt.title('Simulated spots from smFISH test')
plt.show()
# # Approach
# 1. Identify all crowded spots. Crowded spots are less than a crosshair arm length from the nearest neighbor.
# 2. Separate regions with many crowded spots.
# 3. Define a bounding box around each region with many crowded spots.
#
# ## Goal 1: Identify crowded spots.
#
# Highlight spots which are too close to (i.e. less than a crosshair arm length from) the nearest neighbor.
#
# #### Min distance between two spots = crosshair arm length, relative to image width
#
# There's a minimum distance between two spots for the crosshair mark left on one spot to not obscure the other spot. This minimum distance is the length of one arm of a crosshair. This minimum distance is in proportion with the pixel width of the image, since in Quantius the crosshairs take up the same proportion of the image regardless of image size.
#
# Measuring by hand, I found the crosshair to image width ratio to be about 7:115, or 0.0609. Therefore one crosshair arm length is 0.03045 times the image width, so spots should be at least that far apart.
#
# **"Crowded spots"** are spots which are less than a crosshair arm's length from the nearest neighbor.
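The arithmetic above, written out explicitly (the 7:115 ratio is the empirical measurement quoted in the text; note the notebook rounds the per-arm ratio to 0.03045 before multiplying, so its value differs in the third decimal place):

```python
crosshair_to_image_width = 7 / 115                 # measured crosshair width : image width
arm_to_image_width = crosshair_to_image_width / 2  # one crosshair arm

image_width = 700
min_spot_distance = arm_to_image_width * image_width
print(round(crosshair_to_image_width, 4), round(min_spot_distance, 2))
```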
# +
kdt = KDTree(coords, leaf_size=2, metric='euclidean')
def get_nnd(coord, kdt):
dist, ind = kdt.query([coord], k=2)
return dist[0][1]
crosshair_arm_to_image_width_ratio = 0.03045 # measured empirically in Quantius's UI
image_width = 700
crosshair_arm_length = crosshair_arm_to_image_width_ratio * image_width
print('crosshair_arm_length = ' + str(crosshair_arm_length))
close_distances = []
crowded_spots = []
for coord in coords:
nnd = get_nnd(coord, kdt)
if nnd < crosshair_arm_length:
close_distances.append(nnd)
crowded_spots.append(coord)
plt.scatter(coord[0], coord[1], facecolors = 'm')
else:
plt.scatter(coord[0], coord[1], facecolors = 'c')
print('crowded spots / total spots = ' + str(len(crowded_spots)) + ' / ' + str(len(coords)) + ' = ' + str(round(100.0 * len(crowded_spots) / len(coords), 2)) + ' %')
plt.title('magenta = crowded spots, cyan = other spots')
plt.show()
plt.hist(close_distances, color = 'm')
plt.title('Distance from each crowded spot to its nearest neighbor')
plt.show()
# -
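The KDTree query above just returns each point's distance to its nearest neighbor. For intuition, here is a brute-force, dependency-free version of the same crowded-spot test on toy coordinates:

```python
import math

def nearest_neighbor_distance(point, points):
    # Distance to the closest *other* point (brute force, O(n) per query)
    return min(math.dist(point, q) for q in points if q is not point)

coords = [(0.0, 0.0), (5.0, 0.0), (100.0, 100.0)]
threshold = 21.3  # roughly one crosshair arm length for a 700 px image
crowded = [p for p in coords if nearest_neighbor_distance(p, coords) < threshold]
print(crowded)
```

The KDTree does the same query in O(log n) per point, which matters once there are thousands of spots.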
# ## Goal 2: Separate regions with many crowded spots.
#
# Use AffinityPropagation on crowded spots to separate out regions with many crowded spots. A smaller preference parameter results in fewer separated regions.
pref_param = -50000
crowded_coords = np.asarray(crowded_spots)
af = AffinityPropagation(preference = pref_param).fit(crowded_coords)
centers = [crowded_coords[index] for index in af.cluster_centers_indices_]
print(centers)
for coord in coords:
nnd = get_nnd(coord, kdt)
if nnd < crosshair_arm_length:
plt.scatter(coord[0], coord[1], facecolors = 'm')
else:
plt.scatter(coord[0], coord[1], facecolors = 'c')
for center in centers:
plt.scatter(center[0], center[1], facecolors = 'orange')
plt.show()
# ## Goal 3: Define a bounding box around each region with many crowded spots.
# +
cluster_members_lists = [[] for i in range(len(centers))]
for label_index, coord in zip(af.labels_, crowded_coords):
cluster_members_lists[label_index].append(coord)
crop_bounds = []
for l in cluster_members_lists:
l = np.asarray(l)
x = l[:,0]
y = l[:,1]
crop_bounds.append((min(x), max(x), min(y), max(y)))
print(crop_bounds)
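The bounding-box step is just a per-cluster min/max over x and y; a standalone sketch with toy cluster members:

```python
def bounding_box(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), max(xs), min(ys), max(ys))  # (xmin, xmax, ymin, ymax)

cluster = [(2.0, 3.0), (5.0, 1.0), (4.0, 7.0)]
print(bounding_box(cluster))
```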
# +
from matplotlib.patches import Rectangle
fig,ax = plt.subplots(1)
for coord in coords:
nnd = get_nnd(coord, kdt)
if nnd < crosshair_arm_length:
ax.scatter(coord[0], coord[1], facecolors = 'm')
else:
ax.scatter(coord[0], coord[1], facecolors = 'c')
for center in centers:
plt.scatter(center[0], center[1], facecolors = 'orange')
colors = ['red', 'orange', 'black', 'green', 'blue', 'purple', 'violet']
for crop, col in zip(crop_bounds, colors):
rect = Rectangle((crop[0], crop[2]), crop[1]-crop[0], crop[3]-crop[2], edgecolor = col, facecolor = 'none')
ax.add_patch(rect)
plt.title('Original coords with crops shown')
plt.show()
# -
# ## Analyze each crop separately to see whether spots are now spaced far enough apart.
#
# On scatter plots, spots closer to nearest neighbor than the width of a crosshair arm are marked in magenta.
for i in range(len(crop_bounds)):
print('-------------------------------------------------')
print('Crop ' + str(i))
crop = crop_bounds[i]
col = colors[i]
xmin = crop[0]
xmax = crop[1]
ymin = crop[2]
ymax = crop[3]
crop_width = crop[1]-crop[0]
crosshair_arm_length = crosshair_arm_to_image_width_ratio * crop_width
print('crosshair_arm_length = ' + str(crosshair_arm_length))
crop_coords = []
for coord in coords:
if coord[0] >= xmin and coord[0] <= xmax:
if coord[1] >= ymin and coord[1] <= ymax:
crop_coords.append(coord)
crop_kdt = KDTree(crop_coords, leaf_size=2, metric='euclidean')
close_distances = []
crowded_spots = []
for coord in crop_coords:
nnd = get_nnd(coord, crop_kdt)
if nnd < crosshair_arm_length:
close_distances.append(nnd)
crowded_spots.append(coord)
plt.scatter(coord[0], coord[1], facecolors = 'm')
else:
plt.scatter(coord[0], coord[1], facecolors = col)
print('crowded spots / total spots = ' + str(len(crowded_spots)) + ' / ' + str(len(crop_coords)) + ' = ' + str(round(100.0 * len(crowded_spots) / len(crop_coords), 2)) + ' %')
plt.title('magenta = crowded spots, ' + str(col) + ' = other spots')
plt.show()
plt.hist(close_distances, color = 'm')
plt.yticks(np.arange(0, 10, step=1))
plt.title('Dist. from each crowded spot to the nearest crowded spot')
plt.show()
# (source notebook: datasets/zoom_test/bound_crowded_regions_smfish_v2.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/AuFeld/DS-Unit-2-Applied-Modeling/blob/master/module1/LS_DS10_231.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text" id="WWmRDm1LQqnw"
# Lambda School Data Science
#
# *Unit 2, Sprint 3, Module 1*
#
# ---
#
# + [markdown] colab_type="text" id="z8vAIylDQ3ZQ"
# # Define ML problems
# - Choose a target to predict, and check its distribution
# - Choose an appropriate evaluation metric
# - Choose what data to hold out for your test set
# - Avoid leakage of information from test to train or from target to features
# + [markdown] colab_type="text" id="pSiUZFrlSJDQ"
# ### Setup
# + colab_type="code" id="RAxU4rqHSBcu" colab={}
# %%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# !pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
# + [markdown] colab_type="text" id="3O6z2z41gZPi"
# # Classification example: Burrito reviews
#
# From the [Logistic Regression assignment](https://nbviewer.jupyter.org/github/LambdaSchool/DS-Unit-2-Linear-Models/blob/master/module4-logistic-regression/LS_DS_214_assignment.ipynb) (Unit 2, Sprint 1, Module 4)
# + colab_type="code" id="EmbSulX3aTuD" colab={}
import pandas as pd
df = pd.read_csv(DATA_PATH+'burritos/burritos.csv')
# + [markdown] colab_type="text" id="fxAF4cIPgfYh"
# ## Choose your target.
#
# Which column in your tabular dataset will you predict?
# + colab_type="code" id="JQ2bc6FGt1tj" outputId="1b3890f6-e672-4f9d-f45d-b1f4cb1c0a11" colab={"base_uri": "https://localhost:8080/", "height": 467}
df.head()
# + id="ZOc34XjnsyqX" colab_type="code" outputId="1da03c3b-e238-4e8f-dd27-33a2c98d5a24" colab={"base_uri": "https://localhost:8080/", "height": 168}
df['overall'].describe()
# + id="suzSmu7DtBkO" colab_type="code" colab={}
# Derive binary classification target:
# We define a 'Great' burrito as having an
# overall rating of 4 or higher, on a 5 point scale.
# Drop unrated burritos.
df = df.dropna(subset=['overall'])
df['Great'] = df['overall'] >= 4
# + [markdown] id="LdH5-UWht7YZ" colab_type="text"
# I've derived my own target, redefining the original 1-5 star rating as a binary classification problem: predict whether a burrito is "Great" or not.
# + [markdown] colab_type="text" id="FHcess9Ug7gX"
# ## How is your target distributed?
#
# Classification: How many classes? Are the classes imbalanced?
# + id="zEoyD5YrunPC" colab_type="code" colab={}
y = df['Great']
# + colab_type="code" id="MghM94kUgob_" outputId="e45e4f8d-4d02-4eb4-8978-6eb0f7109b6e" colab={"base_uri": "https://localhost:8080/", "height": 34}
y.nunique()
# + id="TZ78UfU7uvV5" colab_type="code" outputId="1b31d6ed-17c5-4e7d-f3eb-9759b49d6c09" colab={"base_uri": "https://localhost:8080/", "height": 34}
y.value_counts(normalize=True).max()
# + [markdown] id="gZmeZK7Ruv9x" colab_type="text"
# There are 2 classes, so this is a binary classification problem.
#
# The majority class occurs with 57% frequency, so the classes are not too imbalanced. I could reasonably use accuracy as my evaluation metric.
#
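# The majority-class share doubles as the accuracy baseline; a minimal stdlib sketch, with made-up labels standing in for `df['Great']`:

```python
# Majority-class baseline: the accuracy you get for free by always
# predicting the most common class.
from collections import Counter

labels = [True, True, True, False, False, True, False]  # hypothetical stand-in for df['Great']
majority_share = Counter(labels).most_common(1)[0][1] / len(labels)
print(majority_share)  # 4/7 -- any real model should beat this
```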
# + [markdown] colab_type="text" id="53P4TOOM1nJx"
# ## Choose your evaluation metric(s)
# + [markdown] id="H2wKYb491mJB" colab_type="text"
# Precision when predicting great burritos may be most important because I'm only going to eat one, so I want to make sure it's good.
#
# (On the other hand, is a "bad" burriito really that bad? Not to me, not from a taste perspective.)
#
# Which metric would you emphasize if choosing a burrito place to take a first date to? Precision.
#
# Which metric would you emphasize if you are feeling adventurous? Recall. It could mean trying more things, to make sure you don't miss some new, different, great burrito that you otherwise wouldn't have tried.
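# To make the trade-off concrete, here is a minimal sketch with made-up confusion counts (not the burrito data):

```python
# Precision = TP / (TP + FP): of the burritos we predicted "Great", how many really were?
# Recall    = TP / (TP + FN): of the truly "Great" burritos, how many did we find?
tp, fp, fn = 8, 2, 6  # hypothetical counts
precision = tp / (tp + fp)
recall = tp / (tp + fn)
# High precision, lower recall: safe picks, but we miss some great burritos.
print(precision, recall)
```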
# + [markdown] colab_type="text" id="vaOZNuktxY44"
# ## Begin to clean and explore your data
# + [markdown] id="AxbSDI5j2oXx" colab_type="text"
# How many kinds of burritos?
# + colab_type="code" id="6aJm5nDBN4VY" outputId="d06389d0-5333-4e92-fa0e-7a52d3a1c28c" colab={"base_uri": "https://localhost:8080/", "height": 218}
df['Burrito'].value_counts()
# + id="yBEuLCgO2vnR" colab_type="code" outputId="715ded18-3789-488e-85e0-79285a43f762" colab={"base_uri": "https://localhost:8080/", "height": 34}
df['Burrito'].nunique()
# + id="BNSqii-c2xvk" colab_type="code" outputId="164a3bb0-ce25-485c-d1fd-9811973273e8" colab={"base_uri": "https://localhost:8080/", "height": 655}
df['Burrito'].unique()
# + [markdown] id="WCmvUAlt221R" colab_type="text"
# Combine Burrito categories
# + id="06OdDQFx210t" colab_type="code" colab={}
df['Burrito'] = df['Burrito'].str.lower()
# + id="AaRS3shM29ZB" colab_type="code" colab={}
california = df['Burrito'].str.contains('california')
asada = df['Burrito'].str.contains('asada')
surf = df['Burrito'].str.contains('surf')
carnitas = df['Burrito'].str.contains('carnitas')
# + id="v3kdH_Yj3ANK" colab_type="code" colab={}
df.loc[california, 'Burrito'] = 'California'
df.loc[asada, 'Burrito'] = 'Asada'
df.loc[surf, 'Burrito'] = 'Surf & Turf'
df.loc[carnitas, 'Burrito'] = 'Carnitas'
df.loc[~california & ~asada & ~surf & ~carnitas, 'Burrito'] = 'Other'
# + id="ER7BaQtm3PCT" colab_type="code" outputId="5b51f0d5-2c62-4a4a-afe1-1c885529523e" colab={"base_uri": "https://localhost:8080/", "height": 118}
df['Burrito'].value_counts()
# + [markdown] id="3SaS_qI63TuT" colab_type="text"
# Drop some high cardinality categoricals
# + id="A9ag1E5I3XdD" colab_type="code" colab={}
df = df.drop(columns=['Notes', 'Location', 'Reviewer', 'Address', 'URL', 'Neighborhood'])
# + [markdown] id="-ujRvdLB3Y-2" colab_type="text"
# Deal with missing values
# + id="BF3FHYQJ3b3I" colab_type="code" outputId="851a3061-2b16-45d6-928b-48b67d65cc99" colab={"base_uri": "https://localhost:8080/", "height": 218}
df.isna().sum().sort_values()
# + id="sXc4ERI43gOl" colab_type="code" outputId="e9aae250-72f3-4fb0-ac89-4cc19a852fad" colab={"base_uri": "https://localhost:8080/", "height": 218}
df.isna().sum().sort_values()
# + id="I1nZXOh33w0y" colab_type="code" colab={}
df = df.fillna('Missing')
# + [markdown] colab_type="text" id="QkO5X_wHfJ5s"
# ## Choose which observations you will use to train, validate, and test your model
# + [markdown] id="wxExhdAB36vs" colab_type="text"
# Train on reviews from 2016 & earlier. Validate on 2017. Test on 2018 & later.
# + colab_type="code" id="Wu08JjnvfRbP" colab={}
df['Date'] = pd.to_datetime(df['Date'])
# + id="qAEYPG7A4G7Z" colab_type="code" colab={}
train = df[df['Date'].dt.year <= 2016]
val = df[df['Date'].dt.year == 2017]
test = df[df['Date'].dt.year >= 2018]
# + id="90r1T_ng4XFh" colab_type="code" outputId="2fe53b4e-ed0f-4870-dcab-24ce9a0944de" colab={"base_uri": "https://localhost:8080/", "height": 34}
train.shape, val.shape, test.shape
# + [markdown] colab_type="text" id="5NWkwHeIzD5e"
# ## Begin to choose which features, if any, to exclude. Would some features "leak" future information?
# + [markdown] id="Goplslf14xsZ" colab_type="text"
# What happens if we *DON'T* drop features with leakage?
# + colab_type="code" id="HiywE-1AfhmE" outputId="eaa49c05-f8b6-4491-b9a6-a2ae6989f0e5" colab={"base_uri": "https://localhost:8080/", "height": 34}
import category_encoders as ce
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
target = 'Great'
features = df.columns.drop([target, 'Date'])
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
pipeline = make_pipeline(
ce.OrdinalEncoder(),
DecisionTreeClassifier(max_depth=3)
)
pipeline.fit(X_train, y_train)
print('Validation Accuracy', pipeline.score(X_val, y_val))
# + id="CPyfzi4V5UbD" colab_type="code" outputId="ef50ae0a-43e3-4c64-d2b1-f09cc0a65a58" colab={"base_uri": "https://localhost:8080/", "height": 241}
# Visualize decision tree
import graphviz
from sklearn.tree import export_graphviz
tree = pipeline.named_steps['decisiontreeclassifier']
dot_data = export_graphviz(
tree,
out_file=None,
feature_names=X_train.columns,
class_names=y.unique().astype(str),
filled=True,
impurity=False,
proportion=True
)
graphviz.Source(dot_data)
# + id="SqKvmm7V6HQ2" colab_type="code" colab={}
# Drop feature with "leakage"
df = df.drop(columns=['overall'])
# + id="yolfAFTM6Pd6" colab_type="code" outputId="23fd755c-9c30-44cd-9dc0-7d25a3d30065" colab={"base_uri": "https://localhost:8080/", "height": 34}
target = 'Great'
features = df.columns.drop([target, 'Date'])
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
pipeline = make_pipeline(
ce.OrdinalEncoder(),
DecisionTreeClassifier(max_depth=3)
)
pipeline.fit(X_train, y_train)
print('Validation Accuracy', pipeline.score(X_val, y_val))
# + id="JS8HalcS6SfO" colab_type="code" outputId="8ca962d0-4ccd-4fe7-9f3f-703664f3aa34" colab={"base_uri": "https://localhost:8080/", "height": 538}
# Visualize decision tree
import graphviz
from sklearn.tree import export_graphviz
tree = pipeline.named_steps['decisiontreeclassifier']
dot_data = export_graphviz(
tree,
out_file=None,
feature_names=X_train.columns,
class_names=y.unique().astype(str),
filled=True,
impurity=False,
proportion=True
)
graphviz.Source(dot_data)
# + [markdown] colab_type="text" id="FHsxJxSwd_zU"
# ## Get ROC AUC (Receiver Operating Characteristic, Area Under the Curve)
#
# [Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"
#
# ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative."
#
# ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**
#
# ROC AUC ranges **from 0 to 1.** Higher is better. A naive majority class **baseline** will have an ROC AUC score of **0.5.**
#
# #### Scikit-Learn docs
# - [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.html#receiver-operating-characteristic-roc)
# - [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)
# - [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html)
#
# #### More links
# - [StatQuest video](https://youtu.be/4jRBRDbJemM)
# - [Data School article / video](https://www.dataschool.io/roc-curves-and-auc-explained/)
# - [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
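# The ranking interpretation above can be checked directly with a few lines of plain Python (the scores below are made up for illustration):

```python
# ROC AUC as a pairwise ranking probability: the fraction of (positive, negative)
# score pairs in which the positive is ranked higher (ties count as half).
def pairwise_auc(pos_scores, neg_scores):
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# One of the nine pairs is mis-ranked (the 0.4 positive vs the 0.7 negative),
# so the AUC is 8/9.
print(pairwise_auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))
```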
# + colab_type="code" id="9XD0nW4cd5bJ" colab={}
# "The ROC curve is created by plotting the true positive rate (TPR)
# against the false positive rate (FPR)
# at various threshold settings."
# Use scikit-learn to calculate TPR & FPR at various thresholds
from sklearn.metrics import roc_curve
y_pred_proba = pipeline.predict_proba(X_val)[:, -1] # Probability for the last class
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba)
# + id="qLKCcnG_7fMO" colab_type="code" outputId="b7acbfdb-452c-40bf-ae87-7ce5d4aa734b" colab={"base_uri": "https://localhost:8080/", "height": 284}
# See the results in a table
pd.DataFrame({
'False Positive Rate': fpr,
'True Positive Rate': tpr,
'Threshold': thresholds
})
# + id="f7o-SR7m7zDD" colab_type="code" outputId="280ae9f3-8e14-4dad-f2fc-866fdd9979ac" colab={"base_uri": "https://localhost:8080/", "height": 295}
# See the results on a plot.
# This is the "Receiver Operating Characteristic curve"
import matplotlib.pyplot as plt
plt.scatter(fpr, tpr)
plt.plot(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate');
# + id="u2f_6jXd8Fte" colab_type="code" outputId="8d2e6a64-02e2-4f8c-fc5e-f55d0c489139" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Use scikit-learn to calculate the area under the curve
from sklearn.metrics import roc_auc_score
roc_auc_score(y_val, y_pred_proba)
# + [markdown] colab_type="text" id="qPvV48UgeStb"
# **Recap:** ROC AUC measures how well a classifier ranks predicted probabilities. So, when you get your classifier’s ROC AUC score, you need to use predicted probabilities, not discrete predictions.
#
# Your code may look something like this:
#
# ```python
# from sklearn.metrics import roc_auc_score
# y_pred_proba = model.predict_proba(X_test_transformed)[:, -1] # Probability for last class
# print('Test ROC AUC:', roc_auc_score(y_test, y_pred_proba))
# ```
#
# ROC AUC ranges from 0 to 1. Higher is better. A naive majority class baseline will have an ROC AUC score of 0.5.
# + [markdown] colab_type="text" id="2JjgVxD8sIXy"
# # Regression example: NYC apartments
# + colab_type="code" id="aWU1rEsLsNX-" colab={}
# Read New York City apartment rental listing data
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
assert df.shape == (49352, 34)
# + [markdown] colab_type="text" id="RUhYmfTW3HBd"
# ## Choose your target
#
# Which column in your tabular dataset will you predict?
# + colab_type="code" id="vLko0fOTsQij" colab={}
y = df['price']
# + [markdown] colab_type="text" id="I4oJbVIO3cgx"
# ## How is your target distributed?
#
# Regression: Is the target right-skewed?
# + colab_type="code" id="xlsfr_2GsWRp" outputId="e483c6b2-39b6-47e1-d830-bc5fcf4a9163" colab={"base_uri": "https://localhost:8080/", "height": 279}
import seaborn as sns
sns.distplot(y);
# + id="J06WZh5oAXDs" colab_type="code" outputId="214b446d-0430-44d7-b0bc-9c0918829135" colab={"base_uri": "https://localhost:8080/", "height": 168}
y.describe()
# + [markdown] colab_type="text" id="dpauHixJtDxI"
# ## Are some observations outliers?
#
# Will you exclude them?
# + colab_type="code" id="C2-coskUtDFA" colab={}
# Yes! There are outliers
# Some prices are so high or so low that they don't really make sense.
# Some locations aren't even in New York City.
# + id="BAIIgEnxAwTW" colab_type="code" colab={}
# Remove the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
import numpy as np
df = df[(df['price'] >= np.percentile(df['price'], 0.5)) &
(df['price'] <= np.percentile(df['price'], 99.5)) &
(df['latitude'] >= np.percentile(df['latitude'], 0.05)) &
        (df['latitude'] <= np.percentile(df['latitude'], 99.95)) &
(df['longitude'] >= np.percentile(df['longitude'], 0.05)) &
(df['longitude'] <= np.percentile(df['longitude'], 99.95))]
# + id="9rwAUeWrBHRf" colab_type="code" outputId="f158a071-4e0a-4519-b5e1-c4b7c3216da8" colab={"base_uri": "https://localhost:8080/", "height": 279}
y = df['price']
sns.distplot(y);
# + id="bfCDiQKOBMt1" colab_type="code" outputId="5b98fefe-f18e-4843-fc96-f085becddd89" colab={"base_uri": "https://localhost:8080/", "height": 168}
y.describe()
# + [markdown] colab_type="text" id="a77rXRru3sag"
# ## Log-Transform
#
# If the target is right-skewed, you may want to "log transform" the target.
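# `log1p` and `expm1` are exact inverses, so predictions made in log-dollars can always be mapped back to dollars; a quick stdlib round-trip check (the price is a made-up example):

```python
import math

price = 3500.0                 # hypothetical rent in dollars
log_price = math.log1p(price)  # what the model would see and predict
back = math.expm1(log_price)   # convert a prediction back to dollars
assert abs(back - price) < 1e-6
print(log_price, back)
```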
# + colab_type="code" id="1vqIJQHpupxD" colab={}
import numpy as np
y_log = np.log1p(y)
# + id="nLBMcxcmB3xK" colab_type="code" outputId="080aa28c-cc9e-4b47-9a09-03b66315f545" colab={"base_uri": "https://localhost:8080/", "height": 295}
sns.distplot(y)
plt.title('Original target, in the unit of US dollars');
# + id="xiB6CO2LB917" colab_type="code" outputId="06f02352-e705-45ea-ad78-2a6aa3ac3c9e" colab={"base_uri": "https://localhost:8080/", "height": 295}
sns.distplot(y_log)
plt.title('Log-transformed target, in log-dollars');
# + id="dlyA7hcuCD0F" colab_type="code" outputId="cfda0918-6e34-4d3f-e4d6-633dbfd36e24" colab={"base_uri": "https://localhost:8080/", "height": 295}
y_untransformed = np.expm1(y_log)
sns.distplot(y_untransformed)
plt.title('Back to the original units');
# + id="JoKT_mbEC-7E" colab_type="code" colab={}
# (source notebook: module1/LS_DS10_231.ipynb)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Demonstrate the "target-decoy" approach, as applied to metagenomic variant calling
# %run "Header.ipynb"
import copy
import json
import pickle
import skbio
import pileup
from math import ceil
from collections import defaultdict
from pysam import VariantFile
from parse_sco import parse_sco
seq2pos2pileup = pileup.load()
linesep_default = "-" * 79
def output_and_print(text, output_file, linesep=linesep_default):
"""Convenience function to both output something to a misc-text *.tex file and print it."""
with open(output_file, "w") as of:
# see https://tex.stackexchange.com/a/18018
of.write("{}\endinput".format(text))
print(linesep_default)
print(f"Results that we just output to {output_file}:")
print(linesep_default)
print(text)
# ## Lower limit of $p$
#
# Basically: since we only call a mutation at a given position `pos` using the naive method if `alt(pos) ≥ 2`, it doesn't make sense to use values of $p$ less than `2 / coverage` (coverage varies throughout a MAG, but let's use the rounded average coverage as the denominator here). So,
# +
min_mean_cov = float("inf")
# This is copied from the SequenceCoveragePlots.ipynb notebook; easier to just duplicate the code for now, sorry!
seq2mean_cov = {}
for seq in SEQS:
covs = []
for pos in range(1, seq2len[seq] + 1):
covs.append(pileup.get_cov(seq2pos2pileup[seq][pos]))
mean_cov = mean(covs)
seq2mean_cov[seq] = mean_cov
min_mean_cov = min(mean_cov, min_mean_cov)
print(f"The mean cov for seq {seq2name[seq]} is {mean_cov:,.2f}x.")
print(f"Across the three selected MAGs, the min mean cov is {min_mean_cov:,.2f}x.")
# +
# yeah yeah i know this is technically just 200 / min_mean_cov but clarity > performance imo and also this is
# just an analysis notebook
min_p = 100 * (2 / min_mean_cov)
print(f"So the lowest value of p it makes sense to use is p = 100 * (2 / {min_mean_cov:,.2f}) = {min_p:.4f}%.")
effective_min_p = ceil(min_p * 100) / 100
# I know .4f is unnecessary precision since we just rounded to a multiple of 0.01%,
# this is to sanity check that I did this right
print(f"Effective min p (taking the ceiling to the nearest 0.01%): {effective_min_p:.4f}%.")
avg_covs_info = (
f"The average coverages of the three selected genomes are "
f"{seq2mean_cov['edge_6104']:,.0f}x, {seq2mean_cov['edge_1671']:,.0f}x, and "
f"{seq2mean_cov['edge_2358']:,.0f}x for the {seq2name['edge_6104']}, {seq2name['edge_1671']}, and "
f"{seq2name['edge_2358']} genomes, respectively."
)
output_and_print(avg_covs_info, "misc-text/fdr-min-p-avg-covs.tex")
min_p_val_info = (
f"We thus set the lower limit of $p$ as "
r"$\frac{2}{"
f"{seq2mean_cov['edge_1671']:,.2f}"
r"} \approx "
f"{min_p:.4f}\%$. Rounding this value up to the next largest "
f"0.01\% gives us a lower limit $p$ of {effective_min_p:.2f}\%."
)
output_and_print(min_p_val_info, "misc-text/fdr-min-p-val.tex")
# -
# ## First: naive variant calling
#
# We don't limit to "sufficiently-covered" positions here -- so we consider all regions throughout a genome.
# Percentages go from 5%, 4.99%, 4.98%, ..., min_p + 0.01%, min_p%.
# Since we draw FDR curves, etc. by varying p by 0.01%, we'll use a lower limit of ceil(min_p * 100) / 100.
# We limit the start here to 5% because the whole "rare mutations" definition means that we ignore p-mutations
# at higher values of p. Since this notebook is pretty inefficient, there's no point unnecessarily computing
# stuff for p = 50%, etc.
effective_min_p_times_100 = ceil(min_p * 100)
percentages = [p / 100 for p in range(effective_min_p_times_100, 500, 1)][::-1]
print(f"First two percentages: {percentages[:2]}")
print(f"Last two percentages: {percentages[-2:]}")
print(f"Number of percentages: {len(percentages):,}")
# +
def compute_num_mutations_per_mb(num_called_mutations_in_positions, num_positions_considered):
# We have the equation
#
# # called mutations f
# ---------------------------- = --------------
# # of positions to consider 1,000,000 bp
#
# This function just solves for f by multiplying the left side of the equation by 1,000,000.
#
    # I guess if you're gonna be calling this thousands of times on the same set of positions
# it might speed this up slightly to precompute the (1,000,000 / # of positions to consider) value,
# but convenience is the most important thing here IMO.
return (num_called_mutations_in_positions / num_positions_considered) * 1000000
def naive_calling(seq, positions_to_consider, verbose=True, superverbose=False):
"""seq should be in SEQS.
positions_to_consider should be a collection of 1-indexed positions in the sequence to consider when
calling mutations. This makes it possible to, for example, just consider the CP 2 positions in a sequence.
Returns a tuple with three elements:
1. p2called_mutations: A dict mapping values in "percentages" (defined above) to a list of 1-indexed
"called" p-mutations in the positions to consider in the sequence, using this percentage for p.
2. p2numpermb: A dict with the same keys as p2called_mutations, but the values are the number of called
p-mutations per megabase (1,000,000 bp = 1 Mbp) in the positions to consider in this sequence.
IT'S DEFINITELY WORTH NOTING that we scale this by the number of positions to consider, not the full
sequence length (although it's possible that those values could be equal if positions_to_consider
is equal to range(1, seq2len[seq] + 1)). So, if you select a subset of positions where most of them
are mutations, this'll result in a really high number of mutations per megabase!
...I recognize "numpermb" doesn't really roll off the tongue, but I couldn't think
of a better name for this :P
3. poslen: Length of positions_to_consider, for reference.
NOTE THAT this'll not include "high-frequency" mutations in the results, as discussed in the paper.
We will not decrease poslen or anything -- positions with high-frequency mutations will just be
treated as if they were non-mutated positions.
"""
poslen = len(positions_to_consider)
seqlen = seq2len[seq]
positions_to_consider_pct = 100 * (poslen / seqlen)
if verbose:
print(f"Naively calling mutations in {seq2name[seq]}.")
print(f"\tConsidering {poslen:,} / {seqlen:,} ({positions_to_consider_pct:.2f}%) positions.")
p2called_mutations = {p: [] for p in percentages}
p2numpermb = {}
for pi, pos in enumerate(sorted(positions_to_consider), 1):
if verbose and (pi == 1 or pi % 100000 == 0):
print(f"\tOn the {pi:,}-th position ({pos:,}) of the specified {poslen:,} positions ({100 * (pi / poslen):.2f}%).")
pos_pileup = seq2pos2pileup[seq][pos]
for p in percentages:
# Note that this includes the min alt(pos) check made by naively_call_mutation() (see docs there)
# as well as the only-calling-if-rare thing
if pileup.naively_call_mutation(pos_pileup, p, only_call_if_rare=True):
p2called_mutations[p].append(pos)
for p in p2called_mutations:
num_called_mutations = len(p2called_mutations[p])
p2numpermb[p] = compute_num_mutations_per_mb(num_called_mutations, poslen)
# We add an extra layer of verbosity here because printing out 2 lines per value of p gets
# ridiculous when there are 1,000 values of p .____.
if superverbose:
print(f"\tp = {p}%: {num_called_mutations:,} called p-mutations in {seq2name[seq]}.")
print(f"\t\tNumber of called p-mutations per megabase: f = {f:,.2f}.")
return (p2called_mutations, p2numpermb, poslen)
def naive_calling_fullseq(seq):
"""Does naive variant calling across all positions in a sequence (should be in SEQS)."""
return naive_calling(seq, range(1, seq2len[seq] + 1))
def get_single_gene_cp2_positions(seq):
cp2_positions = set()
multi_gene_positions = set()
seqlen = seq2len[seq]
genes_df = parse_sco(f"../seqs/genes/{seq}.sco")
# Code here is adapted from get_parent_gene_info_of_many_positions (in Header.ipynb) a bit
# Faster to compute everything at once, rather than iterate through the genes multiple times
pos_to_genes = defaultdict(list)
for gene in genes_df.itertuples():
gene_left = int(gene.LeftEnd)
gene_right = int(gene.RightEnd)
gene_num = int(gene.Index)
gene_strand = gene.Strand
def complainAboutCPs(gn, gs, gcp):
raise ValueError(f"CP got out of whack: gene {gn}, strand {gs}, cp {gcp}?")
if gene_strand == "+":
cp = 1
else:
cp = 3
for pos in range(gene_left, gene_right + 1):
pos_to_genes[pos].append(gene_num)
if len(pos_to_genes[pos]) > 1:
multi_gene_positions.add(pos)
if cp == 2:
cp2_positions.add(pos)
# Adjust the CP. I already have some code that does this (in a different context) in the within-
# gene mutation spectrum notebook; ideally this code would be generalized between the notebooks.
if gene_strand == "+":
# For + strand genes, this goes 123123123123...
if cp == 1 or cp == 2: cp += 1
elif cp == 3: cp = 1
else: complainAboutCPs(gene_num, gene_strand, cp)
else:
# For - strand genes, this goes 321321321321...
if cp == 3 or cp == 2: cp -= 1
elif cp == 1: cp = 3
else: complainAboutCPs(gene_num, gene_strand, cp)
single_gene_cp2_positions = cp2_positions - multi_gene_positions
return single_gene_cp2_positions
def naive_calling_cp2seq(seq):
"""Does naive variant calling across just the CP 2 positions in a sequence (should be in SEQS).
NOTE that this will filter only to positions that meet the exact criteria:
- In a single gene (not in a position that is covered by overlapping genes).
- In CP 2 within this single gene.
Even if a position is in CP 2 of all the multiple genes it's covered by, we'll still ignore it.
I'm pretty sure there should be very few positions that get tossed out as a result; my take is that
it isn't worth the trouble to try to handle these positions.
"""
print(f"Identifying CP 2 positions in {seq2name[seq]} so we can use them as a decoy genome...")
single_gene_cp2_positions = get_single_gene_cp2_positions(seq)
print(f"In {seq2name[seq]}:")
#print(f"\tThere were {len(cp2_positions):,} CP 2 positions.")
#print(f"\tThere were {len(multi_gene_positions):,} positions in multiple genes.")
print(f"\tThere were {len(single_gene_cp2_positions):,} CP 2 positions in only a single gene.")
return naive_calling(seq, single_gene_cp2_positions)
# -
# ### Naively call mutations in CAMP and compute $\mathrm{frac}_{\mathrm{decoy}}$
#
# (We're treating CAMP as a "decoy" genome, where we assume that all called mutations within it will be incorrect.)
camp_naive_p2called_mutations, camp_naive_p2numpermb, _ = naive_calling_fullseq("edge_6104")
camp_cp2_naive_p2called_mutations, camp_cp2_naive_p2numpermb, num_camp_cp2_pos = naive_calling_cp2seq("edge_6104")
# ### For comparison, naively call mutations in BACT1 and compute $\mathrm{frac}_{\mathrm{BACT1}}$
bact1_naive_p2called_mutations, bact1_naive_p2numpermb, _ = naive_calling_fullseq("edge_1671")
# ### Just so we can update the `misc-text/` file, also do this for BACT2
#
# probs possible to get this info from another notebook but this is the easiest way to handle this imo
bact2_naive_p2called_mutations, bact2_naive_p2numpermb, _ = naive_calling_fullseq("edge_2358")
# ### Save this info to text files using the json module
#
# ... Since recomputing this takes, like, an hour
with open("misc-output/p2called_mutations.txt", "w") as p2cmf:
for p2cm in (
camp_naive_p2called_mutations, camp_cp2_naive_p2called_mutations,
bact1_naive_p2called_mutations, bact2_naive_p2called_mutations
):
p2cmf.write(json.dumps(p2cm))
p2cmf.write("\n")
with open("misc-output/p2numpermb.txt", "w") as p2nf:
for p2n in (
camp_naive_p2numpermb, camp_cp2_naive_p2numpermb,
bact1_naive_p2numpermb, bact2_naive_p2numpermb
):
p2nf.write(json.dumps(p2n))
p2nf.write("\n")
# ### Load naive calling info from text files
# +
with open("misc-output/p2called_mutations.txt", "r") as p2cmf:
camp_naive_p2called_mutations = json.loads(p2cmf.readline().strip())
camp_cp2_naive_p2called_mutations = json.loads(p2cmf.readline().strip())
bact1_naive_p2called_mutations = json.loads(p2cmf.readline().strip())
bact2_naive_p2called_mutations = json.loads(p2cmf.readline().strip())
with open("misc-output/p2numpermb.txt", "r") as p2nf:
camp_naive_p2numpermb = json.loads(p2nf.readline().strip())
camp_cp2_naive_p2numpermb = json.loads(p2nf.readline().strip())
bact1_naive_p2numpermb = json.loads(p2nf.readline().strip())
bact2_naive_p2numpermb = json.loads(p2nf.readline().strip())
# -
# If we haven't done naive calling yet, we gotta compute this number also so we can use this in figures, etc.
# And knowing the CP 2 positions anyway is needed for the LoFreq stuff.
camp_cp2_pos = get_single_gene_cp2_positions("edge_6104")
num_camp_cp2_pos = len(camp_cp2_pos)
print(f"There are {num_camp_cp2_pos:,} CP 2 single-gene positions in CAMP.")
# +
def get_mr(genome, p):
"""Returns a 'mutation rate' that can be used to compare the amounts of mutations across MAGs from naive
p-mutation calling.
genome is a sequence name; p is a string matching an entry in p2called_mutations. Why is this a string?
Because the marcus who wrote this notebook in july 2021 used json instead of pickle to save these
dicts. that guy was such a dummy.
This won't work with LoFreq's calls, just because it's hardcoded to work with the naive method;
use get_mr_direct() for that.
This sort of supplants the number of mutations per megabase (p2numpermb) we already computed.
It's a long story. Really, you could just take the p2numpermb value and divide it by 3e6 to
get this exact value, but due to precision stuff (and for the sake of my own sanity) I redo this
here as its own function.
"""
gl = genome.lower()
if gl == "edge_6104" or gl == "camp":
m = len(camp_naive_p2called_mutations[p])
sl = seq2len["edge_6104"]
elif gl == "edge_1671" or gl == "bact1":
m = len(bact1_naive_p2called_mutations[p])
sl = seq2len["edge_1671"]
elif gl == "edge_2358" or gl == "bact2":
m = len(bact2_naive_p2called_mutations[p])
sl = seq2len["edge_2358"]
elif gl == "camp cp 2" or gl == "camp cp2":
m = len(camp_cp2_naive_p2called_mutations[p])
sl = num_camp_cp2_pos
else:
raise ValueError(f"Unrecognized genome name: {genome}")
return get_mr_direct(m, sl)
def get_mr_direct(m, sl):
return m / (3 * sl)
# -
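# A quick sanity check on the equivalence claimed in get_mr()'s docstring -- that the mutation rate
# m / (3 * sl) equals the per-megabase count divided by 3e6. The numbers below are made up purely
# for illustration; the helper functions mirror get_mr_direct() and compute_num_mutations_per_mb().

```python
# Minimal check (made-up numbers) that m / (3 * sl) == (muts per megabase) / 3e6.
def mutation_rate_demo(num_muts, seq_len):
    # 3 possible alternate alleles per position, hence the factor of 3
    return num_muts / (3 * seq_len)

def num_per_mb_demo(num_muts, seq_len):
    return num_muts / (seq_len / 1e6)

m_demo, sl_demo = 150, 2_500_000  # hypothetical mutation count and genome length
assert abs(mutation_rate_demo(m_demo, sl_demo) - num_per_mb_demo(m_demo, sl_demo) / 3e6) < 1e-12
```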
# ### Output info about FDR estimation for $p=0.5\%$ to `misc-text/`
#
# **NOTE: this only includes "rare" (i.e. not high-frequency, as defined above) mutations.**
m = get_mr("edge_1671", "0.5")
m
def scinot(mr):
"""Returns a nice, LaTeX-compatible scientific notation representation of a small number.
Based on https://stackoverflow.com/a/29261252.
This was only designed for use with small numbers (genome-wide mutation rates), so this will just
straight up fail if mr isn't between 0 and 1."""
if mr > 1 or mr < 0:
raise ValueError("The input to scinot() should be in [0, 1].")
    # Check the exact endpoints first; otherwise mr == 1 would hit the >= 0.1 branch
    # and render as "1.0" instead of "1".
    if mr == 1 or mr == 0:
        return str(mr)
    if mr >= 0.1:
        return f"{mr:.1f}"
sm = f"{mr:.1e}"
sparts = sm.split("e-0")
return f"{sparts[0]} \\times 10^{{-{sparts[1]}}}"
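# To make the formatting above concrete, here's a standalone copy of the same logic (duplicated so
# the sketch runs on its own); the input values are arbitrary small rates.

```python
# Standalone illustration of the LaTeX formatting that scinot() performs.
def scinot_demo(mr):
    if mr == 1 or mr == 0:
        return str(mr)
    if mr >= 0.1:
        return f"{mr:.1f}"
    sm = f"{mr:.1e}"          # e.g. 0.00123 -> "1.2e-03"
    sparts = sm.split("e-0")  # only handles exponents between -1 and -9
    return f"{sparts[0]} \\times 10^{{-{sparts[1]}}}"
```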
# +
# Total numbers of identified p-mutations
camp_nump = len(camp_naive_p2called_mutations["0.5"])
bact1_nump = len(bact1_naive_p2called_mutations["0.5"])
bact2_nump = len(bact2_naive_p2called_mutations["0.5"])
# Scaled numbers of identified p-mutations per megabase (comparable across different-length genomes
# [at least, if you assume that genome length is the only confounding factor here, which it isn't -- we
# should mention this in the paper ofc])
camp_numpermb = camp_naive_p2numpermb["0.5"]
bact1_numpermb = bact1_naive_p2numpermb["0.5"]
bact2_numpermb = bact2_naive_p2numpermb["0.5"]
camp_mr = get_mr("edge_6104", "0.5")
bact1_mr = get_mr("edge_1671", "0.5")
bact2_mr = get_mr("edge_2358", "0.5")
camp_cp2_mr = get_mr("camp cp2", "0.5")
camp_mr_s = scinot(camp_mr)
bact1_mr_s = scinot(bact1_mr)
bact2_mr_s = scinot(bact2_mr)
camp_cp2_mr_s = scinot(camp_cp2_mr)
bact1_fdr = 100 * (camp_mr / bact1_mr)
naiveinfo = (
f"At $p=0.5$\\%, NaiveFreq identified {camp_nump:,}, {bact1_nump:,}, and {bact2_nump:,} rare $p$-mutations "
f"in the {seq2name['edge_6104']}, {seq2name['edge_1671']}, and {seq2name['edge_2358']} MAGs, "
f"respectively. This illustrates that there exists a difference of nearly two orders of magnitude "
f"in mutation rates across these MAGs "
f"(${camp_mr_s}$, ${bact1_mr_s}$, and ${bact2_mr_s}$ for "
f"{seq2name['edge_6104']}, {seq2name['edge_1671']}, and {seq2name['edge_2358']}, respectively). "
f"If the {seq2name['edge_6104']} MAG, which has a relatively low mutation rate, is "
f"selected as a decoy, then the FDR for the {seq2name['edge_1671']} MAG at $p=0.5\\%$ is estimated as "
"$\\frac{" + f"{camp_mr_s}" + "}" + "{" + f"{bact1_mr_s}" + "}" + f" \\approx {bact1_fdr:.1f}\\%$."
)
output_and_print(naiveinfo, "misc-text/naive-calling-target-decoy.tex")
# +
bact1_fdr_using_camp = 100 * (camp_mr / bact1_mr)
bact1_fdr_using_campcp2 = 100 * (camp_cp2_mr / bact1_mr)
camp = seq2name["edge_6104"]
cp2info = (
    "For example, at the frequency threshold $p=0.5$\\%, there are only "
f"{len(camp_cp2_naive_p2called_mutations['0.5']):,} rare $p$-mutations in CP2 in the {camp} "
f"MAG, resulting in a mutation rate of ${camp_cp2_mr_s}$ for the decoy genome of just the "
f"{camp} CP2 positions (as compared to ${camp_mr_s}$ for the entire {camp} MAG). "
f"Using the {camp} CP2 positions as a new decoy genome, the estimate of the FDR for called mutations "
f"in the {seq2name['edge_1671']} MAG "
f"can now be reduced from {bact1_fdr_using_camp:.1f}\\% to {bact1_fdr_using_campcp2:.1f}\\%."
)
output_and_print(cp2info, "misc-text/camp-cp2-fdr.tex")
# -
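# The target-decoy FDR estimate used in both text outputs above boils down to one ratio: the decoy
# genome's mutation rate over the target genome's, as a percentage. A minimal sketch (the rates here
# are invented for illustration):

```python
# Minimal sketch of the target-decoy FDR estimate: decoy mutation rate divided by
# target mutation rate, expressed as a percentage. Rates below are invented.
def estimate_fdr_pct(decoy_rate, target_rate):
    return 100 * (decoy_rate / target_rate)

demo_decoy_rate = 2.0e-06   # hypothetical decoy mutation rate
demo_target_rate = 4.0e-05  # hypothetical target mutation rate
demo_fdr = estimate_fdr_pct(demo_decoy_rate, demo_target_rate)
```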
# ### Load LoFreq calls, and output info about FDR estimation for LoFreq to `misc-text/`
# #### Load LoFreq VCF info and compute various things
#
# **NOTE: this still only includes "rare" (i.e. not high-frequency, as defined above) mutations.** LoFreq can call either type of mutation, but we explicitly filter out high-frequency mutations here to make its calls comparable with the naive ones above.
# +
# Load LoFreq info
lofreq_calls = VariantFile("../seqs/lofreq.vcf")
# Record raw numbers of called mutations from LoFreq
camp_lofreq_mutations = set()
camp_cp2_lofreq_mutations = set()
bact1_lofreq_mutations = set()
bact2_lofreq_mutations = set()
# (and these things, which we'll use to plot an FDR curve)
# NOTE: this code is gross and duplicated because it's 7pm on a Friday; it should be cleaned up later
bact1_lofreq_p2num_called_mutations_i = {p: 0 for p in percentages}
bact1_lofreq_p2numpermb = {}
camp_lofreq_p2num_called_mutations_i = {p: 0 for p in percentages}
camp_lofreq_p2numpermb = {}
camp_cp2_lofreq_p2num_called_mutations_i = {p: 0 for p in percentages}
camp_cp2_lofreq_p2numpermb = {}
total_bact1_called_variants = 0
for c in lofreq_calls.fetch():
if pileup.is_position_rare(seq2pos2pileup[c.contig][c.pos]):
if c.contig == "edge_6104":
camp_lofreq_mutations.add(c.pos)
if c.pos in camp_cp2_pos:
camp_cp2_lofreq_mutations.add(c.pos)
elif c.contig == "edge_1671":
bact1_lofreq_mutations.add(c.pos)
total_bact1_called_variants += 1
elif c.contig == "edge_2358":
bact2_lofreq_mutations.add(c.pos)
print(f"According to LoFreq, there are {len(camp_lofreq_mutations):,} rare mutations in CAMP.")
print(f"... and {len(camp_cp2_lofreq_mutations):,} rare mutations in CAMP CP 2.")
print(f"... and {len(bact1_lofreq_mutations):,} rare mutations in BACT1.")
print(f"... and {len(bact2_lofreq_mutations):,} rare mutations in BACT2.")
if total_bact1_called_variants != len(bact1_lofreq_mutations):
print("-" * 79)
print(
f"NOTE: There were actually {total_bact1_called_variants:,} variants total in BACT1 -- due to some\n"
"variants being called at the same position. The code accounts for this, don't worry."
)
print("-" * 79)
print("Being slow and assigning p2num_called_mutations_i for BACT1 and CAMP based on LoFreq...")
# NOTE: iterating lazily over all percentages is a very terrible, slow way to do this.
# Possible to make it much faster if we sort the percentages in advance, probs...
# if this is a bottleneck for you please go send marcus an angry email, sorry 2021 is a rough year
for p in percentages:
# consider each UNIQUE position with a called variant -- positions with > 1 called variant will
# only get one pass in this loop. this is intentional.
#
# so basically, if you're me in a year and you're like "what is this code even doing", the idea is
# that a given point in the FDR curve only includes variants with mutation frequency >= that point,
# since we're adjusting "p" to create this FDR curve. This is essentially* the same as how the naive
# mutation calling works, so we can reuse that on the LoFreq calls, if this isn't too convoluted.
# (we could also use the reported AF [allele frequency] from LoFreq instead of the pileup thing, but
# i figure we may as well be consistent since we're collapsing things to single positions anyway.)
#
# *we use min_alt_pos = 0 here because that's only used when filtering naive calling, and these
# calls are from LoFreq -- so we're just varying them based on alt(pos) while ignoring the other stuff.
for pos in bact1_lofreq_mutations:
if pileup.naively_call_mutation(seq2pos2pileup["edge_1671"][pos], p, min_alt_pos=0):
bact1_lofreq_p2num_called_mutations_i[p] += 1
# same as above
for pos in camp_lofreq_mutations:
if pileup.naively_call_mutation(seq2pos2pileup["edge_6104"][pos], p, min_alt_pos=0):
camp_lofreq_p2num_called_mutations_i[p] += 1
for pos in camp_cp2_lofreq_mutations:
if pileup.naively_call_mutation(seq2pos2pileup["edge_6104"][pos], p, min_alt_pos=0):
camp_cp2_lofreq_p2num_called_mutations_i[p] += 1
print("Phew, done with that.")
# Compute numbers of mutations per megabase
camp_lofreq_numpermb = compute_num_mutations_per_mb(len(camp_lofreq_mutations), seq2len["edge_6104"])
camp_cp2_lofreq_numpermb = compute_num_mutations_per_mb(len(camp_cp2_lofreq_mutations), num_camp_cp2_pos)
bact1_lofreq_numpermb = compute_num_mutations_per_mb(len(bact1_lofreq_mutations), seq2len["edge_1671"])
bact2_lofreq_numpermb = compute_num_mutations_per_mb(len(bact2_lofreq_mutations), seq2len["edge_2358"])
# Create p2num_called_mutations for BACT1 using LoFreq.
for p in percentages:
bact1_num_called_mutations = bact1_lofreq_p2num_called_mutations_i[p]
bact1_lofreq_p2numpermb[str(p)] = compute_num_mutations_per_mb(bact1_num_called_mutations, seq2len["edge_1671"])
camp_num_called_mutations = camp_lofreq_p2num_called_mutations_i[p]
camp_lofreq_p2numpermb[str(p)] = compute_num_mutations_per_mb(camp_num_called_mutations, seq2len["edge_6104"])
camp_cp2_num_called_mutations = camp_cp2_lofreq_p2num_called_mutations_i[p]
camp_cp2_lofreq_p2numpermb[str(p)] = compute_num_mutations_per_mb(camp_cp2_num_called_mutations, num_camp_cp2_pos)
# Compute FDRs!
bact1_fdr_lofreq_camp_decoy = 100 * (camp_lofreq_numpermb / bact1_lofreq_numpermb)
bact1_fdr_lofreq_camp_cp2_decoy = 100 * (camp_cp2_lofreq_numpermb / bact1_lofreq_numpermb)
bact2_fdr_lofreq_camp_decoy = 100 * (camp_lofreq_numpermb / bact2_lofreq_numpermb)
bact2_fdr_lofreq_camp_cp2_decoy = 100 * (camp_cp2_lofreq_numpermb / bact2_lofreq_numpermb)
print(
"Just using the LoFreq calls, the FDR for BACT1 using CAMP as a decoy is "
f"{bact1_fdr_lofreq_camp_decoy:.2f}%."
)
print(
"Just using the LoFreq calls, the FDR for BACT1 using CAMP CP 2 as a decoy is "
f"{bact1_fdr_lofreq_camp_cp2_decoy:.2f}%."
)
print(
"Just using the LoFreq calls, the FDR for BACT2 using CAMP as a decoy is "
f"{bact2_fdr_lofreq_camp_decoy:.2f}%."
)
print(
"Just using the LoFreq calls, the FDR for BACT2 using CAMP CP 2 as a decoy is "
f"{bact2_fdr_lofreq_camp_cp2_decoy:.2f}%."
)
print("-" * 79)
print("NOTE: I computed the above FDRs using the # of muts per mb, instead of the new mut rate thing.")
print(
"The approaches are equivalent and we're just printing this out, not directly writing it to a "
"LaTeX file or anything, so i'm not gonna bother updating this. but it ideally should be updated "
"if for no other reason than to make this notebook internally consistent for when i come back to "
"this code in a year and i'm like HURK"
)
# -
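# The per-p counting loop above boils down to thresholding each unique position's alternate-allele
# frequency. A minimal sketch, with plain frequencies standing in for pileup.naively_call_mutation()
# (positions, frequencies, and thresholds here are all invented):

```python
# Sketch of the per-p counting above; a position counts toward threshold p if its
# (invented) frequency is at least p, mirroring how the FDR-curve points are built.
pos2freq_demo = {10: 3.0, 25: 0.8, 40: 0.8, 55: 0.3}  # unique position -> frequency (%)
demo_percentages = [2.0, 0.5, 0.25]                   # thresholds, descending like `percentages`

p2num_called_demo = {p: 0 for p in demo_percentages}
for p_demo in demo_percentages:
    for pos_demo, freq_demo in pos2freq_demo.items():
        if freq_demo >= p_demo:
            p2num_called_demo[p_demo] += 1
```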
# #### Compute naïve mutation calling info for $p = 2\%$ mutations
#
# Because it turns out these are pretty similar to what LoFreq calls.
# +
nc2 = camp_naive_p2called_mutations["2.0"]
nb12 = bact1_naive_p2called_mutations["2.0"]
nb22 = bact2_naive_p2called_mutations["2.0"]
fdr_num_n2 = camp_naive_p2numpermb["2.0"]
fdr_den_n2 = bact1_naive_p2numpermb["2.0"]
fdr_n2 = 100 * (fdr_num_n2 / fdr_den_n2)
print(f"Estimated BACT1 FDR (with old #/mb method) at p = 2% is {fdr_num_n2:,.2f} / {fdr_den_n2:,.2f} = {fdr_n2:.2f}%")
seq2naive2ct = {}
seq2overlapct = {}
for seq, naive_2pct_muts, lf_muts in (
("edge_6104", nc2, camp_lofreq_mutations),
("edge_1671", nb12, bact1_lofreq_mutations),
("edge_2358", nb22, bact2_lofreq_mutations)
):
# Print out info to prove to myself that yes, p=2% and LoFreq are pretty similar!
nl = len(naive_2pct_muts)
ll = len(lf_muts)
print(f"{seq2name[seq]} has {nl:,} p = 2% p-mutations and {ll:,} LoFreq-called mutations.")
naive_lf_overlap = set(naive_2pct_muts) & set(lf_muts)
ol = len(naive_lf_overlap)
print(f"\tNumber of overlapping calls btwn (naive calling at p=2%) and (LoFreq): {ol:,}")
print(f"\tPercentage of (Overlap / LoFreq): {100 * (ol / ll):.2f}%")
# ... Save this info to dicts to make it easy to output in the misc text notebook below.
# I guess we could do that here instead but, uh, whatever, no reason to make this more complicated
# than it already is. I don't think we have enough coffee for that.
seq2naive2ct[seq] = nl
seq2overlapct[seq] = ol
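# The overlap computation in the loop above is just a set intersection plus a percentage;
# here is a self-contained sketch with invented position sets standing in for the real calls.

```python
# Self-contained sketch of the naive-vs-LoFreq overlap computation (invented positions).
naive_demo = {5, 12, 30, 47, 88}
lofreq_demo = {12, 30, 88, 91}
overlap_demo = naive_demo & lofreq_demo
pct_of_lofreq_demo = 100 * (len(overlap_demo) / len(lofreq_demo))
```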
# +
# Output info -- analogous to the naive info output above.
# ... BUT WITH MORE STUFF since we're comparing LoFreq to the naive method at p=2%
lf_camp_num = len(camp_lofreq_mutations)
lf_bact1_num = len(bact1_lofreq_mutations)
lf_bact2_num = len(bact2_lofreq_mutations)
camp_lofreq_mr = get_mr_direct(len(camp_lofreq_mutations), seq2len["edge_6104"])
camp_cp2_lofreq_mr = get_mr_direct(len(camp_cp2_lofreq_mutations), num_camp_cp2_pos)
bact1_lofreq_mr = get_mr_direct(len(bact1_lofreq_mutations), seq2len["edge_1671"])
bact2_lofreq_mr = get_mr_direct(len(bact2_lofreq_mutations), seq2len["edge_2358"])
camp_lofreq_mr_s = scinot(camp_lofreq_mr)
bact1_lofreq_mr_s = scinot(bact1_lofreq_mr)
bact2_lofreq_mr_s = scinot(bact2_lofreq_mr)
camp_cp2_lofreq_mr_s = scinot(camp_cp2_lofreq_mr)
# Just because we use this a lot...!
seqnames = f"{seq2name['edge_6104']}, {seq2name['edge_1671']}, and {seq2name['edge_2358']}"
# PART 1: LoFreq vs. 2% calls comparison
lfinfo = (
f"LoFreq called {lf_camp_num:,}, {lf_bact1_num:,}, and {lf_bact2_num:,} rare mutations "
f"in the {seqnames} MAGs, "
    f"respectively. At the frequency threshold $p = 2\\%$, NaiveFreq "
r'called a similar number of rare $p$-mutations '
f"({seq2naive2ct['edge_6104']:,}, {seq2naive2ct['edge_1671']:,}, and {seq2naive2ct['edge_2358']:,} for "
f"{seqnames}, respectively). It turns out that the sets of rare mutations identified by LoFreq and by "
r'NaiveFreq at $p = 2\%$ are somewhat similar: the numbers of overlapping rare mutations between these '
f"groups are {seq2overlapct['edge_6104']:,}, {seq2overlapct['edge_1671']:,}, and "
f"{seq2overlapct['edge_2358']:,} for {seqnames}. This suggests that, at least for this dataset, "
    f"LoFreq primarily detected rare mutations with frequency of at least 2\\%. Here, we describe an analysis of "
f"FDRs which suggests that there exist many more lower-frequency rare mutations.\n\n"
)
# PART 2: LoFreq vs. 2% calls FDR: BACT2
lfinfo += (
f"Using LoFreq's calls, the mutation rates for each MAG are "
f"${camp_lofreq_mr_s}$, ${bact1_lofreq_mr_s}$, and ${bact2_lofreq_mr_s}$ "
f"for {seqnames}, respectively. "
f"We can estimate the FDR of LoFreq's calls for the {seq2name['edge_2358']} MAG (using the "
f"{seq2name['edge_6104']} MAG as a decoy) as "
"$\\frac{"
f"{camp_lofreq_mr_s}"
"}{"
f"{bact2_lofreq_mr_s}"
"}"
f" \\approx {100 * (camp_lofreq_mr / bact2_lofreq_mr):.1f}\\%$, a very large FDR, indicating that either most identified "
f"mutations are false or that selection of the {seq2name['edge_6104']} MAG as the decoy results in a "
f"highly inflated estimate of the FDR. Although NaiveFreq's calls at the frequency threshold $p = 2\\%$ "
f"result in a lower estimated FDR of "
"$\\frac{"
f"{scinot(get_mr('edge_6104', '2.0'))}"
"}{"
f"{scinot(get_mr('edge_2358', '2.0'))}"
"}"
f" \\approx {100 * (get_mr('edge_6104', '2.0') / get_mr('edge_2358', '2.0')):.1f}\\%$ "
f"for the {seq2name['edge_2358']} MAG, this is still a high FDR that raises concerns about downstream "
f"analyses such as phasing.\n\n"
)
# PART 3: LoFreq FDR: BACT1
lfinfo += (
f"On the other hand, the estimated FDR of LoFreq's calls for the {seq2name['edge_1671']} MAG (still "
f"using the {seq2name['edge_6104']} MAG as a decoy) is only "
"$\\frac{"
f"{camp_lofreq_mr_s}"
"}{"
f"{bact1_lofreq_mr_s}"
"} "
f"\\approx {100 * (camp_lofreq_mr / bact1_lofreq_mr):.1f}\\%$; "
)
# PART 4: 2% FDR: BACT1
lfinfo += (
r"NaiveFreq at the frequency threshold of $p = 2\%$ has a slightly lower estimated FDR of "
"$\\frac{"
f"{scinot(get_mr('edge_6104', '2.0'))}"
"}{"
f"{scinot(get_mr('edge_1671', '2.0'))}"
"} "
f"\\approx {100 * (get_mr('edge_6104', '2.0') / get_mr('edge_1671', '2.0')):.1f}\\%$. "
r'Although both LoFreq and NaiveFreq at $p = 2\%$ result in the reliable identification of '
"rare mutations with low FDR, we are still interested in extending the set of identified rare mutations while "
r'controlling the FDR. For example, lowering the frequency threshold of NaiveFreq to '
    f"$p = 0.5\\%$ results in the identification of {len(bact1_naive_p2called_mutations['0.5']):,} rare mutations in the "
    f"{seq2name['edge_1671']} MAG (an additional "
    f"{len(bact1_naive_p2called_mutations['0.5']) - seq2naive2ct['edge_1671']:,} rare mutations as compared to $p = 2\\%$) "
"with a higher but still relatively low FDR estimate of "
f"{100 * (get_mr('edge_6104', '0.5') / get_mr('edge_1671', '0.5')):.1f}\\%."
)
output_and_print(lfinfo, "misc-text/lofreq-target-decoy.tex")
# -
# ## Plot estimated BACT1 FDR vs. scaled number of identified (rare) mutations
#
# Previous versions of this notebook only drew one FDR curve at a time; now, this function accepts multiple decoy genome `p2numpermb` objects.
DECOY_CAMP_COLOR = "#008800"
DECOY_CAMP_CP2_COLOR = cp2color[2]
def plot_bact1_fdr(
target_p2numpermb, decoy_p2numpermbs, colors, shapes, decoy_labels, fig_basename,
use_log=True, start_p=None, end_p=None, show_p_labels=False,
special_p_markers=[2, 0.22, 0.15],
titleprefix=f"{seq2name['edge_1671']} FDR curves based on na\u00efve rare $p$-mutation calling"
):
"""Plots FDR curves with some fancy annotations.
------------
NOTE that this still computes the FDR as (# muts per mb in decoy) / (# muts per mb in target),
rather than the "mutation rate" approach. The two approaches are equivalent (at least, for the
naive and CP2 decoy genomes); where they start to differ is the nonsyn/nonsense stuff, which is done
specially later below anyway. So I'm not gonna bother updating this for now.
------------
target_p2numpermb is analogous to bact1_naive_p2numpermb. This maps values of p (some percentage
that you're using as a cutoff for calling mutations) to the number of called mutations per megabase
for some genome. (This is currently assumed to be BACT1, because of e.g. the title we use for this
figure, but really you could draw these sorta curves for any genome that you have variant calls for.)
decoy_p2numpermbs, colors, shapes, and decoy_labels should all be collections of identical length
(this lets you pass in and style multiple decoy genomes to be shown on the same plot). So, for example,
you can show how different decoy genomes result in different FDR curves for the same target genome.
"""
fig, ax = pyplot.subplots(1)
for di, decoy_p2numpermb in enumerate(decoy_p2numpermbs):
if start_p is None:
start_p = percentages[0]
if end_p is None:
end_p = percentages[-1]
# This is all the percentages we HOPE to use
attempting_to_use_percentages = percentages[percentages.index(start_p) : percentages.index(end_p) + 1]
# This is all the percentages we CAN use (the FDR is d/t, so if t is 0 then we can't show that FDR...)
used_percentages = []
p2bact1fdr = {}
for p in attempting_to_use_percentages:
d = decoy_p2numpermb[str(p)]
t = target_p2numpermb[str(p)]
if t != 0:
p2bact1fdr[p] = 100 * (d / t)
used_percentages.append(p)
# FDR
x = []
# number of mutations per megabase
y = []
# list of 2-tuples of (x,y). we'll highlight these points.
special_xys = []
for p in used_percentages:
cx = p2bact1fdr[p]
cy = target_p2numpermb[str(p)]
x.append(cx)
y.append(cy)
if show_p_labels:
# add labels (manually positioned). yeah, i know i know
# we only show these labels once per plot (to avoid the text overlapping itself
# from doing this multiple times)
dy = None
dx = None
if use_log:
if p == 2: dy = -500; dx = 0.005
elif p == 0.22: dy = -4000; dx = -0.008
elif p == 0.15: dy = -6000; dx = -0.008
else:
if p == 2: dy = 0; dx = 0.005
elif p == 0.22: dy = -200; dx = 0.005
elif p == 0.15: dy = 300; dx = -0.008
if dy is not None:
                    if p >= 1:
                        text = f"$p = {p:.0f}\\%$"
                    elif p >= 0.5:
                        text = f"$p = {p:.1f}\\%$"
                    else:
                        text = f"$p = {p:.2f}\\%$"
if di == 0:
ax.text(cx + dx, cy + dy, text)
if p in special_p_markers:
special_xys.append((cx, cy))
ax.plot(x, y, marker=shapes[di], color=colors[di], label=decoy_labels[di])
if len(special_xys) > 0:
ax.scatter([xy[0] for xy in special_xys], [xy[1] for xy in special_xys], color="#ffff00", zorder=2000, s=20)
ax.set_xlabel(f"Estimated FDR for called rare $p$-mutations in {seq2name['edge_1671']} (%)")
ax.set_ylabel(f"Number of called rare $p$-mutations per megabase in {seq2name['edge_1671']}")
title = (
f"{titleprefix},\nusing {len(used_percentages):,} values of $p$ from {max(used_percentages):.2f}% to {min(used_percentages):.2f}%"
)
if use_log:
ax.set_yscale("symlog")
title += " (log scale)"
else:
title += " (non-log scale)"
ax.set_title(title, fontsize=20)
ax.legend()
use_thousands_sep(ax.yaxis)
fig.set_size_inches(15, 8)
fig.savefig(f"figs/{fig_basename}.png", bbox_inches="tight")
plot_bact1_fdr(
bact1_naive_p2numpermb,
(camp_naive_p2numpermb, camp_cp2_naive_p2numpermb),
(DECOY_CAMP_COLOR, DECOY_CAMP_CP2_COLOR),
("s", "o"),
(
f"Decoy Genome: all {seq2len['edge_6104']:,} positions in {seq2name['edge_6104']}",
f"Decoy Genome: only the {num_camp_cp2_pos:,} CP 2 positions in {seq2name['edge_6104']}",
),
"BACT1_FDR_CAMP_decoy",
show_p_labels=True
)
# plot_bact1_fdr(
# (camp_naive_p2numpermb, camp_cp2_naive_p2numpermb),
# ("#005500", cp2color[2]),
# ("s", "o"),
# (
# f"Decoy Genome: all {seq2len['edge_6104']:,} positions in {seq2name['edge_6104']}",
# f"Decoy Genome: only the {num_camp_cp2_pos:,} CP 2 positions in {seq2name['edge_6104']}",
# ),
# "BACT1_FDR_CAMP_decoy_max2",
# start_p=2
# )
plot_bact1_fdr(
bact1_naive_p2numpermb,
(camp_naive_p2numpermb, camp_cp2_naive_p2numpermb),
(DECOY_CAMP_COLOR, DECOY_CAMP_CP2_COLOR),
("s", "o"),
(
f"Decoy Genome: all {seq2len['edge_6104']:,} positions in {seq2name['edge_6104']}",
f"Decoy Genome: only the {num_camp_cp2_pos:,} CP 2 positions in {seq2name['edge_6104']}",
),
"BACT1_FDR_CAMP_decoy_nonlog_max2",
use_log=False,
start_p=2,
show_p_labels=True
)
# ### sanity checking
#
# because the lofreq fdr curve looks weird. but it seems legit.
#
# causal factors for aforementioned weirdness:
#
# - although CAMP CP 2 is strictly smaller (i.e. includes fewer positions) than all of CAMP, we're not computing the FDR using just raw position counts -- we're using the number per megabase, so it's possible for CAMP CP 2's FDRs to exceed those of all of CAMP at a fixed value of p.
#
# - the "jumps" seem to correspond to values of p where we pick up new mutations in the decoy genome. Since there are very few of these identified by LoFreq in CAMP / CAMP CP 2, their impact on the graph is very noticeable.
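# A tiny numeric illustration of the first bullet above (all counts are invented): a region with
# fewer positions can still have a higher per-megabase mutation count than the full genome.

```python
# Invented counts: fewer positions, higher per-megabase rate.
def per_mb_demo(num_muts, num_positions):
    return num_muts / (num_positions / 1e6)

camp_like_rate = per_mb_demo(30, 3_000_000)  # 30 muts over 3 Mbp of positions -> 10 per Mb
cp2_like_rate = per_mb_demo(12, 600_000)     # 12 muts over 0.6 Mbp of positions -> 20 per Mb
```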
# +
# print("According to LoFreq:")
# for pi in range(500, 400, -1):
# try:
# fdr = camp_lofreq_p2numpermb[str(pi / 100)] / bact1_lofreq_p2numpermb[str(pi / 100)]
# except ZeroDivisionError:
# fdr = None
# print(
# f"At p = {pi / 100}%, CAMP has {camp_lofreq_p2num_called_mutations_i[pi / 100]} rare muts. "
# f"BACT1 has {bact1_lofreq_p2num_called_mutations_i[pi / 100]} rare muts. "
# f"FDR est is {fdr}."
# )
# -
plot_bact1_fdr(
bact1_lofreq_p2numpermb,
(camp_lofreq_p2numpermb, camp_cp2_lofreq_p2numpermb),
(DECOY_CAMP_COLOR, DECOY_CAMP_CP2_COLOR),
("s", "o"),
(
f"Decoy Genome: all {seq2len['edge_6104']:,} positions in {seq2name['edge_6104']}",
f"Decoy Genome: only the {num_camp_cp2_pos:,} CP 2 positions in {seq2name['edge_6104']}",
),
"BACT1_FDR_CAMP_decoy_lofreq",
titleprefix="BACT1 FDR curves based on LoFreq calls of rare mutations",
special_p_markers=[]
)
# ## Show how adjusting $p$ adjusts the number of $p$-mutations in a genome (and the FDR, using the T/D approach)
# There is the possibility that we can't compute the FDR for certain values of p -- if a genome has 0 mutations
# for this value of p. In this case, we label these points with a color of None. In order to force matplotlib to
# draw a gray circle for these points, instead of just leaving them blank, we take the default viridis colormap
# and apply the set_bad() method to it to use this gray color. This, in conjunction with the "plotnonfinite"
# argument to scatter(), lets us draw gray circles for values of p with an undefined FDR (see
# https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.scatter.html).
viridis_cmap = copy.copy(matplotlib.cm.get_cmap("viridis"))
viridis_cmap.set_bad("#888888")
from matplotlib.ticker import MultipleLocator
# +
seq2p2numpermb = {
"edge_6104": camp_naive_p2numpermb,
"edge_1671": bact1_naive_p2numpermb,
"edge_2358": bact2_naive_p2numpermb,
"CAMP CP 2": camp_cp2_naive_p2numpermb
}
# Used elsewhere, e.g. coverage/length graph component summary
seq2color = {"edge_6104": "#00cc00", "edge_1671": "#ff0000", "edge_2358": "#880088", "CAMP CP 2": "#888888"}
def p_vs_num_muts_plot(leftmost_p=percentages[0], rightmost_p=percentages[-1],
fdr_vmin=0, fdr_vmax=100, ylogscale=False, fig_basename=None,
scatterplot_kwargs={"zorder": 20, "edgecolors": "#000000"},
text2info={}, suptitle_to_add=None):
"""Draws one of these plots, given a configurable range of p and FDR.
Sorry the documentation for this isn't great. See the uses of this below -- should be mostly
self-explanatory.
And just for future reference, the text2info parameter maps text to display to a
3-tuple of (x, y, text color). The x and y are in "data coordinates" in matplotlib parlance,
I think.
"""
# The edgecolor thing is used to give the scatterplot points a black border:
# see https://stackoverflow.com/a/50707047.
fig, axes = pyplot.subplots(2, 1, gridspec_kw={"hspace": 0.5, "height_ratios": [4, 0.2]})
used_percentages = percentages[percentages.index(leftmost_p):percentages.index(rightmost_p) + 1]
for si, seq in enumerate(["edge_6104", "edge_1671", "edge_2358", "CAMP CP 2"]):
# Reuse the already-called p2numpermb dicts to save time in setting the y-axis values.
# If you'd prefer to plot the raw number of called mutations (rather than the number of
# called mutations *per megabase*), you can use y = [len(p2called_mutations[str(p)]) for p ...]
# instead.
p2numpermb = seq2p2numpermb[seq]
y = [p2numpermb[str(p)] for p in used_percentages]
# Two phases: 1) line plot, 2) scatter plot. This lets us color points by FDR.
axes[0].plot(used_percentages, y, c=seq2color[seq])
if seq != "edge_6104" and seq != "CAMP CP 2":
fdrs = []
for p in used_percentages:
if seq2p2numpermb[seq][str(p)] > 0:
f = 100 * (camp_naive_p2numpermb[str(p)] / seq2p2numpermb[seq][str(p)])
else:
# draw a gray dot for this value of p since we can't compute the FDR (due to the
# denominator, i.e. the number of p-mutations per megabase in the "target" genome, being 0)
f = None
fdrs.append(f)
# https://stackoverflow.com/a/8204981
# We generate two sc objects (results of calling .scatter()), but we only need one of them: since
# vmin and vmax are both fixed for both the BACT1 and BACT2 MAGs, the colorbar will be applicable
# for both.
sc = axes[0].scatter(
used_percentages, y, c=fdrs, cmap=viridis_cmap, vmin=fdr_vmin, vmax=fdr_vmax, plotnonfinite=True,
**scatterplot_kwargs
)
else:
# Plot scatterplot points for CAMP or CAMP CP2,
# where we can't really estimate a FDR (since we're already using CAMP as a decoy)
axes[0].scatter(used_percentages, y, c="#888888", **scatterplot_kwargs)
# important to set y scale before calling use_thousands_sep() -- otherwise, the 10^n notation
# will trump the thousands separator stuff
if ylogscale:
axes[0].set_yscale("symlog")
else:
# hack hack hack this will break if you use this with other data :S
# this is done so that we can use the pretty MAG labels for each curve
axes[0].set_ylim(-4000, 25000)
use_thousands_sep(axes[0].yaxis)
# alter the amount of padding used based on the number of percentages we have. Use less padding if
# there are only a few percentage points shown.
padding = 1e-4 * len(used_percentages)
axes[0].set_xlim(leftmost_p + padding, rightmost_p - padding)
if leftmost_p <= 1:
axes[0].xaxis.set_major_locator(MultipleLocator(0.1))
axes[0].set_ylabel("Called rare $p$-mutations per megabase", fontsize=14)
axes[0].set_xlabel("Value of $p$ used to call rare $p$-mutations (%)", fontsize=16)
num_p_vals = len(used_percentages)
    # float.is_integer() tells us whether a float like 2.0 is "actually" an integer,
    # so we can format it without a decimal point.
    if float(leftmost_p).is_integer():
        lps = f"{leftmost_p:.0f}%"
    else:
        lps = f"{leftmost_p:.2f}%"
p_range_txt = f" using {num_p_vals:,} values of $p \\in$ [{lps}, {rightmost_p:.2f}%]"
title = "$p$ vs. number of called rare $p$-mutations per megabase"
# The p range text should either go with the suptitle, or in the normal title. Depends on how
# we wanna show this figure (either on top of an FDR curve, or by itself).
if suptitle_to_add is None:
title += f",\n{p_range_txt}"
else:
fig.suptitle(f"{suptitle_to_add},{p_range_txt}", x=0.5, y=1.03, fontsize=23)
axes[0].set_title(title, fontsize=20)
# Make it "easy" (well, still involves manual positioning, but eh) to label curves in the plot
for text in text2info:
info = text2info[text]
axes[0].text(info[0], info[1], text, color=info[2], fontweight="semibold")
fig.set_size_inches(15, 6)
fig.colorbar(sc, cax=axes[-1], orientation="horizontal")
fdr_desc = ""
if fdr_vmin != 0:
fdr_desc = f"FDRs < {fdr_vmin}% are clamped to {fdr_vmin}%; "
fdr_desc += f"FDRs > {fdr_vmax}% are clamped to {fdr_vmax}%"
axes[-1].set_xlabel(f"Color scaling applied to BACT1 and BACT2 estimated FDRs (using all of CAMP as a decoy)\n{fdr_desc}", fontsize=14)
if fig_basename is not None:
fig.savefig(f"figs/{fig_basename}.png", bbox_inches="tight")
# -
p_vs_num_muts_plot(
leftmost_p=4.99,
ylogscale=True, fig_basename="p_vs_num_muts_log_clamp",
fdr_vmin=0, fdr_vmax=10,
text2info={
seq2name['edge_6104']: (1.195, 47, "#00cc00"),
seq2name['edge_1671']: (1.22, 12000, "#dd0000"),
seq2name['edge_2358']: (1.22, 200, seq2color["edge_2358"]),
"CAMP CP2": (1.345, 5.5, "#888888"),
}
)
p_vs_num_muts_plot(
leftmost_p=2,
ylogscale=False, fig_basename="p_vs_num_muts_nonlog_clamp",
fdr_vmin=0, fdr_vmax=10,
text2info={
seq2name['edge_6104']: (0.205, 200, "#00cc00"),
seq2name['edge_1671']: (0.215, 20200, "#dd0000"),
seq2name['edge_2358']: (0.215, 10200, seq2color["edge_2358"]),
"CAMP CP2": (0.26, -1300, "#888888"),
},
suptitle_to_add="Impacts of $p$ on na\u00efve $p$-mutation calling"
)
# ## FDR curves using nonsynonymous mutation rates and nonsense mutation rates
# ### Load info about syn vs nonsyn and non-nonsense vs nonsense mutations from files
#
# **NOTE:** this is the stuff we output from the `SynAndNonsenseMutationRateBarplots-PositionBased.ipynb` notebook, not the old codon-based one. might just remove the old one eventually to avoid confusion.
# +
def load_pf(basename):
with open(f"misc-output/{basename}.pickle", "rb") as pf:
return pickle.load(pf)
# All gene positions (after some checks)
p2seq2obs_si = load_pf("pos_p2seq2obs_si")
p2seq2obs_ni = load_pf("pos_p2seq2obs_ni")
p2seq2obs_nnsi = load_pf("pos_p2seq2obs_nnsi")
p2seq2obs_nsi = load_pf("pos_p2seq2obs_nsi")
seq2poss_si = load_pf("pos_seq2poss_si")
seq2poss_ni = load_pf("pos_seq2poss_ni")
seq2poss_nnsi = load_pf("pos_seq2poss_nnsi")
seq2poss_nsi = load_pf("pos_seq2poss_nsi")
### Just CP2 (after some checks)
p2seq2obs_cp2_si = load_pf("pos_p2seq2obs_cp2_si")
p2seq2obs_cp2_ni = load_pf("pos_p2seq2obs_cp2_ni")
p2seq2obs_cp2_nnsi = load_pf("pos_p2seq2obs_cp2_nnsi")
p2seq2obs_cp2_nsi = load_pf("pos_p2seq2obs_cp2_nsi")
seq2poss_cp2_si = load_pf("pos_seq2poss_cp2_si")
seq2poss_cp2_ni = load_pf("pos_seq2poss_cp2_ni")
seq2poss_cp2_nnsi = load_pf("pos_seq2poss_cp2_nnsi")
seq2poss_cp2_nsi = load_pf("pos_seq2poss_cp2_nsi")
# +
def compute_fancy_decoy_mr(decoy, p, method):
if method == "nonsyn":
rate = p2seq2obs_ni[p][decoy] / seq2poss_ni[decoy]
elif method == "nonsense":
rate = p2seq2obs_nsi[p][decoy] / seq2poss_nsi[decoy]
elif method == "nonsyn cp2":
rate = p2seq2obs_cp2_ni[p][decoy] / seq2poss_cp2_ni[decoy]
elif method == "nonsense cp2":
rate = p2seq2obs_cp2_nsi[p][decoy] / seq2poss_cp2_nsi[decoy]
else:
raise ValueError("unrecognized method")
return rate
def compute_fancy_fdr(target, p, method, decoy="edge_6104"):
"""Computes the FDR using nonsyn or nonsense mutations as the decoy genome.
...There's probably a better adjective than "fancy" for describing these particular types of mutations,
but I can't think of it right now, so I'm leaving that as an exercise for the reader.
"""
# target and decoy should both be seq names
# method should be "nonsyn" or "nonsense"
decoy_mr = compute_fancy_decoy_mr(decoy, p, method)
target_mr = get_mr(target, str(p))
return (decoy_mr / target_mr)
# -
# ### Misc. text FDR example using nonsynonymous rate
rn_fdr = compute_fancy_fdr("edge_1671", 0.5, "nonsyn")
rn_outputtext = (
    f"For the SheepGut dataset at $p = 0.5\\%$, and using the potential "
    f"nonsynonymous mutations in {seq2name['edge_6104']} as a decoy, "
    f"we would estimate the FDR for the {seq2name['edge_1671']} MAG as "
    "$\\frac{"
    f"{scinot(compute_fancy_decoy_mr('edge_6104', 0.5, 'nonsyn'))}"
    "}{"
    f"{scinot(get_mr('edge_1671', '0.5'))}"
    "}"
    f" \\approx {100 * rn_fdr:.1f}\\%$."
)
output_and_print(rn_outputtext, "misc-text/bact1-rs-rn-fdr.tex")
# ### Misc. text information about BACT2's estimated FDR using a context-dependent target-decoy approach
# +
# just for convenience's sake
c = seq2name['edge_6104']
b1 = seq2name['edge_1671']
b2 = seq2name['edge_2358']
cmr = get_mr("edge_6104", "0.5")
ccp2mr = get_mr("camp cp2", "0.5")
b2mr = get_mr("edge_2358", "0.5")
c_rn_fdr = 100 * compute_fancy_fdr("edge_2358", 0.5, "nonsyn")
c_nonsyn_mr = compute_fancy_decoy_mr("edge_6104", 0.5, "nonsyn")
c_rns_fdr = 100 * compute_fancy_fdr("edge_2358", 0.5, "nonsense")
c_nonsense_mr = compute_fancy_decoy_mr("edge_6104", 0.5, "nonsense")
bact2fdrinfo = (
f"Our initially described target-decoy approach estimated a very high FDR for the {b2} MAG:\n"
)
bact2fdrinfo += (
"at $p = 0.5\%$, NaiveFreq identified "
f"mutation rates of ${scinot(cmr)}$ and "
f"${scinot(b2mr)}$ for the {c} and {b2} MAGs, resulting in an "
"initial FDR estimate of $\\frac{"
f"{scinot(cmr)}"
"}{"
f"{scinot(b2mr)}"
"}"
f" \\approx {100 * (cmr / b2mr):.1f}\\%$.\n"
)
bact2fdrinfo += (
f"%\nHere we show how applying context-dependent target-decoy approaches changes "
f"the estimated FDR of identified mutations in {b2}.\n"
)
bact2fdrinfo += (
f"%\nUsing only the CP2 mutations in CAMP as a decoy genome (with a mutation rate of ${scinot(ccp2mr)}$), "
"we can reduce our FDR estimate to $\\frac{"
f"{scinot(ccp2mr)}"
"}{"
f"{scinot(b2mr)}"
"}"
f" \\approx {100 * (ccp2mr / b2mr):.1f}\\%$.\n"
)
bact2fdrinfo += (
f"%\nUsing all possible nonsynonymous mutations in {c} as a decoy genome (with a mutation rate of "
f"${scinot(c_nonsyn_mr)}$), we obtain a slightly higher estimate of "
"$\\frac{"
f"{scinot(c_nonsyn_mr)}"
"}{"
f"{scinot(b2mr)}"
"}"
f" \\approx {c_rn_fdr:.1f}\\%$.\n"
)
bact2fdrinfo += (
f"%\nFinally, using all possible nonsense mutations in {c} as a decoy genome (with a mutation rate of "
f"${scinot(c_nonsense_mr)}$), we obtain a much higher estimate of "
"$\\frac{"
f"{scinot(c_nonsense_mr)}"
"}{"
f"{scinot(b2mr)}"
"}"
f" \\approx {c_rns_fdr:.1f}\\%$.\n"
)
output_and_print(bact2fdrinfo, "misc-text/bact2-fdr-varyingdecoy-p0.5.tex")
# -
# ### Plot combined FDR curves
#
# Combining:
#
# - the CAMP naive + CAMP CP 2 decoy genome approaches above
# - the nonsyn + nonsense decoy approaches here
# +
fig, ax = pyplot.subplots(1)
# Highlight the points at these values of p
special_p = [2, 1, 0.5, 0.15]
special_xys = []
# x-axis: Estimated FDR
fdr_naive_campdecoy = []
fdr_cp2_campdecoy = []
fdr_nonsyn = []
fdr_nonsense = []
fdr_nonsyn_cp2 = []
fdr_nonsense_cp2 = []
# y-axis: Number of called p-mutations per megabase
num_called_pmuts_per_mb = []
# (same as above, just omitting the points where y = 0 and we thus can't estimate a FDR. lazy way of ensuring
# the x and y axes are in sync.)
y_naive = []
sn_pcts = [p / 100 for p in range(15, 201, 1)][::-1]
# p here runs from 0.15% up to 2.00% (in descending order), since we are
# only considering rare mutations
for p in sn_pcts:
y = bact1_naive_p2numpermb[str(p)]
num_called_pmuts_per_mb.append(y)
if y == 0:
raise ValueError("Zero in target in FDR curve???")
y_naive.append(y)
x_naive = 100 * (camp_naive_p2numpermb[str(p)] / y)
x_cp2 = 100 * (camp_cp2_naive_p2numpermb[str(p)] / y)
x_nonsyn = 100 * compute_fancy_fdr("edge_1671", p, "nonsyn")
x_nonsense = 100 * compute_fancy_fdr("edge_1671", p, "nonsense")
x_nonsyn_cp2 = 100 * compute_fancy_fdr("edge_1671", p, "nonsyn cp2")
x_nonsense_cp2 = 100 * compute_fancy_fdr("edge_1671", p, "nonsense cp2")
fdr_naive_campdecoy.append(x_naive)
fdr_cp2_campdecoy.append(x_cp2)
fdr_nonsyn.append(x_nonsyn)
fdr_nonsense.append(x_nonsense)
fdr_nonsyn_cp2.append(x_nonsyn_cp2)
fdr_nonsense_cp2.append(x_nonsense_cp2)
if p in special_p:
print(f"At p = {p}%, nonsyn fdr = {x_nonsyn} and nonsense fdr = {x_nonsense}. # p-muts per mb = {y}")
special_xys.append((x_naive, y, "o"))
special_xys.append((x_cp2, y, "^"))
special_xys.append((x_nonsyn, y, "D"))
special_xys.append((x_nonsense, y, "s"))
special_xys.append((x_nonsyn_cp2, y, "*"))
special_xys.append((x_nonsense_cp2, y, "p"))
if p == 2: dx = -0.21; dy = -600
if p == 1: dx = 0.03; dy = -500
if p == 0.5: dx = 0.03; dy = -1000
if p == 0.15: dx = -6.5; dy = 200
# EXTREMELY lazy way of formatting the special values of p. i'm sure there's a better
# way to do this in practice but this gets the job done.
if p == 2 or p == 1:
s = f"p = {int(p)}%"
elif p == 0.5:
s = f"p = {p:.1f}%"
else:
s = f"p = {p:.2f}%"
ax.text(x_naive + dx, y + dy, s)
ax.plot(fdr_naive_campdecoy, y_naive, marker="o", color="#888888",
label=f"Decoy genome: all of {seq2name['edge_6104']}")
ax.plot(fdr_cp2_campdecoy, y_naive, marker="^", color="#0099cc",
label=f"Decoy genome: CP2 positions in {seq2name['edge_6104']}")
ax.plot(fdr_nonsyn, num_called_pmuts_per_mb, marker="D", color="#cc3322",
label=f"Decoy genome: potential nonsynonymous mutations in {seq2name['edge_6104']}")
ax.plot(fdr_nonsyn_cp2, num_called_pmuts_per_mb, marker="*", color="#ab090c",
label=f"Decoy genome: potential nonsynonymous mutations in CP2 of {seq2name['edge_6104']}")
ax.plot(fdr_nonsense, num_called_pmuts_per_mb, marker="s", color="#9922cc",
label=f"Decoy genome: potential nonsense mutations in {seq2name['edge_6104']}")
ax.plot(fdr_nonsense_cp2, num_called_pmuts_per_mb, marker="p", color="#772277",
label=f"Decoy genome: potential nonsense mutations in CP2 of {seq2name['edge_6104']}")
for sp_xy in special_xys:
x = sp_xy[0]
y = sp_xy[1]
m = sp_xy[2]
ax.scatter(x, y, marker=m, color="#ffff00", zorder=2000, s=17)
ax.set_xlabel(
f"Estimated FDR for called rare $p$-mutations in {seq2name['edge_1671']} (%),\n"
r"using a log$_{10}$ scale to highlight order-of-magnitude differences in estimated FDRs across decoy genomes"
)
ax.set_ylabel("Number of called rare $p$-mutations per megabase")
title = "BACT1 FDR curves based on na\u00efve $p$-mutation calling"
ax.set_xscale("symlog")
ax.set_xticks([x / 10 for x in range(1, 20)] + [2.5, 12.5, 17.5], minor=True)
ax.set_xticks([0,1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20, 25, 30, 40, 50, 60, 70, 80])
# variant of use_thousands_sep() that doesn't give a weird numpy int64 error about is_integer()
ff = matplotlib.ticker.FuncFormatter(
lambda x, pos: "{:,}".format(int(x)) if type(x) == int else "{:,}".format(x)
)
ax.xaxis.set_major_formatter(ff)
ax.xaxis.set_minor_formatter(ff)
# Adjust the x-axis tick font sizes: https://stackoverflow.com/a/11386056 (yanked from div idx ntbk)
ax.tick_params(axis="x", which="major", labelsize=12)
ax.tick_params(axis="x", which="minor", labelsize=8)
ax.set_xlim(-0.1, 80)
ax.set_title(title, fontsize=20)
ax.legend()
use_thousands_sep(ax.yaxis)
fig.set_size_inches(15, 8)
fig.savefig(f"figs/bact1_fdr_curves.png", bbox_inches="tight")
# -
# ## FDR table
# +
# I know the fact that I have to do this in the first place is obscenely ugly, please forgive me -.-____-.-
p_vals = [2, 1, 0.5, 0.25, 0.15]
p_vals_s = ["2.0", "1.0", "0.5", "0.25", "0.15"]
with open("misc-text/camp-mr-table.tex", "w") as tf:
for pi in range(len(p_vals)):
naive_mr = get_mr("edge_6104", p_vals_s[pi])
cp2_mr = get_mr("camp cp2", p_vals_s[pi])
nonsyn_mr = compute_fancy_decoy_mr("edge_6104", p_vals[pi], "nonsyn")
nonsense_mr = compute_fancy_decoy_mr("edge_6104", p_vals[pi], "nonsense")
nonsyn_cp2_mr = compute_fancy_decoy_mr("edge_6104", p_vals[pi], "nonsyn cp2")
nonsense_cp2_mr = compute_fancy_decoy_mr("edge_6104", p_vals[pi], "nonsense cp2")
tf.write(
f"{p_vals[pi]}\% & ${scinot(naive_mr)}$ & "
f"${scinot(cp2_mr)}$ & ${scinot(nonsyn_mr)}$ & ${scinot(nonsyn_cp2_mr)}$ & "
f"${scinot(nonsense_mr)}$ & ${scinot(nonsense_cp2_mr)}$ \\\\ \\hline\n"
)
# -
num_ti = 0
num_tv = 0
for pos in camp_naive_p2called_mutations["0.5"]:
pu = seq2pos2pileup["edge_6104"][pos]
ref = "ACGT"[pu[1]]
alt = pileup.get_alt_nt_if_reasonable(pu)
if ref == "A":
ti = (alt == "G")
elif ref == "C":
ti = (alt == "T")
elif ref == "G":
ti = (alt == "A")
elif ref == "T":
ti = (alt == "C")
else:
raise ValueError("pls")
if ti: num_ti += 1
else: num_tv += 1
#print(pos, ref, "->", alt, ti)
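# The chained `if`/`elif` above is equivalent to a transition lookup (A↔G, C↔T); a compact sketch of the same check:

```python
# Transitions swap within purines (A<->G) or within pyrimidines (C<->T);
# any other substitution is a transversion.
TRANSITION_PARTNER = {"A": "G", "G": "A", "C": "T", "T": "C"}

def is_transition(ref, alt):
    return TRANSITION_PARTNER[ref] == alt

print(is_transition("A", "G"), is_transition("A", "C"))  # True False
```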
| notebooks/DemonstratingTargetDecoyApproach.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] nbgrader={}
# # Codecademy Completion
# + [markdown] nbgrader={}
# This problem will be used for verifying that you have completed the Python course on http://www.codecademy.com/.
#
# Here are the steps to do this verification:
#
# 1. Go to the page on http://www.codecademy.com/ that shows your percent completion.
# 2. Take a screen shot of that page.
# 3. Name the file `codecademy.png` and upload it to this folder.
# 4. Run the following cells to display the image in this notebook.
# + nbgrader={}
from IPython.display import Image
# + deletable=false nbgrader={"checksum": "54f5a91240cc5d5ed59b3d29b228bb9a", "grade": true, "grade_id": "codecademy", "points": 10}
Image(filename='codecademy.png', width='100%')
# -
| assignments/assignment01/Codecademy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Implementing Random Forest
# Authors:
# - <NAME>
# - <NAME>
# - <NAME>
# - <NAME>
#Importing Libraries
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RandomizedSearchCV
from sklearn.ensemble import RandomForestRegressor
from sklearn import metrics
from sklearn.metrics import r2_score
from datetime import datetime
import numpy as np
import time
import warnings
warnings.filterwarnings('ignore')
# %matplotlib inline
data = pd.read_csv("Data/final_data.csv")
data.head()
# We tried several split combinations and found that this one works best. First we split the data into 95% and 5%; the 5% is kept hidden as unseen data. Then we split that 95% again into 80% train and 20% test. We also apply custom cross-validation to avoid data leakage.
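# Under this nested split, roughly 76% of the full dataset is used for training, 19% for testing, and 5% is held out; a quick check of the fractions:

```python
# Effective fractions of the full dataset under the 95/5 then 80/20 split.
holdout_frac = 0.05
train_frac = (1 - holdout_frac) * 0.80   # ~0.76 of all rows
test_frac = (1 - holdout_frac) * 0.20    # ~0.19 of all rows
print(train_frac, test_frac, holdout_frac)
```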
#Splitting data as X and y
X = data.iloc[:, :-1] #Independent features
y = data.iloc[:, -1] #Dependent feature
#Splitting and separating 5% data and making it as unseen
X_train_unseen, X_test_unseen, y_train_unseen, y_test_unseen = train_test_split(X, y, test_size=0.05,random_state=1)
len(X_train_unseen),len(X_test_unseen)
# Splitting that 95% data into 80% for training and 20% for testing
X_train, X_test, y_train, y_test = train_test_split(X_train_unseen, y_train_unseen, test_size=0.20,random_state=1)
sns.boxplot(X_train['PM2.5'])
# +
# Fitting Model without any tunning
model = RandomForestRegressor(n_estimators = 200, random_state = 0)
model = model.fit(X_train, y_train)
prediction = model.predict(X_test)
print("Coefficient of Determination (R^2) for train dataset: ", model.score(X_train, y_train))
print("Coefficient of Determination (R^2) for test dataset: ", model.score(X_test, y_test))
print('MAE:', metrics.mean_absolute_error(y_test, prediction))
print('MSE:', metrics.mean_squared_error(y_test, prediction))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, prediction)))
# -
sns.displot(y_test - prediction)
# The model is overfitting: about 98.5% R² on training but only 90% on testing. Let's tune the hyperparameters and see whether the model generalizes better.
# # After Hyper Parameter Tuning
n_estimators = [int(x) for x in np.linspace(start=100, stop=1200, num=18)]
max_features = ['auto', 'sqrt']
max_depth = [int(x) for x in np.linspace(5, 30, num=6)]
min_samples_split = [2, 5, 10, 15, 20]
min_samples_leaf = [1, 2, 5, 10,12]
params = {
    'n_estimators': n_estimators,
    'max_features': max_features,
    'max_depth': max_depth,
    'min_samples_split': min_samples_split,
    'min_samples_leaf': min_samples_leaf
}
rf = RandomForestRegressor()
tuned_model = RandomizedSearchCV(rf, params, scoring='neg_mean_squared_error',
cv=5, n_iter=20, random_state=43, n_jobs=-1)
tuned_model.fit(X_train, y_train)
#Printing Best Parameter during tunning
print(tuned_model.best_estimator_)
# Now using the best parameter and predicting
best_rf = RandomForestRegressor(max_depth=7, max_features='sqrt', n_estimators=552)
best_rf.fit(X_train,y_train)
print("Coefficient of Determination (R^2) for train dataset: ", best_rf.score(X_train, y_train))
print("Coefficient of Determination (R^2) for test dataset: ", best_rf.score(X_test, y_test))
# # Prediction
X_test_unseen.head()
y_test_unseen.head()
print(model.predict([[38.82,26.56,0.82,10.25,20.06]]))
print(model.predict([[63.58,40.25,0.23,27.84,50.72]]))
print(model.predict([[62.33,2.60,0.59,7.46,29.58]]))
print(model.predict([[118.43,84.21,0.89,37.55,39.59]]))
print(model.predict([[37.67,37.32,1.06,7.06,34.92]]))
# # Insights
# - Random Forest is much better and more generalized than Decision Tree.
# - RF gives about 91.8% on train and 89% on test data.
# - In the predictions we can see that 421 is predicted as 304, which suggests there are outliers in the AQI. Although RF is not strongly affected by outliers, domain knowledge tells us that an AQI greater than 300 is severe.
# - In the next part, we can try to handle these large values and see how the predictions change.
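# The 421 → 304 case above could be surfaced automatically; a sketch using the ">300 = severe" cutoff mentioned in the notes (the flag labels are illustrative):

```python
# Flag AQI values using the cutoff from the notes: anything above 300 is severe.
def aqi_flag(aqi):
    return "Severe" if aqi > 300 else "Not severe"

for value in (304, 421):
    print(value, aqi_flag(value))  # both of these fall in the Severe range
```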
| AQI/models/5. Implementing Random Forest Classifier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Final Project: Massive Database Systems
# ## Team Members
# ### - <NAME>
# ### - <NAME>
# ### - <NAME>
# #### Exercise 1:
# *Clean the data by identifying the fields common to the datasets. (Attach
# the SQL or Python routine.)*
# #### Answer to Exercise 1:
# First, the public databases containing the StackOverflow data are loaded in Google BigQuery.
# <img src="Datos_Origen.PNG">
# A preliminary inspection guides the choice of variables (fields) to download from each table, using the following criteria:
# * It contains data about programming languages
# * It is common to all years (only 2016 lacks the salary field)
# * It could help answer questions, e.g. which programming language generates the most income
# * It can be used in a predictive model
#
# This yielded the following selection of variables to download:
# Since the databases do not share the same structure across years, they will be downloaded to local disk and cleaned up with Python.
# For 2011 the following query was used, retrieving all fields and records (854 KB in total).
# +
# SELECT * FROM `fh-bigquery.stackoverflow.survey_results_2011`
# -
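# A minimal sketch of how the fields common to all the yearly exports can be identified in Python (the column names below are illustrative, not the real survey schema):

```python
# Hypothetical helper: intersect column sets across yearly exports.
def common_fields(columns_by_year):
    sets = [set(cols) for cols in columns_by_year.values()]
    return set.intersection(*sets)

# Illustrative column names only.
cols_by_year = {
    2015: ["country", "language", "salary"],
    2016: ["country", "language"],  # 2016 lacks the salary field
    2017: ["country", "language", "salary"],
}
print(sorted(common_fields(cols_by_year)))  # ['country', 'language']
```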
# The same procedure was run for the following years: 2012, 2013, 2014, 2015, 2016, and 2017, using these queries:
# +
#SELECT * FROM `fh-bigquery.stackoverflow.survey_results_2012`
#SELECT * FROM `fh-bigquery.stackoverflow.survey_results_2013`
#SELECT * FROM `fh-bigquery.stackoverflow.survey_results_2014`
#SELECT * FROM `fh-bigquery.stackoverflow.survey_results_2015`
# -
# The 2016 table has 56,030 rows and BigQuery only allows exporting 16,000 rows at a time, so the query is split into four parts.
# +
#SELECT * FROM `fh-bigquery.stackoverflow.survey_results_2016` LIMIT 15000
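# The four chunked queries can be generated programmatically with LIMIT/OFFSET; a sketch (query text only — note that without an ORDER BY, OFFSET chunks are not guaranteed to be stable between runs):

```python
# Build four 15,000-row chunks covering the 56,030-row 2016 table.
table = "fh-bigquery.stackoverflow.survey_results_2016"
chunk = 15000
queries = [
    f"SELECT * FROM `{table}` LIMIT {chunk} OFFSET {i * chunk}"
    for i in range(4)
]
for q in queries:
    print(q)
```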
| .ipynb_checkpoints/Trabajo Final Curso Grandes Bases de DAtos-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
## Libraries
import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings("ignore")
import glob
# +
#import tensorflow as tf
#from keras import layers
#import keras
#from datetime import datetime
#from sklearn.metrics import accuracy_score,classification_report
# -
# ## 1. Data
# +
# Normas (regulations) data
DF_normas = pd.read_csv("normas_binary.csv",
index_col="Número de resolución",
)
# Tribunal (court) data
DF_tribunal = pd.read_csv("tribunal_binary.csv",
index_col="Número de resolución")
# Empresa (company) data
DF_empresa = pd.read_csv("empresa_binary.csv",
index_col="Número de resolución")
#
CA_WE = np.loadtxt("Criterios_emb.csv", delimiter=",")
#
CA_TFIDF = np.loadtxt("TF-IDF_Vectorization_Criterios.csv",
delimiter=",")
TF_Todas = np.loadtxt("TF-IDF_Vectorization_Todas.csv", delimiter=",")
# -
y = DF_normas.iloc[:, -1].values
normas_numpy = DF_normas.values[:, :-1]
tribunal_numpy = DF_tribunal.values[:, :-1]
empresa_numpy = DF_empresa.values[:, :-1]
# ## 2. Classification
def train_test_80_20(X, Y):
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X,
Y,
test_size=0.2,
random_state=10)
return X_train, X_test, Y_train, Y_test
def test_evaluation(model, X_test, y_test):
from sklearn.metrics import classification_report
    # evaluate the model on the test set
y_test_pred = model.predict(X_test)
print(classification_report(y_test, y_test_pred))
# +
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import KFold
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
# -
# create a CV iterator to standardize the training runs
# 5-fold cross-validation
# (shuffle=True is required when random_state is set)
kf = KFold(n_splits=5, shuffle=True, random_state=0)
# **Random Forest - empresa**
# - low performance
# +
# hyperparameter optimization using cross-validation
# MODEL: RF
# INPUT VARIABLES: TF_Todas (the section header says "empresa", but the code below uses TF_Todas)
# OPTIMAL HYPERPARAMETERS:
clf = RandomForestClassifier(n_estimators=100,
random_state=0
)
X = np.concatenate([TF_Todas], axis=1)
X_train, X_test, Y_train, Y_test = train_test_80_20(X, y)
scores = cross_val_score(clf, X_train, Y_train,
cv=kf,
scoring="f1"
)
print(f"{scores.mean():.3f} +- {scores.std():.3f}")
# +
# %time  # time the fit, for reference
# train the model on all the training data
clf.fit(X_train, Y_train)
# -
# run only after the training above has finished
test_evaluation(clf, X_test, Y_test)
# **Random Forest - normas**
# +
# hyperparameter optimization using cross-validation
# MODEL: RF
# INPUT VARIABLES: normas
# OPTIMAL HYPERPARAMETERS:
clf = RandomForestClassifier(n_estimators=100, random_state=0)
X = np.concatenate([normas_numpy], axis=1)
X_train, X_test, Y_train, Y_test = train_test_80_20(X, y)
scores = cross_val_score(clf, X_train, Y_train,
cv=kf,
scoring="f1"
)
print(f"{scores.mean():.3f} +- {scores.std():.3f}")
# +
# %time
# train the model on all the training data
clf.fit(X_train, Y_train)
# run only after the training above has finished
test_evaluation(clf, X_test, Y_test)
# -
# **Random Forest - applicable criteria (TF-IDF)**
# +
# hyperparameter optimization using cross-validation
# MODEL: RF
# INPUT VARIABLES: CA (TF-IDF)
# OPTIMAL HYPERPARAMETERS:
clf = RandomForestClassifier(n_estimators=100, random_state=0)
X = np.concatenate([CA_TFIDF], axis=1)
X_train, X_test, Y_train, Y_test = train_test_80_20(X, y)
scores = cross_val_score(clf, X_train, Y_train,
cv=kf,
scoring="f1"
)
print(f"{scores.mean():.3f} +- {scores.std():.3f}")
# +
# %time
# train the model on all the training data
clf.fit(X_train, Y_train)
# run only after the training above has finished
test_evaluation(clf, X_test, Y_test)
# -
# **Random Forest - normas + applicable criteria (TF-IDF)**
# +
# hyperparameter optimization using cross-validation
# MODEL: RF
# INPUT VARIABLES: normas + CA (TF-IDF)
# OPTIMAL HYPERPARAMETERS:
clf = RandomForestClassifier(n_estimators=100, random_state=0)
X = np.concatenate([normas_numpy, CA_TFIDF], axis=1)
X_train, X_test, Y_train, Y_test = train_test_80_20(X, y)
scores = cross_val_score(clf, X_train, Y_train,
cv=kf,
scoring="roc_auc"
)
print(f"{scores.mean():.3f} +- {scores.std():.3f}")
# +
# %time
# train the model on all the training data
clf.fit(X_train, Y_train)
# run only after the training above has finished
test_evaluation(clf, X_test, Y_test)
# -
| 3.Modeling/3.3.Clasifation_Random-Forest.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Pyspark (local)
# language: python
# name: pyspark_local
# ---
# # Choosing a Kernel
#
# ### What is a kernel?
#
# ### Which kernels are available
# The data platform offers different kernels for different purposes:
#
# - PySpark (local) :
# - PySpark (k8s) :
# - Python 3 :
#
# ### How to choose a kernel
#
# ### How to change kernel
| notebooks/Introduksjon til Dapla, JupyterLab og GitHub/Velge kjerne.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.6.3
# language: julia
# name: julia-1.6
# ---
# # Classification
# +
include("utils.jl"); using .Utils
checkpkgs("CSV", "DataFrames", "Plots", "Statistics", "Distributions")
getfile("https://github.com/allisonhorst/palmerpenguins/raw/master/inst/extdata/penguins_raw.csv")
# -
# ## Penguin Data
# +
import CSV
using DataFrames, Statistics, Plots
df = DataFrame(CSV.File("penguins_raw.csv"; missingstring="NA"))
size(df)
# -
first(df, 5)
# +
shorten(species) = split(species)[1]
transform!(df, "Species" => (x -> shorten.(x)) => "Species2");
# +
include("empiricaldist.jl"); using .EmpiricalDist
"""Make a CDF for each species."""
function make_cdf_map(df, colname, by="Species2")
cdf_map = Dict()
grouped = groupby(df, by)
for (k, group) in pairs(grouped)
species = k[by]
col = collect(skipmissing(group[!, colname])) # skip missing values
cdf_map[species] = cdffromseq(col, name=species)
end
return cdf_map
end
# -
"""
Make a CDF for each species.
df: DataFrame
colname: string column name
by: string column name
returns: dictionary from species name to Cdf
"""
function plot_cdfs(df, colname, by="Species2")
cdf_map = make_cdf_map(df, colname, by)
plot()
for (species, cdf) in pairs(cdf_map)
plot!(cdf, label=species)
end
xlabel!(colname)
ylabel!("CDF")
plot!(legend=:topleft)
end
colname = "Culmen Length (mm)"
plot_cdfs(df, colname)
colname = "Flipper Length (mm)"
plot_cdfs(df, colname)
colname = "Culmen Depth (mm)"
plot_cdfs(df, colname)
colname = "Body Mass (g)"
plot_cdfs(df, colname)
# ## Normal Models
# +
using Statistics, Distributions
"""Make a map from species to norm object."""
function make_norm_map(df, colname, by="Species2")
norm_map = Dict()
grouped = groupby(df, by)
for (k, group) in pairs(grouped)
species = k[by]
col = collect(skipmissing(group[!, colname])) # skip missing values
μ = mean(col)
σ = std(col)
norm_map[species] = Normal(μ, σ)
end
return norm_map
end
make_norm_map(df, colname; by="Species2") = make_norm_map(df, colname, by)
# -
flipper_map = make_norm_map(df, "Flipper Length (mm)")
keys(flipper_map)
data = 193
pdf(flipper_map["Adelie"], data)
hypos = keys(flipper_map)
likelihood = [pdf(flipper_map[hypo], data) for hypo in hypos]
likelihood
# ## The Update
prior = Pmf(1/3, hypos)
prior
posterior = prior .* likelihood
normalize!(posterior)
posterior
"""Update hypothetical species."""
function update_penguin(prior, data, norm_map)
hypos = prior.qs
likelihood = [pdf(norm_map[hypo],data) for hypo in hypos]
posterior = prior .* likelihood
normalize!(posterior)
return posterior
end
posterior1 = update_penguin(prior, 193, flipper_map)
posterior1
culmen_map = make_norm_map(df, "Culmen Length (mm)");
posterior2 = update_penguin(prior, 48, culmen_map)
posterior2
# ## Naive Bayesian Classification
"""Naive Bayesian classifier
prior: Pmf
data_seq: sequence of measurements
norm_maps: sequence of maps from species to distribution
returns: Pmf representing the posterior distribution
"""
function update_naive(prior, data_seq, norm_maps)
posterior = copy(prior)
for (data, norm_map) in zip(data_seq, norm_maps)
        posterior = update_penguin(posterior, data, norm_map)  # allow missing to propagate
end
return posterior
end
colnames = ["Flipper Length (mm)", "Culmen Length (mm)"]
norm_maps = [flipper_map, culmen_map];
data_seq = 193, 48
posterior = update_naive(prior, data_seq, norm_maps)
posterior
maxprob(posterior)
transform!(df,
AsTable(colnames) =>
ByRow(data_seq -> all(ismissing, data_seq) ? missing : # propagate missing
maxprob(update_naive(prior, data_seq, norm_maps))) => "Classification");
nrow(df)
# +
#valid = map(!ismissing, df[!, "Classification"])
#sum(valid)
nvalid = count(!ismissing, df[!, "Classification"])
# -
same = df[!, "Species2"] .== df[!, "Classification"]
nsame = count(skipmissing(same))
"""Compute the accuracy of classification."""
function accuracy(df)
nvalid = count(!ismissing, df[!, "Classification"])
nsame = count(skipmissing(df[!, "Species2"] .== df[!, "Classification"]))
return nsame / nvalid
end
# ## Joint Distributions
"""Make a scatter plot."""
function scatterplot(df, var1, var2)
grouped = groupby(df, "Species2")
plot()
for (k, g) in pairs(grouped)
scatter!(g[!, var1], g[!, var2], label=k["Species2"])
end
scatter!(legend=:bottomright, xlabel=var1, ylabel=var2)
end
var1 = "Flipper Length (mm)"
var2 = "Culmen Length (mm)"
scatterplot(df, var1, var2)
"""Make a Pmf approximation to a normal distribution."""
function make_pmf_norm(dist, sigmas=3, n=101)
μ, σ = mean(dist), std(dist)
low = μ - sigmas * σ
high = μ + sigmas * σ
qs = range(low, high, length=n)
    ps = pdf.(dist, qs)  # broadcast pdf over the grid
pmf = Pmf(ps, qs)
normalize!(pmf)
return pmf
end
joint_map = Dict()
for species in hypos
pmf1 = make_pmf_norm(flipper_map[species])
pmf2 = make_pmf_norm(culmen_map[species])
joint_map[species] = makejoint(pmf1, pmf2)
end
# +
scatterplot(df, var1, var2)
for species in hypos
contour!(joint_map[species], alpha=0.5)
end
plot!()
# -
# ## Multivariate Normal Distribution
features = dropmissing(df[!, [var1, var2]]); # get rid of missing data
μ = mean.(eachcol(features))
μ
Σ = cov(features) # defined in empiricaldist
covdf(features) # just to display the matrix
multinorm = MvNormal(μ, Σ);
"""Make a map from each species to a multivariate normal."""
function make_multinorm_map(df, colnames)
multinorm_map = Dict()
grouped = groupby(df, "Species2")
for (k, group) in pairs(grouped)
species = k[1]
features = dropmissing(group[!, colnames])
μ = mean.(eachcol(features))
Σ = cov(features)
multinorm_map[species] = MvNormal(vec(μ), Σ)
end
return multinorm_map
end
multinorm_map = make_multinorm_map(df, [var1, var2]);
# ## Visualizing a Multivariate Normal Distribution
norm1 = flipper_map["Adelie"]
norm2 = culmen_map["Adelie"]
multinorm = multinorm_map["Adelie"];
pmf1 = make_pmf_norm(norm1)
pmf2 = make_pmf_norm(norm2);
densities = [pdf(multinorm, [x, y]) for y in pmf2.qs, x in pmf1.qs];
size(densities)
joint = JointDistribution(densities, pmf2.qs, pmf1.qs)
normalize!(joint)
contour(joint)
xlabel!(var1)
ylabel!(var2)
"""Make a joint distribution.
norm1: `norm` object representing the distribution of the first feature
norm2: `norm` object representing the distribution of the second feature
multinorm: `multivariate_normal` object representing the joint distribution
"""
function make_joint(norm1, norm2, multinorm)
# we don't need to go through Pmf
sigmas = 3
n = 101
μ₁, σ₁ = mean(norm1), std(norm1)
μ₂, σ₂ = mean(norm2), std(norm2)
X = range(μ₁ - sigmas * σ₁, μ₁ + sigmas * σ₁, length=n)
Y = range(μ₂ - sigmas * σ₂, μ₂ + sigmas * σ₂, length=n)
densities = [pdf(multinorm, [x,y]) for y in Y, x in X]
return JointDistribution(densities, Y, X)
end
# +
scatterplot(df, var1, var2)
for species in hypos
norm1 = flipper_map[species]
norm2 = culmen_map[species]
multinorm = multinorm_map[species]
joint = make_joint(norm1, norm2, multinorm)
contour!(joint, alpha=0.5)
end
plot!()
# -
# ## A Less Naive Classifier
"""Update hypothetical species."""
function update_penguin(prior, data, norm_map)
hypos = prior.qs
likelihood = [pdf(norm_map[hypo],data) for hypo in hypos]
posterior = prior .* likelihood
normalize!(posterior)
return posterior
end
data = [193, 48]
update_penguin(prior, data, multinorm_map)
df[!, "Classification"] .= missing
transform!(df,
AsTable(colnames) =>
ByRow(data_seq -> all(ismissing, data_seq) ? missing :
argmax(update_penguin(prior, collect(data_seq), multinorm_map))) => "Classification");
accuracy(df)
# ## Summary
# ## Exercises
# +
# Solution
# Here are the norm maps for the other two features
depth_map = make_norm_map(df, "Culmen Depth (mm)")
mass_map = make_norm_map(df, "Body Mass (g)");
# +
# Solution
# And here are sequences for the features and the norm maps
colnames4 = ["Culmen Length (mm)", "Flipper Length (mm)",
"Culmen Depth (mm)", "Body Mass (g)"]
norm_maps4 = [culmen_map, flipper_map,
depth_map, mass_map];
# +
# Solution
# Now let's classify and compute accuracy.
# We can do a little better with all four features,
# almost 97% accuracy
df[!, "Classification"] .= missing
transform!(df,
AsTable(colnames4) =>
ByRow(data_seq -> all(ismissing, data_seq) ? missing :
maxprob(update_naive(prior, data_seq, norm_maps4))) => "Classification");
accuracy(df)
# +
# Solution
gentoo = filter("Species2" => ==("Gentoo"), df)
subset = copy(gentoo);
# -
combine(groupby(subset, "Sex"), nrow)
combine(groupby(df, "Sex"), nrow) # no Sex=="."...
valid = filter("Sex" => !ismissing, df)
nrow(valid)
subset = filter("Sex" => !ismissing, gentoo);
# +
# Solution
# Here are the feature distributions grouped by sex
plot_cdfs(subset, "Culmen Length (mm)", "Sex")
# +
# Solution
plot_cdfs(subset, "Culmen Depth (mm)", "Sex")
# +
# Solution
plot_cdfs(subset, "Flipper Length (mm)", "Sex")
# +
# Solution
plot_cdfs(subset, "Body Mass (g)", "Sex")
# +
# Solution
# Here are the norm maps for the features, grouped by sex
culmen_map = make_norm_map(subset, "Culmen Length (mm)", by="Sex")
flipper_map = make_norm_map(subset, "Flipper Length (mm)", by="Sex")
depth_map = make_norm_map(subset, "Culmen Depth (mm)", by="Sex")
mass_map = make_norm_map(subset, "Body Mass (g)", by="Sex");
# +
# Solution
# And here are the sequences we need for `update_naive`
norm_maps4 = [culmen_map, flipper_map, depth_map, mass_map]
colnames4 = ["Culmen Length (mm)", "Flipper Length (mm)",
"Culmen Depth (mm)", "Body Mass (g)"];
# +
# Solution
# Here's the prior
hypos = keys(culmen_map)
prior = Pmf(1/2, hypos)
prior
# +
# Solution
# And the update
subset[!, "Classification"] .= missing
transform!(subset,
AsTable(colnames4) =>
ByRow(data_seq -> all(ismissing, data_seq) ? missing : # propagate missing
maxprob(update_naive(prior, data_seq, norm_maps4))) => "Classification");
# +
# Solution
# This function computes accuracy
"""Compute the accuracy of classification.
Compares columns Classification and Sex
df: DataFrame
"""
function accuracy_sex(df)
nvalid = count(!ismissing, df[!, "Classification"])
nsame = count(skipmissing(df[!, "Sex"] .== df[!, "Classification"]))
return nsame / nvalid
end
# +
# Solution
# Using these features we can classify Gentoo penguins by
# sex with almost 92% accuracy
accuracy_sex(subset)
# +
# Solution
# Here's the whole process in a function so we can
# classify the other species
"""
Run the whole classification process.
subset: DataFrame
"""
function classify_by_sex(subset)
culmen_map = make_norm_map(subset, "Culmen Length (mm)", by="Sex")
flipper_map = make_norm_map(subset, "Flipper Length (mm)", by="Sex")
depth_map = make_norm_map(subset, "Culmen Depth (mm)", by="Sex")
mass_map = make_norm_map(subset, "Body Mass (g)", by="Sex")
    norm_maps4 = [culmen_map, flipper_map, depth_map, mass_map]
    colnames4 = ["Culmen Length (mm)", "Flipper Length (mm)",
                 "Culmen Depth (mm)", "Body Mass (g)"]
    hypos = keys(culmen_map)
    prior = Pmf(1/2, hypos)
subset[!, "Classification"] .= missing
transform!(subset,
AsTable(colnames4) =>
ByRow(data_seq -> all(ismissing, data_seq) ? missing : # propagate missing
maxprob(update_naive(prior, data_seq, norm_maps4))) => "Classification")
return accuracy_sex(subset)
end
# +
# Solution
# Here's the subset of Adelie penguins
# The accuracy is about 88%
adelie = filter("Species2" => ==("Adelie"), df)
subset = copy(adelie)
classify_by_sex(subset)
# +
# Solution
# It looks like Gentoo and Chinstrap penguins are about equally
# dimorphic, and Adelie penguins a little less so.
# All of these results are consistent with what's in the paper.
# source: soln-julia/chap12.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="chEt2JOIePz0"
# ## 0. Preparation
#
# You have to move the API keys (in a .csv file) into the working folder (use the panel on the left)
# + [markdown] id="fFbkpE-5grPF"
# ## 1. TwitterAPI package
# + id="i1-vZSmvDUbc" outputId="8e3ddac1-3fd2-4204-93a0-e8da04037ac3" colab={"base_uri": "https://localhost:8080/"}
# !pip install TwitterAPI
# + id="3pDzUzvpDFgJ"
from TwitterAPI import TwitterAPI
import pandas as pd
# + id="mlaQIUX8DCk0"
my_keys = pd.read_csv('TwitterAPIKeys.csv')
consumer_key = my_keys.iloc[0, 1]
consumer_secret = my_keys.iloc[1, 1]
# + id="eqUhd6wbfDvZ"
api = TwitterAPI(consumer_key,
consumer_secret,
api_version='2',
auth_type='oAuth2')
# + id="4QoAHKeVDtXg"
r = api.request('tweets/search/recent', {
    'query': 'inter',
    'tweet.fields': 'author_id,created_at',
    'expansions': 'author_id',
    'max_results': 100})
for item in r:
print(item)
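# A hedged aside (not part of the TwitterAPI docs): the items yielded above are plain dicts, so they can be flattened into rows for pandas. Using `dict.get` tolerates fields a tweet may lack; the sample item below is hypothetical.

```python
import pandas as pd

def items_to_rows(items):
    # keep only the fields requested in the query; missing keys become None
    return [
        {
            "id": item.get("id"),
            "author_id": item.get("author_id"),
            "created_at": item.get("created_at"),
            "text": item.get("text"),
        }
        for item in items
    ]

sample = [{"id": "1", "author_id": "9", "text": "inter"}]  # hypothetical item
df_sample = pd.DataFrame(items_to_rows(sample))
print(df_sample["text"][0])  # -> inter
```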
# + [markdown] id="VuXuHCqpgVxd"
# More details on the query arguments: https://developer.twitter.com/en/docs/twitter-api/tweets/search/api-reference/get-tweets-search-recent
# + [markdown] id="g_KEJrNtguSj"
# ## 2. TWARC
# + id="LgFZiH0Ag1S9" outputId="88aef316-5e8c-4a63-e91f-a29f63e89867" colab={"base_uri": "https://localhost:8080/"}
# !pip install twarc
# !pip3 install --upgrade twarc-csv
# + [markdown] id="v14UDk41g7cj"
# ...then you will have to configure TWARC2 (version 2.0).
# Run the cell, then manually paste in the "Bearer Token" (you will find it in the csv file).
# Then just select (n), i.e. no additional authentication methods.
# Finally, delete the cell output, because the Bearer Token is printed in it.
# + id="RGvJ0sTng6ka"
# !twarc2 configure
# + id="iGIlr4IuhrE_" outputId="11614522-79b1-423c-c210-f62915815f57" colab={"base_uri": "https://localhost:8080/"}
# !twarc2 search --limit 100 "inter" results.jsonl
# + [markdown] id="-8voi2-5jrab"
# More info on query arguments: https://twarc-project.readthedocs.io/en/latest/twarc2_en_us/
# + id="ZQaf7Gm-h0oT" outputId="eba11bb4-d2a9-48e2-f1cc-0d80a9f86ed8" colab={"base_uri": "https://localhost:8080/"}
# convert the json into csv (friendlier for pandas)
# !twarc2 csv results.jsonl tweets.csv
# + id="iRT80h-ciHmH" outputId="620f38e6-5e01-4641-cd65-feb33e55488e" colab={"base_uri": "https://localhost:8080/", "height": 491}
# read the csv and explore it
df = pd.read_csv('tweets.csv')
df.head()
# + id="0Rn7VOMsiNZD" outputId="95c664c8-cb6d-4a51-ba11-3b0d09b1f7f1" colab={"base_uri": "https://localhost:8080/"}
for item in df.text:
print(item)
# + id="IcDCkvCsiRd9"
for item in df.lang:
print(item)
# source: scripts/2.Scraping/Twitter_API_demo.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [py35]
# language: python
# name: Python [py35]
# ---
# +
import glob
import sys
sys.path.append('/Users/carlomazzaferro/Documents/Code/neoantigen/')
from mhc_parser import models, methods, utilities, pairwise_comp
import importlib
importlib.reload(models)
files = glob.glob('/Users/carlomazzaferro/Desktop/New_General/mhc_preds_fasta_base_new_prots/*.xls')
fasta_file = '/Users/carlomazzaferro/Desktop/New_General/fasta_base_new_prots.fasta'
pred_col = models.PredictionCollection(files, fasta_file)
# -
parsing = pred_col.digest_multiple()
filtered = pred_col.filter_all(50)
pw = methods.PairWiseComp(pred_col, 50, 5)
dict_ = pw.pipe_run()
import pandas
pandas.concat(dict_)
dict_[0][0].keys()
import numpy as np
tpl = [np.array([pred_col.protein_list[2]]*17), np.array(pred_col.protein_list)]
tpls = list(zip(*tpl))
# +
dics_2 = {}
for pair in tpls:
dics_2[pair] = {num: 0 for num in list(range(5, 12))}
dics_2[pair]['Num High AA'] = 0
dics_2[pair]['Matches Loc'] = []
# -
dics_2[('S__pyogenes_Cas', 'C__jejuni_Cas9')]
dicsdf2 = pandas.DataFrame(dics_2).T
pandas.concat([dicsdf1, dicsdf2])
index = pandas.MultiIndex.from_tuples(tpls, names=['first', 'second'])
index
np.array([pred_col.protein_list[0]]*17)
np.array([pred_col.protein_list[0]]*17)
import numpy as np
pandas.DataFrame(np.random.randn(len(tpls)), index=index, columns=['value'])
a = {'a': 1, 'b':2}
a.keys()
pred_col.dictionary_collection['T__denticola_Ca']
pred_col.dictionary_collection['S__pyogenes_Cas']['Predictions'][0]
net_mhc_path = '/Users/carlomazzaferro/Desktop/BINF_Tools/netMHC-4.0/netMHC'
ref = 'S__pyogenes_Cas'
pred_col.predict_swaps(ref, 50, net_mhc_path)
filt = pred_col.filter_low_aff(ref, 50)
filt[0].Swap.swaps[0:5]
print(filt[0].Swap.swaps[0]['AESEFVYGDY'])
df_list = pred_col.return_protein_df_list()
df_list[0].head()
pred_col.dictionary_collection['S__mutans_Cas9']
# ### Pairwise Comparisons
importlib.reload(pairwise_comp)
pw_comp = pairwise_comp.PairwiseComp(df_list, 5, fasta_file)
df_comps = pw_comp.pipe_run()
df_comps
type(df_list[0].nmer[0])
pw_comp.peps_and_prots
# source: antigen_discovery/PredictionCollection.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="k55zrcV_LTwc" colab_type="code" colab={}
import tensorflow as tf
from keras.datasets import mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# + id="6TTi0hbOL8hz" colab_type="code" colab={}
train_images.shape
len(train_labels)
train_labels
# + id="2akDVMVzQ416" colab_type="code" colab={}
digit = train_images[1000]
import matplotlib.pyplot as plt
plt.imshow(digit, cmap=plt.cm.binary)
plt.show()
# + id="XmSoozcxMRIw" colab_type="code" colab={}
from keras import models
from keras import layers
network = models.Sequential()
network.add(layers.Dense(512, activation='relu', input_shape=(28 * 28,)))
network.add(layers.Dense(10, activation='softmax'))
# + id="2wG4uEyCMXXb" colab_type="code" colab={}
network.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
train_images = train_images.reshape((60000, 28 * 28))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28 * 28))
test_images = test_images.astype('float32') / 255
# + id="B9LEJh9-Mpwj" colab_type="code" colab={}
from keras.utils import to_categorical
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
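# As a quick aside (a sketch, not part of the original notebook): `to_categorical` one-hot encodes the integer labels, which is what `categorical_crossentropy` expects. A minimal numpy equivalent:

```python
import numpy as np

def to_one_hot(labels, num_classes):
    # row i gets a 1 in column labels[i], zeros elsewhere
    out = np.zeros((len(labels), num_classes), dtype="float32")
    out[np.arange(len(labels)), labels] = 1.0
    return out

print(to_one_hot([0, 2, 1], 3))
```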
# + id="vksUSln7M-Un" colab_type="code" colab={}
network.fit(train_images, train_labels, epochs=10, batch_size=128)
# + id="lxzqgBZ2PkvV" colab_type="code" colab={}
test_loss, test_acc = network.evaluate(test_images, test_labels)
print('test_acc:', test_acc)
# source: module-6/data/example1.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from PIL import Image
import os
print(os.listdir("D:/comp_vision/malaria/cell_images"))
import cv2
import glob
labels=[]
data=[]
for img in glob.glob("D:/comp_vision/malaria/cell_images/Parasitized/*.png"):
image= cv2.imread(img)
image_from_array = Image.fromarray(image, 'RGB')
size_image = image_from_array.resize((50, 50))
data.append(np.array(size_image))
labels.append(1)
for img in glob.glob("D:/comp_vision/malaria/cell_images/Uninfected/*.png"):
image= cv2.imread(img)
image_from_array = Image.fromarray(image, 'RGB')
size_image = image_from_array.resize((50, 50))
data.append(np.array(size_image))
labels.append(0)
Cells=np.array(data)
labels=np.array(labels)
figure=plt.figure(figsize=(15,10))
ax=figure.add_subplot(121)
ax.imshow(Cells[0])
bx=figure.add_subplot(122)
bx.imshow(Cells[6000])
np.save("Cells",Cells)
np.save("labels",labels)
Cells=np.load("Cells.npy")
labels=np.load("labels.npy")
s=np.arange(Cells.shape[0])
np.random.shuffle(s)
Cells=Cells[s]
labels=labels[s]
labels
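# The indexing above is the standard unison-shuffle idiom: one shared random permutation applied to both arrays keeps each image aligned with its label. A small self-contained check (hypothetical data):

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.array([10, 20, 30, 40])
labels = np.array([1, 2, 3, 4])
s = rng.permutation(len(data))       # one shared permutation
data, labels = data[s], labels[s]
# every value still sits next to its original label
assert all(d == l * 10 for d, l in zip(data, labels))
```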
num_classes=len(np.unique(labels))
len_data=len(Cells)
x_train,x_test=Cells[(int)(0.1*len_data):],Cells[:(int)(0.1*len_data)]
x_train = x_train.astype('float32')/255
x_test = x_test.astype('float32')/255
train_len=len(x_train)
test_len=len(x_test)
print(train_len,test_len)
y_train,y_test=labels[(int)(0.1*len_data):],labels[:(int)(0.1*len_data)]
import keras
y_train=keras.utils.to_categorical(y_train,num_classes)
y_test=keras.utils.to_categorical(y_test,num_classes)
from keras.models import Sequential
from keras.layers import Dense,Conv2D,MaxPooling2D,Dropout,Flatten
from keras.optimizers import RMSprop
model=Sequential()
model.add(Conv2D(filters=16,kernel_size=2,padding="same",activation="relu",input_shape=(50,50,3)))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=32,kernel_size=2,padding="same",activation="relu"))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=64,kernel_size=2,padding="same",activation="relu"))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(500,activation="relu"))
model.add(Dropout(0.2))
model.add(Dense(2,activation="softmax"))
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train,y_train,batch_size=50,epochs=20,verbose=1)
accuracy = model.evaluate(x_test, y_test,verbose=1)
print('\n', 'Test_Accuracy=>', accuracy[1])
# source: malaria.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import os
import cPickle as pickle
from gensim.models import Word2Vec
from gensim.models import Doc2Vec
from gensim.models.word2vec import LineSentence
from gensim.models.doc2vec import TaggedDocument
from nltk.corpus import stopwords
from gensim.similarities import SoftCosineSimilarity
from gensim.corpora import Dictionary
from gensim.models.doc2vec import TaggedLineDocument
import time
import logging
import argparse
import numpy as np
import multiprocessing
from sklearn.decomposition import PCA
from matplotlib import pyplot
from gensim.parsing.preprocessing import remove_stopwords
from gensim.models import Phrases
from gensim.models.phrases import Phraser
from gensim.similarities import WmdSimilarity
import sys
import codecs
# -
data = LineSentence('countries_filter.txt')
contents = TaggedLineDocument("countries_filter.txt")
# +
domain_vocab_file = "Sports Sport sport players teams team goal score scores scored"
vocab_list = domain_vocab_file.split()
dim = 200
win = 12
neg = 5
# +
cores = multiprocessing.cpu_count()
model = Doc2Vec(contents, vector_size=dim, window=win,
min_count=1, workers=cores,hs=0,negative=5,
dm=0,dbow_words=1,epochs=20, smoothing=0.5,
sampling_param=0.7, objective_param=0.5, vocab_file=vocab_list)
# -
for d in contents:
print d[0]
print model.docvecs[40]
print model.docvecs[39]
from scipy import spatial
results2 = 1 - spatial.distance.cosine(model.docvecs[37], model.docvecs[0])
print results2
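# The cosine computation above can be sketched with plain numpy (an illustration, not from the original notebook): cosine similarity is the dot product of the vectors divided by the product of their norms.

```python
import numpy as np

def cosine_similarity(a, b):
    # dot product normalized by the vector norms
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity([1, 0], [1, 0]))  # parallel vectors -> 1.0
print(cosine_similarity([1, 0], [0, 1]))  # orthogonal vectors -> 0.0
```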
print model.similar_by_vector(model.docvecs[0],topn=10)
from gensim.test.utils import common_texts
all_docs = []
for d in data:
all_docs.append(d)
print all_docs[0]
for doc in contents:
print doc[0][0:4]
inferred_docvec = model.infer_vector(doc.words)
print model.wv.most_similar([inferred_docvec], topn=10)
X = model[model.wv.vocab]
pca = PCA(n_components=2)
result = pca.fit_transform(X)
pyplot.scatter(result[:, 0], result[:, 1])
words = list(model.wv.vocab)
for i, word in enumerate(words):
pyplot.annotate(word, xy=(result[i, 0], result[i, 1]))
pyplot.show()
dictionary = Dictionary(all_docs)
bow_corpus = [dictionary.doc2bow(document) for document in all_docs]
similarity_matrix = model.wv.similarity_matrix(dictionary)
index = SoftCosineSimilarity(bow_corpus, similarity_matrix, num_best=10)
query = 'football is a beautiful sport, i like it the most among all Sports'.split()
sims = index[dictionary.doc2bow(query)]
print sims
print model.similar_by_word("Sports", topn=10)
model.init_sims(replace=True)
distance = model.wmdistance("sport", "sport")
print distance
num_best = 40
instance = WmdSimilarity(all_docs, model, num_best=40)
# +
sent = 'football is a beautiful sport, i like it the most among all Sports.'.split()
sims = instance[sent]
# -
print 'Query:'
print sent
for i in range(num_best):
print
print 'sim = %.4f' % sims[i][1]
print all_docs[sims[i][0]]
# source: gensim/Untitled.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Nuclear Morphology and Chromatin Organization Features
#
# Here we aim to compute a library of features that exhaustively describe the nuclear morphology and chromatin organization for each segmented nucleus in a given image.
# +
# import libraries
# %load_ext autoreload
import sys
sys.path.append("../")
from tifffile import imread
import pandas as pd
from skimage import measure
import numpy as np
import matplotlib.pyplot as plt
import cv2 as cv
import src.nuclear_features.Boundary_local_curvature as BLC
import src.nuclear_features.Boundary_global as BG
import src.nuclear_features.Int_dist_features as IDF
import src.nuclear_features.Img_texture as IT
import os
# +
# initialising paths
labelled_image_path = os.path.join(os.path.dirname(os.getcwd()),'example_data/images/TMA_nuc_labels.tif')
raw_image_path = os.path.join(os.path.dirname(os.getcwd()),'example_data/images/TMA_DAPI.tif')
feature_path = os.path.join(os.path.dirname(os.getcwd()),'example_data/')
# -
# Below is an example of the data that can be used.
# +
#Read in Images
labelled_image = imread(labelled_image_path)
raw_image = imread(raw_image_path)
#Subset the image
labelled_image=labelled_image[4000:6000,5000:7000]
raw_image=raw_image[4000:6000,5000:7000]
# normalize images
raw_image = ((raw_image-np.min(raw_image))/(np.max(raw_image)-np.min(raw_image)))*255
raw_image = raw_image.astype(int)
# Visualise the raw image and the nuclear labels
fig = plt.figure(figsize=(8, 4))
ax0 = fig.add_subplot(121)
ax1 = fig.add_subplot(122)
#show raw image
ax0.imshow(raw_image,aspect='auto',cmap='inferno')
ax0.axis('off')
ax0.title.set_text('Image')
#show segmented image
ax1.imshow(labelled_image,aspect='auto',cmap='viridis')
ax1.axis('off')
ax1.title.set_text('Nuclear Labels')
# -
# One can now access each nucleus in the labelled image as well as the raw image.
# +
#Get indexing for the individual nuclei in the image
props = measure.regionprops(labelled_image,raw_image)
fig = plt.figure(figsize=(12, 6))
ax0 = fig.add_subplot(241)
ax1 = fig.add_subplot(242)
ax2 = fig.add_subplot(243)
ax3 = fig.add_subplot(244)
ax4 = fig.add_subplot(245)
ax5 = fig.add_subplot(246)
ax6 = fig.add_subplot(247)
ax7 = fig.add_subplot(248)
#show raw image
ax0.imshow(props[77].intensity_image,aspect='auto',cmap='inferno')
ax0.title.set_text('Nucleus 1')
ax0.axis('off')
ax1.imshow(props[452].intensity_image,aspect='auto',cmap='inferno')
ax1.title.set_text('Nucleus 2')
ax1.axis('off')
ax2.imshow(props[567].intensity_image,aspect='auto',cmap='inferno')
ax2.title.set_text('Nucleus 3')
ax2.axis('off')
ax3.imshow(props[114].intensity_image,aspect='auto',cmap='inferno')
ax3.title.set_text('Nucleus 4')
ax3.axis('off')
#show segmented image
ax4.imshow(props[77].image,aspect='auto',cmap='viridis')
ax4.title.set_text('Label 1')
ax4.axis('off')
ax5.imshow(props[452].image,aspect='auto',cmap='viridis')
ax5.title.set_text('Label 2')
ax5.axis('off')
ax6.imshow(props[567].image,aspect='auto',cmap='viridis')
ax6.title.set_text('Label 3')
ax6.axis('off')
ax7.imshow(props[114].image,aspect='auto',cmap='viridis')
ax7.title.set_text('Label 4')
ax7.axis('off')
# -
# #### Basic Features
#
# Scikit-image provides several informative built-in features that describe "region properties", which we extract first. For more information on how the features are computed, see the documentation (https://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.regionprops_table).
#Measure scikit's built in features
propstable = pd.DataFrame(measure.regionprops_table(labelled_image,raw_image,cache=True,
properties=['label', 'area','perimeter','bbox_area','convex_area',
'equivalent_diameter','major_axis_length','minor_axis_length',
'eccentricity','orientation',
'centroid','weighted_centroid',
'weighted_moments','weighted_moments_normalized',
'weighted_moments_central','weighted_moments_hu',
'moments','moments_normalized','moments_central','moments_hu']))
propstable.iloc[[77,452,567,114]]
# #### Global Boundary features
#
# Here we compute features that describe the morphology of a given object. These include
# 1. Calliper distances
# 2. Distribution features of the radii (centroid-to-boundary distances)
#
# Below are the features computed for 4 nuclei
BG_feat = pd.concat([BG.boundary_features(props[77].image,centroids=props[77].local_centroid),
BG.boundary_features(props[452].image,centroids=props[452].local_centroid),
BG.boundary_features(props[567].image,centroids=props[567].local_centroid),
BG.boundary_features(props[114].image,centroids=props[114].local_centroid)])
BG_feat
# #### Local Boundary Features
#
# Here we compute the features that describe local curvature of a given object.
#
# Approach:
# For a given object we obtain the edge pixels and compute the local curvature at each boundary point from its neighbours +/- a given step size; larger steps give a smoother curvature estimate.
# We define the local curvature of 3 points as the inverse of the radius of their circumcircle; the sign of the curvature is positive if the circumcenter lies inside the object.
#
# Below is the radius of curvature for Nucleus 1
r_c= BLC.local_radius_curvature(props[77].image,step=5,show_boundary=True)
#calculate local curvature features
local_curvature=[np.divide(1,r_c[x]) if r_c[x]!=0 else 0 for x in range(len(r_c))]
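# The circumcircle construction described above can be sketched in plain numpy (a hypothetical re-implementation, not the `BLC` code): the radius of the circumcircle through three boundary points, whose inverse is the local curvature estimate.

```python
import numpy as np

def circumradius(p1, p2, p3):
    # side lengths of the triangle formed by the three points
    a = np.linalg.norm(np.subtract(p2, p3))
    b = np.linalg.norm(np.subtract(p1, p3))
    c = np.linalg.norm(np.subtract(p1, p2))
    # triangle area via the shoelace formula; collinear points -> infinite radius
    area = 0.5 * abs((p2[0]-p1[0])*(p3[1]-p1[1]) - (p3[0]-p1[0])*(p2[1]-p1[1]))
    if area == 0:
        return np.inf
    return a * b * c / (4 * area)

# three points on the unit circle give radius ~1.0
print(circumradius((1, 0), (0, 1), (-1, 0)))
```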
# Now that we have the local curvature for all points on the boundary, we compute features that describe it, such as the average and standard deviation of curvature (positive and negative), the number of times the polarity changes, etc. Feature names are self-descriptive.
#compute local and global features
global_features = [BLC.global_curvature_features(np.array(local_curvature))]
global_features = pd.DataFrame([o.__dict__ for o in global_features])
global_features
# We also check to see if there are any prominent jumps in curvature in the image.
prominant_features = [BLC.prominant_curvature_features(local_curvature,show_plot=True)]
prominant_features = pd.DataFrame([o.__dict__ for o in prominant_features])
prominant_features
# Below are the features computed for 4 nuclei.
BLC_feat= pd.concat([BLC.curvature_features(props[77].image,step=5),
BLC.curvature_features(props[452].image,step=5),
BLC.curvature_features(props[567].image,step=5),
BLC.curvature_features(props[114].image,step=5)])
BLC_feat
# #### Intensity Features
#
# Here we compute features that describe the intensity distribution.
#
# These include features that describe the intensity distribution, entropy and heterochromatin ratios.
#
# Below are the features computed for 4 nuclei.
Int_feat= pd.concat([IDF.intensity_features(props[77].image,props[77].intensity_image),
IDF.intensity_features(props[452].image,props[452].intensity_image),
IDF.intensity_features(props[567].image,props[567].intensity_image),
IDF.intensity_features(props[114].image,props[114].intensity_image)])
Int_feat
# #### Image Textures
# Here we compute features that describe the texture of the image.
#
# These include the GLCM (grey-level co-occurrence matrix) features.
#
# Below are the features computed for 4 nuclei.
Int_Text= pd.concat([IT.texture_features(props[77].image,props[77].intensity_image,props[77].local_centroid),
IT.texture_features(props[452].image,props[452].intensity_image,props[452].local_centroid),
IT.texture_features(props[567].image,props[567].intensity_image,props[567].local_centroid),
IT.texture_features(props[114].image,props[114].intensity_image,props[114].local_centroid)])
Int_Text
# #### Misc. features
#
# We merge all features and compute some related features.
# +
features = pd.concat([propstable.iloc[[77,452,567,114]].reset_index(drop=True),
BG_feat.reset_index(drop=True),
BLC_feat.reset_index(drop=True),
Int_feat.reset_index(drop=True),
Int_Text.reset_index(drop=True)], axis=1)
features['Concavity']=(features['convex_area']-features['area'])/features['convex_area']
features['Solidity']=features['area']/features['convex_area']
features['A_R']=features['minor_axis_length']/features['major_axis_length']
features['Shape_Factor']=(features['perimeter']**2)/(4*np.pi*features['area'])
features['Area_bbArea']=features['area']/features['bbox_area']
features['Center_Mismatch']=np.sqrt((features['weighted_centroid-0']-features['centroid-0'])**2+
(features['weighted_centroid-1']-features['centroid-1'])**2)
features['Smallest_largest_Calliper']=features['Min_Calliper']/features['Max_Calliper']
features['Frac_Peri_w_posi_curvature']=features['Len_posi_Curvature']/features['perimeter']
features['Frac_Peri_w_neg_curvature']=features['Len_neg_Curvature'].replace(to_replace ="NA",value =0)/features['perimeter']
features['Frac_Peri_w_polarity_changes']=features['nPolarity_changes']/features['perimeter']
features
# -
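# As a sanity check on the derived measures (a sketch, not part of the original notebook): the shape factor P^2/(4*pi*A) computed above equals 1 for a perfect circle and grows as the boundary becomes more irregular.

```python
import numpy as np

def shape_factor(perimeter, area):
    # P^2 / (4*pi*A): 1.0 for a circle, larger for irregular shapes
    return perimeter**2 / (4 * np.pi * area)

r = 3.0
print(shape_factor(2 * np.pi * r, np.pi * r**2))  # circle -> ~1.0
print(shape_factor(8.0, 4.0))                     # square of side 2 -> ~1.27
```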
# For a quick extraction of all features given a segmented image use the following code:
# +
from src.utlis.Run_nuclear_feature_extraction import run_nuclear_chromatin_feat_ext
features = run_nuclear_chromatin_feat_ext(raw_image_path,labelled_image_path,feature_path)
# -
features_1 = features.replace('NA',0, regex=True)
features_1 = features_1.replace('NaN',0, regex=True)
features_1
# #### Tissue level summary:
#
# In order to characterise the nuclear density/crowding in a given tissue, we compute the distribution characteristics of each of the above features.
#
# The measures available are: Median, Min, Max, Standard Deviation (SD), Coefficient of Variation (CV), Coefficient of Dispersion (CD), Inter-Quartile Range (IQR) and Quartile Coefficient of Dispersion (QCD).
#
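# Two of the dispersion measures listed above can be sketched as follows (assumed definitions; the library's own implementation may differ):

```python
import numpy as np

def coefficient_of_variation(x):
    # SD relative to the mean
    x = np.asarray(x, dtype=float)
    return float(np.std(x) / np.mean(x))

def quartile_coefficient_of_dispersion(x):
    # (Q3 - Q1) / (Q3 + Q1), a robust relative-spread measure
    q1, q3 = np.percentile(np.asarray(x, dtype=float), [25, 75])
    return float((q3 - q1) / (q3 + q1))

print(quartile_coefficient_of_dispersion([1, 2, 3, 4, 5]))  # -> 0.333...
```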
from src.utlis.summarising_features import summarise_feature_table
features_1 = features.replace('NA',0, regex=True)
features_1 = features_1.replace('NaN',0, regex=True)
summarise_feature_table(features_1.drop(['Image'],axis=1))
# source: notes_on_feature_extraction/Nuclear_Features.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (cv)
# language: python
# name: cv
# ---
# <i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
#
# <i>Licensed under the MIT License.</i>
# # State-of-the-art image similarity
#
# This notebook implements a state-of-the-art approach for image similarity.
#
# We showed in the [01_training_and_evaluation_introduction](01_training_and_evaluation_introduction.ipynb) notebook how to train a DNN and use its feature embeddings for image retrieval. In that notebook, the DNN was trained using a standard image classification loss. More accurate models are typically trained explicitly for image similarity using triplet learning, as in the [FaceNet](https://arxiv.org/pdf/1503.03832.pdf) paper. While triplet-based approaches achieve good accuracies, they are conceptually complex, slower, and more difficult to train/converge due to issues such as how to mine relevant triplets.
#
# Instead, we implement the BMVC 2019 paper "[Classification is a Strong Baseline for Deep Metric Learning](https://arxiv.org/abs/1811.12649)" which shows that this extra overhead is not necessary. Indeed, by making small changes to standard classification DNNs, the authors achieve results which are comparable or better than the previous state-of-the-art.
#
# Finally, we provide an implementation of a popular **re-ranking** approach published in the CVPR 2017 paper [Re-ranking Person Re-identification with k-reciprocal Encoding](http://openaccess.thecvf.com/content_cvpr_2017/papers/Zhong_Re-Ranking_Person_Re-Identification_CVPR_2017_paper.pdf). Re-ranking is a post-processing step to improve retrieval accuracy. The proposed approach is fast, fully automatic, unsupervised, and shown to outperform other state-of-the-art methods with regards to accuracy.
#
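# To make the retrieval setting concrete, here is a toy sketch (hypothetical data, not the repository's `compute_distances` implementation): gallery embeddings are ranked by cosine distance to a query embedding, and re-ranking would post-process exactly such a ranked list.

```python
import numpy as np

def rank_gallery(query, gallery):
    # L2-normalize, then rank gallery items by cosine distance to the query
    q = np.asarray(query, dtype=float)
    g = np.asarray(gallery, dtype=float)
    q = q / np.linalg.norm(q)
    g = g / np.linalg.norm(g, axis=1, keepdims=True)
    distances = 1 - g @ q
    return np.argsort(distances)  # best match first

gallery = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
print(rank_gallery([1.0, 0.1], gallery))  # -> [0 2 1]
```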
#
# ## Reproducing published results
#
# ### Datasets
#
# Three common benchmark datasets were used to verify the correctness of this notebook, namely [CARS-196](https://ai.stanford.edu/~jkrause/cars/car_dataset.html), [CUB-200-2011](http://www.vision.caltech.edu/visipedia/CUB-200-2011.html), and [SOP](http://cvgl.stanford.edu/projects/lifted_struct/).
#
# | Name | #classes | #images |
# | ---- | --------- | ------- |
# | CUB-200-2011 |200| ~12,000 |
# | CARS-196 | 196 | ~16,000 |
# | SOP |22634 | ~120,000|
#
#
# We follow the literature closely to replicate the same train/test splits and the same evaluation protocol as most publications (as described e.g. in this [paper](https://arxiv.org/abs/1511.06452)). For the datasets above, out of the total N classes, all images within the first N/2 classes are used for training and the remaining images are used for evaluation. This is an open-set evaluation setting where all images of a class are either fully assigned to training or to testing.
#
# ### Parameters
#
# Our model matches that of the [paper](https://arxiv.org/abs/1811.12649): ResNet-50 architecture with 224 pixel input resolution and a temperature of 0.05. We train the head and the full DNN for 12 epochs each, with a learning rate of 0.01 and 0.0001 respectively. Similar to the paper, we decrease the learning rate by a factor of 10 for the CUB-200-2011 dataset to avoid overfitting. Note that competitive results can often be achieved using just half the number of epochs or less. All training uses fastai's `fit_one_cycle` policy.
#
# ### Results
#
# As can be seen in the tables below, using this notebook (without re-ranking) we can re-produce the published accuracies. Our results for the CUB-200-2011 and the SOP datasets are close or even above the numbers in the paper; for CARS-196 however they are a few percentage points lower. It is worth pointing out the significant gain in accuracy for the SOP dataset compared to using the standard image classification loss in the [01_training_and_evaluation_introduction](01_training_and_evaluation_introduction.ipynb) notebook, i.e. from 57% to 80%.
#
# Recall@1 using 2048 dimensional features:
#
# | | CUB-200-2011 | CARS-196 | SOP |
# | ------------- | ------------ | -------- | --- |
# | This notebook | 65% | 84% | 81% |
# | Reported in paper| 65% | 89% | 80% |
#
#
# Recall@1 using 512 dimensional features:
#
# | | CUB-200-2011 | CARS-196 | SOP |
# | ------------- | ------------ | -------- | --- |
# | 01 notebook | 53% | 75% | 57% |
# | This notebook | 58% | 78% | 80% |
# | Reported in paper| 61% | 84% | 78% |
#
# Finally, using the 4096 dimensional features from the pooling layer of our original ResNet-50 model, we can get a further boost of up to 2-3% compared to using 2048 dimensions:
#
# | | CUB-200-2011 | CARS-196 | SOP |
# | ------------- | ------------ | -------- | --- |
# | This notebook | 67% | 87% | 81% |
#
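# For reference, Recall@1 as reported in the tables above is simply the fraction of query images whose single nearest neighbour has the correct label (a sketch, not the repository's `evaluate` implementation):

```python
def recall_at_1(query_labels, top1_labels):
    # fraction of queries whose top-ranked retrieval shares the query's label
    matches = sum(q == t for q, t in zip(query_labels, top1_labels))
    return matches / len(query_labels)

print(recall_at_1([0, 1, 2, 2], [0, 1, 1, 2]))  # -> 0.75
```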
# ## Initialization
# Ensure edits to libraries are loaded and plotting is shown in the notebook.
# %matplotlib inline
# %reload_ext autoreload
# %autoreload 2
# +
# Regular python libraries
import math, os, random, sys, torch
import numpy as np
from pathlib import Path
import scrapbook as sb
import torch.nn as nn
from IPython.core.debugger import set_trace
# Fast.ai
import fastai
from fastai.layers import FlattenedLoss
from fastai.vision import (
cnn_learner,
DatasetType,
ImageList,
imagenet_stats,
models,
)
# Computer Vision repository
sys.path.extend([".", "../.."]) # to access the utils_cv library
from utils_cv.common.data import unzip_url
from utils_cv.common.gpu import which_processor, db_num_workers
from utils_cv.similarity.data import Urls
from utils_cv.similarity.metrics import compute_distances, evaluate
from utils_cv.similarity.model import compute_features, compute_features_learner
from utils_cv.similarity.plot import plot_distances
# -
print(f"Fast.ai version = {fastai.__version__}")
which_processor()
# ## Data & Parameters
#
# A small dataset is provided to run this notebook and to illustrate how the dataset is structured. The embedding dimension should be set to a value <= 2048 to use the pooling layer suggested in the paper, or to 4096 to use the original ResNet-50 pooling layer.
# + tags=["parameters"]
# Dataset
data_root_dir = unzip_url(Urls.fridge_objects_retrieval_path, exist_ok = True)
DATA_FINETUNE_PATH = os.path.join(data_root_dir, "train")
DATA_RANKING_PATH = os.path.join(data_root_dir, "test")
print("Image root directory: {}".format(data_root_dir))
# DNN configuration and learning parameters. Use more epochs to possibly improve accuracy.
EPOCHS_HEAD = 6 #12
EPOCHS_BODY = 6 #12
HEAD_LEARNING_RATE = 0.01
BODY_LEARNING_RATE = 0.0001
BATCH_SIZE = 32
IM_SIZE = (224,224)
DROPOUT = 0
ARCHITECTURE = models.resnet50
# Desired embedding dimension. Higher dimensions slow down retrieval but often provide better accuracy.
EMBEDDING_DIM = 2048
assert EMBEDDING_DIM == 4096 or EMBEDDING_DIM <= 2048
# -
# Most images are used for training, and only a small percentage for validation to obtain a rough estimate of the validation loss. We use the standard image augmentations specified by fastai's `get_transforms()` function which includes horizontal flipping, image warping and changing pixel intensities.
# +
# Load images into fast.ai's ImageDataBunch object
random.seed(642)
data_finetune = (
ImageList.from_folder(DATA_FINETUNE_PATH)
.split_by_rand_pct(valid_pct=0.05, seed=20)
.label_from_folder()
.transform(tfms=fastai.vision.transform.get_transforms(), size=IM_SIZE)
.databunch(bs=BATCH_SIZE, num_workers = db_num_workers())
.normalize(imagenet_stats)
)
print(f"Data for fine-tuning: {len(data_finetune.train_ds.x)} training images and {len(data_finetune.valid_ds.x)} validation images.")
data_finetune.show_batch(rows=3, figsize=(12, 6))
# -
# ## NormSoftmax layers and loss
# The cell below implements the NormSoftmax loss and layers from the "[Classification is a Strong Baseline for Deep Metric Learning](https://arxiv.org/abs/1811.12649)" paper. Most of the code is taken from the [official repository](https://github.com/azgo14/classification_metric_learning) and only slightly modified to work within the fast.ai framework and to optionally use the 4096 dimensional embedding of the original ResNet-50 model.
# +
class EmbeddedFeatureWrapper(nn.Module):
"""
DNN head: pools, down-projects, and normalizes DNN features to be of unit length.
"""
    def __init__(self, input_dim, output_dim, dropout=0):
        super(EmbeddedFeatureWrapper, self).__init__()
        self.output_dim = output_dim
        if output_dim != 4096:
            self.pool = nn.AdaptiveAvgPool2d(1)
        self.standardize = nn.LayerNorm(input_dim, elementwise_affine = False)
        self.remap = None
        if input_dim != output_dim:
            self.remap = nn.Linear(input_dim, output_dim, bias = False)
        # Store the dropout module (or None) so that forward() never
        # compares an nn.Module against 0, which would raise a TypeError.
        self.dropout = nn.Dropout(dropout) if dropout > 0 else None
    def forward(self, x):
        if self.output_dim != 4096:
            x = self.pool(x)
        x = x.view(x.size(0), -1)
        x = self.standardize(x)
        if self.remap:
            x = self.remap(x)
        if self.dropout is not None:
            x = self.dropout(x)
        x = nn.functional.normalize(x, dim=1)
        return x
class L2NormalizedLinearLayer(nn.Module):
"""
Apply a linear layer to the input, where the weights are normalized to be of unit length.
"""
def __init__(self, input_dim, output_dim):
super(L2NormalizedLinearLayer, self).__init__()
self.weight = nn.Parameter(torch.Tensor(output_dim, input_dim))
        # Initialization from nn.Linear
        # (https://github.com/pytorch/pytorch/blob/v1.0.0/torch/nn/modules/linear.py#L129)
        stdv = 1. / math.sqrt(self.weight.size(1))
        self.weight.data.uniform_(-stdv, stdv)
def forward(self, x):
norm_weight = nn.functional.normalize(self.weight, dim=1)
prediction_logits = nn.functional.linear(x, norm_weight)
return prediction_logits
class NormSoftmaxLoss(nn.Module):
"""
Apply temperature scaling on logits before computing the cross-entropy loss.
"""
def __init__(self, temperature=0.05):
super(NormSoftmaxLoss, self).__init__()
self.temperature = temperature
self.loss_fn = nn.CrossEntropyLoss()
def forward(self, prediction_logits, instance_targets):
loss = self.loss_fn(prediction_logits / self.temperature, instance_targets)
return loss
# -
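# To get intuition for the temperature parameter, here is a stdlib-only sketch (independent of the PyTorch classes above) of how dividing logits by a small temperature such as 0.05 sharpens the softmax towards a one-hot distribution:

```python
import math

def softmax(logits, temperature=1.0):
    # Scale the logits by 1/temperature before exponentiating
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(softmax(logits))                    # fairly soft distribution
print(softmax(logits, temperature=0.05))  # nearly one-hot
```

# With a low temperature, small differences between logits translate into near-certain predictions, which strengthens the training signal of the cross-entropy loss.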
# ## Modified classification DNN
#
# We begin by retrieving a pre-trained [ResNet50](https://arxiv.org/pdf/1512.03385.pdf) CNN from fast.ai's library which was trained on ImageNet.
# +
learn = cnn_learner(
data_finetune,
ARCHITECTURE,
metrics=[],
ps=DROPOUT
)
print("** Original model head **")
print(learn.model[1])
# -
# The CNN is then modified to use the suggested "norm softmax loss" instead of the default cross-entropy loss:
# +
# By default, use the 2048-dimensional pooling layer as implemented in the paper.
# Optionally, keep the 4096-dimensional pooling layers of the original ResNet-50 model instead.
if EMBEDDING_DIM != 4096:
modules = []
pooling_dim = 2048
else:
modules = [l for l in learn.model[1][:3]]
pooling_dim = 4096
# Add new layers
modules.append(EmbeddedFeatureWrapper(input_dim=pooling_dim,
output_dim=EMBEDDING_DIM,
dropout=DROPOUT))
modules.append(L2NormalizedLinearLayer(input_dim=EMBEDDING_DIM,
output_dim=len(data_finetune.classes)))
learn.model[1] = nn.Sequential(*modules)
# Create new learner object since otherwise the new layers are not updated during backprop
learn = fastai.vision.Learner(data_finetune, learn.model)
# Update loss function
learn.loss_func = FlattenedLoss(NormSoftmaxLoss)
print("\n** Edited model head **")
print(learn.model[1])
# -
# ## Run DNN training
# Similar to the [classification notebooks](https://github.com/microsoft/ComputerVision/tree/master/classification/notebooks) we first refine the head and then the full CNN.
learn.fit_one_cycle(EPOCHS_HEAD, HEAD_LEARNING_RATE)
# Let's now unfreeze all the layers and fine-tune the model further.
#
learn.unfreeze()
learn.fit_one_cycle(EPOCHS_BODY, BODY_LEARNING_RATE)
# ## Feature extraction
# We now load the ranking set which is used to evaluate image retrieval performance.
# +
# Load images into fast.ai's ImageDataBunch object
data_rank = (
ImageList.from_folder(DATA_RANKING_PATH)
.split_none()
.label_from_folder()
.transform(size=IM_SIZE)
.databunch(bs=BATCH_SIZE, num_workers = db_num_workers())
.normalize(imagenet_stats)
)
print(f"Data for retrieval evaluation: {len(data_rank.train_ds.x)} images.")
# Display example images
data_rank.show_batch(rows=3, figsize=(12, 6))
# -
# The following lines extract the DNN features by running each image in the ranking set through the model up to the embedding layer.
# Compute DNN features for all ranking set images
embedding_layer = learn.model[1][-2]
dnn_features = compute_features_learner(data_rank, DatasetType.Train, learn, embedding_layer)
# ## Image Retrieval Example
# The cell below shows how to find and display the most similar images in the ranking set for a given query image (which we also select from the ranking set). This example is similar to the one shown in the [00_webcam.ipynb](https://github.com/microsoft/ComputerVision/tree/master/similarity/notebooks/00_webcam.ipynb) notebook.
# +
# Get the DNN feature for the query image
query_im_path = str(data_rank.train_ds.items[1])
query_feature = dnn_features[query_im_path]
print(f"Query image path: {query_im_path}")
print(f"Query feature dimension: {len(query_feature)}")
assert len(query_feature) == EMBEDDING_DIM
# Compute the distances between the query and all reference images
distances = compute_distances(query_feature, dnn_features)
plot_distances(distances, num_rows=1, num_cols=6, figsize=(15,5))
# -
# ## Quantitative evaluation
#
# Finally, to quantitatively evaluate image retrieval performance, we compute the Recall@1 measure. The implementation below is slow but straightforward, and shows the usage of the `compute_distances()` function.
#
# Note that the "[Classification is a Strong Baseline for Deep Metric Learning](https://arxiv.org/abs/1811.12649)" paper uses the cosine distance, while we use either the dot product or the L2 distance interchangeably. This is possible since all DNN features are L2-normalized, hence both distance metrics return the same ranking order (see: https://en.wikipedia.org/wiki/Cosine_similarity).
#
#
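# A quick numeric sanity check of this claim: for unit-length vectors $a$ and $b$ we have $\|a-b\|^2 = 2 - 2\,a \cdot b$, so sorting by ascending L2 distance and by descending dot product gives the same order. A stdlib-only sketch:

```python
import math
import random

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def l2(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

random.seed(0)
query = normalize([random.gauss(0, 1) for _ in range(8)])
gallery = [normalize([random.gauss(0, 1) for _ in range(8)]) for _ in range(10)]

# Rank the gallery by both metrics; the orders coincide on unit vectors.
order_by_l2 = sorted(range(10), key=lambda i: l2(query, gallery[i]))
order_by_dot = sorted(range(10), key=lambda i: -dot(query, gallery[i]))
assert order_by_l2 == order_by_dot
```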
# ### Slow approach
#
# The cell below shows how one would intuitively implement the rank@1 measure. Note that this implementation uses our `compute_distances()` function and, due to the nested loops, is too slow for large datasets; hence only a subset of around 500 query images is used.
# +
# Init
count = 0
labels = data_rank.train_ds.y
im_paths = data_rank.train_ds.items
assert len(labels) == len(im_paths) == len(dnn_features)
# Use a subset of up to around 500 images from the ranking set as query images.
step = math.ceil(len(im_paths)/500.0)
query_indices = range(len(im_paths))[::step]
# Loop over all query images
for query_index in query_indices:
    if (query_index + 1) % (step * 100) == 0:
print(query_index, len(im_paths))
# Get the DNN features of the query image
query_im_path = str(im_paths[query_index])
query_feature = dnn_features[query_im_path]
# Compute distance to all images in the gallery set.
distances = compute_distances(query_feature, dnn_features)
# Find the image with smallest distance
min_dist = float('inf')
min_dist_index = None
for index, distance in enumerate(distances):
if index != query_index: #ignore the query image itself
if distance[1] < min_dist:
min_dist = distance[1]
min_dist_index = index
# Count how often the image with smallest distance has the same label as the query
if labels[query_index] == labels[min_dist_index]:
count += 1
# -
recallAt1 = 100.0 * count / len(query_indices)
print("Recall@1 = {:2.2f}".format(recallAt1))
# Log some outputs using scrapbook which are used during testing to verify correct notebook execution
sb.glue("recallAt1", recallAt1)
# ### Fast approach with re-ranking
#
# Below is a much more efficient computation of various rank@N metrics and of the mean average precision (mAP) metric.
ranks, mAP = evaluate(data_rank.train_ds, dnn_features, use_rerank = False)
# The function also supports **re-ranking** to improve accuracy. Re-ranking is introduced at the top of this notebook, and in our experience can dramatically boost mAP, with less of an influence on rank@1. See the [code](../../utils_cv/similarity/references/re_ranking.py) and the [paper](https://arxiv.org/pdf/1701.08398.pdf) for more information and for a discussion of the three main parameters: k1, k2, and lambda. By default we use k1=20, k2=6, and lambda=0.3, as suggested in the paper and shown to work well on four different datasets. We suggest, however, fine-tuning these parameters to obtain the maximum accuracy improvement.
ranks, mAP = evaluate(data_rank.train_ds, dnn_features, use_rerank = True)
| scenarios/similarity/02_state_of_the_art.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="cPKvKMkAkRMn"
# ##### Copyright 2019 The TensorFlow Authors.
# + cellView="form" colab={} colab_type="code" id="TWofNaR-kS1s"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] colab_type="text" id="xWVObL142EBs"
# # Writing custom layers and models with Keras
# + [markdown] colab_type="text" id="p7VqaVrAvw9j"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/guide/keras/custom_layers_and_models"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/keras/custom_layers_and_models.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/keras/custom_layers_and_models.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/keras/custom_layers_and_models.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] colab_type="text" id="NrUIvL8oxlhj"
# ### Setup
# + colab={} colab_type="code" id="Szd0mNROxqJ7"
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# # %tensorflow_version only exists in Colab.
# %tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
tf.keras.backend.clear_session() # For easy reset of notebook state.
# + [markdown] colab_type="text" id="zVsTiIb62IbJ"
# ## The Layer class
#
# + [markdown] colab_type="text" id="D0KRUQWG2k4v"
# ### Layers encapsulate a state (weights) and some computation
#
# The main data structure you'll work with is the `Layer`.
# A layer encapsulates both a state (the layer's "weights")
# and a transformation from inputs to outputs (a "call", the layer's
# forward pass).
#
# Here's a densely-connected layer. It has a state: the variables `w` and `b`.
#
# + colab={} colab_type="code" id="LisHKABR2-Nj"
from tensorflow.keras import layers
class Linear(layers.Layer):
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
w_init = tf.random_normal_initializer()
self.w = tf.Variable(initial_value=w_init(shape=(input_dim, units),
dtype='float32'),
trainable=True)
b_init = tf.zeros_initializer()
self.b = tf.Variable(initial_value=b_init(shape=(units,),
dtype='float32'),
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
x = tf.ones((2, 2))
linear_layer = Linear(4, 2)
y = linear_layer(x)
print(y)
# + [markdown] colab_type="text" id="Y8RsI6Hr2OOd"
# Note that the weights `w` and `b` are automatically tracked by the layer upon
# being set as layer attributes:
# + colab={} colab_type="code" id="D7x3Hl8m2XEJ"
assert linear_layer.weights == [linear_layer.w, linear_layer.b]
# + [markdown] colab_type="text" id="IXQPwqEs2gCH"
# Note you also have access to a quicker shortcut for adding weight to a layer: the `add_weight` method:
#
# + colab={} colab_type="code" id="8LSx6HDg2iPz"
class Linear(layers.Layer):
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
self.w = self.add_weight(shape=(input_dim, units),
initializer='random_normal',
trainable=True)
self.b = self.add_weight(shape=(units,),
initializer='zeros',
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
x = tf.ones((2, 2))
linear_layer = Linear(4, 2)
y = linear_layer(x)
print(y)
# + [markdown] colab_type="text" id="vXjjEthGgr4y"
# #### Layers can have non-trainable weights
#
# Besides trainable weights, you can add non-trainable weights to a layer as well.
# Such weights are meant not to be taken into account during backpropagation,
# when you are training the layer.
#
# Here's how to add and use a non-trainable weight:
# + colab={} colab_type="code" id="OIIfmpDIgyUy"
class ComputeSum(layers.Layer):
def __init__(self, input_dim):
super(ComputeSum, self).__init__()
self.total = tf.Variable(initial_value=tf.zeros((input_dim,)),
trainable=False)
def call(self, inputs):
self.total.assign_add(tf.reduce_sum(inputs, axis=0))
return self.total
x = tf.ones((2, 2))
my_sum = ComputeSum(2)
y = my_sum(x)
print(y.numpy())
y = my_sum(x)
print(y.numpy())
# + [markdown] colab_type="text" id="qLiWVq-3g0c0"
# It's part of `layer.weights`, but it gets categorized as a non-trainable weight:
# + colab={} colab_type="code" id="X7RhEZNvg2dE"
print('weights:', len(my_sum.weights))
print('non-trainable weights:', len(my_sum.non_trainable_weights))
# It's not included in the trainable weights:
print('trainable_weights:', my_sum.trainable_weights)
# + [markdown] colab_type="text" id="DOwYZ-Ew329E"
# ### Best practice: deferring weight creation until the shape of the inputs is known
#
# In the `Linear` example above, our layer took an `input_dim` argument
# that was used to compute the shape of the weights `w` and `b` in `__init__`:
# + colab={} colab_type="code" id="tzxUxoPc3Esh"
class Linear(layers.Layer):
def __init__(self, units=32, input_dim=32):
super(Linear, self).__init__()
self.w = self.add_weight(shape=(input_dim, units),
initializer='random_normal',
trainable=True)
self.b = self.add_weight(shape=(units,),
initializer='zeros',
trainable=True)
# + [markdown] colab_type="text" id="ejSYZGaP4CD6"
# In many cases, you may not know in advance the size of your inputs, and you would
# like to lazily create weights when that value becomes known,
# some time after instantiating the layer.
#
# In the Keras API, we recommend creating layer weights in the `build(inputs_shape)` method of your layer.
# Like this:
# + colab={} colab_type="code" id="AGhRg7Nt4EB8"
class Linear(layers.Layer):
def __init__(self, units=32):
super(Linear, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(shape=(input_shape[-1], self.units),
initializer='random_normal',
trainable=True)
self.b = self.add_weight(shape=(self.units,),
initializer='random_normal',
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
# + [markdown] colab_type="text" id="0SpZzAag4Mk_"
# The `__call__` method of your layer will automatically run `build` the first time it is called.
# You now have a layer that's lazy and easy to use:
# + colab={} colab_type="code" id="_cdMFCUp4KSQ"
linear_layer = Linear(32) # At instantiation, we don't know on what inputs this is going to get called
y = linear_layer(x) # The layer's weights are created dynamically the first time the layer is called
# + [markdown] colab_type="text" id="-kaDooBSC_Oc"
# ### Layers are recursively composable
#
# If you assign a Layer instance as attribute of another Layer,
# the outer layer will start tracking the weights of the inner layer.
#
# We recommend creating such sublayers in the `__init__` method (since the sublayers will typically have a `build` method, they will be built when the outer layer gets built).
# + colab={} colab_type="code" id="-YPI4vwN4Ozo"
# Let's assume we are reusing the Linear class
# with a `build` method that we defined above.
class MLPBlock(layers.Layer):
def __init__(self):
super(MLPBlock, self).__init__()
self.linear_1 = Linear(32)
self.linear_2 = Linear(32)
self.linear_3 = Linear(1)
def call(self, inputs):
x = self.linear_1(inputs)
x = tf.nn.relu(x)
x = self.linear_2(x)
x = tf.nn.relu(x)
return self.linear_3(x)
mlp = MLPBlock()
y = mlp(tf.ones(shape=(3, 64))) # The first call to the `mlp` will create the weights
print('weights:', len(mlp.weights))
print('trainable weights:', len(mlp.trainable_weights))
# + [markdown] colab_type="text" id="fq5_AsbEh-BQ"
# ### Layers recursively collect losses created during the forward pass
#
# When writing the `call` method of a layer, you can create loss tensors that you will want to use later, when writing your training loop. This is doable by calling `self.add_loss(value)`:
#
# + colab={} colab_type="code" id="W66HsCzajERu"
# A layer that creates an activity regularization loss
class ActivityRegularizationLayer(layers.Layer):
def __init__(self, rate=1e-2):
super(ActivityRegularizationLayer, self).__init__()
self.rate = rate
def call(self, inputs):
self.add_loss(self.rate * tf.reduce_sum(inputs))
return inputs
# + [markdown] colab_type="text" id="p_dMrJ-QjcZH"
# These losses (including those created by any inner layer) can be retrieved via `layer.losses`.
# This property is reset at the start of every `__call__` to the top-level layer, so that `layer.losses` always contains the loss values created during the last forward pass.
# + colab={} colab_type="code" id="C1vYiSnVjdCc"
class OuterLayer(layers.Layer):
def __init__(self):
super(OuterLayer, self).__init__()
self.activity_reg = ActivityRegularizationLayer(1e-2)
def call(self, inputs):
return self.activity_reg(inputs)
layer = OuterLayer()
assert len(layer.losses) == 0 # No losses yet since the layer has never been called
_ = layer(tf.zeros((1, 1)))
assert len(layer.losses) == 1 # We created one loss value
# `layer.losses` gets reset at the start of each __call__
_ = layer(tf.zeros((1, 1)))
assert len(layer.losses) == 1 # This is the loss created during the call above
# + [markdown] colab_type="text" id="9Jv3LKNfk_LL"
# In addition, the `loss` property also contains regularization losses created for the weights of any inner layer:
# + colab={} colab_type="code" id="iokhhZfUlJUU"
class OuterLayer(layers.Layer):
def __init__(self):
super(OuterLayer, self).__init__()
self.dense = layers.Dense(32, kernel_regularizer=tf.keras.regularizers.l2(1e-3))
def call(self, inputs):
return self.dense(inputs)
layer = OuterLayer()
_ = layer(tf.zeros((1, 1)))
# This is `1e-3 * sum(layer.dense.kernel ** 2)`,
# created by the `kernel_regularizer` above.
print(layer.losses)
# + [markdown] colab_type="text" id="P2Xdp_dvlGLG"
# These losses are meant to be taken into account when writing training loops, like this:
#
#
# ```python
# # Instantiate an optimizer.
# optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3)
# loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
#
# # Iterate over the batches of a dataset.
# for x_batch_train, y_batch_train in train_dataset:
# with tf.GradientTape() as tape:
# logits = layer(x_batch_train) # Logits for this minibatch
# # Loss value for this minibatch
# loss_value = loss_fn(y_batch_train, logits)
# # Add extra losses created during this forward pass:
# loss_value += sum(model.losses)
#
# grads = tape.gradient(loss_value, model.trainable_weights)
# optimizer.apply_gradients(zip(grads, model.trainable_weights))
# ```
#
# For a detailed guide about writing training loops, see the second section of the [guide to training and evaluation](./train_and_evaluate.ipynb).
# + [markdown] colab_type="text" id="ozo04iqHohNg"
# ### You can optionally enable serialization on your layers
#
# If you need your custom layers to be serializable as part of a [Functional model](./functional.ipynb), you can optionally implement a `get_config` method:
#
# + colab={} colab_type="code" id="ckT5Zbo0oxrz"
class Linear(layers.Layer):
def __init__(self, units=32):
super(Linear, self).__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(shape=(input_shape[-1], self.units),
initializer='random_normal',
trainable=True)
self.b = self.add_weight(shape=(self.units,),
initializer='random_normal',
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
return {'units': self.units}
# Now you can recreate the layer from its config:
layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)
# + [markdown] colab_type="text" id="9fKngh4UozyM"
# Note that the `__init__` method of the base `Layer` class takes some keyword arguments, in particular a `name` and a `dtype`. It's good practice to pass these arguments to the parent class in `__init__` and to include them in the layer config:
# + colab={} colab_type="code" id="UCMoN42no0D5"
class Linear(layers.Layer):
def __init__(self, units=32, **kwargs):
super(Linear, self).__init__(**kwargs)
self.units = units
def build(self, input_shape):
self.w = self.add_weight(shape=(input_shape[-1], self.units),
initializer='random_normal',
trainable=True)
self.b = self.add_weight(shape=(self.units,),
initializer='random_normal',
trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
config = super(Linear, self).get_config()
config.update({'units': self.units})
return config
layer = Linear(64)
config = layer.get_config()
print(config)
new_layer = Linear.from_config(config)
# + [markdown] colab_type="text" id="sNrHV0zAo0Tc"
# If you need more flexibility when deserializing the layer from its config, you can also override the `from_config` class method. This is the base implementation of `from_config`:
#
# ```python
# def from_config(cls, config):
# return cls(**config)
# ```
#
# To learn more about serialization and saving, see the complete [Guide to Saving and Serializing Models](./save_and_serialize.ipynb).
# + [markdown] colab_type="text" id="-TB8iViSo4p9"
# ### Privileged `training` argument in the `call` method
#
#
# Some layers, in particular the `BatchNormalization` layer and the `Dropout` layer, have different behaviors during training and inference. For such layers, it is standard practice to expose a `training` (boolean) argument in the `call` method.
#
# By exposing this argument in `call`, you enable the built-in training and evaluation loops (e.g. `fit`) to correctly use the layer in training and inference.
#
# + colab={} colab_type="code" id="QyI_b4Rgo-EE"
class CustomDropout(layers.Layer):
def __init__(self, rate, **kwargs):
super(CustomDropout, self).__init__(**kwargs)
self.rate = rate
def call(self, inputs, training=None):
if training:
return tf.nn.dropout(inputs, rate=self.rate)
return inputs
# + [markdown] colab_type="text" id="3X6eQH_K2wf1"
# ## Building Models
# + [markdown] colab_type="text" id="XZen-bAOE9I5"
# ### The Model class
#
# In general, you will use the `Layer` class to define inner computation blocks,
# and will use the `Model` class to define the outer model -- the object you will train.
#
# For instance, in a ResNet50 model, you would have several ResNet blocks subclassing `Layer`,
# and a single `Model` encompassing the entire ResNet50 network.
#
# The `Model` class has the same API as `Layer`, with the following differences:
#
# - It exposes built-in training, evaluation, and prediction loops (`model.fit()`, `model.evaluate()`, `model.predict()`).
# - It exposes the list of its inner layers, via the `model.layers` property.
# - It exposes saving and serialization APIs.
#
# Effectively, the "Layer" class corresponds to what we refer to in the literature
# as a "layer" (as in "convolution layer" or "recurrent layer") or as a "block" (as in "ResNet block" or "Inception block").
#
# Meanwhile, the "Model" class corresponds to what is referred to in the literature
# as a "model" (as in "deep learning model") or as a "network" (as in "deep neural network").
#
# For instance, we could take our mini-resnet example above, and use it to build a `Model` that we could
# train with `fit()`, and that we could save with `save_weights`:
#
# ```python
# class ResNet(tf.keras.Model):
#
# def __init__(self):
# super(ResNet, self).__init__()
# self.block_1 = ResNetBlock()
# self.block_2 = ResNetBlock()
# self.global_pool = layers.GlobalAveragePooling2D()
# self.classifier = Dense(num_classes)
#
# def call(self, inputs):
# x = self.block_1(inputs)
# x = self.block_2(x)
# x = self.global_pool(x)
# return self.classifier(x)
#
#
# resnet = ResNet()
# dataset = ...
# resnet.fit(dataset, epochs=10)
# resnet.save_weights(filepath)
# ```
#
# + [markdown] colab_type="text" id="roVCX-TJqYzx"
# ### Putting it all together: an end-to-end example
#
# Here's what you've learned so far:
#
# - A `Layer` encapsulates a state (created in `__init__` or `build`) and some computation (in `call`).
# - Layers can be recursively nested to create new, bigger computation blocks.
# - Layers can create and track losses (typically regularization losses).
# - The outer container, the thing you want to train, is a `Model`. A `Model` is just like a `Layer`, but with added training and serialization utilities.
#
# Let's put all of these things together into an end-to-end example: we're going to implement a Variational AutoEncoder (VAE). We'll train it on MNIST digits.
#
# Our VAE will be a subclass of `Model`, built as a nested composition of layers that subclass `Layer`. It will feature a regularization loss (KL divergence).
# + colab={} colab_type="code" id="1QxkfjtzE4X2"
class Sampling(layers.Layer):
"""Uses (z_mean, z_log_var) to sample z, the vector encoding a digit."""
def call(self, inputs):
z_mean, z_log_var = inputs
batch = tf.shape(z_mean)[0]
dim = tf.shape(z_mean)[1]
epsilon = tf.keras.backend.random_normal(shape=(batch, dim))
return z_mean + tf.exp(0.5 * z_log_var) * epsilon
class Encoder(layers.Layer):
"""Maps MNIST digits to a triplet (z_mean, z_log_var, z)."""
def __init__(self,
latent_dim=32,
intermediate_dim=64,
name='encoder',
**kwargs):
super(Encoder, self).__init__(name=name, **kwargs)
self.dense_proj = layers.Dense(intermediate_dim, activation='relu')
self.dense_mean = layers.Dense(latent_dim)
self.dense_log_var = layers.Dense(latent_dim)
self.sampling = Sampling()
def call(self, inputs):
x = self.dense_proj(inputs)
z_mean = self.dense_mean(x)
z_log_var = self.dense_log_var(x)
z = self.sampling((z_mean, z_log_var))
return z_mean, z_log_var, z
class Decoder(layers.Layer):
"""Converts z, the encoded digit vector, back into a readable digit."""
def __init__(self,
original_dim,
intermediate_dim=64,
name='decoder',
**kwargs):
super(Decoder, self).__init__(name=name, **kwargs)
self.dense_proj = layers.Dense(intermediate_dim, activation='relu')
self.dense_output = layers.Dense(original_dim, activation='sigmoid')
def call(self, inputs):
x = self.dense_proj(inputs)
return self.dense_output(x)
class VariationalAutoEncoder(tf.keras.Model):
"""Combines the encoder and decoder into an end-to-end model for training."""
def __init__(self,
original_dim,
intermediate_dim=64,
latent_dim=32,
name='autoencoder',
**kwargs):
super(VariationalAutoEncoder, self).__init__(name=name, **kwargs)
self.original_dim = original_dim
self.encoder = Encoder(latent_dim=latent_dim,
intermediate_dim=intermediate_dim)
self.decoder = Decoder(original_dim, intermediate_dim=intermediate_dim)
def call(self, inputs):
z_mean, z_log_var, z = self.encoder(inputs)
reconstructed = self.decoder(z)
# Add KL divergence regularization loss.
kl_loss = - 0.5 * tf.reduce_mean(
z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1)
self.add_loss(kl_loss)
return reconstructed
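# + [markdown]
# The `kl_loss` term above is the closed-form KL divergence between the approximate posterior $\mathcal{N}(\mu, \sigma^2)$ (with $\mu$ = `z_mean` and $\log \sigma^2$ = `z_log_var`) and the standard normal prior:
#
# $$D_{KL}\big(\mathcal{N}(\mu, \sigma^2) \,\|\, \mathcal{N}(0, 1)\big) = -\frac{1}{2} \sum_j \left(1 + \log \sigma_j^2 - \mu_j^2 - \sigma_j^2\right)$$
#
# Note that the code uses `reduce_mean` rather than a sum over the latent dimensions, which only rescales the strength of the regularizer.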
# + colab={} colab_type="code" id="oDVSVl4Iu8kC"
original_dim = 784
vae = VariationalAutoEncoder(original_dim, 64, 32)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
mse_loss_fn = tf.keras.losses.MeanSquaredError()
loss_metric = tf.keras.metrics.Mean()
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
train_dataset = tf.data.Dataset.from_tensor_slices(x_train)
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
epochs = 3
# Iterate over epochs.
for epoch in range(epochs):
print('Start of epoch %d' % (epoch,))
# Iterate over the batches of the dataset.
for step, x_batch_train in enumerate(train_dataset):
with tf.GradientTape() as tape:
reconstructed = vae(x_batch_train)
# Compute reconstruction loss
loss = mse_loss_fn(x_batch_train, reconstructed)
loss += sum(vae.losses) # Add KLD regularization loss
grads = tape.gradient(loss, vae.trainable_weights)
optimizer.apply_gradients(zip(grads, vae.trainable_weights))
loss_metric(loss)
if step % 100 == 0:
print('step %s: mean loss = %s' % (step, loss_metric.result()))
# + [markdown] colab_type="text" id="5hgOl_y34NZD"
# Note that since the VAE is subclassing `Model`, it features built-in training loops. So you could also have trained it like this:
# + colab={} colab_type="code" id="Y153oEzk4Piz"
vae = VariationalAutoEncoder(784, 64, 32)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())
vae.fit(x_train, x_train, epochs=3, batch_size=64)
# + [markdown] colab_type="text" id="ZVkFme-U0IHb"
# ### Beyond object-oriented development: the Functional API
#
# Was this example too much object-oriented development for you? You can also build models using [the Functional API](./functional.ipynb). Importantly, choosing one style or another does not prevent you from leveraging components written in the other style: you can always mix-and-match.
#
# For instance, the Functional API example below reuses the same `Sampling` layer we defined in the example above.
# + colab={} colab_type="code" id="1QzeXGAl3Uxn"
original_dim = 784
intermediate_dim = 64
latent_dim = 32
# Define encoder model.
original_inputs = tf.keras.Input(shape=(original_dim,), name='encoder_input')
x = layers.Dense(intermediate_dim, activation='relu')(original_inputs)
z_mean = layers.Dense(latent_dim, name='z_mean')(x)
z_log_var = layers.Dense(latent_dim, name='z_log_var')(x)
z = Sampling()((z_mean, z_log_var))
encoder = tf.keras.Model(inputs=original_inputs, outputs=z, name='encoder')
# Define decoder model.
latent_inputs = tf.keras.Input(shape=(latent_dim,), name='z_sampling')
x = layers.Dense(intermediate_dim, activation='relu')(latent_inputs)
outputs = layers.Dense(original_dim, activation='sigmoid')(x)
decoder = tf.keras.Model(inputs=latent_inputs, outputs=outputs, name='decoder')
# Define VAE model.
outputs = decoder(z)
vae = tf.keras.Model(inputs=original_inputs, outputs=outputs, name='vae')
# Add KL divergence regularization loss.
kl_loss = - 0.5 * tf.reduce_mean(
z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1)
vae.add_loss(kl_loss)
# Train.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
vae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())
vae.fit(x_train, x_train, epochs=3, batch_size=64)
| site/en/guide/keras/custom_layers_and_models.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ### Power On the Compute Server
import dracclient.client
import getpass
import warnings
warnings.filterwarnings('ignore')
import yaml
with open('credential.yml') as ymlfile:
cfg = yaml.safe_load(ymlfile)
local_ip = cfg['server']['local_ip']
remote_ip = cfg['server']['remote_ip']
local_port = cfg['server']['local_port']
remote_port = cfg['server']['remote_port']
# Prompt for credentials without echoing them (assumes getpass is the intended source)
username = getpass.getpass('username: ')
password = getpass.getpass('password: ')
client = dracclient.client.DRACClient(local_ip, username, password, port=local_port)
power = client.get_power_state()
print(power)
client.set_power_state(target_state= 'POWER_ON')
| Compute_Server/iDRAC.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import offsetbox
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
from sklearn.datasets import load_boston
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
plt.rcParams['figure.figsize'] = (20, 25)
boston = load_boston()
# -
def show_dataset(X, y, ax=None):
    """
    Given examples X in 2 or 3 dimensions and targets y, scatter-plot them
    :param X: data of shape (n_samples, 2) or (n_samples, 3)
    :param y: target values, used for coloring
    :param ax: axes with a 3D projection (required when X has 3 columns)
    :return:
    """
if X.shape[1] == 3:
scattered = ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=y, cmap='gray')
plt.colorbar(scattered)
elif X.shape[1] == 2:
plt.scatter(X[:, 0], X[:, 1], c=y, cmap='gray')
plt.colorbar()
else:
raise RuntimeError("Dimension too big")
# +
ax = plt.subplot(3, 1, 1)
alg = TSNE(n_components=2, perplexity=5.0, n_iter=2000, metric="euclidean")
new_data = alg.fit_transform(boston.data, boston.target)
show_dataset(new_data, boston.target)
ax = plt.subplot(3, 1, 2)
alg2 = TSNE(n_components=2, perplexity=20.0, n_iter=2000, metric="euclidean")
new_data2 = alg2.fit_transform(boston.data, boston.target)
show_dataset(new_data2, boston.target)
ax = plt.subplot(3, 1, 3)
alg3 = TSNE(n_components=2, perplexity=50.0, n_iter=2000, metric="euclidean")
new_data3 = alg3.fit_transform(boston.data, boston.target)
show_dataset(new_data3, boston.target)
# -
| visualization_stuff/visualise_boston.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Numpy Array Operations
#
# * Topics:
# * Arithmetic operations
# * Element-wise operations
# * Library methods (universal functions)
# * Inner and outer product
# * Matrix inverse and transpose
# +
# Import library
import numpy as np
# Create arrays
array1 = np.arange(1,10) # 1D array with 9 elements [1 - 9]
array2 = array1.reshape((3,3)) # reshape to a 3x3 2D array
# Arithmetic operations (with a scalar)
array3 = array2 + 2
print(array3)
array3 = array2 - 2
print(array3)
array3 = array2 * 2
print(array3)
array3 = array2 / 2
print(array3)
array3 = array2 ** 2
print(array3)
# +
# Arithmetic operations (element-wise)
array3 = array2 + array2
print(array3)
array3 = array2 - array2
print(array3)
array3 = array2 * array2
print(array3)
array3 = array2 / array2
print(array3)
array4 = 2*np.ones((3,3))
array3 = array2 ** array4
print(array3)
# Note 1: "invalid" operations (0 / 0) produce the value "nan"
# Note 2: division of a nonzero value by 0 produces the value "inf"
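A minimal sketch of the two notes above (toy arrays, not from this notebook); `np.errstate` is used only to silence the runtime warnings:

```python
import numpy as np

a = np.array([0.0, 1.0])
b = np.array([0.0, 0.0])

# 0 / 0 is an invalid operation -> nan; nonzero / 0 -> inf
with np.errstate(divide='ignore', invalid='ignore'):
    result = a / b

print(result)  # [nan inf]
```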
# +
# Library built-in methods (universal functions)
# Link: https://docs.scipy.org/doc/numpy/reference/ufuncs.html
array1 = np.arange(1,6)
print(array1)
print(array1.max(),np.max(array1))
print(array1.argmax())
print(array1.min(),np.min(array1))
print(array1.argmin())
print(np.sqrt(array1))
print(np.square(array1))
print(np.exp(array1))
print(np.log(array1))
print(np.mean(array1))
print(np.std(array1))
print(np.sin(array1))
# +
# Library built-in methods (universal functions)
x = np.arange(5,10)
print(x)
y = np.ones([5,5])*10
print(y)
z = np.repeat(x,3).reshape(5,3)
print(z)
# Functions
print(np.mod(y,x)) # remainder of the division
print(np.minimum(x,7)) # element-wise comparison of each value with 7
print(np.median(z)) # median of all values
print(np.median(z,axis=0)) # median per column (across the rows)
print(np.median(z,axis=1)) # median per row (across the columns)
print(np.add.accumulate(x)) # cumulative sum
print(np.multiply.outer(x,x)) # outer product (not matrix multiplication)
# +
# Inverse and Transpose
import numpy as np
from numpy.linalg import inv
from numpy.linalg import pinv
# Dot product: np.inner() (vectors) / np.dot() (matrices)
# Outer product: np.outer()
# Inner product of 1D arrays
B = np.array([1,2,3])
C = np.array([4,5,6])
A = np.inner(B,C)
print(A)
# Transpose of 1D vectors
A = np.array([[1,2,3]]) # create as a 2D object
B = A.T
C = A.transpose()
print(B)
print(C)
# Matrix Class
m = np.matrix([[2,3],[4,5]]) # create matrix object
print(m)
m_inv = m.I # calculate matrix inverse
print(m_inv)
print(np.matmul(m,m_inv)) # Identity Matrix
# Transpose and inverse Using Arrays (A = B' * inv(C))
B = np.random.randn(2,2)
C = np.random.randn(2,2)
A = np.dot(B.T, inv(C))
A = np.matmul(B.T, inv(C))
# Transpose and inverse using Matrices (A = B' * inv(C))
B = np.mat(B) # cast B from array object to matrix object
C = np.mat(C) # cast C from array object to matrix object
A = B.T*C.I
# -
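As a quick sanity check (a sketch added here, not part of the original notebook), the array route and the matrix route above should agree up to floating-point error; `np.asmatrix` is used instead of the older `np.mat` alias:

```python
import numpy as np
from numpy.linalg import inv

rng = np.random.default_rng(0)
B = rng.standard_normal((2, 2))
C = rng.standard_normal((2, 2))

A_array = np.matmul(B.T, inv(C))                 # arrays: explicit transpose and inverse
A_matrix = np.asmatrix(B).T * np.asmatrix(C).I   # matrix objects: .T and .I attributes

# both routes compute A = B' * inv(C)
assert np.allclose(A_array, np.asarray(A_matrix))
print("array and matrix routes agree")
```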
| scripts_numpy_pandas/Numpy_03_operations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import os, sys, gc
import time
import glob
import pickle
import copy
import json
import random
from collections import OrderedDict, namedtuple
import multiprocessing
import threading
import traceback
from typing import Tuple, List
import h5py
from tqdm import tqdm, tqdm_notebook
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import cv2
from PIL import Image
import torch
import torchvision
import torch.nn.functional as F
from torch import nn, optim
import torch.optim.lr_scheduler as lr_scheduler
from torch.utils.data import Dataset, DataLoader
from torch.optim.lr_scheduler import CosineAnnealingLR
import torchmetrics
import pl_bolts
import pytorch_lightning as pl
from IPython.display import display, clear_output
import faiss
from modules.AugsDS_v13 import *
from modules.eval_functions import *
from modules.eval_metrics import evaluate
sys.path.append('./modules')
# -
from modules.Facebook_AFMultiGPU_model_v23_sy_v8 import ArgsT23_EffNetV2, FacebookModel
# +
args = ArgsT23_EffNetV2()
args.pretrained_bb = False
args.arc_classnum = 40
print(args)
# -
# # Building model
model = FacebookModel(args)
# # Loading ckpt
ckpt_filename = './checkpoints/sjy_test9/epoch=9-step=42649_LIGHT.ckpt'
_ = model.restore_checkpoint(ckpt_filename)
# # Inference configuration
# +
do_simple_augmentation = False
K = 500
BATCH_SIZE = 128
N_WORKERS = 7
DS_INPUT_DIR = f'./all_datasets/dataset'
ALL_FOLDERS = ['query_images', 'reference_images', 'training_images']
args.ALL_FOLDERS = ALL_FOLDERS
args.BATCH_SIZE = BATCH_SIZE
args.N_WORKERS = N_WORKERS
args.DS_INPUT_DIR = DS_INPUT_DIR
# +
while DS_INPUT_DIR[-1] in ['/', '\\']:
DS_INPUT_DIR = DS_INPUT_DIR[:-1]
# Path where the rescaled images will be saved
args.DS_DIR = f'{args.DS_INPUT_DIR}_jpg_{args.DATASET_WH[0]}x{args.DATASET_WH[1]}'
# -
# # Data Source
# +
if any( [not os.path.exists(os.path.join(args.DS_DIR, folder)) for folder in args.ALL_FOLDERS] ):
assert os.path.exists(args.DS_INPUT_DIR), f'DS_INPUT_DIR not found: {args.DS_INPUT_DIR}'
resize_dataset(
ds_input_dir=args.DS_INPUT_DIR,
ds_output_dir=args.DS_DIR,
output_wh=args.DATASET_WH,
output_ext='jpg',
num_workers=args.N_WORKERS,
ALL_FOLDERS=args.ALL_FOLDERS,
verbose=False,
)
print('Paths:')
print(' - DS_INPUT_DIR:', args.DS_INPUT_DIR)
print(' - DS_DIR: ', args.DS_DIR)
assert os.path.exists(args.DS_DIR), f'DS_DIR not found: {args.DS_DIR}'
try:
public_ground_truth_path = os.path.join(args.DS_DIR, 'public_ground_truth.csv')
public_gt = pd.read_csv( public_ground_truth_path)
except FileNotFoundError:
public_ground_truth_path = os.path.join(args.DS_INPUT_DIR, 'public_ground_truth.csv')
public_gt = pd.read_csv( public_ground_truth_path)
# -
# # Datasets
# +
ds_qry_full = FacebookDataset(
samples_id_v=[f'Q{i:05d}' for i in range(50_000)],
do_augmentation=False,
ds_dir=args.DS_DIR,
output_wh=args.OUTPUT_WH,
channel_first=True,
norm_type= args.img_norm_type,
verbose=True,
)
# ds_qry_full.plot_sample(4)
ds_ref_full = FacebookDataset(
samples_id_v=[f'R{i:06d}' for i in range(1_000_000)],
do_augmentation=False,
ds_dir=args.DS_DIR,
output_wh=args.OUTPUT_WH,
channel_first=True,
norm_type=args.img_norm_type,
verbose=True,
)
# ds_ref_full.plot_sample(4)
ds_trn_full = FacebookDataset(
samples_id_v=[f'T{i:06d}' for i in range(1_000_000)],
do_augmentation=False,
ds_dir=args.DS_DIR,
output_wh=args.OUTPUT_WH,
channel_first=True,
norm_type=args.img_norm_type,
verbose=True,
)
# ds_trn_full.plot_sample(4)
dl_qry_full = DataLoader(
ds_qry_full,
batch_size=args.BATCH_SIZE,
num_workers=args.N_WORKERS,
shuffle=False,
)
dl_ref_full = DataLoader(
ds_ref_full,
batch_size=args.BATCH_SIZE,
num_workers=args.N_WORKERS,
shuffle=False,
)
dl_trn_full = DataLoader(
ds_trn_full,
batch_size=args.BATCH_SIZE,
num_workers=args.N_WORKERS,
shuffle=False,
)
# -
# ### Query embeddings
embed_qry_d = calc_embed_d(
model,
dataloader=dl_qry_full,
do_simple_augmentation=do_simple_augmentation
)
# ### Reference embeddings
aug = '_AUG' if do_simple_augmentation else ''
submission_path = ckpt_filename.replace('.ckpt', f'_{args.OUTPUT_WH[0]}x{args.OUTPUT_WH[1]}{aug}_REF.h5')
scores_path = submission_path.replace('.h5', '_match_d.pickle')
# +
embed_ref_d = calc_embed_d(
model,
dataloader=dl_ref_full,
do_simple_augmentation=do_simple_augmentation
)
save_submission(
embed_qry_d,
embed_ref_d,
save_path=submission_path,
)
match_d = calc_match_scores(embed_qry_d, embed_ref_d, k=K)
save_obj(match_d, scores_path)
# -
# ### Public GT validation
eval_d = evaluate(
submission_path=submission_path,
gt_path=public_ground_truth_path,
is_matching=False,
)
# ### Training embeddings
aug = '_AUG' if do_simple_augmentation else ''
submission_path = ckpt_filename.replace('.ckpt', f'_{args.OUTPUT_WH[0]}x{args.OUTPUT_WH[1]}{aug}_TRN.h5')
scores_path = submission_path.replace('.h5', '_match_d.pickle')
# +
embed_trn_d = calc_embed_d(
model,
dataloader=dl_trn_full,
do_simple_augmentation=do_simple_augmentation
)
save_submission(
embed_qry_d,
embed_trn_d,
save_path=submission_path,
)
# -
match_d = calc_match_scores(embed_qry_d, embed_trn_d, k=K)
save_obj(match_d, scores_path)
| phase1_scripts/inference_test9_epoch9.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/jimmyye1/experiment/blob/main/Copy_of_C4W4_Assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="ROm8kovwJIxD"
# # Week 4 Assignment: GANs with Hands
#
#
# For the last programming assignment of this course, you will build a Generative Adversarial Network (GAN) that generates pictures of hands. These will be trained on a dataset of hand images doing sign language.
#
# The model you will build will be very similar to the DCGAN model that you saw in the second ungraded lab of this week. Feel free to review it in case you get stuck with any of the required steps.
# + [markdown] id="m6Oumw5-Jx1w"
# ***Important:*** *This colab notebook has read-only access so you won't be able to save your changes. If you want to save your work periodically, please click `File -> Save a Copy in Drive` to create a copy in your account, then work from there.*
# + [markdown] id="K0OwpFl8JIxP"
# ## Imports
# + id="k3nvoSP3Btzu"
import tensorflow as tf
import tensorflow.keras as keras
import matplotlib.pyplot as plt
import numpy as np
import urllib.request
import zipfile
from IPython import display
# + [markdown] id="Yxy_M7xbQef-"
# ## Utilities
# + id="cg_4z8-glz6P"
def plot_results(images, n_cols=None):
'''visualizes fake images'''
display.clear_output(wait=False)
n_cols = n_cols or len(images)
n_rows = (len(images) - 1) // n_cols + 1
if images.shape[-1] == 1:
images = np.squeeze(images, axis=-1)
plt.figure(figsize=(n_cols, n_rows))
for index, image in enumerate(images):
plt.subplot(n_rows, n_cols, index + 1)
plt.imshow(image, cmap="binary")
plt.axis("off")
# + [markdown] id="2iI8bUNSJIxR"
# ## Get the training data
#
# You will download the dataset and extract it to a directory in your workspace. As mentioned, these are images of human hands performing sign language.
# + id="uIx-60V_BEyo"
# download the dataset
training_url = "https://storage.googleapis.com/laurencemoroney-blog.appspot.com/Resources/signs-training.zip"
training_file_name = "signs-training.zip"
urllib.request.urlretrieve(training_url, training_file_name)
# extract to local directory
training_dir = "/tmp"
zip_ref = zipfile.ZipFile(training_file_name, 'r')
zip_ref.extractall(training_dir)
zip_ref.close()
# + [markdown] id="5iPZmV9RJIxR"
# ## Preprocess the images
#
# Next, you will prepare the dataset to a format suitable for the model. You will read the files, convert it to a tensor of floats, then normalize the pixel values.
# + id="4rf-e4f-d3H7"
BATCH_SIZE = 32
# mapping function for preprocessing the image files
def map_images(file):
'''converts the images to floats and normalizes the pixel values'''
img = tf.io.decode_png(tf.io.read_file(file))
img = tf.dtypes.cast(img, tf.float32)
img = img / 255.0
return img
# create training batches
filename_dataset = tf.data.Dataset.list_files("/tmp/signs-training/*.png")
image_dataset = filename_dataset.map(map_images).batch(BATCH_SIZE)
# + [markdown] id="lz9NfgdTJIxS"
# ## Build the generator
#
# You are free to experiment but here is the recommended architecture:
# - *Dense*: number of units should equal `7 * 7 * 128`, input_shape takes in a list containing the random normal dimensions.
# - `random_normal_dimensions` is a hyperparameter that defines how many random numbers in a vector you'll want to feed into the generator as a starting point for generating images.
# - *Reshape*: reshape the vector to a 7 x 7 x 128 tensor.
# - *BatchNormalization*
# - *Conv2DTranspose*: takes `64` units, kernel size is `5`, strides is `2`, padding is `SAME`, activation is `selu`.
# - *BatchNormalization*
# - *Conv2DTranspose*: `1` unit, kernel size is `5`, strides is `2`, padding is `SAME`, and activation is `tanh`.
# + id="uagZDaF0CZON"
# You'll pass the random_normal_dimensions to the first dense layer of the generator
random_normal_dimensions = 32
### START CODE HERE ###
generator = keras.models.Sequential([
keras.layers.Dense(7 * 7 * 128, input_shape=[random_normal_dimensions]),
keras.layers.Reshape([7, 7, 128]),
keras.layers.BatchNormalization(),
keras.layers.Conv2DTranspose(64, kernel_size=5, strides=2, padding="SAME",
activation="selu"),
keras.layers.BatchNormalization(),
keras.layers.Conv2DTranspose(1, kernel_size=5, strides=2, padding="SAME",
activation="tanh"),
])
### END CODE HERE ###
# + [markdown] id="8_lAy0bjJIxS"
# ## Build the discriminator
#
# Here is the recommended architecture for the discriminator:
# - *Conv2D*: 64 units, kernel size of 5, strides of 2, padding is SAME, activation is a leaky relu with alpha of 0.2, input shape is 28 x 28 x 1
# - *Dropout*: rate is 0.4 (fraction of input units to drop)
# - *Conv2D*: 128 units, kernel size of 5, strides of 2, padding is SAME, activation is LeakyRelu with alpha of 0.2
# - *Dropout*: rate is 0.4.
# - *Flatten*
# - *Dense*: with 1 unit and a sigmoid activation
# + id="siCh-qRtJIxT"
### START CODE HERE ###
discriminator = keras.models.Sequential([
keras.layers.Conv2D(64, kernel_size=5, strides=2, padding="SAME",
activation=keras.layers.LeakyReLU(0.2),
input_shape=[28, 28, 1]),
keras.layers.Dropout(0.4),
keras.layers.Conv2D(128, kernel_size=5, strides=2, padding="SAME",
activation=keras.layers.LeakyReLU(0.2)),
keras.layers.Dropout(0.4),
keras.layers.Flatten(),
keras.layers.Dense(1, activation="sigmoid")
])
### END CODE HERE ###
# + [markdown] id="EKlTL1lhJIxT"
# ## Compile the discriminator
#
# - Compile the discriminator with a binary_crossentropy loss and rmsprop optimizer.
# - Set the discriminator to not train on its weights (set its "trainable" field).
# + id="xh4EaHDlJIxT"
### START CODE HERE ###
discriminator.compile(loss="binary_crossentropy", optimizer="rmsprop")
discriminator.trainable = False
### END CODE HERE ###
# + [markdown] id="3X25T2kUJIxT"
# ## Build and compile the GAN model
#
# - Build the sequential model for the GAN, passing a list containing the generator and discriminator.
# - Compile the model with a binary cross entropy loss and rmsprop optimizer.
# + id="SBclsOMsJIxU"
### START CODE HERE ###
gan = keras.models.Sequential([generator, discriminator])
gan.compile(loss="binary_crossentropy", optimizer="rmsprop")
### END CODE HERE ###
# + [markdown] id="zX2CB0srJIxU"
# ## Train the GAN
#
# Phase 1
# - real_batch_size: Get the batch size of the input batch (it's the zero-th dimension of the tensor)
# - noise: Generate the noise using `tf.random.normal`. The shape is batch size x random_normal_dimension
# - fake images: Use the generator that you just created. Pass in the noise and produce fake images.
# - mixed_images: concatenate the fake images with the real images.
# - Set the axis to 0.
# - discriminator_labels: Set to `0.` for fake images and `1.` for real images.
# - Set the discriminator as trainable.
# - Use the discriminator's `train_on_batch()` method to train on the mixed images and the discriminator labels.
#
#
# Phase 2
# - noise: generate random normal values with dimensions batch_size x random_normal_dimensions
# - Use `real_batch_size`.
# - Generator_labels: Set to `1.` to mark the fake images as real
# - The generator will generate fake images that are labeled as real images and attempt to fool the discriminator.
# - Set the discriminator to NOT be trainable.
# - Train the GAN on the noise and the generator labels.
# + id="AuV97d_kCpb_"
def train_gan(gan, dataset, random_normal_dimensions, n_epochs=50):
""" Defines the two-phase training loop of the GAN
Args:
gan -- the GAN model which has the generator and discriminator
dataset -- the training set of real images
random_normal_dimensions -- dimensionality of the input to the generator
n_epochs -- number of epochs
"""
# get the two sub networks from the GAN model
generator, discriminator = gan.layers
for epoch in range(n_epochs):
print("Epoch {}/{}".format(epoch + 1, n_epochs))
for real_images in dataset:
### START CODE HERE ###
# infer batch size from the current batch of real images
batch_size = real_images.shape[0]
# Train the discriminator - PHASE 1
# create the noise
noise = tf.random.normal(shape=[batch_size, random_normal_dimensions])
# use the noise to generate fake images
fake_images = generator(noise)
# create a list by concatenating the fake images with the real ones
mixed_images = tf.concat([fake_images, real_images], axis=0)
# Create the labels for the discriminator
# 0 for the fake images
# 1 for the real images
discriminator_labels = tf.constant([[0.]] * batch_size + [[1.]] * batch_size)
# ensure that the discriminator is trainable
discriminator.trainable = True
# use train_on_batch to train the discriminator with the mixed images and the discriminator labels
discriminator.train_on_batch(mixed_images, discriminator_labels)
# Train the generator - PHASE 2
# create a batch of noise input to feed to the GAN
noise = tf.random.normal(shape=[batch_size, random_normal_dimensions])
# label all generated images to be "real"
generator_labels = tf.constant([[1.]] * batch_size)
# freeze the discriminator
discriminator.trainable = False
# train the GAN on the noise with the labels all set to be true
gan.train_on_batch(noise, generator_labels)
### END CODE HERE ###
plot_results(fake_images, 16)
plt.show()
return fake_images
# + [markdown] id="OzbX3hwKJIxW"
# ### Run the training
#
# For each epoch, a set of 31 images will be displayed onscreen. The longer you train, the better your output fake images will be. You will pick your best images to submit to the grader.
# + id="wYx9rzdACt0A"
# you can adjust the number of epochs
EPOCHS = 60
# run the training loop and collect images
fake_images = train_gan(gan, image_dataset, random_normal_dimensions, EPOCHS)
# + [markdown] id="uIAih3a1JIxX"
# ## Choose your best images to submit for grading!
#
# Please visually inspect your 31 generated hand images. They are indexed from 0 to 30, from left to right on the first row on top, and then continuing from left to right on the second row below it.
#
# - Choose 16 images that you think look most like actual hands.
# - Use the `append_to_grading_images()` function, pass in `fake_images` and a list of the indices for the 16 images that you choose to submit for grading (e.g. `append_to_grading_images(fake_images, [1, 4, 5, 6, 8... until you have 16 elements])`).
# + id="4Qcxe1RK-piF"
# helper function to collect the images
def append_to_grading_images(images, indexes):
l = []
for index in indexes:
if len(l) >= 16:
print("The list is full")
break
l.append(tf.squeeze(images[index:(index+1),...], axis=0))
l = tf.convert_to_tensor(l)
return l
# + [markdown] id="RFg-wvIcS-Jv"
# Please fill in the empty list (2nd parameter) with 16 indices indicating the images you want to submit to the grader.
# + id="InUSbfGI-0vk"
ls = [0, 4, 5, 8, 10, 11, 12, 15, 16, 18, 20, 24, 25, 27, 29, 30]  # 16 indices, each in the valid range 0-30
grading_images = append_to_grading_images(fake_images, ls)
print(grading_images.shape)
# + [markdown] id="BsTurLWKJIxY"
# ## Zip your selected images for grading
#
# Please run the code below. This will save the images you chose to a zip file named `my-signs.zip`.
#
# - Please download this file from the Files explorer on the left.
# - Please return to the Coursera classroom and upload the zip file for grading.
# + id="vL8W2OGBqFL_"
from PIL import Image
from zipfile import ZipFile
denormalized_images = grading_images * 255
denormalized_images = tf.dtypes.cast(denormalized_images, dtype = tf.uint8)
file_paths = []
for this_image in range(0,16):
i = tf.reshape(denormalized_images[this_image], [28,28])
im = Image.fromarray(i.numpy())
im = im.convert("L")
filename = "hand" + str(this_image) + ".png"
file_paths.append(filename)
im.save(filename)
with ZipFile('my-signs.zip', 'w') as zip:
for file in file_paths:
zip.write(file)
# + [markdown] id="Yp7jYkyXZsM9"
# **Congratulations on completing the final assignment of this course!**
| Copy_of_C4W4_Assignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# ## Python Movie Recommendation System
import pandas as pd
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.neighbors import NearestNeighbors
import matplotlib.pyplot as plt
import seaborn as sns
movies = pd.read_csv("C:\\Users\\black\\Desktop\\ml_py\\datasets\\ml-latest-small\\movies.csv")
ratings = pd.read_csv("C:\\Users\\black\\Desktop\\ml_py\\datasets\\ml-latest-small\\ratings.csv")
# ## Getting Overview of Data
movies.head()
ratings.head()
# ## Pivoting Data
final_dataset = ratings.pivot(index='movieId',columns='userId',values='rating')
final_dataset.head()
final_dataset.fillna(0,inplace=True)
final_dataset.head()
# ## Preparing Final Data
no_user_voted = ratings.groupby('movieId')['rating'].agg('count')
no_movies_voted = ratings.groupby('userId')['rating'].agg('count')
f,ax = plt.subplots(1,1,figsize=(16,4))
# ratings['rating'].plot(kind='hist')
plt.scatter(no_user_voted.index,no_user_voted,color='mediumseagreen')
plt.axhline(y=10,color='r')
plt.xlabel('MovieId')
plt.ylabel('No. of users voted')
plt.show()
final_dataset = final_dataset.loc[no_user_voted[no_user_voted > 10].index,:]
f,ax = plt.subplots(1,1,figsize=(16,4))
plt.scatter(no_movies_voted.index,no_movies_voted,color='mediumseagreen')
plt.axhline(y=50,color='r')
plt.xlabel('UserId')
plt.ylabel('No. of votes by user')
plt.show()
final_dataset=final_dataset.loc[:,no_movies_voted[no_movies_voted > 50].index]
final_dataset
# ## Removing Sparsity
sample = np.array([[0,0,3,0,0],[4,0,0,0,2],[0,0,0,0,1]])
sparsity = 1.0 - ( np.count_nonzero(sample) / float(sample.size) )
print(sparsity)
csr_sample = csr_matrix(sample)
print(csr_sample)
csr_data = csr_matrix(final_dataset.values)
final_dataset.reset_index(inplace=True)
# ## Making the movie recommendation system model
knn = NearestNeighbors(metric='cosine', algorithm='brute', n_neighbors=20, n_jobs=-1)
knn.fit(csr_data)
def get_movie_recommendation(movie_name):
n_movies_to_reccomend = 10
movie_list = movies[movies['title'].str.contains(movie_name)]
if len(movie_list):
movie_idx= movie_list.iloc[0]['movieId']
movie_idx = final_dataset[final_dataset['movieId'] == movie_idx].index[0]
distances , indices = knn.kneighbors(csr_data[movie_idx],n_neighbors=n_movies_to_reccomend+1)
rec_movie_indices = sorted(list(zip(indices.squeeze().tolist(),distances.squeeze().tolist())),key=lambda x: x[1])[:0:-1]
recommend_frame = []
for val in rec_movie_indices:
movie_idx = final_dataset.iloc[val[0]]['movieId']
idx = movies[movies['movieId'] == movie_idx].index
recommend_frame.append({'Title':movies.iloc[idx]['title'].values[0],'Distance':val[1]})
df = pd.DataFrame(recommend_frame,index=range(1,n_movies_to_reccomend+1))
return df
else:
return "No movies found. Please check your input"
# ## Recommending movies
get_movie_recommendation('Iron Man')
get_movie_recommendation('Memento')
| Projects/project-4-python-movie-recommendation-system.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/darenjabrica/OOP-58001/blob/main/Midterm_Num1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="sYenTCgtc8c2" outputId="82db0a0f-8070-4f97-97dd-3a366f65848d"
def main():
class TemperatureConversion:
def __init__(self, temp=1):
self._temp = temp
class CelsiusToFahrenheit(TemperatureConversion):
def conversion(self):
return (self._temp * 9) / 5 + 32
class CelsiusToKelvin(TemperatureConversion):
def conversion(self):
return self._temp + 273.15
tempInCelsius = float(input("Enter the temperature in Celsius: "))
convert = CelsiusToKelvin(tempInCelsius)
print(str(convert.conversion()) + " Kelvin")
convert = CelsiusToFahrenheit(tempInCelsius)
print(str(convert.conversion()) + " Fahrenheit")
main()
#Fahrenheit to Celsius and Kelvin to Celsius
def main():
class TemperatureConversion:
def __init__(self, temp=1):
self._temp = temp
class FahrenheitToCelsius(TemperatureConversion):
def conversion(self):
return (self._temp - 32) * 5 / 9
class KelvinToCelsius(TemperatureConversion):
def conversion(self):
return self._temp - 273.15
tempInFahrenheit = float(input("Enter the temperature in Fahrenheit: "))
convert = FahrenheitToCelsius(tempInFahrenheit)
print(str(convert.conversion()) + " Celsius")
tempInKelvin = float(input("Enter the temperature in Kelvin: "))
convert = KelvinToCelsius(tempInKelvin)
print(str(convert.conversion()) + " Celsius")
main()
| Midterm_Num1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from sklearn.preprocessing import RobustScaler, StandardScaler, MinMaxScaler
import pandas as pd
plt.rc('font', size=16) # controls default text sizes
plt.rc('axes', titlesize=16) # fontsize of the axes title
plt.rc('axes', labelsize=18) # fontsize of the x and y labels
plt.rc('xtick', labelsize=16) # fontsize of the tick labels
plt.rc('ytick', labelsize=16) # fontsize of the tick labels
plt.rc('legend', fontsize=18) # legend fontsize
plt.rc('figure', titlesize=18) # fontsize of the figure title
# +
def convert1d2d(arr):
# convert (m, ) to (m, 1), i.e. 1d to 2d
return np.reshape(arr, (-1, 1))
def squaredErrorCost(mat, y, theta):
    m = len(y)
    return 1 / (2*m) * np.linalg.norm( np.subtract( np.dot(mat, theta), y ) )**2
def gradientDescent(design_mat, y, theta, alpha=0.01, max_iter=10000):
# design_mat [m, n]: design matrix [1 x]
# y [m, 1]: m-dimensional target vector
# theta [n, 1]: n-dimensional vector, initialized with guess for parameter
# alpha: learning rate (positive!)
m = len(y)
for i in range(max_iter):
theta -= (alpha / m) * np.dot( design_mat.T, (np.subtract( np.dot(design_mat, theta), y )) )
return theta
def gradientDescentNotVectorized(mat, y, theta, alpha=0.01, max_iter=10000):
length = len(y)
for j in range(max_iter):
update_0 = 0
update_1 = 0
for i in range(length):
error = theta[0] + theta[1] * mat[i, 1] - y[i]
update_0 += error
update_1 += error * mat[i, 1] # inner derivate
theta[0] -= (alpha / length) * update_0
theta[1] -= (alpha / length) * update_1
return theta
def gradientDescentTol(mat, y, theta, alpha=0.001, max_iter=100000, tol=0.0001):
m = len(y)
J_history = []
J_history.append(squaredErrorCost(mat, y, theta))
for i in range(max_iter):
        theta -= (alpha / m) * np.dot( mat.T, (np.subtract( np.dot(mat, theta), y )) )
J_history.append(squaredErrorCost(mat, y, theta))
if abs(J_history[i] - J_history[i+1]) < tol:
break
return theta
def solveNormalEquations(mat, y):
# inv(mat.T * mat) * (mat.T * y)
return np.dot( np.linalg.inv( np.dot(mat.T, mat) ), (np.dot(mat.T, y)) )
# -
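`solveNormalEquations` above is the closed-form least-squares solution: setting the gradient of the cost $J(\theta) = \frac{1}{2m}\lVert X\theta - y\rVert^2$ to zero gives

```latex
\nabla_\theta J = \frac{1}{m} X^T (X\theta - y) = 0
\quad\Longrightarrow\quad
\theta = (X^T X)^{-1} X^T y
```

which is exactly `np.dot(np.linalg.inv(np.dot(mat.T, mat)), np.dot(mat.T, y))`, valid whenever $X^T X$ is invertible.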
# # linear regression with a single variable
# loading and transforming data
df1 = pd.read_csv("example_1_data_1.txt")
arr = df1.to_numpy()
x = convert1d2d(arr[:,0])
y = convert1d2d(arr[:,1])
# +
min_x, max_x = 4, 23
theta1, theta0, r_value, p_value, std_err = stats.linregress(arr[:,0], arr[:,1]) # slope, intercept, correlation coefficient r
vals = np.linspace(min_x, max_x, 100)
f = plt.figure(figsize=(20,10))
plt.plot(x, y, color="r", marker="o", markersize="10", ls="none")
plt.plot(vals, theta0 + theta1*vals, color="b", markersize="0", ls="-", label=r"$R^2 = {:.2F}\%$".format(r_value**2 * 100))
plt.xlabel("inhabitants in 10000")
plt.ylabel("profits in $10000")
plt.xlim(min_x, max_x)
plt.legend(loc="best")
plt.show()
# +
n_points = len(y) # number of data points
theta = np.zeros((2, 1)) # init column vector of parameters
ones = np.ones((n_points)) # helping array of shape (n_points, )
design_mat = np.c_[ones, x] # concatenate two vectors to matrix
theta2 = gradientDescent(design_mat, y, theta)
theta = np.zeros((2, 1))
theta4 = gradientDescentTol(design_mat, y, theta, tol=0.000001)
theta = np.zeros((2, 1))
theta3 = solveNormalEquations(design_mat, y)
theta = np.zeros((2, 1))
print("linreg from scipy.stats\t h(theta) = {:.10F} + {:.10F} x".format(theta1, theta0))
print("normal equations\t h(theta) = {:.10F} + {:.10F} x".format(theta3[1,0], theta3[0,0]))
print("gradient descent\t h(theta) = {:.10F} + {:.10F} x".format(theta2[1,0], theta2[0,0]))
print("gradient descent tol\t h(theta) = {:.10F} + {:.10F} x".format(theta4[1,0], theta4[0,0]))
# -
# # linear regression with several variables
# The Min-Max Scaler is defined as (x<sub>i</sub> – min(x)) / (max(x) – min(x)). Because it relies on the *min* and *max* values, it is very sensitive to outliers.<br>
# The Standard Scaler is defined as (x<sub>i</sub> – mean(x)) / stdev(x), which causes problems for data that is not normally distributed.<br>
# The Robust Scaler uses statistics that are robust to outliers: (x<sub>i</sub> – median(x)) / (Q<sub>3</sub>(x) – Q<sub>1</sub>(x))
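A small illustration of the three formulas above on toy data with a single outlier (values chosen here for demonstration, not from the notebook's dataset):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler, RobustScaler

# the outlier (100) dominates min/max and the mean/std,
# but barely moves the median and quartiles
x = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])

for scaler in (MinMaxScaler(), StandardScaler(), RobustScaler()):
    scaled = scaler.fit_transform(x)
    print(type(scaler).__name__, np.round(scaled.ravel(), 2))
```

With MinMaxScaler the four inliers get squashed into [0, 0.03]; RobustScaler keeps them spread out because only the median and quartiles enter the formula.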
# loading data and converting to arrays
df2 = pd.read_csv("example_1_data_2.txt")
arr = df2.to_numpy()
X = arr[:,:2]
x1 = convert1d2d(arr[:,0])
x2 = convert1d2d(arr[:,1])
y = convert1d2d(arr[:,2])
# testing for outliers
fig, axs = plt.subplots(1, 2, figsize=(20, 5))
axs[0].boxplot(x1)
axs[1].boxplot(x2)
plt.show()
# testing for normal distribution (despite outliers)
w1, p1 = stats.shapiro(x1)
w2, p2 = stats.shapiro(x2)
print("Shapiro-Wilk normality tests:\n x1: p = {:.5F}\n x2: p = {:.5F}".format(p1,p2))
# scaling; different scalers are possible for different features, but obviously not for polynomial terms of the same feature
scaler = RobustScaler()
x1 = scaler.fit_transform(x1)
scaler = MinMaxScaler()
x2 = scaler.fit_transform(x2)
# +
n_points = len(y) # number of data points
theta = np.zeros((3, 1)) # init column vector of parameters
ones = np.ones((n_points)) # helping array of shape (n_points, )
design_mat = np.c_[ones, x1, x2] # concatenate two vectors to matrix
theta1 = gradientDescent(design_mat, y, theta, 0.001, 100000)
theta = np.zeros((3, 1))
theta2 = solveNormalEquations(design_mat, y)
theta3, res, rank, s = np.linalg.lstsq(design_mat, y, rcond=None) # lstsq solution, residuals, rank, singular values
print("Gradient descent:\t h(theta) = {:.5F} + {:.5F} x1 + {:.5F} x2".format(theta1[0,0], theta1[1,0], theta1[2,0]))
print("Normal equations:\t h(theta) = {:.5F} + {:.5F} x1 + {:.5F} x2".format(theta2[0,0], theta2[1,0], theta2[2,0]))
print("Backslash:\t\t h(theta) = {:.5F} + {:.5F} x1 + {:.5F} x2".format(theta3[0,0], theta3[1,0], theta3[2,0]))
# -
# # polynomial regression on "Filip data set"
# (without scaling)
# loading data and converting to arrays; src: https://www.itl.nist.gov/div898/strd/lls/data/LINKS/DATA/Filip.dat
df3 = pd.read_csv("example_1_filip.txt", delimiter=",")
arr = df3.to_numpy()
y = convert1d2d(arr[:,0])
x = convert1d2d(arr[:,1])
exact_sol = np.array([-1467.48961422980, -2772.17959193342, -2316.37108160893, -1127.97394098372, -354.478233703349, -75.1242017393757, -10.8753180355343, -1.06221498588947, -0.670191154593408E-01, -0.246781078275479E-02, -0.402962525080404E-04])
# +
dim_par, dim_points = 11, len(y) # number of fitting parameters and data points
theta = np.zeros((dim_par, 1)) # init column vector of parameters
ones = np.ones((dim_points)) # helping array of shape (n_points, )
design_mat = np.c_[ones, x] # concatenate two vectors to matrix
for i in range(2, 11):
design_mat = np.c_[design_mat, x**i]
# creating design matrix easily by using vandermonde matrix (cannot use x because x is 2d: [82, 1]);
# reverse column order with np.flip
vander = np.vander(arr[:,1], dim_par)
vander = np.flip(vander, 1)
# show that both methods are equal
print("Vandermonde Matrix is the same as manually created Matrix?", np.allclose(vander, design_mat))
# compute the condition number showing that this problem is ill-conditioned
u, s, v = np.linalg.svd(vander, full_matrices=True)
cond = max(s) / min(s)
print("Condition number: {:.2E}".format(cond))
# solving with different methods
#theta1 = gradientDescent(design_mat, y, theta) # fails
#theta2 = gradientDescentTol(design_mat, y, theta, alpha=0.00001, max_iter=100000, tol=1E-8) # fails
theta3 = solveNormalEquations(design_mat, y)
theta4, res, rank, s = np.linalg.lstsq(vander, y, rcond=None) # lstsq solution, residuals, rank, singular values
theta5, res, rank, s = np.linalg.lstsq(vander, y, rcond=1E-16) # lstsq solution, residuals, rank, singular values
theta6 = np.linalg.pinv(design_mat).dot(y)
d = {'NIST': exact_sol, 'Normal Equations': theta3[:,0], 'pinv': theta6[:,0], 'Backslash (rcond=None)': theta4[:,0], 'Backslash (rcond=1E-16)': theta5[:,0]}
pd.DataFrame(data=d)
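# A small illustration (not from the notebook itself) of why this fit is so delicate: the condition number of an unscaled Vandermonde matrix grows explosively with the polynomial degree, which is what defeats the normal equations here:

```python
import numpy as np

x = np.linspace(-9, -3, 82)  # roughly the Filip x-range
for deg in (3, 6, 11):
    V = np.flip(np.vander(x, deg), 1)
    s = np.linalg.svd(V, compute_uv=False)
    # condition number = largest singular value / smallest singular value
    print(deg, "{:.2E}".format(s.max() / s.min()))
```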
# +
min_x, max_x = -9, -3
fig = plt.figure(figsize=(20, 8))
x_vals = np.linspace(min_x, max_x, 100)
y_vals = np.zeros((100))
for i in range(dim_par):
y_vals += theta5[i, 0] * x_vals**i # y = theta0 * x^0 + theta1 * x + theta2 * x^2 + ...
plt.plot(x, y, marker="o", markersize="5", ls="none", label="data points")
plt.plot(x_vals, y_vals, ls="--", label="$10^{\mathrm{th}}$ order fit")
plt.xlim(min_x, max_x)
plt.legend(loc="best", fancybox=True, shadow=True)
plt.show()
# -
# # polynomial regression on "Filip data set"
# (with feature scaling)
# testing for outliers
fig = plt.figure(figsize=(20, 5))
plt.boxplot(x)
plt.show()
# testing for normal distribution (despite outliers)
w, p = stats.shapiro(x)
print("Shapiro-Wilk normality tests:\n x: p = {:.5F}".format(p))
# scaling with StandardScaler since p > 0.05
scaler = StandardScaler()
x = scaler.fit_transform(x)
# +
dim_par, dim_points = 11, len(y) # number of fitting parameters and data points
theta = np.zeros((dim_par, 1)) # init column vector of parameters
# creating design matrix easily by using vandermonde matrix (cannot use x because x is 2d: [82, 1]);
# reverse column order with np.flip
vander = np.vander(x.flatten(), dim_par)
vander = np.flip(vander, 1)
# compute the condition number showing that this problem is ill-conditioned
u, s, v = np.linalg.svd(vander, full_matrices=True)
cond = max(s) / min(s)
print("Condition number: {:.2E}".format(cond))
# solving with different methods
#theta1 = gradientDescent(design_mat, y, theta) # fails
#theta2 = gradientDescentTol(design_mat, y, theta, alpha=0.00001, max_iter=100000, tol=1E-8) # fails
theta3 = solveNormalEquations(vander, y)
theta4, res, rank, s = np.linalg.lstsq(vander, y, rcond=None) # lstsq solution, residuals, rank, singular values
theta5, res, rank, s = np.linalg.lstsq(vander, y, rcond=1E-16) # lstsq solution, residuals, rank, singular values
theta6 = np.linalg.pinv(vander).dot(y)
print("\n(Values are not scaled back.)")
d = {'NIST': exact_sol, 'Normal Equations': theta3[:,0], 'pinv': theta6[:,0], 'Backslash (rcond=None)': theta4[:,0], 'Backslash (rcond=1E-16)': theta5[:,0]}
pd.DataFrame(data=d)
| math/regression-linear.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Sunday, March 22, 2020
# ### HackerRank - Hash Table : Ransom Note
# ### Problem: https://www.hackerrank.com/challenges/ctci-ransom-note/problem?h_l=interview&playlist_slugs%5B%5D=interview-preparation-kit&playlist_slugs%5B%5D=dictionaries-hashmaps
# ### Blog: https://somjang.tistory.com/entry/HackerRank-Hash-Tables-Ransom-Note-Python
# ### First attempt
# +
# #!/bin/python3
import math
import os
import random
import re
import sys
# Complete the checkMagazine function below.
def checkMagazine(magazine, note):
answer = 'Yes'
magazine_dic = {}
for i in range(len(magazine)):
if magazine[i] not in magazine_dic.keys():
magazine_dic[magazine[i]] = 1
else:
magazine_dic[magazine[i]] = magazine_dic[magazine[i]] + 1
# print(magazine_dic)
for i in range(len(note)):
if note[i] not in magazine_dic.keys():
answer = 'No'
break
else:
magazine_dic[note[i]] = magazine_dic[note[i]] - 1
if magazine_dic[note[i]] < 0:
answer = "No"
break
return answer
if __name__ == '__main__':
mn = input().split()
m = int(mn[0])
n = int(mn[1])
magazine = input().rstrip().split()
note = input().rstrip().split()
print(checkMagazine(magazine, note))
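# The same counting logic can be written more compactly with `collections.Counter`, whose subtraction keeps only positive counts; a sketch of that alternative (not part of the original submission):

```python
from collections import Counter

def check_magazine(magazine, note):
    # the note can be formed iff no word is needed more times
    # than it appears in the magazine
    missing = Counter(note) - Counter(magazine)
    return "No" if missing else "Yes"

print(check_magazine("give me one grand today night".split(),
                     "give one grand today".split()))  # -> Yes
```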
| DAY 001 ~ 100/DAY045_[HackerRank] Hash Tables Ransom Note (Python).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # The Prevalence of Concussion in Amateur Irish Rugby Players
# 
# ## Introduction
# I have decided to base my assignment on a collection of data I gathered for my thesis as part of my BSc. in Physiotherapy in 2012. For my thesis I investigated the overall prevalence of concussion in Irish amateur rugby players. On completion of my research, my findings were presented at several sports medicine conferences and also featured in a national newspaper at the time. Concussion is defined as a “complex pathophysiological process affecting the brain, induced by traumatic biomechanical forces”(3). There are over 150,000 registered rugby players in Ireland and, despite the popularity of the game, the physical nature of rugby can lead to many injuries, such as concussion(5). The mismanagement of repeated concussion can lead to very serious long-term effects such as amnesia and brain damage(6). Ireland has also experienced increased exposure to the long-term effects of concussion following the premature retirement of international rugby players due to the mismanagement of multiple concussions(7).
#
#
#
# Concern regarding concussion injury in rugby union has grown, due to its potentially dangerous long- term effects on players, but the prevalence is not known in Ireland(3). There are strict ‘return to play’ guidelines after concussion(4), however, it is unclear how compliant players are in regards to these regulations in Ireland. Previous studies of New Zealand rugby players have found the career prevalence of concussion to be as high as 60%(1). Hollis et al. in 2009 found that a player who received one concussion was twice as likely to suffer another later in the season (2). At the time of my study there was no published data on Irish rugby players.
#
#
#
# The original study consisted of 114 amateur players who played with various Leinster junior rugby clubs. Each player filled out a questionnaire I designed to investigate how common concussion was, its symptoms, and how much each player knew about concussion. Below is a copy of the first 2 pages of the questionnaire that I created to collect the data.
#
# 
# 
#
#
#
# Almost 33% (37/114) of players suffered a diagnosed concussion during their rugby-playing career (95% C.I. 23.4%-40.6%). Headache was the most common post-concussion symptom, present in 86% (32/37) of those with diagnosed concussion; dizziness ranked second with 24% and nausea third with 15%. The mean age of the 114 respondents was 25 years. The mean duration playing rugby was 13 years. Players trained 2 hours (median) a week and participated in 14 (median) matches a year. 46% of concussions occurred in forwards and 54% in backs. 75% of players felt that concussion was a danger to player welfare; however, 57% would play in an important game while suffering from concussion symptoms.
#
# Below is the graph depicting the post-concussion symptoms players reported in the original study.
#
# 
#
#
# ## Variables and their relationships
# I intend to use a few of the main points from the original dataset for this project. I am also going to try to find a hypothetical link between the number of games a player plays after a concussion and the likelihood of suffering a repeat injury. I found from my original dataset that 92% of concussions happen during a game. The most worrying statistic was that when a player suffered a concussion he generally had, on average, another 2 concussions after that original event. The 37 players who suffered a concussion in my original study accounted for 85 concussions between them, an average of 2.29 concussions per player. Nathonson et al in 2013 found the season prevalence of concussion in professional American Football players to be extremely high(8). They found that in 480 games there were 292 concussions, or 0.61 concussions per game. Applying this ratio to Irish amateur rugby, in theory the more games a player plays, the greater the risk of concussion, and this is the hypothetical link I will try to find.
#
# I have decided to mock up the data of 100 players who suffered a concussion using the four variables of age, number of games played in a season, number of concussions and most common post-concussion symptom.
#
# I will use the first variable, a non-zero integer (Age), with a normal distribution between 20-30 years. I have used the normal distribution for this variable as the mean age of players was 25, with a range from 20-30, which should fit nicely into this distribution.
#
# My second variable will be a non-zero integer (Games) and I will use the Gamma distribution between 10-20 games. My third variable will be a non-zero integer (Concussion), again with a Gamma distribution between 1-3. I have decided to use the Gamma distribution for both my second and third variables to try to create a graph indicating a relationship between the two variables. Ideally the graph will represent a linear relationship: the more games players play post-concussion, the more injuries are likely to occur. In the original study, players who suffered a concussion generally suffered on average two more.
# My last variable will be (Symptoms), which will be selected using a normal distribution to divide up headache, dizziness and nausea. The first three variables will be non-zero integers and the last variable will be a categorical variable with three different values. I have used the normal distribution for this variable as generally each player would suffer from a number of different symptoms, with these three being the most prevalent.
#
# I will use this notebook to develop an algorithm to discuss my hypothetical link between an increased number of games post concussion leading to a heightened risk of suffering another concussion. I will generate some data using the numpy.random package and pandas and seaborn packages to analyse the data.
# ## Generate the Concussion Data and Data Analysis
# ### 1. Age
# +
# Age variable
# Import the libraries I will use to evaluate the age of the 100 players who suffered a concussion.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#Make sure the graph is printed below
# %matplotlib inline
# Using randint from the numpy.random package, variables based on my previous study
age = (np.random.randint(20,30,100,))
print (age)
# Created a database of just age
age = pd.Series(np.random.randint(20,30,100,))
# Format the histogram
age.plot.hist(grid=True, bins=10, rwidth=0.5,
color='#607c8e')
plt.title('Average Age of 100 Players with Concussion')
plt.xlabel('Years of age')
plt.ylabel('Number of Players')
plt.grid(axis='y', alpha=0.75)
# https://realpython.com/python-histograms/
# -
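# The description above calls for a normal distribution, but `np.random.randint` samples uniformly; a sketch of drawing normally distributed ages instead (the scale of 2 years is an assumption):

```python
import numpy as np

rng = np.random.default_rng(42)
# normally distributed ages centred on 25 and clipped to the stated 20-30 range
ages = np.clip(rng.normal(loc=25, scale=2, size=100), 20, 30).round().astype(int)
print(ages.mean())
```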
# ### 2. Number of Concussion Per Player
# +
#Finding the average number of concussion per 100 Players
# Importing the necessary packages
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Make sure that the graph is printed below
# %matplotlib inline
# Created the variable, using the randint package from the numpy.random package.
# Values ranging to find the average number of concussions
concussion = np.random.randint(0,4,100,)
# print out the values
print (concussion)
# Created a database to develop and investigate the variables
concussion = pd.Series(np.random.randint(0,4,100,))
# Format the histogram
concussion.plot.hist(grid=True, bins=10, rwidth=1,
color='#607c8e')
plt.title('Number of Concussions per 100 Players')
plt.xlabel('Number of concussions')
plt.ylabel('Number of Players')
plt.grid(axis='y', alpha=.2)
# -
# ### 3. Symptoms of Concussion
# +
#Symptoms of Concussion
#Import the packages required.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Create the variable
symptoms = ['Headache', 'Dizziness', 'Nausea']
# Create the dataframe to use for analysis, using random.choice from the numpy.random library
s = pd.Series(np.random.choice(symptoms, size=100))
print (s)
# %matplotlib inline
# Generating the variables for the histogram table
symptoms = ['Dizziness', 'Headaches', 'Nausea']
# Took the values generated from the above dataframe
values =[40,35,25]
plt.figure(1, figsize=(9, 3))
# 3 different types of plots to show the variables
plt.subplot(131)
plt.bar(symptoms, values)
plt.subplot(132)
plt.scatter(symptoms, values)
plt.subplot(133)
plt.plot(symptoms, values)
plt.suptitle('Symptoms of Concussion')
plt.show()
#https://matplotlib.org/tutorials/introductory/pyplot.html#sphx-glr-tutorials-introductory-pyplot-py
# -
# ### 4. Number of games per Player
# +
# Generating data for average number of games played in a season
import matplotlib.pyplot as plt
import scipy.special as sps
import numpy as np
shape, scale = 14, 1. # mean = shape*scale = 14, std = sqrt(shape)*scale ~ 3.7
s = np.random.gamma(shape, scale,100)
count, bins, ignored = plt.hist(s, 15, density=True)
y = bins**(shape-1)*(np.exp(-bins/scale) / (sps.gamma(shape)*scale**shape))
plt.plot(bins, y, linewidth=2, color='r')
plt.xlabel('Number of games')
plt.ylabel('Number of players')
plt.show()
#https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.gamma.html#numpy.random.gamma
# -
# ### 4B. Attempted to create a link between games and concussion
# +
import numpy as np
import pandas as pd
# Make sure that the graph is printed below
# %matplotlib inline
# Created the variable, using the randint package from the numpy.random package.
# Values ranging to find the average number of concussions
shape, scale = 2, .5 # mean = shape*scale = 1, std = sqrt(shape)*scale ~ 0.71
c = np.random.gamma(shape, scale, 100)
print (c)
import matplotlib.pyplot as plt
import scipy.special as sps
count, bins, ignored = plt.hist(c, 4, density=True)
y = bins**(shape-1)*(np.exp(-bins/scale) /(sps.gamma(shape)*scale**shape))
plt.plot(bins, y, linewidth=1, color='r')
plt.xlabel('Number of concussions')
plt.ylabel('Number of players')
plt.show()
#https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.gamma.html#numpy.random.gamma
# -
# ### 5. Creating the Dataframe with all the Variables
# +
# I tried to create a dataframe with all of the variables grouped together.
# Unfortunately I was unable to create it successfully and was not able to access the data needed to do some data analysis on it.
# I left this in to show my efforts at creating the dataframe and why I had to use individual sections for the data analysis.
import pandas as pd
import numpy as np
data = {'Age' :[np.random.randint(20,30,100,)], 'Concussion' :[np.random.randint(1,3,100,)], 'Games' :[np.random.randint(10,20,100,)]}
df = pd.Series(data,index=['player'])
df= pd.Series(data)
print (df)
df.describe()
#http://www.datasciencemadesimple.com/descriptive-summary-statistics-python-pandas/
# -
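# For what it's worth, a sketch of how the combined frame in the cell above could be built: each column gets a plain 1-D array and `pd.DataFrame` is used instead of `pd.Series` (same column names as the original attempt):

```python
import numpy as np
import pandas as pd

# the original cell wrapped each array in a one-element list, which is
# what prevented the frame from being built
data = {'Age': np.random.randint(20, 30, 100),
        'Concussion': np.random.randint(1, 3, 100),
        'Games': np.random.randint(10, 20, 100)}
players = pd.DataFrame(data)
print(players.describe())
```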
# ## Summary Of Findings
# The average age of the 100 players who suffered a concussion was 25 years old. Almost 60% of players in the mocked-up data suffered another concussion, which tallies with previous findings in the concussion research. Players played on average 14 games in a season, with most players falling between 8 and 20 games. Headache was marginally the most common symptom found after a concussion. I was unable to find a direct link showing that the more games a player played after a concussive event, the more likely he was to suffer another injury.
# ## References
# 1. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2577443/
# 2. https://journals.sagepub.com/doi/abs/10.1177/0363546509341032
# 3. https://bjsm.bmj.com/content/bjsports/51/11/838.full.pdf
# 4. http://www.irbplayerwelfare.com/?documentid=3
# 5. http://www.irb.com/unions/union=11000001/index.html
# 6. Gavett BE, Stern RA, McKee ACChronic traumatic encephalopathy: a potential late effectof sport-related concussive and subconcussive head trauma. Clin SportsMed2011;30:179–88.
# 7. http://www.independent.ie/sport/rugby/it-affects-every-facet-of-your-life-it-takes-from-you-im-a-different-person-when-this-is-bad-2410819.html
# 8. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4731682/
| Real World Data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6.8 64-bit
# language: python
# name: python36864bit808004b91bd74de482c7a28e46d03816
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
data = pd.read_csv('fma_metadata/features_dataset.csv')
data.head()
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.metrics import classification_report,confusion_matrix
X = data.drop(['track_id','genre_top'],axis=1)
y = data['genre_top']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
svm = SVC(decision_function_shape='ovo')
svm.fit(X_train,y_train)
svm_preds = svm.predict(X_test)
print(classification_report(y_test,svm_preds))
params = {
'C':[0.1,1,10,100,1000],
'gamma':[10,1,0.1,0.01,0.001]
}
svm_grid = GridSearchCV(SVC(),params,verbose=3)
svm_grid.fit(X_train,y_train)
svm_grid.best_params_
svm_grid_preds = svm_grid.predict(X_test)
print(classification_report(y_test,svm_grid_preds))
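# SVC is sensitive to feature scale, which may partly explain the weak scores above; a hedged sketch (on synthetic data, not the FMA features) of wrapping scaling and the grid search in a single `Pipeline`, so the scaler is re-fit on each CV fold and nothing leaks from validation to training:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# synthetic stand-in for the FMA features
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pipe = Pipeline([('scale', StandardScaler()), ('svc', SVC())])
params = {'svc__C': [0.1, 1, 10], 'svc__gamma': ['scale', 0.01]}
grid = GridSearchCV(pipe, params, cv=3)
grid.fit(X_tr, y_tr)
print(grid.best_params_, grid.score(X_te, y_te))
```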
# # Summary
#
# Dataset size - 11851 rows x 79 columns
#
#
# ## Splitting the dataset 70:30 train test split ratio
#
# * SVM
# <br><br>
# * Accuracy with default parameters - 42%
# <br><br>
# * Best params by Grid Search - { 'C': 1, 'gamma':10 }
# <br><br>
# * Accuracy with Grid Search parameters - 27%
# <br><br>
| svm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# +
#library(tidyverse)
library(ggplot2)
#library(dplyr)
#regrex1 <- read_csv(file="regrex1.csv")
args <- commandArgs(trailingOnly = TRUE)
regrex1 <- read.csv(args[1])
png("Rscript_regrex1.png")
ggplot(data = regrex1, aes(x=x, y=y)) +
geom_point()
dev.off()
png("Rscript2_regrex1.png")
ggplot(data = regrex1, aes(x=x, y=y)) +
geom_point() +
geom_smooth(color = "purple", level = 0, method = "lm")
dev.off()
# -
| Rscript1.ipynb |
-- ---
-- jupyter:
-- jupytext:
-- text_representation:
-- extension: .hs
-- format_name: light
-- format_version: '1.5'
-- jupytext_version: 1.14.4
-- kernelspec:
-- display_name: Haskell
-- language: haskell
-- name: haskell
-- ---
-- # Demo Notebook for "Automatic Differentiation With Higher Infinitesimals, or Computational Smooth Infinitesimal Analysis in Weil Algebra"
-- In this Notebook, we will demonstrate the functionality of Computational Smooth Infinitesimal Analysis.
-- Note that IHaskell can emit superfluous warnings; the settings in the Setup section below suppress them.
-- ## Setup
-- This section prepares the environment for the computation in what follows.
-- Generally, you don't have to modify the contents of this section unless you know what you're doing.
--
-- NOTE: the following directives suppress warnings and enable the language extensions used below.
:set -Wno-all -Wno-name-shadowing -Wno-type-defaults
:set -XDataKinds -XPolyKinds -XTypeApplications -XTypeOperators -XGADTs -XFlexibleContexts -XMonoLocalBinds -XOverloadedStrings -XOverloadedLabels -XScopedTypeVariables -XFlexibleInstances -XMultiParamTypeClasses -XUndecidableInstances
-- +
import qualified Algebra.Prelude.Core as AP
import Numeric.Algebra.Smooth
import Numeric.Algebra.Smooth.Weil
import Algebra.Ring.Polynomial
import Algebra.Ring.Ideal
import IHaskell.Display
import IHaskell.Display.Blaze
import Control.Monad
import qualified Text.Blaze.Html5 as H5
import qualified Data.Map.Strict as M
import qualified Data.Text as T
import Data.Coerce
import GHC.OverloadedLabels
import Data.Reflection
import GHC.TypeLits
import qualified Data.Sized as SV
import Symbolic
type Q = AP.Rational
latexise = T.replace "*" "\\cdot" . T.replace "exp" "\\exp" . T.replace "cos" "\\cos" . T.replace "sin" "\\sin" . T.pack
instance
( c ~ Symbolic
, Reifies i (WeilSettings n m)
, KnownNat n
, KnownNat m
, KnownSymbol sym
) =>
IsLabel sym (Weil i c)
where
fromLabel = injCoeWeil $ fromLabel @sym @Symbolic
{-# INLINE fromLabel #-}
-- -
-- ## Simple Univariate Differentiation
-- First, let's calculate the differential coefficients of
--
-- $$
-- f(x) = \sin\left(\frac x 2\right) \mathrm{e}^{x^2}
-- $$
--
-- at $x = \frac \pi 4$ up to the fourth.
f :: Floating a => a -> a
f x = sin (x /2) * exp (x^2)
let dic = diffUpTo 4 f (pi/4)
M.toList dic
H5.table $ do
H5.thead $ H5.tr $ do
forM_ (M.keys dic) $ \i -> H5.th $ H5.toMarkup $ show i
H5.tr $ do
forM_ (M.toList dic) $ \(_, x) ->
H5.td $ H5.toMarkup $ '$' : show x ++ "$"
-- As we saw in the paper (Theorem 2), we can compute $n$-th derivatives by evaluating $f$ in the tensor product of $\mathrm{R}[d]$'s, i.e. calculating $f(x + d_1 + \cdots + d_n)$ where $d_i \in \mathbb{R}[d_i]$.
-- Let's calculate up to the fourth:
let fval = f (pi/4 + di 0 + di 1 + di 2 + di 3) :: Weil (D1 |*| D1 |*| D1 |*| D1) Double
fval
-- Here, the coefficient of $d_0 \cdots d_{n-1}$ corresponds to the $n$-th differential coefficient.
-- +
let pol = weilToPoly fval
H5.table $ do
let ds :: [OrderedMonomial Grevlex 4]
ds = map leadingMonomial (vars :: [Polynomial AP.Rational 4])
H5.thead $ H5.tr $ do
mapM_ (H5.th . H5.toMarkup . show) [0..4]
H5.tr $ forM_ [0..4] $ \i ->
H5.td $ do
"$"
H5.toMarkup $ show $ AP.unwrapFractional $ coeff (AP.product $ take i ds) pol
"$"
-- -
-- However, this approach gives us exponential growth in $n$.
-- As we saw in Lemma 2, we could use $\mathbb{R}[\varepsilon] = \mathbb{R}[x]/(x^{n+1})$ alternatively:
let fval' = f (pi/4 + di 0) :: Weil (DOrder 5) Double
fval'
-- Note that the coefficient of $d^i$ must be multiplied by $i!$ to recover $f^{(i)}(x)$, since
--
-- $$
-- f(x + \varepsilon) = \sum_{0 \leq i \leq n} \frac{f^{(i)}(x)}{i!}\varepsilon^i.
-- $$
--
-- Then the table gets:
higher = fmap AP.unwrapFractional $ terms $ weilToPoly fval'
higher
H5.table $ do
let dic = M.toList higher
H5.thead $ H5.tr $ do
H5.th ""
mapM_ (H5.th . H5.toMarkup . show) [0..4]
H5.tr $ do
H5.th "$c_i$"
forM_ dic $ \(_, c) -> H5.td $ "$" >> H5.toMarkup (show c) >> "$"
H5.tr $ do
H5.th "$c_i \\cdot i!$"
forM_ (zip [0..] dic) $ \(i, (_,c)) -> H5.td $ "$" >> H5.toMarkup (show $ product [1..i] * c) >> "$"
-- It coincides with the results we've gotten so far (modulo admissible floating-point errors)!
-- ### Recovering symbolic differentiation
-- If we use `Symbolic` as a coefficient type instead of `Double`, we can recover the symbolic differentiation from the automatic differentiation!
let dic = normalise <$> f (#x + di 0) :: Weil (DOrder 3) Symbolic
H5.table $ do
H5.thead $ H5.tr $ H5.th "n" *> H5.th "$f^{(n)}$"
forM_ (M.toList $ terms $ weilToPoly dic) $ \(mon, p) -> H5.tr $ do
H5.th $ H5.toMarkup $ show $ totalDegree mon
H5.td $ do
"$"
H5.toMarkup $ latexise $ show $ AP.unwrapFractional p
"$"
-- ## Multivariate differential
-- Let's see how the multivariate partial derivatives can be calculated with tensor products of Weil algebras.
-- Let
--
-- $$
-- g(x,y) = \sin(x) \mathrm{e}^{y^2}
-- $$
--
-- and calculate the partial derivatives at $(\frac{\pi}{3}, \frac{\pi}{6})$ of $g$ up to $(1,2)$-th.
g :: Floating x => x -> x -> x
g x y = sin x * exp (y ^ 2)
g' = g (pi/3 + di 0) (pi/6 + di 1) :: Weil (DOrder 2 |*| DOrder 3) Double
g'
-- +
let gDic :: M.Map (OrderedMonomial Grevlex 2) Double
gDic = coerce $ terms $ weilToPoly g'
H5.table $ do
forM_ (M.toList gDic) $ \(mon, coe) -> H5.tr $ do
let np = product [1..totalDegree mon]
degs = getMonomial mon
xn = degs SV.%!! 0
yn = degs SV.%!! 1
H5.th $ do
"$"
when (xn > 0) $ "\\partial x^{" >> H5.toMarkup (show xn) >> "}"
when (yn > 0) $ "\\partial y^{" >> H5.toMarkup (show yn) >> "}"
"g(x,y)"
"$"
H5.td $ "$" >> H5.toMarkup (show $ fromIntegral np * coe) >> "$"
-- -
-- We can recover multivariate symbolic differentiation as well (this may take a while):
-- +
let g'' = normalise <$> g (#x + di 0) (#y + di 1) :: Weil (DOrder 2 |*| DOrder 3) Symbolic
gDic :: M.Map (OrderedMonomial Grevlex 2) Symbolic
gDic = coerce $ terms $ weilToPoly g''
H5.table $ do
forM_ (M.toList gDic) $ \(mon, coe) -> H5.tr $ do
let np = product [1..totalDegree mon]
degs = getMonomial mon
xn = degs SV.%!! 0
yn = degs SV.%!! 1
H5.th $ do
"$"
when (xn > 0) $ "\\partial x^{" >> H5.toMarkup (show xn) >> "}"
when (yn > 0) $ "\\partial y^{" >> H5.toMarkup (show yn) >> "}"
"g(x,y)"
"$"
H5.td $ "$" >> H5.toMarkup (latexise $ show $ fromIntegral np * coe) >> "$"
-- -
-- ## Computation in General Weil Algebra
-- We can treat general Weil algebra defined by general ideal over polynomial rings.
-- Let us consider the following randomly-chosen ideal:
--
-- $$
-- I = \left\langle x^3 - 2y^2,\ x^2 y,\ y^3 \right\rangle.
-- $$
--
-- Let's test if $I$ is Weil or not:
-- +
i :: Ideal (Polynomial AP.Rational 2)
i = toIdeal [var 0 ^ 3 - 2 * var 1 ^2 , var 0 ^2 * var 1, var 1 ^ 3]
isWeil i
-- -
-- OK, it is a Weil algebra (although its intuition is unclear). Let's evaluate some function on it:
-- +
import Data.Maybe
fromJust $ withWeil i $
let x = pi / 6 + di 0
y = pi / 3 + di 1
in x * y * sin x * exp (y*y)
-- -
-- It somehow works indeed!
-- The algorithm rejects non-Weil algebra correctly.
-- First let us consider $\langle x^2 - 1 \rangle \subseteq \mathbb{R}[x]$, which is zero-dimensional but not nilpotent:
withWeil (toIdeal [var 0 ^2 - 1 :: Polynomial Q 1]) $
let x = pi / 6 + di 0
y = pi / 3 + di 1
in x * y * sin x * exp (y*y)
-- It returned `Nothing`, which means it is not Weil.
--
-- Next we consider the ideal $\langle x^2 - y^3 \rangle \subseteq \mathbb{R}[x, y]$.
-- Note that $y$ never vanishes, so it is not even zero-dimensional!
withWeil (toIdeal [var 0 ^2 - var 1 ^ 3 :: Polynomial Q 2]) $
let x = pi / 6 + di 0
y = pi / 3 + di 1
in x * y * sin x * exp (y*y)
-- It returns `Nothing` as expected.
| notebooks/demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import loader
from sympy import *
init_printing()
from root.solver import *
# +
F = Symbol('F', real=True)
coeffs = 1, 2, 5
m, b, k = coeffs
yc, p = nth_order_const_coeff(*coeffs)
p.display()
# this is better solved by undetermined coefficients.
yp, p = undetermined_coefficients(yc, coeffs, F*sin(2 * t) + F)
p.display()
to_general(yc, yp)[0]
# -
| notebooks/vibration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
# Lambda School Data Science
#
# *Unit 2, Sprint 3, Module 1*
#
# ---
#
#
# # Define ML problems
# - Choose a target to predict, and check its distribution
# - Avoid leakage of information from test to train or from target to features
# - Choose an appropriate evaluation metric
#
# ### Setup
#
# +
# %%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# !pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
# -
# # Choose a target to predict, and check its distribution
# ## Overview
# This is the data science process at a high level:
#
# <img src="https://image.slidesharecdn.com/becomingadatascientistadvice-pydatadc-shared-161012184823/95/becoming-a-data-scientist-advice-from-my-podcast-guests-55-638.jpg?cb=1476298295">
#
# —<NAME>, [Becoming a Data Scientist, PyData DC 2016 Talk](https://www.becomingadatascientist.com/2016/10/11/pydata-dc-2016-talk/)
# We've focused on the 2nd arrow in the diagram, by training predictive models. Now let's zoom out and focus on the 1st arrow: defining problems, by translating business questions into code/data questions.
# Last sprint, you did a Kaggle Challenge. It’s a great way to practice model validation and other technical skills. But that's just part of the modeling process. [Kaggle gets critiqued](https://speakerdeck.com/szilard/machine-learning-software-in-practice-quo-vadis-invited-talk-kdd-conference-applied-data-science-track-august-2017-halifax-canada?slide=119) because some things are done for you: Like [**defining the problem!**](https://www.linkedin.com/pulse/data-science-taught-universities-here-why-maciej-wasiak/) In today’s module, you’ll begin to practice this objective, with your dataset you’ve chosen for your personal portfolio project.
#
# When defining a supervised machine learning problem, one of the first steps is choosing a target to predict.
# Which column in your tabular dataset will you predict?
#
# Is your problem regression or classification? You have options. Sometimes it’s not straightforward, as we'll see below.
#
# - Discrete, ordinal, low cardinality target: Can be regression or multi-class classification.
# - (In)equality comparison: Converts regression or multi-class classification to binary classification.
# - Predicted probability: Seems to [blur](https://brohrer.github.io/five_questions_data_science_answers.html) the line between classification and regression.
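# As a sketch of the "(in)equality comparison" point above, a continuous rating like the burrito `overall` score can be thresholded into a binary target (the toy values and the 4.0 cutoff are assumptions):

```python
import pandas as pd

overall = pd.Series([2.5, 3.0, 4.2, 4.5, 3.8])  # toy ratings, not the burrito data
great = overall >= 4.0   # regression target -> binary classification target
print(great.value_counts())
```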
# ## Follow Along
# Let's reuse the [Burrito reviews dataset.](https://nbviewer.jupyter.org/github/LambdaSchool/DS-Unit-2-Linear-Models/blob/master/module4-logistic-regression/LS_DS_214_assignment.ipynb) 🌯
#
import pandas as pd
pd.options.display.max_columns = None
df = pd.read_csv(DATA_PATH+'burritos/burritos.csv')
# ### Choose your target
#
# Which column in your tabular dataset will you predict?
#
target = 'overall'
# ### How is your target distributed?
#
# For a classification problem, determine: How many classes? Are the classes imbalanced?
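# A quick way to answer both questions is `value_counts`. A minimal sketch with a made-up binary target (the values are illustrative only):
#
```python
import pandas as pd

# Hypothetical binary target: is the burrito "great"?
y = pd.Series([True, False, False, True, False, False, False, True])

print(y.value_counts())                # raw class counts
print(y.value_counts(normalize=True))  # class proportions — the max is the
                                       # majority class frequency
```
#
# The normalized counts tell you both how many classes you have and how imbalanced they are.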
# # Avoid leakage of information from test to train or from target to features
# ## Overview
# Overfitting is our enemy in applied machine learning, and leakage is often the cause.
#
# > Make sure your training features do not contain data from the “future” (aka time traveling). While this might be easy and obvious in some cases, it can get tricky. … If your test metric becomes really good all of the sudden, ask yourself what you might be doing wrong. Chances are you are time travelling or overfitting in some way. — [Xavier Amatriain](https://www.quora.com/What-are-some-best-practices-for-training-machine-learning-models/answer/Xavier-Amatriain)
#
# Choose train, validate, and test sets. Are some observations outliers? Will you exclude them? Will you do a random split or a time-based split? You can (re)read [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/).
# ## Follow Along
# First, begin to **explore and clean your data.**
df['Burrito'].nunique()
# Next, do a **time-based split:**
#
# - Train on reviews from 2016 & earlier.
# - Validate on 2017.
# - Test on 2018 & later.
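# A time-based split like the one above can be sketched with boolean masks on the year. The tiny DataFrame below is made up; it only assumes a datetime `Date` column like the burrito dataset's:
#
```python
import pandas as pd

# Hypothetical reviews with a 'Date' column
df_toy = pd.DataFrame({
    'Date': pd.to_datetime(['2015-06-01', '2016-12-31', '2017-05-04',
                            '2018-01-15', '2019-03-02']),
    'overall': [3.5, 4.0, 2.5, 4.5, 3.0],
})

train = df_toy[df_toy['Date'].dt.year <= 2016]  # 2016 & earlier
val   = df_toy[df_toy['Date'].dt.year == 2017]  # 2017 only
test  = df_toy[df_toy['Date'].dt.year >= 2018]  # 2018 & later
print(len(train), len(val), len(test))
```
#
# Every row lands in exactly one of the three sets, and the test set is strictly "in the future" relative to training.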
df['Burrito'] = df['Burrito'].str.lower()
# +
# Categorize all burritos into 4 classes + an 'other' class
california = df['Burrito'].str.contains('california')
california
# -
# Begin to choose which features, if any, to exclude. **Would some features “leak” future information?**
#
# What happens if we _DON’T_ drop features with leakage?
# Drop the column with “leakage”.
# # Choose an appropriate evaluation metric
# ## Overview
# How will you evaluate success for your predictive model? You must choose an appropriate evaluation metric, depending on the context and constraints of your problem.
#
# **Classification & regression metrics are different!**
#
# - Don’t use _regression_ metrics to evaluate _classification_ tasks.
# - Don’t use _classification_ metrics to evaluate _regression_ tasks.
#
# [Scikit-learn has lists of popular metrics.](https://scikit-learn.org/stable/modules/model_evaluation.html#common-cases-predefined-values)
# ## Follow Along
# For classification problems:
#
# As a rough rule of thumb, if your majority class frequency is >= 50% and < 70% then you can just use accuracy if you want. Outside that range, accuracy could be misleading — so what evaluation metric will you choose, in addition to or instead of accuracy? For example:
#
# - Precision?
# - Recall?
# - ROC AUC?
#
# ### Precision & Recall
#
# Let's review Precision & Recall. What do these metrics mean, in scenarios like these?
#
# - Predict great burritos
# - Predict fraudulent transactions
# - Recommend Spotify songs
#
# [Are false positives or false negatives more costly? Can you optimize for dollars?](https://alexgude.com/blog/machine-learning-metrics-interview/)
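# A small sketch of both metrics with scikit-learn, using made-up labels for the "great burrito" scenario:
#
```python
from sklearn.metrics import precision_score, recall_score

# Hypothetical true labels and model predictions (1 = "great burrito")
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

# Precision: of the burritos we predicted great, how many really were?
print(precision_score(y_true, y_pred))  # 3 true positives / 4 predicted positives

# Recall: of the truly great burritos, how many did we find?
print(recall_score(y_true, y_pred))     # 3 true positives / 4 actual positives
```
#
# High precision matters when false positives are costly (recommending a bad burrito); high recall matters when false negatives are costly (missing a fraudulent transaction).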
# ### ROC AUC
#
# Let's also review ROC AUC (Receiver Operating Characteristic, Area Under the Curve).
#
# [Wikipedia explains,](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) "A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. **The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.**"
#
# ROC AUC is the area under the ROC curve. [It can be interpreted](https://stats.stackexchange.com/questions/132777/what-does-auc-stand-for-and-what-is-it) as "the expectation that a uniformly drawn random positive is ranked before a uniformly drawn random negative."
#
# ROC AUC measures **how well a classifier ranks predicted probabilities.** So, when you get your classifier’s ROC AUC score, you need to **use predicted probabilities, not discrete predictions.**
#
# ROC AUC ranges **from 0 to 1.** Higher is better. A naive majority class **baseline** will have an ROC AUC score of **0.5**, regardless of class (im)balance.
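# A minimal sketch with `sklearn.metrics.roc_auc_score`, using made-up labels and predicted probabilities:
#
```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]

# Pass predicted probabilities (e.g. model.predict_proba(X_val)[:, 1]),
# NOT discrete class predictions
y_prob = [0.1, 0.4, 0.35, 0.8]
print(roc_auc_score(y_true, y_prob))   # 3 of 4 positive/negative pairs
                                       # are ranked correctly → 0.75

# A naive constant prediction ranks nothing, so AUC is 0.5,
# regardless of class (im)balance
print(roc_auc_score(y_true, [0.5, 0.5, 0.5, 0.5]))  # 0.5
```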
#
# #### Scikit-Learn docs
# - [User Guide: Receiver operating characteristic (ROC)](https://scikit-learn.org/stable/modules/model_evaluation.html#receiver-operating-characteristic-roc)
# - [sklearn.metrics.roc_curve](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html)
# - [sklearn.metrics.roc_auc_score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html)
#
# #### More links
# - [StatQuest video](https://youtu.be/4jRBRDbJemM)
# - [Data School article / video](https://www.dataschool.io/roc-curves-and-auc-explained/)
# - [The philosophical argument for using ROC curves](https://lukeoakdenrayner.wordpress.com/2018/01/07/the-philosophical-argument-for-using-roc-curves/)
#
# ### Imbalanced classes
#
# Do you have highly imbalanced classes?
#
# If so, you can try ideas from [Learning from Imbalanced Classes](https://www.svds.com/tbt-learning-imbalanced-classes/):
#
# - “Adjust the class weight (misclassification costs)” — most scikit-learn classifiers have a `class_weight` parameter.
# - “Adjust the decision threshold” — we did this last module. Read [Visualizing Machine Learning Thresholds to Make Better Business Decisions](https://blog.insightdatascience.com/visualizing-machine-learning-thresholds-to-make-better-business-decisions-4ab07f823415).
# - “Oversample the minority class, undersample the majority class, or synthesize new minority classes” — try the [imbalanced-learn](https://github.com/scikit-learn-contrib/imbalanced-learn) library as a stretch goal.
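# A minimal sketch of the first idea, using scikit-learn's `class_weight` parameter on made-up imbalanced data (the data and parameters here are illustrative, not a recipe):
#
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy imbalanced data: 90 negatives, 10 positives
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.array([0] * 90 + [1] * 10)
X[y == 1] += 2  # shift the positives so there is real signal

# class_weight='balanced' weights errors inversely to class frequency,
# so the minority class is not ignored
model = LogisticRegression(class_weight='balanced')
model.fit(X, y)
print(model.predict(X).mean())  # fraction of observations predicted positive
```
#
# Without the class weighting, a classifier on data this imbalanced can score ~90% accuracy by predicting the majority class everywhere — exactly the failure mode the accuracy caveat above warns about.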
# # BONUS: Regression example 🏘️
#
# Read our NYC apartment rental listing dataset
df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv')
# ### Choose your target
#
# Which column in your tabular dataset will you predict?
#
y = df['price']
# ### How is your target distributed?
#
# For a regression problem, determine: Is the target right-skewed?
#
# Yes, the target is right-skewed
import seaborn as sns
sns.distplot(y);
y.describe()
# ### Are some observations outliers?
#
# Will you exclude them?
#
# +
# Yes! There are outliers
# Some prices are so high or low that they don't really make sense.
# Some locations aren't even in New York City
# Remove the most extreme 1% prices,
# the most extreme .1% latitudes, &
# the most extreme .1% longitudes
import numpy as np
df = df[(df['price'] >= np.percentile(df['price'], 0.5)) &
(df['price'] <= np.percentile(df['price'], 99.5)) &
(df['latitude'] >= np.percentile(df['latitude'], 0.05)) &
(df['latitude'] < np.percentile(df['latitude'], 99.95)) &
(df['longitude'] >= np.percentile(df['longitude'], 0.05)) &
(df['longitude'] <= np.percentile(df['longitude'], 99.95))]
# -
# The distribution has improved, but is still right-skewed
y = df['price']
sns.distplot(y);
y.describe()
# ### Log-Transform
#
# If the target is right-skewed, you may want to “log transform” the target.
#
#
# > Transforming the target variable (using the mathematical log function) into a tighter, more uniform space makes life easier for any [regression] model.
# >
# > The only problem is that, while easy to execute, understanding why taking the log of the target variable works and how it affects the training/testing process is intellectually challenging. You can skip this section for now, if you like, but just remember that this technique exists and check back here if needed in the future.
# >
# > Optimally, the distribution of prices would be a narrow “bell curve” distribution without a tail. This would make predictions based upon average prices more accurate. We need a mathematical operation that transforms the widely-distributed target prices into a new space. The “price in dollars space” has a long right tail because of outliers and we want to squeeze that space into a new space that is normally distributed. More specifically, we need to shrink large values a lot and smaller values a little. That magic operation is called the logarithm or log for short.
# >
# > To make actual predictions, we have to take the exp of model predictions to get prices in dollars instead of log dollars.
# >
# >— <NAME> & <NAME>, [The Mechanics of Machine Learning, Chapter 5.5](https://mlbook.explained.ai/prep.html#logtarget)
#
# [Numpy has exponents and logarithms](https://docs.scipy.org/doc/numpy/reference/routines.math.html#exponents-and-logarithms). Your Python code could look like this:
#
# ```python
# import numpy as np
# y_train_log = np.log1p(y_train)
# model.fit(X_train, y_train_log)
# y_pred_log = model.predict(X_val)
# y_pred = np.expm1(y_pred_log)
# print(mean_absolute_error(y_val, y_pred))
# ```
sns.distplot(y)
plt.title('Original target, in the unit of US dollars');
y_log = np.log1p(y)
sns.distplot(y_log)
plt.title('Log-transformed target, in log-dollars');
y_untransformed = np.expm1(y_log)
sns.distplot(y_untransformed)
plt.title('Back to the original units');
# ## Challenge
#
# You will use your portfolio project dataset for all assignments this sprint. (If you haven't found a dataset yet, do that today. [Review requirements for your portfolio project](https://lambdaschool.github.io/ds/unit2) and choose your dataset.)
#
# Complete these tasks for your project, and document your decisions.
#
# - Choose your target. Which column in your tabular dataset will you predict?
# - Is your problem regression or classification?
# - How is your target distributed?
# - Classification: How many classes? Are the classes imbalanced?
# - Regression: Is the target right-skewed? If so, you may want to log transform the target.
# - Choose your evaluation metric(s).
# - Classification: Is your majority class frequency >= 50% and < 70% ? If so, you can just use accuracy if you want. Outside that range, accuracy could be misleading. What evaluation metric will you choose, in addition to or instead of accuracy?
# - Regression: Will you use mean absolute error, root mean squared error, R^2, or other regression metrics?
# - Choose which observations you will use to train, validate, and test your model.
# - Are some observations outliers? Will you exclude them?
# - Will you do a random split or a time-based split?
# - Begin to clean and explore your data.
# - Begin to choose which features, if any, to exclude. Would some features "leak" future information?
#
# Some students worry, ***what if my model isn't “good”?*** Then, [produce a detailed tribute to your wrongness. That is science!](https://twitter.com/nathanwpyle/status/1176860147223867393)
| module1-define-ml-problems/LS_DS_231.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction
# This is an introductory tutorial to locate, load, and plot ESM4 biogeochemistry data.
#
# # Loading data
# Output from the pre-industrial control simulation of ESM4 is located in the file directory:
# /archive/oar.gfdl.cmip6/ESM4/DECK/ESM4_piControl_D/gfdl.ncrc4-intel16-prod-openmp/pp/
#
# NOTE: it is easiest to navigate the filesystem from the terminal, using the "ls" command.
#
# Within this directory are a number of sub-folders in which different variables have been saved. Of relevance for our work are the folders with names starting ocean_ and ocean_cobalt_ (cobalt is the name of the biogeochemistry model used in this simulation). In each of the subfolders, data have been subsampled and time-averaged in different ways. So for example, in the sub-folder ocean_cobalt_omip_tracers_month_z, we find the further sub-folder ts/monthly/5yr/. In this folder are files (separate ones for each biogeochemical tracer) containing monthly averages for each 5 year time period since the beginning of the simulation.
#
# Let's load and plot the data from one of these files.
# Load certain useful packages in python
import xarray as xr
import numpy as np
from matplotlib import pyplot as plt
# We will load the oxygen (o2) data from a 5 year window of the simulation - years 711 to 715.
# Specify the location of the file
rootdir = '/archive/oar.gfdl.cmip6/ESM4/DECK/ESM4_piControl_D/gfdl.ncrc4-intel16-prod-openmp/pp/'
datadir = 'ocean_cobalt_omip_tracers_month_z/ts/monthly/5yr/'
filename = 'ocean_cobalt_omip_tracers_month_z.071101-071512.o2.nc'
# Note the timestamp in the filename: 071101-071512
# which specifies that in this file are data from year 0711 month 01 to year 0715 month 12.
# Load the file using the xarray (xr) command open_dataset
# We load the data to a variable that we call 'oxygen'
oxygen = xr.open_dataset(rootdir+datadir+filename)
# Print to the screen the details of the file
print(oxygen)
# We can see above that the file contains the variable (in this case oxygen - o2), as well as all of the dimensional information - longitude (xh), latitude (yh), depth (z_i and z_l). The two depth coordinates correspond to the layer and interface depths - for our purposes, we will almost always be interested only in the layer depth.
#
# We can learn more about a variable (e.g. what it is, and what its units are), by printing it to the screen directly.
print(oxygen.o2)
# Here we can see that the variable o2 corresponds to the concentration of dissolved oxygen, in moles per cubic metre.
# ***
#
# # Plotting
#
# Now let's plot some of this data to see what it looks like.
#
# We use the package pyplot from matplotlib (plt), with the command pcolormesh.
# This plots a 2D coloured mesh of whatever variable we specify.
# We load the generated image to the variable 'im', so that we can point to it later.
#
# Within pcolormesh, we use the '.' to pull out the bits that we want from the dataset 'oxygen'
# In the first instance we take the variable o2:
# oxygen.o2
# Then we select the first time point and the very upper depth level using index selection:
# oxygen.o2.isel(time=0,z_l=0)
im = plt.pcolormesh(oxygen.o2.isel(time=0,z_l=0)) # pcolormesh of upper surface at first time step
plt.colorbar(im) # Plot a colorbar
plt.show() # Show the plot
# ***
# We can just as easily plot a deeper depth level. Let's look at the 10th level.
im = plt.pcolormesh(oxygen.o2.isel(time=0,z_l=9)) # remember python indices start from 0
plt.colorbar(im) # Plot a colorbar
plt.show() # Show the plot
# # Temperature and salinity data
#
# Now we are equipped to load and examine biogeochemical data.
#
# Where do we find the coincident physical variables, temperature and salinity? The physical variables are stored in the same root directory, but a different sub-directory: ocean_monthly_z/ts/monthly/5yr/.
#
# Let's load the temperature data for the same time period.
# +
datadir = 'ocean_monthly_z/ts/monthly/5yr/'
filename = 'ocean_monthly_z.071101-071512.thetao.nc'
temperature = xr.open_dataset(rootdir+datadir+filename)
print(temperature.thetao)
# -
# Let's plot the surface temperature data.
im = plt.pcolormesh(temperature.thetao.isel(time=0,z_l=0))
plt.colorbar(im) # Plot a colorbar
plt.show() # Show the plot
# # Binning
#
# A lot of what we will be doing is looking at variables such as oxygen in a 'temperature coordinate'. That is to say, 'binning' the oxygen according to the temperature of the water.
#
# Let's look at how to do that in xarray.
# Merge our oxygen and temperature dataarrays
ds = xr.merge([temperature,oxygen])
# Set temperature as a 'coordinate' in the new dataset
ds = ds.set_coords('thetao')
# Use the groupby_bins functionality of xarray to group the o2 measurements into temperature bins
theta_bins = np.arange(-2,30,1) # Specify the range of the bins
o2_in_theta = ds.o2.isel(time=0).groupby_bins('thetao',theta_bins) # Do the grouping
# This series of operations has grouped the o2 datapoints according to their coincident temperature values.
# (A short example of the functionality of groupby using multi-dimensional coordinates, such as temperature, is provided [here](http://xarray.pydata.org/en/stable/examples/multidimensional-coords.html))
# We can now perform new operations on the grouped object (o2_in_theta).
#
# For example, we can simply count up the number of data points in each group (like a histogram):
o2_in_theta.count(xr.ALL_DIMS)
# And we can plot that very easily:
o2_in_theta.count(xr.ALL_DIMS).plot()
# Or, we can take the mean value in each group:
o2_in_theta.mean(xr.ALL_DIMS)
# ### Accounting for volume
# Different grid cells in the model have different volumes. Thus, when we are doing summations, calculating means, etc., we need to account for this variable volume.
#
# So, first load up the gridcell volume data.
# +
datadir = 'ocean_monthly_z/ts/monthly/5yr/'
filename = 'ocean_monthly_z.071101-071512.volcello.nc'
volume = xr.open_dataset(rootdir+datadir+filename)
print(volume.volcello)
# -
# As a first example, sum up the volumes of the grid cells within each density class. For this we will need to bin the volume into temperature classes, as we did with oxygen.
ds = xr.merge([ds,volume])
volcell_in_theta = ds.volcello.isel(time=0).groupby_bins('thetao',theta_bins)
# Summing these binned volumes then provides a true account of the volume of ocean water in each temperature class.
volcell_in_theta.sum(xr.ALL_DIMS).plot()
# I might also want to look at the summed oxygen content. This would involve binning and summing the product of oxygen and grid cell volume.
o2cont = ds.volcello*ds.o2
o2cont.name='o2cont'
ds = xr.merge([ds,o2cont])
o2cont_in_theta = ds.o2cont.isel(time=0).groupby_bins('thetao',theta_bins)
o2cont_in_theta.sum(xr.ALL_DIMS).plot()
# And now the volume-weighted mean oxygen in each temperature class.
o2mean_in_theta = o2cont_in_theta.sum(xr.ALL_DIMS)/volcell_in_theta.sum(xr.ALL_DIMS)
o2mean_in_theta.plot()
# ### Doing our binning all at once
# The binning process takes some time, since the algorithm has to search through the whole 3D grid.
# groupby_bins can also operate on DataSets, rather than just DataArrays. As such, it could be more time efficient to do all of the binning at once.
# Let's have a look at that.
#
# Remember, our DataSet ds has all of the variables that we are interested in binning.
ds_in_theta = ds.isel(time=0).groupby_bins('thetao',theta_bins)
ds_in_theta
| notebooks/archive/tutorial_esm4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Principal component analysis for dimensionality reduction
#
# In the following we introduce the concept of principal component analysis (PCA). The basic idea is to reduce the number of relevant features/columns of the data, i.e., to achieve dimensionality reduction. To this end, PCA produces so-called principal components, which capture/represent the variance of the data in descending order. Essentially, the covariance matrix of the data is computed and diagonalized to reach this goal. For more information on PCA, we refer to https://en.wikipedia.org/wiki/Principal_component_analysis and for a discussion of the Python implementation see https://towardsdatascience.com/pca-using-python-scikit-learn-e653f8989e60.
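# To make the covariance-matrix picture concrete, here is a minimal NumPy sketch of PCA on synthetic data (the data below is made up purely for illustration; scikit-learn's `PCA` class, used later in this notebook, wraps an equivalent computation):
#
```python
import numpy as np

# Toy data: 200 samples, 3 features, two of them correlated
rng = np.random.default_rng(42)
base = rng.normal(size=(200, 1))
X = np.hstack([base,
               0.5 * base + 0.1 * rng.normal(size=(200, 1)),
               rng.normal(size=(200, 1))])

# Center the data, compute the covariance matrix, and diagonalize it
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order

# Principal components = eigenvectors with the largest eigenvalues
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order[:2]]      # keep the top 2 components
X_reduced = Xc @ components             # project onto them
print(X_reduced.shape)                  # (200, 2)
```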
#
# The data to be analyzed captures mobile phone user motion information and can be downloaded from: https://www.kaggle.com/uciml/human-activity-recognition-with-smartphones/data.
#
# In this notebook, we merely seek to discuss the python implementation and visualization of the result of the PCA.
# +
#importing necessary packages
import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
# %matplotlib inline
import matplotlib.pyplot as plt
# +
#define data frame
df = pd.read_csv("./train.csv.bz2")
df.head()
# -
df.shape
#define variables
X = df.drop("subject", axis = 1).drop("Activity", axis = 1) #drop two last columns
Y = df["Activity"]
X.shape
#need to rescale data
scaler = StandardScaler()
X = scaler.fit_transform(X)
# +
#proceed with PCA (reduce dimensions)
from sklearn.decomposition import PCA
pca = PCA(n_components = 2) #break down 561 columns/axes down to 2!
pca.fit(X)
X_transformed = pca.transform(X)
# -
X_transformed.shape
# +
#visualize result of dimensional reduction
plt.scatter(X_transformed[:, 0], X_transformed[:, 1])
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.title('Principal component scatter plot')
plt.show()
# +
#incorporate Y-information into the analysis
#would like to filter the above graphic and allow only those points where Y has the value "STANDING"
#Y.unique()
#Y == "STANDING"
#filtering
X_transformed_filtered = X_transformed[Y == "STANDING"]
# +
#visualize filtered result of dimensional reduction
plt.scatter(X_transformed_filtered[:, 0], X_transformed_filtered[:, 1], color = 'r')
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.title('filtered ("STANDING") Principal component scatter plot')
plt.show()
# +
#visualize generalized filtering: all categories
plt.figure(figsize = (10, 6))
for activity in Y.unique():
X_transformed_filtered = X_transformed[Y == activity]
plt.scatter(X_transformed_filtered[:, 0], X_transformed_filtered[:, 1], label = activity, s = 4.5)
plt.legend()
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.title('fully filtered Principal component scatter plot')
plt.show()
# -
# We observe that the left cluster corresponds to resting subjects while the right cluster represents subjects in motion. Hence, the data broadly falls into two categories, and is thus linearly separable.
# +
#more detailed analysis: PCA leading to 3 PCs
pca = PCA(n_components = 3) #break down 561 columns/axes down to 3!
pca.fit(X)
X_transformed = pca.transform(X)
X_transformed.shape
# +
#visualize generalized filtering: all categories in 3D
# %matplotlib notebook
#'notebook' backend allows rotating the 3D plot; switch back to 'inline' for a fixed plot
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize = (10, 6))
ax = fig.add_subplot(111, projection='3d')
for activity in Y.unique():
X_transformed_filtered = X_transformed[Y == activity]
ax.scatter(
X_transformed_filtered[:, 0], #:integer to constrain the amount of transformed/plotted points
X_transformed_filtered[:, 1],
X_transformed_filtered[:, 2],
label = activity,
s = 4
)
plt.legend()
ax.set_xlabel('PC1')
ax.set_ylabel('PC2')
ax.set_zlabel('PC3')
plt.title('fully filtered Principal component scatter plot in 3D')
plt.show()
# -
| simplePCAwithvisualization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Polynomial Regression and Pipeline in scikit-learn
import numpy as np
import matplotlib.pyplot as plt
x = np.random.uniform(-3, 3, size=100)
X = x.reshape(-1, 1)
y = 0.5 * x**2 + x + 2 + np.random.normal(0, 1, 100)
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(degree=2)
poly.fit(X)
X2 = poly.transform(X)
X2.shape
X[:5,:]
X2[:5,:]
# +
from sklearn.linear_model import LinearRegression
lin_reg2 = LinearRegression()
lin_reg2.fit(X2, y)
y_predict2 = lin_reg2.predict(X2)
# -
plt.scatter(x, y)
plt.plot(np.sort(x), y_predict2[np.argsort(x)], color='r')
plt.show()
lin_reg2.coef_
lin_reg2.intercept_
# ### About PolynomialFeatures
X = np.arange(1, 11).reshape(-1, 2)
X
poly = PolynomialFeatures(degree=2)
poly.fit(X)
X2 = poly.transform(X)
X2.shape
X2
# ### Pipeline
# +
x = np.random.uniform(-3, 3, size=100)
X = x.reshape(-1, 1)
y = 0.5 * x**2 + x + 2 + np.random.normal(0, 1, 100)
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
poly_reg = Pipeline([
("poly", PolynomialFeatures(degree=2)),
("std_scaler", StandardScaler()),
("lin_reg", LinearRegression())
])
# -
poly_reg.fit(X, y)
y_predict = poly_reg.predict(X)
plt.scatter(x, y)
plt.plot(np.sort(x), y_predict[np.argsort(x)], color='r')
plt.show()
| 08-Polynomial-Regression-and-Model-Generalization/02-Polynomial-Regression-in-scikit-learn/02-Polynomial-Regression-in-scikit-learn.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import pygame
from pygame.locals import *
import sys
import random
import time
pygame.init()
vec = pygame.math.Vector2 #2 for two dimensional
HEIGHT = 300
WIDTH = 900
ACC = 0.5
FRIC = -0.01
FPS = 60
FramePerSec = pygame.time.Clock()
displaysurface = pygame.display.set_mode((WIDTH, HEIGHT))
displaysurface.fill((255,255,255))
pygame.display.set_caption("Red Light Green Light")
class Player(pygame.sprite.Sprite):
def __init__(self):
super().__init__()
#self.image = pygame.image.load("character.png")
self.surf = pygame.Surface((15, 30))
self.surf.fill((0,150,165))
self.rect = self.surf.get_rect()
self.pos = vec((5, HEIGHT-5))
self.vel = vec(0,0)
self.acc = vec(0,0)
def move(self):
self.acc = vec(0,0.5)
pressed_keys = pygame.key.get_pressed()
if pressed_keys[K_a] or pressed_keys[K_LEFT]:
self.acc.x = -ACC
if pressed_keys[K_d] or pressed_keys[K_RIGHT]:
self.acc.x = ACC
self.acc.x += self.vel.x * FRIC
self.vel += self.acc
self.pos += self.vel + 0.5 * self.acc
if self.pos.x > WIDTH:
self.pos.x = 0
if self.pos.x < 0:
self.pos.x = 0
if self.pos.y < 20:
self.pos.y = 20
self.rect.midbottom = self.pos
def draw(self, surface):
        surface.blit(self.surf, self.rect)
class platform(pygame.sprite.Sprite):
def __init__(self):
super().__init__()
self.surf = pygame.Surface((WIDTH, 20))
self.surf.fill((255,230,145))
self.rect = self.surf.get_rect(center = (WIDTH/2, HEIGHT-20))
def draw(self, surface):
        surface.blit(self.surf, self.rect)
P1 = Player()
Ground = platform()
all_sprites = pygame.sprite.Group()
all_sprites.add(P1)
all_sprites.add(Ground)
done = False
while not done:
P1.move()
for entity in all_sprites:
displaysurface.blit(entity.surf, entity.rect)
if P1.pos.x > WIDTH:
done = True
pygame.quit()
sys.exit()
pygame.display.update()
FramePerSec.tick(FPS)
# -
print('Hello')
# +
import pygame
from pygame.locals import *
import sys
import random
pygame.init()
vec = pygame.math.Vector2
HEIGHT = 400
WIDTH = 1000
ACC = 1
FRIC = -0.12
FPS = 60
floor = 30
jump = 7.5
bounce = 0.5
Gravity = 0.5
FramePerSec = pygame.time.Clock()
displaysurface = pygame.display.set_mode((WIDTH, HEIGHT))
pygame.display.set_caption("Game")
class Player(pygame.sprite.Sprite):
def __init__(self):
super().__init__()
self.surf = pygame.Surface((30, 60))
self.surf.fill((0,150,165))
self.rect = self.surf.get_rect()
self.pos = vec((10, HEIGHT- floor))
self.vel = vec(0,0)
self.acc = vec(0,Gravity)
def move(self):
self.acc = vec(0,Gravity)
hits = pygame.sprite.spritecollide(P1 , platforms, False)
if hits:
self.pos.y = hits[0].rect.top + 1
self.vel.y = -self.vel.y*bounce
pressed_keys = pygame.key.get_pressed()
if pressed_keys[K_LEFT] or pressed_keys[K_a]:
self.acc.x = -ACC
if pressed_keys[K_RIGHT] or pressed_keys[K_d]:
self.acc.x = ACC
if pressed_keys[K_UP] or pressed_keys[K_w] or pressed_keys[K_SPACE]:
ground = HEIGHT-floor+1
if self.pos.y > ground-1 and self.pos.y < ground+1:
self.vel.y = -jump
self.acc.x += self.vel.x * FRIC
self.vel += self.acc
self.pos += self.vel + 0.5 * self.acc
if self.pos.x > WIDTH:
self.pos.x = WIDTH
if self.pos.x < 0:
self.pos.x = 0
self.rect.midbottom = self.pos
class platform(pygame.sprite.Sprite):
def __init__(self):
super().__init__()
self.surf = pygame.Surface((WIDTH, floor))
self.surf.fill((255,230,145))
self.rect = self.surf.get_rect(center = (WIDTH/2, HEIGHT - floor/2))
Ground = platform()
P1 = Player()
platforms = pygame.sprite.Group()
platforms.add(Ground)
all_sprites = pygame.sprite.Group()
all_sprites.add(Ground)
all_sprites.add(P1)
while True:
for event in pygame.event.get():
if event.type == QUIT:
pygame.quit()
sys.exit()
displaysurface.fill((255,255,255))
P1.move()
for entity in all_sprites:
displaysurface.blit(entity.surf, entity.rect)
pygame.display.update()
FramePerSec.tick(FPS)
# -
print(pygame.font.get_fonts())
# +
import pygame
from pygame.locals import *
import sys
import random
import time
import pygame.freetype
pygame.init()
vec = pygame.math.Vector2
HEIGHT = 400
WIDTH = 1000
ACC = 2
FRIC = -0.3
FPS = 60
floor = 30
jump = 7.5
bounce = 0.5
Gravity = 0.5
Status = 'Green'
GREEN = (0,200,0)
YELLOW = (240,255,0)
RED = (240,0,0)
prob = int(10000/FPS)
global death, reset_time, reset_count
death = False
reset_count = 0
reset_time = 12000/FPS
FramePerSec = pygame.time.Clock()
GAME_FONT = pygame.freetype.SysFont('rockwell', 24)
displaysurface = pygame.display.set_mode((WIDTH, HEIGHT))
pygame.display.set_caption("Red Light Green Light")
class Player(pygame.sprite.Sprite):
def __init__(self):
super().__init__()
self.surf = pygame.Surface((30, 60))
self.surf.fill((0,150,165))
self.rect = self.surf.get_rect()
self.pos = vec((10, HEIGHT- floor))
self.vel = vec(0,0)
self.acc = vec(0,Gravity)
def move(self):
if not death:
self.acc = vec(0,Gravity)
hits = pygame.sprite.spritecollide(P1 , platforms, False)
if hits:
self.pos.y = hits[0].rect.top + 1
self.vel.y = -self.vel.y*bounce
pressed_keys = pygame.key.get_pressed()
if pressed_keys[K_LEFT] or pressed_keys[K_a]:
self.acc.x = -ACC
if pressed_keys[K_RIGHT] or pressed_keys[K_d]:
self.acc.x = ACC
if pressed_keys[K_UP] or pressed_keys[K_w] or pressed_keys[K_SPACE]:
ground = HEIGHT-floor+1
if self.pos.y > ground-1 and self.pos.y < ground+1:
self.vel.y = -jump
self.acc.x += self.vel.x * FRIC
self.vel += self.acc
self.pos += self.vel + 0.5 * self.acc
if self.pos.x > WIDTH:
self.pos.x = WIDTH
if self.pos.x < 0:
self.pos.x = 0
self.rect.midbottom = self.pos
class platform(pygame.sprite.Sprite):
def __init__(self):
super().__init__()
self.surf = pygame.Surface((WIDTH, floor))
self.surf.fill((255,230,145))
self.rect = self.surf.get_rect(center = (WIDTH/2, HEIGHT - floor/2))
class timer(pygame.sprite.Sprite):
def __init__(self):
super().__init__()
h = 150
w = 50
self.surf = pygame.Surface((w, h))
self.surf.fill(GREEN)
self.rect = self.surf.get_rect(center = (WIDTH-w/2, HEIGHT-floor-h/2))
def update(self, status):
if Status == 'Green':
self.surf.fill(GREEN)
if Status == 'Yellow':
self.surf.fill(YELLOW)
if Status == 'Red':
self.surf.fill(RED)
Ground = platform()
P1 = Player()
Timer = timer()
platforms = pygame.sprite.Group()
platforms.add(Ground)
all_sprites = pygame.sprite.Group()
all_sprites.add(Ground)
all_sprites.add(P1)
all_sprites.add(Timer)
while True:
for event in pygame.event.get():
if event.type == QUIT:
pygame.quit()
sys.exit()
displaysurface.fill((255,255,255))
if Status == 'Green':
x = random.randrange(0,prob)
if x < 1:
Status = 'Yellow'
if Status == 'Yellow':
x = random.randrange(0,int(prob/2))
if x < 1:
Status = 'Red'
if Status =='Red':
if abs(P1.vel.x) < 0.1 and abs(P1.vel.y - 0.33) < 0.1:
reset_count += 1
else:
GAME_FONT.render_to(displaysurface, (WIDTH/2, HEIGHT/4),'Death!')
death = True
reset_count += 1
if reset_count > reset_time:
death = False
reset_count = 0
Status = 'Green'
if abs(P1.vel.x) < 0.1 and abs(P1.pos.y - (HEIGHT-floor+1)) < 1:
GAME_FONT.render_to(displaysurface, (WIDTH/2, HEIGHT/4),'Safe!')
P1.move()
Timer.update(Status)
for entity in all_sprites:
displaysurface.blit(entity.surf, entity.rect)
pygame.display.update()
FramePerSec.tick(FPS)
# -
# +
import pygame
from pygame.locals import *
import sys
import random
import time
import pygame.freetype
pygame.init()
vec = pygame.math.Vector2
HEIGHT = 500
WIDTH = 1000
ACC = 0.8
FRIC = -0.15
FPS = 60
floor = 30
jump = 7.5
bounce = 0.5
Gravity = 0.5
Status = 'Green'
GREEN = (0,200,0)
YELLOW = (240,255,0)
RED = (240,0,0)
green_init = 50
yellow_init = 80
yellow_count = yellow_init
yellow_var = 0.75
yellow_max = 100
green_count = green_init
green_var = 1
green_max = 100
global death, reset_time, reset_count
death = False
reset_count = 0
reset_time = 9000/FPS
FramePerSec = pygame.time.Clock()
GAME_FONT = pygame.freetype.SysFont('rockwell', 24)
displaysurface = pygame.display.set_mode((WIDTH, HEIGHT))
pygame.display.set_caption("Red Light Green Light")
class Player(pygame.sprite.Sprite):
def __init__(self):
super().__init__()
self.surf = pygame.Surface((30, 60))
self.surf.fill((0,150,165))
self.rect = self.surf.get_rect()
self.pos = vec((25, HEIGHT- floor))
self.vel = vec(0,0)
self.acc = vec(0,Gravity)
def move(self):
if not death:
self.acc = vec(0,Gravity)
hits = pygame.sprite.spritecollide(P1 , platforms, False)
if hits:
self.pos.y = hits[0].rect.top + 1
self.vel.y = -self.vel.y*bounce
pressed_keys = pygame.key.get_pressed()
if pressed_keys[K_LEFT] or pressed_keys[K_a]:
self.acc.x = -ACC
if pressed_keys[K_RIGHT] or pressed_keys[K_d]:
self.acc.x = ACC
if pressed_keys[K_UP] or pressed_keys[K_w] or pressed_keys[K_SPACE]:
ground = HEIGHT-floor+1
if self.pos.y > ground-1 and self.pos.y < ground+1:
self.vel.y = -jump
self.acc.x += self.vel.x * FRIC
self.vel += self.acc
self.pos += self.vel + 0.5 * self.acc
if self.pos.x > WIDTH:
self.pos.x = WIDTH
if self.pos.x < 0:
self.pos.x = 0
self.rect.midbottom = self.pos
class platform(pygame.sprite.Sprite):
def __init__(self):
super().__init__()
self.surf = pygame.Surface((WIDTH, floor))
self.surf.fill((255,230,145))
self.rect = self.surf.get_rect(center = (WIDTH/2, HEIGHT - floor/2))
class lines(pygame.sprite.Sprite):
def __init__(self, line_x_pos):
super().__init__()
self.surf = pygame.Surface((10, floor))
self.surf.fill((0,0,0))
self.rect = self.surf.get_rect(center = (line_x_pos, HEIGHT-floor/2))
class timer(pygame.sprite.Sprite):
def __init__(self):
super().__init__()
h = 150
w = 50
self.surf = pygame.Surface((w, h))
self.surf.fill(GREEN)
self.rect = self.surf.get_rect(center = (WIDTH-w/2, HEIGHT-floor-h/2))
    def update(self, status):
        # use the status argument rather than the module-level Status
        if status == 'Green':
            self.surf.fill(GREEN)
        if status == 'Yellow':
            self.surf.fill(YELLOW)
        if status == 'Red':
            self.surf.fill(RED)
Ground = platform()
P1 = Player()
Timer = timer()
start_line = lines(55)
finish_line = lines(WIDTH-85)
platforms = pygame.sprite.Group()
platforms.add(Ground)
platforms.add(start_line)
platforms.add(finish_line)
all_sprites = pygame.sprite.Group()
all_sprites.add(Ground)
all_sprites.add(P1)
all_sprites.add(Timer)
all_sprites.add(start_line)
all_sprites.add(finish_line)
while True:
for event in pygame.event.get():
if event.type == QUIT:
pygame.quit()
sys.exit()
displaysurface.fill((255,255,255))
if Status == 'Green':
green_count += random.uniform(0,green_var)
if green_count > green_max:
Status = 'Yellow'
green_count = green_init
if Status == 'Yellow':
yellow_count += random.uniform(0,yellow_var)
if yellow_count > yellow_max:
Status = 'Red'
yellow_count = yellow_init
    if Status == 'Red':
if abs(P1.vel.x) < 0.1 and abs(P1.vel.y - 0.33) < 0.1:
reset_count += 1
else:
GAME_FONT.render_to(displaysurface, (WIDTH/2, HEIGHT/4),'Death!')
death = True
reset_count += 1
P1.pos = vec((25, HEIGHT- floor))
P1.vel.x = 0
if reset_count > reset_time:
death = False
reset_count = 0
Status = 'Green'
if abs(P1.vel.x) < 0.1 and abs(P1.pos.y - (HEIGHT-floor+1)) < 1:
GAME_FONT.render_to(displaysurface, (WIDTH/2, HEIGHT/4),'Safe!')
P1.move()
Timer.update(Status)
for entity in all_sprites:
displaysurface.blit(entity.surf, entity.rect)
pygame.display.update()
FramePerSec.tick(FPS)
# -
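This second version replaces the per-frame coin flip with counters: each frame `green_count` grows by `uniform(0, green_var)` and the light turns yellow once the counter passes `green_max`. With these numbers (start 50, max 100, mean increment 0.5), a green phase should last about 100 frames, roughly 1.7 s at 60 FPS. A standalone Monte Carlo sketch of that counter logic (illustrative only, not part of the game loop):

```python
import random

def frames_until_switch(init=50.0, maximum=100.0, var=1.0, seed=0):
    # mirrors: green_count += random.uniform(0, green_var) each frame,
    # switching once green_count > green_max
    rng = random.Random(seed)
    count, frames = init, 0
    while count <= maximum:
        count += rng.uniform(0, var)
        frames += 1
    return frames

trials = [frames_until_switch(seed=s) for s in range(1000)]
avg = sum(trials) / len(trials)
# mean increment is var/2 = 0.5, so roughly (100 - 50) / 0.5 = 100 frames
```

The same reasoning applies to the yellow phase: a smaller gap (80 to 100) with a smaller mean step (0.375) gives a shorter, but still predictable, warning window.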
random.uniform(5,10)
# +
import pygame
from pygame.locals import *
import sys
import random
import time
import pygame.freetype
pygame.init()
class GameEngine:
def __init__(self):
vec = pygame.math.Vector2
HEIGHT = 400
WIDTH = 1000
ACC = 2
FRIC = -0.3
FPS = 60
floor = 30
jump = 7.5
bounce = 0.5
Gravity = 0.5
GREEN = (0,200,0)
YELLOW = (240,255,0)
RED = (240,0,0)
prob = int(10000/FPS)
global death, reset_time, reset_count, Status
death = False
reset_count = 0
reset_time = 12000/FPS
Status = 'Green'
FramePerSec = pygame.time.Clock()
GAME_FONT = pygame.freetype.SysFont('rockwell', 24)
displaysurface = pygame.display.set_mode((WIDTH, HEIGHT))
pygame.display.set_caption("Red Light Green Light")
class Player(pygame.sprite.Sprite):
def __init__(self):
super().__init__()
self.surf = pygame.Surface((30, 60))
self.surf.fill((0,150,165))
self.rect = self.surf.get_rect()
self.pos = vec((10, HEIGHT- floor))
self.vel = vec(0,0)
self.acc = vec(0,Gravity)
def move(self):
if not death:
self.acc = vec(0,Gravity)
hits = pygame.sprite.spritecollide(P1 , platforms, False)
if hits:
self.pos.y = hits[0].rect.top + 1
self.vel.y = -self.vel.y*bounce
pressed_keys = pygame.key.get_pressed()
if pressed_keys[K_LEFT] or pressed_keys[K_a]:
self.acc.x = -ACC
if pressed_keys[K_RIGHT] or pressed_keys[K_d]:
self.acc.x = ACC
if pressed_keys[K_UP] or pressed_keys[K_w] or pressed_keys[K_SPACE]:
ground = HEIGHT-floor+1
if self.pos.y > ground-1 and self.pos.y < ground+1:
self.vel.y = -jump
self.acc.x += self.vel.x * FRIC
self.vel += self.acc
self.pos += self.vel + 0.5 * self.acc
if self.pos.x > WIDTH:
self.pos.x = WIDTH
if self.pos.x < 0:
self.pos.x = 0
self.rect.midbottom = self.pos
class platform(pygame.sprite.Sprite):
def __init__(self):
super().__init__()
self.surf = pygame.Surface((WIDTH, floor))
self.surf.fill((255,230,145))
self.rect = self.surf.get_rect(center = (WIDTH/2, HEIGHT - floor/2))
class timer(pygame.sprite.Sprite):
def __init__(self):
super().__init__()
h = 150
w = 50
self.surf = pygame.Surface((w, h))
self.surf.fill(GREEN)
self.rect = self.surf.get_rect(center = (WIDTH-w/2, HEIGHT-floor-h/2))
            def update(self, status):
                # use the status argument rather than the global Status
                if status == 'Green':
                    self.surf.fill(GREEN)
                if status == 'Yellow':
                    self.surf.fill(YELLOW)
                if status == 'Red':
                    self.surf.fill(RED)
Ground = platform()
P1 = Player()
Timer = timer()
platforms = pygame.sprite.Group()
platforms.add(Ground)
all_sprites = pygame.sprite.Group()
all_sprites.add(Ground)
all_sprites.add(P1)
all_sprites.add(Timer)
        # NOTE: the Starting_Screen and Running state classes are not defined
        # in this snippet, so the state list is left commented out for now.
        # states = [Starting_Screen(), Running()]
        # run is defined as a closure so it can still see the names created
        # above in __init__; a fuller refactor would store them on self.
        def run():
            global death, reset_count, Status
            running = True
            while running:
                for event in pygame.event.get():
                    if event.type == QUIT:
                        pygame.quit()
                        sys.exit()
                displaysurface.fill((255, 255, 255))
                if Status == 'Green':
                    x = random.randrange(0, prob)
                    if x < 1:
                        Status = 'Yellow'
                if Status == 'Yellow':
                    x = random.randrange(0, int(prob / 2))
                    if x < 1:
                        Status = 'Red'
                if Status == 'Red':
                    if abs(P1.vel.x) < 0.1 and abs(P1.vel.y - 0.33) < 0.1:
                        reset_count += 1
                    else:
                        GAME_FONT.render_to(displaysurface, (WIDTH / 2, HEIGHT / 4), 'Death!')
                        death = True
                        reset_count += 1
                    if reset_count > reset_time:
                        death = False
                        reset_count = 0
                        Status = 'Green'
                if abs(P1.vel.x) < 0.1 and abs(P1.pos.y - (HEIGHT - floor + 1)) < 1:
                    GAME_FONT.render_to(displaysurface, (WIDTH / 2, HEIGHT / 4), 'Safe!')
                P1.move()
                Timer.update(Status)
                for entity in all_sprites:
                    displaysurface.blit(entity.surf, entity.rect)
                pygame.display.update()
                FramePerSec.tick(FPS)

        self.run = run
| RLGL.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [py3k]
# language: python
# name: Python [py3k]
# ---
# +
import mhcflurry
import numpy
import seaborn
import logging
from matplotlib import pyplot
# %matplotlib inline
logging.basicConfig(level="DEBUG")
# -
# # Making predictions
# Note: if you haven't already, run `mhcflurry-downloads fetch` in a shell to download the trained models.
# ## Simplest way to run predictions: `mhcflurry.predict()`
help(mhcflurry.predict)
mhcflurry.predict(alleles=["HLA-A0201"], peptides=["SIINFEKL", "SIINFEQL"])
#
# ## Instantiating a model
model = mhcflurry.class1_allele_specific.load.from_allele_name("HLA-A0201")
model.predict(["SIINFEKL", "SIQNPEKP", "SYNFPEPI"])
#
# ## Instantiating a model from a custom set of models on disk
models_dir = mhcflurry.downloads.get_path("models_class1_allele_specific_single")
models_dir
# Make a Loader first
loader = mhcflurry.class1_allele_specific.load.Class1AlleleSpecificPredictorLoader(models_dir)
model = loader.from_allele_name("HLA-A0201")
model.predict(["SIINFEKL", "SIQNPEKP", "SYNFPEPI"])
#
# # Loading a `Dataset`
full_training_data = mhcflurry.dataset.Dataset.from_csv(
mhcflurry.downloads.get_path("data_combined_iedb_kim2014", "combined_human_class1_dataset.csv"))
full_training_data
# +
kim2014_full = mhcflurry.dataset.Dataset.from_csv(
mhcflurry.downloads.get_path("data_kim2014", "bdata.20130222.mhci.public.1.txt"))
kim2014_train = mhcflurry.dataset.Dataset.from_csv(
mhcflurry.downloads.get_path("data_kim2014", "bdata.2009.mhci.public.1.txt"))
kim2014_test = mhcflurry.dataset.Dataset.from_csv(
mhcflurry.downloads.get_path("data_kim2014", "bdata.2013.mhci.public.blind.1.txt"))
len(kim2014_full), len(kim2014_train), len(kim2014_test)
# -
#
# # Predicting affinities from a `Dataset`
#
model = mhcflurry.class1_allele_specific.load.from_allele_name("HLA-A0201")
model.predict(kim2014_train.get_allele("HLA-A0201").peptides)
#
# # Fit a model
help(mhcflurry.class1_allele_specific.Class1BindingPredictor)
train_data = kim2014_train.get_allele("HLA-A3301")
train_data
# We'll use the default hyper parameters here. Could also specify them as kwargs:
new_model = mhcflurry.class1_allele_specific.Class1BindingPredictor()
new_model.hyperparameters
# This will run faster if you have a GPU.
# %time new_model.fit_dataset(train_data)
#
# ## Evaluate the fit model on held-out test data
# ### Generate predictions
# +
test_data = kim2014_test.get_allele("HLA-A3301")
predictions = new_model.predict(test_data.peptides)
seaborn.set_context('notebook')
seaborn.regplot(numpy.log10(test_data.affinities), numpy.log10(predictions))
pyplot.xlim(xmin=0)
pyplot.ylim(ymin=0)
pyplot.xlabel("Measured affinity (log10 nM)")
pyplot.ylabel("Predicted affinity (log10 nM)")
pyplot.title("MHCflurry on test data")
# -
#
# ### Calculate AUC, F1, and Kendall's Tau scores
help(mhcflurry.class1_allele_specific.scoring.make_scores)
mhcflurry.class1_allele_specific.scoring.make_scores(test_data.affinities, predictions)
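`make_scores` reports AUC, F1, and Kendall's tau for the predictions. For intuition, AUC has a closed form as the Mann-Whitney statistic: the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A standalone NumPy sketch of that identity (illustrative labels and scores, not mhcflurry output):

```python
import numpy as np

def auc_mann_whitney(labels, scores):
    # AUC = P(random positive outranks random negative), ties counted as 1/2
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

auc = auc_mann_whitney([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2])
# 3 of the 4 positive/negative pairs are ranked correctly -> AUC = 0.75
```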
#
# ## Cross validation for hyperparameter selection
help(mhcflurry.class1_allele_specific.cross_validation.cross_validation_folds)
folds = mhcflurry.class1_allele_specific.cross_validation.cross_validation_folds(train_data)
folds
# Take a look at what hyperparameters are available for searching over.
mhcflurry.class1_allele_specific.train.HYPERPARAMETER_DEFAULTS.defaults
models_to_search = mhcflurry.class1_allele_specific.train.HYPERPARAMETER_DEFAULTS.models_grid(
fraction_negative=[.1],
layer_sizes=[[8], [12]])
print("Searching over %d models." % len(models_to_search))
print("First model: \n%s" % models_to_search[0])
help(mhcflurry.class1_allele_specific.train.train_across_models_and_folds)
results_df = mhcflurry.class1_allele_specific.train.train_across_models_and_folds(
folds,
models_to_search,
return_predictors=True)
results_df
# The trained predictors are in the 'predictor' column
results_df.predictor
# Which model had the best average AUC across folds?
results_df.groupby("model_num").test_auc.mean()
| examples/class1_allele_specific_models.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="mhfyHiq8wkNT" colab_type="code" colab={}
import tensorflow_datasets as tfds
import tensorflow as tf
import time
import numpy as np
import matplotlib.pyplot as plt
# + id="0tVliKqbxKbK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 416} outputId="d47b8aaf-64f7-42ee-c9d2-a9d46a924092"

examples, metadata = tfds.load('ted_hrlr_translate/pt_to_en', with_info=True,
as_supervised=True)
train_examples, val_examples = examples['train'], examples['validation']
# + id="N4syuTQgxgNp" colab_type="code" colab={}
tokenizer_en = tfds.features.text.SubwordTextEncoder.build_from_corpus(
(en.numpy() for pt, en in train_examples), target_vocab_size=2**13)
tokenizer_pt = tfds.features.text.SubwordTextEncoder.build_from_corpus(
(pt.numpy() for pt, en in train_examples), target_vocab_size=2**13)
# + id="bWXofvOhx-qc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="ca515e60-e736-4dd1-b457-14c401ccdf04"
sample_string = "Dilawar is cool."
tokenized_string = tokenizer_en.encode(sample_string)
print(f"Tokenized string is {tokenized_string}")
original_string = tokenizer_en.decode(tokenized_string)
print(f"The original string: {original_string}")
assert original_string == sample_string
# + id="GbWOSQZ9y6ok" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="f8e96200-cd7c-437f-b3f9-72faad0a1bcb"
for ts in tokenized_string:
print(f"{ts} ----> {tokenizer_en.decode([ts])}")
# + id="eEVSr-hQzEwA" colab_type="code" colab={}
BUFFER_SIZE = 20000
BATCH_SIZE = 64
# + id="HF2RxjdrzMBe" colab_type="code" colab={}
def encode(lang1, lang2):
lang1 = [tokenizer_pt.vocab_size] + tokenizer_pt.encode(
lang1.numpy()) + [tokenizer_pt.vocab_size+1]
lang2 = [tokenizer_en.vocab_size] + tokenizer_en.encode(
lang2.numpy()) + [tokenizer_en.vocab_size+1]
return lang1, lang2
# + id="9bRHzbTOza12" colab_type="code" colab={}
def tf_encode(pt, en):
result_pt, result_en = tf.py_function(encode, [pt, en], [tf.int64, tf.int64])
result_pt.set_shape([None])
result_en.set_shape([None])
return result_pt, result_en
# + id="ZaNkD_6U0BYV" colab_type="code" colab={}
MAX_LENGTH = 40
# + id="hHQgpDDE0DtP" colab_type="code" colab={}
def filter_max_length(x, y, max_length=MAX_LENGTH):
return tf.logical_and(tf.size(x) <= max_length,
tf.size(y) <= max_length)
# + id="1WrKe3jL0PI7" colab_type="code" colab={}
train_dataset = train_examples.map(tf_encode)
train_dataset = train_dataset.filter(filter_max_length)
train_dataset = train_dataset.cache()
train_dataset = train_dataset.shuffle(BUFFER_SIZE).padded_batch(BATCH_SIZE)
train_dataset = train_dataset.prefetch(tf.data.experimental.AUTOTUNE)
val_dataset = val_examples.map(tf_encode)
val_dataset = val_dataset.filter(filter_max_length).padded_batch(BATCH_SIZE)
# + id="-9JG8aVo0r2e" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 289} outputId="1e157344-3b1e-43d6-823e-606883ff5761"
pt_batch, en_batch = next(iter(val_dataset))
pt_batch, en_batch
# + id="N9VvJWMs09I9" colab_type="code" colab={}
def get_angles(pos, i, d_model):
angle_rates = 1 / np.power(10000, (2 * (i//2)) / np.float32(d_model))
return pos * angle_rates
# + id="OlG12w3n1e0t" colab_type="code" colab={}
def positional_encoding(position, d_model):
angle_rads = get_angles(np.arange(position)[:, np.newaxis],
np.arange(d_model)[np.newaxis, :],
d_model)
angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2])
angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2])
pos_encoding = angle_rads[np.newaxis, ...]
return tf.cast(pos_encoding, tf.float32)
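A quick NumPy-only sanity check of the encoding, independent of TensorFlow: columns 0 and 1 share the angle rate 10000^0 = 1, so they should reduce to sin(pos) and cos(pos) exactly:

```python
import numpy as np

def pe_numpy(position, d_model):
    # same formula as get_angles/positional_encoding above, without TensorFlow
    pos = np.arange(position)[:, np.newaxis]
    i = np.arange(d_model)[np.newaxis, :]
    angles = pos / np.power(10000, (2 * (i // 2)) / np.float32(d_model))
    angles[:, 0::2] = np.sin(angles[:, 0::2])
    angles[:, 1::2] = np.cos(angles[:, 1::2])
    return angles

pe = pe_numpy(4, 8)
# columns 0 and 1 use rate 10000**0 == 1: sin(pos) and cos(pos) respectively
assert np.allclose(pe[:, 0], np.sin(np.arange(4)))
assert np.allclose(pe[:, 1], np.cos(np.arange(4)))
```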
# + id="hdBWLV4w14We" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 300} outputId="a62a9d6f-47d7-4601-ea37-079f0576be53"
pos_encoding = positional_encoding(50, 512)
print(pos_encoding.shape)
plt.pcolormesh(pos_encoding[0], cmap='RdBu')
plt.xlabel('Depth')
plt.xlim(0, 512)
plt.ylabel('Position')
plt.colorbar()
plt.show()
# + id="hPngOeTm3lPr" colab_type="code" colab={}
def create_padding_mask(seq):
seq = tf.cast(tf.math.equal(seq, 0), tf.float32)
return seq[:, tf.newaxis, tf.newaxis, :]
# + id="lfAhg4Y937SF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 153} outputId="c4351c4e-279b-454c-89c4-2e71da05c381"
x = tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]])
create_padding_mask(x)
# + id="djis1M3y3-x8" colab_type="code" colab={}
def create_look_ahead_mask(size):
mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
return mask
# + id="OnAlti1h4k-S" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="ed7d54d2-bb05-4b11-9f56-d5c97473ccbb"
x = tf.random.uniform((1, 3))
temp = create_look_ahead_mask(x.shape[1])
temp
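The same mask can be written in plain NumPy, which makes the shape easier to inspect: `tf.linalg.band_part(x, -1, 0)` keeps the lower triangle, so `1 - band_part(ones, -1, 0)` is the strict upper triangle:

```python
import numpy as np

def look_ahead_mask_np(size):
    # 1 marks a future position to hide; equivalent to
    # 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
    return np.triu(np.ones((size, size)), k=1)

mask3 = look_ahead_mask_np(3)
# row i may attend to positions 0..i only
assert (mask3 == np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]])).all()
```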
# + id="EippL6DU4pyM" colab_type="code" colab={}
def scaled_dot_product_attention(q, k, v, mask):
matmul_qk = tf.matmul(q, k, transpose_b=True)
dk = tf.cast(tf.shape(k)[-1], tf.float32)
scaled_attention_logits = matmul_qk / tf.math.sqrt(dk)
if mask is not None:
scaled_attention_logits += (mask * -1e9)
attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1)
output = tf.matmul(attention_weights, v)
return output, attention_weights
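As a cross-check of the attention math, here is a NumPy-only version of the same formula, softmax(QKᵀ/√d_k)V, applied to the `temp_k`/`temp_v` example used in the cells that follow: a query aligned with the second key should return (almost exactly) the second value row.

```python
import numpy as np

def sdpa_np(q, k, v):
    # softmax(q @ k.T / sqrt(d_k)) @ v, no masking
    logits = q @ k.T / np.sqrt(k.shape[-1])
    logits -= logits.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

k = np.array([[10., 0, 0], [0, 10, 0], [0, 0, 10], [0, 0, 10]])
v = np.array([[1., 0], [10, 0], [100, 5], [1000, 6]])
q = np.array([[0., 10, 0]])
out, w = sdpa_np(q, k, v)
# q aligns only with the second key, so out is (almost) the second value row
```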
# + id="c4srbjao6Ll8" colab_type="code" colab={}
def print_out(q, k, v):
temp_out, temp_attn = scaled_dot_product_attention(
q, k, v, None)
print("Attention weights are:")
print(temp_attn)
print("Output is:")
print(temp_out)
# + id="GdGdD5MU6buR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="ac27a1ae-443c-4546-cca2-a9bf60795d23"
np.set_printoptions(suppress=True)
temp_k = tf.constant([[10,0,0],
[0,10,0],
[0,0,10],
[0,0,10]], dtype=tf.float32)
temp_v = tf.constant([[ 1,0],
[ 10,0],
[ 100,5],
[1000,6]], dtype=tf.float32)
temp_q = tf.constant([[0, 10, 0]], dtype=tf.float32)
print_out(temp_q, temp_k, temp_v)
# + id="3_bjxwhs6yAe" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="4501b491-b81b-43ed-a143-475e93c10e27"
temp_q = tf.constant([[0, 0, 10]], dtype=tf.float32)
print_out(temp_q, temp_k, temp_v)
# + id="yXpTRjLK7E4A" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="d4e04ce7-9afe-4b20-d313-c85de6a4563c"
temp_q = tf.constant([[10, 10, 0]], dtype=tf.float32)
print_out(temp_q, temp_k, temp_v)
# + id="oqeddEE87RAv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 187} outputId="12bba297-ee71-4f6b-c572-09e6fd9a6668"
temp_q = tf.constant([[0, 0, 10], [0, 10, 0], [10, 10, 0]], dtype=tf.float32)
print_out(temp_q, temp_k, temp_v)
# + id="xgaXofNt7Xbz" colab_type="code" colab={}
class MultiHeadAttention(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads):
super(MultiHeadAttention, self).__init__()
self.num_heads = num_heads
self.d_model = d_model
assert d_model % self.num_heads == 0
self.depth = d_model // self.num_heads
self.wq = tf.keras.layers.Dense(d_model)
self.wk = tf.keras.layers.Dense(d_model)
self.wv = tf.keras.layers.Dense(d_model)
self.dense = tf.keras.layers.Dense(d_model)
def split_heads(self, x, batch_size):
x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth))
return tf.transpose(x, perm=[0, 2, 1, 3])
def call(self, v, k, q, mask):
batch_size = tf.shape(q)[0]
q = self.wq(q)
k = self.wk(k)
v = self.wv(v)
q = self.split_heads(q, batch_size)
k = self.split_heads(k, batch_size)
v = self.split_heads(v, batch_size)
scaled_attention, attention_weights = scaled_dot_product_attention(
q, k, v, mask)
scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3])
concat_attention = tf.reshape(scaled_attention,
(batch_size, -1, self.d_model))
output = self.dense(concat_attention)
return output, attention_weights
# + id="5C2nydax9rj8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d00f47cf-7d85-42ef-8ba5-eba7437f9cc1"
temp_mha = MultiHeadAttention(d_model=512, num_heads=8)
y = tf.random.uniform((1, 60, 512))
out, attn = temp_mha(y, k=y, q=y, mask=None)
out.shape, attn.shape
# + id="3kf10RvI96wj" colab_type="code" colab={}
def point_wise_feed_forward_network(d_model, dff):
return tf.keras.Sequential([
tf.keras.layers.Dense(dff, activation='relu'),
tf.keras.layers.Dense(d_model)
])
# + id="39Ita91VAd7Z" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="25c4968c-53d4-42c3-e5c6-00c6b45d05e9"
sample_ffn = point_wise_feed_forward_network(512, 2048)
sample_ffn(tf.random.uniform((64, 50, 512))).shape
# + id="Dr7f5no9Ak02" colab_type="code" colab={}
class EncoderLayer(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads, dff, rate=0.1):
super(EncoderLayer, self).__init__()
self.mha = MultiHeadAttention(d_model, num_heads)
self.ffn = point_wise_feed_forward_network(d_model, dff)
self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.dropout1 = tf.keras.layers.Dropout(rate)
self.dropout2 = tf.keras.layers.Dropout(rate)
def call(self, x, training, mask):
attn_output, _ = self.mha(x, x, x, mask)
attn_output = self.dropout1(attn_output, training=training)
out1 = self.layernorm1(x + attn_output)
ffn_output = self.ffn(out1)
ffn_output = self.dropout2(ffn_output, training=training)
out2 = self.layernorm2(out1 + ffn_output)
return out2
# + id="e63LFVHlB2-N" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="df3f6085-f8b3-4686-ac99-dc4b3f9723e8"
sample_encoder_layer = EncoderLayer(512, 8, 2048)
sample_encoder_layer_output = sample_encoder_layer(
tf.random.uniform((64, 43, 512)), False, None)
sample_encoder_layer_output.shape
# + id="XWKLjBh_CGIS" colab_type="code" colab={}
class DecoderLayer(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads, dff, rate=0.1):
super(DecoderLayer, self).__init__()
self.mha1 = MultiHeadAttention(d_model, num_heads)
self.mha2 = MultiHeadAttention(d_model, num_heads)
self.ffn = point_wise_feed_forward_network(d_model, dff)
self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.layernorm3 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.dropout1 = tf.keras.layers.Dropout(rate)
self.dropout2 = tf.keras.layers.Dropout(rate)
self.dropout3 = tf.keras.layers.Dropout(rate)
def call(self, x, enc_output, training,
look_ahead_mask, padding_mask):
attn1, attn_weights_block1 = self.mha1(x, x, x, look_ahead_mask)
attn1 = self.dropout1(attn1, training=training)
out1 = self.layernorm1(attn1 + x)
attn2, attn_weights_block2 = self.mha2(
enc_output, enc_output, out1, padding_mask)
attn2 = self.dropout2(attn2, training=training)
out2 = self.layernorm2(attn2 + out1)
ffn_output = self.ffn(out2)
ffn_output = self.dropout3(ffn_output, training=training)
out3 = self.layernorm3(ffn_output + out2)
return out3, attn_weights_block1, attn_weights_block2
# + id="lyp7I5RcEPwi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="af7d4611-b923-4859-90f1-49df664c2b2d"
sample_decoder_layer = DecoderLayer(512, 8, 2048)
sample_decoder_layer_output, _, _ = sample_decoder_layer(
tf.random.uniform((64, 50, 512)), sample_encoder_layer_output,
False, None, None)
sample_decoder_layer_output.shape
# + id="w8G-TbQ0EqBA" colab_type="code" colab={}
class Encoder(tf.keras.layers.Layer):
def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size,
maximum_position_encoding, rate=0.1):
super(Encoder, self).__init__()
self.d_model = d_model
self.num_layers = num_layers
self.embedding = tf.keras.layers.Embedding(input_vocab_size, d_model)
self.pos_encoding = positional_encoding(maximum_position_encoding,
self.d_model)
self.enc_layers = [EncoderLayer(d_model, num_heads, dff, rate)
for _ in range(num_layers)]
self.dropout = tf.keras.layers.Dropout(rate)
def call(self, x, training, mask):
seq_len = tf.shape(x)[1]
x = self.embedding(x)
x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
x += self.pos_encoding[:, :seq_len, :]
x = self.dropout(x, training=training)
for i in range(self.num_layers):
x = self.enc_layers[i](x, training, mask)
return x
# + id="pLODH7BiH1s1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="ff3ed8fe-abb9-4005-baee-ab6be70c5075"
sample_encoder = Encoder(num_layers=2, d_model=512, num_heads=8,
dff=2048, input_vocab_size=8500,
maximum_position_encoding=10000)
temp_input = tf.random.uniform((64, 62), dtype=tf.int64, minval=0, maxval=200)
sample_encoder_output = sample_encoder(temp_input, training=False, mask=None)
print(sample_encoder_output.shape)
# + id="D8zmAhhzIPOo" colab_type="code" colab={}
class Decoder(tf.keras.layers.Layer):
def __init__(self, num_layers, d_model, num_heads, dff, target_vocab_size,
maximum_position_encoding, rate=0.1):
super(Decoder, self).__init__()
self.d_model = d_model
self.num_layers = num_layers
self.embedding = tf.keras.layers.Embedding(target_vocab_size, d_model)
self.pos_encoding = positional_encoding(maximum_position_encoding, d_model)
self.dec_layers = [DecoderLayer(d_model, num_heads, dff, rate)
for _ in range(num_layers)]
self.dropout = tf.keras.layers.Dropout(rate)
def call(self, x, enc_output, training,
look_ahead_mask, padding_mask):
seq_len = tf.shape(x)[1]
attention_weights = {}
x = self.embedding(x)
x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
x += self.pos_encoding[:, :seq_len, :]
x = self.dropout(x, training=training)
for i in range(self.num_layers):
x, block1, block2 = self.dec_layers[i](x, enc_output, training,
look_ahead_mask, padding_mask)
attention_weights[f"decoder_layer{i+1}_block1"] = block1
attention_weights[f"decoder_layer{i+1}_block2"] = block2
return x, attention_weights
# + id="H_qX3UQ0LXy-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="026b58ab-f425-4d12-c18a-8c1181881474"
sample_decoder = Decoder(num_layers=2, d_model=512, num_heads=8,
dff=2048, target_vocab_size=8000,
maximum_position_encoding=5000)
temp_input = tf.random.uniform((64, 26), dtype=tf.int64, minval=0, maxval=200)
output, attn = sample_decoder(temp_input,
enc_output=sample_encoder_output,
training=False,
look_ahead_mask=None,
padding_mask=None)
output.shape, attn["decoder_layer2_block2"].shape
# + id="GPHIpvgRMSgx" colab_type="code" colab={}
class Transformer(tf.keras.Model):
def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size,
target_vocab_size, pe_input, pe_target, rate=0.1):
super(Transformer, self).__init__()
self.encoder = Encoder(num_layers, d_model, num_heads, dff,
input_vocab_size, pe_input, rate)
self.decoder = Decoder(num_layers, d_model, num_heads, dff,
target_vocab_size, pe_target, rate)
self.final_layer = tf.keras.layers.Dense(target_vocab_size)
def call(self, inp, tar, training, enc_padding_mask,
look_ahead_mask, dec_padding_mask):
enc_output = self.encoder(inp, training, enc_padding_mask)
dec_output, attention_weights = self.decoder(
tar, enc_output, training, look_ahead_mask, dec_padding_mask)
final_output = self.final_layer(dec_output)
return final_output, attention_weights
# + id="cVKgAprANmsA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="5412b469-3ea5-4e2a-f15f-991fbec21867"
sample_transformer = Transformer(
num_layers=2, d_model=512, num_heads=2, dff=2048,
input_vocab_size=8500, target_vocab_size=8000,
pe_input=10000, pe_target=6000)
temp_input = tf.random.uniform((64, 38), dtype=tf.int64, minval=0, maxval=200)
temp_target = tf.random.uniform((64, 36), dtype=tf.int64, minval=0, maxval=200)
fn_out, _ = sample_transformer(temp_input, temp_target, training=False,
enc_padding_mask=None,
look_ahead_mask=None,
dec_padding_mask=None)
fn_out.shape
# + id="rxfocEQDOx5r" colab_type="code" colab={}
num_layers = 4
d_model = 128
dff = 512
num_heads = 8
input_vocab_size = tokenizer_pt.vocab_size + 2
target_vocab_size = tokenizer_en.vocab_size + 2
dropout_rate = 0.1
# + id="ll9olHuXPInI" colab_type="code" colab={}
class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
def __init__(self, d_model, warmup_steps=4000):
super(CustomSchedule, self).__init__()
self.d_model = d_model
self.d_model = tf.cast(self.d_model, tf.float32)
self.warmup_steps = warmup_steps
def __call__(self, step):
arg1 = tf.math.rsqrt(step)
arg2 = step * (self.warmup_steps ** -1.5)
return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)
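This schedule implements lr = d_model^{-1/2} · min(step^{-1/2}, step · warmup^{-3/2}); the two branches cross exactly at step = warmup_steps, which is where the learning rate peaks. A plain-Python spot check (d_model = 128 and warmup_steps = 4000 assumed, matching the values used here):

```python
def lr_np(step, d_model=128.0, warmup_steps=4000.0):
    # rsqrt(d_model) * min(rsqrt(step), step * warmup_steps ** -1.5)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

peak = lr_np(4000)
# linear warm-up below warmup_steps, inverse-sqrt decay above it
assert lr_np(2000) < peak
assert lr_np(8000) < peak
```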
# + id="8KhkcaSgP0AT" colab_type="code" colab={}
learning_rate = CustomSchedule(d_model)
optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98,
epsilon=1e-9)
# + id="N1Edn9tPQAzU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 296} outputId="fe4d9fc2-978d-4764-907b-514984c75bea"
temp_learning_rate_schedule = CustomSchedule(d_model)
plt.plot(temp_learning_rate_schedule(tf.range(40000, dtype=tf.float32)))
plt.ylabel("Learning Rate")
plt.xlabel("Train Step")
# + id="PKaasLdHQNsu" colab_type="code" colab={}
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction="none")
# + id="5_PH8XenQlmy" colab_type="code" colab={}
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_sum(loss_)/tf.reduce_sum(mask)
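The loss masks out padding (token id 0) so that padded positions do not dilute the average. A NumPy sketch of the same masking arithmetic, with made-up per-token losses:

```python
import numpy as np

def masked_mean_np(real, per_token_loss):
    # positions where real == 0 are padding and excluded from the average
    mask = (np.asarray(real) != 0).astype(float)
    return float((per_token_loss * mask).sum() / mask.sum())

real = np.array([5, 3, 0, 0])             # last two positions are padding
loss = np.array([2.0, 4.0, 9.0, 9.0])     # the 9.0s must be ignored
assert masked_mean_np(real, loss) == 3.0  # (2 + 4) / 2
```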
# + id="yy8DiqDeQ0ot" colab_type="code" colab={}
train_loss = tf.keras.metrics.Mean(name="train_loss")
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
name="train_accuracy")
# + id="27IV_fUvRmyG" colab_type="code" colab={}
transformer = Transformer(num_layers, d_model, num_heads, dff,
input_vocab_size, target_vocab_size,
pe_input=input_vocab_size,
pe_target=target_vocab_size,
rate=dropout_rate)
# + id="k2qPRYCbR6JO" colab_type="code" colab={}
def create_masks(inp, tar):
enc_padding_mask = create_padding_mask(inp)
dec_padding_mask = create_padding_mask(inp)
look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])
dec_target_padding_mask = create_padding_mask(tar)
combined_mask = tf.maximum(dec_target_padding_mask, look_ahead_mask)
return enc_padding_mask, combined_mask, dec_padding_mask
# + id="5NwPs48eSRz7" colab_type="code" colab={}
checkpoint_path = "./checkpoints/train"
ckpt = tf.train.Checkpoint(transformer=transformer,
optimizer=optimizer)
ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)
if ckpt_manager.latest_checkpoint:
ckpt.restore(ckpt_manager.latest_checkpoint)
print("Latest checkpoint restored!!")
# + id="w2_dZPXNS5aF" colab_type="code" colab={}
EPOCHS = 20
# + id="gc2PYXzSTF6G" colab_type="code" colab={}
train_step_signature = [
tf.TensorSpec(shape=(None, None), dtype=tf.int64),
tf.TensorSpec(shape=(None, None), dtype=tf.int64),
]
@tf.function(input_signature=train_step_signature)
def train_step(inp, tar):
tar_inp = tar[:, :-1]
tar_real = tar[:, 1:]
enc_padding_mask, combined_mask, dec_padding_mask = create_masks(inp, tar_inp)
with tf.GradientTape() as tape:
predictions, _ = transformer(inp, tar_inp,
True,
enc_padding_mask,
combined_mask,
dec_padding_mask)
loss = loss_function(tar_real, predictions)
gradients = tape.gradient(loss, transformer.trainable_variables)
optimizer.apply_gradients(zip(gradients, transformer.trainable_variables))
train_loss(loss)
train_accuracy(tar_real, predictions)
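`train_step` applies teacher forcing: the target is shifted so the decoder receives tokens up to step t and is trained to predict the token at t+1. With an illustrative id sequence:

```python
import numpy as np

tar = np.array([[1, 42, 17, 99, 2]])  # <start> ... <end> for one sequence

tar_inp = tar[:, :-1]   # fed to the decoder
tar_real = tar[:, 1:]   # what the decoder should predict at each step

print(tar_inp)   # [[ 1 42 17 99]]
print(tar_real)  # [[42 17 99  2]]
```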
# + id="bK9mFRbSUbmn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="0b3da848-aed5-4ce6-8869-5658347f54f9"
for epoch in range(EPOCHS):
start = time.time()
train_loss.reset_states()
train_accuracy.reset_states()
for (batch, (inp, tar)) in enumerate(train_dataset):
train_step(inp, tar)
if batch % 50 == 0:
print(f"Epoch {epoch+1} Batch {batch} Loss {train_loss.result():.4f} Accuracy {train_accuracy.result():.4f}")
if (epoch + 1) % 5 == 0:
ckpt_save_path = ckpt_manager.save()
print(f"Saving checkpoint for epoch {epoch+1} at {ckpt_save_path}")
print(f"Epoch {epoch+1} Loss {train_loss.result():.4f} Accuracy {train_accuracy.result():.4f}")
    print(f"Time taken for 1 epoch: {time.time() - start:.2f} secs\n")
# + id="HJdKGtDHVlAD" colab_type="code" colab={}
def evaluate(inp_sentence):
start_token = [tokenizer_pt.vocab_size]
end_token = [tokenizer_pt.vocab_size + 1]
inp_sentence = start_token + tokenizer_pt.encode(inp_sentence) + end_token
encoder_input = tf.expand_dims(inp_sentence, 0)
decoder_input = [tokenizer_en.vocab_size]
output = tf.expand_dims(decoder_input, 0)
for i in range(MAX_LENGTH):
enc_padding_mask, combined_mask, dec_padding_mask = create_masks(
encoder_input, output)
predictions, attention_weights = transformer(encoder_input,
output,
False,
enc_padding_mask,
combined_mask,
dec_padding_mask)
predictions = predictions[:, -1:, :]
predicted_id = tf.cast(tf.argmax(predictions, axis=-1), tf.int32)
if predicted_id == tokenizer_en.vocab_size+1:
return tf.squeeze(output, axis=0), attention_weights
output = tf.concat([output, predicted_id], axis=-1)
return tf.squeeze(output, axis=0), attention_weights
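`evaluate` is a greedy autoregressive loop: take the argmax over the last position's logits, append it to the output, and stop at the end token. The control flow can be exercised with a stand-in predictor (the fixed id script below is hypothetical, only there to make the loop runnable without the trained model):

```python
END = 3  # stand-in for tokenizer_en.vocab_size + 1

def fake_model(output_ids):
    # Stand-in for transformer(...): returns the next id from a fixed script.
    script = {0: 5, 1: 7, 2: END}
    step = len(output_ids) - 1
    return script.get(step, END)

output = [1]  # start token
for _ in range(10):  # stand-in for MAX_LENGTH
    predicted_id = fake_model(output)
    if predicted_id == END:
        break  # end token reached, as in evaluate()
    output.append(predicted_id)

print(output)  # [1, 5, 7]
```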
# + id="HvbIsezkbBKM" colab_type="code" colab={}
def plot_attention_weights(attention, sentence, result, layer):
fig = plt.figure(figsize=(16, 8))
sentence = tokenizer_pt.encode(sentence)
attention = tf.squeeze(attention[layer], axis=0)
for head in range(attention.shape[0]):
ax = fig.add_subplot(2, 4, head+1)
ax.matshow(attention[head][:-1, :], cmap="viridis")
fontdict = {"fontsize": 10}
ax.set_xticks(range(len(sentence)+2))
ax.set_yticks(range(len(result)))
ax.set_ylim(len(result)-1.5, -0.5)
ax.set_xticklabels(
['<start>']+[tokenizer_pt.decode([i]) for i in sentence]+['<end>'],
fontdict=fontdict, rotation=90)
ax.set_yticklabels([tokenizer_en.decode([i]) for i in result
if i < tokenizer_en.vocab_size],
fontdict=fontdict)
ax.set_xlabel(f"Head {head+1}")
plt.tight_layout()
plt.show()
# + id="KfD8pCYSdAwo" colab_type="code" colab={}
def translate(sentence, plot=''):
result, attention_weights = evaluate(sentence)
predicted_sentence = tokenizer_en.decode([i for i in result
if i < tokenizer_en.vocab_size])
print(f"Input: {sentence}")
print(f"Predicted translation: {predicted_sentence}")
if plot:
plot_attention_weights(attention_weights, sentence, result, plot)
# + id="87-GpVm5dV92" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="679bc471-b2e8-4de1-e498-7f6f98409927"
translate("este é um problema que temos que resolver.")
print("Real translation: this is a problem we have to solve .")
# + id="5xtFyMJAdZbx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="06a7b15c-5d33-4609-b812-dcd799c84104"
translate("os meus vizinhos ouviram sobre esta ideia.")
print("Real translation: and my neighboring homes heard about this idea .")
# + id="hgtb_5_wdem0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="b74229c9-bef9-4984-deb2-7da4ae4194fa"
translate("vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram.")
print("Real translation: so i 'll just share with you some stories very quickly of some magical things that have happened .")
# + id="CMwg-AAvdidP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 598} outputId="74f3e2d3-9d63-4bc7-ffba-a85e84e8fd9c"
translate("este é o primeiro livro que eu fiz.", plot='decoder_layer4_block2')
print("Real translation: this is the first book i've ever done.")
| tensorflow/text/transformer_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python3 (fastai)
# language: python
# name: fastai
# ---
# https://leetcode.com/problems/find-the-difference/submissions/
# +
def findTheDifference(s: str, t: str) -> str:
s_count = {v:s.count(v) for v in set(s)}
t_count = {v:t.count(v) for v in set(t)}
for k,v in t_count.items():
if k not in s_count or s_count[k] != v:
return k
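`str.count` rescans the whole string for every distinct character, so the dict comprehensions above cost O(n·|alphabet|). An alternative (not part of the original submission) that builds both counts in one pass with `collections.Counter`:

```python
from collections import Counter

def find_the_difference(s: str, t: str) -> str:
    # t is s shuffled plus exactly one extra character; Counter
    # subtraction leaves only that extra character.
    diff = Counter(t) - Counter(s)
    return next(iter(diff))

print(find_the_difference("abcd", "abcde"))  # e
```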
| lt_389_find_difference.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="NT4f2Kygv7iE"
# # Importation
# + id="hhSLshnwwFP1" outputId="b4db1ea2-e1e1-4338-ab22-511d5c65bd35"
# # ! pip install kaggle
# # ! mkdir ~/.kaggle
# # ! cp kaggle.json ~/.kaggle/
# # ! chmod 600 ~/.kaggle/kaggle.json
# # ! kaggle datasets download -d enzodurand/boudingboxonlyhanddataset
# # ! unzip boudingboxonlyhanddataset.zip
# + id="BSdwgPVvv7iK" outputId="9bb2e587-33b2-41cc-9220-f2625433242c"
import os
import copy
import cv2
# import wandb
import numpy as np
import pandas as pd
from tqdm import tqdm
from time import time
from sklearn import preprocessing
from matplotlib import pyplot as plt
import torchvision
from torchvision import models, transforms
from torchvision.io import read_image
import torch
import torch.nn.functional as F
from torch import nn
from torch.utils.data import Dataset
# # !pip uninstall albumentations
# # !pip install albumentations==0.4.6
import albumentations as A
from albumentations.pytorch import ToTensorV2
# + [markdown] id="1C5YIPfwzhxp"
# # GPU/TPU setup
# + id="vVQzDrNlzfI_" outputId="d2328c3e-a835-4f78-af06-fc5339afac63"
## TPU
# # !curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
# # !python pytorch-xla-env-setup.py --apt-packages libomp5 libopenblas-dev
# import torch_xla
# import torch_xla.core.xla_model as xm
# device = xm.xla_device()
# torch.set_default_tensor_type('torch.FloatTensor')
## GPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
## Weight and biases
# wandb.login()
# + id="CfM_nSBaiXcB" outputId="9e21e657-01d3-4e87-8ec7-ace6895e068f"
# !nvidia-smi
# + [markdown] id="1jRIVkpdv7iM"
# # Global variables
# + id="huwjGSaZv7iN"
INPUT_SIZE = 400
N_CLASS = 4
WHERE = "home"
# + id="e-dNYYCyz8E4"
if WHERE=="colab":
PATH_LABELS = "/content/index_label_bbox.csv"
PATH_IMG = "/content/output/output"
PATH_LABELS_VALID = "/content/index_label_bbox_validation.csv"
PATH_IMG_VALID = "/content/output_validation/output_validation"
BATCH_SIZE = 32
elif WHERE=="kaggle":
PATH_LABELS = "../input/boudingboxonlyhanddataset/index_label_bbox.csv"
PATH_IMG = "../input/boudingboxonlyhanddataset/output/output"
PATH_LABELS_VALID = "../input/boudingboxonlyhanddataset/index_label_bbox_validation.csv"
PATH_IMG_VALID = "../input/boudingboxonlyhanddataset/output_validation/output_validation"
BATCH_SIZE = 64
elif WHERE=="home":
PATH_LABELS = "../../../data_labels/bounding_box_model/done/index_label_bbox.csv"
PATH_IMG = "../../../data_labels/bounding_box_model/done/output"
PATH_LABELS_VALID = "../../../data_labels/bounding_box_model/done_validation/index_label_bbox_validation.csv"
PATH_IMG_VALID = "../../../data_labels/bounding_box_model/done_validation/output_validation"
BATCH_SIZE = 4
# + [markdown] id="nSGukie2v7iN"
# # Data functions
# + id="uwmn_pJKv7iO"
class HandGestureDataset(Dataset):
def __init__(self, annotations_file, img_dir, transform=None):
self.img_labels = pd.read_csv(annotations_file)
self.img_dir = img_dir
self.transform = transform
def __len__(self):
return len(self.img_labels)
def __getitem__(self, idx):
img_path = os.path.join(self.img_dir, os.listdir(self.img_dir)[idx])
image = read_image(img_path)
path = str("output/"+os.listdir(self.img_dir)[idx]).split("/")[0]
line = self.img_labels["index"] == str(path+"/"+os.listdir(self.img_dir)[idx])
x, y, x_end, y_end = self.img_labels.loc[line]["x"].item(),\
self.img_labels.loc[line]["y"].item(),\
self.img_labels.loc[line]["x_end"].item(),\
self.img_labels.loc[line]["y_end"].item()
x, y, x_end, y_end = x/INPUT_SIZE, y/INPUT_SIZE, x_end/INPUT_SIZE, y_end/INPUT_SIZE
image = image/255
# image = image.permute(1,2,0)
# if self.transform:
# transformed = self.transform(image=np.array(image), bboxes=[[x,y,x_end,y_end]])
# transformed_image = transformed['image']
# transformed_bboxes = transformed['bboxes']
# return transformed_image, transformed_bboxes
        if self.transform:
            # Assign the result back; previously the transformed image was discarded.
            image = self.transform(image)
        label = [x, y, x_end, y_end]
        return {"image": image, "label": label}
# + id="VQreQL2ov7iP"
def draw_predictions(image, preds):
startX, startY, endX, endY = preds
# scale the predicted bounding box coordinates based on the image
# dimensions
startX = int(startX * INPUT_SIZE)
startY = int(startY * INPUT_SIZE)
endX = int(endX * INPUT_SIZE)
endY = int(endY * INPUT_SIZE)
# print(startX, startY, endX, endY)
# draw the predicted bounding box on the image
image = image.numpy().copy()
cv2.rectangle(image, (startX, startY), (endX, endY), (0, 255, 0), 2)
# show the output image
plt.imshow(image)
plt.show()
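`draw_predictions` reverses the normalisation done in `HandGestureDataset.__getitem__` (coordinates divided by `INPUT_SIZE` there, scaled back to pixels here). The round trip in isolation:

```python
INPUT_SIZE = 400  # same value as the global defined earlier

def normalize_bbox(x, y, x_end, y_end, size=INPUT_SIZE):
    # Same scaling as __getitem__: pixel coordinates -> [0, 1]
    return (x / size, y / size, x_end / size, y_end / size)

def denormalize_bbox(x, y, x_end, y_end, size=INPUT_SIZE):
    # Same scaling as draw_predictions: [0, 1] -> integer pixel coordinates
    return tuple(int(v * size) for v in (x, y, x_end, y_end))

box = (40, 80, 200, 360)
print(normalize_bbox(*box))                     # (0.1, 0.2, 0.5, 0.9)
print(denormalize_bbox(*normalize_bbox(*box)))  # (40, 80, 200, 360)
```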
def prepare_data_vgg(data_type):
## Parameters fitting vgg/imagenet
mean=[0.485, 0.456, 0.406]
std=[0.229, 0.224, 0.225]
transformVGGTrainAlbu = A.Compose([
A.VerticalFlip(p=0.3),
A.HorizontalFlip(p=0.5),
A.Blur(p=0.3, blur_limit=5),
A.RandomBrightnessContrast(p=0.3),
A.RandomGamma(p=0.3),
A.ChannelShuffle(p=0.3),
A.Rotate(p=0.5, limit=60),
# A.Downscale(p=0.3, scale_min=0.6, scale_max=0.9),
# A.ShiftScaleRotate(p=0.3),
# A.ElasticTransform(p=0.3, border_mode=cv2.BORDER_REFLECT_101, alpha_affine=40),
# A.RGBShift(r_shift_limit=0.3, g_shift_limit=0.3, b_shift_limit=30, p=0.3),
# A.Normalize(mean=mean, std=std),
A.Resize(INPUT_SIZE, INPUT_SIZE, p=1),
ToTensorV2(),
    ], bbox_params=A.BboxParams(format='albumentations', label_fields=[]))
transformVGGValidAlbu = A.Compose([
# A.Normalize(mean=mean, std=std),
A.Resize(INPUT_SIZE, INPUT_SIZE, p=1),
ToTensorV2(),
    ], bbox_params=A.BboxParams(format='albumentations', label_fields=[]))
transformVGGTrain = torchvision.transforms.Compose([
torchvision.transforms.ToPILImage(),
torchvision.transforms.Resize(size=(INPUT_SIZE, INPUT_SIZE)),
torchvision.transforms.ToTensor(),
])
transformVGGValid = torchvision.transforms.Compose([
torchvision.transforms.ToPILImage(),
torchvision.transforms.Resize(size=(INPUT_SIZE, INPUT_SIZE)),
torchvision.transforms.ToTensor(),
])
if data_type == "custom":
## Custom dataset
# VGG_dataset_train = HandGestureDataset(PATH_LABELS, PATH_IMG, transformVGGTrainAlbu)
# VGG_dataset_valid = HandGestureDataset(PATH_LABELS_VALID, PATH_IMG_VALID, transformVGGValidAlbu)
VGG_dataset_train = HandGestureDataset(PATH_LABELS, PATH_IMG, transformVGGTrain)
VGG_dataset_valid = HandGestureDataset(PATH_LABELS_VALID, PATH_IMG_VALID, transformVGGValid)
VGG_trainloader = torch.utils.data.DataLoader(VGG_dataset_train, batch_size=BATCH_SIZE, pin_memory=True, shuffle=True)
VGG_validloader = torch.utils.data.DataLoader(VGG_dataset_valid, batch_size=BATCH_SIZE, pin_memory=True, shuffle=True)
return VGG_trainloader, VGG_validloader
# + [markdown] id="I7EYl1LCv7iQ"
# # Loading data into pytorch dataset and dataloader objects
# + id="gSLUtb3rv7iQ"
VGG_trainloader, VGG_validloader = prepare_data_vgg("custom")
# + id="Dxv-Unzwv7iR"
# for img, bbox in VGG_trainloader:
# res = []
# for e in bbox:
# res_ = []
# for elt in e:
# res_.append(elt.numpy())
# res.append(np.array(res_))
# res = np.array(res).T.squeeze()
# cpt = 0
# for i, l in zip(img, res):
# draw_predictions(i.permute(1,2,0), l)
# for img, bbox in VGG_validloader:
# res = []
# for e in bbox:
# res_ = []
# for elt in e:
# res_.append(elt.numpy())
# res.append(np.array(res_))
# res = np.array(res).T.squeeze()
# cpt = 0
# for i, l in zip(img, res):
# draw_predictions(i.permute(1,2,0), l)
# +
# for item in VGG_trainloader:
# x, y = item["image"], item["label"]
# base_img = item["image"]
# x = item["image"].to(device)
# res = []
# for e in y:
# res.append(np.array(e))
# res = np.array(res).T
# y = torch.as_tensor(res)
# y = y.to(torch.float32)
# y = y.to(device)
# for i in range(4):
# draw_predictions(base_img[i].permute(1,2,0), y.cpu()[i])
# + [markdown] id="0yNIY1tPv7iR"
# # Model functions
# + _kg_hide-input=true id="TmOsJNGTv7iS"
# def train(model, epochs, train_loader, valid_loader, learning_rate, patience, feature_extract=False):
# ## Early stopping variables
# es = EarlyStopping(patience=patience)
# terminate_training = False
# best_model_wts = copy.deepcopy(model.state_dict())
# best_loss = np.inf
# model = model.to(device)
# ## Training only the parameters where we require gradient since we are fine-tuning
# params_to_update = model.parameters()
# print("params to learn:")
# if feature_extract:
# params_to_update = []
# for name,param in model.named_parameters():
# if param.requires_grad == True:
# params_to_update.append(param)
# print("\t", name)
# else:
# for name,param in model.named_parameters():
# if param.requires_grad == True:
# print("\t", name)
# ## Setting up our optimizer
# optim = torch.optim.Adam(params_to_update, lr=learning_rate)
# ## Setting up our loss function
# loss = nn.MSELoss()
# ## Running the train loop
# print(f"running {model.name}")
# for epoch in range(epochs):
# cumloss, count = 0, 0
# model.train()
# for x,y in train_loader:
# optim.zero_grad()
# x = x.to(device)
# x = x.float()
# res = []
# for e in y:
# res_ = []
# for elt in e:
# res_.append(elt.numpy())
# res.append(np.array(res_))
# res = np.array(res).T.squeeze()
# # print("/"*20)
# # print(res)
# # print("/"*20)
# y = torch.as_tensor(res)
# y = y.to(torch.float32)
# y = y.to(device)
# yhat = model(x)
# l = loss(yhat, y)
# l.backward()
# # xm.optimizer_step(optim, barrier=True)
# optim.step()
# cumloss += l * len(x)
# count += len(x)
# print("epoch :", epoch, end="")
# loss_ = cumloss.cpu().item()/count
# # wandb.log({'train_loss': loss_})
# print(", train_loss: ", loss_, end="")
# if epoch % 1 == 0:
# model.eval()
# with torch.no_grad():
# valid_cumloss, count = 0, 0
# for x,y in valid_loader:
# x = x.to(device)
# x = x.float()
# res = []
# for e in y:
# res_ = []
# for elt in e:
# res_.append(elt.numpy())
# res.append(np.array(res_))
# res = np.array(res).T.squeeze()
# # print("ù"*20)
# # print(res)
# # print("ù"*20)
# y = torch.as_tensor(res)
# y = y.to(torch.float32)
# y = y.to(device)
# yhat = model(x)
# valid_cumloss += loss(yhat,y) * len(x)
# count += len(x)
# valid_loss_ = valid_cumloss.cpu().item()/count
# # wandb.log({'valid_loss': valid_loss_})
# print(", valid_loss: ", valid_loss_)
# ## Early stopping
# if valid_cumloss/count < best_loss:
# best_loss = valid_cumloss/count
# best_model_wts = copy.deepcopy(model.state_dict())
# if es.step(valid_cumloss.cpu().item()/count):
# terminate_training = True
# break
# if terminate_training:
# break
# print('Best val loss: {:4f}'.format(best_loss))
# ## Returns the best model
# model.load_state_dict(best_model_wts)
# return model
# def set_parameter_requires_grad(model, feature_extract):
# if feature_extract:
# for name,p in model.named_parameters():
# if "features" in name:
# p.requires_grad = False
# else:
# p.requires_grad = True
# +
def train(model, epochs, train_loader, valid_loader, learning_rate, patience, feature_extract=False):
## Early stopping variables
es = EarlyStopping(patience=patience)
terminate_training = False
best_model_wts = copy.deepcopy(model.state_dict())
best_loss = np.inf
model = model.to(device)
## Training only the parameters where we require gradient since we are fine-tuning
params_to_update = model.parameters()
print("params to learn:")
if feature_extract:
params_to_update = []
for name,param in model.named_parameters():
if param.requires_grad == True:
params_to_update.append(param)
print("\t", name)
else:
for name,param in model.named_parameters():
if param.requires_grad == True:
print("\t", name)
## Setting up our optimizer
optim = torch.optim.Adam(params_to_update, lr=learning_rate)
## Setting up our loss function
loss = nn.MSELoss()
## Running the train loop
print(f"running {model.name}")
for epoch in range(epochs):
cumloss, count = 0, 0
model.train()
for item in train_loader:
            x, y = item["image"], item["label"]
            optim.zero_grad()  # reset gradients accumulated from the previous batch
            x = x.to(device)
res = []
for e in y:
res.append(np.array(e))
res = np.array(res).T
y = torch.as_tensor(res)
y = y.to(torch.float32)
y = y.to(device)
# print(x.shape)
# print(x)
yhat = model(x)
l = loss(yhat, y)
l.backward()
# xm.optimizer_step(optim, barrier=True)
optim.step()
cumloss += l * len(x)
count += len(x)
print("epoch :", epoch, end="")
loss_ = cumloss.cpu().item()/count
# wandb.log({'train_loss': loss_})
print(", train_loss: ", loss_, end="")
if epoch % 1 == 0:
model.eval()
with torch.no_grad():
valid_cumloss, count = 0, 0
for item in valid_loader:
x, y = item["image"], item["label"]
x = x.to(device)
res = []
for e in y:
res.append(np.array(e))
res = np.array(res).T
y = torch.as_tensor(res)
y = y.to(torch.float32)
y = y.to(device)
yhat = model(x)
valid_cumloss += loss(yhat,y) * len(x)
count += len(x)
valid_loss_ = valid_cumloss.cpu().item()/count
# wandb.log({'valid_loss': valid_loss_})
print(", valid_loss: ", valid_loss_)
## Early stopping
if valid_cumloss/count < best_loss:
best_loss = valid_cumloss/count
best_model_wts = copy.deepcopy(model.state_dict())
if es.step(valid_cumloss.cpu().item()/count):
terminate_training = True
break
if terminate_training:
break
print('Best val loss: {:4f}'.format(best_loss))
## Returns the best model
model.load_state_dict(best_model_wts)
return model
def set_parameter_requires_grad(model, feature_extract):
if feature_extract:
for name,p in model.named_parameters():
if "features" in name:
p.requires_grad = False
else:
p.requires_grad = True
# + [markdown] id="LdrT-bVhv7iS"
# # Loading the model and modifying the classifier part
# + id="X_bEUNDLv7iT" outputId="74d04a24-a4b5-436c-a9d9-8cae0997a8e2"
## Loading vgg16 model pretrained on imagenet
vgg = models.vgg16(pretrained=True)
vgg.classifier = nn.Sequential(nn.Linear(25088, 4096),
nn.ReLU(),
# nn.Dropout(0.5),
nn.Linear(4096, 1024),
nn.ReLU(),
# nn.Dropout(0.5),
nn.Linear(1024, 256),
nn.ReLU(),
# nn.Dropout(0.5),
nn.Linear(256, N_CLASS),
nn.Sigmoid())
print(vgg.eval())
## Sets all the requires grad of the classifier layers to True
set_parameter_requires_grad(vgg, True)
# + [markdown] id="oiqed-DRv7iT"
# # Implementing early stopping
# + id="N4pImz6Av7iU"
class EarlyStopping(object):
def __init__(self, mode='min', min_delta=0, patience=10, percentage=False):
self.mode = mode
self.min_delta = min_delta
self.patience = patience
self.best = None
self.num_bad_epochs = 0
self.is_better = None
self._init_is_better(mode, min_delta, percentage)
if patience == 0:
self.is_better = lambda a, b: True
self.step = lambda a: False
def step(self, metrics):
if self.best is None:
self.best = metrics
return False
if np.isnan(metrics):
return True
if self.is_better(metrics, self.best):
self.num_bad_epochs = 0
self.best = metrics
else:
self.num_bad_epochs += 1
if self.num_bad_epochs >= self.patience:
return True
return False
def _init_is_better(self, mode, min_delta, percentage):
if mode not in {'min', 'max'}:
raise ValueError('mode ' + mode + ' is unknown!')
if not percentage:
if mode == 'min':
self.is_better = lambda a, best: a < best - min_delta
if mode == 'max':
self.is_better = lambda a, best: a > best + min_delta
else:
if mode == 'min':
self.is_better = lambda a, best: a < best - (
best * min_delta / 100)
if mode == 'max':
self.is_better = lambda a, best: a > best + (
best * min_delta / 100)
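The patience behaviour can be checked on a synthetic loss curve; the snippet below restates the 'min' mode of the class in condensed form so it runs on its own:

```python
class EarlyStopper:
    # Condensed 'min' mode of the EarlyStopping class above.
    def __init__(self, patience=2, min_delta=0.0):
        self.patience, self.min_delta = patience, min_delta
        self.best, self.num_bad_epochs = None, 0

    def step(self, metric):
        # Returns True when training should stop.
        if self.best is None or metric < self.best - self.min_delta:
            self.best, self.num_bad_epochs = metric, 0
            return False
        self.num_bad_epochs += 1
        return self.num_bad_epochs >= self.patience

es = EarlyStopper(patience=2)
losses = [1.0, 0.8, 0.9, 0.85, 0.95]  # improves twice, then stalls
stopped_at = next(i for i, l in enumerate(losses) if es.step(l))
print(stopped_at)  # 3: two non-improving epochs after the best loss of 0.8
```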
# + [markdown] id="tRpWlPvEv7iU"
# # Training only the modified parts of the classifier
# + id="46vOGoBkecro"
# os.environ['WANDB_NOTEBOOK_NAME'] = '4096_5e-6'
# wandb.init(project="jetson-autonomous-driving")
# + id="JXlDQQfhv7iV" outputId="9e28c8a1-b134-4f27-d9da-e440e1d4bef0"
print(len(VGG_trainloader))
print(len(VGG_validloader))
## Fine-tuning the model on our data
vgg.name = "VGG"
best_model = train(model=vgg,
epochs=1000,
train_loader=VGG_trainloader,
valid_loader=VGG_validloader,
learning_rate=5e-5,
patience=20) ## metric for earlystopping : val_loss
# + [markdown] id="9_RQMAV0v7iV"
# # Checking predictions
# + id="FDqVuSEIv7iW"
# with torch.no_grad():
# for item in VGG_trainloader:
# x, y = item["image"], item["label"]
# base_img = item["image"]
# x = item["image"].to(device)
# res = []
# for e in y:
# res.append(np.array(e))
# res = np.array(res).T
# y = torch.as_tensor(res)
# y = y.to(torch.float32)
# y = y.to(device)
# yhat = best_model(x)
# for i in range(4):
# draw_predictions(base_img[i].permute(1,2,0), yhat.cpu()[i])
with torch.no_grad():
    # The dataloader yields dicts ({"image", "label"}), not (x, y) tuples.
    for item in VGG_validloader:
        x = item["image"].to(device)
        x = x.float()
        yhat = best_model(x)
        for i in range(4):
            draw_predictions(x[i].cpu().permute(1,2,0), yhat.cpu()[i])
# + [markdown] id="yOuDvy_7v7iW"
# # Saving the model in .pth and .onnx extension
# + id="69YZ-O8Pv7iW"
PATH = "./"
torch.save(best_model.state_dict(), os.path.join(PATH,"boundingbox_vgg_last.pth"))
# from google.colab import files
# files.download(os.path.join(PATH,"boundingbox_vgg_last.pth"))
# + id="k3zZ0xCAv7iX"
# del vgg
# del best_model
# + id="fhRGHT_zv7iX"
# model = models.vgg16(pretrained=True)
# model.classifier[0] = nn.Linear(25088, 8192)
# model.classifier[3] = nn.Linear(8192, 1024)
# model.classifier[6] = nn.Linear(1024, N_CLASS)
# model.load_state_dict(torch.load(os.path.join(PATH,"vgg.pth"), map_location='cpu'))
# model.eval()
# dummy_input = torch.randn(BATCH_SIZE, 3, INPUT_SIZE, INPUT_SIZE)
# torch.onnx.export(model,
# dummy_input,
# "vgg.onnx",
# export_params=True,
# do_constant_folding=True,
# input_names = ['modelInput'],
# output_names = ['modelOutput'])
# + id="3nyDFGuUv7iX"
# + id="DLpVF-Vov7iX"
| src/main/only_bounding_box_model/vgg-bounding-box-modified.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os, glob, re, json, random
from tqdm.notebook import tqdm
def extract_text(conv):
text = " __eou__ ".join(conv['turns']) + " __eou__"
text = text.replace("\n", "")
text = text.replace("\r", "")
return text
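Each conversation becomes a single line with ` __eou__ ` (end-of-utterance) between turns and a trailing marker; `extract_text` is restated here so the example is self-contained:

```python
def extract_text(conv):
    # Join turns with the __eou__ separator and strip embedded newlines,
    # as in the function above.
    text = " __eou__ ".join(conv['turns']) + " __eou__"
    return text.replace("\n", "").replace("\r", "")

conv = {"turns": ["hi there", "hello!\nhow are you?"]}
print(extract_text(conv))
# hi there __eou__ hello!how are you? __eou__
```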
training_files = glob.glob('./data/dialogues/training/*.txt')
MAX_CONV_PER_DOMAIN = 55
out_dir = './data/reddit_50k/' # around 900 train files
train_data = []
for file in tqdm(training_files):
with open(file) as f:
for i, l in enumerate(f):
if i >= MAX_CONV_PER_DOMAIN:
break
conv = extract_text(json.loads(l))
train_data.append(conv)
len(train_data)
val_files = glob.glob('./data/dialogues/validation*/*.txt')
val_data = []
for file in tqdm(val_files):
with open(file) as f:
for j, l in enumerate(f):
if j >= MAX_CONV_PER_DOMAIN//10:
break
conv = extract_text(json.loads(l))
val_data.append(conv)
len(val_data)
random.seed(42)
random.shuffle(train_data)
random.shuffle(val_data)
test_data = val_data[1000:]
val_data = val_data[:1000]
print(f"Train: {len(train_data)}, Val: {len(val_data)}, Test: {len(test_data)}")
test_data[0]
exp_path = 'export/reddit_50k/'
try:
os.makedirs(exp_path)
except FileExistsError:
pass
with open(os.path.join(exp_path, 'train_dialogues.txt'), "w") as f:
for l in train_data:
f.write(l+"\n")
with open(os.path.join(exp_path, 'val_dialogues.txt'), "w") as f:
for l in val_data:
f.write(l+"\n")
with open(os.path.join(exp_path, 'test_dialogues.txt'), "w") as f:
for l in test_data:
f.write(l+"\n")
| convert_to_dd_format.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
instroom2016_2019 <- read.table("1e_asielaanvraag_instroom_2016_2019.csv", header = TRUE, sep = ";")
library(tidyverse)
names(instroom2016_2019)
names(instroom2016_2019) <- c("Aantalzaken", "Doorlooptijdindagen", "Jaarinstroomdatum", "Jaarstanddatum", "Maandnummerinstroomdatum", "Weeknummerinstroomdatum", "Jaaruitstroomdatum", "Maandnummeruitstroomdatum", "Weeknummeruitstroomdatum", "Zaakdefinitiefindicatiezaak", "EersteAsielaanvraagindicatiezaak", "EersteRegulieraanvraagindicatiezaak", "Binnenbuitenwettelijketermijnzaak", "Werksoortzaak", "Binnenbuitenstreeftermijnzaak", "Gevraagdekwalificatiezaak", "Geleverdekwalificatiezaak", "Afdoeningswijzezaakdefinitief", "Behandelresultaatzaakdefinitief", "Nationaliteitsubjectinstroomdatum")
str(instroom2016_2019)
instroom2016_2019$Jaarinstroomdatum <- as.factor(instroom2016_2019$Jaarinstroomdatum)
instroom2016_2019$Maandnummerinstroomdatum <- as.factor(instroom2016_2019$Maandnummerinstroomdatum)
instroom2016_2019$Weeknummerinstroomdatum <- as.factor(instroom2016_2019$Weeknummerinstroomdatum)
str(instroom2016_2019)
DefInstroom2016_2019 <- filter(instroom2016_2019, Zaakdefinitiefindicatiezaak == "J")
unique(DefInstroom2016_2019$Afdoeningswijzezaakdefinitief)
str(DefInstroom2016_2019)
DefInstroom2016_2019$Zaakdefinitiefindicatiezaak <- droplevels(DefInstroom2016_2019$Zaakdefinitiefindicatiezaak)
ggplot(filter(instroom2016_2019, Jaarinstroomdatum=="2018"), aes(x=Maandnummerinstroomdatum, y = Aantalzaken, fill= Gevraagdekwalificatiezaak)) + # Werksoortzaak Gevraagdekwalificatiezaak
geom_col()
#scale_y_log10() +
#ylab("proportion") +
# facet_wrap(~Jaarinstroomdatum)
# theme(axis.text.x=element_text(angle=90))
ggplot(instroom2016_2019, aes(x=Maandnummerinstroomdatum, y = Aantalzaken, fill= Gevraagdekwalificatiezaak)) + # Werksoortzaak Gevraagdekwalificatiezaak
geom_col() +
#scale_y_log10() +
#ylab("proportion") +
facet_wrap(~Jaarinstroomdatum)
# theme(axis.text.x=element_text(angle=90))
ggplot(DefInstroom2016_2019, aes(x=Gevraagdekwalificatiezaak, y = Aantalzaken, fill= Afdoeningswijzezaakdefinitief)) + # Werksoortzaak Gevraagdekwalificatiezaak
geom_col(position = "fill") +
#scale_y_log10() +
ylab("proportion") +
facet_wrap(~Jaarinstroomdatum) +
theme(axis.text.x=element_text(angle=90))
ggplot(filter(DefInstroom2016_2019, Jaarinstroomdatum=="2018"), aes(x=Gevraagdekwalificatiezaak, y = Aantalzaken, fill= Afdoeningswijzezaakdefinitief)) + # Werksoortzaak Gevraagdekwalificatiezaak
geom_col(position = "fill") +
ylab("proportion") +
facet_wrap(~Geleverdekwalificatiezaak) +
theme(axis.text.x=element_text(angle=90))
| InstroomAA1e_2016_2019.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Isostatic deflection in 2D
# Source: Hodgetts et al. (1998). Flexural modelling of continental lithosphere deformation: a comparison of 2D and 3D techniques, Tectonophysics, 294, 1-2, p.1-2
# These are the equations being solved:
#
# $\left(\frac{\partial^{2}}{\partial x^{2}} + \frac{\partial^{2}}{\partial y^{2}}\right) D \left( \frac{\partial^{2} w_{(x,y)}}{\partial y^{2}} + \frac{2 \partial^{2} w_{(x,y)}}{\partial x \partial y} + \frac{\partial^{2} w_{(x,y)}}{\partial x^{2}}\right) + \left( \rho_{m} - \rho \right) g w_{(x,y)} = l_{(x,y)}$
#
# This is solved using a Fourier transform solution:
#
# $W_{(u,v)} = R_{(u,v)} \cdot L_{(u,v)}$
#
# Where $W(u,v)$ is the Fourier transform of the deflections, $L(u,v)$ is the Fourier transform of the surface loads (equal to $\rho g h$), and $R(u,v)$ is a response function, defined as:
#
# $R_{(u,v)} = \frac{1}{\left( \rho_{m} - \rho \right) g + D\left(u^{2} + v^{2}\right)^{2}}$
#
# In the particular case of Curtis' Santa Cruz Mountains problem, we are interested in knowing the rock uplift that is associated with a given amount of crustal thickening. Noting that $h = t - w$, where $t$ is the thickening, we can rewrite Equation (10) as:
#
# $\left(\frac{\partial^{2}}{\partial x^{2}} + \frac{\partial^{2}}{\partial y^{2}}\right) D \left( \frac{\partial^{2} w_{(x,y)}}{\partial y^{2}} + \frac{2 \partial^{2} w_{(x,y)}}{\partial x \partial y} + \frac{\partial^{2} w_{(x,y)}}{\partial x^{2}}\right) + \rho_{m} g w_{(x,y)} = \rho g t_{(x,y)}$
#
# And Equation 12 as:
#
# $R_{(u,v)} = \frac{1}{\rho_{m} g + D\left(u^{2} + v^{2}\right)^{2}}$
#
# In this case, $l(x,y)$ now becomes the crustal thickening ($t(x,y)$).
#
# Once deflections are computed, rock uplift can be computed as $u(x,y) = t(x,y) - w(x,y)$
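The response function acts as a low-pass filter on the load: at long wavelengths it approaches full (Airy) compensation, while short-wavelength loads are carried by the plate's rigidity. A quick NumPy check using the text's demonstration values (E = 50 GPa, h = 20 km, ν = 0.25, ρₘ = 3200 kg/m³):

```python
import numpy as np

E, v, g = 50e9, 0.25, 9.8
h = 20e3           # elastic thickness, m
rho_m = 3200.0
D = E * h**3 / (12 * (1 - v**2))  # flexural rigidity

def response(k):
    # R(u, v) with k^2 = u^2 + v^2, thickening form: 1 / (rho_m g + D k^4)
    return 1.0 / (rho_m * g + D * k**4)

k_long = 2 * np.pi / 1000e3   # 1000 km wavelength
k_short = 2 * np.pi / 10e3    # 10 km wavelength
print(response(k_long) * rho_m * g)   # ~1: nearly full compensation
print(response(k_short) * rho_m * g)  # ~0: load held up by the plate
```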
import numpy as np
# %matplotlib widget
import matplotlib.pylab as plt
# We will use a model with E = 50 GPa, h = 20 km as a demonstration. Values of E = 10 GPa, h = 5 km produce far too much deflection. This is probably because this model uses thickening rather than load height (so the restoring term uses $\rho_{m}$ instead of the smaller $(\rho_{m} - \rho)$ of the original form), because the model is not periodic, and because the model does not extend forever in the out-of-plane direction. Note, however, that the fraction of Airy isostasy approaches roughly what we would like ($f\approx 0.38$) for the wavelengths of the SCM.
# +
# Define constants:
E = 50E9
g = 9.8
rho_m = 3200
rho = 2700
h = 20E3
v = 0.25
D = E*np.power(h,3) / (12*(1-np.power(v,2)))
print('D = ', D)
# +
# Define extent of plots:
bounding_box = np.array([[5.149823487603397807e+05, 4.162743473135999404e+06],
[5.592764889708703849e+05, 4.195161883133378811e+06],
[6.377426705260890303e+05, 4.087951441662845202e+06],
[5.934485303155583097e+05, 4.055533031665465795e+06]])
extent = [np.min(bounding_box[:,0]), np.max(bounding_box[:,0]), np.min(bounding_box[:,1]), np.max(bounding_box[:,1])]
# +
# Read thickening grid and define dimensions:
import pickle as p
(X, Y, UX, UY, UZ) = p.load(open('data/dislocation_safonly_nolock.p','rb'))
dx = np.mean(np.diff(X)[1,:])*1000
thickening_disloc = UZ*1E6*4*1000 # This will give us units of meters for a 4Myr model
disloc_extent = np.array([np.min(X[:]), np.max(X[:]), np.min(Y[:]), np.max(Y[:])])*1E3
# +
# Calculate wavenumbers:
wx_disloc = np.fft.fftfreq(thickening_disloc.shape[1],d=dx)*2.0*np.pi
wy_disloc = np.fft.fftfreq(thickening_disloc.shape[0],d=dx)*2.0*np.pi
[WX_disloc, WY_disloc] = np.meshgrid(wx_disloc,wy_disloc)
# +
# Build response function:
R_disloc = np.power(rho_m*g + D*np.power(np.power(WX_disloc,2)+np.power(WY_disloc,2),2),-1)
# +
# Transform thickening grid:
T_disloc = np.fft.fft2(thickening_disloc*rho*g)
# +
# Convolve and back-transform:
W_disloc = R_disloc*T_disloc
w_disloc = np.real(np.fft.ifft2(W_disloc))
# +
# Calculate rock uplift and plot:
u_disloc = thickening_disloc - w_disloc
plt.figure()
plt.title('Dislocation - Thickening (m)')
plt.imshow(thickening_disloc, extent=disloc_extent, origin='lower', vmin = 0, vmax = 2500)
plt.axis(extent)
plt.colorbar()
plt.show()
plt.figure()
plt.title('Dislocation - Deflection (m)')
plt.imshow(w_disloc, extent=disloc_extent, origin='lower', vmin = 0, vmax = 2500)
plt.axis(extent)
plt.colorbar()
plt.show()
plt.figure()
plt.title('Dislocation - Rock / Surface Uplift (m)')
plt.imshow(u_disloc, extent=disloc_extent, origin='lower', vmin = 0, vmax = 2500)
plt.axis(extent)
plt.colorbar()
plt.show()
# +
# Read irregular points for EP model and create regular grid:
xyz_ep = np.loadtxt('data/EP_UTM_surface_nodes.txt')
xy_ep = xyz_ep[:,0:2]
z_ep = xyz_ep[:,2] - 20000.0
ep_extent = [np.min(xy_ep[:,0]), np.max(xy_ep[:,0]), np.min(xy_ep[:,1]), np.max(xy_ep[:,1])]
[Xi, Yi] = np.meshgrid(np.arange(ep_extent[0],ep_extent[1],dx), np.arange(ep_extent[2],ep_extent[3],dx))
from scipy.interpolate import griddata
thickening_ep = griddata(xy_ep, z_ep, (Xi, Yi), method='cubic', fill_value=0.0)
# +
# Transform thickening grid and calculate deflections and rock uplift:
T_ep = np.fft.fft2(thickening_ep*rho*g)
wx_ep = np.fft.fftfreq(thickening_ep.shape[1],d=dx)*2.0*np.pi
wy_ep = np.fft.fftfreq(thickening_ep.shape[0],d=dx)*2.0*np.pi
[WX_ep, WY_ep] = np.meshgrid(wx_ep,wy_ep)
R_ep = np.power(rho_m*g + D*np.power(np.power(WX_ep,2)+np.power(WY_ep,2),2),-1)
W_ep = R_ep*T_ep
w_ep = np.real(np.fft.ifft2(W_ep))
u_ep = thickening_ep - w_ep
# +
# Calculate rock uplift and plot:
# %matplotlib widget
import matplotlib.pylab as plt
u_ep = thickening_ep - w_ep
plt.figure()
plt.title('Elastoplastic - Thickening (m)')
plt.imshow(thickening_ep, extent=ep_extent, origin='lower', vmin = 0, vmax = 2500)
plt.axis(ep_extent)
plt.colorbar()
plt.show()
plt.figure()
plt.title('Elastoplastic - Deflection (m)')
plt.imshow(w_ep, extent=ep_extent, origin='lower', vmin = 0, vmax = 2500)
plt.axis(ep_extent)
plt.colorbar()
plt.show()
plt.figure()
plt.title('Elastoplastic - Rock / Surface Uplift (m)')
plt.imshow(u_ep, extent=ep_extent, origin='lower', vmin = 0, vmax = 2500)
plt.axis(ep_extent)
plt.colorbar()
plt.show()
# +
# Read irregular points for E model and create regular grid:
xyz_e = np.loadtxt('data/E_UTM_surface_nodes.txt')
xy_e = xyz_e[:,0:2]
z_e = xyz_e[:,2] - 20000.0
e_extent = [np.min(xy_e[:,0]), np.max(xy_e[:,0]), np.min(xy_e[:,1]), np.max(xy_e[:,1])]
[Xi, Yi] = np.meshgrid(np.arange(e_extent[0],e_extent[1],dx), np.arange(e_extent[2],e_extent[3],dx))
from scipy.interpolate import griddata
thickening_e = griddata(xy_e, z_e, (Xi, Yi), method='cubic', fill_value=0.0)
# +
# Transform thickening grid and calculate deflections and rock uplift:
T_e = np.fft.fft2(thickening_e*rho*g)
wx_e = np.fft.fftfreq(thickening_e.shape[1],d=dx)*2.0*np.pi
wy_e = np.fft.fftfreq(thickening_e.shape[0],d=dx)*2.0*np.pi
[WX_e, WY_e] = np.meshgrid(wx_e,wy_e)
R_e = np.power(rho_m*g + D*np.power(np.power(WX_e,2)+np.power(WY_e,2),2),-1)
W_e = R_e*T_e
w_e = np.real(np.fft.ifft2(W_e))
u_e = thickening_e - w_e
# +
# Calculate rock uplift and plot:
# %matplotlib widget
import matplotlib.pylab as plt
u_e = thickening_e - w_e
plt.figure()
plt.title('Elastic - Thickening (m)')
plt.imshow(thickening_e, extent=e_extent, origin='lower', vmin = 0, vmax = 2500)
plt.axis(e_extent)
plt.colorbar()
plt.show()
plt.figure()
plt.title('Elastic - Deflection (m)')
plt.imshow(w_e, extent=e_extent, origin='lower', vmin = 0, vmax = 2500)
plt.axis(e_extent)
plt.colorbar()
plt.show()
plt.figure()
plt.title('Elastic - Rock / Surface Uplift (m)')
plt.imshow(u_e, extent=e_extent, origin='lower', vmin = 0, vmax = 2500)
plt.axis(e_extent)
plt.colorbar()
plt.show()
| IsostaticDeflection/IsostaticDeflection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import tensorflow as tf
import numpy as np
char_arr = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n',
'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
num_dic = {n:i for i, n in enumerate(char_arr)}
dic_len = len(num_dic)
print(num_dic)
seq_data = ['word', 'wood', 'deep', 'dive', 'cold', 'cool', 'load', 'love', 'kiss', 'kind']
def make_batch(seq_data):
input_batch = []
target_batch =[]
for seq in seq_data:
input = [num_dic[n] for n in seq[:-1]]
target = num_dic[seq[-1]]
input_batch.append(np.eye(dic_len)[input])
target_batch.append(target)
return input_batch, target_batch
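The `np.eye(dic_len)[input]` line above is the standard NumPy one-hot trick: indexing the identity matrix with a list of character indices returns one identity row per character. A standalone illustration:

```python
import numpy as np

num_dic = {c: i for i, c in enumerate('abcdefghijklmnopqrstuvwxyz')}
dic_len = len(num_dic)

seq = 'word'
indices = [num_dic[ch] for ch in seq[:-1]]  # 'w', 'o', 'r' -> [22, 14, 17]
one_hot = np.eye(dic_len)[indices]          # one row of the identity per character
target = num_dic[seq[-1]]                   # class index of the final character
```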
learning_rate = 0.01
n_hidden = 128
total_epoch = 30
n_step = 3
n_input = n_class = dic_len
X = tf.placeholder(tf.float32, [None, n_step, n_input])
Y = tf.placeholder(tf.int32, [None])
W = tf.Variable(tf.random_normal([n_hidden, n_class]))
b = tf.Variable(tf.random_normal([n_class]))
cell1 = tf.nn.rnn_cell.BasicLSTMCell(n_hidden)
cell1 = tf.nn.rnn_cell.DropoutWrapper(cell1, output_keep_prob = 0.5)
cell2 = tf.nn.rnn_cell.BasicLSTMCell(n_hidden)
multi_cell = tf.nn.rnn_cell.MultiRNNCell([cell1, cell2])
outputs, states = tf.nn.dynamic_rnn(multi_cell, X, dtype = tf.float32)
outputs = tf.transpose(outputs, [1, 0, 2])
outputs = outputs[-1]
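The transpose-then-index step above selects the last time step of every sequence; the same selection can be checked with plain NumPy:

```python
import numpy as np

# dynamic_rnn outputs are [batch, time, hidden]; moving time to axis 0 and
# taking index -1 keeps only the final time step of each sequence.
outputs = np.arange(2 * 3 * 4).reshape(2, 3, 4)  # batch=2, time=3, hidden=4
last = np.transpose(outputs, (1, 0, 2))[-1]      # shape (batch, hidden)
```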
model = tf.matmul(outputs, W) + b
cost = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=model, labels=Y))
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
input_batch, target_batch = make_batch(seq_data)
# +
for epoch in range(total_epoch):
_, loss = sess.run([optimizer, cost], feed_dict = {X:input_batch, Y:target_batch})
    print('epoch:', '%04d' % (epoch + 1), 'cost =', '{:.6f}'.format(loss))
print('complete')
# -
prediction = tf.cast(tf.argmax(model, 1), tf.int32)
prediction_check = tf.equal(prediction, Y)
accuracy = tf.reduce_mean(tf.cast(prediction_check, tf.float32))
input_batch, target_batch = make_batch(seq_data)
predict, accuracy_val = sess.run([prediction, accuracy], feed_dict = {X:input_batch, Y:target_batch})
# +
predict_words = []
for idx, val in enumerate(seq_data):
last_char = char_arr[predict[idx]]
predict_words.append(val[:3] + last_char)
print('\n====result====')
print('input:', [w[:3] for w in seq_data])
print('pre:', predict_words)
print('accu:', accuracy_val)
# -
| python/rnn_test3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (modnet-develop)
# language: python
# name: modnet-develop
# ---
# # MODNet 'matbench_phonons' benchmarking
# +
from collections import defaultdict
import itertools
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Markdown
from matminer.datasets import load_dataset
from pymatgen.core import Composition
from modnet.preprocessing import MODData
from modnet.models import MODNetModel
from modnet.featurizers import MODFeaturizer
from modnet.featurizers.presets import DeBreuck2020Featurizer
os.environ["CUDA_VISIBLE_DEVICES"] = "3"
# -
Markdown(filename="./README.md")
# ## Data exploration
df = load_dataset("matbench_phonons")
df.columns
# + [markdown] heading_collapsed=true
# ### Target space
# + hidden=true
df.describe()
# + hidden=true
fig, ax = plt.subplots(facecolor="w")
ax.hist(df["last phdos peak"], bins=100, density=True);
ax.set_ylabel("Frequency")
ax.set_xlabel("Last PhDOS peak")
# -
# ## Featurization and feature selection
# First, we define some convenience classes that wrap composition data in a fake structure container, and we define a composition-only featurizer preset based on `DeBreuck2020Featurizer`.
# +
PRECOMPUTED_MODDATA = "./precomputed/phonon_benchmark_moddata.pkl.gz"
if os.path.isfile(PRECOMPUTED_MODDATA):
data = MODData.load(PRECOMPUTED_MODDATA)
else:
data = MODData(
structures=df["structure"].tolist(),
targets=df["last phdos peak"].tolist(),
target_names=["last phdos peak"],
featurizer=DeBreuck2020Featurizer(n_jobs=8)
)
data.featurize()
data.feature_selection(n=-1)
data.save(PRECOMPUTED_MODDATA)
# +
#data.optimal_features=None
#data.cross_nmi = None
#data.num_classes = {"w":0}
#data.feature_selection(n=-1)
#data.save("./precomputed/phonon_benchmark_moddata_MPCNMI.pkl.gz")
# -
# ## Training
# +
try:
plot_benchmark
except NameError:
import sys
sys.path.append('..')
from modnet_matbench.utils import *
from sklearn.model_selection import KFold
from modnet.models import MODNetModel
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping
data.df_targets.rename(columns={data.target_names[0]: "w"}, inplace=True)
# [[512], [128], [32], [16]]
best_settings = {
"increase_bs":True,
"num_neurons": [[512], [128], [64], [64]],
"n_feat": 280,
"lr": 0.005,
"epochs": 800,
"act": "elu",
"batch_size": 64,
"loss": "mae",
}
results = matbench_benchmark(data, [[["w"]]], {"w": 1}, best_settings,save_folds=True)
np.mean(results['scores'])
# -
import seaborn as sns

fig, ax = plt.subplots()
sns.scatterplot(data=reg_df, x="targets", y="predictions", hue="split", palette="Dark2", ax=ax, alpha=0.5)
sns.regplot(data=reg_df, x="targets", y="predictions", ax=ax, scatter=False)
plt.xlabel("True")
plt.ylabel("Pred.")
g = sns.jointplot(data=reg_df, x="errors", y="predictions", hue="split", palette="Dark2", alpha=0.0, marginal_kws={"shade": False})
g.plot_joint(sns.scatterplot, hue=None, c="black", s=5, alpha=0.8)
g.plot_joint(sns.kdeplot, color="split", zorder=0, levels=5, alpha=0.5)
sns.kdeplot(data=reg_df, x="targets", y="predictions", hue="split", shade=False, levels=3, palette="Dark2", alpha=0.5, )
| matbench_phonons/phonons_benchmark.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Abusive-comment filtering
# +
import numpy as np
from functools import reduce
import re
import random
random.seed(2020)
def load_dataset():
    """
    Load the comment dataset; the samples are assumed to be pre-tokenized
    :return: the dataset and its class labels
    """
    # tokenized samples
    post_list = [['my', 'dog', 'has', 'flea', 'problems', 'help', 'please'],
                 ['maybe', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid'],
                 ['my', 'dalmation', 'is', 'so', 'cute', 'I', 'love', 'him'],
                 ['stop', 'posting', 'stupid', 'worthless', 'garbage'],
                 ['mr', 'licks', 'ate', 'my', 'steak', 'how', 'to', 'stop', 'him'],
                 ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']]
    # class label vector: 1 = abusive, 0 = non-abusive
    class_vector = [0, 1, 0, 1, 0, 1]
    return post_list, class_vector
def create_vocab_list(dataset):
    """
    Collect the tokenized samples into a duplicate-free vocabulary list
    :param dataset: tokenized samples
    :return: vocabulary list
    """
    # start from an empty set so duplicates are dropped
    vocab_set = set([])
    for doc in dataset:
        # take the union
        vocab_set = vocab_set | set(doc)
    return list(vocab_set)
    # return np.array(list(vocab_set))
def set_word2vec(vocab_list, input_data):
    """
    Vectorize input_data against the vocabulary vocab_list; each element of the
    resulting vector is 1 or 0 (set-of-words model)
    :type vocab_list: list
    :param vocab_list: list returned by create_vocab_list
    :param input_data: tokenized word list
    :return: document vector (word vector)
    """
    # initialize a zero vector
    word_vector = [0] * len(vocab_list)
    for word in input_data:
        if word in vocab_list:
            # if the word is in the vocabulary, set the matching element to 1
            word_vector[vocab_list.index(word)] = 1
        else:
            print(f"the word {word} is not in VocabularyList!")
    return word_vector
def train_naive_bayes(train_matrix, train_y, laplace=True):
    """
    Train the naive Bayes classifier
    :param train_matrix: training samples
    :param train_y: training sample labels
    :param laplace: apply Laplace smoothing
    :return: the conditional probability vectors for the two classes and the
             probability that a document is abusive
    """
    # number of training documents
    n_docs = len(train_matrix)
    # number of words per document vector
    n_words_per_doc = len(train_matrix[0])
    # probability that a document is abusive
    prob_abusive = sum(train_y) / float(n_docs)
    # arrays holding per-word counts for class 0 and class 1
    prob_0 = np.zeros(n_words_per_doc)
    prob_1 = np.zeros(n_words_per_doc)
    # denominators initialized to 0.0
    prob_0_denominator = 0.0
    prob_1_denominator = 0.0
    if laplace:
        # Laplace smoothing: start word counts at 1 and denominators at 2.0
        # (two classes), so no conditional probability can be exactly zero
        prob_0 = np.ones(n_words_per_doc)
        prob_1 = np.ones(n_words_per_doc)
        prob_0_denominator = 2.0
        prob_1_denominator = 2.0
    for i in range(n_docs):
        # accumulate the data needed for the abusive-class conditional
        # probabilities, i.e. P(w0|1), P(w1|1), P(w2|1), ...
        if train_y[i] == 1:
            prob_1 += train_matrix[i]
            prob_1_denominator += sum(train_matrix[i])
        # accumulate the data needed for the non-abusive-class conditional
        # probabilities, i.e. P(w0|0), P(w1|0), P(w2|0), ...
        else:
            prob_0 += train_matrix[i]
            prob_0_denominator += sum(train_matrix[i])
    # per-word probability vector for class 1 (abusive)
    prob_1_vector = prob_1 / prob_1_denominator
    # per-word probability vector for class 0
    prob_0_vector = prob_0 / prob_0_denominator
    # return the class-0 conditional probabilities, the class-1 conditional
    # probabilities, and the probability that a document is abusive
    return prob_0_vector, prob_1_vector, prob_abusive
def navie_bayes_classifer(input_vector, prob_0_vector, prob_1_vector, prob_abusive, log=True):
    """
    Naive Bayes classifier
    :param input_vector: word vector to classify
    :param prob_0_vector: class-0 probability vector
    :param prob_1_vector: class-1 probability vector
    :param prob_abusive: probability that a word vector belongs to class 1
    :param log: work in log space to avoid underflow
    :return: 0 or 1
    """
    # reduce() accumulates the product over the elements of the sequence
    prob_1 = reduce(lambda x, y: x * y, input_vector * prob_1_vector) * prob_abusive
    prob_0 = reduce(lambda x, y: x * y, input_vector * prob_0_vector) * (1.0 - prob_abusive)
    if log:
        # log turns the element-wise product into a sum: log(A * B) = log(A) + log(B),
        # so we add log(prob_abusive) instead of multiplying by it
        prob_1 = sum(input_vector * np.log(prob_1_vector)) + np.log(prob_abusive)
        prob_0 = sum(input_vector * np.log(prob_0_vector)) + np.log(1.0 - prob_abusive)
    print("prob_1:", prob_1)
    print("prob_0:", prob_0)
    if prob_1 > prob_0:
        return 1
    else:
        return 0
def test_nave_bayes(test_vocab):
    post_list, class_vector = load_dataset()
    vocab_list = create_vocab_list(post_list)
    train_matrix = []
    for post_in_doc in post_list:
        train_matrix.append((set_word2vec(vocab_list, post_in_doc)))
    prob_0_vector, prob_1_vector, prob_abusive = train_naive_bayes(train_matrix, class_vector)
    test_vector = np.array(set_word2vec(vocab_list, test_vocab))
    if navie_bayes_classifer(test_vector, prob_0_vector, prob_1_vector, prob_abusive):
        print(test_vocab, 'classified as abusive')  # run classification and print the result
    else:
        print(test_vocab, 'classified as non-abusive')
if __name__ == '__main__':
    # post_list, class_vector = load_dataset()
    # print("post_list:\n", post_list)
    #
    # vocab_list = create_vocab_list(post_list)
    # print("vocab_list:\n", vocab_list)
    # print("vocab_list.shape:", len(vocab_list))
    #
    # train_matrix = []
    # for post_in_doc in post_list:
    #     train_matrix.append((set_word2vec(vocab_list, post_in_doc)))
    # print("train_matrix:\n", train_matrix)
    # print("train_matrix.shape:", np.array(train_matrix).shape)
    #
    # # --------------------- train Naive Bayes Classifier ---------------------
    # prob_0_vector, prob_1_vector, prob_abusive = train_naive_bayes(train_matrix, class_vector)
    # print("prob_0_vector:\n", prob_0_vector)
    # print("prob_1_vector:\n", prob_1_vector)
    #
    # print("class_vector:", class_vector)
    # # prob_abusive is the fraction of abusive samples among all samples; class_vector
    # # contains 3 abusive and 3 non-abusive samples, so the probability is 0.5
    # print("prob_abusive:", prob_abusive)
    # ------------------- Naive Bayes Classifier predict ---------------------
    # Without smoothing the classifier fails: p0 and p1 both come out as 0, which is
    # clearly wrong, so we improve it with Laplace smoothing.
    # The other problem is underflow, caused by multiplying many tiny numbers;
    # taking logarithms avoids underflow and floating-point rounding errors.
    test_vocab1 = ['love', 'my', 'dalmation']
    test_nave_bayes(test_vocab1)
    test_vocab2 = ['stupid', 'garbage']
    test_nave_bayes(test_vocab2)
# -
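The two fixes mentioned in the comments above, Laplace smoothing and working in log space, can be demonstrated with toy numbers:

```python
import numpy as np

# 1) Laplace (add-one) smoothing: an unseen word no longer forces a zero
#    conditional probability that would wipe out the whole product.
counts = np.array([3, 0, 1])  # per-word counts in one class; one word unseen
smoothed = (counts + 1) / (counts.sum() + len(counts))

# 2) Log space: the product of many small probabilities underflows float64,
#    while the sum of their logs stays finite and comparable.
probs = np.full(400, 1e-3)
product = np.prod(probs)       # 1e-1200 underflows to 0.0
log_sum = np.log(probs).sum()  # finite, roughly -2763
```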
# ## Spam classification
# +
def bag_word2vec(vocab_list, input_set):
    """
    Build a bag-of-words vector against the vocabulary vocab_list
    :param vocab_list: vocabulary list returned by create_vocab_list
    :param input_set: tokenized word list
    :return: document vector (bag-of-words model)
    """
    vocab_vector = [0] * len(vocab_list)
    for word in input_set:
        if word in vocab_list:
            vocab_vector[vocab_list.index(word)] += 1
    return vocab_vector
def str_to_list(text):
    """
    Take a long string and parse it into a list of token strings
    :param text: the input string
    :return: list of token strings
    """
    # split on any non-word character, i.e. anything that is not a letter or digit
    list_of_tokens = re.split(r'\W+', text)
    return [token.lower() for token in list_of_tokens if len(token) > 2]
def spam_classifier():
    """
    Spam email classification
    ham: non-spam email; spam: spam email
    :return:
    """
    rootdir = 'D:/Github/ML-Algorithm-Source-Code/'
    spam_filepath = rootdir + 'dataset/email/spam/'
    ham_filepath = rootdir + 'dataset/email/ham/'
    doc_list = []
    class_list = []
    full_text = []
    # iterate over the 25 txt files of each class
    for i in range(1, 26):
        # read each spam email and convert the string into a token list
        word_list = str_to_list(open(spam_filepath + '%d.txt' % i, 'r').read())
        doc_list.append(word_list)
        full_text.append(word_list)
        class_list.append(1)
        word_list = str_to_list(open(ham_filepath + '%d.txt' % i, 'r').read())
        doc_list.append(word_list)
        full_text.append(word_list)
        class_list.append(0)
    # build the vocabulary (no duplicates)
    vocab_list = create_vocab_list(doc_list)
    dataset = list(range(50))
    test_x = []
    # out of the 50 emails, randomly pick 40 for training and 10 for testing
    # randomly select 10 indices to build the test set
    for i in range(10):
        rand_index = int(random.uniform(0, len(dataset)))
        test_x.append(dataset[rand_index])
        del(dataset[rand_index])
    train_x = []
    train_y = []
    # iterate over the training set
    for doc_index in dataset:
        # append the generated word vector to the training matrix
        train_x.append(set_word2vec(vocab_list, doc_list[doc_index]))
        # append the class label to the training label vector
        train_y.append(class_list[doc_index])
    # train the naive Bayes model
    prob_0_vector, prob_1_vector, prob_spam = train_naive_bayes(np.array(train_x), np.array(train_y))
    # misclassification counter
    error_count = 0
    # iterate over the test set
    for doc_index in test_x:
        word_vector = set_word2vec(vocab_list, doc_list[doc_index])
        if navie_bayes_classifer(np.array(word_vector), prob_0_vector, prob_1_vector, prob_spam) != class_list[doc_index]:
            error_count += 1
            print("misclassified test document:", doc_list[doc_index])
    print("error rate: %.2f%%" % (float(error_count) / len(test_x) * 100))
if __name__ == '__main__':
    spam_classifier()
# -
# ## Spam classification with sklearn
# https://scikit-learn.org/dev/modules/generated/sklearn.naive_bayes.MultinomialNB.html
# +
from sklearn.naive_bayes import MultinomialNB
def spam_classifier(sklearn=True):
    """
    Spam email classification
    ham: non-spam email; spam: spam email
    :param sklearn: also evaluate with the sklearn API
    """
    rootdir = 'D:/Github/ML-Algorithm-Source-Code/'
    spam_filepath = rootdir + 'dataset/email/spam/'
    ham_filepath = rootdir + 'dataset/email/ham/'
    doc_list = []
    class_list = []
    full_text = []
    # iterate over the 25 txt files of each class
    for i in range(1, 26):
        # read each spam email and convert the string into a token list
        word_list = str_to_list(open(spam_filepath + '%d.txt' % i, 'r').read())
        doc_list.append(word_list)
        full_text.append(word_list)
        class_list.append(1)
        word_list = str_to_list(open(ham_filepath + '%d.txt' % i, 'r').read())
        doc_list.append(word_list)
        full_text.append(word_list)
        class_list.append(0)
    # build the vocabulary (no duplicates)
    vocab_list = create_vocab_list(doc_list)
    dataset = list(range(50))
    test_x = []
    # out of the 50 emails, randomly pick 40 for training and 10 for testing
    # randomly select 10 indices to build the test set
    for i in range(10):
        rand_index = int(random.uniform(0, len(dataset)))
        test_x.append(dataset[rand_index])
        del(dataset[rand_index])
    train_x = []
    train_y = []
    # iterate over the training set
    for doc_index in dataset:
        # append the generated word vector to the training matrix
        train_x.append(set_word2vec(vocab_list, doc_list[doc_index]))
        # append the class label to the training label vector
        train_y.append(class_list[doc_index])
    # train the naive Bayes model
    prob_0_vector, prob_1_vector, prob_spam = train_naive_bayes(np.array(train_x), np.array(train_y))
    # correct-classification counter
    true_count = 0
    # iterate over the test set
    for doc_index in test_x:
        word_vector = set_word2vec(vocab_list, doc_list[doc_index])
        if navie_bayes_classifer(np.array(word_vector), prob_0_vector, prob_1_vector, prob_spam) == class_list[doc_index]:
            true_count += 1
        else:
            print("error test set:", doc_list[doc_index])
    print('-' * 32)
    print("Self NB test acc:%.2f%%" % ((float(true_count) / len(test_x)) * 100))
    if sklearn:
        test_x = []
        test_y = []
        # randomly select 10 more documents to build a test set for sklearn
        for i in range(10):
            rand_index = int(random.uniform(0, len(dataset)))
            doc_index = dataset[rand_index]
            test_x.append(set_word2vec(vocab_list, doc_list[doc_index]))
            test_y.append(class_list[doc_index])
            del (dataset[rand_index])
        clf = MultinomialNB()
        clf.fit(np.array(train_x), np.array(train_y).ravel())
        # clf_pred = clf.predict(np.array(test_x))
        test_acc = clf.score(np.array(test_x), np.array(test_y).ravel())
        print("sklearn NB test acc: ", test_acc)
if __name__ == '__main__':
    spam_classifier()
# -
# Reference:
# > https://cuijiahua.com/blog/2017/11/ml_5_bayes_2.html
| SupervisedLearning/04. NaiveBayes/SpamClassification_NaiveBayes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Create metadata
#
# This notebook creates a metadata.json file based on the EnergyPlus eplusout.csv file.
# ## Setup
import requests
import csv
import json
# ## Get csv file from figshare
csv_download_url='https://figshare.com/ndownloader/files/35934071?private_link=464885898d0041bfa8fd'
response=requests.get(csv_download_url)
csv_text=response.text
csv_text.split('\n')[0]
# ## Get header row
csv_reader=csv.reader(csv_text.splitlines())
header_row=next(csv_reader)
header_row=[x.strip() for x in header_row]
header_row
# ## Function to create column descriptions
# +
base_url=r"https://bigladdersoftware.com/epx/docs/9-6/input-output-reference/"
reference_dict={
"Site Outdoor Air Drybulb Temperature": \
"group-location-climate-weather-file-access.html#site-outdoor-air-drybulb-temperature-c",
"Zone Mean Radiant Temperature": \
"group-thermal-zone-description-geometry.html#zone-mean-radiant-temperature-c-1",
"Site Total Sky Cover": \
"group-location-climate-weather-file-access.html#site-total-sky-cover",
"Site Opaque Sky Cover": \
"group-location-climate-weather-file-access.html#site-opaque-sky-cover",
"Zone Mean Air Temperature": \
"group-thermal-zone-description-geometry.html#zone-mean-air-temperature-c-1",
"Zone Air Heat Balance Surface Convection Rate": \
"group-thermal-zone-description-geometry.html#zone-air-heat-balance-surface-convection-rate-w",
"Zone Air Heat Balance Air Energy Storage Rate": \
"group-thermal-zone-description-geometry.html#zone-air-heat-balance-air-energy-storage-rate-w",
"Site Daylight Saving Time Status": \
"group-location-climate-weather-file-access.html#site-daylight-saving-time-status",
"Site Day Type Index": \
"group-location-climate-weather-file-access.html#site-day-type-index",
"Zone Total Internal Latent Gain Energy": \
"group-thermal-zone-description-geometry.html#zone-total-internal-latent-gain-energy-j",
"Other Equipment Total Heating Energy": \
"group-internal-gains-people-lights-other.html#outputs-5-004",
"Surface Inside Face Temperature": \
"group-thermal-zone-description-geometry.html#surface-inside-face-temperature-c",
"Surface Outside Face Temperature": \
"group-thermal-zone-description-geometry.html#surface-outside-face-temperature-c",
"Surface Inside Face Convection Heat Transfer Coefficient": \
"group-thermal-zone-description-geometry.html#surface-inside-face-convection-heat-transfer-coefficient-wm2-k",
"Surface Outside Face Convection Heat Transfer Coefficient": \
"group-thermal-zone-description-geometry.html#surface-outside-face-convection-heat-transfer-coefficient-wm2-k"
}
qudt_dict={
#"": "http://qudt.org/vocab/unit/FRACTION",
"C": "http://qudt.org/vocab/unit/DEG_C",
"W": "http://qudt.org/vocab/unit/W-PER-M",
"J": "http://qudt.org/vocab/unit/J",
"W/m2-K": "http://qudt.org/vocab/unit/W-PER-M2-K"
}
time_interval_dict={
"Hourly": "H1",
"Daily": "D1",
"Monthly": "M1"
}
def create_column_description(header):
""
d={
"@type": "Column",
"titles": header,
}
variable=header.split('[')[0].strip()
try:
units=header.split('[')[1].split(']')[0].strip()
except IndexError:
units=None
try:
time_interval=header.split('(')[1].split(')')[0].strip()
except IndexError:
time_interval=None
# dc:description
d["dc:description"]= header
# schema:variableMeasured
d['schema:variableMeasured']=variable
# schema:unitText
if units:
d['schema:unitText']=units
# schema:duration
if time_interval:
d['schema:duration']=time_interval_dict[time_interval]
# dc:reference
reference_url=None
for k,v in reference_dict.items():
if variable.endswith(k):
reference_url=v
if reference_url:
d['dc:references']={"@id":f"{base_url}{reference_url}"}
else:
        if variable is not None:
print(variable)
# http://purl.org/linked-data/sdmx/2009/attribute#unitMeasure
qudt_url=qudt_dict.get(units,None)
if qudt_url:
d['http://purl.org/linked-data/sdmx/2009/attribute#unitMeasure']={"@id":f"{qudt_url}"}
else:
print(units)
# datatype
if variable=='Date/Time':
d['datatype']='string'
else:
d['datatype']='number'
# other comments
if variable=='Date/Time':
        d['rdfs:comment']='The Date/Time column contains a non-standard date and time format which does not match any of the CSVW data format options.'
return d
# test below
header='Environment:Site Outdoor Air Drybulb Temperature [C](Hourly)'
create_column_description(header)
# -
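For reference, the parsing convention the function relies on is `<variable> [<units>](<interval>)`; the split logic can be checked standalone:

```python
# Standalone check of the header-parsing convention used above.
header = 'Environment:Site Outdoor Air Drybulb Temperature [C](Hourly)'

variable = header.split('[')[0].strip()                 # text before the units
units = header.split('[')[1].split(']')[0].strip()      # text inside [...]
interval = header.split('(')[1].split(')')[0].strip()   # text inside (...)
```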
# ## Create & save metadata dict
d={
"@context": "http://www.w3.org/ns/csvw",
"@type": "Table",
"url": "eplusout.csv",
"dc:title": "EnergyPlus simulation test",
"dc:description": "An EnergyPlus simulation of an office room in Sir Frank Gibb building, Loughborough University",
"dc:creator": "ABCE Open Research Team",
"tableSchema": {
"@type": "Schema",
"columns": [create_column_description(header) for header in header_row]
}
}
with open('eplusout.csv-metadata.json','w') as f:
json.dump(d, f, indent=4)
d
| csv-on-the-web-working-with-energyplus-results/create_metadata.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# con_contact
# Authors: <NAME>, <NAME>
# Example of how to check the effects of contact tracing
# -
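Before running the full simulation, the qualitative effect being tested can be sketched with a toy model. This is not the `contagion` package's API, and the scaling assumption below (traced contacts are isolated, so the effective reproduction number scales with the untracked fraction) is ours:

```python
import numpy as np

def outbreak_curve(r0, tracked_fraction, generations=10, seed_cases=10):
    # Assumed toy model: r_eff shrinks with the tracked fraction
    r_eff = r0 * (1.0 - tracked_fraction)
    return seed_cases * r_eff ** np.arange(generations)

low_tracing = outbreak_curve(1.5, 0.1)   # r_eff = 1.35, still growing
high_tracing = outbreak_curve(1.5, 0.6)  # r_eff = 0.6, dying out
```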
# General imports
import numpy as np
import matplotlib.pyplot as plt
import sys
# Adding path to module
sys.path.append("../")
# picture path
PICS = '../pics/'
# Module imports
from contagion import Contagion, config
# +
# The fractions of interest
tracking_fractions = [0.1,0.2,0.4,0.6]
config["population"]["population size"] = 1000
config["population"]["average social circle"] = 40
config["population"]['re-use population'] = False
config["infection"]["infected"] = 10
infections = []
infectious = []
susceptible = []
for tracked_fraction in tracking_fractions:
# Setting additional stuff
config["measures"]['type'] = 'contact_tracing'
config["measures"]['tracked fraction'] = tracked_fraction
# Creating a fourth_day object
contagion = Contagion()
# Running the simulations
contagion.sim()
# Storing results
infections.append(np.diff(contagion.statistics['is_incubation']))
infectious.append(contagion.statistics['is_infectious'])
# -
# Plotting standards
std_size = 10.
fontsize = 20.
lw=3.
h_length=1.
# from matplotlib import rc
# rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})
# rc('text', usetex=True)
# Infections per time step
figure, (ax1, ax2, ax3) = plt.subplots(3, 1 ,figsize=(std_size, std_size * 6. / 8.))
colors = [
'#7b3294',
'#a1dab4',
'#41b6c4',
'#225ea8']
high_x = 100
# New infections
for i, tracked_fraction in enumerate(tracking_fractions):
ax1.plot(contagion.t[1:], infections[i], color=colors[i],
lw=lw, label='Tracked: %.1f Percent' %(tracked_fraction*100.))
ax1.set_xlim(1e0, high_x)
ax1.set_ylim(0., 200)
ax1.set_xscale('linear')
ax1.set_yscale('linear')
# ax1.set_xlabel(r't [Days]', fontsize=fontsize)
ax1.set_ylabel(r'New infections', fontsize=fontsize)
ax1.tick_params(axis = 'both', which = 'major', labelsize=fontsize, direction='in')
ax1.tick_params(axis = 'both', which = 'minor', labelsize=fontsize, direction='in')
h, l = ax1.get_legend_handles_labels()
lgd1 = ax1.legend(h,l, loc=9, bbox_to_anchor=(0.5, +1.6),
ncol=2, fontsize=fontsize, handlelength=h_length,
fancybox=True, frameon=False)
ax1.add_artist(lgd1)
ax1.grid(True)
# Infection total
for i, tracked_fraction in enumerate(tracking_fractions):
ax2.plot(contagion.t, infectious[i], color=colors[i],
lw=lw)
ax2.set_xlim(1e0, high_x)
ax2.set_ylim(1e0, 1000)
ax2.grid(True)
ax2.set_xscale('linear')
ax2.set_yscale('linear')
ax2.set_xlabel(r't [Days]', fontsize=fontsize)
ax2.set_ylabel(r'Infected', fontsize=fontsize)
ax2.tick_params(axis = 'both', which = 'major', labelsize=fontsize, direction='in')
ax2.tick_params(axis = 'both', which = 'minor', labelsize=fontsize, direction='in')
# Healthy
for i, tracked_fraction in enumerate(tracking_fractions):
ax3.plot(contagion.t,
config["population"]['population size'] - np.cumsum(infectious[i]),
lw=lw, color=colors[i],)
ax3.set_xlim(1., high_x)
ax3.set_ylim(0., 1000)
ax3.set_xscale('linear')
ax3.set_yscale('linear')
ax3.set_xlabel(r't [Days]', fontsize=fontsize)
ax3.set_ylabel(r'Susceptible', fontsize=fontsize)
ax3.tick_params(axis = 'both', which = 'major', labelsize=fontsize, direction='in')
ax3.tick_params(axis = 'both', which = 'minor', labelsize=fontsize, direction='in')
ax3.grid(True)
plt.show()
figure.savefig(PICS + "Contagion_Contact_Tracing.png",
bbox_inches='tight')
| examples/con_contact.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + slideshow={"slide_type": "notes"}
from IPython.core.display import HTML
HTML("<style>.container { width:95% !important; }</style>")
# + [markdown] slideshow={"slide_type": "slide"}
# # Steepest descent and Newton's method
# -
# ## Let us define the same function as on the previous lesson for testing
def f_simple(x):
return (x[0] - 10.0)**2 + (x[1] + 5.0)**2+x[0]**2
# + [markdown] slideshow={"slide_type": "slide"}
# ## Automatic differentiation in Python
# -
# Import automatic differentiation package for Python
# Needs to be installed typing
# ```
# pip install ad
# ```
import ad
# You can ask for the gradient and Hessian using the `ad.gh` function. Let us do that for the function `f_simple` that we defined.
# + slideshow={"slide_type": "-"}
grad_f, hess_f = ad.gh(f_simple)
# -
print "At the point (1,2) gradient is ", grad_f([1,2]), " and hessian is ",hess_f([1,2])
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Let us visualize the gradient
# -
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
from pylab import meshgrid
def visualize_gradient(f,point,x_lim,y_lim):
grad_point = np.array(ad.gh(f)[0](point))
grad_point = grad_point/np.linalg.norm(grad_point)
X,Y,Z = point[0],point[1],f(point)
U,V,W = grad_point[0],grad_point[1],0
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
x = np.arange(x_lim[0],x_lim[1],0.1)
y = np.arange(y_lim[0],y_lim[1],0.1)
X2,Y2 = meshgrid(x, y) # grid of point
Z2 = [f([x,y]) for (x,y) in zip (X2,Y2)] # evaluation of the function on the grid
surf = ax.plot_surface(X2, Y2, Z2,alpha=0.5)
ax.quiver(X,Y,Z,U,V,W,color='red',linewidth=1.5)
return plt
visualize_gradient(f_simple,[1,-2],[0,10],[-10,0]).show()
visualize_gradient(lambda x:ad.gh(f_simple)[0](x)[0],[1,-2],[0,10],[-10,0]).show()
# + [markdown] slideshow={"slide_type": "subslide"}
# With the lambda function we can easily visualize gradients of various functions
# -
import math
visualize_gradient(lambda x:3*x[0]+x[1],[1,1],[0,2],[0,2]).show()
# + [markdown] slideshow={"slide_type": "slide"}
# ## Base algorithm for the steepest descent and Newton's algorithms
# **Input:** function $f$ to be optimized, starting point $x_0$, step length rule $\alpha$, stopping rule $stop$
# **Output:** A solution $x^*$ that is close to a locally optimal solution
# ```
# set f_old as a big number and f_new as f(x0)
# while a stopping criterion has not been met:
# f_old = f_new
# determine search direction d_h according to the method
# determine the step length alpha
# set x = x + alpha *d_h
# f_new = f(x)
# return x
# ```
# -
# The way to determine search direction distinguishes steepest descent algorithm and the Newton algorithm. Different stopping rules and step sizes can be mixed and matched with both algorithms.
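A hedged sketch of that shared loop, with the search direction passed in as a plug-in function; `f` and `grad` below restate `f_simple` and its hand-computed gradient, and the helper names are ours:

```python
import numpy as np

def descent(f, direction, x0, step=0.2, precision=1e-6, max_iter=1000):
    # Shared base loop: only the search direction differs between methods.
    x = np.array(x0, dtype=float)
    f_old, f_new = float('inf'), f(x)
    for _ in range(max_iter):
        if abs(f_old - f_new) <= precision:
            break
        f_old = f_new
        x = x + step * direction(x)
        f_new = f(x)
    return x, f_new

f = lambda x: (x[0] - 10.0)**2 + (x[1] + 5.0)**2 + x[0]**2
grad = lambda x: np.array([2.0*(x[0] - 10.0) + 2.0*x[0], 2.0*(x[1] + 5.0)])

# Steepest descent plugs in the negative gradient as the direction.
x_star, f_star = descent(f, lambda x: -grad(x), [2.0, -10.0])
```

Newton's method would plug in a different `direction` (the Hessian-preconditioned negative gradient) while reusing the same loop.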
# + [markdown] slideshow={"slide_type": "slide"}
# ## Steepest Descent algorithm for unconstrained optimization
# -
# In the steepest descent algorithm, the search direction is determined by the negative of the gradient $-\nabla f(x)$.
# ### Code in Python
# Let us use a simple stopping rule, where we stop when the change is not bigger than precision and we have a fixed step size.
import numpy as np
import ad
def steepest_descent(f,start,step,precision):
f_old = float('Inf')
x = np.array(start)
steps = []
f_new = f(x)
while abs(f_old-f_new)>precision:
f_old = f_new
d = -np.array(ad.gh(f)[0](x))
x = x+d*step
f_new = f(x)
steps.append(list(x))
return x,f_new,steps
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Solve the problem using the Python function
# -
start = [2.0,-10.0]
(x_value,f_value,steps) = steepest_descent(f_simple,start,0.2,0.0001)
print "Optimal solution is ",x_value
# Plot the steps of solving
# +
import matplotlib.pyplot as plt
def plot_2d_steps(steps,start):
myvec = np.array([start]+steps).transpose()
plt.plot(myvec[0,],myvec[1,],'ro')
for label,x,y in zip([str(i) for i in range(len(steps)+1)],myvec[0,],myvec[1,]):
plt.annotate(label,xy = (x, y))
return plt
# -
plot_2d_steps(steps,start).show()
# + [markdown] slideshow={"slide_type": "slide"}
# ## Newton's method
# -
# Newton's method is based on setting the search direction to $-[Hf(x)]^{-1}\nabla f(x)$, where $Hf(x)$ is the Hessian of $f$ at $x$.
# In the one-dimensional case it is easy to see why, since by the Taylor series
# $$f(x_n+\Delta x)\approx f(x_n)+f'(x_n)\Delta x+\frac12f''(x_n)\Delta x^2.$$
# We want to choose $\Delta x$ so that this approximation is minimized and, thus, we solve the equation that sets its derivative with respect to $\Delta x$ equal to zero:
#
# $$ 0 = \frac{d}{d\Delta x} \left(f(x_n)+f'(x_n)\Delta x+\frac 1 2 f''(x_n) \Delta x^2\right) = f'(x_n)+f''(x_n) \Delta x.$$
#
# The solution of the above equation is $\Delta x=-f'(x_n)/f''(x_n)$. Thus, the next approximation of the minimizer is $x_{n+1}=x_n-f''(x_n)^{-1}f'(x_n)$.
#
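# As a sanity check of the derivation, here is a minimal one-dimensional Newton iteration (independent of the automatic-differentiation code used in this notebook). For a quadratic, $f''$ is constant and a single Newton step lands exactly on the minimizer:

```python
# One-dimensional Newton's method: x <- x - f'(x)/f''(x).
# For f(x) = x**2 - 2*x (minimum at x = 1), f'(x) = 2x - 2 and f''(x) = 2,
# so one Newton step from any starting point reaches the minimum exactly.
def newton_1d(fprime, fprime2, x, iters=5):
    for _ in range(iters):
        x = x - fprime(x) / fprime2(x)
    return x

x_min = newton_1d(lambda x: 2 * x - 2, lambda x: 2.0, x=10.0)
print(x_min)  # 1.0
```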
# + slideshow={"slide_type": "subslide"}
def newton(f,start,step,precision):
f_old = float('Inf')
x = np.array(start)
steps = []
f_new = f(x)
while abs(f_old-f_new)>precision:
f_old = f_new
H_inv = np.linalg.inv(np.matrix(ad.gh(f)[1](x)))
d = (-H_inv*(np.matrix(ad.gh(f)[0](x)).transpose())).transpose()
#Change the type from np.matrix to np.array so that we can use it in our function
x = np.array(x+d*step)[0]
f_new = f(x)
steps.append(list(x))
return x,f_new,steps
# -
start = [2.0,-10.0]
(x_value,f_value,steps) = newton(f_simple,start,0.5,0.01)
print("Optimal solution is", x_value)
plot_2d_steps(steps,start).show()
| Lecture 4, Steepest descent and Newton's method for unrestricted optimization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Regression and Other Stories: Sex Ratio
import arviz as az
from bambi import Model, Prior
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
import statsmodels.formula.api as smf
# ### Data
x = np.arange(-2,3,1)
y = [50, 44, 50, 47, 56]
sexratio = pd.DataFrame(dict(x=x, y=y))
sexratio
# ### Informative priors
theta_hat_prior = 0
se_prior = 0.25
theta_hat_data = 8
se_data = 3
theta_hat_bayes = (theta_hat_prior/se_prior**2 + theta_hat_data/se_data**2)/(1/se_prior**2 + 1/se_data**2)
se_bayes = np.sqrt(1/(1/se_prior**2 + 1/se_data**2))
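# The two lines above implement the standard precision-weighted combination of a normal prior with a normal data estimate. A small self-contained check of that formula:

```python
import numpy as np

def combine_normal(theta_prior, se_prior, theta_data, se_data):
    # Posterior mean is a precision-weighted average of prior and data;
    # posterior variance is the inverse of the summed precisions.
    w_prior, w_data = 1 / se_prior**2, 1 / se_data**2
    theta = (theta_prior * w_prior + theta_data * w_data) / (w_prior + w_data)
    se = np.sqrt(1 / (w_prior + w_data))
    return theta, se

theta, se = combine_normal(0, 0.25, 8, 3)
print(round(theta, 3), round(se, 3))  # prints: 0.055 0.249
```

# With the values used here, the tight prior (se 0.25) dominates the noisy data estimate (se 3), pulling the posterior mean close to 0.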
# ### Least Squares Regression
results = smf.ols('y ~ x', data=sexratio).fit()
results.summary()
# +
# TODO: Change the plot from points to years
fig, ax = plt.subplots()
a_hat, b_hat = results.params
# Generate x range
x_domain = np.linspace(sexratio["x"].min(), sexratio["x"].max(), 100)
# Plot Line
ax.plot(x_domain, a_hat+b_hat*x_domain)
# Add formula
# There seems to be no easy way to get stderr so we omit it
x_midpoint = x_domain.mean()
ax.text(x_midpoint, a_hat+b_hat*x_midpoint,
f"y = {np.round(a_hat, 2)} + {np.round(b_hat, 2)} * x");
# Add scatter plot
sexratio.plot(kind="scatter", x="x", y="y", ax=ax)
# axis labels for the sex-ratio data (the previous labels were carried over from the elections example)
ax.set_xlabel("Attractiveness of parents")
ax.set_ylabel("Percentage of girl births");
# -
# ### Bayesian regression with weakly informative prior
model = Model(sexratio)
fit_default = model.fit('y ~ x', samples=1000, chains=4)
func_dict = {"Median": np.median,
"MAD_SD":stats.median_abs_deviation,
}
coefs = az.summary(fit_default, stat_funcs=func_dict, extend=False, round_to=2)
coefs
# ### Bayesian regression with informative prior
# +
model = Model(sexratio)
slope_prior = Prior('Normal', mu=0., sigma=.2)
intercept_prior = Prior('Normal', mu=48.8, sigma=.5)
priors={"x":slope_prior, "Intercept":intercept_prior}
fit_default = model.fit('y ~ x', samples=1000, chains=4, priors=priors )
# -
# ### Plot posterior simulations under weakly informative and informative priors
# +
# TODO: Add posterior simulations and posterior predictive of fits
| ROS/SexRatio/sexratio.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
import numpy as np
import pandas as pd
# This function is called from Main and expects train and test values for x and y
def load_ag_data(authors = None, docID = None):
import data
train = data.getCharAuthorData(authors, docID) #Pass it to data and it returns a data frame
train = train.dropna()
labels = []
texts = []
size = []
authorList = train.author_id.unique()
for auth in authorList:
current = train.loc[train['author_id'] == auth]
size.append(current.shape[0])
print("Author: %5s Size: %5s" % (auth, current.shape[0]))
print("Min: %s" % (min(size)))
print("Max: %s" % (max(size)))
authorList = authorList.tolist()
for auth in authorList:
current = train.loc[train['author_id'] == auth]
samples = min(size)
current = current.sample(n = samples)
textlist = current.doc_content.tolist()
texts = texts + textlist
labels = labels + [authorList.index(author_id) for author_id in current.author_id.tolist()]
labels_index = {}
labels_index[0] = 0
for i, auth in enumerate(authorList):
labels_index[i] = auth
del train
from keras.utils.np_utils import to_categorical
labels = to_categorical(labels)
print('Authors %s.' % (str(authorList)))
print('Found %s texts.' % len(texts))
print('Found %s labels.' % len(labels))
from sklearn.model_selection import train_test_split
trainX, valX, trainY, valY = train_test_split(texts, labels, test_size= 0.2)
# return (texts, labels, labels_index, samples)
return ((trainX, trainY), (valX, valY))
def encode_data(x, maxlen, vocab, vocab_size, check):
#Iterate over the loaded data and create a matrix of size maxlen x vocabsize
#In this case that will be 1014x69. This is then placed in a 3D matrix of size
#data_samples x maxlen x vocab_size. Each character is encoded into a one-hot
#array. Chars not in the vocab are encoded into an all zero vector.
input_data = np.zeros((len(x), maxlen, vocab_size))
for dix, sent in enumerate(x):
counter = 0
sent_array = np.zeros((maxlen, vocab_size))
chars = list(sent.replace(' ', ''))
for c in chars:
            if counter >= maxlen:
                break  # characters beyond maxlen are dropped
            else:
                char_array = np.zeros(vocab_size, dtype=int)  # np.int is deprecated; use int
if c in check:
ix = vocab[c]
char_array[ix] = 1
sent_array[counter, :] = char_array
counter += 1
input_data[dix, :, :] = sent_array
return input_data
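# A minimal, self-contained sketch of the same one-hot scheme on a toy three-character vocabulary (the vocabulary and sentence are made up for illustration):

```python
import numpy as np

def one_hot_encode(sentences, maxlen, vocab):
    # Each sentence becomes a (maxlen, vocab_size) matrix; characters past
    # maxlen are dropped and characters outside the vocab stay all-zero.
    out = np.zeros((len(sentences), maxlen, len(vocab)))
    for i, sent in enumerate(sentences):
        for j, c in enumerate(sent.replace(' ', '')[:maxlen]):
            if c in vocab:
                out[i, j, vocab[c]] = 1
    return out

vocab = {'a': 0, 'b': 1, 'c': 2}
enc = one_hot_encode(['ab c'], maxlen=4, vocab=vocab)
print(enc.shape)  # (1, 4, 3)
```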
def create_vocab_set():
#This is a Unicode Character set
import string
    unicode_characters = []
    for k in range(0, 255):
        unicode_characters.append(chr(k))  # Python 2's unichr is chr in Python 3
    for k in range(1024, 1280):
        unicode_characters.append(chr(k))
#alphabet = (list(string.ascii_lowercase) + list(string.digits) +
# list(string.punctuation) + ['\n'])
alphabet = unicode_characters
vocab_size = len(alphabet)
check = set(alphabet)
vocab = {}
reverse_vocab = {}
for ix, t in enumerate(alphabet):
vocab[t] = ix
reverse_vocab[ix] = t
return vocab, reverse_vocab, vocab_size, check
# +
# (vocab, reverse_vocab, vocab_size, check) = create_vocab_set()
| NN_AuthorshipID/data_helpers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: SageMath 8.4
# language: ''
# name: sagemath
# ---
# + [markdown] deletable=false
# # [Applied Statistics](https://lamastex.github.io/scalable-data-science/as/2019/)
# ## 1MS926, Spring 2019, Uppsala University
# ©2019 <NAME>. [Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
# -
# # 06. Statistics from Data: Fetching New Zealand Earthquakes & Live Play with `data/`
#
# - Live Data-fetch of NZ EQ Data
# - More on Statistics
# - Sample Mean
# - Sample Variance
# - Order Statistics
# - Frequencies
# - Empirical Mass Function
# - Empirical Distribution Function
# - List Comprehensions
# - New Zealand Earthquakes
# - Live Play with `data/`
# - Swedish election data
# - Biergartens in Germany
#
#
# # Live Data-fetching Exercise Now
#
# Go to [https://quakesearch.geonet.org.nz/](https://quakesearch.geonet.org.nz/) and download data on NZ earthquakes.
#
# <img src = "images/GeoNetQuakeSearchDownloadCSV.png" width =800>
# In my attempt above to zoom out to include both islands of New Zealand (NZ) and get one year of data using the `Last Year` button choice from this site:
# - [https://quakesearch.geonet.org.nz/](https://quakesearch.geonet.org.nz/)
# and hitting the `Search` box gave the following URLs for downloading data. I used the `DOWNLOAD` button to get my own data in Output Format `CSV` as chosen earlier.
#
# https://quakesearch.geonet.org.nz/csv?bbox=163.52051,-49.23912,182.19727,-32.36140&startdate=2017-06-01&enddate=2018-05-17T14:00:00
# https://quakesearch.geonet.org.nz/csv?bbox=163.52051,-49.23912,182.19727,-32.36140&startdate=2017-5-17T13:00:00&enddate=2017-06-01
#
# ## What should you do now?
#
# Try to `DOWNLOAD` your own `CSV` data and store it in a file named **`my_earthquakes.csv`** (NOTE: rename the file when you download so you don't replace the file `earthquakes.csv`!) inside the folder named **`data`** that is inside the same directory that this notebook is in.
# + language="sh"
# # print working directory
# pwd
# + language="sh"
# ls # list contents of working directory
# + language="sh"
# # after download you should have the following file in directory named data
# ls data
# + magic_args=" " language="sh"
# # first three lines
# head -3 data/earthquakes_small.csv
# + language="sh"
# # last three lines
# tail -3 data/earthquakes_small.csv
# + magic_args=" " language="sh"
# # number of lines in the file; mnemonic from `man wc`: wc = word count, option -l is for lines
# wc -l data/earthquakes_small.csv
# +
# #%%sh
#man wc
# -
# ## Let's analyse the measured earth quakes in `data/earthquakes.csv`
#
# This will ensure we are all looking at the same file!
#
# But feel free to play with your own `data/my_earthquakes.csv` on the side.
# ### Exercise:
# Grab origin-time, lat, lon, magnitude, depth
# +
with open("data/earthquakes_small.csv") as f:
reader = f.read()
dataList = reader.split('\n')
# -
len(dataList)
dataList[0]
myDataAccumulatorList =[]
for data in dataList[1:-2]:
dataRow = data.split(',')
myData = [dataRow[4],dataRow[5],dataRow[6]]#,dataRow[7]]
myFloatData = tuple([float(x) for x in myData])
myDataAccumulatorList.append(myFloatData)
points(myDataAccumulatorList)
# # More on Statistics
#
# Recall that a statistic is any measureable function of the data: $T(x): \mathbb{X} \rightarrow \mathbb{T}$.
#
# Thus, a statistic $T$ is also an RV that takes values in the space $\mathbb{T}$.
#
# When $x \in \mathbb{X}$ is the observed data, $T(x)=t$ is the observed statistic of the observed data $x$.
#
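# For example, the empirical distribution function (EDF) listed at the top of this notebook is itself a statistic: $\widehat{F}_n(t)$ is the fraction of sample points that are $\leq t$. A minimal sketch on a toy sample (not the earthquake data):

```python
def edf(sample, t):
    # Empirical distribution function: fraction of observations <= t
    return sum(1 for x in sample if x <= t) / float(len(sample))

sample = [3.2, 1.5, 4.8, 2.0, 3.2]
print(edf(sample, 3.2))  # 4 of the 5 observations are <= 3.2, i.e. 0.8
```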
# # Let's Play Live with other datasets, shall we?
# # Swedish 2018 National Election Data
#
#
# ## Swedish Election Outcomes 2018
#
# See: [http://www.lamastex.org/datasets/public/elections/2018/sv/README](http://www.lamastex.org/datasets/public/elections/2018/sv/README)!
#
# This was obtained by processing using the scripts at:
#
# - https://gitlab.com/tilo.wiklund/swedis-election-data-scraping
#
# You already have this dataset in your `/data` directory.
# + language="sh"
# cd data
# # if you don't see final.csv in data/ below
# # then either uncomment and try the next line in linux/Mac OSX
# #tar -zxvf final.tgz
# # or try the next line after uncommenting it to extract final.csv
# # unzip final.csv.zip
# ls -al
# + language="sh"
# wc data/final.csv
# head data/final.csv
# -
# ## Counting total votes per party
# Let's quickly load the data using [`csv.reader`](https://docs.python.org/2/library/csv.html) and count the number of votes for each party over all of Sweden next.
# +
import csv, sys
filename = 'data/final.csv'
linesAlreadyRead=0
partyVotesDict={}
with open(filename, 'rb') as f:
reader = csv.reader(f,delimiter=',',quotechar='"')
headers = next(reader) # skip first line of header
try:
for row in reader:
linesAlreadyRead+=1
party=row[3].decode('utf-8') # convert str to unicode
votes=int(row[4])
if party in partyVotesDict: # the data value already exists as a key
partyVotesDict[party] = partyVotesDict[party] + votes # add 1 to the count
else: # the data value does not exist as a key value
# add a new key-value pair for this new data value, frequency 1
partyVotesDict[party] = votes
except csv.Error as e:
sys.exit('file %s, line %d: %s' % (filename, reader.line_num, e))
print "lines read = ", linesAlreadyRead
# -
# fancy printing of non-ASCII string
for kv in partyVotesDict.items():
print "party ",kv[0], "\thas a total of votes =\t", kv[1]
# let's sort by descending order of votes
for party in sorted(partyVotesDict, key=partyVotesDict.get, reverse=True):
print party, "\t", partyVotesDict[party]
# # Geospatial adventures
#
# Say you want to visit some places of interest in Germany. This tutorial on [Open Street Map's Overpass API](https://janakiev.com/blog/openstreetmap-with-python-and-overpass-api/) shows you how to get the locations of `"amenity"="biergarten"` in Germany.
#
# We may come back to [https://www.openstreetmap.org](https://www.openstreetmap.org) later. If we don't then you know where to go for openly available data for geospatial statistical analysis.
# +
import requests
import json
overpass_url = "http://overpass-api.de/api/interpreter"
overpass_query = """
[out:json];
area["ISO3166-1"="DE"][admin_level=2];
(node["amenity"="biergarten"](area);
way["amenity"="biergarten"](area);
rel["amenity"="biergarten"](area);
);
out center;
"""
response = requests.get(overpass_url,
params={'data': overpass_query})
data = response.json()
# +
#data # uncomment this cell to see the raw JSON
# +
import numpy as np
# Collect coords into list
coords = []
for element in data['elements']:
if element['type'] == 'node':
lon = element['lon']
lat = element['lat']
coords.append((lon, lat))
elif 'center' in element:
lon = element['center']['lon']
lat = element['center']['lat']
coords.append((lon, lat))
# Convert coordinates into numpy array
X = np.array(coords)
p = points(zip(X[:, 0], X[:, 1]))
p += text('Biergarten in Germany',(12,56))
p.axes_labels(['Longitude','Latitude'])
#plt.axis('equal')
p.show()
# -
# ## Pubs in Sweden
# With a minor modification to the above code we can view `amenity=pub` in Sweden.
# +
import requests
import json
overpass_url = "http://overpass-api.de/api/interpreter"
overpass_query = """
[out:json];
area["ISO3166-1"="SE"][admin_level=2];
(node["amenity"="pub"](area);
way["amenity"="pub"](area);
rel["amenity"="pub"](area);
);
out center;
"""
response = requests.get(overpass_url,
params={'data': overpass_query})
data = response.json()
import numpy as np
# Collect coords into list
coords = []
for element in data['elements']:
if element['type'] == 'node':
lon = element['lon']
lat = element['lat']
coords.append((lon, lat))
elif 'center' in element:
lon = element['center']['lon']
lat = element['center']['lat']
coords.append((lon, lat))
# Convert coordinates into numpy array
X = np.array(coords)
p = points(zip(X[:, 0], X[:, 1]))
p += text('Pubar i Sverige',(14,68))
p.axes_labels(['Longitude','Latitude'])
#plt.axis('equal')
p.show()
# -
| _as/2019/jp/06.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# <a href="https://colab.research.google.com/github/DingLi23/s2search/blob/pipelining/pipelining/pdp-exp1/pdp-exp1_cslg-rand-500_plotting.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# ### Experiment Description
#
# Produce PDP for a randomly picked data from cslg.
#
# > This notebook is for experiment \<pdp-exp1\> and data sample \<cslg-rand-500\>.
# ### Initialization
# +
# %load_ext autoreload
# %autoreload 2
import numpy as np, sys, os
in_colab = 'google.colab' in sys.modules
# fetch code and data (if you are using colab)
if in_colab:
# !rm -rf s2search
# !git clone --branch pipelining https://github.com/youyinnn/s2search.git
sys.path.insert(1, './s2search')
# %cd s2search/pipelining/pdp-exp1/
pic_dir = os.path.join('.', 'plot')
if not os.path.exists(pic_dir):
os.mkdir(pic_dir)
# -
# ### Loading data
# +
sys.path.insert(1, '../../')
import numpy as np, sys, os, pandas as pd
from s2search_score_pdp import pdp_based_importance, apply_order
sample_name = 'cslg-rand-500'
f_list = ['title', 'abstract', 'venue', 'authors', 'year', 'n_citations']
pdp_xy = {}
pdp_metric = pd.DataFrame(columns=['feature_name', 'pdp_range', 'pdp_importance'])
for f in f_list:
file = os.path.join('.', 'scores', f'{sample_name}_pdp_{f}.npz')
if os.path.exists(file):
data = np.load(file)
sorted_pdp_data = apply_order(data)
feature_pdp_data = [np.mean(pdps) for pdps in sorted_pdp_data]
pdp_xy[f] = {
'y': feature_pdp_data,
'numerical': True
}
if f == 'year' or f == 'n_citations':
pdp_xy[f]['x'] = np.sort(data['arr_1'])
else:
pdp_xy[f]['y'] = feature_pdp_data
pdp_xy[f]['x'] = list(range(len(feature_pdp_data)))
pdp_xy[f]['numerical'] = False
pdp_metric.loc[len(pdp_metric.index)] = [f, np.max(feature_pdp_data) - np.min(feature_pdp_data), pdp_based_importance(feature_pdp_data, f)]
    pdp_xy[f]['weird'] = feature_pdp_data[-1] > 30
print(pdp_metric.sort_values(by=['pdp_importance'], ascending=False))
# -
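# The exact definition of `pdp_based_importance` lives in `s2search_score_pdp`; a common choice, assumed here purely for illustration, is the standard deviation of the averaged PDP curve, with the curve's range (max minus min, as computed above) as a companion measure:

```python
import numpy as np

def pdp_range_and_importance(pdp_curve):
    # Range: max - min of the averaged PDP curve.
    # Importance (one common definition): standard deviation of the curve.
    curve = np.asarray(pdp_curve, dtype=float)
    return curve.max() - curve.min(), curve.std()

pdp_range, pdp_imp = pdp_range_and_importance([-7.0, -6.5, -6.0, -5.0])
print(pdp_range)  # 2.0
```

# A flat PDP curve (score unaffected by the feature) gives both range and importance near zero.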
# ### PDP
# +
import matplotlib.pyplot as plt
categorical_plot_conf = [
{
'xlabel': 'Title',
'ylabel': 'Scores',
'pdp_xy': pdp_xy['title']
},
{
'xlabel': 'Abstract',
'pdp_xy': pdp_xy['abstract']
},
{
'xlabel': 'Authors',
'pdp_xy': pdp_xy['authors']
},
{
'xlabel': 'Venue',
'pdp_xy': pdp_xy['venue'],
# 'zoom': {
# 'inset_axes': [0.15, 0.45, 0.47, 0.47],
# 'x_limit': [950, 1010],
# 'y_limit': [-9, 7],
# 'connects': [True, True, False, False]
# }
},
]
numerical_plot_conf = [
{
'xlabel': 'Year',
'ylabel': 'Scores',
'pdp_xy': pdp_xy['year']
},
{
'xlabel': 'Citation Count',
'pdp_xy': pdp_xy['n_citations'],
# 'zoom': {
# 'inset_axes': [0.5, 0.2, 0.47, 0.47],
# 'x_limit': [-100, 1000],
# 'y_limit': [-7.3, -6.2],
# 'connects': [False, False, True, True]
# }
}
]
def pdp_plot(confs, title):
fig, axes = plt.subplots(nrows=1, ncols=len(confs), figsize=(20, 5), dpi=100)
subplot_idx = 0
# plt.suptitle(title, fontsize=20, fontweight='bold')
# plt.autoscale(False)
for conf in confs:
axess = axes if len(confs) == 1 else axes[subplot_idx]
axess.plot(conf['pdp_xy']['x'], conf['pdp_xy']['y'])
axess.grid(alpha = 0.4)
if ('ylabel' in conf):
axess.set_ylabel(conf.get('ylabel'), fontsize=20, labelpad=10)
axess.set_xlabel(conf['xlabel'], fontsize=16, labelpad=10)
if not (conf['pdp_xy']['weird']):
if (conf['pdp_xy']['numerical']):
axess.set_ylim([-9, -5.5])
pass
else:
axess.set_ylim([-15, 10])
pass
if 'zoom' in conf:
axins = axess.inset_axes(conf['zoom']['inset_axes'])
axins.plot(conf['pdp_xy']['x'], conf['pdp_xy']['y'])
axins.set_xlim(conf['zoom']['x_limit'])
axins.set_ylim(conf['zoom']['y_limit'])
axins.grid(alpha=0.3)
rectpatch, connects = axess.indicate_inset_zoom(axins)
connects[0].set_visible(conf['zoom']['connects'][0])
connects[1].set_visible(conf['zoom']['connects'][1])
connects[2].set_visible(conf['zoom']['connects'][2])
connects[3].set_visible(conf['zoom']['connects'][3])
subplot_idx += 1
pdp_plot(categorical_plot_conf, "PDPs for four categorical features")
plt.savefig(os.path.join('.', 'plot', f'{sample_name}-categorical.png'), facecolor='white', transparent=False, bbox_inches='tight')
# second fig
pdp_plot(numerical_plot_conf, "PDPs for two numerical features")
plt.savefig(os.path.join('.', 'plot', f'{sample_name}-numerical.png'), facecolor='white', transparent=False, bbox_inches='tight')
| pipelining/pdp-exp1/pdp-exp1_cslg-rand-500_plotting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy
ar1=numpy.array([2,4,5,6,7])
ar2=numpy.array([3,6,89,2,5])
print(ar1+ar2)
ar1.shape
print(ar1*5)
x=10
y=20
print(f"x={x},y={y}")
x,y=y,x
print(f"x={x},y={y}")
# id() gives the location of the value, not of the variable; small integers are cached, so x and z share an id
print(id(x))
z=20
print(id(z))
type(z)
x="sumit"
type(x)
'mi' in x
print(x[4],x[-1],x[-4],x[-5])
# +
#int,float,string all are immutable
#list,tuples,dictionary are mutable data
# -
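# A quick check of the claim above: mutating a list keeps the same object (same id), while a string cannot be changed in place:

```python
nums = [1, 2, 3]
before = id(nums)
nums.append(4)             # in-place change: same object, same id
print(id(nums) == before)  # True

s = "sumit"
try:
    s[0] = "S"             # strings are immutable, so this raises
except TypeError as e:
    print("TypeError:", e)
```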
lst = [4, 6, 7, 7, 8, "sumit"]  # avoid the name `list`: it shadows the built-in type
print(type(lst[1]), type(lst[-1]))
print(lst, lst[5], lst[-1])
# lists allow duplication of data
tuples = (4, 6, 7, 7, "sumit")
print(lst, tuples)
# tuples also allow duplication of data
print(type(lst), type(tuples))
tupl=(6,7,8,"jai krishna ")
for i in range(6):
print(tupl[3]*3,end='\n')
# `for` iterates over a sequence; `while` loops on a condition
# `for` picks items from the iterable, and the iterable decides the number of iterations
for i in "google":
print("Facebook")
# +
import time
print(time.ctime())
# -
import pandas
| ML/ML_Practice.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Your Turn! (Solution)
#
# In the last video, you saw two of the main aspects of principal components:
#
# 1. **The amount of variability captured by the component.**
# 2. **The components themselves.**
#
# In this notebook, you will get a chance to explore these a bit more yourself. First, let's read in the necessary libraries, as well as the data.
# +
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score
from helper_functions import show_images, do_pca, scree_plot, plot_component
import test_code as t
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
#read in our dataset
train = pd.read_csv('./data/train.csv')
train.fillna(0, inplace=True)
# save the labels to a Pandas series target
y = train['label']
# Drop the label feature
X = train.drop("label",axis=1)
show_images(30)
# -
# `1.` Perform PCA on the **X** matrix using on your own or using the **do_pca** function from the **helper_functions** module. Reduce the original more than 700 features to only 10 principal components.
pca, X_pca = do_pca(10, X)
# `2.` Now use the **scree_plot** function from the **helper_functions** module to take a closer look at the results of your analysis.
scree_plot(pca)
# `3.` Using the results of your scree plot, match each letter as the value to the correct key in the **solution_three** dictionary. Once you are confident in your solution run the next cell to see if your solution matches ours.
# +
a = True
b = False
c = 6.13
d = 'The total amount of variability in the data explained by the first two principal components'
e = None
solution_three = {
'10.42' : d,
'The first component will ALWAYS have the most amount of variability explained.': a,
'The total amount of variability in the data explained by the first component': c,
'The sum of the variability explained by all the components can be greater than 100%': b
}
# -
#Run this cell to see if your solution matches ours
t.question_3_check(solution_three)
# `4.` Use the **plot_component** function from the **helper_functions** module to look at each of the components (remember they are 0 indexed). Use the results to assist with question 5.
plot_component(pca, 3)
# `5.` Using the results from viewing each of your principal component weights in question 4, change the following values of the **solution_five** dictionary to the **number of the index** for the principal component that best matches the description. Once you are confident in your solution run the next cell to see if your solution matches ours.
solution_five = {
'This component looks like it will assist in identifying zero': 0,
'This component looks like it will assist in identifying three': 3
}
#Run this cell to see if your solution matches ours
t.question_5_check(solution_five)
# From this notebook, you have had an opportunity to look at the two major parts of PCA:
#
# `I.` The amount of **variance explained by each component**. This is called an **eigenvalue**.
#
# `II.` The principal components themselves; each component is a vector of weights. In this case, the principal components help us understand which pixels of the image are most helpful in identifying the difference between digits. **Principal components** are also known as **eigenvectors**.
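# Both quantities can be computed directly with numpy: the eigenvalues of the centered data's covariance matrix give the variance explained by each component, and the eigenvectors are the components themselves. A sketch on random toy data (not the digit images used above):

```python
import numpy as np

def pca_eig(X):
    # Center the data, eigendecompose the covariance matrix, and sort
    # components by descending eigenvalue (explained variance).
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    return eigvals[order], eigvecs[:, order]

gen = np.random.default_rng(0)
X = gen.normal(size=(100, 3))
eigvals, eigvecs = pca_eig(X)
ratios = eigvals / eigvals.sum()
print(ratios)  # explained-variance ratios, summing to 1 up to floating-point error
```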
| 03_unsupervised_learning/4_PCA/.ipynb_checkpoints/Interpret_PCA_Results_Solution-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Getting started with Python for Data Science and Automation in Biotechnology
# Welcome! You can use this Jupyter notebook to get started. How to use it will be explained in detail during class.
1 + 1
| GETTING-STARTED.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] graffitiCellId="id_18ngdm1"
# # Linked List Practice
#
# Implement a linked list class. Your class should be able to:
#
# # + Append data to the tail of the list and prepend to the head
# # + Search the linked list for a value and return the node
# # + Remove a node
# # + Pop, which means to return the first node's value and delete the node from the list
# # + Insert data at some position in the list
# # + Return the size (length) of the linked list
# -
# enable intellisense
# %config IPCompleter.greedy=True
# + graffitiCellId="id_x917dkg"
class Node:
def __init__(self, value):
self.value = value
self.next = None
def get_next(self):
return self.next
def set_next(self, node):
self.next = node
def get_value(self):
return self.value
def set_value(self, value):
self.value = value
def has_next(self):
if self.get_next() is None:
return False
return True
def to_string(self):
return "Node value: " + str(self.value)
# + graffitiCellId="id_hg4vhdi"
class LinkedList:
def __init__(self):
self.head = None
self.size = 0 # keep track of size of list
def get_size(self):
return self.size
def prepend(self, value):
""" Prepend a value to the beginning of the list. """
if self.head is None:
self.head = Node(value) # head pts to beginning of list
self.size += 1
return
new_node = Node(value)
new_node.next = self.head
self.head = new_node
self.size += 1
return
def append(self, value):
""" Append a value to the end of the list. """
if self.head is None:
self.head = Node(value) # head pts to beginning of list
self.size += 1
return
# Start at the head and move to the tail (the last node)
node = self.head
while node.next:
node = node.next
# create new node at the end and point to it
node.next = Node(value)
self.size += 1
return
    def search(self, value):
        """ Search the linked list for a node with the requested value and return the node, or None if absent. """
        node = self.head
        while node is not None:
            if node.get_value() == value:
                return node
            node = node.get_next()
        return None
def remove(self, value):
""" Remove first occurrence of value. """
if self.head is None:
return None
current_node = self.head
previous_node = None
while current_node is not None:
if current_node.get_value() == value:
if previous_node is not None:
previous_node.set_next(current_node.get_next())
else:
self.head = current_node.get_next()
self.size -= 1
return True # data found and removed
else:
previous_node = current_node
current_node = current_node.get_next()
return False # data not found in list
def pop(self):
""" Return the first node's value and remove it from the list. """
if self.head is None:
return None
node = self.head
# get head's next list item
self.head = self.head.get_next()
self.size -= 1
return node.value
def insert(self, value, pos):
""" Insert value at pos position in the list. If pos is larger than the
length of the list, append to the end of the list. """
if pos == 0:
self.prepend(value) # remember, prepend increases size
return
position = 0
node = self.head
while node.get_next() and position <= pos:
if (pos - 1) == position:
new_node = Node(value)
new_node.next = node.get_next()
node.next = new_node
self.size += 1
return
position += 1
node = node.get_next()
else:
self.append(value) # append also increases size
    # note: defining a `size(self)` method here would be shadowed by the
    # `self.size` attribute set in __init__, so the length is exposed via get_size()
def to_list(self):
out = []
node = self.head
while node:
out.append(node.value)
node = node.next
return out
# + graffitiCellId="id_f9k83vl"
## Test your implementation here
# Test prepend
linked_list = LinkedList()
linked_list.prepend(1)
assert linked_list.to_list() == [1], f"list contents: {linked_list.to_list()}"
linked_list.append(3)
linked_list.prepend(2)
assert linked_list.to_list() == [2, 1, 3], f"list contents: {linked_list.to_list()}"
# Test append
linked_list = LinkedList()
linked_list.append(1)
assert linked_list.to_list() == [1], f"list contents: {linked_list.to_list()}"
linked_list.append(3)
assert linked_list.to_list() == [1, 3], f"list contents: {linked_list.to_list()}"
# Test search
linked_list.prepend(2)
linked_list.prepend(1)
linked_list.append(4)
linked_list.append(3)
assert linked_list.search(1).value == 1, f"list contents: {linked_list.to_list()}"
assert linked_list.search(4).value == 4, f"list contents: {linked_list.to_list()}"
# Test remove
linked_list.remove(1)
assert linked_list.to_list() == [2, 1, 3, 4, 3], f"list contents: {linked_list.to_list()}"
linked_list.remove(3)
assert linked_list.to_list() == [2, 1, 4, 3], f"list contents: {linked_list.to_list()}"
linked_list.remove(3)
assert linked_list.to_list() == [2, 1, 4], f"list contents: {linked_list.to_list()}"
# Test pop
value = linked_list.pop()
assert value == 2, f"list contents: {linked_list.to_list()}"
assert linked_list.head.value == 1, f"list contents: {linked_list.to_list()}"
# Test insert
linked_list.insert(5, 0)
assert linked_list.to_list() == [5, 1, 4], f"list contents: {linked_list.to_list()}"
linked_list.insert(2, 1)
assert linked_list.to_list() == [5, 2, 1, 4], f"list contents: {linked_list.to_list()}"
linked_list.insert(3, 6)
assert linked_list.to_list() == [5, 2, 1, 4, 3], f"list contents: {linked_list.to_list()}"
# Test size
print("\n=============")
print(linked_list.get_size())
assert linked_list.get_size() == 5, f"list contents: {linked_list.to_list()}"
# + [markdown] graffitiCellId="id_hpn7l32"
# <span class="graffiti-highlight graffiti-id_hpn7l32-id_xaqiyxe"><i></i><button>Show Solution</button></span>
# -
| practice/linked_lists/3_linked_list_practice.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + run_control={"frozen": false, "read_only": false}
# %matplotlib inline
import numpy as np
import pandas as pd
import scipy
import sklearn
import spacy
import matplotlib.pyplot as plt
import seaborn as sns
import re
import nltk
from nltk.corpus import gutenberg, stopwords
# + [markdown] run_control={"frozen": false, "read_only": false}
# ## Intro to word2vec
#
# The most common unsupervised neural network approach for NLP is word2vec, a shallow neural network model for converting words to vectors using distributed representation: Each word is represented by many neurons, and each neuron is involved in representing many words. At the highest level of abstraction, word2vec assigns a vector of random values to each word. For a word W, it looks at the words that are near W in the sentence, and shifts the values in the word vectors such that the vectors for words near that W are closer to the W vector, and vectors for words not near W are farther away from the W vector. With a large enough corpus, this will eventually result in words that often appear together having vectors that are near one another, and words that rarely or never appear together having vectors that are far away from each other. Then, using the vectors, similarity scores can be computed for each pair of words by taking the cosine of the vectors.
#
# This may sound quite similar to the Latent Semantic Analysis approach you just learned. The conceptual difference is that LSA creates vector representations of sentences based on the words in them, while word2vec creates representations of individual words, based on the words around them.
# -
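# The cosine-similarity computation described above can be sketched directly with NumPy. The 3-d "word vectors" below are illustrative made-up values, not output from a trained model.

```python
import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between two word vectors: 1 = same direction, 0 = orthogonal.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy 3-d "word vectors" (illustrative values only).
king = np.array([0.9, 0.8, 0.1])
queen = np.array([0.85, 0.75, 0.2])
banana = np.array([0.1, 0.05, 0.9])

print(cosine_similarity(king, queen))   # close to 1: similar words
print(cosine_similarity(king, banana))  # much smaller: dissimilar words
```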
# ## What is it good for?
#
# Word2vec is useful for any time when computers need to parse requests written by humans. The problem with human communication is that there are so many different ways to communicate the same concept. It's easy for us, as humans, to know that "the silverware" and "the utensils" can refer to the same thing. Computers can't do that unless we teach them, and this can be a real chokepoint for human/computer interactions. If you've ever played a text adventure game (think _Colossal Cave Adventure_ or _Zork_), you may have encountered the following scenario:
# + active=""
# GAME: You are on a forest path north of the field. A cave leads into a granite butte to the north.
# A thick hedge blocks the way to the west.
# A hefty stick lies on the ground.
#
# YOU: pick up stick
#
# GAME: You don't know how to do that.
#
# YOU: lift stick
#
# GAME: You don't know how to do that.
#
# YOU: take stick
#
# GAME: You don't know how to do that.
#
# YOU: grab stick
#
# GAME: You grab the stick from the ground and put it in your bag.
# -
# And your brain explodes from frustration. A text adventure game that incorporates a properly trained word2vec model would have vectors for "pick up", "lift", and "take" that are close to the vector for "grab" and therefore could accept those other verbs as synonyms so you could move ahead faster. In more practical applications, word2vec and other similar algorithms are what help a search engine return the best results for your query and not just the ones that contain the exact words you used. In fact, search is a better example, because not only does the search engine need to understand your request, it also needs to match it to web pages that were _also written by humans_ and therefore _also use idiosyncratic language_.
#
# Humans, man.
#
# So how does it work?
#
# ## Generating vectors: Multiple algorithms
#
# In considering the relationship between a word and its surrounding words, word2vec has two options that are the inverse of one another:
#
# * _Continuous Bag of Words_ (CBOW): the identity of a word is predicted using the words near it in a sentence.
# * _Skip-gram_: the identities of nearby words are predicted from the word they surround. Skip-gram seems to work better for larger corpora.
#
# For the sentence "<NAME> is a better comedian than a director", if we focus on the word "comedian" then CBOW will try to predict "comedian" using "is", "a", "better", "than", "a", and "director". Skip-gram will try to predict "is", "a", "better", "than", "a", and "director" using the word "comedian". In practice, for CBOW the vector for "comedian" will be pulled closer to the other words, while for skip-gram the vectors for the other words will be pulled closer to "comedian".
#
# In addition to moving the vectors for nearby words closer together, each time a word is processed some vectors are moved farther away. Word2vec has two approaches to "pushing" vectors apart:
#
# * _Negative sampling_: Like it says on the tin, each time a word is pulled toward some neighbors, the vectors for a randomly chosen small set of other words are pushed away.
# * _Hierarchical softmax_: Every neighboring word is pulled closer or farther from a subset of words chosen based on a tree of probabilities.
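# A minimal sketch of how (center, context) training pairs are extracted from a sentence for a given window size: skip-gram predicts the context words from the center word, and CBOW runs the same pairs in the reverse direction. The helper name and window size here are illustrative.

```python
def training_pairs(tokens, window=2):
    # Yield (center, context) pairs as used by skip-gram;
    # CBOW uses the same pairs with the prediction direction reversed.
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

sentence = "a better comedian than a director".split()
print(training_pairs(sentence, window=1))
```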
#
# ## What is similarity? Word2vec strengths and weaknesses
#
# Keep in mind that word2vec operates on the assumption that frequent proximity indicates similarity, but words can be "similar" in various ways. They may be conceptually similar ("royal", "king", and "throne"), but they may also be functionally similar ("tremendous" and "negligible" are both common modifiers of "size"). Here is a more detailed exploration, [with examples](https://quomodocumque.wordpress.com/2016/01/15/messing-around-with-word2vec/), of what "similarity" means in word2vec.
#
# One cool thing about word2vec is that it can identify similarities between words _that never occur near one another in the corpus_. For example, consider these sentences:
#
# "The dog played with an elastic ball."
# "Babies prefer the ball that is bouncy."
# "I wanted to find a ball that's elastic."
# "Tracy threw a bouncy ball."
#
# "Elastic" and "bouncy" are similar in meaning in the text but don't appear in the same sentence. However, both appear near "ball". In the process of nudging the vectors around so that "elastic" and "bouncy" are both near the vector for "ball", the words also become nearer to one another and their similarity can be detected.
#
# For a while after it was introduced, [no one was really sure why word2vec worked as well as it did](https://arxiv.org/pdf/1402.3722v1.pdf) (see last paragraph of the linked paper). A few years later, some additional math was developed to explain word2vec and similar models. If you are comfortable with both math and "academese", have a lot of time on your hands, and want to take a deep dive into the inner workings of word2vec, [check out this paper](https://arxiv.org/pdf/1502.03520v7.pdf) from 2016.
#
# One of the draws of word2vec when it first came out was that the vectors could be used to convert analogies ("king" is to "queen" as "man" is to "woman", for example) into mathematical expressions ("king" + "woman" - "man" = ?) and solve for the missing element ("queen"). This is kinda nifty.
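# The analogy arithmetic above can be sketched with hypothetical 2-d vectors chosen so the "gender" offset is consistent; real word2vec vectors have hundreds of dimensions, and the values below are made up for illustration.

```python
import numpy as np

# Hypothetical toy vectors with a consistent "gender" offset in the second coordinate.
vectors = {
    "king":  np.array([0.9, 0.1]),
    "queen": np.array([0.9, 0.9]),
    "man":   np.array([0.1, 0.1]),
    "woman": np.array([0.1, 0.9]),
}

def solve_analogy(a, b, c):
    # a is to b as c is to ? -> nearest vector to (b - a + c).
    target = vectors[b] - vectors[a] + vectors[c]
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return min(candidates, key=lambda w: np.linalg.norm(candidates[w] - target))

print(solve_analogy("man", "woman", "king"))  # -> "queen"
```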
#
# A drawback of word2vec is that it works best with a corpus that is at least several billion words long. Even though the word2vec algorithm is speedy, this is a lot of data and takes a long time! Our example dataset is only two million words long, which allows us to run it in the notebook without overwhelming the kernel, but probably won't give great results. Still, let's try it!
#
# There are a few word2vec implementations in Python, but the general consensus is that the easiest one to use is in [gensim](https://radimrehurek.com/gensim/models/word2vec.html). Now is a good time to `pip install gensim` if you don't have it yet.
nltk.download('gutenberg')
# !python -m spacy download en
# +
# Utility function to clean text.
def text_cleaner(text):
# Visual inspection shows spaCy does not recognize the double dash '--'.
# Better get rid of it now!
text = re.sub(r'--',' ',text)
# Get rid of headings in square brackets.
text = re.sub("[\[].*?[\]]", "", text)
# Get rid of chapter titles.
text = re.sub(r'Chapter \d+','',text)
# Get rid of extra whitespace.
text = ' '.join(text.split())
return text[0:900000]
# Import all the Austen in the Project Gutenberg corpus.
austen = ""
for novel in ['persuasion','emma','sense']:
work = gutenberg.raw('austen-' + novel + '.txt')
austen = austen + work
# Clean the data.
austen_clean = text_cleaner(austen)
# -
# Parse the data. This can take some time.
nlp = spacy.load('en')
austen_doc = nlp(austen_clean)
# +
# Organize the parsed doc into sentences, while filtering out punctuation
# and stop words, and converting words to lower case lemmas.
sentences = []
for sentence in austen_doc.sents:
sentence = [
token.lemma_.lower()
for token in sentence
if not token.is_stop
and not token.is_punct
]
sentences.append(sentence)
print(sentences[20])
print('We have {} sentences and {} characters.'.format(len(sentences), len(austen_clean)))
# + run_control={"frozen": false, "read_only": false}
import gensim
from gensim.models import word2vec
model = word2vec.Word2Vec(
sentences,
workers=4, # Number of threads to run in parallel (if your computer does parallel processing).
min_count=10, # Minimum word count threshold.
window=6, # Number of words around target word to consider.
sg=0, # Use CBOW because our corpus is small.
sample=1e-3 , # Penalize frequent words.
size=300, # Word vector length.
hs=1 # Use hierarchical softmax.
)
print('done!')
# +
# List of words in model.
vocab = model.wv.vocab.keys()
print(model.wv.most_similar(positive=['lady', 'man'], negative=['woman']))
# Similarity is calculated using the cosine, so again 1 is total
# similarity and 0 is no similarity.
print(model.wv.similarity('mr', 'mrs'))
# One of these things is not like the other...
print(model.doesnt_match("breakfast marriage dinner lunch".split()))
# + [markdown] run_control={"frozen": false, "read_only": false}
# Clearly this model is not great – while some words given above might possibly fill in the analogy woman:lady::man:?, most answers likely make little sense. You'll notice as well that re-running the model likely gives you different results, indicating random chance plays a large role here.
#
# We do, however, get a nice result on "marriage" being dissimilar to "breakfast", "lunch", and "dinner".
#
# ## Drill 0
#
# Take a few minutes to modify the hyperparameters of this model and see how its answers change. Can you wrangle any improvements?
# + run_control={"frozen": false, "read_only": false}
# Tinker with hyperparameters here.
param_dict1 = {'workers':4, 'min_count':20, 'window':6, 'sg':0, 'sample':1e-3, 'size':300, 'hs':1}
param_dict2 = {'workers':4, 'min_count':10, 'window':10, 'sg':0, 'sample':1e-3, 'size':300, 'hs':1}
param_dict3 = {'workers':4, 'min_count':10, 'window':6, 'sg':0, 'sample':1e-4, 'size':300, 'hs':1}
param_dict4 = {'workers':4, 'min_count':10, 'window':6, 'sg':0, 'sample':1e-3, 'size':300, 'hs':0}
# +
model1 = word2vec.Word2Vec(sentences, **param_dict1)
# List of words in model.
vocab1 = model1.wv.vocab.keys()
print(model1.wv.most_similar(positive=['lady', 'man'], negative=['woman']))
# Similarity is calculated using the cosine, so again 1 is total
# similarity and 0 is no similarity.
print(model1.wv.similarity('mr', 'mrs'))
# One of these things is not like the other...
print(model1.doesnt_match("breakfast marriage dinner lunch".split()))
# +
model2 = word2vec.Word2Vec(sentences, **param_dict2)
# List of words in model.
vocab2 = model2.wv.vocab.keys()
print(model2.wv.most_similar(positive=['lady', 'man'], negative=['woman']))
# Similarity is calculated using the cosine, so again 1 is total
# similarity and 0 is no similarity.
print(model2.wv.similarity('mr', 'mrs'))
# One of these things is not like the other...
print(model2.doesnt_match("breakfast marriage dinner lunch".split()))
# +
model3 = word2vec.Word2Vec(sentences, **param_dict3)
# List of words in model.
vocab3 = model3.wv.vocab.keys()
print(model3.wv.most_similar(positive=['lady', 'man'], negative=['woman']))
# Similarity is calculated using the cosine, so again 1 is total
# similarity and 0 is no similarity.
print(model3.wv.similarity('mr', 'mrs'))
# One of these things is not like the other...
print(model3.doesnt_match("breakfast marriage dinner lunch".split()))
# +
model4 = word2vec.Word2Vec(sentences, **param_dict4)
# List of words in model.
vocab4 = model4.wv.vocab.keys()
print(model4.wv.most_similar(positive=['lady', 'man'], negative=['woman']))
# Similarity is calculated using the cosine, so again 1 is total
# similarity and 0 is no similarity.
print(model4.wv.similarity('mr', 'mrs'))
# One of these things is not like the other...
print(model4.doesnt_match("breakfast marriage dinner lunch".split()))
# +
param_dict5 = {'workers':4, 'min_count':10, 'window':12, 'sg':0, 'sample':0.01, 'size':300, 'hs':1}
model5 = word2vec.Word2Vec(sentences, **param_dict5)
# List of words in model.
vocab5 = model5.wv.vocab.keys()
print(model5.wv.most_similar(positive=['lady', 'man'], negative=['woman']))
# Similarity is calculated using the cosine, so again 1 is total
# similarity and 0 is no similarity.
print(model5.wv.similarity('mr', 'mrs'))
# One of these things is not like the other...
print(model5.doesnt_match("breakfast marriage dinner lunch".split()))
# -
# Model 2 performed best: it lost some similarity between "mr" and "mrs", but it completed the analogy woman:lady::man:mr and identified "marriage" as the odd one out among the meals of the day.
# + [markdown] run_control={"frozen": false, "read_only": false}
# # Example word2vec applications
#
# You can use the vectors from word2vec as features in other models, or try to gain insight from the vector compositions themselves.
#
# Here are some neat things people have done with word2vec:
#
# * [Visualizing word embeddings in Jane Austen's Pride and Prejudice](http://blogger.ghostweather.com/2014/11/visualizing-word-embeddings-in-pride.html). Skip to the bottom to see a _truly honest_ account of this data scientist's process.
#
# * [Tracking changes in Dutch Newspapers' associations with words like 'propaganda' and 'alien' from 1950 to 1990](https://www.slideshare.net/MelvinWevers/concepts-through-time-tracing-concepts-in-dutch-newspaper-discourse-using-sequential-word-vector-spaces).
#
# * [Helping customers find clothing items similar to a given item but differing on one or more characteristics](http://multithreaded.stitchfix.com/blog/2015/03/11/word-is-worth-a-thousand-vectors/).
# -
# ## Drill 1: Word2Vec on 100B+ words
#
# As we mentioned, word2vec really works best on a big corpus, but it can take half a day to clean such a corpus and run word2vec on it. Fortunately, there are word2vec models available that have already been trained on _really_ big corpora. They are big files, but you can download a [pretrained model of your choice here](https://github.com/3Top/word2vec-api). At minimum, the ones built with word2vec (check the "Architecture" column) should load smoothly using an appropriately modified version of the code below, and you can play to your heart's content.
#
# Because the models are so large, however, you may run into memory problems or crash the kernel. If you can't get a pretrained model to run locally, check out this [interactive web app of the Google News model](https://rare-technologies.com/word2vec-tutorial/#bonus_app) instead.
#
# However you access it, play around with a pretrained model. Is there anything interesting you're able to pull out about analogies, similar words, or words that don't match? Write up a quick note about your tinkering and discuss it with your mentor during your next session.
# + run_control={"frozen": false, "read_only": false}
# Load Google's pre-trained Word2Vec model. Note that load_word2vec_format
# generally expects a local file path, so you may need to download the
# GoogleNews archive first and point the call at the downloaded file.
model = gensim.models.KeyedVectors.load_word2vec_format('https://s3.amazonaws.com/dl4j-distribution/GoogleNews-vectors-negative300.bin.gz', binary=True)
# -
# + run_control={"frozen": false, "read_only": false}
# Play around with your pretrained model here.
print(model.wv.most_similar(positive=['lady', 'man'], negative=['woman']))
# -
print(model.wv.similarity('mr', 'mrs'))
print(model.doesnt_match("breakfast marriage dinner lunch".split()))
print(model.wv.most_similar(positive=['paper', 'brush'], negative=['pen']))
print(model.wv.most_similar(positive=['paper', 'paintbrush'], negative=['pen']))
print(model.wv.most_similar(positive=['paper', 'oil'], negative=['watercolor']))
print(model.wv.most_similar(positive=['canvas', 'marble'], negative=['oil']))
print(model.wv.most_similar(positive=['bun', 'fajita'], negative=['hamburger']))
print(model.wv.most_similar(positive=['bun', 'fajita'], negative=['hotdog']))
print(model.wv.most_similar(positive=['evening', 'brunch'], negative=['dinner']))
print(model.wv.most_similar(positive=['steeple', 'mosque'], negative=['church']))
print(model.wv.most_similar(positive=['diamond', 'beryllium'], negative=['carbon']))
print(model.wv.most_similar(positive=['diamond', 'aluminum'], negative=['carbon']))
| 4-Copy1.4.4 Unsupervised Neural Networks and NLP.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <b>Compute the given integral</b>
# $\int \frac{x}{\sqrt{2x + 2}}dx$
# <b>Factoring the constant out of the integral</b>
# $\int \frac{x}{\sqrt{2x + 2}}dx = \frac{1}{\sqrt{2}} \cdot \int \frac{x}{\sqrt{x + 1}}dx$
# <b>Substituting $u = x + 1$ (so $x = u - 1$ and $dx = du$)</b>
# $\frac{1}{\sqrt{2}} \int \frac{u - 1}{\sqrt{u}}du$
# <b>Splitting the integrand: $\frac{u - 1}{\sqrt{u}} = \sqrt{u} - \frac{1}{\sqrt{u}}$</b>
# $\frac{1}{\sqrt{2}} \int (\sqrt{u} - \frac{1}{\sqrt{u}})du = \frac{1}{\sqrt{2}} \left( \int \sqrt{u}\,du - \int \frac{1}{\sqrt{u}}du \right)$
# $\frac{1}{\sqrt{2}} \int (\sqrt{u} - \frac{1}{\sqrt{u}})du = \frac{1}{\sqrt{2}} \left( \frac{2u^{\frac{3}{2}}}{3} - 2\sqrt{u} \right)$
# $\int \frac{x}{\sqrt{2x + 2}}dx = \frac{\sqrt{2}(x+1)^{\frac{3}{2}}}{3} - \sqrt{2}\sqrt{x+1} + C$
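# As a quick numerical sanity check (standard library only): differentiating the antiderivative should recover the integrand at any sample point.

```python
import math

def F(x):
    # Candidate antiderivative: sqrt(2)(x+1)^(3/2)/3 - sqrt(2)sqrt(x+1)
    return math.sqrt(2) * (x + 1) ** 1.5 / 3 - math.sqrt(2) * math.sqrt(x + 1)

def f(x):
    # Original integrand: x / sqrt(2x + 2)
    return x / math.sqrt(2 * x + 2)

# Central-difference derivative of F should match f at sample points.
for pt in [0.5, 1.0, 3.0]:
    h = 1e-6
    numeric = (F(pt + h) - F(pt - h)) / (2 * h)
    assert abs(numeric - f(pt)) < 1e-5
print("antiderivative verified")
```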
| Problemas 6.1/14.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/agemagician/CodeTrans/blob/main/prediction/single%20task/code%20comment%20generation/java_base_model.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="6YPrvwDIHdBe"
# ## Install the library and download the pretrained models
# + id="_WI23u_mBGZ7" outputId="b792ff37-e15e-4477-a5cc-858681d78509" colab={"base_uri": "https://localhost:8080/"}
print("Installing dependencies...")
# %tensorflow_version 2.x
# !pip install -q t5==0.6.4
import functools
import os
import time
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
import tensorflow.compat.v1 as tf
import tensorflow_datasets as tfds
import t5
# !wget "https://www.dropbox.com/sh/kjoqdpj7e16dny9/AADdvjWVFckCgNQN-AqMKhiDa?dl=1" -O vocabulary.zip
# !unzip vocabulary.zip
# !rm vocabulary.zip
# !wget "https://www.dropbox.com/sh/7sjojtm46p14tg1/AAAFiN_DDvEWFFK_30CP38uga?dl=1" -O comment_gen.zip
# !unzip comment_gen.zip
# !rm comment_gen.zip
# + [markdown] id="MbW4aF7hHosN"
# ## Set sentencepiece model
# + id="-0SXWeKu9B5U" outputId="3da6169f-ef87-4911-f9f2-d25bfbcf7520" colab={"base_uri": "https://localhost:8080/"}
from t5.data.sentencepiece_vocabulary import SentencePieceVocabulary
vocab_model_path = 'code_spm_unigram_40M.model'
vocab = SentencePieceVocabulary(vocab_model_path, extra_ids=100)
print("Vocab has a size of %d\n" % vocab.vocab_size)
# + [markdown] id="7VpxgigMIGbv"
# ## Set the preprocessors and the task registry for the t5 model
# + id="meFeWitk-TE3"
def codeComment_dataset_fn(split, shuffle_files=False):
del shuffle_files
ds = tf.data.TextLineDataset(code_comment_path[split])
ds = ds.map(
functools.partial(tf.io.decode_csv, record_defaults=["", ""], field_delim="\t", use_quote_delim=False),
num_parallel_calls=tf.data.experimental.AUTOTUNE
)
ds = ds.map(lambda *ex: dict(zip(["code", "docstring"], ex)))
return ds
def codeComment_preprocessor(ds):
def normalize_text(text):
return text
def to_inputs_and_targets(ex):
return {
"inputs": tf.strings.join(["code comment java: ", normalize_text(ex["code"])]),
"targets": normalize_text(ex["docstring"])
}
return ds.map(to_inputs_and_targets, num_parallel_calls=tf.data.experimental.AUTOTUNE)
t5.data.TaskRegistry.remove('code_comment')
t5.data.TaskRegistry.add(
"code_comment",
dataset_fn=codeComment_dataset_fn,
output_features={
"inputs": t5.data.utils.Feature(vocabulary=vocab),
"targets": t5.data.utils.Feature(vocabulary=vocab),
},
splits=["train", "validation"],
text_preprocessor=[codeComment_preprocessor],
postprocess_fn=t5.data.postprocessors.lower_text,
metric_fns=[t5.evaluation.metrics.bleu, t5.evaluation.metrics.accuracy, t5.evaluation.metrics.rouge],
)
# + [markdown] id="sjZN7C7wISzq"
# ## Set t5 base model
# + id="EWBKPR24BweX"
MODEL_DIR = "base"
model_parallelism = 1
train_batch_size = 256
tf.io.gfile.makedirs(MODEL_DIR)
model = t5.models.MtfModel(
model_dir=MODEL_DIR,
tpu=None,
tpu_topology=None,
model_parallelism=model_parallelism,
batch_size=train_batch_size,
sequence_length={"inputs": 512, "targets": 512},
mesh_shape="model:1,batch:1",
mesh_devices=["GPU:0"],
learning_rate_schedule=0.003,
save_checkpoints_steps=5000,
keep_checkpoint_max=None,
iterations_per_loop=100,
)
# + [markdown] id="yEE3ZQt5I3Jt"
# ## Code Comment Generation
# + [markdown] id="hkynwKIcEvHh"
# ### Give the code for generating comment
# + id="nld-UUmII-2e"
code = "protected String renderUri(URI uri){\n return uri.toASCIIString();\n}\n" #@param {type:"raw"}
# + [markdown] id="BFEowQAcf9cw"
# ### Parsing and Tokenization
# + id="opV9iL3bgCCR" outputId="8dc1dcf9-ec1f-4e6f-9945-b966794b2d79" colab={"base_uri": "https://localhost:8080/"}
# !pip install javalang
import javalang
# + id="_XEO_5T-gFcn"
def tokenize_java_code(code):
tokenList = []
tokens = list(javalang.tokenizer.tokenize(code))
for token in tokens:
tokenList.append(token.value)
return ' '.join(tokenList)
# + id="V3ZbUryggIYk" outputId="2ae5edf0-eacd-444a-f3c5-9cdd5468e71f" colab={"base_uri": "https://localhost:8080/"}
tokenized_code = tokenize_java_code(code)
print("Output after tokenization: " + tokenized_code)
# + [markdown] id="iUGjYiWzJSu0"
# ### Record the code for generating comment with the prefix to a txt file
# + id="UCGjrieBJck1"
codes = [tokenized_code]
inputs_path = 'input.txt'
with tf.io.gfile.GFile(inputs_path, "w") as f:
for c in codes:
f.write("code comment java: %s\n" % c)
predict_outputs_path = 'MtfModel-output.txt'
# + [markdown] id="PK_kyR4VJlha"
# ### Running the model with the best checkpoint to generating comment for the given code
# + id="cdThL03VDNX3" outputId="ce398b71-fd30-491e-8840-d65042c11ec2" colab={"base_uri": "https://localhost:8080/"}
model.batch_size = 8 # Min size for small model on v2-8 with parallelism 1.
model.predict(
input_file="input.txt",
output_file=predict_outputs_path,
checkpoint_steps=80000,
beam_size=4,
vocabulary=vocab,
# Select the most probable output token at each step.
temperature=0,
)
# + [markdown] id="La_Lmsj1J7Wq"
# ### Code Comment Generation Result
# + id="Fov0lPWdD72H" outputId="d16b2240-b558-4b36-cb03-a433d2c1e818" colab={"base_uri": "https://localhost:8080/"}
prediction_file = "MtfModel-output.txt-80000"
print("\nPredictions using checkpoint 80000:\n" )
with tf.io.gfile.GFile(prediction_file) as f:
for c, d in zip(codes, f):
if c:
print("Code for prediction: " + c + '\n')
print("Generated Summarization: " + d)
| prediction/single task/code comment generation/t5 interface/java_base_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="a5EkLOFwB0Nx"
# # Anomaly Detection with Adaptive Fourier Features and DMKDE Quantum Algorithm in Real Quantum Computer
# + [markdown] id="wGV5SHkfwgch"
# ## Imports and Data load
# + colab={"base_uri": "https://localhost:8080/"} id="IezDXx2y6WcQ" outputId="31e13d37-3466-4767-8559-82d60abe37db"
# !pip install qiskit==0.35.0
# !pip install pylatexenc
# + [markdown] id="4yl1bBNQYzOu"
# ## Mount Google Drive
# + id="6wJYWE1nUlre" colab={"base_uri": "https://localhost:8080/"} outputId="f129a036-19b7-4ba9-ccdd-86ff6639cbff"
from google.colab import drive
drive.mount('/content/drive')
# + [markdown] id="XTETuV1FZ8U8"
# Load from drive .mat file
# + colab={"base_uri": "https://localhost:8080/"} id="q1liTjaGb5zO" outputId="121ec15f-d8dc-423b-b0f1-7fc74b428c59"
# !pip install --upgrade --no-cache-dir gdown
# + colab={"base_uri": "https://localhost:8080/"} id="alx4TXl9ZwTS" outputId="87643b63-c174-41d6-ac6c-136722c6be13"
#Loading .mat Cardiotocography dataset file
# !gdown 1j4qIus2Bl44Om0UiOu4o4f__wVwUeDfP
# + colab={"base_uri": "https://localhost:8080/"} id="aZRgeliPz_z8" outputId="ff4a6f44-736d-4765-8611-69578a9537e3"
import numpy as np
from time import time
from sklearn.kernel_approximation import RBFSampler
from sklearn.metrics import roc_auc_score
import matplotlib.pyplot as plt
# cardio = np.load("Cardiotocography.npy")
from scipy import io
cardio = io.loadmat("cardio.mat")
cardio["X"].shape, cardio["y"].shape
# + [markdown] id="zn-jB3En0Ha_"
# Preprocessing
#
# np.load object --> X, y (scaled)
#
# normal: '1'
# anomalies: '0'
# + colab={"base_uri": "https://localhost:8080/"} id="M67oIQRQ298k" outputId="c1b41eaf-8a39-44d5-db0d-18b69f074a7d"
from sklearn.preprocessing import MinMaxScaler
from scipy.stats import zscore
def preprocessing_cardio(data):
features, labels = cardio["X"], cardio["y"]
labels = 1 - labels
# scaler = MinMaxScaler()
# scaler.fit(features)
# features = scaler.transform(features)
return features, labels
cardio_X, cardio_y = preprocessing_cardio(cardio)
cardio_X.shape, cardio_y.shape
# + [markdown] id="T_dh3TrN0WFR"
# ## Random Fourier Features
#
# parameters: gamma, dimensions, random_state
#
# X --> rff(X)
# + id="9O-_tfbC2DEi"
from sklearn.kernel_approximation import RBFSampler
"""
Code from https://arxiv.org/abs/2004.01227
"""
class QFeatureMap:
def get_dim(self, num_features):
pass
def batch2wf(self, X):
pass
def batch2dm(self, X):
psi = self.batch2wf(X)
rho = np.einsum('...i,...j', psi, np.conj(psi))
return rho
class QFeatureMap_rff(QFeatureMap):
def __init__(self, rbf_sampler):
self.rbf_sampler = rbf_sampler
self.weights = np.array(rbf_sampler.random_weights_)
self.offset = np.array(rbf_sampler.random_offset_)
self.dim = rbf_sampler.get_params()['n_components']
def get_dim(self, num_features):
return self.dim
def batch2wf(self, X):
vals = np.dot(X, self.weights) + self.offset
vals = np.cos(vals)
vals *= np.sqrt(2.) / np.sqrt(self.dim)
norms = np.linalg.norm(vals, axis=1)
psi = vals / norms[:, np.newaxis]
return psi
# + id="opFX2jGW2G7d"
# Create the RandomFourierFeature map
def rff(X, dim, gamma):
    # Fit the sampler on the data passed in (the original version ignored X
    # and always used the global cardio_X).
    feature_map_fourier = RBFSampler(gamma=gamma, n_components=dim, random_state=2)
    feature_map_fourier.fit(X)
    rffmap = QFeatureMap_rff(rbf_sampler=feature_map_fourier)
    return rffmap.batch2wf(X)
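# A self-contained sketch of the random Fourier feature approximation that RBFSampler implements: with enough features, the inner product of the feature maps approximates the Gaussian kernel $k(x, y) = \exp(-\gamma \lVert x - y \rVert^2)$. The dimensions and gamma below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, dim, d = 0.5, 20000, 5

# Random Fourier features: w ~ N(0, 2*gamma*I), b ~ U(0, 2*pi),
# z(x) = sqrt(2/dim) * cos(x.W + b), and z(x).z(y) ~ exp(-gamma ||x-y||^2).
W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, dim))
b = rng.uniform(0, 2 * np.pi, size=dim)

def z(x):
    return np.sqrt(2.0 / dim) * np.cos(x @ W + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-gamma * np.linalg.norm(x - y) ** 2)
approx = z(x) @ z(y)
print(exact, approx)  # the two values should be close
```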
# + [markdown] id="5prTAAs-6215"
# Train test split
# + colab={"base_uri": "https://localhost:8080/"} id="M7sq1ptSJnHs" outputId="db139050-2e61-4def-9951-a6974b89f8a0"
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(cardio_X, cardio_y, test_size=0.2, stratify=cardio_y, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, stratify=y_train, random_state=42)
print(f"shape of X_train: {X_train.shape} X_test: {X_test.shape} X_val {X_val.shape}")
n_classes = np.bincount(y_test.ravel().astype(np.int64))
print(f"classes: 0: {n_classes[0]} 1: {n_classes[1]} %-anomalies: {n_classes[0] / (n_classes[0] + n_classes[1])}")
#print(f"classes: 0: {n_classes[0]} 1: {n_classes[1]} %-anomalies: {n_classes[1] / (n_classes[0] + n_classes[1])}")
# + [markdown] id="99zg2Ps8WgK5"
# ## Quantum Prediction
# + [markdown] id="axoxfK4h5i5_"
# Density Matrix Build
#
# Pure State: x_train --> U (matrix)
#
# Mixed State: X_train --> lambda (vec) , U (matrix)
# + id="07II8vchI3oZ"
def pure_state(Ctrain):
phi_train = np.sum(Ctrain, axis=0)
phi_train = phi_train / np.linalg.norm(phi_train)
size_U = len(phi_train)
U_train = np.zeros((size_U, size_U))
x_1 = phi_train
U_train[:, 0] = x_1
for i in range(1, size_U):
x_i = np.random.randn(size_U)
for j in range(0, i):
x_i -= x_i.dot(U_train[:, j]) * U_train[:, j]
x_i = x_i / np.linalg.norm(x_i)
U_train[:, i] = x_i
return U_train
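# The Gram–Schmidt loop in pure_state can be checked in isolation: the resulting matrix should be orthogonal, with the normalized state as its first column. This standalone sketch mirrors the construction above.

```python
import numpy as np

rng = np.random.default_rng(1)

def embed_state_in_unitary(phi):
    # Build a real orthogonal matrix whose first column is phi (normalized),
    # completing the basis by Gram-Schmidt on random vectors.
    n = len(phi)
    U = np.zeros((n, n))
    U[:, 0] = phi / np.linalg.norm(phi)
    for i in range(1, n):
        v = rng.normal(size=n)
        for j in range(i):
            v -= (v @ U[:, j]) * U[:, j]
        U[:, i] = v / np.linalg.norm(v)
    return U

phi = np.array([3.0, 4.0, 0.0, 0.0])
U = embed_state_in_unitary(phi)
print(np.allclose(U.T @ U, np.eye(4)))  # True: columns are orthonormal
```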
# + id="G67FoafZ57Hl"
def mixed_state(Ctrain):
Z_train = np.outer(Ctrain[0], Ctrain[0])
for i in range(1, len(Ctrain)):
Z_train += np.outer(Ctrain[i], Ctrain[i])
Z_train *= 1/len(Ctrain)
lambda_P1_temp, U_train = np.linalg.eigh(Z_train)
return lambda_P1_temp, U_train
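# mixed_state averages outer products of (assumed unit-norm) feature vectors, so the result should be a valid density matrix: trace 1 and positive semidefinite. A standalone check with random unit vectors:

```python
import numpy as np

rng = np.random.default_rng(2)

# Random unit-norm "feature vectors", as batch2wf would produce.
C = rng.normal(size=(50, 4))
C /= np.linalg.norm(C, axis=1, keepdims=True)

# Average of rank-1 projectors, as in mixed_state.
rho = np.mean([np.outer(c, c) for c in C], axis=0)
eigvals = np.linalg.eigvalsh(rho)
print(np.trace(rho))            # ~1.0: unit trace
print(eigvals.min() > -1e-10)   # True: positive semidefinite (up to float error)
```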
# + id="z4ZXxqVB8_JU"
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
from qiskit import Aer, execute
from sklearn.metrics import classification_report
# + [markdown] id="aTr7HTfRjjYD"
# # Quantum Prediction with Adaptive RFF
# + [markdown] id="bg5HouMNCobX"
# ## Clone the QMC from GitHUB
# + colab={"base_uri": "https://localhost:8080/"} id="iz_g0LI9CBLc" outputId="e7268a69-7652-4edc-8445-aa209414f1b7"
# !pip install git+https://github.com/fagonzalezo/qmc.git
# + [markdown] id="pUZJKFvypUTT"
# ## Adaptive RFF
# + id="kmwZ_WBj9lq_"
import tensorflow as tf
import numpy as np
import qmc.tf.layers as layers
import qmc.tf.models as models
# + id="CrKdxKmcB6l2"
import pylab as pl
# + colab={"base_uri": "https://localhost:8080/", "height": 300} id="U-_bWN7Zvl7f" outputId="115b7b64-e115-435d-faca-16e3e690e570"
num_samples = 100000
rnd_idx1 = np.random.randint(X_train.shape[0],size=(num_samples, ))
rnd_idx2 = np.random.randint(X_train.shape[0],size=(num_samples, ))
#x_train_rff = [X_train[rnd_idx1], X_train[rnd_idx2]]
x_train_rff = np.concatenate([X_train[rnd_idx1][:, np.newaxis, ...],
X_train[rnd_idx2][:, np.newaxis, ...]],
axis=1)
dists = np.linalg.norm(x_train_rff[:, 0, ...] - x_train_rff[:, 1, ...], axis=1)
print(dists.shape)
pl.hist(dists)
print(np.quantile(dists, 0.001))
rnd_idx1 = np.random.randint(X_test.shape[0],size=(num_samples, ))
rnd_idx2 = np.random.randint(X_test.shape[0],size=(num_samples, ))
#x_test_rff = [X_test[rnd_idx1], X_test[rnd_idx2]]
x_test_rff = np.concatenate([X_test[rnd_idx1][:, np.newaxis, ...],
X_test[rnd_idx2][:, np.newaxis, ...]],
axis=1)
# + id="8Kz9xY9X3Rs-"
def gauss_kernel_arr(x, y, gamma):
return np.exp(-gamma * np.linalg.norm(x - y, axis=1) ** 2)
# + id="7opihyAT7c20"
import tensorflow as tf
class QFeatureMapAdaptRFF(layers.QFeatureMapRFF):
def __init__(
self,
gamma_trainable=True,
weights_trainable=True,
**kwargs
):
self.g_trainable = gamma_trainable
self.w_trainable = weights_trainable
super().__init__(**kwargs)
def build(self, input_shape):
rbf_sampler = RBFSampler(
gamma=0.5,
n_components=self.dim,
random_state=self.random_state)
x = np.zeros(shape=(1, self.input_dim))
rbf_sampler.fit(x)
self.gamma_val = tf.Variable(
initial_value=self.gamma,
dtype=tf.float32,
trainable=self.g_trainable,
name="rff_gamma")
self.rff_weights = tf.Variable(
initial_value=rbf_sampler.random_weights_,
dtype=tf.float32,
trainable=self.w_trainable,
name="rff_weights")
self.offset = tf.Variable(
initial_value=rbf_sampler.random_offset_,
dtype=tf.float32,
trainable=self.w_trainable,
name="offset")
self.built = True
def call(self, inputs):
vals = tf.sqrt(2 * self.gamma_val) * tf.matmul(inputs, self.rff_weights) + self.offset # old framework
vals = tf.cos(vals)
vals = vals * tf.sqrt(2. / self.dim) # old framework
norms = tf.linalg.norm(vals, axis=-1)
psi = vals / tf.expand_dims(norms, axis=-1)
return psi
class DMRFF(tf.keras.Model):
def __init__(self,
dim_x,
num_rff,
gamma=1,
random_state=None):
super().__init__()
self.rff_layer = QFeatureMapAdaptRFF(input_dim=dim_x, dim=num_rff, gamma=gamma, random_state=random_state, gamma_trainable=False)
def call(self, inputs):
x1 = inputs[:, 0]
x2 = inputs[:, 1]
phi1 = self.rff_layer(x1)
phi2 = self.rff_layer(x2)
dot = tf.einsum('...i,...i->...', phi1, phi2) ** 2
return dot
def calc_rbf(dmrff, x1, x2):
return dmrff.predict(np.concatenate([x1[:, np.newaxis, ...],
x2[:, np.newaxis, ...]],
axis=1),
batch_size=256)
# + colab={"base_uri": "https://localhost:8080/", "height": 369} id="SGcCPhwoy-EY" outputId="c0af057f-aaef-437f-ad52-7a6b9ac1436f"
sigma = np.quantile(dists, 0.01)
gamma = 1/(2 * sigma ** 2)
gamma_index = 7 # index 7 corresponds to gamma = 2**(-7)
gammas = 1/(2**(np.arange(11)))
print(gammas)
n_rffs = 4
print(f'Gamma: {gammas[gamma_index ]}')
# y_train_rff = gauss_kernel_arr(x_train_rff[:, 0, ...], x_train_rff[:, 1, ...], gamma=gamma) # Original code
# y_test_rff = gauss_kernel_arr(x_test_rff[:, 0, ...], x_test_rff[:, 1, ...], gamma=gamma) # Original code
y_train_rff = gauss_kernel_arr(x_train_rff[:, 0, ...], x_train_rff[:, 1, ...], gamma=gammas[gamma_index ])
y_test_rff = gauss_kernel_arr(x_test_rff[:, 0, ...], x_test_rff[:, 1, ...], gamma=gammas[gamma_index ])
dmrff = DMRFF(dim_x=21, num_rff=n_rffs, gamma=gammas[gamma_index ], random_state=np.random.randint(10000)) # original rs = 0
#dmrff = DMRFF(dim_x=21, num_rff=n_rffs, gamma=gamma / 2, random_state=np.random.randint(10000)) # original rs = 0
dm_rbf = calc_rbf(dmrff, x_test_rff[:, 0, ...], x_test_rff[:, 1, ...])
pl.plot(y_test_rff, dm_rbf, '.')
dmrff.compile(optimizer="adam", loss='mse')
dmrff.evaluate(x_test_rff, y_test_rff, batch_size=16)
# + colab={"base_uri": "https://localhost:8080/", "height": 317} id="C5RQnaRvy7vz" outputId="29310954-a49f-47b4-8cc0-cb8c8b4a8bfc"
print(f'Mean: {np.mean(dmrff.rff_layer.rff_weights)}')
print(f'Std: {np.std(dmrff.rff_layer.rff_weights)}')
print(f'Gamma: {dmrff.rff_layer.gamma_val.numpy()}')
pl.hist(dmrff.rff_layer.rff_weights.numpy().flatten(), bins=30);
# + colab={"base_uri": "https://localhost:8080/"} id="TRZCYHie2EF8" outputId="63f19672-184e-4a4e-f671-81c4be70e209"
dmrff.fit(x_train_rff, y_train_rff, validation_split=0.1, epochs=40, batch_size=128)
# + colab={"base_uri": "https://localhost:8080/", "height": 89} id="GfrAHrRk9PnY" outputId="5b8eb5c0-0c19-43c2-af9e-190a171a84fb"
dm_rbf = calc_rbf(dmrff, x_test_rff[:, 0, ...], x_test_rff[:, 1, ...])
pl.plot(y_test_rff, dm_rbf, '.')
dmrff.evaluate(x_test_rff, y_test_rff, batch_size=128)
# + colab={"base_uri": "https://localhost:8080/", "height": 106} id="5gEgPGx6z6Qk" outputId="806d681f-daf4-4250-fc3d-c7794fc8c400"
print(f'Mean: {np.mean(dmrff.rff_layer.rff_weights)}')
print(f'Std: {np.std(dmrff.rff_layer.rff_weights)}')
print(f'Gamma: {dmrff.rff_layer.gamma_val.numpy()}')
pl.hist(dmrff.rff_layer.rff_weights.numpy().flatten(), bins=30);
# + colab={"base_uri": "https://localhost:8080/"} id="QtzJ5cyl63xd" outputId="282d26a9-69ea-4d70-ed06-4fcec92c754b"
X_feat_train = dmrff.rff_layer.call(tf.cast(X_train, tf.float32))
X_feat_test = dmrff.rff_layer.call(tf.cast(X_test, tf.float32))
X_feat_val = dmrff.rff_layer.call(tf.cast(X_val, tf.float32))
X_feat_train = np.float64((X_feat_train).numpy())
X_feat_test = np.float64((X_feat_test).numpy())
X_feat_val = np.float64((X_feat_val).numpy())
X_feat_train = X_feat_train / np.linalg.norm(X_feat_train, axis = 1).reshape(-1, 1)
X_feat_test = X_feat_test / np.linalg.norm(X_feat_test, axis = 1).reshape(-1, 1)
X_feat_val = X_feat_val / np.linalg.norm(X_feat_val, axis = 1).reshape(-1, 1)
X_feat_train.shape, X_feat_test.shape, X_feat_val.shape
# + [markdown] id="Vyob8x8fr1E1"
# # IBM Real Computer Attempt
# + [markdown] id="tjcl30qmR7SU"
# ## Pretrained Adp Features
# + id="vdBRQqFFP9rT" colab={"base_uri": "https://localhost:8080/"} outputId="9702bd29-7f31-4570-d72f-d9ef5c8a2757"
X_feat_train = np.load("/content/drive/MyDrive/TesisMaestria/ResearchData/adpFeatures_4t4_Cardio_train.npy")
X_feat_test = np.load("/content/drive/MyDrive/TesisMaestria/ResearchData/adpFeatures_4t4_Cardio_test.npy")
X_feat_val = np.load("/content/drive/MyDrive/TesisMaestria/ResearchData/adpFeatures_4t4_Cardio_val.npy")
X_feat_train.shape, X_feat_test.shape, X_feat_val.shape
# + [markdown] id="qgIJqAFRTdQW"
# ## First part
# + colab={"base_uri": "https://localhost:8080/"} id="3umCRGB7RjIU" outputId="e2d41a82-34b9-4ef3-c298-f709f7d44d20"
# !pip install qiskit_ibm_runtime
# + id="hnz7aC0qtiVd"
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, IBMQ, execute, transpile, Aer, assemble
from qiskit.tools.monitor import job_monitor
## Diego's token (redacted here; substitute your own IBM Quantum API token)
TOKEN = '<YOUR_IBM_QUANTUM_TOKEN>'
# + id="bXXNxLQQTmAs"
from qiskit import IBMQ
IBMQ.save_account(TOKEN, overwrite=True)
provider = IBMQ.load_account()
device = provider.get_backend("ibmq_santiago")
# + colab={"base_uri": "https://localhost:8080/"} id="jH5-Bi-Rukh1" outputId="5c52c539-e3d6-407d-c6f5-a93bc491bc77"
available_cloud_backends = provider.backends()
print('\nHere is the list of cloud backends that are available to you:')
for i in available_cloud_backends: print(i)
# + [markdown] id="_8KKTnhhWcrV"
# ## Qiskit Runtime
# + id="CVMZE3HKh3zb"
from qiskit_ibm_runtime import QiskitRuntimeService, Sampler
# Save your credentials on disk.
QiskitRuntimeService.save_account(channel='ibm_quantum', token=TOKEN)
service = QiskitRuntimeService()
# + id="cveXYZoPbb3B"
# gamma = [2**-7]
# dim = 4
# num_exps = 1
# feature_map_fourier = RBFSampler(gamma=gamma[0], n_components=dim)
# feature_map_fourier.fit(X_train)
# rffmap = QFeatureMap_rff(rbf_sampler=feature_map_fourier)
# X_feat_train = rffmap.batch2wf(X_train)
# X_feat_val = rffmap.batch2wf(X_val)
# X_feat_test = rffmap.batch2wf(X_test)
# + id="L_ShpGx3QHnh"
# print(X_feat_train.shape)
# print(X_feat_val.shape)
# print(X_feat_test.shape)
# + [markdown] id="ZdoC3A8tMmr4"
# ## Mixed 4x4
# + [markdown] id="FpzUShI97ffl"
# ### Validation
# + id="67DLd0XdMmLC"
from qiskit import transpile
eigvals, U = mixed_state(X_feat_train)
qclist_rff_mixed_val = []
for i in range(len(X_feat_val)):
qc = QuantumCircuit(4, 2)
qc.initialize(X_feat_val[i], [0, 1])
qc.initialize(np.sqrt(eigvals), [2, 3])
qc.isometry(U.T, [], [0, 1])
qc.cnot(3, 1)
qc.cnot(2, 0)
qc.measure(0, 0)
qc.measure(1, 1)
qclist_rff_mixed_val.append(transpile(qc, device))
# + id="H4crwfDXA_-g"
indices_rff = list(range(123))
with Sampler(circuits=qclist_rff_mixed_val[0:123], service=service, options={ "backend": "ibmq_santiago" }) as sampler:
result = sampler(circuit_indices=indices_rff, shots=5000)
dists = result.quasi_dists
results_rff_mixed_val1 = [dists[i]['00'] for i in range(len(dists))]
# + id="y62M_4Jlztvt"
print(results_rff_mixed_val1)
# + id="SZgE1DL580vQ"
np.save("/content/drive/MyDrive/TesisMaestria/ResearchData/results_aff_mixed_val1_expsantiago.npy", results_rff_mixed_val1)
# + id="wgHi77p9A_-i"
indices_rff = list(range(123))
with Sampler(circuits=qclist_rff_mixed_val[123:246], service=service, options={ "backend": "ibmq_santiago" }) as sampler:
result = sampler(circuit_indices=indices_rff, shots=5000)
dists = result.quasi_dists
results_rff_mixed_val2 = [dists[i]['00'] for i in range(len(dists))]
# + id="KradzsVHSBDx"
print(results_rff_mixed_val2)
# + id="5qWqTRXfSBDz"
np.save("/content/drive/MyDrive/TesisMaestria/ResearchData/results_aff_mixed_val2_expsantiago.npy", results_rff_mixed_val2)
# + id="xYMcIjL1-m_u"
indices_rff = list(range(120))
with Sampler(circuits=qclist_rff_mixed_val[246:], service=service, options={ "backend": "ibmq_santiago" }) as sampler:
result = sampler(circuit_indices=indices_rff, shots=5000)
dists = result.quasi_dists
results_rff_mixed_val3 = [dists[i]['00'] for i in range(len(dists))]
# + id="eOqLDZIGSIix"
print(results_rff_mixed_val3)
# + id="YUCtxH8ASIiy"
np.save("/content/drive/MyDrive/TesisMaestria/ResearchData/results_aff_mixed_val3_expsantiago.npy", results_rff_mixed_val3)
# + id="DtQnvvjSav32"
#results_rff_mixed_val1 = np.load("/content/drive/MyDrive/TesisMaestria/ResearchData/results_aff_mixed_val1_exp2.npy")
# + id="37KA98oV-0Ba"
results_rff_mixed_val = np.concatenate((results_rff_mixed_val1, results_rff_mixed_val2, results_rff_mixed_val3), axis=0)
thredhold_mixed = np.percentile(results_rff_mixed_val, q = 9.54)
print(thredhold_mixed)
# + [markdown] id="fE7PMYtX_Jp6"
# ### Test
# + id="9qEPpT7h_Ocq"
from qiskit import transpile
eigvals, U = mixed_state(X_feat_train)
qclist_rff_mixed_test = []
for i in range(len(X_feat_test)):
qc = QuantumCircuit(4, 2)
qc.initialize(X_feat_test[i], [0, 1])
qc.initialize(np.sqrt(eigvals), [2, 3])
qc.isometry(U.T, [], [0, 1])
qc.cnot(3, 1)
qc.cnot(2, 0)
qc.measure(0, 0)
qc.measure(1, 1)
qclist_rff_mixed_test.append(transpile(qc, device))
# + id="aRN74fHW_Oct"
indices_rff = list(range(123))
with Sampler(circuits=qclist_rff_mixed_test[0:123], service=service, options={ "backend": "ibmq_santiago" }) as sampler:
result = sampler(circuit_indices=indices_rff, shots=5000)
dists = result.quasi_dists
results_rff_mixed_test1 = [dists[i]['00'] for i in range(len(dists))]
# + id="jkRO6tCuVrJI" colab={"base_uri": "https://localhost:8080/"} outputId="0435c7f4-ebd3-4481-abf3-39eefa0eeb6b"
print(results_rff_mixed_test1)
# + id="pIe5bzWgVrJJ"
np.save("/content/drive/MyDrive/TesisMaestria/ResearchData/results_aff_mixed_test1_expsantiago.npy", results_rff_mixed_test1)
# + id="20S-0TAX_Ocv"
indices_rff = list(range(122))
with Sampler(circuits=qclist_rff_mixed_test[123:245], service=service, options={ "backend": "ibmq_santiago" }) as sampler:
result = sampler(circuit_indices=indices_rff, shots=5000)
dists = result.quasi_dists
results_rff_mixed_test2 = [dists[i]['00'] for i in range(len(dists))]
# + id="Wk3nNdOrV5me" colab={"base_uri": "https://localhost:8080/"} outputId="7035ad02-6745-4e59-82e1-fb808913b97d"
print(results_rff_mixed_test2)
# + id="4ZC74ahcV5mj"
np.save("/content/drive/MyDrive/TesisMaestria/ResearchData/results_aff_mixed_test2_expsantiago.npy", results_rff_mixed_test2)
# + id="8Ny59Fg-_Ocv"
indices_rff = list(range(122))
with Sampler(circuits=qclist_rff_mixed_test[245:], service=service, options={ "backend": "ibmq_santiago" }) as sampler:
result = sampler(circuit_indices=indices_rff, shots=5000)
dists = result.quasi_dists
results_rff_mixed_test3 = [dists[i]['00'] for i in range(len(dists))]
# + id="u-A8znoZWCZs" colab={"base_uri": "https://localhost:8080/"} outputId="70d94f94-e212-4a42-dbcf-dea43e7a1e87"
print(results_rff_mixed_test3)
# + id="c5cgjlZpWCZs"
np.save("/content/drive/MyDrive/TesisMaestria/ResearchData/results_aff_mixed_test3_expsantiago.npy", results_rff_mixed_test3)
# + id="fCJTNgG-A_-i" colab={"base_uri": "https://localhost:8080/"} outputId="a470a03a-a84e-4083-c8e0-1682666dbb65"
results_rff_mixed_test = np.concatenate((results_rff_mixed_test1, results_rff_mixed_test2, results_rff_mixed_test3), axis=0)
y_pred_mixed = results_rff_mixed_test > thredhold_mixed
print(classification_report(y_test, y_pred_mixed, digits=4))
# + id="i7vUl1eWhDNk" colab={"base_uri": "https://localhost:8080/"} outputId="797aa9e8-edc0-4af7-f5cd-60182972a223"
len(results_rff_mixed_val)
# + id="ge2nDDkCege0" colab={"base_uri": "https://localhost:8080/"} outputId="99be08e9-a61e-473b-cc4b-c3a393c32a31"
print(f"AUC = {round(roc_auc_score(y_test, results_rff_mixed_test), 4)}")
# + [markdown] id="ydAKrDUcTzov"
# ## Pure 4x4
# + [markdown] id="XoRvQzvJ_z3A"
# ### Validation
# + id="7xA7pNGVT3GT"
U_pure = pure_state(X_feat_train)
qclist_rff_pure_val = []
for i in range(len(X_feat_val)):
qc = QuantumCircuit(2, 2)
qc.initialize(X_feat_val[i], [0, 1])
qc.isometry(U_pure.T, [], [0, 1]) # ArbRot as an isometry
qc.measure(0, 0)
qc.measure(1, 1)
qclist_rff_pure_val.append(transpile(qc, device))
# + id="SWqipa-d_yBy"
indices_rff = list(range(123))
with Sampler(circuits=qclist_rff_pure_val[0:123], service=service, options={ "backend": "ibmq_santiago" }) as sampler:
result = sampler(circuit_indices=indices_rff, shots=5000)
dists = result.quasi_dists
results_rff_pure_val1 = [dists[i]['00'] for i in range(len(dists))]
# + colab={"base_uri": "https://localhost:8080/"} outputId="7a8c1f4c-6814-4cde-c550-92b8b17f7ac2" id="EW203QBMe7Bu"
print(results_rff_pure_val1)
# + id="jj32u3SOe7B7"
np.save("/content/drive/MyDrive/TesisMaestria/ResearchData/results_aff_pure_val1_expsantiago.npy", results_rff_pure_val1)
# + id="tdLALtAV_yB0"
indices_rff = list(range(123))
with Sampler(circuits=qclist_rff_pure_val[123:246], service=service, options={ "backend": "ibmq_santiago" }) as sampler:
result = sampler(circuit_indices=indices_rff, shots=5000)
dists = result.quasi_dists
results_rff_pure_val2 = [dists[i]['00'] for i in range(len(dists))]
# + colab={"base_uri": "https://localhost:8080/"} outputId="6160ebbe-1450-491a-8b52-1f138d64a1fc" id="ihLadiIbfFHs"
print(results_rff_pure_val2)
# + id="wBMMMUJffFHt"
np.save("/content/drive/MyDrive/TesisMaestria/ResearchData/results_aff_pure_val2_expsantiago.npy", results_rff_pure_val2)
# + id="m088iEij_yB2"
indices_rff = list(range(120))
with Sampler(circuits=qclist_rff_pure_val[246:], service=service, options={ "backend": "ibmq_santiago" }) as sampler:
result = sampler(circuit_indices=indices_rff, shots=5000)
dists = result.quasi_dists
results_rff_pure_val3 = [dists[i]['00'] for i in range(len(dists))]
# + colab={"base_uri": "https://localhost:8080/"} outputId="3cb63743-5d89-40b2-9a6d-63ad4fd3071f" id="PnP7LnOLfOLK"
print(results_rff_pure_val3)
# + id="mLN5Cs_wfOLL"
np.save("/content/drive/MyDrive/TesisMaestria/ResearchData/results_aff_pure_val3_expsantiago.npy", results_rff_pure_val3)
# + id="Gy4wSQiY_yB3" colab={"base_uri": "https://localhost:8080/"} outputId="07086787-5fed-49ee-9369-6d88a425eb60"
results_rff_pure_val = np.concatenate((results_rff_pure_val1, results_rff_pure_val2, results_rff_pure_val3), axis=0)
thredhold_pure = np.percentile(results_rff_pure_val, q = 9.54)
print(thredhold_pure)
# + [markdown] id="5ZTO6pMMAafb"
# ### Test
# + id="U8E1ktHSAjPj"
U_pure = pure_state(X_feat_train)
qclist_rff_pure_test = []
for i in range(len(X_feat_test)):
qc = QuantumCircuit(2, 2)
qc.initialize(X_feat_test[i], [0, 1])
qc.isometry(U_pure.T, [], [0, 1]) # ArbRot as an isometry
qc.measure(0, 0)
qc.measure(1, 1)
qclist_rff_pure_test.append(transpile(qc, device))
# + id="O8wrhFB0A15l"
indices_rff = list(range(123))
with Sampler(circuits=qclist_rff_pure_test[0:123], service=service, options={ "backend": "ibmq_santiago" }) as sampler:
result = sampler(circuit_indices=indices_rff, shots=5000)
dists = result.quasi_dists
results_rff_pure_test1 = [dists[i]['00'] for i in range(len(dists))]
# + colab={"base_uri": "https://localhost:8080/"} outputId="01603e83-ffac-43ef-b4ab-b80cd69752b3" id="rDA--3VRf4tO"
print(results_rff_pure_test1)
# + id="yxghSz44f4tp"
np.save("/content/drive/MyDrive/TesisMaestria/ResearchData/results_aff_pure_test1_expsantiago.npy", results_rff_pure_test1)
# + id="nJ0B8r49A15o"
indices_rff = list(range(122))
with Sampler(circuits=qclist_rff_pure_test[123:245], service=service, options={ "backend": "ibmq_santiago" }) as sampler:
result = sampler(circuit_indices=indices_rff, shots=5000)
dists = result.quasi_dists
results_rff_pure_test2 = [dists[i]['00'] for i in range(len(dists))]
# + colab={"base_uri": "https://localhost:8080/"} outputId="8d01e1be-4378-415d-e328-c4bc4ae248cf" id="He35f_argBTL"
print(results_rff_pure_test2)
# + id="ckWlkBGbgBTL"
np.save("/content/drive/MyDrive/TesisMaestria/ResearchData/results_aff_pure_test2_expsantiago.npy", results_rff_pure_test2)
# + id="FqPZAYAPA15q"
indices_rff = list(range(122))
with Sampler(circuits=qclist_rff_pure_test[245:], service=service, options={ "backend": "ibmq_santiago" }) as sampler:
result = sampler(circuit_indices=indices_rff, shots=5000)
dists = result.quasi_dists
results_rff_pure_test3 = [dists[i]['00'] for i in range(len(dists))]
# + colab={"base_uri": "https://localhost:8080/"} outputId="695b5607-3adc-494c-b059-dbcd5cb79d78" id="QPh1d91TgI8Q"
print(results_rff_pure_test3)
# + id="XZRppbAWgI8Q"
np.save("/content/drive/MyDrive/TesisMaestria/ResearchData/results_aff_pure_test3_expsantiago.npy", results_rff_pure_test3)
# + colab={"base_uri": "https://localhost:8080/"} id="jBonNi8AfVZl" outputId="a048c692-6802-47cf-d48e-4721c7c7b290"
results_rff_pure_val1 = np.load("/content/drive/MyDrive/TesisMaestria/ResearchData/results_aff_pure_val1.npy")
results_rff_pure_val2 = np.load("/content/drive/MyDrive/TesisMaestria/ResearchData/results_aff_pure_val2.npy")
results_rff_pure_val3 = np.load("/content/drive/MyDrive/TesisMaestria/ResearchData/results_aff_pure_val3.npy")
results_rff_pure_val1 = np.sqrt(results_rff_pure_val1)
results_rff_pure_val2 = np.sqrt(results_rff_pure_val2)
results_rff_pure_val3 = np.sqrt(results_rff_pure_val3)
results_rff_pure_val = np.concatenate((results_rff_pure_val1, results_rff_pure_val2, results_rff_pure_val3), axis=0)
results_rff_pure_val1.shape, results_rff_pure_val2.shape, results_rff_pure_val3.shape, results_rff_pure_val.shape
# + colab={"base_uri": "https://localhost:8080/"} id="RxXrGraigRZR" outputId="d6240449-d344-41d2-87d8-da0fae785764"
thredhold_pure = np.percentile(results_rff_pure_val, q = 9.54)
print(thredhold_pure)
# + colab={"base_uri": "https://localhost:8080/"} id="eSb6Q-HpfD48" outputId="dbb2f13b-bb38-4a85-8e4f-0b9925e38cca"
results_rff_pure_test1 = np.load("/content/drive/MyDrive/TesisMaestria/ResearchData/results_aff_pure_test1.npy")
results_rff_pure_test2 = np.load("/content/drive/MyDrive/TesisMaestria/ResearchData/results_aff_pure_test2.npy")
results_rff_pure_test3 = np.load("/content/drive/MyDrive/TesisMaestria/ResearchData/results_aff_pure_test3.npy")
results_rff_pure_test1 = np.sqrt(results_rff_pure_test1)
results_rff_pure_test2 = np.sqrt(results_rff_pure_test2)
results_rff_pure_test3 = np.sqrt(results_rff_pure_test3)
results_rff_pure_test = np.concatenate((results_rff_pure_test1, results_rff_pure_test2, results_rff_pure_test3), axis=0)
results_rff_pure_test1.shape, results_rff_pure_test2.shape, results_rff_pure_test3.shape, results_rff_pure_test.shape
# + colab={"base_uri": "https://localhost:8080/"} outputId="9cb9b37e-9ba8-4ff7-a5c5-34a78343b698" id="gsdmN4FqA15r"
results_rff_pure_test = np.concatenate((results_rff_pure_test1, results_rff_pure_test2, results_rff_pure_test3), axis=0)
y_pred_pure = results_rff_pure_test > thredhold_pure
print(classification_report(y_test, y_pred_pure, digits=4))
# + colab={"base_uri": "https://localhost:8080/"} id="V4UbnE9dqeEj" outputId="11212b00-a9a2-4e51-f162-c2fb2de3371e"
print(f"AUC = {round(roc_auc_score(y_test, results_rff_pure_test), 4)}")
# + id="i1_Ry9lwxyr2"
# + [markdown] id="L-ZIkp3mksTV"
# ## Classical Pred AdpRFF
# + id="rF4Y9nWp37-w"
from sklearn.metrics import roc_curve, f1_score
from sklearn.metrics import classification_report
def classification(preds_val, preds_test, y_test):
thredhold = np.percentile(preds_val, q = 9.54)
y_pred = preds_test > thredhold
return classification_report(y_test, y_pred, digits=4)
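The `classification` helper above turns validation scores into a decision threshold by taking their 9.54th percentile. A quick self-contained check of that scheme on synthetic scores (the score distributions and class balance here are invented for illustration; only the quantile matches the function above):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical scores: one class scores high, the other scores low.
preds_val = rng.uniform(0.4, 1.0, size=1000)                   # validation scores
preds_test = np.concatenate([rng.uniform(0.4, 1.0, size=90),   # high-scoring class
                             rng.uniform(0.0, 0.3, size=10)])  # low-scoring class

# Same rule as `classification`: threshold at the 9.54th percentile of validation.
threshold = np.percentile(preds_val, q=9.54)
y_pred = preds_test > threshold  # scores above the threshold get the positive label
```

By construction roughly the lowest ~9.5% of validation-like scores fall below the threshold, so the low-scoring synthetic samples are all flagged while most high-scoring ones are not.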
# + colab={"base_uri": "https://localhost:8080/"} id="UDqW_ig3kv9P" outputId="59b8cee8-1266-43b8-9e44-a25e10a04390"
#gamma = dmrff.rff_layer.gamma_val.numpy()
dim = 4
print(f"{dim}x{dim} Pure, experiment AdaptiveRFF")
#print("Gamma:", gamma)
## Training pure state and create the Unitary matrix to initialize such state
psi_train = X_feat_train.sum(axis = 0)
psi_train = psi_train / np.linalg.norm(psi_train)
preds_val_expected = np.abs(X_feat_val @ psi_train)   # |<x, psi>|, same as sqrt((.)**2)
preds_test_expected = np.abs(X_feat_test @ psi_train)
print(classification(preds_val_expected, preds_test_expected, y_test))
print(f"AUC = {round(roc_auc_score(y_test, preds_test_expected), 4)}")
# + colab={"base_uri": "https://localhost:8080/"} id="DFIFOuKflIMh" outputId="52455b00-c00f-472f-9b9b-104e647cc252"
#gamma = dmrff.rff_layer.gamma_val.numpy()
dim = 4
print(f"{dim}x{dim} mixed, experiment AdaptiveRFF")
#print("Gamma:", gamma)
## Training mixed state and create the Unitary matrix to initialize such state
rho_train = np.zeros((dim, dim))
#for i in range(1000):
for i in range(len(X_feat_train)):
rho_train += np.outer(X_feat_train[i], X_feat_train[i])
rho_train = rho_train / len(X_feat_train)
# Classical prediction
preds_val_mixed = np.zeros(len(X_feat_val))
for i in range(len(X_feat_val)):
preds_val_mixed[i] = X_feat_val[i].T @ rho_train @ X_feat_val[i]
preds_test_mixed = np.zeros(len(X_feat_test))
for i in range(len(X_feat_test)):
preds_test_mixed[i] = X_feat_test[i].T @ rho_train @ X_feat_test[i]
print(classification(preds_val_mixed, preds_test_mixed, y_test))
print(f"AUC = {round(roc_auc_score(y_test, preds_test_mixed), 4)}")
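The density matrix and the per-sample quadratic forms above can be computed without Python loops. A minimal sketch on synthetic unit-norm features (shapes chosen to mirror the 4-dimensional features, but otherwise illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 4))
X_train /= np.linalg.norm(X_train, axis=1, keepdims=True)
X_val = rng.normal(size=(20, 4))
X_val /= np.linalg.norm(X_val, axis=1, keepdims=True)

# Training density matrix rho = (1/n) * sum_i x_i x_i^T, without the loop.
rho = X_train.T @ X_train / len(X_train)

# Quadratic form x^T rho x for every row at once.
preds = np.einsum('ni,ij,nj->n', X_val, rho, X_val)

# Same values as the element-wise loop used above.
preds_loop = np.array([x @ rho @ x for x in X_val])
```

Because every training vector has unit norm, `rho` has unit trace, as a density matrix should.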
# + id="hx6sPiXgzUik"
| Paper Experiments/Anomaly_Detection_AdaptiveFF_Real_Quantum_Computer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
@author: <NAME>
"""
import numpy as np
import tensorflow as tf
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import KFold
from sklearn.metrics import roc_auc_score, confusion_matrix
# +
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255.
X_test /= 255.
X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
roc_Decision = 0
# +
"""Decision Tree"""
tree = DecisionTreeClassifier()
# tree = DecisionTreeClassifier(max_depth=100, min_samples_leaf=1)
tree.fit(X_train, y_train)
y_pred = tree.predict(X_test)
# Accuracy over the 10,000 test samples (avoids shadowing the built-in `sum`).
accuracy = np.mean(y_pred == y_test)
print('Test set score: %f' % accuracy)
confusion_1 = confusion_matrix(y_test, y_pred)
print(confusion_1)
# +
"""Bagging Random Forest"""
model = RandomForestClassifier()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
# k-fold cross validation
score = np.mean(cross_val_score(model,X_train,y_train,cv=10))
accuracy = np.mean(y_pred == y_test)
print('Test set score: %f, cross valid score: %f' % (accuracy, score))
confusion_2 = confusion_matrix(y_test, y_pred)
print(confusion_2)
# +
"""Boosting Adaboost"""
model = AdaBoostClassifier()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
# k-fold cross validation
score = np.mean(cross_val_score(model,X_train,y_train,cv=10))
accuracy = np.mean(y_pred == y_test)
print('Test set score: %f, cross valid score: %f' % (accuracy, score))
confusion_3 = confusion_matrix(y_test, y_pred)
print(confusion_3)
# -
| hw2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/majahn/intro_data_analysis_biophys_101/blob/main/code/Interactive_Plotting_With_Bokeh_First_Steps.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="i1TV5FqkKKY_"
import pandas as pd
# + id="HGt65wBNKNBY"
df = pd.read_table("https://raw.githubusercontent.com/majahn/intro_data_analysis_biophys_101/main/data/simple_examples/mpg.dat", header=None, sep=r"\s+", names=['time', 'value'])
# + id="m51GpzTLKXJ9"
import bokeh.plotting
import bokeh.io
bokeh.io.output_notebook()
# + id="azQ7d5G_LKgX"
p = bokeh.plotting.figure(
height=250,
width=550,
x_axis_label="time",
y_axis_label="value",
title="Our first plot"
)
# + colab={"base_uri": "https://localhost:8080/", "height": 267} id="L_cLZZpCLRpy" outputId="f1dba5dc-9d62-49a4-d8d6-7c10a21ca348"
p.line(
x=df['time'],
y=df['value'],
line_width=2,
)
bokeh.io.show(p)
# + id="GIojDFxBL3R2"
# + [markdown] id="QjEU48bcL6aJ"
# ### Descriptive Statistics
# + colab={"base_uri": "https://localhost:8080/"} id="FfGoeaBOL_Ca" outputId="513d5525-554b-4019-8390-eabe5bdb66b5"
df["value"].mean()
# + colab={"base_uri": "https://localhost:8080/"} id="kRruxUO6MGpG" outputId="14bf1325-58d1-4bc0-b065-05b36ff11fe7"
df["value"].median()
# + colab={"base_uri": "https://localhost:8080/"} id="LbI3dvJ4MQpO" outputId="2480fca6-59c2-4c20-f1bb-741cc57f6351"
df["value"].max()
# + id="zN2YgJq9MKt_"
| code/Interactive_Plotting_With_Bokeh_First_Steps.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] colab_type="text" id="iCUZvZvBB7VD" slideshow={"slide_type": "-"}
# # Linear Mixed Effects Models
#
# A linear mixed effects model is a simple approach for modeling structured linear relationships (Harville, 1997; <NAME>, 1982). Each data point consists of inputs of varying type—categorized into groups—and a real-valued output. A linear mixed effects model is a _hierarchical model_: it shares statistical strength across groups in order to improve inferences about any individual data point.
#
# In this tutorial, we demonstrate linear mixed effects models with a real-world example in TensorFlow Probability. We'll use the Edward2 (`tfp.edward2`) and Markov Chain Monte Carlo (`tfp.mcmc`) modules.
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="2brwVZwEB7VF" slideshow={"slide_type": "-"}
# %matplotlib inline
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import csv
import IPython
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import requests
import tensorflow as tf
import tensorflow_probability as tfp
import warnings
from tensorflow_probability import edward2 as ed
plt.style.use('ggplot')
# + [markdown] colab_type="text" id="eikJTmPgB7VJ" slideshow={"slide_type": "-"}
# ## Data
#
# We use the `InstEval` data set from the popular [`lme4` package in R](https://CRAN.R-project.org/package=lme4) (Bates et al., 2015). It is a data set of courses and their evaluation ratings. Each course includes metadata such as `students`, `instructors`, and `departments`, and the response variable of interest is the evaluation rating.
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="lZ8OfS3cDMeG"
def load_insteval():
"""Loads the InstEval data set.
It contains 73,421 university lecture evaluations by students at ETH
Zurich with a total of 2,972 students, 2,160 professors and
lecturers, and several student, lecture, and lecturer attributes.
Implementation is built from the `observations` Python package.
Returns:
Tuple of np.darray `x_train` with 73,421 rows and 7 columns and
dictionary `metadata` of column headers (feature names).
"""
url = ('https://raw.github.com/vincentarelbundock/Rdatasets/master/csv/'
'lme4/InstEval.csv')
with requests.Session() as s:
download = s.get(url)
f = download.content.decode().splitlines()
iterator = csv.reader(f)
columns = next(iterator)[1:]
x_train = np.array([row[1:] for row in iterator], dtype=int)
metadata = {'columns': columns}
return x_train, metadata
# + [markdown] colab_type="text" id="Um0EhvaDQcVI"
# We load and preprocess the data set. We hold out 20% of the data so we can evaluate our fitted model on unseen data points. Below we visualize the first few rows.
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "height": 204} colab_type="code" executionInfo={"elapsed": 66, "status": "ok", "timestamp": 1522960059205, "user": {"displayName": "", "photoUrl": "", "userId": ""}, "user_tz": 420} id="YY_VbNt6fkcp" outputId="517ba48b-6692-4c45-e557-c704721506b8"
data, metadata = load_insteval()
data = pd.DataFrame(data, columns=metadata['columns'])
data = data.rename(columns={'s': 'students',
'd': 'instructors',
'dept': 'departments',
'y': 'ratings'})
data['students'] -= 1 # start index by 0
# Remap categories to start from 0 and end at max(category).
data['instructors'] = data['instructors'].astype('category').cat.codes
data['departments'] = data['departments'].astype('category').cat.codes
train = data.sample(frac=0.8)
test = data.drop(train.index)
train.head()
# + [markdown] colab_type="text" id="qWttG6OaVFMO"
# We set up the data set in terms of a `features` dictionary of inputs and a `labels` output corresponding to the ratings. Each feature is encoded as an integer and each label (evaluation rating) is encoded as a floating point number.
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="NzfVQJN9B7VQ" slideshow={"slide_type": "-"}
get_value = lambda dataframe, key, dtype: dataframe[key].values.astype(dtype)
features_train = {
k: get_value(train, key=k, dtype=np.int32)
for k in ['students', 'instructors', 'departments', 'service']}
labels_train = get_value(train, key='ratings', dtype=np.float32)
features_test = {k: get_value(test, key=k, dtype=np.int32)
for k in ['students', 'instructors', 'departments', 'service']}
labels_test = get_value(test, key='ratings', dtype=np.float32)
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "height": 85} colab_type="code" executionInfo={"elapsed": 239, "status": "ok", "timestamp": 1523409608178, "user": {"displayName": "", "photoUrl": "", "userId": ""}, "user_tz": 420} id="80ylfxWtB7VT" outputId="50ec4661-f4ab-4255-c20d-66b7adf7cb60" slideshow={"slide_type": "-"}
num_students = max(features_train['students']) + 1
num_instructors = max(features_train['instructors']) + 1
num_departments = max(features_train['departments']) + 1
num_observations = train.shape[0]
print("Number of students:", num_students)
print("Number of instructors:", num_instructors)
print("Number of departments:", num_departments)
print("Number of observations:", num_observations)
# + [markdown] colab_type="text" id="jMRMLuWwB7VX" slideshow={"slide_type": "-"}
# ## Model
#
# A typical linear model assumes independence, where any pair of data points has a constant linear relationship. In the `InstEval` data set, observations arise in groups each of which may have varying slopes and intercepts. Linear mixed effects models, also known as hierarchical linear models or multilevel linear models, capture this phenomenon (Gelman & Hill, 2006).
#
# Examples of this phenomenon include:
#
# # + __Students__. Observations from a student are not independent: some students may systematically give low (or high) lecture ratings.
# # + __Instructors__. Observations from an instructor are not independent: we expect good teachers to generally have good ratings and bad teachers to generally have bad ratings.
# # + __Departments__. Observations from a department are not independent: certain departments may generally have dry material or stricter grading and thus be rated lower than others.
#
# To capture this, recall that for a data set of $N\times D$ features $\mathbf{X}$ and $N$ labels $\mathbf{y}$, linear regression posits the model
#
# \begin{equation*}
# \mathbf{y} = \mathbf{X}\beta + \alpha + \epsilon,
# \end{equation*}
#
# where there is a slope vector $\beta\in\mathbb{R}^D$, intercept $\alpha\in\mathbb{R}$, and random noise $\epsilon\sim\text{Normal}(\mathbf{0}, \mathbf{I})$. We say that $\beta$ and $\alpha$ are "fixed effects": they are effects held constant across the population of data points $(x, y)$. An equivalent formulation of the equation as a likelihood is $\mathbf{y} \sim \text{Normal}(\mathbf{X}\beta + \alpha, \mathbf{I})$. This likelihood is maximized during inference in order to find point estimates of $\beta$ and $\alpha$ that fit the data.
#
# A linear mixed effects model extends linear regression as
#
# \begin{align*}
# \eta &\sim \text{Normal}(\mathbf{0}, \sigma^2 \mathbf{I}), \\
# \mathbf{y} &= \mathbf{X}\beta + \mathbf{Z}\eta + \alpha + \epsilon.
# \end{align*}
#
# where there is still a slope vector $\beta\in\mathbb{R}^P$, intercept $\alpha\in\mathbb{R}$, and random noise $\epsilon\sim\text{Normal}(\mathbf{0}, \mathbf{I})$. In addition, there is a term $\mathbf{Z}\eta$, where $\mathbf{Z}$ is a features matrix and $\eta\in\mathbb{R}^Q$ is a vector of random slopes; $\eta$ is normally distributed with variance component parameter $\sigma^2$. $\mathbf{Z}$ is formed by partitioning the original $N\times D$ features matrix in terms of a new $N\times P$ matrix $\mathbf{X}$ and $N\times Q$ matrix $\mathbf{Z}$, where $P + Q=D$: this partition allows us to model the features separately using the fixed effects $\beta$ and the latent variable $\eta$ respectively.
#
# We say the latent variables $\eta$ are "random effects": they are effects that vary across the population (although they may be constant across subpopulations). In particular, because the random effects $\eta$ have mean 0, the data label's mean is captured by $\mathbf{X}\beta + \alpha$. The random effects component $\mathbf{Z}\eta$ captures variations in the data: for example, "Instructor \#54 is rated 1.4 points higher than the mean."
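The generative model just described can be simulated directly. A minimal NumPy sketch (all sizes and parameter values illustrative), using a single random-intercept grouping for concreteness:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, Q = 500, 2, 10           # observations, fixed-effect dims, number of groups
sigma = 0.5                    # random-effect scale

X = rng.normal(size=(N, P))            # fixed-effect features
group = rng.integers(0, Q, size=N)     # group membership of each observation
Z = np.eye(Q)[group]                   # one-hot random-effect design matrix

beta = np.array([1.0, -2.0])           # fixed slopes
alpha = 0.3                            # intercept
eta = rng.normal(0.0, sigma, size=Q)   # random effects, mean zero
eps = rng.normal(size=N)               # observation noise

# y = X beta + Z eta + alpha + eps, matching the equations above.
y = X @ beta + Z @ eta + alpha + eps
```

Here `Z @ eta` simply looks up each observation's group offset, which is why the implementation below uses `tf.gather` instead of materializing a one-hot design matrix.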
# + [markdown] colab_type="text" id="7B6ROTDQdTjH"
# In this tutorial, we posit the following effects:
#
# # + Fixed effects: `service`. `service` is a binary covariate corresponding to whether the course belongs to the instructor's main department. No matter how much additional data we collect, it can only take on values $0$ and $1$.
# # + Random effects: `students`, `instructors`, and `departments`. Given more observations from the population of course evaluation ratings, we may be looking at new students, teachers, or departments.
#
# In the syntax of R's lme4 package (Bates et al., 2015), the model can be summarized as
#
# ```
# ratings ~ service + (1|students) + (1|instructors) + (1|departments) + 1
# ```
# where `x` denotes a fixed effect, `(1|x)` denotes a random effect for `x`, and `1` denotes an intercept term.
#
# We implement this model below as an Edward program.
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="12nqmEIsB7VY" slideshow={"slide_type": "-"}
def linear_mixed_effects_model(features):
# Set up fixed effects and other parameters.
intercept = tf.get_variable("intercept", []) # alpha in eq
effect_service = tf.get_variable("effect_service", []) # beta in eq
stddev_students = tf.exp(
tf.get_variable("stddev_unconstrained_students", [])) # sigma in eq
stddev_instructors = tf.exp(
tf.get_variable("stddev_unconstrained_instructors", [])) # sigma in eq
stddev_departments = tf.exp(
tf.get_variable("stddev_unconstrained_departments", [])) # sigma in eq
# Set up random effects.
effect_students = ed.MultivariateNormalDiag(
loc=tf.zeros(num_students),
scale_identity_multiplier=stddev_students,
name="effect_students")
effect_instructors = ed.MultivariateNormalDiag(
loc=tf.zeros(num_instructors),
scale_identity_multiplier=stddev_instructors,
name="effect_instructors")
effect_departments = ed.MultivariateNormalDiag(
loc=tf.zeros(num_departments),
scale_identity_multiplier=stddev_departments,
name="effect_departments")
# Set up likelihood given fixed and random effects.
# Note we use `tf.gather` instead of matrix-multiplying a design matrix of
# one-hot vectors. The latter is memory-intensive if there are many groups.
ratings = ed.Normal(
loc=(effect_service * features["service"] +
tf.gather(effect_students, features["students"]) +
tf.gather(effect_instructors, features["instructors"]) +
tf.gather(effect_departments, features["departments"]) +
intercept),
scale=1.,
name="ratings")
return ratings
# Wrap model in a template. All calls to the model template will use the same
# TensorFlow variables.
model_template = tf.make_template("model", linear_mixed_effects_model)
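# The memory point made in the likelihood's comments can be checked in isolation: gathering an effects vector by integer group ids gives the same result as multiplying a one-hot design matrix by that vector, without materializing the dense matrix. A standalone NumPy sketch with made-up values:

```python
import numpy as np

num_groups = 4
effects = np.array([0.1, -0.3, 0.7, 0.2])   # one effect per group
ids = np.array([2, 0, 3, 2, 1])             # group id of each observation

gathered = effects[ids]                     # NumPy analogue of tf.gather

one_hot = np.eye(num_groups)[ids]           # (5, 4) dense design matrix
via_matmul = one_hot @ effects              # same result, more memory

assert np.allclose(gathered, via_matmul)
print(gathered)  # [ 0.7  0.1  0.2  0.7 -0.3]
```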
# + [markdown] colab_type="text" id="3G_0t3jiZps2"
# As an Edward program, we can also visualize the model's structure in terms of its computational graph. This graph encodes dataflow across the random variables in the program, making explicit their relationships in terms of a graphical model (Jordan, 2003).
#
# As a statistical tool, we might look at the graph in order to better see, for example, that `intercept` and `effect_service` are conditionally dependent given `ratings`; this may be harder to see from the source code if the program is written with classes, cross references across modules, and/or subroutines. As a computational tool, we might also notice latent variables flow into the `ratings` variable via `tf.gather` ops. This may be a bottleneck on certain hardware accelerators if indexing `Tensor`s is expensive; visualizing the graph makes this readily apparent.
#
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "height": 641} colab_type="code" executionInfo={"elapsed": 3313, "status": "ok", "timestamp": 1523409611834, "user": {"displayName": "", "photoUrl": "", "userId": ""}, "user_tz": 420} id="ZQQZrTtBZww3" outputId="f2866a4e-9c05-4f06-9ecb-519b0cf5c0e0"
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = bytes("<stripped %d bytes>"%size, 'utf-8')
return strip_def
def draw_graph(model, *args, **kwargs):
"""Visualize TensorFlow graph."""
graph = tf.Graph()
with graph.as_default():
model(*args, **kwargs)
graph_def = graph.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=32)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '"'))
IPython.display.display(IPython.display.HTML(iframe))
draw_graph(linear_mixed_effects_model, features_train)
# + [markdown] colab_type="text" id="ZPZTWsCeB7Va" slideshow={"slide_type": "-"}
# ## Parameter Estimation
#
# Given data, the goal of inference is to fit the model's fixed effects slope $\beta$, intercept $\alpha$, and variance component parameter $\sigma^2$. The maximum likelihood principle formalizes this task as
#
# $$
# \max_{\beta, \alpha, \sigma}~\log p(\mathbf{y}\mid \mathbf{X}, \mathbf{Z}; \beta, \alpha, \sigma) = \max_{\beta, \alpha, \sigma}~\log \int p(\eta; \sigma) ~p(\mathbf{y}\mid \mathbf{X}, \mathbf{Z}, \eta; \beta, \alpha)~d\eta.
# $$
#
# In this tutorial, we use the Monte Carlo EM algorithm to maximize this marginal density (Dempster et al., 1977; Wei and Tanner, 1990).¹ We perform Markov chain Monte Carlo to compute the expectation of the conditional likelihood with respect to the random effects ("E-step"), and we perform gradient descent to maximize the expectation with respect to the parameters ("M-step"):
#
# # + For the E-step, we set up Hamiltonian Monte Carlo (HMC). It takes a current state—the student, instructor, and department effects—and returns a new state. We assign the new state to TensorFlow variables, which will denote the state of the HMC chain.
#
# # + For the M-step, we use the posterior sample from HMC to calculate an unbiased estimate of the marginal likelihood up to a constant. We then apply its gradient with respect to the parameters of interest. This produces an unbiased stochastic descent step on the marginal likelihood. We implement it with the Adam TensorFlow optimizer and minimize the negative of the marginal.
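# The same E/M alternation can be sketched on a toy model small enough that the E-step posterior is exact: $y_i = \eta + \epsilon_i$ with $\eta\sim\text{Normal}(0,\sigma^2)$ and unit noise. This toy is ours, not the tutorial's model, and its E-step uses direct posterior sampling instead of HMC:

```python
import numpy as np

rng = np.random.RandomState(0)
n, true_sigma = 50, 2.0
y = true_sigma * rng.randn() + rng.randn(n)   # one shared effect plus unit noise

sigma = 1.0                                   # initial guess for the variance component
for _ in range(100):
    # E-step: sample eta from its exact Normal posterior given y and sigma
    # (conjugate Normal-Normal, so no Markov chain is needed in this toy).
    post_prec = n + 1.0 / sigma**2
    post_mean = y.sum() / post_prec
    eta_samples = post_mean + rng.randn(200) / np.sqrt(post_prec)
    # M-step: maximize E[log p(eta; sigma)], whose solution is sigma^2 = E[eta^2].
    sigma = np.sqrt(np.mean(eta_samples**2))

print(round(sigma, 3))
```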
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="F7uOcwQFB7Vb" slideshow={"slide_type": "-"}
log_joint = ed.make_log_joint_fn(model_template)
def target_log_prob_fn(effect_students, effect_instructors, effect_departments):
"""Unnormalized target density as a function of states."""
return log_joint( # fix `features` and `ratings` to the training data
features=features_train,
effect_students=effect_students,
effect_instructors=effect_instructors,
effect_departments=effect_departments,
ratings=labels_train)
tf.reset_default_graph()
# Set up E-step (MCMC).
effect_students = tf.get_variable( # `trainable=False` so unaffected by M-step
"effect_students", [num_students], trainable=False)
effect_instructors = tf.get_variable(
"effect_instructors", [num_instructors], trainable=False)
effect_departments = tf.get_variable(
"effect_departments", [num_departments], trainable=False)
hmc = tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=target_log_prob_fn,
step_size=0.015,
num_leapfrog_steps=3)
current_state = [effect_students, effect_instructors, effect_departments]
with warnings.catch_warnings():
# TensorFlow raises a warning about converting sparse IndexedSlices to a
# dense Tensor during gradient computation. This can consume a large amount
# of memory. We're okay with that as the number of categories is small.
warnings.simplefilter("ignore")
next_state, kernel_results = hmc.one_step(
current_state=current_state,
previous_kernel_results=hmc.bootstrap_results(current_state))
expectation_update = tf.group(
effect_students.assign(next_state[0]),
effect_instructors.assign(next_state[1]),
effect_departments.assign(next_state[2]))
# Set up M-step (gradient descent).
# The following should work. However, TensorFlow raises an error about taking
# gradients through IndexedSlices tensors. This may be a TF bug. For now,
# we recompute the target's log probability at the current state.
# loss = -kernel_results.accepted_results.target_log_prob
with tf.control_dependencies([expectation_update]):
loss = -target_log_prob_fn(effect_students,
effect_instructors,
effect_departments)
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
minimization_update = optimizer.minimize(loss)
# + [markdown] colab_type="text" id="6BaHczzpkt0k"
# We perform a warm-up stage, which runs one MCMC chain for a number of iterations so that training may be initialized within the posterior's probability mass. We then run a training loop. It jointly runs the E and M-steps and records values during training.
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "height": 136} colab_type="code" executionInfo={"elapsed": 2937344, "status": "ok", "timestamp": 1523412552856, "user": {"displayName": "", "photoUrl": "", "userId": ""}, "user_tz": 420} id="zxbcYtrUt3OG" outputId="815c1ca0-526a-4c10-e810-97f92d57c3f7"
init = tf.global_variables_initializer()
num_warmup_iters = 1000
num_iters = 1500
num_accepted = 0
effect_students_samples = np.zeros([num_iters, num_students])
effect_instructors_samples = np.zeros([num_iters, num_instructors])
effect_departments_samples = np.zeros([num_iters, num_departments])
loss_history = np.zeros([num_iters])
sess = tf.Session()
sess.run(init)
# Run warm-up stage.
for t in range(num_warmup_iters):
_, is_accepted_val = sess.run(
[expectation_update, kernel_results.is_accepted])
num_accepted += is_accepted_val
if t % 500 == 0 or t == num_warmup_iters - 1:
print("Warm-Up Iteration: {:>3} Acceptance Rate: {:.3f}".format(
t, num_accepted / (t + 1)))
num_accepted = 0 # reset acceptance rate counter
# Run training.
for t in range(num_iters):
for _ in range(5): # run 5 MCMC iterations before every joint EM update
_ = sess.run(expectation_update)
[
_,
_,
effect_students_val,
effect_instructors_val,
effect_departments_val,
is_accepted_val,
loss_val,
] = sess.run([
expectation_update,
minimization_update,
effect_students,
effect_instructors,
effect_departments,
kernel_results.is_accepted,
loss,
])
effect_students_samples[t, :] = effect_students_val
effect_instructors_samples[t, :] = effect_instructors_val
effect_departments_samples[t, :] = effect_departments_val
num_accepted += is_accepted_val
loss_history[t] = loss_val
if t % 500 == 0 or t == num_iters - 1:
print("Iteration: {:>4} Acceptance Rate: {:.3f} Loss: {:.3f}".format(
t, num_accepted / (t + 1), loss_val))
# + [markdown] colab_type="text" id="r6U2zkdbHj5z"
# Above, we did not run the algorithm until a convergence threshold was detected. To check whether training was sensible, we verify that the loss function indeed tends to converge over training iterations.
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "height": 294} colab_type="code" executionInfo={"elapsed": 5717, "status": "ok", "timestamp": 1523412832731, "user": {"displayName": "", "photoUrl": "", "userId": ""}, "user_tz": 420} id="HR4A6FLCwD7b" outputId="9931413d-ea23-4c55-9672-40686b5edd6a"
plt.plot(loss_history)
plt.ylabel(r'Loss $-\log$ $p(y\mid\mathbf{x})$')
plt.xlabel('Iteration')
plt.show()
# + [markdown] colab_type="text" id="Fz7FphO9LwVE"
# We also use a trace plot, which shows the Markov chain Monte Carlo algorithm's trajectory across specific latent dimensions. Below we see that specific instructor effects indeed meaningfully transition away from their initial state and explore the state space. The trace plot also indicates that the effects differ across instructors but with similar mixing behavior.
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "height": 294} colab_type="code" executionInfo={"elapsed": 2009, "status": "ok", "timestamp": 1523412834873, "user": {"displayName": "", "photoUrl": "", "userId": ""}, "user_tz": 420} id="_NvaIhgrvY9o" outputId="e194a815-4cc2-453d-bc44-8f281a717d7c"
for i in range(7):
plt.plot(effect_instructors_samples[:, i])
plt.legend([i for i in range(7)], loc='lower right')
plt.ylabel('Instructor Effects')
plt.xlabel('Iteration')
plt.show()
# + [markdown] colab_type="text" id="-xVCGWZoB7Vd" slideshow={"slide_type": "-"}
# ## Criticism
#
# Above, we fitted the model. We now look into criticizing its fit using data, which lets us explore and better understand the model. One such technique is a residual plot, which plots the difference between the model's predictions and ground truth for each data point. If the model were correct, then their difference should be standard normally distributed; any deviations from this pattern in the plot indicate model misfit.
#
# We build the residual plot by first forming the posterior predictive distribution over ratings, which replaces the prior distribution on the random effects with its posterior given training data. In particular, we run the model forward and intercept its dependence on prior random effects with their inferred posterior means.²
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="p4vreJekB7Vf" slideshow={"slide_type": "-"}
def interceptor(rv_constructor, *rv_args, **rv_kwargs):
"""Replaces prior on effects with empirical posterior mean from MCMC."""
name = rv_kwargs.pop("name")
if name == "effect_students":
rv_kwargs["value"] = np.mean(effect_students_samples, 0)
elif name == "effect_instructors":
rv_kwargs["value"] = np.mean(effect_instructors_samples, 0)
elif name == "effect_departments":
rv_kwargs["value"] = np.mean(effect_departments_samples, 0)
return rv_constructor(*rv_args, **rv_kwargs)
with ed.interception(interceptor):
ratings_posterior = model_template(features=features_test)
ratings_prediction = ratings_posterior.distribution.mean()
# + [markdown] colab_type="text" id="zTQJ3d-Hv93z"
# Upon visual inspection, the residuals look roughly standard-normally distributed. However, the fit is not perfect: there is more probability mass in the tails than a normal distribution would have, which indicates the model might improve its fit by relaxing its normality assumptions.
#
# In particular, although it is most common to use a normal distribution to model ratings in the `InstEval` data set, a closer look at the data reveals that course evaluation ratings are in fact ordinal values from 1 to 5. This suggests that we should be using an ordinal distribution, or even Categorical if we have enough data to throw away the relative ordering. This is a one-line change to the Edward program above; the same inference code is applicable.
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "height": 287} colab_type="code" executionInfo={"elapsed": 1022, "status": "ok", "timestamp": 1523412837934, "user": {"displayName": "", "photoUrl": "", "userId": ""}, "user_tz": 420} id="0jIxfwuvEWLG" outputId="25c16070-d34a-4836-ee74-33a55e9b3adc"
ratings_pred = sess.run(ratings_prediction)
plt.title("Residuals for Predicted Ratings on Test Set")
plt.xlim(-4, 4)
plt.ylim(0, 800)
plt.hist(ratings_pred - labels_test, 75)
plt.show()
# + [markdown] colab_type="text" id="wi4hnI8UxFD2"
# To explore how the model makes individual predictions, we look at the histogram of effects for students, instructors, and departments. This lets us understand how individual elements in a data point's feature vector tend to influence the outcome.
#
# Not surprisingly, we see below that each student typically has little effect on an instructor's evaluation rating. Interestingly, we see that the department an instructor belongs to has a large effect.
# + colab={"autoexec": {"startup": false, "wait_interval": 0}} colab_type="code" id="JY1p26-W8hVz"
[
effect_students_mean,
effect_instructors_mean,
effect_departments_mean,
] = sess.run([
tf.reduce_mean(effect_students_samples, 0),
tf.reduce_mean(effect_instructors_samples, 0),
tf.reduce_mean(effect_departments_samples, 0),
])
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "height": 287} colab_type="code" executionInfo={"elapsed": 1074, "status": "ok", "timestamp": 1523412847283, "user": {"displayName": "", "photoUrl": "", "userId": ""}, "user_tz": 420} id="MU-L604RFkxg" outputId="daae7805-c846-4a99-c301-bf5d412e77ca"
plt.title("Histogram of Student Effects")
plt.hist(effect_students_mean, 75)
plt.show()
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "height": 287} colab_type="code" executionInfo={"elapsed": 834, "status": "ok", "timestamp": 1523412848724, "user": {"displayName": "", "photoUrl": "", "userId": ""}, "user_tz": 420} id="22qgTW7SGulD" outputId="3e35da15-558d-4d42-c476-b50903702585"
plt.title("Histogram of Instructor Effects")
plt.hist(effect_instructors_mean, 75)
plt.show()
# + colab={"autoexec": {"startup": false, "wait_interval": 0}, "height": 287} colab_type="code" executionInfo={"elapsed": 605, "status": "ok", "timestamp": 1523412849489, "user": {"displayName": "", "photoUrl": "", "userId": ""}, "user_tz": 420} id="lTd2_uodGu2F" outputId="2bad980f-eafd-4ccd-dbd9-02626fe3dfd1"
plt.title("Histogram of Department Effects")
plt.hist(effect_departments_mean, 75)
plt.show()
# + [markdown] colab_type="text" id="Ck3cPwIjvyqO"
# ## Footnotes
#
# ¹ Linear mixed effects models are a special case where the marginal density can be computed analytically. For the purposes of this tutorial, we demonstrate Monte Carlo EM, which applies more readily to non-analytic marginal densities, such as when the likelihood is extended to be Categorical instead of Normal.
#
# ² For simplicity, we form the predictive distribution's mean using only one forward pass of the model. This is done by conditioning on the posterior mean and is valid for linear mixed effects models. However, this is not valid in general: the posterior predictive distribution's mean is typically intractable and requires taking the empirical mean across multiple forward passes of the model given posterior samples.
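# Footnote 2 can be made concrete with synthetic numbers: in general, the predictive mean averages forward passes over posterior samples, and only in the linear case does this collapse to a single pass at the posterior mean. A standalone sketch (all values are synthetic stand-ins):

```python
import numpy as np

rng = np.random.RandomState(0)
eta_samples = rng.randn(500, 3)   # hypothetical posterior samples of 3 random effects
Z = rng.randn(10, 3)              # random-effect features for 10 test points
fixed_part = rng.randn(10)        # stand-in for X @ beta + alpha

# General recipe: empirical mean over forward passes at posterior samples.
mean_over_passes = np.mean(fixed_part + eta_samples @ Z.T, axis=0)

# Linear special case: one pass, plugging in the posterior mean of eta.
plug_in = fixed_part + Z @ eta_samples.mean(axis=0)

assert np.allclose(mean_over_passes, plug_in)
```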
# + [markdown] colab_type="text" id="8pm6qMKvB7WB" slideshow={"slide_type": "-"}
# ## Acknowledgments
#
# This tutorial was originally written in Edward 1.0 ([source](https://github.com/blei-lab/edward/blob/master/notebooks/linear_mixed_effects_models.ipynb)). We thank all contributors to writing and revising that version.
# + [markdown] colab_type="text" id="sHw7WpM1IzLO"
# ## References
#
# 1. <NAME> and <NAME> and <NAME> and <NAME>. Fitting Linear Mixed-Effects Models Using lme4. _Journal of Statistical Software_, 67(1):1-48, 2015.
#
# 2. <NAME>, <NAME>, and <NAME>. Maximum likelihood from incomplete data via the EM algorithm. _Journal of the Royal Statistical Society, Series B (Methodological)_, 1-38, 1977.
#
# 3. <NAME> and <NAME>. _Data analysis using regression and multilevel/hierarchical models._ Cambridge University Press, 2006.
#
# 4. <NAME>. Maximum likelihood approaches to variance component estimation and to related problems. _Journal of the American Statistical Association_, 72(358):320-338, 1977.
#
# 5. <NAME>. An Introduction to Graphical Models. Technical Report, 2003.
#
# 6. <NAME> and <NAME>. Random-effects models for longitudinal data. _Biometrics_, 963-974, 1982.
#
# 7. <NAME> and <NAME>. A Monte Carlo implementation of the EM algorithm and the poor man's data augmentation algorithms. _Journal of the American Statistical Association_, 699-704, 1990.
| tensorflow_probability/examples/jupyter_notebooks/Linear_Mixed_Effects_Models.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
from sklearn.cluster import KMeans
from patternly.detection import AnomalyDetection
from patternly._utils import UnionFind, DirectedGraph
from zedsuite.zutil import Llk
# +
# %%time
# Prepare data
quantized_time_series = pd.read_csv(
"./data/example1.dat", sep=" ", header=None, low_memory=False
).dropna(how="all", axis=1)
# Fit detection pipeline to training data
pipeline = AnomalyDetection(anomaly_sensitivity=2, n_clusters=5, reduce_clusters=True, quantize=False, eps=0.1, verbose=True)
pipeline = pipeline.fit(quantized_time_series)
# +
all_cluster_likelihoods = np.empty(shape=(pipeline.n_clusters, pipeline.n_clusters), dtype=np.float32)
all_ranked_likelihoods = np.empty(shape=(pipeline.n_clusters, pipeline.n_clusters), dtype=np.int32)
for i in range(pipeline.n_clusters):
cluster_llks = []
for pfsafile in pipeline.cluster_PFSA_files:
cluster_data = pipeline.quantized_data[pipeline.quantized_data["cluster"] == i].drop(columns=["cluster"], axis=1)
cluster_llks.append(np.asarray(Llk(data=cluster_data, pfsafile=pfsafile).run(), dtype=np.float32))
# which cluster PFSA each sequence most likely maps back to
closest_matches = np.argmin(cluster_llks, axis=0)
# the likelihoods of the sequences generated by the current PFSA mapping back to each cluster PFSA
cluster_likelihoods = np.count_nonzero(
(closest_matches.reshape(-1, 1) == np.arange(pipeline.n_clusters).reshape(1, -1)),
axis=0
# ) / pipeline.quantized_data[pipeline.quantized_data["cluster"] == i].shape[0]
) / pipeline.cluster_counts[i]
# list of cluster PFSAs sorted in descending order of likelihood
ranked_likelihoods = np.argsort(cluster_likelihoods)[::-1]
all_cluster_likelihoods[i] = cluster_likelihoods
all_ranked_likelihoods[i] = ranked_likelihoods
# -
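# The broadcasting comparison used above to turn closest-match indices into per-cluster fractions is worth seeing on its own; a standalone NumPy sketch with made-up matches:

```python
import numpy as np

n_clusters = 3
closest_matches = np.array([0, 0, 2, 1, 0, 2])   # hypothetical best cluster per sequence

# (6, 1) == (1, 3) broadcasts to a boolean membership matrix; count per column.
counts = np.count_nonzero(
    closest_matches.reshape(-1, 1) == np.arange(n_clusters).reshape(1, -1),
    axis=0)
likelihoods = counts / len(closest_matches)
ranked = np.argsort(likelihoods)[::-1]           # clusters in descending likelihood

print(counts)   # [3 1 2]
print(ranked)   # [0 2 1]
```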
print(all_cluster_likelihoods)
print(all_ranked_likelihoods)
print(pipeline.cluster_counts)
# for i in range(len(all_cluster_likelihoods)):
# if all_ranked_likelihoods[i][0] != i:
# all_cluster_likelihoods[all_ranked_likelihoods[i][0]][i] += 0.1
# print(all_cluster_likelihoods)
graph = DirectedGraph(5)
graph.from_matrix(all_cluster_likelihoods, threshold=0)
print(graph.find_scc())
len(set(graph.low_links))
graph.graph
graph = DirectedGraph(5)
graph.from_matrix(all_cluster_likelihoods >= 0.1)
graph.graph
# graph.find_scc()
# +
graph = UnionFind(pipeline.n_clusters)
for i in range(pipeline.n_clusters):
best_match = all_ranked_likelihoods[i][0]
second_best_match = all_ranked_likelihoods[i][1]
if best_match != i:
graph.union(i, best_match, ranks=(all_cluster_likelihoods[i]+all_cluster_likelihoods[best_match]))
if second_best_match != i and all_cluster_likelihoods[i][second_best_match] > 2 * (1 / pipeline.n_clusters):
graph.union(i, second_best_match, ranks=(all_cluster_likelihoods[i]+all_cluster_likelihoods[second_best_match]))
print(graph.roots)
print(f"\n{graph.compress_all().roots}: {graph.n_components} components")
print(set(graph.roots))
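# For readers unfamiliar with union-find, here is a minimal standalone version of the merging idea (illustrative only; patternly's `UnionFind` has a different interface, e.g. the `ranks` argument used above):

```python
class SimpleUnionFind:
    """Minimal disjoint-set structure: union clusters, then count components."""

    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i

    def union(self, i, j):
        ri, rj = self.find(i), self.find(j)
        if ri != rj:
            self.parent[ri] = rj

uf = SimpleUnionFind(5)
uf.union(0, 2)   # hypothetical best matches: cluster 0 merges into 2,
uf.union(3, 4)   # cluster 3 merges into 4; cluster 1 stands alone
components = {uf.find(i) for i in range(5)}
print(len(components))  # 3
```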
# +
# %%time
predictions = pd.DataFrame(pipeline.predict())
anomalies = predictions[predictions[0] == True]
print(anomalies.shape[0])
anomalies
# +
from IPython.display import Image, display
from IPython.core.display import HTML
for i, file in enumerate(pipeline.cluster_PFSA_pngs):
print(f"Cluster {i} PFSA")
display(Image(url=f"{file}.png", width=300))
pipeline.print_PFSAs()
| examples/example1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19"
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 5GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# + _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0"
import matplotlib.pyplot as plt
import seaborn as sns
import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
from keras.preprocessing.image import ImageDataGenerator, load_img
# %matplotlib inline
# +
print(os.listdir("/kaggle/input"))
# +
fig, ax = plt.subplots(1, 2, figsize=(15,10))
img_name='NORMAL2-IM-0523-0001.jpeg'
img_norm=load_img('/kaggle/input/chest-xray-pneumonia/chest_xray/chest_xray/train/NORMAL/' + img_name)
ax[0].imshow(img_norm)
ax[0].set_title('NORMAL')
img_name='person1343_virus_2317.jpeg'
img_norm=load_img('/kaggle/input/chest-xray-pneumonia/chest_xray/chest_xray/train/PNEUMONIA/' + img_name)
ax[1].imshow(img_norm)
ax[1].set_title('PNEUMONIA')
# -
img_width,img_height=150,150
train_data_dir = '/kaggle/input/chest-xray-pneumonia/chest_xray/chest_xray/train'
validation_data_dir = '/kaggle/input/chest-xray-pneumonia/chest_xray/chest_xray/val'
test_data_dir = '/kaggle/input/chest-xray-pneumonia/chest_xray/chest_xray/test'
input_shape = (img_width, img_height, 3)
# +
model=Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (1, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (2, 2)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(16, (1, 1),padding='valid'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
# -
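# As a sanity check on the architecture above, the feature-map sizes can be traced by hand: each 'valid' convolution shrinks a side by `kernel_size - 1`, and each 2x2 pooling halves it (integer division). A small pure-Python sketch:

```python
def conv_out(size, kernel):   # 'valid' padding, stride 1
    return size - kernel + 1

def pool_out(size, pool=2):   # non-overlapping 2x2 max pooling
    return size // pool

side = 150
side = pool_out(conv_out(side, 3))   # Conv2D(32, (3, 3)): 150 -> 148, pool -> 74
side = pool_out(conv_out(side, 1))   # Conv2D(32, (1, 1)): 74 -> 74,   pool -> 37
side = pool_out(conv_out(side, 2))   # Conv2D(64, (2, 2)): 37 -> 36,   pool -> 18
side = pool_out(conv_out(side, 1))   # Conv2D(16, (1, 1)): 18 -> 18,   pool -> 9
print(side, side * side * 16)        # 9 1296: units entering Flatten
```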
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
train_datagen = ImageDataGenerator(
rescale=1. / 255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(img_width, img_height),
batch_size=16,
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_data_dir,
target_size=(img_width, img_height),
batch_size=16,
class_mode='binary')
test_generator = test_datagen.flow_from_directory(
    test_data_dir,
    target_size=(img_width, img_height),
    batch_size=16,
    class_mode='binary')
nb_train_samples = 5217
nb_validation_samples = 17
model.fit_generator(
train_generator,
steps_per_epoch=nb_train_samples // 16,
epochs=20,
validation_data=validation_generator,
validation_steps=nb_validation_samples //16)
scores = model.evaluate_generator(test_generator)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
| kernel11745e1115.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Optimisation
# https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html#constrained-minimization-of-multivariate-scalar-functions-minimize
#
# ## One-dimensional optimisation
#
# A sufficient condition for a (strict) local minimum at $x_0$ of a twice differentiable function $f:\mathbf{R}\to \mathbf{R}$ is
#
# $$f'(x_0) = 0, \qquad f''(x_0) > 0$$
#
# Here we want to optimize a univariate function:
#
# $$f(x)=4x^2e^{-2x}$$
#
# We first define the function:
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.lines import Line2D
import numpy as np
from scipy.optimize import fmin
def f1simple(x):
# gamma(2,3) density
if (x < 0):
return (0)
if (x == 0):
return (np.nan)
y = np.exp(-2*x)
return (4 * x**2 * y)
# Next we define the same function but return $f(x)$, $f'(x)$, and $f''(x)$.
#
# $$f'(x)=4(2xe^{-2x}+(-2)x^2e^{-2x})=8x(1-x)e^{-2x}$$
# $$f''(x)=8e^{-2x}(1-4x+2x^2)$$
def f1(x):
# gamma(2,3) density
if (x < 0):
return np.array([0, 0, 0])
if (x == 0):
return np.array([0, 0, np.nan])
y = np.exp(-2.0*x)
return np.array([4.0 * x**2.0 * y, \
8.0 * x*(1.0-x)*y, \
8.0*(1 - 4*x + 2 * x**2)*y])
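# Before trusting these formulas inside an optimizer, a central finite difference can confirm that the analytic $f'$ and $f''$ agree with numerical estimates (a quick sketch; `f1` is repeated here so the snippet is self-contained):

```python
import numpy as np

def f1(x):
    # same function as above, restated for a self-contained check
    y = np.exp(-2.0 * x)
    return np.array([4.0 * x**2 * y,
                     8.0 * x * (1.0 - x) * y,
                     8.0 * (1 - 4*x + 2*x**2) * y])

x, h = 0.8, 1e-5
f, fp, fpp = f1(x)
fp_num = (f1(x + h)[0] - f1(x - h)[0]) / (2 * h)        # central difference for f'
fpp_num = (f1(x + h)[0] - 2*f + f1(x - h)[0]) / h**2    # central difference for f''

print(abs(fp - fp_num) < 1e-6, abs(fpp - fpp_num) < 1e-3)  # True True
```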
# Plotting the function is always a good idea!
# +
xmin = 0.0
xmax = 6.0
xv = np.linspace(xmin, xmax, 200)
fx = np.zeros(len(xv),float) # define column vector
for i in range(len(xv)):
fx[i] = f1(xv[i])[0]
fig, ax = plt.subplots()
ax.plot(xv, fx)
plt.show()
# -
# ### Newton’s Method
#
# To implement Newton's method, we look for a root of the first derivative, i.e. a point where $f'(x)=0$.
# +
myOpt = 1.0
fmaxval = f1simple(myOpt)
xmin = 0.0
xmax = 6.0
xv = np.linspace(xmin, xmax, 200)
fx = np.zeros(len(xv),float) # define column vector
for i in range(len(xv)):
fx[i] = f1(xv[i])[0]
fig, ax = plt.subplots()
ax.plot(xv, fx)
ax.plot(xv, fmaxval*np.ones(len(xv)))
ax.axvline(x = myOpt, ymin=0.0, color='r', linestyle='--')
plt.show()
# -
# We then use an adjustment of the Newton-Raphson root-finding algorithm to find this point.
#
# Newton-Raphson root-finding algorithm:
#
# $$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}$$
#
# We have to adjust this, of course, because the function whose root we seek is itself the first derivative of another function, so that we have:
#
# $$x_{n+1}=x_n-\frac{f'(x_n)}{f''(x_n)}$$
def newton(f3, x0, tol = 1e-9, nmax = 100):
# Newton's method for optimization, starting at x0
# f3 is a function that given x returns the vector
# (f(x), f'(x), f''(x)), for some f
x = x0
f3x = f3(x)
n = 0
while ((abs(f3x[1]) > tol) and (n < nmax)):
x = x - f3x[1]/f3x[2]
f3x = f3(x)
n = n + 1
if (n == nmax):
print("newton failed to converge")
else:
return(x)
# We now use this algorithm to find the maximum point of our function `f1`. Note that Newton's method needs the first and second derivatives of the function; this is why we use `f1`, which returns $f$, $f'$, and $f''$ as an array.
print(" -----------------------------------")
print(" Newton results ")
print(" -----------------------------------")
print(newton(f1, 0.25))
print(newton(f1, 0.5))
print(newton(f1, 0.75))
print(newton(f1, 1.75))
# Derivatives are often hard to compute; therefore a numerical method that does not require the derivative is preferable. An example is bisection in the golden ratio (homework problem 10).
#
#
# ### Bisection in the golden-section
#
# The golden-section method works in one dimension only, but does not need the derivatives of the function. However, the function still needs to be continuous. In order to determine whether there is a local maximum we need three points. Then we can use the following:
#
# If $x_l<x_m<x_r$ and
# 1. $f(x_l)\le f(x_m)$ and
# 2. $f(x_r)\le f(x_m)$ then there must be a local maximum in the interval between $[x_l,x_r]$
#
# This method is very similar to the bisection method (root bracketing).
#
# The method starts with three starting values and operates by successively narrowing the range of values on the specified interval, which makes it relatively slow, but very robust. The technique derives its name from the fact that the algorithm maintains the function values for four points whose three interval widths are in the ratio
#
# $$2-\varphi :2 \times \varphi -3 : 2 - \varphi$$
#
# where $\varphi$ (phi) is the [golden ratio](https://en.wikipedia.org/wiki/Golden_ratio).
#
# In mathematics, two quantities $a$ and $b$ are in the golden ratio if their ratio is the same as the ratio of their sum to the larger of the two quantities. Assume $a>b$ then the ratio:
#
# $$\frac{a}{b}=\frac{a+b}{a}=\varphi$$
#
# Note: $a+b$ is to $a$ as $a$ is to $b$.
# 
#
# The golden ratio is the solution of the quadratic equation:
#
# $$\varphi^2 - \varphi - 1 = 0$$
#
# so that
#
# $$\varphi = \frac{1\pm\sqrt{5}}{2}=[1.6180339887, -0.6180339887]$$
#
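# A quick numerical check (a standalone sketch, not part of the original notebook) confirms that the positive root satisfies both the quadratic and the defining ratio:

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2          # positive root, ~1.6180339887

# phi solves phi^2 - phi - 1 = 0
assert abs(phi**2 - phi - 1) < 1e-12

# two quantities a > b are in the golden ratio if a/b == (a+b)/a
a, b = phi, 1.0
assert abs(a / b - (a + b) / a) < 1e-12
print(phi)
```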
# #### Algorithm
#
# 1. if $x_r-x_l\le \epsilon$ then stop
# 2. if $x_r-x_m>x_m-x_l$ then do (a) otherwise do (b)
#
# a. Let $y=x_m+(x_r-x_m)/(1+\varphi)$; if $f(y)\ge f(x_m)$ then put $x_l=x_m$ and $x_m = y$; otherwise put $x_r=y$
#
# b. Let $y=x_m-(x_m-x_l)/(1+\varphi)$; if $f(y)\ge f(x_m)$ then put $x_r=x_m$ and $x_m = y$; otherwise put $x_l=y$
#
# 3. go back to step 1.
#
def gsection(ftn, xl, xm, xr, tol = 1e-9):
    # applies the golden-section algorithm to maximise ftn
    # we assume that ftn is a function of a single variable
    # and that xl < xm < xr and ftn(xl), ftn(xr) <= ftn(xm)
    #
    # the algorithm iteratively refines xl, xr, and xm and
    # terminates when xr - xl <= tol, then returns xm
    # golden ratio plus one
    gr1 = 1 + (1 + np.sqrt(5))/2
    #
    # successively refine xl, xr, and xm
fl = ftn(xl)
fr = ftn(xr)
fm = ftn(xm)
while ((xr - xl) > tol):
if ((xr - xm) > (xm - xl)):
y = xm + (xr - xm)/gr1
fy = ftn(y)
if (fy >= fm):
xl = xm
fl = fm
xm = y
fm = fy
else:
xr = y
fr = fy
else:
y = xm - (xm - xl)/gr1
fy = ftn(y)
if (fy >= fm):
xr = xm
fr = fm
xm = y
fm = fy
else:
xl = y
fl = fy
return(xm)
# We next use this algorithm to find the maximum point of our function `f1simple`. The golden-section algorithm does not require the derivatives of the function, so we just call the `f1simple` function that only returns the functional value.
print(" -----------------------------------")
print(" Golden section results ")
print(" -----------------------------------")
myOpt = gsection(f1simple, 0.1, 0.25, 1.3)
print(gsection(f1simple, 0.1, 0.25, 1.3))
print(gsection(f1simple, 0.25, 0.5, 1.7))
print(gsection(f1simple, 0.6, 0.75, 1.8))
print(gsection(f1simple, 0.0, 2.75, 5.0))
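# For this particular density the maximiser is known in closed form: with $f(x)=4x^2e^{-2x}$ we get $f'(x)=8x(1-x)e^{-2x}$, which vanishes at $x=1$. As a cross-check (a sketch using SciPy, not part of the original notebook), SciPy's golden-section minimiser applied to $-f$ agrees:

```python
import numpy as np
from scipy.optimize import minimize_scalar

f = lambda x: 4 * x**2 * np.exp(-2 * x)

# minimise -f with a bracket around the known maximum at x = 1
res = minimize_scalar(lambda x: -f(x), bracket=(0.5, 0.75, 1.7), method='golden')
print(res.x)  # approximately 1.0
```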
# We can also use a built-in function minimizer. The built-in function `fmin` is in the `scipy.optimize` library; we need to import it first. If we want to maximize our function, we instead minimize its negation, that is:
#
# $$g(x)=-f(x)$$
#
# then
#
# $$\min g(x)$$
#
# is the same as
#
# $$\max f(x)$$
#
# Since we want to find the maximum of the function, we need to “trick” the minimization algorithm. We therefore need to redefine the function as
def f1simpleNeg(x):
# gamma(2,3) density
if (x < 0):
return (0)
if (x == 0):
return (np.nan)
y = np.exp(-2*x)
return (-(4 * x**2 * y))
# Here we simply return negative values of this function. If we now minimize this function, we actually maximize the original function
#
# $$f(x)=4x^2e^{-2x}$$
from scipy.optimize import fmin

print(" -----------------------------------")
print(" fmin results ")
print(" -----------------------------------")
print(fmin(f1simpleNeg, 0.25))
print(fmin(f1simpleNeg, 0.5))
print(fmin(f1simpleNeg, 0.75))
print(fmin(f1simpleNeg, 1.75))
# ## Multivariate Optimization
#
# ### Function
#
# Here we want to optimize the following function `f3`
def f3simple(x):
a = x[0]**2/2.0 - x[1]**2/4.0
b = 2*x[0] - np.exp(x[1])
f = np.sin(a)*np.cos(b)
return(f)
# Its negative version:
def f3simpleNeg(x):
a = x[0]**2/2.0 - x[1]**2/4.0
b = 2*x[0] - np.exp(x[1])
f = -np.sin(a)*np.cos(b)
return(f)
# And the version that returns $f(x)$, $f'(x)$ (i.e., the gradient), and $f''(x)$ (i.e., the Hessian matrix):
def f3(x):
a = x[0]**2/2.0 - x[1]**2/4.0
b = 2*x[0] - np.exp(x[1])
f = np.sin(a)*np.cos(b)
f1 = np.cos(a)*np.cos(b)*x[0] - np.sin(a)*np.sin(b)*2
f2 = -np.cos(a)*np.cos(b)*x[1]/2 + np.sin(a)*np.sin(b)*np.exp(x[1])
f11 = -np.sin(a)*np.cos(b)*(4 + x[0]**2) + np.cos(a)*np.cos(b) \
- np.cos(a)*np.sin(b)*4*x[0]
f12 = np.sin(a)*np.cos(b)*(x[0]*x[1]/2.0 + 2*np.exp(x[1])) \
+ np.cos(a)*np.sin(b)*(x[0]*np.exp(x[1]) + x[1])
f22 = -np.sin(a)*np.cos(b)*(x[1]**2/4.0 + np.exp(2*x[1])) \
- np.cos(a)*np.cos(b)/2.0 - np.cos(a)*np.sin(b)*x[1]*np.exp(x[1]) \
+ np.sin(a)*np.sin(b)*np.exp(x[1])
# Function f3 returns: f(x), f'(x), and f''(x)
return (f, np.array([f1, f2]), np.array([[f11, f12], [f12, f22]]))
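# Hand-coded derivatives like those in `f3` are easy to get wrong, so it is worth validating them numerically. The sketch below (standalone; it redefines the function and its gradient rather than relying on notebook state) compares the analytic gradient against central differences:

```python
import numpy as np

def f3simple(x):
    a = x[0]**2/2.0 - x[1]**2/4.0
    b = 2*x[0] - np.exp(x[1])
    return np.sin(a) * np.cos(b)

def f3grad(x):
    a = x[0]**2/2.0 - x[1]**2/4.0
    b = 2*x[0] - np.exp(x[1])
    return np.array([
        np.cos(a)*np.cos(b)*x[0] - np.sin(a)*np.sin(b)*2,
        -np.cos(a)*np.cos(b)*x[1]/2 + np.sin(a)*np.sin(b)*np.exp(x[1]),
    ])

def num_grad(f, x, h=1e-6):
    # central differences, one coordinate at a time
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x = np.array([1.5, 0.5])
assert np.allclose(f3grad(x), num_grad(f3simple, x), atol=1e-5)
```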
# We next plot the function:
# +
fig = plt.figure(figsize=(14, 16))
ax = fig.add_subplot(projection='3d')
X = np.arange(-3, 3, .1)
Y = np.arange(-3, 3, .1)
X, Y = np.meshgrid(X, Y)
Z = np.zeros((len(X),len(Y)),float)
for i in range(len(X)):
for j in range(len(Y)):
Z[i][j] = f3simple([X[i][j],Y[i][j]])
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, \
cmap=plt.cm.jet, linewidth=0, antialiased=False)
plt.show()
# -
# ### Multivariate Newton Method
def newtonMult(f3, x0, tol = 1e-9, nmax = 100):
# Newton's method for optimisation, starting at x0
# f3 is a function that given x returns the list
# {f(x), grad f(x), Hessian f(x)}, for some f
x = x0
f3x = f3(x)
n = 0
while ((max(abs(f3x[1])) > tol) and (n < nmax)):
x = x - np.linalg.solve(f3x[2], f3x[1])
f3x = f3(x)
n = n + 1
if (n == nmax):
print("newton failed to converge")
else:
return(x)
# Compare the Newton method with the built-in `fmin` function from `scipy.optimize`. We use various starting values to see whether we can find more than one optimum.
for x0 in np.arange(1.4, 1.6, 0.1):
for y0 in np.arange(0.4, 0.7, 0.1):
# This algorithm requires f(x), f'(x), and f''(x)
        print("Newton: f3 " + str([x0,y0]) + ' --> ' + str(newtonMult(f3, \
              np.array([x0,y0]))))
print("fmin: f3 " + str([x0,y0]) + ' --> ' \
+ str(fmin(f3simpleNeg, np.array([x0,y0]))))
print(" ----------------------------------------- ")
# ## Homework 10
# +
from matplotlib.lines import Line2D

xl = 0;
xr = 1;
l = (-1 + np.sqrt(5))/2;
x1 = l*xl + (1-l)*xr;
x2 = (1-l)*xl + l*xr;
a = [xl,x1,x2,xr]
fig1 = plt.figure(facecolor='white',figsize=(4,1))
ax1 = plt.axes(frameon=False)
ax1.get_xaxis().tick_bottom()
ax1.axes.get_yaxis().set_visible(False)
ax1.eventplot(a, orientation='horizontal', colors='b')
ax1.annotate('$x_l$', (xl,1))
ax1.annotate('$x_r$', (xr,1))
ax1.annotate('$x_1$', (x1,1))
ax1.annotate('$x_2$', (x2,1))
xmin, xmax = ax1.get_xaxis().get_view_interval()
ymin, ymax = ax1.get_yaxis().get_view_interval()
ax1.add_artist(Line2D((xmin, xmax), (ymin, ymin), color='black', linewidth=2))
plt.show()
# -
def goldsectmin(f, xl, xr, tol = 1e-9, nmax = 100):
    # GOLDSECTMIN finds a minimum of the function f
    # in the interval [xl, xr] using the golden-section method
    l = (-1 + np.sqrt(5))/2
    x1 = l*xl + (1-l)*xr
    x2 = (1-l)*xl + l*xr
    f1 = f(x1)
    f2 = f(x2)
    n = 0
    while ((abs(xr - xl) > tol) and (n < nmax)):
        if (f1 > f2):
            xl = x1
            x1 = x2
            f1 = f2
            x2 = (1-l)*xl + l*xr
            f2 = f(x2)
        else:
            xr = x2
            x2 = x1
            f2 = f1
            x1 = l*xl + (1-l)*xr
            f1 = f(x1)
        n = n + 1
    if (n == nmax):
        print("GOLDSECTMIN failed to converge")
    else:
        return(x1)
# Suppose every person has a fixed number of heartbeats available over a lifetime. Let $x$ be the fraction of time spent on sporting activities. During exercise the heart beats at 120 beats per minute; at rest it beats at $g(x)$ beats per minute, where $g(0)=80$ for untrained people and $g(x)$ drops quickly to 50 for larger $x$, e.g.
g = lambda x : 50 + 30*np.exp(-100*x)
# The average number of heartbeats per minute is plotted. Find the optimal daily amount of exercise, i.e. find the minimum of $f(x)$.
# +
f = lambda x : 120*x + np.multiply(g(x),(1 - x))
xmin = 0.0
xmax = 0.2
xv = np.linspace(xmin, xmax, 200)
fig, ax = plt.subplots()
ax.plot(xv, f(xv))
ax.set(xlabel='Fraction of time spent exercising x', ylabel='Heartbeats per minute f(x)',
       title='Average number of heartbeats per minute')
ax.grid()
# golden-section bisection -> goldsectmin
x = goldsectmin(f, xl, xr, 1e-6)
print('This corresponds to {:.1f} minutes of sport per day.'.format(x*24*60))
ax.plot(x, f(x), 'ro');
plt.show()
# -
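# To cross-check the golden-section result, SciPy's bounded scalar minimiser can be applied to the same objective. This is a standalone sketch (not part of the original homework; the functions are redefined so the cell runs on its own), and the optimum lies near $x \approx 0.037$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

g = lambda x: 50 + 30 * np.exp(-100 * x)
f = lambda x: 120 * x + g(x) * (1 - x)

res = minimize_scalar(f, bounds=(0.0, 0.2), method='bounded')
print(res.x, res.x * 24 * 60)  # optimal fraction and minutes per day
```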
# ## Multi-dimensional optimisation without constraints
#
# *Dimensionality of the problem:* The scale of an optimization problem is pretty much set by the dimensionality of the problem, i.e. the number of scalar variables on which the search is performed.
#
# $$z=f(x_1,x_2,\dots x_n)$$
#
# We want to find a (local) minimum $x_0$ of a function $f:\mathbf{R}^n \to \mathbf{R}$. If $f$ is twice differentiable then a sufficient condition for a local minimum is
#
# $$\nabla f(x_0) = 0, \qquad x^T H(x_0) x >0\;\forall\; x\in\mathbf{R}^n\setminus\{ 0\}$$
#
# where $H$ is the Hessian of $f$. Again, computing the gradient and solving the corresponding nonlinear system of equations can be difficult. Fortunately there are methods that do not require the gradient.
#
# As an example, consider:
# +
from matplotlib import cm

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
f = lambda x,y : np.multiply(x,
np.exp(- np.square(x) - np.square(y))
) + (np.square(x) + np.square(y))/20
# Make data.
X = np.linspace(-2, 2, 50)
Y = np.linspace(-2, 2, 50)
X, Y = np.meshgrid(X, Y)
Z = f(X,Y)
# Plot the surface.
surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
# Customize the axis.
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
# -
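# The second-order condition above can be tested numerically: the Hessian is positive definite exactly when all of its eigenvalues are positive. A minimal sketch (not part of the original notebook), using a function with a known minimum:

```python
import numpy as np

def is_strict_local_min(grad, hess, tol=1e-8):
    # first-order condition: gradient ~ 0
    # second-order condition: Hessian positive definite (all eigenvalues > 0)
    return bool(np.linalg.norm(grad) < tol and np.all(np.linalg.eigvalsh(hess) > 0))

# f(x, y) = x^2 + y^2 has its minimum at the origin:
# grad f(0, 0) = (0, 0), Hessian = diag(2, 2)
print(is_strict_local_min(np.zeros(2), np.diag([2.0, 2.0])))   # True
# a saddle point: Hessian = diag(2, -2) is not positive definite
print(is_strict_local_min(np.zeros(2), np.diag([2.0, -2.0])))  # False
```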
# We first rewrite this as a function of a two-dimensional vector:
F = lambda x : f(x[0], x[1])
# We choose an initial guess $x_0$ and set some options that we pass to the Python function `minimize` (unconstrained minimisation), which is part of SciPy.
#
# Some of the `scipy.optimize` routines allow for a callback function. Below is an example using the "nelder-mead" routine where I use a callback function to display the current value of the arguments and the value of the objective function at each iteration.
#
# In the example below, the minimize routine is used with the Nelder-Mead simplex algorithm (selected through the method parameter):
# +
from scipy.optimize import minimize
Nfeval = 1
def callbackF(Xi):
global Nfeval
print('{0:4d} {1: 3.6f} {2: 3.6f} {3: 3.6f}'.format(Nfeval, Xi[0], Xi[1], F(Xi)))
Nfeval += 1
print('{0:4s} {1:9s} {2:9s} {3:9s}'.format('Iter', ' X1', ' X2', 'F(X)'))
x0 = np.array([-.5, 0]);
res = minimize(F,
x0,
callback=callbackF,
method='nelder-mead',
options={'xatol': 1e-8, 'disp': True})
res.x
# -
# The simplex algorithm is probably the simplest way to minimize a fairly well-behaved function. It requires only function evaluations and is a good choice for simple minimization problems. However, because it does not use any gradient evaluations, it may take longer to find the minimum.
# +
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
# Make data.
X = np.linspace(-2, 2, 50)
Y = np.linspace(-2, 2, 50)
X, Y = np.meshgrid(X, Y)
Z = f(X,Y)
# Plot the surface.
surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
opt = ax.plot3D(res.x[0],res.x[1],F(res.x),'ro')
# Customize the axis.
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
# -
# ## Multi-dimensional optimisation with constraints
#
# Let's first look at a linear example.
#
# A company has two operation sites $O_1$ and $O_2$ and has to transport material to two construction sites $C_1$ and $C_2$.
#
# - There are 8 lorries at $O_1$ and 6 lorries at $O_2$.
# - Construction site $C_1$ requires 4 loads a day, $C_2$ requires 7 loads.
# - The distances are $O_1-C_1=8$km, $O_1-C_2=9$km, $O_2-C_1=3$km, $O_2-C_2=5$km.
#
# The task is to minimise the total distance travelled per day by all the lorries.
#
# Let $x_1$ be the number of lorries driving each day from $O_1$ to $C_1$, $x_2:O_1-C_2$, $x_3:O_2-C_1$, $x_4:O_2-C_2$. Then the function to be minimised is
#
# $$f:\mathbf{R}^4\to \mathbf{R}, \quad f(x) = 8 x_1 + 9 x_2 + 3 x_3 + 5 x_4$$
#
# and the constraints are
#
# $$x_1 + x_2 \leq 8,\\
# x_3 + x_4 \leq 6,\\
# x_1 + x_3 = 4,\\
# x_2 + x_4 = 7,\\
# x_1,x_2,x_3,x_4 \geq 0.$$
#
# We see that the constraints come in three types:
#
# - inequalities
# - equalities
# - lower (or upper) bounds on the unknowns
#
# This problem can actually be solved analytically without too much effort. First we eliminate $x_3$ and $x_4$:
#
# $$x_3 = 4 - x_1, \quad x_4 = 7 - x_2$$
#
# The modified target function is
#
# $$\tilde f (x_1, x_2) = 5 x_1 + 4 x_2 + 47$$
#
# and the constraints read
#
# $$x_1 + x_2 \leq 8, \quad x_1 + x_2 \geq 5, \quad x_1 \leq 4, \quad x_2 \leq 7, \quad x_1 \geq 0, \quad x_2 \geq 0.$$
#
# The allowed region in the plane looks like this:
# +
plt.figure()
xmin = -0.5
xmax = 5.0
ymin = -0.5
ymax = 8.0
plt.plot([0, 5], [8, 3], 'b-.') #x1 + x2 <= 8
plt.plot([0, 5], [5, 0], 'g-.') #x1 + x2 >= 5
plt.plot([4, 4], [ymin, ymax], 'r-.') #x1 <= 4
plt.plot([xmin, xmax], [7, 7], 'm-.') #x2 <= 7
plt.plot([0, 0], [ymin, ymax], 'c-.') #x1 >= 0
plt.plot([xmin, xmax], [0, 0], 'y-.') #x2 >= 0
plt.plot(0,5,'ro') #min
xlist = np.linspace(xmin, xmax, 5)
ylist = np.linspace(ymin, ymax, 8)
X, Y = np.meshgrid(xlist, ylist)
# Now we add the contour lines of the target function
ftilde = lambda x1,x2 : 5*x1 + 4*x2 + 47
Z = ftilde(X, Y)
#print(Z)
cp = plt.contour(X, Y, Z, colors='black', linestyles='dashed')
plt.clabel(cp, inline=True, fontsize=10)
plt.title('Contour Plot on $x_1,x_2$ plane for $x_3=4-x_1$ and $x_4=7-x_2$')
plt.xlabel('$x_1$')
plt.ylabel('$x_2$')
plt.show()
# -
# From this plot it is obvious that the point with the lowest value of $\tilde f$ is $x_1 = 0, \; x_2=5$, which implies $x_3 = 4, \; x_4 = 2$.
#
# Now we will solve this problem using Python. The objective function is
f = lambda x : 8*x[0] + 9*x[1] + 3*x[2] + 5*x[3]
# ### Defining linear Constraints
#
# The linear constraints $x_1 + x_2 \leq 8$, $x_3 + x_4 \leq 6$, $x_1 + x_3 = 4$, $x_2 + x_4 = 7$ have the general inequality form
#
# $$l_b\le Ax\le u_b$$
#
# where the lower-bound vector $l_b$ and upper-bound vector $u_b$ have shape (m,), the vector of independent
# variables $x$ is passed as an ndarray of shape (n,), and the matrix $A$ has shape (m, n).
#
# It is possible to use equal bounds to represent an equality constraint or infinite bounds to represent a one-sided constraint.
#
# In the standard `LinearConstraint` format, our four constraints can be written as:
from scipy.optimize import LinearConstraint
A = [[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1]]
lb = [-np.inf, -np.inf, 4, 7]
ub = [8, 6, 4, 7]
linear_constraint = LinearConstraint(A,lb,ub)
# The bound constraints of the independent variables $x_1,x_2,x_3,x_4 \geq 0$ are defined using a Bounds object.
from scipy.optimize import Bounds
lb = [0, 0, 0, 0]
ub = [np.inf, np.inf, np.inf, np.inf]
bounds = Bounds(lb, ub)
# Finally, we specify an initial vector:
x0 = [1, 1, 1, 1]
# The method 'trust-constr' requires the constraints to be defined as a sequence of objects `LinearConstraint` and `NonlinearConstraint`. The implementation is based on [EQSQP] for equality-constraint problems and on [TRIP] for problems with inequality constraints. Both are trust-region type algorithms suitable for large-scale problems.
# +
Nfeval = 1
def callbackF(Xi,_):
global Nfeval
print('{0:4d} {1: 3.6f} {2: 3.6f} {3: 3.6f} {4: 3.6f} {5: 3.6f}'.format(Nfeval, Xi[0], Xi[1], Xi[2], Xi[3], f(Xi)))
Nfeval += 1
print('{0:4s} {1:9s} {2:9s} {3:9s} {4:9s} {5:9s}'.format('Iter', ' X1', ' X2', ' X3', ' X4', 'f(X)'))
res = minimize(f, x0, method='trust-constr',
callback=callbackF,
constraints=linear_constraint,
options={'verbose': 1},
bounds=bounds)
print(res.x)
# -
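# Because this transport problem is a linear program, it can also be solved with a dedicated LP solver. The sketch below uses `scipy.optimize.linprog` (not used in the original notebook) and recovers the analytic optimum $x = (0, 5, 4, 2)$:

```python
import numpy as np
from scipy.optimize import linprog

c = [8, 9, 3, 5]                       # objective: 8*x1 + 9*x2 + 3*x3 + 5*x4
A_ub = [[1, 1, 0, 0], [0, 0, 1, 1]]    # x1 + x2 <= 8, x3 + x4 <= 6
b_ub = [8, 6]
A_eq = [[1, 0, 1, 0], [0, 1, 0, 1]]    # x1 + x3 = 4, x2 + x4 = 7
b_eq = [4, 7]

lp = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
             bounds=[(0, None)] * 4)
print(lp.x)  # [0. 5. 4. 2.]
```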
# ## Multi-dimensional optimisation with nonlinear constraints
#
# We want to minimise the function
f = lambda x,y : np.multiply(x,
np.exp(- np.square(x) - np.square(y))
) + (np.square(x) + np.square(y))/20
F = lambda x : f(x[0], x[1])
# ### Defining Nonlinear Constraints
#
# Let's assume we have the constraints $x_0^2 + x_1 \le 1$ and $x_0^2 - x_1 \le 1$. We can write these in vector form.
#
# The nonlinear constraint:
#
# $$c(x) = \left[ \begin{matrix} x_0^2 + x_1 \\ x_0^2 - x_1 \end{matrix} \right] \le \left[ \begin{matrix} 1 \\ 1 \end{matrix} \right]$$
#
# with Jacobian matrix:
#
# $$J(x) = \left[ \begin{matrix} 2x_0 & 1 \\ 2x_0 & -1 \end{matrix} \right]$$
#
# and linear combination of the Hessians:
#
# $$H(x,v)=\sum_{i=0}^{1}v_i \nabla^2 c_i(x)=v_0 \left[ \begin{matrix} 2 & 0 \\ 0 & 0 \end{matrix} \right] + v_1 \left[ \begin{matrix} 2 & 0 \\ 0 & 0 \end{matrix} \right]$$
#
# The nonlinear constraint can be defined using a NonlinearConstraint object:
from scipy.optimize import NonlinearConstraint
def cons_f(x):return [x[0]**2 + x[1], x[0]**2 - x[1]]
def cons_J(x):return [[2*x[0], 1], [2*x[0], -1]]
def cons_H(x, v):return v[0]*np.array([[2, 0], [0, 0]]) + v[1]*np.array([[2, 0], [0, 0]])
nonlinear_constraint = NonlinearConstraint(cons_f, -np.inf, 1, jac=cons_J, hess=cons_H)
# Alternatively, it is also possible to define the Hessian $H(x,v)$
# as a sparse matrix:
from scipy.sparse import csc_matrix
def cons_H_sparse(x, v):return v[0]*csc_matrix([[2, 0], [0, 0]]) + v[1]*csc_matrix([[2, 0], [0, 0]])
nonlinear_constraint = NonlinearConstraint(cons_f, -np.inf, 1,
jac=cons_J, hess=cons_H_sparse)
# or as a LinearOperator object.
from scipy.sparse.linalg import LinearOperator
def cons_H_linear_operator(x, v):
def matvec(p):
return np.array([p[0]*2*(v[0]+v[1]), 0])
return LinearOperator((2, 2), matvec=matvec)
nonlinear_constraint = NonlinearConstraint(cons_f, -np.inf, 1,
jac=cons_J,
hess=cons_H_linear_operator)
# When the evaluation of the Hessian $H(x,v)$ is difficult to implement or computationally infeasible, one may use HessianUpdateStrategy. Currently available strategies are BFGS and SR1.
from scipy.optimize import BFGS
nonlinear_constraint = NonlinearConstraint(cons_f, -np.inf, 1, jac=cons_J, hess=BFGS())
# Alternatively, the Hessian may be approximated using finite differences.
nonlinear_constraint = NonlinearConstraint(cons_f, -np.inf, 1, jac=cons_J, hess='2-point')
# The Jacobian of the constraints can be approximated by finite differences as well. In this case, however, the Hessian cannot be computed with finite differences and needs to be provided by the user or defined using HessianUpdateStrategy.
nonlinear_constraint = NonlinearConstraint(cons_f, -np.inf, 1, jac='2-point', hess=BFGS())
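# When in doubt about a hand-written constraint Jacobian such as `cons_J`, a central-difference comparison provides a quick validation. This standalone sketch redefines the constraint functions so it does not depend on notebook state:

```python
import numpy as np

def cons_f(x):
    return np.array([x[0]**2 + x[1], x[0]**2 - x[1]])

def cons_J(x):
    return np.array([[2*x[0], 1.0], [2*x[0], -1.0]])

def num_jac(f, x, h=1e-6):
    # build the Jacobian column by column from central differences
    x = np.asarray(x, dtype=float)
    cols = []
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        cols.append((f(x + e) - f(x - e)) / (2 * h))
    return np.column_stack(cols)

x = np.array([0.7, -0.3])
assert np.allclose(cons_J(x), num_jac(cons_f, x), atol=1e-6)
```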
# +
Nfeval = 1
trace = np.array([])
def callbackF(Xi,_):
global Nfeval
global trace
trace=np.append(trace,Xi)
print('{0:4d} {1: 3.6f} {2: 3.6f} {3: 3.6f}'.format(Nfeval, Xi[0], Xi[1], F(Xi)))
Nfeval += 1
print('{0:4s} {1:9s} {2:9s} {3:9s}'.format('Iter', ' X1', ' X2', 'F(X)'))
x0 = np.array([-4, 1]);
res = minimize(F,
x0,
callback=callbackF,
constraints=nonlinear_constraint,
method='trust-constr',
options={'verbose': 1}
)
res.x
plt.figure()
xmin = -6.0
xmax = 1.0
ymin = -1.0
ymax = 5.0
line = trace.reshape((int(trace.size/2),2))
plt.plot(line[:,0],line[:,1],'r-')
plt.plot(res.x[0],res.x[1],'ro') #min
xlist = np.linspace(xmin, xmax, 50)
ylist = np.linspace(ymin, ymax, 50)
X, Y = np.meshgrid(xlist, ylist)
cpg1 = plt.contour(X, Y, cons_f([X,Y])[0], 1,colors='green')
plt.clabel(cpg1, inline=True, fmt='cons_f1(x)=%r',fontsize=10)
cpg2 = plt.contour(X, Y, cons_f([X,Y])[1], 1,colors='blue')
plt.clabel(cpg2, inline=True, fmt='cons_f2(x)=%r',fontsize=10)
cpf = plt.contour(X, Y, f(X, Y), colors='black', linestyles='dashed')
plt.clabel(cpf, inline=True, fontsize=10)
plt.title('Contour Plot on $x_1,x_2$ plane for $f$ and $cons_f=1$')
plt.xlabel('$x_1$')
plt.ylabel('$x_2$')
plt.show()
# -
# Let's consider the nonlinear constraint $G(x)\leq 0$ with $G(x)=\frac{x_1x_2}{2}+(x_1+2)^2+\frac{(x_2-2)^2}{2}-2$
g = lambda x,y : np.multiply(x,y)/2 + np.square(x+2) + np.square(y-2)/2 - 2
G = lambda x : g(x[0], x[1])
nonlinear_constraint = NonlinearConstraint(G, -np.inf, 0, jac='2-point', hess=BFGS())
# nonlinear_constraint = NonlinearConstraint(G, -np.inf, 0, jac=cons_J, hess='2-point')
# We specify an initial vector and solve the optimization problem.
# +
Nfeval = 1
trace = np.array([])
def callbackF(Xi,_):
global Nfeval
global trace
trace=np.append(trace,Xi)
print('{0:4d} {1: 3.6f} {2: 3.6f} {3: 3.6f}'.format(Nfeval, Xi[0], Xi[1], F(Xi)))
Nfeval += 1
print('{0:4s} {1:9s} {2:9s} {3:9s}'.format('Iter', ' X1', ' X2', 'F(X)'))
x0 = np.array([-2, 1]);
res = minimize(F,
x0,
callback=callbackF,
constraints=nonlinear_constraint,
method='trust-constr',
options={'verbose': 1}
)
res.x
plt.figure()
xmin = -6.0
xmax = 1.0
ymin = -1.0
ymax = 8.0
line = trace.reshape((int(trace.size/2),2))
plt.plot(line[:,0],line[:,1],'r-')
plt.plot(res.x[0],res.x[1],'ro') #min
xlist = np.linspace(xmin, xmax, 50)
ylist = np.linspace(ymin, ymax, 50)
X, Y = np.meshgrid(xlist, ylist)
cpg = plt.contour(X, Y, g(X, Y), 0,colors='green')
plt.clabel(cpg, inline=True, fmt='g(x)=%r',fontsize=10)
cpf = plt.contour(X, Y, f(X, Y), colors='black', linestyles='dashed')
plt.clabel(cpf, inline=True, fontsize=10)
plt.title('Contour Plot on $x_1,x_2$ plane for $f$ and $g(x)=0$')
plt.xlabel('$x_1$')
plt.ylabel('$x_2$')
plt.show()
| notebooks/Optimisation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Spark + Kafka
#
# ```{note}
# Structured Streaming supports many kinds of sources; among them, Kafka is the most widely used.
# ```
# ## Kafka 简介
#
# In the big-data stream-computing ecosystem, Kafka is the most widely used messaging middleware (Messaging Queue). Messaging middleware has three core functions:
#
# 1. Connecting message producers and consumers
# 2. Buffering the messages that producers generate
# 3. Letting consumers access messages with minimal latency
#
# With messaging middleware in place, the producer and consumer systems enjoy three major benefits: decoupling, asynchrony, and peak shaving.
#
# Kafka uses a masterless architecture and relies on ZooKeeper to store and maintain global metadata, i.e. the distribution and state of messages within the Kafka cluster.<br/>
# Each server in a Kafka cluster is called a Kafka Broker. A Broker's job is to store the messages produced by producers and to serve data access to consumers; Brokers are independent of one another.
#
# Logically, messages belong to Topics, i.e. the subjects of the messages.<br/>
# To provide highly available data access, after a producer writes a message to the leader partition (Leader), Kafka replicates it to multiple partition replicas (Followers).
#
# 
# ## Consuming Messages
#
# This section's example is real-time computation of resource utilisation.<br/>
# First, the resource utilisation (CPU, memory) of every machine in the cluster is collected and written into Kafka.<br/>
# Then we use Spark Structured Streaming to consume the Kafka data stream and print the messages to the console.<br/>
# Finally, we do a preliminary analysis and aggregation of the utilisation data and write the aggregated results back to Kafka via Structured Streaming.
#
# We focus mainly on the last two steps.
#
# ```python
# # Consuming messages
# # option: specify the Kafka Broker addresses
# # option: specify the Topic
# dfCPU = (spark.readStream
# .format("kafka")
# .option("kafka.bootstrap.servers", "hostname1:9092,hostname2:9092,hostname3:9092")
# .option("subscribe", "cpu-monitor")
# .load())
# ```
#
# ```python
# # Print the messages to the console
# # (append mode, since this query has no aggregation)
# (dfCPU.writeStream
#  .outputMode("append")
#  .format("console")
#  .trigger(processingTime='10 seconds')
#  .start()
#  .awaitTermination())
# ```
# ## Writing Back to Kafka
#
# ```python
# # Aggregate the data and write the results back to Kafka
# (dfCPU
# .withColumn("key", F.col("key").cast(StringType()))
# .withColumn("value", F.col("value").cast(FloatType()))
# .groupBy("key")
#  .agg(F.avg("value").cast(StringType()).alias("value"))
# .writeStream
# .outputMode("Complete")
# .format("kafka")
# .option("kafka.bootstrap.servers", "localhost:9092")
# .option("topic", "cpu-monitor-agg-result")
# .option("checkpointLocation", "/tmp/checkpoint")
#  .trigger(processingTime='10 seconds')
# .start()
# .awaitTermination())
# ```
#
# Two points deserve special attention here:
# 1. Use separate Topics for reading and writing, to avoid logical and data-level confusion.
# 2. Data written back to Kafka must expose the two fixed fields "key" and "value" in its schema.
| stream/4.kafka.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Max-min fairness
# https://www.wikiwand.com/en/Max-min_fairness
from resources.utils import run_tests
def max_min_fairness(demands, capacity):
    # progressive filling: assumes `demands` is sorted in ascending order;
    # any capacity left after satisfying all demands goes to the last user
    capacity_remaining = capacity
    output = []
for i, demand in enumerate(demands):
share = capacity_remaining / (len(demands) - i)
allocation = min(share, demand)
if i == len(demands) - 1:
allocation = max(share, capacity_remaining)
output.append(allocation)
capacity_remaining -= allocation
return output
tests = [
(dict(demands=[1, 1], capacity=20), [1, 19]),
(dict(demands=[2, 8], capacity=10), [2, 8]),
(dict(demands=[2, 8], capacity=5), [2, 3]),
(dict(demands=[1, 2, 5, 10], capacity=20), [1, 2, 5, 12]),
(dict(demands=[2, 2.6, 4, 5], capacity=10), [2, 2.6, 2.7, 2.7]),
]
run_tests(tests, max_min_fairness)
| algorithms/Max-min-fairness.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="ewbWLGf0hYbt"
# ## CAS Connection
#
# ### Connect to the Cas Server
# + colab={} colab_type="code" id="xYLemxMShYby" outputId="998d7614-0fb5-45d3-ab5f-40dce97ae13d"
import swat
s = swat.CAS(host, port)
s.session.setLocale(locale="en_US")
s.sessionProp.setSessOpt(timeout=864000)
# + [markdown] colab_type="text" id="sFN0On_LhYcC"
# # Document Classification
# ## Part 3: Build Lean Models using Feature Importance
# In this notebook, you will quantify feature importance to determine which features matter most to the models trained in Part 2. Three methods are presented: a standard method, split-based Gini feature importance, which is built into the decisionTree action set, and two new methods (patent pending) that consider the network structure of the tree-based models. These methods are called Betweenness Centrality Feature Importance and Leaf-Based Feature Importance.
# + [markdown] colab_type="text" id="SfUKTbqFhYcE"
# # Load Data
#
# The Cora data set is publicly available via [this hyperlink](https://linqs.soe.ucsc.edu/data).
# + colab={} colab_type="code" id="VR2hljjehYcH" outputId="3df710ec-63c5-43dc-c8f2-46e009e42837"
import document_classification_scripts as scripts
import importlib
importlib.reload(scripts)
from document_classification_scripts import AttributeDict, nClasses, nWords, targetColumn, baseFeatureList
demo = scripts.Demo(s)
# + colab={} colab_type="code" id="55LKQ00GhYcO" outputId="f943e96a-d0e4-4874-b175-0873feb57e6a"
demo.loadRawData()
# + [markdown] colab_type="text" id="K3z1ANBYhYcV"
# # Data Preprocessing
# ### Creates a custom format definition for target labels
# + colab={} colab_type="code" id="qBKWMez4hYcZ" outputId="b45b03d5-0706-4c67-ce1a-1f533db47e2a"
demo.defineTargetVariableFormat()
# + [markdown] colab_type="text" id="a8eLsm67hYcg"
# ### Partitions data into training and test
# + colab={} colab_type="code" id="OY7PInn6hYcj" outputId="32602fe7-ccd2-46e3-8fb8-b3a4745d1b55"
demo.loadOrPartitionData()
# + [markdown] colab_type="text" id="u8Wp23VfhYco"
# ### Performs Principal Component Analysis (PCA)
# + colab={} colab_type="code" id="vIc-DY9vhYcq" outputId="41533e1f-cf3f-4512-9741-8edcf1a018ba"
nPca = 40
demo.performPca(nPca)
pcaFeatureList = [f"pca{i}" for i in range(1, nPca + 1)]
# + [markdown] colab_type="text" id="HLjq1FnKhYcw"
# ### Joins citations and training data targets
# + colab={} colab_type="code" id="gxaA-MbdhYcy" outputId="5fc15774-e90c-4051-e92d-7e15a3f12f9a"
demo.joinTrainingTargets()
# + [markdown] colab_type="text" id="yUBCD087hYc4"
# ## Generate Network Features
# + colab={} colab_type="code" id="M5TOl_6ehYc5"
# %%capture
networkParam=AttributeDict({
"useCentrality":True,
"useNodeSimilarity":True,
"useCommunity":True,
"useCore":True
})
tableContentNetwork, networkFeatureList = demo.addNetworkFeatures(
"contentTrain", "citesTrain", networkParam)
tableContentPartitionedNetwork, networkFeatureList = demo.addNetworkFeatures(
"contentPartitioned", "citesCombined", networkParam)
tableContentNetworkPca, networkFeatureList = demo.addNetworkFeatures(
"contentTrainPca", "citesTrain", networkParam)
tableContentPartitionedNetworkPca, networkFeatureList = demo.addNetworkFeatures(
"contentPartitionedPca", "citesCombined", networkParam)
# + colab={} colab_type="code" id="VsfJ7-rRhYdA" outputId="e51ed120-4a09-461d-c57f-2b17e102019a"
s.datastep.runCode(
code = f"data contentTestNetwork; set {tableContentPartitionedNetwork}(where=(partition=0)); run;"
)
print(f"contentTestNetwork: (rows, cols) = {s.CASTable('contentTestNetwork').shape}")
s.datastep.runCode(
code = f"data contentTestPcaNetwork; set {tableContentPartitionedNetworkPca}(where=(partition=0)); run;"
)
print(f"contentTestPcaNetwork: (rows, cols) = {s.CASTable('contentTestPcaNetwork').shape}")
# + [markdown] colab_type="text" id="MXlSredWhYdF"
# # Load the Autotuned Network+PCA Model (trained in Part 2)
# + [markdown] colab_type="text" id="A2YQEmUHhYdI"
# Here we load the best hyperparameter configuration found by autotune, for the feature set using PCA and Network features. We again train a forest model using these hyperparameters in order to examine the feature importance values computed by the forestTrain action in the decisionTree action set.
# + colab={} colab_type="code" id="MLJ27oj9hYdI"
networkPcaModelAuto = "networkPcaModelAuto"
# + colab={} colab_type="code" id="2To10bwmhYdN" outputId="c2f879e7-58b8-47f1-f700-ecc2ba35f3d5"
bestConfig = demo.loadOrTuneForestModel(networkPcaModelAuto,
"contentTrainPcaNetwork",
pcaFeatureList + networkFeatureList
)
print(bestConfig)
# + [markdown] colab_type="text" id="44jt-_3ThYdT"
# ## Train PCA Forest Model
# + colab={} colab_type="code" id="aMEH8pqRhYdU" outputId="827e1910-16fe-4386-8206-99cf0fedc6b9"
# %%time
resultsTrainNetworkPcaModelAuto = demo.trainForestModel(networkPcaModelAuto,
"contentTrainPcaNetwork",
pcaFeatureList + networkFeatureList,
forestParam=bestConfig)
# + [markdown] colab_type="text" id="TMUlxTPihYda"
# # View Gini (Split Based) Feature Importances
# + colab={} colab_type="code" id="UMexkv1vhYdb"
topNCutoff=12
# + colab={} colab_type="code" id="ZNHZBcymhYdg" outputId="76da63a1-b09e-47ad-a7a2-ed22b1ef6650"
resultsTrainNetworkPcaModelAuto['DTreeVarImpInfo'].head(topNCutoff)
# + [markdown] colab_type="text" id="pMP0BdvqhYdj"
# # New Methods for Feature Importance Calculation
#
# The following function calls include prototype python implementations of two new alternative methods to the commonly used methods for calculating feature importance. Either of the three methods (split based feature importance above, or the two methods below) can be used to determine a smaller feature set to use for the next iteration of model building.
# + [markdown] colab_type="text" id="mjR7KUJ1hYdl"
# ## Calculate and View Betweenness Feature Importances (patent pending)
# + colab={} colab_type="code" id="JOH1OZUThYdm" outputId="45afe8f8-8af4-4353-e5dd-25d45fdbe1b5"
demo.calculateBetweennessImportance(networkPcaModelAuto, casOut="betweennessImportances")
# + colab={} colab_type="code" id="j0AHe2EahYdq" outputId="c825a52f-e674-4449-9c51-641d70f4aa50"
s.CASTable("betweennessImportances").nlargest(topNCutoff,"betweenImportance")
# + [markdown] colab_type="text" id="1dS9LVXxhYdu"
# ## Calculate and View Leaf Based Feature Importances (patent pending)
# + colab={} colab_type="code" id="RXPllJaKhYdv"
classes = [
"Rule_Learning",
"Theory",
"Genetic_Algorithms",
"Reinforcement_Learning",
"Case_Based",
"Neural_Networks",
"Probabilistic_Methods"
]
# + colab={} colab_type="code" id="gW8iCRqohYd0" outputId="8cf07da9-d366-4da7-b8ac-6dbcb65f2463"
leafBasedImportances = demo.leafBasedImportances(networkPcaModelAuto,
"contentTrainPcaNetwork",
pcaFeatureList + networkFeatureList,
classes
)
# + colab={} colab_type="code" id="yU-FQ0-JhYd3" outputId="148d2f70-b052-429f-e397-b9033d12efc9"
rankedFeaturesLeafBased = scripts.printImportances(leafBasedImportances, topNCutoff)
# + colab={} colab_type="code" id="nZbVHjGphYd8"
rankedFeaturesSplitBased = resultsTrainNetworkPcaModelAuto['DTreeVarImpInfo']['Variable'].tolist()[0:topNCutoff]
rankedFeaturesBetweenness = s.CASTable("betweennessImportances").nlargest(topNCutoff,"betweenImportance")["Variable"].tolist()
# + [markdown] colab_type="text" id="ScrtscXKhYd_"
# Note that split based Gini feature importance and Betweenness feature importance produce the same set of top 12 features.
#
# Leaf Based feature importance, on the other hand, includes two features (generated from network in-degree) and excludes two features (from PCA) in its top 12:
# + colab={} colab_type="code" id="tUNhgBllhYd_" outputId="d9abd39b-f870-483e-f75b-b3c5dc33b261"
a=set(rankedFeaturesLeafBased) - set(rankedFeaturesBetweenness)
b=set(rankedFeaturesBetweenness) - set(rankedFeaturesLeafBased)
print(f"""In Leaf Based Top {topNCutoff}, but not Betweenness Top {topNCutoff}:
{a}""")
print(f"""In Betweenness Top {topNCutoff}, but not Leaf Based Top {topNCutoff}:
{b}""")
a=set(rankedFeaturesLeafBased) - set(rankedFeaturesSplitBased)
b=set(rankedFeaturesSplitBased) - set(rankedFeaturesLeafBased)
print(f"""
In Leaf Based Top {topNCutoff}, but not Split Based Top {topNCutoff}:
{a}""")
print(f"""In Split Based Top {topNCutoff}, but not Leaf Based Top {topNCutoff}:
{b}""")
a=set(rankedFeaturesSplitBased) - set(rankedFeaturesBetweenness)
b=set(rankedFeaturesBetweenness) - set(rankedFeaturesSplitBased)
print(f"""
In Split Based Top {topNCutoff}, but not Betweenness Top {topNCutoff}:
{a}""")
print(f"""In Betweenness Top {topNCutoff}, but not Split Based Top {topNCutoff}:
{b}""")
# + [markdown] colab_type="text" id="aZUZVUdghYeE"
# # Build models using only top N features
# + [markdown] colab_type="text" id="1FekxgnBhYeE"
# The best-performing models from Part 1 and Part 2 use a total of 85 features -- 40 PCA features and 45 Network features.
#
# Can we achieve similar model performance by using only the 12 most important features?
# + [markdown] colab_type="text" id="0mHsY_57hYeG"
# # First, try the top 12 features by Split Based Feature Importance
# + colab={} colab_type="code" id="3IjdVTDphYeH" outputId="66b00263-28e0-4559-bdfd-6cb4ce2d548c"
topNFeatureList = rankedFeaturesSplitBased
topNFeatureList
# + [markdown] colab_type="text" id="Z3TfvIvShYeM"
# ## Train Neural Net Model Using Top N Split Based Features
# + colab={} colab_type="code" id="kYrLVe6vhYeN"
deepLearnParam = AttributeDict({
"randomSeed": 1337,
"dropout": 0.5,
"activation": "RECTIFIER",
"outputActivation": "SOFTMAX",
"denseLayers": [50, 50],
"nOutputs": nClasses,
"nEpochs": 100,
"algoMethod": "ADAM",
"useLocking": False
})
# + colab={} colab_type="code" id="EXjfw-MfhYeQ"
topNNnModel = "topNNnModelSplit"
demo.defineNnModel(topNNnModel, deepLearnParam)
# + colab={} colab_type="code" id="Dy4BVE7VhYeT" outputId="9e9e63b1-8ab0-4cfe-be18-2c0d658034dd"
# %%time
demo.trainNnModel(topNNnModel,"contentTrainPcaNetwork", topNFeatureList, deepLearnParam)
# + colab={} colab_type="code" id="sKAn-2mbhYeY" outputId="75bdbf1d-942b-4823-b44f-eb9de75628f3"
demo.scoreNnModel(topNNnModel,"contentTestPcaNetwork")
# + [markdown] colab_type="text" id="_5nnVu_jhYeb"
# ### Bootstrap Runs
# + colab={} colab_type="code" id="qfNk579bhYed" outputId="57239755-adab-48d3-fe3c-b5fd350c517e"
# %%time
accuracies = demo.bootstrapNnModel(topNNnModel,"contentTrainPcaNetwork",
"contentTestPcaNetwork",
topNFeatureList,
deepLearnParam,
25
);
# + [markdown] colab_type="text" id="6HWFc-JxhYeh"
# ## Train Forest Model Using Top N Split Based Features
# + colab={} colab_type="code" id="oG4JokIBhYeh"
topNForestModel = "topNForestModelSplit"
# + colab={} colab_type="code" id="1SR47rK0hYek" outputId="21fb24d9-9cea-4d5f-c4c8-01eb5aa09fcb"
# %%time
demo.trainForestModel(
topNForestModel, "contentTrainPcaNetwork", topNFeatureList)
# + colab={} colab_type="code" id="PKKtXyIPhYep" outputId="7ca6ea42-2ae3-4deb-eba1-8ee87a1fd876"
resultsScoreTopNForestModel=demo.scoreForestModel(topNForestModel,"contentTestPcaNetwork")
# + [markdown] colab_type="text" id="7-2KzDyihYet"
# ### Bootstrap Runs
# + colab={} colab_type="code" id="Qs7z41GphYet" outputId="e53982cb-72a4-496d-e684-e35b2fc086f8"
# %%time
accuracies = demo.bootstrapForestModel(topNForestModel,"contentTrainPcaNetwork",
"contentTestPcaNetwork",
topNFeatureList,
n=25
);
# + [markdown] colab_type="text" id="7mIR1ZPPhYev"
# ## Autotune Forest Model Using Top N Split Based Features
# + colab={} colab_type="code" id="JoEmJxchhYex"
topNForestModelAuto = f"topNForestModelAuto{topNCutoff}Split"
# + colab={} colab_type="code" id="KJJNwj7WhYez" outputId="246d460e-5456-41b0-bd8a-7a6e883491e1"
# %%time
bestConfigTopN = demo.loadOrTuneForestModel(topNForestModelAuto,
"contentTrainPcaNetwork",
topNFeatureList
)
print(bestConfigTopN)
# + colab={} colab_type="code" id="JtrBiPiDhYe3" outputId="0ac8b06b-e38e-4943-b541-5ad53171a91a"
resultsScoreTopNForestModelAuto=demo.scoreForestModel(topNForestModelAuto,"contentTestPcaNetwork")
# + [markdown] colab_type="text" id="S7uwpXNLhYe8"
# ### Bootstrap Runs
# + colab={} colab_type="code" id="ApGa5ui4hYe8" outputId="ea23bca6-4117-4532-cae7-d8ee75792ff1"
# %%time
accuracies = demo.bootstrapForestModel(topNForestModelAuto,"contentTrainPcaNetwork",
"contentTestPcaNetwork",
topNFeatureList,
bestConfigTopN,
25
);
# + [markdown] colab_type="text" id="IhblQOsmhYfA"
# # Now, try the top 12 features by Leaf Based Feature Importance
# + colab={} colab_type="code" id="wD2nK-yxhYfB" outputId="5db1533f-f06f-4f63-b1d8-d567a834a9bc"
topNFeatureList = rankedFeaturesLeafBased
topNFeatureList
# + [markdown] colab_type="text" id="UQH9HqR4hYfE"
# ## Train Neural Net Model Using Top N Leaf Based Features
# + colab={} colab_type="code" id="V4FOGh6BhYfG"
deepLearnParam = AttributeDict({
"randomSeed": 1337,
"dropout": 0.5,
"activation": "RECTIFIER",
"outputActivation": "SOFTMAX",
"denseLayers": [50, 50],
"nOutputs": nClasses,
"nEpochs": 100,
"algoMethod": "ADAM",
"useLocking": False
})
# + colab={} colab_type="code" id="lRD2Tw6rhYfJ"
topNNnModel = "topNNnModelLeaf"
demo.defineNnModel(topNNnModel, deepLearnParam)
# + colab={} colab_type="code" id="Hg480XF7hYfL" outputId="d472ce33-8db5-48f3-dbd2-193278900dc2"
# %%time
demo.trainNnModel(topNNnModel,"contentTrainPcaNetwork", topNFeatureList, deepLearnParam)
# + colab={} colab_type="code" id="G1ql43hXhYfO" outputId="6cfdcc92-cd5d-4e28-b445-a0a7c32c19e1"
demo.scoreNnModel(topNNnModel,"contentTestPcaNetwork")
# + [markdown] colab_type="text" id="j7PyItbohYfe"
# ### Bootstrap Runs
# + colab={} colab_type="code" id="KvpbvnsshYff" outputId="20d22386-f9f2-40e9-b4ed-e2a875af8f81"
# %%time
accuracies = demo.bootstrapNnModel(topNNnModel,"contentTrainPcaNetwork",
"contentTestPcaNetwork",
topNFeatureList,
deepLearnParam,
25
);
# + [markdown] colab_type="text" id="ezxCO4EqhYfh"
# ## Train Forest Model Using Top N Leaf Based Features
# + colab={} colab_type="code" id="lEav0ftrhYfj"
topNForestModel = "topNForestModelLeaf"
# + colab={} colab_type="code" id="i0IsKgNihYfl" outputId="fc7d1514-470a-42b2-8d8e-3bc937b0519b"
# %%time
demo.trainForestModel(
topNForestModel, "contentTrainPcaNetwork", topNFeatureList)
# + colab={} colab_type="code" id="GKKI_Dp3hYfo" outputId="31ad22b7-692c-4987-ddb8-c62d5554249b"
resultsScoreTopNForestModel=demo.scoreForestModel(topNForestModel,"contentTestPcaNetwork")
# + [markdown] colab_type="text" id="AEF6OLIThYfp"
# ### Bootstrap Runs
# + colab={} colab_type="code" id="7IY6SB5ghYfq" outputId="42944525-bc9e-420e-8b4a-2fedff049b14"
# %%time
accuracies = demo.bootstrapForestModel(topNForestModel,"contentTrainPcaNetwork",
"contentTestPcaNetwork",
topNFeatureList,
n=25
);
# + [markdown] colab_type="text" id="cHKMYOqnhYfu"
# ## Autotune Forest Model Using Top N Leaf Based Features
# + colab={} colab_type="code" id="LYqC-TpphYfu"
topNForestModelAuto = f"topNForestModelAuto{topNCutoff}Leaf"
# + colab={} colab_type="code" id="cmXFF27uhYfx" outputId="1fd176f8-0171-477b-d6e0-235b96f6de75"
# %%time
bestConfigTopN = demo.loadOrTuneForestModel(topNForestModelAuto,
"contentTrainPcaNetwork",
topNFeatureList
)
print(bestConfigTopN)
# + colab={} colab_type="code" id="sv6YqdUNhYf0" outputId="2fcd5050-c8cd-432a-aa6c-abba4331dd92"
resultsScoreTopNForestModelAuto=demo.scoreForestModel(topNForestModelAuto,"contentTestPcaNetwork")
# + [markdown] colab_type="text" id="bEJqDZK5hYf6"
# ### Bootstrap Runs
# + colab={} colab_type="code" id="wsv55rI5hYf7" outputId="6ea34b29-2316-4d9b-d19f-1835d7b64634"
# %%time
accuracies = demo.bootstrapForestModel(topNForestModelAuto,"contentTrainPcaNetwork",
"contentTestPcaNetwork",
topNFeatureList,
bestConfigTopN,
25
);
# + [markdown] colab_type="text" id="CiqzI06_8l6-"
# # Session Cleanup
# + colab={} colab_type="code" id="S1Yd6e90hYf9"
s.terminate();
| demos/PCT5300-Reese-CoraClassification/python/part_3_feature_importance.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Linear Regression and the Core Python Libraries for Data Analysis and Scientific Computing
# This assignment is devoted to linear regression. Using the example of predicting a person's height from their weight, you will see the mathematics behind it.
# ## Task 1. Exploratory Data Analysis with Pandas
# In this task we will use the [SOCR](http://wiki.stat.ucla.edu/socr/index.php/SOCR_Data_Dinov_020108_HeightsWeights) data on the height and weight of 25,000 teenagers.
# **[1].** If you do not have the Seaborn library installed, run the command *conda install seaborn* in a terminal. (Seaborn is not included in the Anaconda distribution, but it provides convenient high-level functionality for data visualization.)
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
# Read the height and weight data (*weights_heights.csv*, attached to the assignment) into a Pandas DataFrame:
data = pd.read_csv('weights_heights.csv', index_col='Index')
# Usually the first thing to do after reading data is to look at the first few records. This catches read errors (for example, ending up with a single column instead of 10, whose name contains 9 semicolons). It also lets you get acquainted with the data: at a minimum, look at the features and their nature (quantitative, categorical, etc.).
#
# After that it is worth plotting histograms of the feature distributions - again, this helps understand the nature of a feature (whether its distribution is power-law, normal, or something else). A histogram can also reveal values that stand far apart from the rest - "outliers" in the data.
# Histograms are conveniently built with the *plot* method of a Pandas DataFrame with the argument *kind='hist'*.
#
# **Example.** Let us plot the height distribution of the teenagers in the *data* sample. We use the *plot* method of the DataFrame *data* with the argument *y='Height'* (the feature whose distribution we are plotting).
data.plot(y='Height', kind='hist',
color='red', title='Height (inch.) distribution')
# Arguments:
#
# - *y='Height'* - the feature whose distribution we are plotting
# - *kind='hist'* - means that a histogram is built
# - *color='red'* - the color
# **[2]**. Look at the first 5 records using the *head* method of the Pandas DataFrame. Plot a histogram of the weight distribution using the DataFrame's *plot* method. Make the histogram green and give the plot a title.
data.head(5)
data.plot(y='Weight', kind='hist',
color='green', title='Weight distribution')
# One effective method of exploratory data analysis is plotting pairwise feature dependencies. This creates $m \times m$ plots (*m* is the number of features), where histograms of the feature distributions are drawn on the diagonal, and scatter plots of pairs of features are drawn off the diagonal. This can be done with the $scatter\_matrix$ method of a Pandas DataFrame or the *pairplot* method of the Seaborn library.
#
# To illustrate this method, it is more interesting to add a third feature. Let us create a *body mass index* feature ([BMI](https://en.wikipedia.org/wiki/Body_mass_index)). To do this, we use the convenient combination of the Pandas DataFrame *apply* method and Python lambda functions.
def make_bmi(height_inch, weight_pound):
METER_TO_INCH, KILO_TO_POUND = 39.37, 2.20462
return (weight_pound / KILO_TO_POUND) / \
(height_inch / METER_TO_INCH) ** 2
data['BMI'] = data.apply(lambda row: make_bmi(row['Height'],
row['Weight']), axis=1)
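As a sanity check, the conversion above can be verified by hand on round numbers (hypothetical values, not taken from the dataset): 70 inches is about 1.78 m and 150 pounds is about 68 kg, giving a BMI near 21.5.

```python
# Hand check of the BMI conversion: 70 in ≈ 1.78 m, 150 lb ≈ 68.0 kg, BMI ≈ 21.5
METER_TO_INCH, KILO_TO_POUND = 39.37, 2.20462

def make_bmi(height_inch, weight_pound):
    # convert to kilograms and meters, then compute kg / m^2
    return (weight_pound / KILO_TO_POUND) / (height_inch / METER_TO_INCH) ** 2

print(round(make_bmi(70, 150), 1))  # 21.5
```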
# **[3].** Build a plot showing the pairwise dependencies of the features 'Height', 'Weight' and 'BMI'. Use the *pairplot* method of the Seaborn library.
sns.pairplot(data)
# Often in exploratory analysis one needs to study how a quantitative feature depends on a categorical one (say, salary on employee gender). Box plots from the Seaborn library help here. A box plot is a compact way to show the statistics of a numeric feature (the median and quartiles) for different values of a categorical feature. It also helps track "outliers" - observations whose value of the numeric feature differs strongly from the others.
# **[4]**. Create a new feature *weight_category* in the DataFrame *data* that takes 3 values: 1 if the weight is less than 120 pounds (~54 kg), 3 if the weight is greater than or equal to 150 pounds (~68 kg), and 2 otherwise. Build a box plot showing how height depends on the weight category. Use the *boxplot* method of the Seaborn library and the *apply* method of the Pandas DataFrame. Label the *y* axis "Height" and the *x* axis "Weight category".
# +
def weight_category(weight):
    if weight < 120:
        return 1
    elif weight >= 150:
        return 3
    else:
        return 2
data['Weight_category'] = data['Weight'].apply(weight_category)
sns.boxplot(x='Weight_category', y='Height', data=data)
plt.xlabel('Weight category')
plt.ylabel('Height')
# -
# **[5].** Build a scatter plot of height versus weight using the *plot* method of the Pandas DataFrame with the argument *kind='scatter'*. Give the plot a title.
data.plot.scatter(x='Weight', y='Height', c='DarkBlue', title='Height vs. weight')
# ## Task 2. Minimizing the Squared Error
# In its simplest formulation, the problem of predicting the value of a numeric feature from other features (the regression problem) is solved by minimizing a quadratic error function.
#
# **[6].** Write a function that, given two parameters $w_0$ and $w_1$, computes the squared error of approximating the dependence of height $y$ on weight $x$ by the straight line $y = w_0 + w_1 * x$:
# $$error(w_0, w_1) = \sum_{i=1}^n {(y_i - (w_0 + w_1 * x_i))}^2 $$
# Here $n$ is the number of observations in the dataset, and $y_i$ and $x_i$ are the height and weight of the $i$-th person in the dataset.
def sq_error(w0, w1):
    error = (data['Height'] - (w0 + w1 * data['Weight'])) ** 2
    return error.sum()
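Before applying it to the real data, the error functional can be hand-checked on toy numbers (a hypothetical sample, independent of *data*): for points lying exactly on the line $y = 50 + 0.1x$ the error is zero, and any other line gives a positive error.

```python
import numpy as np

# Hypothetical toy sample: weights x and heights y chosen so that
# the line y = 50 + 0.1 * x fits exactly
x = np.array([100.0, 120.0, 140.0])
y = np.array([60.0, 62.0, 64.0])

err_exact = np.sum((y - (50.0 + 0.1 * x)) ** 2)  # residuals are all zero
err_off = np.sum((y - (50.0 + 0.2 * x)) ** 2)    # residuals -10, -12, -14
print(err_exact, err_off)  # 0.0 440.0
```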
# So the problem is: how do we draw a straight line through the cloud of points (the observations in our dataset) in the "Height"/"Weight" feature space so as to minimize the functional from item 6? To start, let us draw a couple of arbitrary lines and convince ourselves that they capture the height-weight dependence poorly.
#
# **[7].** On the plot from item 5 of Task 1, draw two straight lines corresponding to the parameter values ($w_0, w_1) = (60, 0.05)$ and ($w_0, w_1) = (50, 0.16)$. Use the *plot* method from *matplotlib.pyplot* and the *linspace* method of the NumPy library. Label the axes and the plot.
# +
# line as a function of x and coefficients
line = lambda x, w0, w1: w0 + w1 * x
# generate x coordinates
points_num = 100
x_lst = np.linspace(0, 200, points_num)
# array of line coefficients (w0, w1)
k = np.array([[60., 0.05], [50., 0.16]])
# number of coefficient pairs
n = k.shape[0]
# array of y values
y_lst = np.zeros((n, points_num))
for i in range(n):
    y_lst[i] = line(x_lst, k[i, 0], k[i, 1])
# plotting
data.plot.scatter(x='Weight', y='Height', c='Purple')
for i in range(n):
    text = 'w0: ' + str(k[i, 0]) + ', w1: ' + str(k[i, 1])
    plt.plot(x_lst, y_lst[i], linewidth=3.0, label=text)
plt.legend()
plt.axis([75, 175, 60, 75])
plt.title('Height vs. weight')
plt.xlabel('Weight')
plt.ylabel('Height')
plt.show()
# -
# Minimizing a quadratic error function is a relatively simple problem because the function is convex, and many optimization methods exist for such problems. Let us look at how the error function depends on one parameter (the slope) when the other parameter (the intercept) is fixed.
#
# **[8].** Plot the error function computed in item 6 as a function of the parameter $w_1$ at $w_0 = 50$. Label the axes and the plot.
# +
# generate values for the parameter w1
n = 100
w1_lst = np.linspace(-5., 5., n)
# error for each w1
err_w1 = np.zeros(n)
for i in range(n):
    err_w1[i] = sq_error(50., w1_lst[i])
# plotting
plt.plot(w1_lst, err_w1)
plt.title('Error as a function of w1 at w0 = 50')
plt.xlabel('w1')
plt.ylabel('Error')
plt.show()
# -
# Now let us use an optimization method to find the "optimal" slope of the line approximating the height-weight dependence, with the coefficient fixed at $w_0 = 50$.
#
# **[9].** Using the *minimize_scalar* method from *scipy.optimize*, find the minimum of the function defined in item 6 for values of the parameter $w_1$ in the range [-5, 5]. On the plot from item 5 of Task 1, draw the line corresponding to the parameter values ($w_0$, $w_1$) = (50, $w_1\_opt$), where $w_1\_opt$ is the optimal value of $w_1$ found in the previous step.
# +
from scipy.optimize import minimize_scalar
res = minimize_scalar(lambda w1: sq_error(50., w1), bounds=(-5, 5), method='bounded')  # 'bounded' makes scipy honor the bounds
print('Optimal w1 value for w0 = 50:', round(res.x, 3))
# +
# line as a function of x and coefficients
line = lambda x, w0, w1: w0 + w1 * x
# generate x coordinates
points_num = 100
x_lst = np.linspace(0, 200, points_num)
# line coefficients (w0 and the optimal w1 found above)
k = np.array([50, 0.141])
# array of y values
y_lst = line(x_lst, k[0], k[1])
# plotting
data.plot.scatter(x='Weight', y='Height', c='Purple')
text = 'w0: ' + str(k[0]) + ', w1: ' + str(k[1])
plt.plot(x_lst, y_lst, linewidth=3.0, label=text)
plt.legend()
plt.axis([75, 175, 60, 75])
plt.title('Height vs. weight')
plt.xlabel('Weight')
plt.ylabel('Height')
plt.show()
# -
# When analyzing multidimensional data, one often wants to get an intuitive feel for the data through visualization. Alas, with more than 3 features such pictures cannot be drawn. In practice, to visualize data in 2D or 3D, 2 or (respectively) 3 principal components are extracted from the data (exactly how this is done we will see later in the course), and the data are displayed on a plane or in a volume.
#
# Let us see how to draw 3D pictures in Python, using the function $z(x,y) = \sin(\sqrt{x^2+y^2})$ for $x$ and $y$ in the interval [-5, 5] with a step of 0.25.
from mpl_toolkits.mplot3d import Axes3D
# Create objects of type matplotlib.figure.Figure (the figure) and matplotlib.axes._subplots.Axes3DSubplot (the axes).
# +
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in matplotlib 3.6
# Create NumPy arrays with the point coordinates along the X and Y axes.
# Use meshgrid, which builds coordinate matrices from coordinate
# vectors. Define the desired function Z(x, y).
X = np.arange(-5, 5, 0.25)
Y = np.arange(-5, 5, 0.25)
X, Y = np.meshgrid(X, Y)
Z = np.sin(np.sqrt(X**2 + Y**2))
# Finally, use the *plot_surface* method of the Axes3DSubplot
# object. Also label the axes.
surf = ax.plot_surface(X, Y, Z)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
# -
# **[10].** Build a 3D plot of the error function computed in item 6 as a function of the parameters $w_0$ and $w_1$. Label the $x$ axis "Intercept", the $y$ axis "Slope", and the $z$ axis "Error".
# +
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in matplotlib 3.6
# Create NumPy arrays with the point coordinates along the X and Y axes.
# Use meshgrid, which builds coordinate matrices from coordinate
# vectors. Define the desired function Z(x, y).
X = np.arange(0., 100., 1)
Y = np.arange(-5., 5., 0.5)
X, Y = np.meshgrid(X, Y)
squaredErrorVect = np.vectorize(sq_error)
Z = np.array( squaredErrorVect(X.ravel(), Y.ravel()) )
Z.shape = X.shape
# Finally, use the *plot_surface* method of the Axes3DSubplot
# object. Also label the axes.
surf = ax.plot_surface(X, Y, Z)
ax.set_xlabel('Intercept')
ax.set_ylabel('Slope')
ax.set_zlabel('Error')
plt.show()
# -
# **[11].** Using the *minimize* method from scipy.optimize, find the minimum of the function defined in item 6 for $w_0$ in the range [-100, 100] and $w_1$ in the range [-5, 5]. The starting point is ($w_0$, $w_1$) = (0, 0). Use the L-BFGS-B optimization method (the *method* argument of *minimize*). On the plot from item 5 of Task 1, draw the line corresponding to the optimal values of $w_0$ and $w_1$ found. Label the axes and the plot.
# +
from scipy.optimize import minimize
function = lambda w: sq_error(w[0], w[1])
bounds = ((-100., 100.), (-5., 5.))
x0 = (0., 0.)
opt = minimize(function, x0, bounds=bounds, method='L-BFGS-B')
print(opt)
# +
# line as a function of x and coefficients
line = lambda x, w0, w1: w0 + w1 * x
# generate x coordinates
points_num = 100
x_lst = np.linspace(0, 200, points_num)
# optimal line coefficients found by L-BFGS-B
k = np.array([57.57179162, 0.08200637])
# array of y values
y_lst = line(x_lst, k[0], k[1])
# plotting
data.plot.scatter(x='Weight', y='Height', c='Purple')
text = 'w0: ' + str(k[0]) + ', w1: ' + str(k[1])
plt.plot(x_lst, y_lst, linewidth=3.0, label=text)
plt.legend()
plt.axis([75, 175, 60, 75])
plt.title('Height vs. weight')
plt.xlabel('Weight')
plt.ylabel('Height')
plt.show()
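For a simple linear model, the numeric optimization can be cross-checked against the closed-form least-squares solution $w_1 = \operatorname{cov}(x, y)/\operatorname{var}(x)$, $w_0 = \bar{y} - w_1\bar{x}$. A sketch on synthetic data generated with coefficients close to those found above (the real CSV is assumed unavailable here):

```python
import numpy as np

# Synthetic height/weight data with known coefficients
rng = np.random.default_rng(0)
x = rng.uniform(100, 180, 5000)                     # weights, lbs
y = 57.57 + 0.082 * x + rng.normal(0, 1.0, 5000)    # heights plus noise

# Closed-form least squares: w1 = cov(x, y) / var(x), w0 = mean(y) - w1 * mean(x)
w1 = np.cov(x, y, bias=True)[0, 1] / np.var(x)
w0 = y.mean() - w1 * x.mean()
print(round(w0, 2), round(w1, 4))  # close to the L-BFGS-B result
```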
| 2. Supervised Learning/Linear Regression/weight_height/linRegression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Simple Linear Regression for stock using scikit-learn
#
# + outputHidden=false inputHidden=false
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import math
import seaborn as sns
# %matplotlib inline
import warnings
warnings.filterwarnings("ignore")
import yfinance as yf
yf.pdr_override()
# + outputHidden=false inputHidden=false
stock = 'AAPL'
start = '2016-01-01'
end = '2018-01-01'
data = yf.download(stock, start, end)
data.head()
# + outputHidden=false inputHidden=false
df = data.reset_index()
df.head()
# + outputHidden=false inputHidden=false
# drop non-feature columns; inplace=True returns None, so assign the result instead
# (note: 'Adj Close' remains among the features, so the near-perfect score below
# largely reflects target leakage)
X = df.drop(['Date', 'Close'], axis=1)
y = df[['Adj Close']]
# + outputHidden=false inputHidden=false
X = X.to_numpy()  # DataFrame.as_matrix() was removed from pandas
# + outputHidden=false inputHidden=false
from sklearn.model_selection import train_test_split
# Split X and y into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
# + outputHidden=false inputHidden=false
from sklearn.linear_model import LinearRegression
regression_model = LinearRegression()
regression_model.fit(X_train, y_train)
# + outputHidden=false inputHidden=false
intercept = regression_model.intercept_[0]
print("The intercept for our model is {}".format(intercept))
# + outputHidden=false inputHidden=false
regression_model.score(X_test, y_test)
# + outputHidden=false inputHidden=false
from sklearn.metrics import mean_squared_error
y_predict = regression_model.predict(X_test)
regression_model_mse = mean_squared_error(y_predict, y_test)
regression_model_mse
# + outputHidden=false inputHidden=false
math.sqrt(regression_model_mse)
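The value above is the RMSE: just the square root of the MSE. A hand-checkable sketch with hypothetical numbers (not the model's actual predictions):

```python
import numpy as np

# Hypothetical predictions vs. actual values
y_true = np.array([3.0, 5.0, 7.0])
y_pred = np.array([2.0, 5.0, 9.0])

mse = np.mean((y_true - y_pred) ** 2)  # (1 + 0 + 4) / 3
rmse = np.sqrt(mse)
print(mse, rmse)
```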
# + outputHidden=false inputHidden=false
# input the latest Open, High, Low, Adj Close, Volume
# (the features remaining after the columns dropped above)
regression_model.predict([[167.81, 171.75, 165.19, 166.48, 37232900]])
| Python_Stock/Basic_Machine_Learning_Predicts.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:zip35]
# language: python
# name: conda-env-zip35-py
# ---
# +
import pandas as pd
import numpy as np
base_path = '../Backtests/'
# +
# Rebalance on percent divergence
class PercentRebalance(object):
def __init__(self, percent_target):
self.rebalance_count = 0
self.percent_target = percent_target
def rebalance(self, row, weights, date):
total = row.sum()
        rebalanced = np.multiply(total, weights)
if np.any(np.abs((row-rebalanced)/rebalanced) > (self.percent_target/100.0)):
self.rebalance_count = self.rebalance_count + 1
return rebalanced
else:
return row
# Rebalance on calendar
class MonthRebalance(object):
def __init__(self, months):
self.month_to_rebalance = months
self.rebalance_count = 0
self.last_rebalance_month = 0
def rebalance(self, row, weights, date):
current_month = date.month
if self.last_rebalance_month != current_month:
total = row.sum()
rebalanced = np.multiply(weights, total)
self.rebalance_count = self.rebalance_count + 1
self.last_rebalance_month = date.month
return rebalanced
else:
return row
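The percent-divergence trigger in `PercentRebalance` can be exercised in isolation (a minimal sketch with hypothetical holdings; a 20% band corresponds to `PercentRebalance(20)`):

```python
import numpy as np

# Hypothetical drifted holdings against a 50/50 target
weights = np.array([0.5, 0.5])
holdings = np.array([0.8, 0.4])

target = holdings.sum() * weights              # value each sleeve should hold
divergence = np.abs((holdings - target) / target)
needs_rebalance = bool(np.any(divergence > 0.20))  # 20% band
print(target, needs_rebalance)                 # first sleeve is 33% over target
```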
# +
# Calculate the rebalanced combination
def calc_rebalanced_returns(returns, rebalancer, weights):
returns = returns.copy() + 1
# create a numpy ndarray to hold the cumulative returns
cumulative = np.zeros(returns.shape)
cumulative[0] = np.array(weights)
# also convert returns to an ndarray for faster access
rets = returns.values
    # using ndarrays, all of the multiplication is now handled by numpy
for i in range(1, len(cumulative) ):
np.multiply(cumulative[i-1], rets[i], out=cumulative[i])
cumulative[i] = rebalancer.rebalance(cumulative[i], weights, returns.index[i])
# convert the cumulative returns back into a dataframe
cumulativeDF = pd.DataFrame(cumulative, index=returns.index, columns=returns.columns)
# finding out how many times rebalancing happens is an interesting exercise
    print("Rebalanced {} times".format(rebalancer.rebalance_count))
# turn the cumulative values back into daily returns
rr = cumulativeDF.pct_change() + 1
rebalanced_return = rr.dot(weights) - 1
return rebalanced_return
def get_strat(strat):
df = pd.read_csv(base_path + strat + '.csv', index_col=0, parse_dates=True, names=[strat] )
return df
# +
# Use monthly rebalancer, one month interval
rebalancer = MonthRebalance(1)
# Define strategies and weights
portfolio = {
'core_trend': 0.25,
'counter_trend': 0.25,
'curve_trading': 0.25,
'time_return': 0.25,
}
# Read all the files into one DataFrame
df = pd.concat(
[
pd.read_csv('{}{}.csv'.format(
base_path,
strat
),
index_col=0,
parse_dates=True,
names=[strat]
).pct_change().dropna()
for strat in list(portfolio.keys())
], axis=1
)
# Calculate the combined portfolio
df['Combined'] = calc_rebalanced_returns(
df,
rebalancer,
weights=list(portfolio.values())
)
df.dropna(inplace=True)
# +
# Make Graph
import matplotlib
import matplotlib.pyplot as plt
include_combined = True
include_benchmark = True
benchmark = 'SPXTR'
returns = df.copy()  # strategy returns computed above ('returns' was previously undefined)
if include_benchmark:
    returns[benchmark] = get_strat(benchmark)[benchmark].pct_change()
#returns = returns['2003-1-1':]
normalized = (returns+1).cumprod()
font = {'family' : 'eurostile',
'weight' : 'normal',
'size' : 16}
matplotlib.rc('font', **font)
fig = plt.figure(figsize=(15, 8))
# First chart
ax = fig.add_subplot(111)
ax.set_title('Strategy Comparisons')
dashstyles = ['-','--','-.','.-.', '-']
i = 0
for strat in normalized:
if strat == 'Combined':
if not include_combined:
continue
clr = 'black'
dash = '-'
width = 5
elif strat == benchmark:
if not include_benchmark:
continue
clr = 'black'
dash = '-'
width = 2
#elif strat == 'equity_momentum':
# continue
else:
clr = 'grey'
dash = dashstyles[i]
width = i + 1
i += 1
ax.semilogy(normalized[strat], dash, label=strat, color=clr, linewidth=width)
ax.legend()
# -
df.to_clipboard()
# +
portfolio = {
'x': 1,
'y': 2,
'z': 3
}
#print(portfolio.values())
x = np.array(list(portfolio.keys()))
print(x)
| book/Chapter 20 - Performance Visualization/Combined Models.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# <style>
# .nbinput .prompt,
# .nboutput .prompt {
# display: none;
# }
# </style>
# + raw_mimetype="text/restructuredtext" active=""
# ########
# Examples
# ########
#
# Here we show a selection of examples of using cweqgen.
#
# Generating an equation
# ======================
#
# To generate an equation you just need the correct name for the equation, as given in the
# :ref:`Equations` section, and the :func:`~cweqgen.equations.equations` function. To generate
# the equation for the gravitational-wave amplitude :math:`h_0` you would do:
# -
from cweqgen import equations
eq = equations("h0")
# + raw_mimetype="text/restructuredtext" active=""
# If you print the returned :class:`~cweqgen.equations.EquationBase` object (called ``eq`` in this
# case) it will return a LaTeX string, via the :meth:`~cweqgen.equations.EquationBase.equation`
# method, giving the equation (note that the equation is not enclosed in "$" symbols):
# -
print(eq)
# + raw_mimetype="text/restructuredtext" active=""
# If working in a Jupyter notebook, you can show the typeset LaTeX equation by just running a cell
# containing ``eq``:
# -
eq
# + raw_mimetype="text/restructuredtext" active=""
# Equation as a figure
# --------------------
#
# You can return an equation as an object containing a :class:`matplotlib.figure.Figure`, which can then be saved in whatever format you require by using the :meth:`~cweqgen.equations.EquationBase.equation` method with the ``displaytype`` keyword set to ``"matplotlib"``. If running this in a Jupyter notebook, a png version of the equation will be shown.
# -
fig = eq.equation(displaytype="matplotlib")
fig.savefig("myequation.pdf") # save a pdf version of the equation
# + raw_mimetype="text/restructuredtext" active=""
# Equation with fiducial values
# -----------------------------
#
# Each equation is defined with a set of "`fiducial <https://en.wiktionary.org/wiki/fiducial>`_" values. A LaTeX string containing a version of the equation evaluated at the fiducial values can be created using the :meth:`~cweqgen.equations.EquationBase.fiducial_equation` method:
# -
print(eq.fiducial_equation())
# + raw_mimetype="text/restructuredtext" active=""
# If running this in a Jupyter notebook, it can display the typeset LaTeX equation:
# -
eq.fiducial_equation()
# + raw_mimetype="text/restructuredtext" active=""
# The :meth:`~cweqgen.equations.EquationBase.fiducial_equation` method can also take the ``displaytype="matplotlib"`` keyword argument to return a Matplotlib :class:`~matplotlib.figure.Figure` containing the equation.
# + raw_mimetype="text/restructuredtext" active=""
# Setting fiducial values
# ^^^^^^^^^^^^^^^^^^^^^^^
#
# You can generate the equation with different fiducial values. You can either do this when creating the :class:`~cweqgen.equations.EquationBase` object by passing :func:`~cweqgen.equations.equations` your own values, e.g.:
# -
eq = equations("h0", ellipticity=1e-7, distance=2.5)
eq.fiducial_equation()
# + raw_mimetype="text/restructuredtext" active=""
# or do it through :meth:`~cweqgen.equations.EquationBase.fiducial_equation`:
# -
eq.fiducial_equation(momentofinertia=2e38, rotationfrequency=200)
# + raw_mimetype="text/restructuredtext" active=""
# If you pass fiducial values as dimensionless values the default units from the equation definitions will be assumed. However, you can pass values with astropy :class:`~astropy.units.Unit` types and these will get correctly interpreted. For example, if you wanted a fiducial distance of 1000 light years and the principal moment of inertia in `cgs units <https://en.wikipedia.org/wiki/Centimetre%E2%80%93gram%E2%80%93second_system_of_units>`_, you could use:
# -
from astropy.units import Unit
eq = equations("h0", distance=1000 * Unit("lyr"), momentofinertia=1e45 * Unit("g cm^2"))
eq.fiducial_equation()
# + raw_mimetype="text/restructuredtext" active=""
# The keywords for providing the fiducial values can be from a range of aliases given in :obj:`cweqgen.definitions.ALLOWED_VARIABLES`.
#
# Evaluating the equation
# -----------------------
#
# The :class:`~cweqgen.equations.EquationBase` class does not only provide ways to output LaTeX strings; it can also be used to evaluate the equation at given values. This can be done using the :meth:`~cweqgen.equations.EquationBase.evaluate` method. If no values are provided to :meth:`~cweqgen.equations.EquationBase.evaluate` it will return the equation as evaluated at the default fiducial values (or those provided when initialising the equation), e.g.:
# -
eq = equations("h0")
eq.evaluate()
# + raw_mimetype="text/restructuredtext" active=""
# The :class:`~cweqgen.equations.EquationBase` actually has a `__call__ method <https://docs.python.org/3/reference/datamodel.html#object.__call__>`_ defined allowing you to use :class:`~cweqgen.equations.EquationBase` objects as functions by running the :meth:`~cweqgen.equations.EquationBase.evaluate` method, e.g.,
# -
eq()
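# + raw_mimetype="text/restructuredtext" active=""
# This delegation pattern is easy to replicate in plain Python. The sketch below (the ``SimpleEquation`` class, its variables, and its fiducial values are all invented for illustration and are not part of cweqgen) shows a class whose ``__call__`` is simply bound to its ``evaluate`` method:
# -

```python
class SimpleEquation:
    """Toy stand-in for EquationBase: an h0-like scaling e * f**2."""

    def __init__(self, **fiducial):
        self.fiducial = fiducial

    def evaluate(self, **values):
        # any variable not supplied falls back to its fiducial value
        p = {**self.fiducial, **values}
        return p["ellipticity"] * p["rotationfrequency"] ** 2

    # calling the object is the same as calling evaluate()
    __call__ = evaluate

eq_toy = SimpleEquation(ellipticity=1e-6, rotationfrequency=100.0)
assert eq_toy() == eq_toy.evaluate()  # both paths give the same result (~0.01)
```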
# + raw_mimetype="text/restructuredtext" active=""
# You can pass values for any of the variables in the equation to calculate it at those values (any variable not provided will still assume the fiducial values). Values can have astropy :class:`~astropy.units.Unit` types, but if not the default units will be assumed. You can also pass arrays of values, e.g.:
# -
eq.evaluate(distance=[1.0, 2.0, 3.0] * Unit("kpc"))
# + raw_mimetype="text/restructuredtext" active=""
# If you pass equal-length arrays then the output will be the same length as the inputs (i.e., the equation is evaluated at each index of the arrays):
# -
eq.evaluate(distance=[1.0, 2.0, 3.0] * Unit("kpc"), rotationfrequency=[50, 100, 150] * Unit("Hz"))
# + raw_mimetype="text/restructuredtext" active=""
# However, if you require values on a `mesh grid <https://numpy.org/doc/stable/reference/generated/numpy.meshgrid.html>`_ of values, you can add the ``mesh=True`` keyword argument:
# -
eq.evaluate(distance=[1.0, 2.0, 3.0] * Unit("kpc"), rotationfrequency=[50, 100, 150] * Unit("Hz"), mesh=True)
# + raw_mimetype="text/restructuredtext" active=""
# If arrays of different lengths are passed to :meth:`~cweqgen.equations.EquationBase.evaluate` then it will automatically perform the evaluation on a mesh grid.
# -
eq.evaluate(ellipticity=[1e-6, 1e-7], rotationfrequency=[50, 100, 150] * Unit("Hz"), mesh=True)
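# + raw_mimetype="text/restructuredtext" active=""
# Conceptually, a mesh evaluation computes the equation at every pairing of the input arrays, rather than index-by-index. A plain-Python sketch of the difference (the helper names and the toy formula are invented here, independent of cweqgen):
# -

```python
def eval_pairs(f, xs, ys):
    # index-by-index: output has the same length as the inputs
    return [f(x, y) for x, y in zip(xs, ys)]

def eval_mesh(f, xs, ys):
    # mesh: output is a len(xs) x len(ys) grid, one entry per pairing
    return [[f(x, y) for y in ys] for x in xs]

f = lambda eps, freq: eps * freq ** 2
pairs = eval_pairs(f, [1e-6, 1e-7], [50.0, 100.0])        # 2 values
grid = eval_mesh(f, [1e-6, 1e-7], [50.0, 100.0, 150.0])   # 2 x 3 grid
```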
# + raw_mimetype="text/restructuredtext" active=""
# Rearranging an equation
# -----------------------
#
# You can rearrange an equation to switch the value on the left hand side with one of the other variables. This uses the :meth:`~cweqgen.equations.EquationBase.rearrange` method, which returns a new :class:`~cweqgen.equations.EquationBase` (the original :class:`~cweqgen.equations.EquationBase` will not be changed):
# +
# equation for the braking index
eq = equations("brakingindex")
# rearrange to put frequency derivative on the lhs
req = eq.rearrange("rotationfdot")
req.equation()
# + raw_mimetype="text/restructuredtext" active=""
# The fiducial values for the old right hand side variable will be set from the value evaluated at the fiducial values from the original equation. However, a new fiducial value can be set by passing it to :meth:`~cweqgen.equations.EquationBase.rearrange`, e.g.:
# -
# set a fiducial value of 4 for the braking index
req = eq.rearrange("rotationfdot", 4)
req.fiducial_equation()
# + raw_mimetype="text/restructuredtext" active=""
# Substituting an equation
# ------------------------
#
# It is possible to substitute one equation into another using the :meth:`~cweqgen.equations.EquationBase.substitute` method. If we take the above rearranged equation giving rotation frequency derivative in terms of braking index, rotation frequency and frequency second derivative, it can be substituted into the equation for the gravitational-wave amplitude spin-down limit:
# +
from cweqgen import equations
# equation for the braking index
eq = equations("brakingindex")
# rearrange to put frequency derivative on the lhs
req = eq.rearrange("rotationfdot")
eqsd = equations("h0spindown")
subeq = eqsd.substitute(req) # substitute in req
subeq.equation()
# + raw_mimetype="text/restructuredtext" active=""
# Another useful example is getting the gravitational-wave amplitude in terms of the mass quadrupole :math:`Q_{22}`. This can be achieved with:
# +
# equation for gravitational-wave amplitude
eqh0 = equations("h0")
# equation for mass quadrupole (in terms of ellipticity and moment of inertia)
eqq22 = equations("massquadrupole")
# rearrange and substitute
eqh0q22 = eqh0.substitute(eqq22.rearrange("ellipticity"))
eqh0q22.equation()
# + raw_mimetype="text/restructuredtext" active=""
# Equivalent variables
# --------------------
#
# Some equations allow you to pass in variables that are equivalent (bar some conversion) to the required variables, e.g., using rotation period when rotation frequency is required:
# -
eq = equations("ellipticityspindown")
eq.equation()
# evaluate using rotation period rather than rotation frequency
eq.evaluate(rotationperiod=0.05 * Unit("s"))
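# + raw_mimetype="text/restructuredtext" active=""
# The rotation-period equivalence used above is just the reciprocal relation :math:`f_{\rm rot} = 1/P` (and, for first derivatives, :math:`\dot{f}_{\rm rot} = -\dot{P}/P^2`). A quick plain-Python check of these conversions (helper names invented here, not the cweqgen API):
# -

```python
def rotation_frequency(period_s):
    # frot = 1 / P
    return 1.0 / period_s

def rotation_fdot(period_s, period_dot):
    # differentiating frot = 1/P gives fdot = -Pdot / P**2
    return -period_dot / period_s ** 2

# a 50 ms rotation period corresponds to a 20 Hz rotation frequency
assert abs(rotation_frequency(0.05) - 20.0) < 1e-9
# a spinning-down star (Pdot > 0) has a negative frequency derivative
assert rotation_fdot(0.05, 1e-15) < 0
```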
# + raw_mimetype="text/restructuredtext" active=""
# Converting between frequencies
# ------------------------------
#
# If an equation contains a frequency parameter (or equivalently a rotation period) and/or its first derivative you can convert it to another frequency parameter using the :meth:`~cweqgen.equations.EquationBase.to` method. For example, an equation containing rotation frequency can be converted to one with rotation period:
# +
eq = equations("h0spindown")
neweq = eq.to("rotationperiod")
neweq.eqn
| docs/examples.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="Y4YlT-8B8lLN"
# # SMI AL Loop
# + colab={"base_uri": "https://localhost:8080/"} id="MMIAA-Ua8lLR" outputId="c379d728-9870-4fca-cbca-24658e0a12ef"
import h5py
import time
import random
import datetime
import copy
import numpy as np
import os
import csv
import json
import subprocess
import sys
import PIL.Image as Image
import torch
import torch.backends.cudnn as cudnn
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.models as models
from matplotlib import pyplot as plt
from distil.distil.utils.models.resnet import ResNet18
from trust.trust.utils.custom_dataset import load_dataset_custom
from torch.utils.data import Subset
from torch.autograd import Variable
import tqdm
from math import floor
from sklearn.metrics.pairwise import cosine_similarity, pairwise_distances
from distil.distil.active_learning_strategies.scg import SCG
from distil.distil.active_learning_strategies.badge import BADGE
from distil.distil.active_learning_strategies.entropy_sampling import EntropySampling
from distil.distil.active_learning_strategies.gradmatch_active import GradMatchActive
# NOTE: the strategies below are also used later in this notebook; the module
# paths are assumed and may need adjusting for your version of distil
from distil.distil.active_learning_strategies.glister import GLISTER
from distil.distil.active_learning_strategies.core_set import CoreSet
from distil.distil.active_learning_strategies.least_confidence import LeastConfidence
from distil.distil.active_learning_strategies.margin_sampling import MarginSampling
seed=42
torch.manual_seed(seed)
np.random.seed(seed)
random.seed(seed)
from distil.distil.utils.utils import *
# + id="ClNjNvIX8lLT"
def model_eval_loss(data_loader, model, criterion):
total_loss = 0
with torch.no_grad():
for batch_idx, (inputs, targets) in enumerate(data_loader):
inputs, targets = inputs.to(device), targets.to(device, non_blocking=True)
outputs = model(inputs)
loss = criterion(outputs, targets)
total_loss += loss.item()
return total_loss
def init_weights(m):
# torch.manual_seed(35)
if isinstance(m, nn.Conv2d):
torch.nn.init.xavier_uniform_(m.weight)
elif isinstance(m, nn.Linear):
torch.nn.init.xavier_uniform_(m.weight)
m.bias.data.fill_(0.01)
def weight_reset(m):
if isinstance(m, nn.Conv2d) or isinstance(m, nn.Linear):
m.reset_parameters()
def create_model(name, num_cls, device, embedding_type):
if name == 'ResNet18':
if embedding_type == "gradients":
model = ResNet18(num_cls)
else:
model = models.resnet18()
elif name == 'MnistNet':
model = MnistNet()
elif name == 'ResNet164':
model = ResNet164(num_cls)
model.apply(init_weights)
model = model.to(device)
return model
def loss_function():
criterion = nn.CrossEntropyLoss()
criterion_nored = nn.CrossEntropyLoss(reduction='none')
return criterion, criterion_nored
def optimizer_with_scheduler(model, num_epochs, learning_rate, m=0.9, wd=5e-4):
optimizer = optim.SGD(model.parameters(), lr=learning_rate,
momentum=m, weight_decay=wd)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_epochs)
return optimizer, scheduler
def optimizer_without_scheduler(model, learning_rate, m=0.9, wd=5e-4):
# optimizer = optim.Adam(model.parameters(),weight_decay=wd)
optimizer = optim.SGD(model.parameters(), lr=learning_rate,
momentum=m, weight_decay=wd)
return optimizer
def generate_cumulative_timing(mod_timing):
tmp = 0
mod_cum_timing = np.zeros(len(mod_timing))
for i in range(len(mod_timing)):
tmp += mod_timing[i]
mod_cum_timing[i] = tmp
return mod_cum_timing/3600
def find_err_per_class(test_set, val_set, final_val_classifications, final_val_predictions, final_tst_classifications,
final_tst_predictions, saveDir, prefix):
#find queries from the validation set that are erroneous
# saveDir = os.path.join(saveDir, prefix)
# if(not(os.path.exists(saveDir))):
# os.mkdir(saveDir)
val_err_idx = list(np.where(np.array(final_val_classifications) == False)[0])
tst_err_idx = list(np.where(np.array(final_tst_classifications) == False)[0])
val_class_err_idxs = []
tst_err_log = []
val_err_log = []
for i in range(num_cls):
if(feature=="ood"): tst_class_idxs = list(torch.where(torch.Tensor(test_set.targets.float()) == i)[0].cpu().numpy())
if(feature=="classimb"): tst_class_idxs = list(torch.where(torch.Tensor(test_set.targets) == i)[0].cpu().numpy())
val_class_idxs = list(torch.where(torch.Tensor(val_set.targets.float()) == i)[0].cpu().numpy())
#err classifications per class
val_err_class_idx = set(val_err_idx).intersection(set(val_class_idxs))
tst_err_class_idx = set(tst_err_idx).intersection(set(tst_class_idxs))
if(len(val_class_idxs)>0):
val_error_perc = round((len(val_err_class_idx)/len(val_class_idxs))*100,2)
else:
val_error_perc = 0
tst_error_perc = round((len(tst_err_class_idx)/len(tst_class_idxs))*100,2)
print("val, test error% for class ", i, " : ", val_error_perc, tst_error_perc)
val_class_err_idxs.append(val_err_class_idx)
tst_err_log.append(tst_error_perc)
val_err_log.append(val_error_perc)
tst_err_log.append(sum(tst_err_log)/len(tst_err_log))
val_err_log.append(sum(val_err_log)/len(val_err_log))
return tst_err_log, val_err_log, val_class_err_idxs
def aug_train_subset(train_set, lake_set, true_lake_set, subset, lake_subset_idxs, budget, augrandom=False):
all_lake_idx = list(range(len(lake_set)))
if(not(len(subset)==budget) and augrandom):
print("Budget not filled, adding ", str(int(budget) - len(subset)), " randomly.")
remain_budget = int(budget) - len(subset)
remain_lake_idx = list(set(all_lake_idx) - set(subset))
random_subset_idx = list(np.random.choice(np.array(remain_lake_idx), size=int(remain_budget), replace=False))
subset += random_subset_idx
lake_ss = SubsetWithTargets(true_lake_set, subset, torch.Tensor(true_lake_set.targets.float())[subset])
if(feature=="ood"):
ood_lake_idx = list(set(lake_subset_idxs)-set(subset))
private_set = SubsetWithTargets(true_lake_set, ood_lake_idx, torch.Tensor(np.array([split_cfg['num_cls_idc']]*len(ood_lake_idx))).float())
remain_lake_idx = list(set(all_lake_idx) - set(lake_subset_idxs))
remain_lake_set = SubsetWithTargets(lake_set, remain_lake_idx, torch.Tensor(lake_set.targets.float())[remain_lake_idx])
remain_true_lake_set = SubsetWithTargets(true_lake_set, remain_lake_idx, torch.Tensor(true_lake_set.targets.float())[remain_lake_idx])
print(len(lake_ss),len(remain_lake_set),len(lake_set))
if(feature!="ood"): assert((len(lake_ss)+len(remain_lake_set))==len(lake_set))
aug_train_set = torch.utils.data.ConcatDataset([train_set, lake_ss])
if(feature=="ood"):
return aug_train_set, remain_lake_set, remain_true_lake_set, private_set, lake_ss
else:
return aug_train_set, remain_lake_set, remain_true_lake_set, lake_ss
def getQuerySet(val_set, val_class_err_idxs, imb_cls_idx, miscls):
miscls_idx = []
if(miscls):
for i in range(len(val_class_err_idxs)):
if i in imb_cls_idx:
miscls_idx += val_class_err_idxs[i]
print("total misclassified ex from imb classes: ", len(miscls_idx))
else:
for i in imb_cls_idx:
imb_cls_samples = list(torch.where(torch.Tensor(val_set.targets.float()) == i)[0].cpu().numpy())
miscls_idx += imb_cls_samples
print("total samples from imb classes as targets: ", len(miscls_idx))
return Subset(val_set, miscls_idx)
def getPrivateSet(lake_set, subset, private_set):
#augment prev private set and current subset
new_private_set = SubsetWithTargets(lake_set, subset, torch.Tensor(lake_set.targets.float())[subset])
# new_private_set = Subset(lake_set, subset)
total_private_set = torch.utils.data.ConcatDataset([private_set, new_private_set])
return total_private_set
def remove_ood_points(lake_set, subset, idc_idx):
idx_subset = []
subset_cls = torch.Tensor(lake_set.targets.float())[subset]
for i in idc_idx:
idc_subset_idx = list(torch.where(subset_cls == i)[0].cpu().numpy())
idx_subset += list(np.array(subset)[idc_subset_idx])
print(len(idx_subset),"/",len(subset), " idc points.")
return idx_subset
def getPerClassSel(lake_set, subset, num_cls):
perClsSel = []
subset_cls = torch.Tensor(lake_set.targets.float())[subset]
for i in range(num_cls):
cls_subset_idx = list(torch.where(subset_cls == i)[0].cpu().numpy())
perClsSel.append(len(cls_subset_idx))
return perClsSel
def check_overlap(prev_idx, prev_idx_hist, idx):
prev_idx = [int(x/num_rep) if x < ((split_cfg["num_rep"] * split_cfg["lake_subset_repeat_size"])-1) else x for x in prev_idx ]
prev_idx_hist = [int(x/num_rep) if x < ((split_cfg["num_rep"] * split_cfg["lake_subset_repeat_size"])-1) else x for x in prev_idx_hist]
idx = [int(x/num_rep) if x < ((split_cfg["num_rep"] * split_cfg["lake_subset_repeat_size"])-1) else x for x in idx]
# overlap = set(prev_idx).intersection(set(idx))
overlap = [value for value in idx if value in prev_idx]
# overlap_hist = set(prev_idx_hist).intersection(set(idx))
overlap_hist = [value for value in idx if value in prev_idx_hist]
new_points = set(idx) - set(prev_idx_hist)
total_unique_points = set(idx+prev_idx_hist)
print("Num unique points within this selection: ", len(set(idx)))
print("New unique points: ", len(new_points))
print("Total unique points: ", len(total_unique_points))
print("overlap % of sel with prev idx: ", len(overlap)/len(idx))
print("overlap % of sel with all prev idx: ", len(overlap_hist)/len(idx))
# return len(overlap)/len(idx), len(overlap_hist)/len(idx)
return len(total_unique_points)
# + colab={"base_uri": "https://localhost:8080/"} id="mkbsfjml8lLX" outputId="283a4f46-1575-40a6-e915-88d2d7a4e793"
feature = "duplicate"
device_id = 0
run="fkna_3"
datadir = 'data/'
data_name = 'cifar10'
model_name = 'ResNet18'
num_rep = 10
learning_rate = 0.01
num_runs = 1 # number of random runs
computeClassErrorLog = False
magnification = 1
device = "cuda:"+str(device_id) if torch.cuda.is_available() else "cpu"
datkbuildPath = "./datk/build"
exePath = "cifarSubsetSelector"
print("Using Device:", device)
doublePrecision = True
linearLayer = True
miscls = False
# handler = DataHandler_CIFAR10
augTarget = True
embedding_type = "gradients"
if(feature=="classimb"):
num_cls = 10
budget = 125
num_epochs = int(10)
split_cfg = {"num_cls_imbalance":5, "per_imbclass_train":3, "per_imbclass_val":5, "per_imbclass_lake":300, "per_class_train":22, "per_class_val":5, "per_class_lake":3000} #cifar10_fk
initModelPath = "./"+data_name + "_" + model_name + "_" + str(learning_rate) + "_" + str(split_cfg["per_imbclass_train"]) + "_" + str(split_cfg["per_class_train"]) + "_" + str(split_cfg["num_cls_imbalance"])
if(feature=="ood"):
num_cls=8
budget=250
num_epochs = int(10)
split_cfg = {'num_cls_idc':8, 'per_idc_train':200, 'per_idc_val':10, 'per_idc_lake':500, 'per_ood_train':0, 'per_ood_val':0, 'per_ood_lake':5000}#cifar10
# split_cfg = {'num_cls_idc':50, 'per_idc_train':100, 'per_idc_val':2, 'per_idc_lake':100, 'per_ood_train':0, 'per_ood_val':0, 'per_ood_lake':500}#cifar100
initModelPath = "weights/" + data_name + "_" + feature + "_" + model_name + "_" + str(learning_rate) + "_" + str(split_cfg["per_idc_train"]) + "_" + str(split_cfg["per_idc_val"]) + "_" + str(split_cfg["num_cls_idc"])
if(feature=="duplicate"):
num_cls=10
budget=500
num_epochs = int(10)
split_cfg = {"train_size":500, "val_size":1000, "lake_size":5000, "num_rep":num_rep, "lake_subset_repeat_size":1000}
initModelPath = "weights/cg_" + data_name + "_" + model_name + "_" + str(learning_rate) + "_" + str(split_cfg["train_size"])
# + [markdown] id="9qKXzKtd8lLZ"
# # AL Like Train Loop
# + id="mMSfzzqM8lLZ"
def train_model_al(datkbuildPath, exePath, num_epochs, dataset_name, datadir, feature, model_name, budget, split_cfg, learning_rate, run,
device, computeErrorLog, strategy="SIM", sf=""):
# torch.manual_seed(42)
# np.random.seed(42)
print(strategy, sf)
#load the dataset based on type of feature
train_set, val_set, test_set, lake_set, num_cls = load_dataset_custom(datadir, dataset_name, feature, split_cfg, False, True)
if(feature=="ood"): num_cls+=1 #Add one class for OOD class
N = len(train_set)
trn_batch_size = 20
val_batch_size = 10
tst_batch_size = 100
trainloader = torch.utils.data.DataLoader(train_set, batch_size=trn_batch_size,
shuffle=True, pin_memory=True)
valloader = torch.utils.data.DataLoader(val_set, batch_size=val_batch_size,
shuffle=False, pin_memory=True)
tstloader = torch.utils.data.DataLoader(test_set, batch_size=tst_batch_size,
shuffle=False, pin_memory=True)
lakeloader = torch.utils.data.DataLoader(lake_set, batch_size=tst_batch_size,
shuffle=False, pin_memory=True)
true_lake_set = copy.deepcopy(lake_set)
# Budget for subset selection
bud = budget
# Variables to store accuracies
fulltrn_losses = np.zeros(num_epochs)
val_losses = np.zeros(num_epochs)
tst_losses = np.zeros(num_epochs)
timing = np.zeros(num_epochs)
val_acc = np.zeros(num_epochs)
full_trn_acc = np.zeros(num_epochs)
tst_acc = np.zeros(num_epochs)
final_tst_predictions = []
final_tst_classifications = []
best_val_acc = -1
csvlog = []
val_csvlog = []
# Results logging file
print_every = 3
# all_logs_dir = '/content/drive/MyDrive/research/tdss/SMI_active_learning_results_woVal/' + dataset_name + '/' + feature + '/'+ sf + '/' + str(bud) + '/' + str(run)
all_logs_dir = './SMI_active_learning_results/' + dataset_name + '/' + feature + '/'+ sf + '/' + str(bud) + '/' + str(run)
print("Saving results to: ", all_logs_dir)
subprocess.run(["mkdir", "-p", all_logs_dir])
exp_name = dataset_name + "_" + feature + "_" + strategy + "_" + sf + '_budget:' + str(bud) + '_epochs:' + str(num_epochs) + '_runs' + str(run)
print(exp_name)
res_dict = {"dataset":data_name,
"feature":feature,
"sel_func":sf,
"sel_budget":budget,
"num_selections":num_epochs,
"model":model_name,
"learning_rate":learning_rate,
"setting":split_cfg,
"all_class_acc":None,
"test_acc":[],
"sel_per_cls":[],
"num_unique_samples":[]}
# Model Creation
model = create_model(model_name, num_cls, device, embedding_type)
model1 = create_model(model_name, num_cls, device, embedding_type)
# Loss Functions
criterion, criterion_nored = loss_function()
strategy_args = {'batch_size': 20, 'device':'cuda', 'num_partitions':1, 'wrapped_strategy_class': None,
'embedding_type':'gradients', 'keep_embedding':False}
unlabeled_lake_set = LabeledToUnlabeledDataset(lake_set)
if(strategy == "AL"):
if(sf=="badge"):
strategy_sel = BADGE(train_set, unlabeled_lake_set, model, num_cls, strategy_args)
elif(sf=="us"):
strategy_sel = EntropySampling(train_set, unlabeled_lake_set, model, num_cls, strategy_args)
elif(sf=="glister" or sf=="glister-tss"):
strategy_sel = GLISTER(train_set, unlabeled_lake_set, model, num_cls, strategy_args, val_set, typeOf='rand', lam=0.1)
elif(sf=="gradmatch-tss"):
strategy_sel = GradMatchActive(train_set, unlabeled_lake_set, model, num_cls, strategy_args, val_set)
elif(sf=="coreset"):
strategy_sel = CoreSet(train_set, unlabeled_lake_set, model, num_cls, strategy_args)
elif(sf=="leastconf"):
strategy_sel = LeastConfidence(train_set, unlabeled_lake_set, model, num_cls, strategy_args)
elif(sf=="margin"):
strategy_sel = MarginSampling(train_set, unlabeled_lake_set, model, num_cls, strategy_args)
if(strategy == "SIM"):
strategy_args['scg_function'] = sf
strategy_args['verbose'] = True
strategy_args['optimizer'] = "LazyGreedy"
strategy_sel = SCG(train_set, unlabeled_lake_set, None, model, num_cls, strategy_args)
# Getting the optimizer and scheduler
# optimizer, scheduler = optimizer_with_scheduler(model, num_epochs, learning_rate)
optimizer = optimizer_without_scheduler(model, learning_rate)
private_set = []
#overlap vars
prev_idx = []
prev_idx_hist = []
per_ep_overlap = []
overall_overlap = []
idx_tracker = np.array(list(range(len(lake_set))))
for i in range(num_epochs):
print("AL epoch: ", i)
tst_loss = 0
tst_correct = 0
tst_total = 0
val_loss = 0
val_correct = 0
val_total = 0
if(i==0):
print("initial training epoch")
if(os.path.exists(initModelPath)):
model.load_state_dict(torch.load(initModelPath, map_location=device))
print("Init model loaded from disk, skipping init training: ", initModelPath)
model.eval()
with torch.no_grad():
final_val_predictions = []
final_val_classifications = []
for batch_idx, (inputs, targets) in enumerate(valloader):
inputs, targets = inputs.to(device), targets.to(device, non_blocking=True)
outputs = model(inputs)
loss = criterion(outputs, targets)
val_loss += loss.item()
if(feature=="ood"):
_, predicted = outputs[...,:-1].max(1)
else:
_, predicted = outputs.max(1)
val_total += targets.size(0)
val_correct += predicted.eq(targets).sum().item()
final_val_predictions += list(predicted.cpu().numpy())
final_val_classifications += list(predicted.eq(targets).cpu().numpy())
final_tst_predictions = []
final_tst_classifications = []
for batch_idx, (inputs, targets) in enumerate(tstloader):
inputs, targets = inputs.to(device), targets.to(device, non_blocking=True)
outputs = model(inputs)
loss = criterion(outputs, targets)
tst_loss += loss.item()
if(feature=="ood"):
_, predicted = outputs[...,:-1].max(1)
else:
_, predicted = outputs.max(1)
tst_total += targets.size(0)
tst_correct += predicted.eq(targets).sum().item()
final_tst_predictions += list(predicted.cpu().numpy())
final_tst_classifications += list(predicted.eq(targets).cpu().numpy())
best_val_acc = (val_correct/val_total)
val_acc[i] = val_correct / val_total
tst_acc[i] = tst_correct / tst_total
val_losses[i] = val_loss
tst_losses[i] = tst_loss
res_dict["test_acc"].append(tst_acc[i])
continue
else:
unlabeled_lake_set = LabeledToUnlabeledDataset(lake_set)
strategy_sel.update_data(train_set, unlabeled_lake_set)
#compute the error log before every selection
if(computeErrorLog):
tst_err_log, val_err_log, val_class_err_idxs = find_err_per_class(test_set, val_set, final_val_classifications, final_val_predictions, final_tst_classifications, final_tst_predictions, all_logs_dir, sf+"_"+str(bud))
csvlog.append(tst_err_log)
val_csvlog.append(val_err_log)
####SIM####
if(strategy=="SIM" or strategy=="SF"):
if(sf.endswith("cg")):
strategy_sel.update_privates(train_set)
print("Updated private set size: ", len(train_set))
elif(strategy=="random"):
subset = list(np.random.choice(np.array(list(range(len(lake_set)))), size=budget, replace=False))
if(strategy!="random"): # strategy_sel is only constructed for the AL/SIM strategies above
strategy_sel.update_model(model)
subset = strategy_sel.select(budget)
# print("True targets of subset: ", torch.Tensor(true_lake_set.targets.float())[subset])
# hypothesized_targets = strategy_sel.predict(unlabeled_lake_set)
# print("Hypothesized targets of subset: ", hypothesized_targets)
# if(sf.endswith("cg")): #for first selection
# if(len(private_set)==0):
# private_set = SubsetWithTargets(true_lake_set, subset, torch.Tensor(true_lake_set.targets.float())[subset])
# else:
# private_set = getPrivateSet(true_lake_set, subset, private_set)
# print("size of private set: ", len(private_set))
print("#### SELECTION COMPLETE ####")
lake_subset_idxs = subset #indices wrt the lake that need to be removed from the lake
if(feature=="ood"): #remove ood points from the subset
subset = remove_ood_points(true_lake_set, subset, sel_cls_idx)
print("selEpoch: %d, Selection Ended at:" % (i), str(datetime.datetime.now()))
perClsSel = getPerClassSel(true_lake_set, lake_subset_idxs, num_cls)
res_dict['sel_per_cls'].append(perClsSel)
if(i>0):
curr_unique_points = check_overlap(prev_idx, prev_idx_hist, list(idx_tracker[subset]))
res_dict["num_unique_samples"].append(curr_unique_points)
#augment the train_set with selected indices from the lake
if(feature=="classimb"):
train_set, lake_set, true_lake_set, add_val_set = aug_train_subset(train_set, lake_set, true_lake_set, subset, lake_subset_idxs, budget, True) #aug train with random if budget is not filled
if(augTarget): val_set = ConcatWithTargets(val_set, add_val_set)
elif(feature=="ood"):
train_set, lake_set, true_lake_set, new_private_set, add_val_set = aug_train_subset(train_set, lake_set, true_lake_set, subset, lake_subset_idxs, budget)
train_set = torch.utils.data.ConcatDataset([train_set, new_private_set]) #Add the OOD samples with a common OOD class
val_set = ConcatWithTargets(val_set, add_val_set)
if(len(private_set)!=0):
private_set = ConcatWithTargets(private_set, new_private_set)
else:
private_set = new_private_set
else: #Redundancy
train_set, lake_set, true_lake_set, _ = aug_train_subset(train_set, lake_set, true_lake_set, subset, lake_subset_idxs, budget)
print("After augmentation, size of train_set: ", len(train_set), " lake set: ", len(lake_set), " val set: ", len(val_set))
prev_idx = list(idx_tracker[subset])
prev_idx_hist += list(idx_tracker[subset])
idx_tracker = np.delete(idx_tracker, subset, axis=0)
# Reinit train and lake loaders with new splits and reinit the model
trainloader = torch.utils.data.DataLoader(train_set, batch_size=trn_batch_size, shuffle=True, pin_memory=True)
lakeloader = torch.utils.data.DataLoader(lake_set, batch_size=tst_batch_size, shuffle=False, pin_memory=True)
assert(len(idx_tracker)==len(lake_set))
if(augTarget):
valloader = torch.utils.data.DataLoader(val_set, batch_size=len(val_set), shuffle=False, pin_memory=True)
model = create_model(model_name, num_cls, device, strategy_args['embedding_type'])
optimizer = optimizer_without_scheduler(model, learning_rate)
#Start training
start_time = time.time()
num_ep=1
while(full_trn_acc[i]<0.99 and num_ep<300):
model.train()
for batch_idx, (inputs, targets) in enumerate(trainloader):
inputs, targets = inputs.to(device), targets.to(device, non_blocking=True)
# Variables in Pytorch are differentiable.
inputs, targets = Variable(inputs), Variable(targets)
# This will zero out the gradients for this batch.
optimizer.zero_grad()
outputs = model(inputs)
loss = criterion(outputs, targets)
loss.backward()
optimizer.step()
# scheduler.step()
full_trn_loss = 0
full_trn_correct = 0
full_trn_total = 0
model.eval()
with torch.no_grad():
for batch_idx, (inputs, targets) in enumerate(trainloader):
inputs, targets = inputs.to(device), targets.to(device, non_blocking=True)
outputs = model(inputs)
loss = criterion(outputs, targets)
full_trn_loss += loss.item()
_, predicted = outputs.max(1)
full_trn_total += targets.size(0)
full_trn_correct += predicted.eq(targets).sum().item()
full_trn_acc[i] = full_trn_correct / full_trn_total
print("Selection Epoch ", i, " Training epoch [" , num_ep, "]" , " Training Acc: ", full_trn_acc[i], end="\r")
num_ep+=1
timing[i] = time.time() - start_time
with torch.no_grad():
final_val_predictions = []
final_val_classifications = []
for batch_idx, (inputs, targets) in enumerate(valloader): #Compute Val accuracy
inputs, targets = inputs.to(device), targets.to(device, non_blocking=True)
outputs = model(inputs)
loss = criterion(outputs, targets)
val_loss += loss.item()
if(feature=="ood"):
_, predicted = outputs[...,:-1].max(1)
else:
_, predicted = outputs.max(1)
val_total += targets.size(0)
val_correct += predicted.eq(targets).sum().item()
final_val_predictions += list(predicted.cpu().numpy())
final_val_classifications += list(predicted.eq(targets).cpu().numpy())
final_tst_predictions = []
final_tst_classifications = []
for batch_idx, (inputs, targets) in enumerate(tstloader): #Compute test accuracy
inputs, targets = inputs.to(device), targets.to(device, non_blocking=True)
outputs = model(inputs)
loss = criterion(outputs, targets)
tst_loss += loss.item()
if(feature=="ood"):
_, predicted = outputs[...,:-1].max(1)
else:
_, predicted = outputs.max(1)
tst_total += targets.size(0)
tst_correct += predicted.eq(targets).sum().item()
final_tst_predictions += list(predicted.cpu().numpy())
final_tst_classifications += list(predicted.eq(targets).cpu().numpy())
val_acc[i] = val_correct / val_total
tst_acc[i] = tst_correct / tst_total
val_losses[i] = val_loss
fulltrn_losses[i] = full_trn_loss
tst_losses[i] = tst_loss
full_val_acc = list(np.array(val_acc))
full_timing = list(np.array(timing))
res_dict["test_acc"].append(tst_acc[i])
print('Epoch:', i + 1, 'FullTrn,TrainAcc,ValLoss,ValAcc,TstLoss,TstAcc,Time:', full_trn_loss, full_trn_acc[i], val_loss, val_acc[i], tst_loss, tst_acc[i], timing[i])
if(i==0):
print("saving initial model")
torch.save(model.state_dict(), initModelPath) #save initial train model if not present
if(computeErrorLog):
tst_err_log, val_err_log, val_class_err_idxs = find_err_per_class(test_set, val_set, final_val_classifications, final_val_predictions, final_tst_classifications, final_tst_predictions, all_logs_dir, sf+"_"+str(bud))
csvlog.append(tst_err_log)
val_csvlog.append(val_err_log)
print(csvlog)
res_dict["all_class_acc"] = csvlog
res_dict["all_val_class_acc"] = val_csvlog
with open(os.path.join(all_logs_dir, exp_name+".csv"), "w") as f:
writer = csv.writer(f)
writer.writerows(csvlog)
# save results dir with test acc and per class selections
with open(os.path.join(all_logs_dir, exp_name+".json"), 'w') as fp:
json.dump(res_dict, fp)
# + [markdown] id="kp9_ZU7I8lLa"
# # FLCG
# + colab={"base_uri": "https://localhost:8080/"} id="GZKsblhs8lLa" outputId="5983fdfb-6541-43e9-a0f0-304ddfa543b6"
train_model_al(datkbuildPath, exePath, num_epochs, data_name, datadir, feature, model_name, budget, split_cfg, learning_rate, run, device, computeClassErrorLog, "SIM",'flcg')
# -
# # LOGDETCG
train_model_al(datkbuildPath, exePath, num_epochs, data_name, datadir, feature, model_name, budget, split_cfg, learning_rate, run, device, computeClassErrorLog, "SIM",'logdetcg')
# + [markdown] id="9Ou4D7n48lLb"
# # BADGE
# + colab={"base_uri": "https://localhost:8080/"} id="x9_mFqNi8lLb" outputId="f1793df9-ba1f-4b87-c89a-d3ad7135f4f7"
train_model_al(datkbuildPath, exePath, num_epochs, data_name, datadir, feature, model_name, budget, split_cfg, learning_rate, run, device, computeClassErrorLog, "AL","badge")
# + [markdown] id="7kwjxMFh8lLb"
# # US
# + colab={"base_uri": "https://localhost:8080/"} id="CF4un-LA8lLb" outputId="c5160f72-26ec-4601-e508-50235d4d9a70"
train_model_al(datkbuildPath, exePath, num_epochs, data_name, datadir, feature, model_name, budget, split_cfg, learning_rate, run, device, computeClassErrorLog, "AL","us")
# + [markdown] id="EPHhNlC58lLb"
# # GLISTER
# + colab={"base_uri": "https://localhost:8080/"} id="ZKQg16xY8lLb" outputId="ce3798dc-6b3d-4194-a9a2-91ee67d63bdb"
train_model_al(datkbuildPath, exePath, num_epochs, data_name, datadir, feature, model_name, budget, split_cfg, learning_rate, run, device, computeClassErrorLog, "AL","glister-tss")
# + [markdown] id="9alLsFyi8lLd"
# # FL
# + colab={"base_uri": "https://localhost:8080/"} id="qg1ksWWJ8lLd" outputId="30627285-7fde-482f-c247-d6cecf305334"
train_model_al(datkbuildPath, exePath, num_epochs, data_name, datadir, feature, model_name, budget, split_cfg, learning_rate, run, device, computeClassErrorLog, "SF",'fl')
# + [markdown] id="tQdih1_X8lLd"
# # LOGDET
# + colab={"base_uri": "https://localhost:8080/", "height": 717} id="wy8Qn5z-8lLd" outputId="a0d538d1-4ccf-4879-d4d2-ab86fcc95114"
train_model_al(datkbuildPath, exePath, num_epochs, data_name, datadir, feature, model_name, budget, split_cfg, learning_rate, run, device, computeClassErrorLog, "SF",'logdet')
# + [markdown] id="3dZqCZ1v8lLe"
# # Random
# + colab={"base_uri": "https://localhost:8080/"} id="FlbOpp438lLe" outputId="f754727a-dfb5-4853-d4c1-8a96115182db"
train_model_al(datkbuildPath, exePath, num_epochs, data_name, datadir, feature, model_name, budget, split_cfg, learning_rate, run, device, computeClassErrorLog, "random",'random')
# + [markdown] id="Z4sDppX8vFnw"
# # CORESET
# + colab={"base_uri": "https://localhost:8080/"} id="MgeWSB4qvJmJ" outputId="10e62461-d81c-4974-bae7-0526c7c8f5b5"
train_model_al(datkbuildPath, exePath, num_epochs, data_name, datadir, feature, model_name, budget, split_cfg, learning_rate, run, device, computeClassErrorLog, "AL","coreset")
# + [markdown] id="-zPJubq7vVwc"
# # LEASTCONF
# + colab={"base_uri": "https://localhost:8080/"} id="wmVWElmyvUED" outputId="a56dce6a-8fed-4684-b2f5-8e538b94f93d"
train_model_al(datkbuildPath, exePath, num_epochs, data_name, datadir, feature, model_name, budget, split_cfg, learning_rate, run, device, computeClassErrorLog, "AL","leastconf")
# + [markdown] id="1pkk2sbuvdak"
# # MARGIN SAMPLING
# + colab={"base_uri": "https://localhost:8080/"} id="OxIT27hqvcI3" outputId="8dcef4b6-a336-4bb0-ec4d-d8108c22f6bf"
train_model_al(datkbuildPath, exePath, num_epochs, data_name, datadir, feature, model_name, budget, split_cfg, learning_rate, run, device, computeClassErrorLog, "AL","margin")
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Visualizing Hidden Layers
# +
# %matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.model_selection import train_test_split
# -
def plot_decision_boundary(model, X, y):
X_max = X.max(axis=0)
X_min = X.min(axis=0)
xticks = np.linspace(X_min[0], X_max[0], 100)
yticks = np.linspace(X_min[1], X_max[1], 100)
xx, yy = np.meshgrid(xticks, yticks)
ZZ = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = ZZ[:,0] >= 0.5
Z = Z.reshape(xx.shape)
fig, ax = plt.subplots()
ax = plt.gca()
ax.contourf(xx, yy, Z, cmap=plt.cm.bwr, alpha=0.2)
ax.scatter(X[:,0], X[:,1], c=y, alpha=0.4)
# +
df = pd.read_csv('../data/geoloc_elev.csv')
# we only use the 2 features that matter
X = df[['lat', 'lon']].values
y = df['target'].values
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size = 0.3, random_state=0)
# -
# we are using notebook interactive plotting.
# make sure to snap the plot to the notebook before proceeding
df.plot(kind='scatter',
x='lat',
y='lon',
c='target',
cmap='bwr');
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
# +
model = Sequential()
model.add(Dense(4, input_dim=2, activation='tanh'))
model.add(Dense(3, activation='tanh'))
model.add(Dense(1, activation='sigmoid'))
model.compile(SGD(learning_rate=0.5), 'binary_crossentropy', metrics=['accuracy'])
h = model.fit(X_train, y_train, epochs=20, verbose=0, validation_split=0.1)
# -
# we are using notebook interactive plotting.
# make sure to snap the plot to the notebook before proceeding
plot_decision_boundary(model, X, y)
# ## Representation Learning: inspecting the output of the hidden layer
model.layers
# In order to extract the activations at the hidden layer from tensorflow.keras, we can [create a function](http://keras.io/getting-started/faq/#how-can-i-visualize-the-output-of-an-intermediate-layer) where we specify what layer we would like to "extract" the value of like so:
from tensorflow.keras import backend as K
input_t = model.layers[0].input
inner_t = model.layers[1].output
get_hidden_layer_output = K.function([input_t], [inner_t])
get_hidden_layer_output
# +
H = get_hidden_layer_output([X_test])[0]
H.shape
# -
from mpl_toolkits.mplot3d import Axes3D
# A helper function to make a 3d scatter plot
def plot_3d_representation(X, y):
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X[:,0], X[:,1], X[:,2], c=y, alpha=0.2)
ax.view_init(60, 30)
plot_3d_representation(H, y_test)
# What do you see? Are the classes linearly separable now? Discuss what you think is happening with a partner or instructor.
# ## Exercise 1
#
# - Reset the above model to random weights and inspect the hidden layer representation
# - Are the two classes well separated without training?
# + tags=["solution", "empty"]
model = Sequential()
model.add(Dense(4, input_dim=2, activation='tanh'))
model.add(Dense(3, activation='tanh'))
model.add(Dense(1, activation='sigmoid'))
model.compile(SGD(learning_rate=0.5), 'binary_crossentropy', metrics=['accuracy'])
# + tags=["solution"]
get_hidden_layer_output = K.function([model.layers[0].input],
[model.layers[1].output])
# + tags=["solution"]
H = get_hidden_layer_output([X_test])[0]
# + tags=["solution"]
plot_3d_representation(H, y_test)
# -
# ## Exercise 2
#
# Let's separate True from False banknotes and look at how the model learns the inner representation.
#
# - Load the `../data/banknotes.csv` dataset into a pandas dataframe
# - Inspect it using Seaborn Pairplot
# - Separate features from labels. Labels are contained in the `class` column
# - Split data into train and test sets, using a 30% test size and random_state=42
# - Create a model with the following architecture:
# Input: 4 features
# Inner layer: 2 nodes, relu activation
# Output layer: 1 node, sigmoid
#
# - Compile the model
# - Set the model weights to the initial weights provided below using `model.set_weights`
# - Train the model one epoch at a time, and at each epoch visualize the test data as it appears at the output of the inner layer `model.layers[0].output` on a 2D scatter plot.
#
# You should see the model gradually learn to separate the 2 classes.
weights = [np.array([[-0.26285839, 0.82659411],
[ 0.65099144, -0.7858932 ],
[ 0.40144777, -0.92449236],
[ 0.87284446, -0.59128475]]),
np.array([ 0., 0.]),
np.array([[-0.7150408 ], [ 0.54277754]]),
np.array([ 0.])]
# + tags=["solution", "empty"]
df = pd.read_csv('../data/banknotes.csv')
df.head()
# + tags=["solution"]
import seaborn as sns
# + tags=["solution"]
sns.pairplot(df, hue="class");
# + tags=["solution"]
X = df.drop('class', axis=1).values
y = df['class'].values
# + tags=["solution"]
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=42)
# + tags=["solution"]
from tensorflow.keras.optimizers import RMSprop
# + tags=["solution"]
model = Sequential()
model.add(Dense(2, input_shape=(4,), activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer=RMSprop(learning_rate=0.01),
              metrics=['accuracy'])
# + tags=["solution"]
model.set_weights(weights)
# + tags=["solution"]
features_function = K.function([model.layers[0].input],
[model.layers[0].output])
# + tags=["solution"]
plt.figure(figsize=(15,10))
for i in range(1, 26):
plt.subplot(5, 5, i)
h = model.fit(X_train, y_train, batch_size=16,
epochs=1, verbose=0)
test_acc = model.evaluate(X_test, y_test,
verbose=0)[1]
features = features_function([X_test])[0]
plt.scatter(features[:, 0], features[:, 1],
c=y_test, cmap='coolwarm', marker='.')
plt.xlim(-0.5, 15)
plt.ylim(-0.5, 15)
acc_ = test_acc * 100.0
t = 'Epoch: {}, Test Acc: {:3.1f} %'.format(i, acc_)
plt.title(t, fontsize=11)
plt.tight_layout();
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="39hj0mbqVkTI"
# # Working with Time Series
# + [markdown] colab_type="text" id="_6AiP6Z7VkTJ"
# Pandas was developed in the context of financial modeling, so, as you might expect, it contains a fairly extensive set of tools for working with dates, times, and time-indexed data.
# Date and time data comes in a few flavors:
#
# - **Timestamps** reference particular moments in time (e.g., July 4th, 2015 at 7:00am).
# - **Time intervals** and **periods** reference a length of time between two particular points; for example, the year 2015. *Periods* usually reference the special case of time intervals in which each interval is of uniform length and does not overlap (e.g., 24-hour periods comprising days).
# - **Time deltas** or **durations** reference an exact length of time (e.g., a duration of 22.56 seconds).
#
# In this section, we will see how to work with each of these date/time data types in Pandas.
#
# This will not be a complete guide to the time series tools available in Python or Pandas, but rather a broad overview of how to approach working with time series.
#
# We will start with the tools for dealing with dates and times in Python, before moving more specifically to the tools provided by Pandas.
#
# After listing some deeper resources, we will look at a few short examples of working with time series data in Pandas.
# + [markdown] colab_type="text" id="rg2XTm_bVkTJ"
# ## Dates and times in Pandas
#
# The Python world has several available representations of dates, times, deltas, and time intervals.
# While the time series tools provided by Pandas tend to be the most useful for data science applications, it is helpful to see their relationship to other packages used in Python.
# + [markdown] colab_type="text" id="GUV2CpMRVkTK"
# ### Native Python dates and times: ``datetime`` and ``dateutil``
#
# Python's basic objects for working with dates and times live in the built-in ``datetime`` module.
# Along with the third-party ``dateutil`` module, we can use them to easily perform a host of useful operations on dates and times.
# For example, we can manually build a date using the ``datetime`` type:
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="m3T17dJ5VkTK" jupyter={"outputs_hidden": false} outputId="bfbc6773-55b0-4e98-ca14-6131c4d54ff4"
from datetime import datetime
datetime(year=2015, month=7, day=4)
# + [markdown] colab_type="text" id="fqhuoVOiVkTO"
# Or, using the ``dateutil`` module, you can parse dates from strings in a variety of formats:
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="QTd0fkyoVkTP" jupyter={"outputs_hidden": false} outputId="d11ab145-92c6-4621-e727-495f49217a4a"
from dateutil import parser
date = parser.parse("4th of July, 2015")
date
# -
# But it can infer many other formats:
print(parser.parse("2015/10/12"))
print(parser.parse("2015-10-12"))
# Careful: by default, this format is interpreted as month-day-year
print(parser.parse("12-10-2015"))
# It can even give you today's date, right now:
date = datetime.now()
date
# + [markdown] colab_type="text" id="Bw5SXOOjVkTR"
# Once we have the ``datetime`` object, we can do things like printing the day of the week:
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="pt0oIZovVkTS" jupyter={"outputs_hidden": false} outputId="4c4f545c-d335-4535-c69a-ed472f781baa"
date.strftime('%A')
# -
# ### EXERCISE
#
# Parse the following dates and find out which day of the week each was or will be (some you will have to build yourself, others can be parsed directly):
# 1. "2020-09-15"
# 2. "12th October, 1492"
# 3. January 20, 1999
# 4. March 7, 2077
# 5. "1512/02/01"
# 6. "2021-05-22"
print(parser.parse("2020-09-15"))
print(parser.parse("12th October, 1492"))
print(parser.parse("1999-01-20"))
print(parser.parse("2077-03-07"))
print(parser.parse("1512/02/01"))
print(parser.parse("2021-05-22"))
# + [markdown] colab_type="text" id="TZL9eW3EVkTU"
# In the last line, we used one of the standard string format codes for printing dates (``"%A"``), which you can read about in the [strftime section](https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior) of Python's [datetime documentation](https://docs.python.org/3/library/datetime.html).
#
# Documentation of other date utilities can be found in [dateutil's online documentation](http://labix.org/python-dateutil).
# A related package to be aware of is [``pytz``](http://pytz.sourceforge.net/), which contains tools for working with time zones.
# -
# But we can do more than this: we can also add and subtract time deltas. For that we will use ``relativedelta``, passing it the days, months, years... we want to use in our date arithmetic.
#
# [Here](http://labix.org/python-dateutil#head-ba5ffd4df8111d1b83fc194b97ebecf837add454) you can read more about this object. Below are some examples:
# +
from dateutil.relativedelta import relativedelta
# Subtract 8 days from January 15, 2024:
nueva_fecha = datetime(year=2024, month=1, day=15) - relativedelta(days=8)
print(nueva_fecha)
# Subtract 13 months from May 20, 1970:
nueva_fecha2 = datetime(year=1970, month=5, day=20) - relativedelta(months=13)
print(nueva_fecha2)
# But we can combine several units in a single statement, and it also works with times:
# Subtract 1 year, 48 days and 53 minutes from May 20, 1970 at 20:59:00:
nueva_fecha3 = datetime(year=1970, month=5, day=20, hour=20, minute=59, second=0) - relativedelta(years=1, days=48, minutes=53)
print(nueva_fecha3)
# -
# ### EXERCISE
#
# Use the first 4 dates from the previous exercise and compute:
# 1. Subtract 24 days
# 2. Add 5 months
# 3. Add 2 days and subtract 4 months
# 4. Add 1 year and 2 days
#
# Finally, use the current date and time to compute:
# 1. One month ago
# 2. One year ago
# 3. 2 hours ago
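# A minimal sketch of one possible solution (the variable names and the exact dates used are our own reading of the exercise):

```python
from datetime import datetime
from dateutil import parser
from dateutil.relativedelta import relativedelta

# the first 4 dates from the previous exercise
fechas = [parser.parse("2020-09-15"),
          parser.parse("12th October, 1492"),
          datetime(1999, 1, 20),
          datetime(2077, 3, 7)]

print(fechas[0] - relativedelta(days=24))                           # 2020-08-22
print(fechas[1] + relativedelta(months=5))                          # 1493-03-12
print(fechas[2] + relativedelta(days=2) - relativedelta(months=4))  # 1998-09-22
print(fechas[3] + relativedelta(years=1, days=2))                   # 2078-03-09

ahora = datetime.now()
print(ahora - relativedelta(months=1))  # one month ago
print(ahora - relativedelta(years=1))   # one year ago
print(ahora - relativedelta(hours=2))   # 2 hours ago
```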
# What ``relativedelta`` really does is provide a friendlier interface on top of datetime (which we could use directly by importing ``timedelta`` from ``datetime``), since by default datetime expresses date differences in days and seconds. With ``relativedelta`` we can also use months and years in date arithmetic.
#
# That is why, when we operate on dates, we get back a ``datetime.timedelta`` object:
# +
a = datetime.now()
b = datetime.now() - relativedelta(months=7, days=7, hours=7)
b - a
# -
# So we are usually better off working with ``relativedelta``; nevertheless, we can also work with date differences directly, as we saw in the previous step. For example, if we want to know how long we have been in class:
# +
comienzo = datetime(year=2020, month=12, day=17, hour=18, minute=5)
actual = datetime.now()
tiempo_de_clase = actual - comienzo
print(tiempo_de_clase)
# -
# Or we can play with time intervals based on dates. For example, if we know the start and end dates of an event that took place last year, and we want to know when it will end next year, assuming it has been moved because of the pandemic, we could do the following:
# +
comienzo_evento_pasado = datetime(year=2019, month=5, day=1)
fin_evento_pasado = datetime(year=2019, month=6, day=12)
comienzo_evento_next = datetime(year=2021, month=7, day=29)
fin_evento_next = comienzo_evento_next + (fin_evento_pasado - comienzo_evento_pasado)
print(fin_evento_next)
# -
# The power of ``datetime`` and ``dateutil`` lies in their flexibility and easy syntax: we can use these objects and their built-in methods to easily perform nearly any operation we might be interested in.
# Where they break down is when we wish to work with large arrays of dates and times:
# just as lists of Python numerical variables are suboptimal compared to NumPy's typed numerical arrays, lists of Python datetime objects are suboptimal compared to typed arrays of dates.
# + [markdown] colab_type="text" id="BSPEPLCGVkTU"
# ### Typed arrays of times: NumPy's ``datetime64``
#
# The weaknesses of Python's datetime format inspired the NumPy team to add a set of native time series data types to their own library.
# The ``datetime64`` dtype encodes dates as 64-bit integers, and thus allows arrays of dates to be represented very compactly.
# ``datetime64`` requires a very specific input format:
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="6YynSjAiVkTV" jupyter={"outputs_hidden": false} outputId="5bf3e5e8-71a4-4b18-f187-99cc4fd391fe"
import numpy as np
date = np.array('2015-07-04', dtype=np.datetime64)
date
# + [markdown] colab_type="text" id="lRK7lO-UVkTX"
# However, once we have this date formatted, we can quickly perform vectorized operations on it:
# + colab={"base_uri": "https://localhost:8080/", "height": 86} colab_type="code" id="xpNxeO0sVkTX" jupyter={"outputs_hidden": false} outputId="a2f5c4e4-ed33-486c-8a41-fad7900219f0"
date + np.arange(12)  # adds 0 through 11 days
# + [markdown] colab_type="text" id="cTw12V1dVkTa"
# Because of the uniform type in NumPy ``datetime64`` arrays, this type of operation can be performed much more quickly than if we were working directly with Python's ``datetime`` objects, especially as arrays get large.
#
# One detail of the ``datetime64`` and ``timedelta64`` objects is that they are built on a fundamental time unit.
# Because the ``datetime64`` object is limited to 64-bit precision, the range of encodable times is $2^{64}$ times this fundamental unit.
# In other words, ``datetime64`` imposes a trade-off between time resolution and maximum time span.
#
# For example, if you want a time resolution of one nanosecond, you only have enough information to encode a range of $2^{64}$ nanoseconds, which is just under 600 years.
# NumPy will infer the desired unit from the input; for example, here is a day-based datetime:
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="9Xb7R88GVkTa" jupyter={"outputs_hidden": false} outputId="1df43f9e-8115-4c4c-b160-173e1e0cac33"
np.datetime64('2015-07-04')  # stores day-level precision
# + [markdown] colab_type="text" id="DZWmriPLVkTd"
# And here is a minute-based datetime:
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="9LN1EXA1VkTd" jupyter={"outputs_hidden": false} outputId="66ef0006-f820-4103-de61-f9eb98dec347"
np.datetime64('2015-07-04 12:00')  # stores minute-level precision
# + [markdown] colab_type="text" id="1OeED8pPVkTf"
# Notice that the time zone is automatically set to the local time of the computer executing the code.
# You can force any desired fundamental unit using one of the many format codes; for example, here we force a nanosecond-based time:
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="wu9ao2w1VkTf" jupyter={"outputs_hidden": false} outputId="f793da2e-a226-4f09-b3a0-41788f75e883"
np.datetime64('2015-07-04 12:59:59.50', 'ns')  # force nanosecond storage
# the precision of a datetime64 value depends on how much date detail is stored
# the more detail, the smaller the span of time that can be covered
# + [markdown] colab_type="text" id="tbHCELHKVkTi"
# The following table, drawn from the [NumPy datetime64 documentation](http://docs.scipy.org/doc/numpy/reference/arrays.datetime.html), lists the available format codes along with the relative and absolute timespans they can encode:
# + [markdown] colab_type="text" id="6CMaenZhVkTi"
# |Code | Meaning | Time span (relative) | Time span (absolute) |
# |--------|-------------|----------------------|------------------------|
# | ``Y`` | Year | ± 9.2e18 years | [9.2e18 BC, 9.2e18 AD] |
# | ``M`` | Month | ± 7.6e17 years | [7.6e17 BC, 7.6e17 AD] |
# | ``W`` | Week | ± 1.7e17 years | [1.7e17 BC, 1.7e17 AD] |
# | ``D`` | Day | ± 2.5e16 years | [2.5e16 BC, 2.5e16 AD] |
# | ``h`` | Hour | ± 1.0e15 years | [1.0e15 BC, 1.0e15 AD] |
# | ``m`` | Minute | ± 1.7e13 years | [1.7e13 BC, 1.7e13 AD] |
# | ``s`` | Second | ± 2.9e12 years | [ 2.9e9 BC, 2.9e9 AD] |
# | ``ms`` | Millisecond | ± 2.9e9 years | [ 2.9e6 BC, 2.9e6 AD] |
# | ``us`` | Microsecond | ± 2.9e6 years | [290301 BC, 294241 AD] |
# | ``ns`` | Nanosecond | ± 292 years | [ 1678 AD, 2262 AD] |
# | ``ps`` | Picosecond | ± 106 days | [ 1969 AD, 1970 AD] |
# | ``fs`` | Femtosecond | ± 2.6 hours | [ 1969 AD, 1970 AD] |
# | ``as`` | Attosecond | ± 9.2 seconds | [ 1969 AD, 1970 AD] |
# + [markdown] colab_type="text" id="VMwy49qrVkTi"
# For the types of data we see in the real world, a useful default is ``datetime64[ns]``, as it can encode a useful range of modern dates with suitably fine precision.
#
# Finally, note that while the ``datetime64`` data type addresses some of the deficiencies of Python's built-in ``datetime`` type, it lacks many of the convenient methods and functions provided by ``datetime`` and ``dateutil``.
# More information can be found in the [NumPy datetime64 documentation](http://docs.scipy.org/doc/numpy/reference/arrays.datetime.html).
# -
# ### EXERCISE
#
# Parse the following dates as ``datetime64``, using the relative time unit indicated or the one that fits best:
# 1. "2020-09-15 00:00"
# 2. 12th October, 1492 (to the nanosecond)
# 3. January 20, 1999 at 15:24:10
# 4. March 7, 2077 01:01:01.00000001
# 5. "1512/02/01" at 23:00
# 6. "1512/02/01 23:30:10.00000034"
# 7. "1512/02/01 23:30:10.00000034" as seconds
# 8. "2021-05-22" as microseconds
#
#
# Did you notice anything odd? Do you understand why it happens?
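# A sketch of a few of these items; the ones that combine 15th/16th-century dates with nanosecond precision are deliberately left for you to try, since they fall outside the nanosecond range listed in the table above:

```python
import numpy as np

print(np.datetime64('2020-09-15 00:00'))     # minute-level precision inferred
print(np.datetime64('1999-01-20 15:24:10'))  # second-level precision inferred
print(np.datetime64('1512-02-01 23:00'))     # fine: minute precision spans millennia
print(np.datetime64('2021-05-22', 'us'))     # forced to microsecond precision

# The "odd" part: each extra level of detail narrows the representable span.
# Nanosecond precision only covers roughly 1678-2262 AD, so a date such as
# 12th October 1492 cannot be faithfully encoded at nanosecond resolution.
```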
# + [markdown] colab_type="text" id="kDYmV3uAVkTj"
# ### Dates and times in Pandas: best of both worlds
#
# Pandas builds upon all the tools just discussed to provide a ``Timestamp`` object, which combines the ease-of-use of ``datetime`` and ``dateutil`` with the efficient storage and vectorized interface of ``numpy.datetime64``.
#
# From a group of these ``Timestamp`` objects, Pandas can construct a ``DatetimeIndex`` that can be used to index data in a ``Series`` or ``DataFrame``; we will see many examples of this below.
#
# For example, we can use Pandas tools to repeat the demonstration from above.
# We can parse a flexibly formatted string date, and use format codes to output the day of the week:
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="-hpurSDqVkTj" jupyter={"outputs_hidden": false} outputId="20083292-1bbc-42fd-93f2-2c8b3fccd930"
import pandas as pd
date = pd.to_datetime("4th of July, 2015")
date
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="NIY1t9zAVkTm" jupyter={"outputs_hidden": false} outputId="dcb4f161-ac22-4ef0-d191-f6ee7c9b68af"
date.strftime('%A')
# -
# If we are working with a ``Timestamp`` rather than a ``DatetimeIndex`` (that is, with a single date rather than a set of dates), we can use relativedelta:
date - relativedelta(months=10)
# + [markdown] colab_type="text" id="Jx5GL7WIVkTo"
# With that, we could operate on sets of dates based on something similar to what we did with lists. However, there is a NumPy-style way to perform vectorized temporal operations, via the ``pd.to_timedelta`` function, as shown below:
# + colab={"base_uri": "https://localhost:8080/", "height": 86} colab_type="code" id="igNTs1nJVkTo" jupyter={"outputs_hidden": false} outputId="ffd4445e-853e-4318-aaf9-a90f252d7323"
date + pd.to_timedelta(np.arange(12), 'd')  # add 0 through 11 days
# -
# We can also use different frequencies, not just daily, as this table shows:
#
# | Code | Description |
# |--------|---------------------|
# | ``d`` | Calendar day |
# | ``w`` | Weekly |
# | ``h`` | Hours |
# | ``T`` | Minutes |
# | ``s`` | Seconds |
# | ``l``  | Milliseconds        |
# | ``u`` | Microseconds |
# | ``n`` | nanoseconds |
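# As a quick sketch of how these codes plug into ``pd.to_timedelta`` (here using the hour, minute, and second codes from the table):

```python
import numpy as np
import pandas as pd

date = pd.to_datetime("4th of July, 2015")

print(date + pd.to_timedelta(np.arange(3), 'h'))  # steps of one hour
print(date + pd.to_timedelta(np.arange(3), 'm'))  # steps of one minute
print(date + pd.to_timedelta(np.arange(3), 's'))  # steps of one second
```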
# ### EXERCISE
#
#
# 1.
# Using the table above, create a function that takes as parameters:
# - fecha: a string with the date format "YYYY-MM-DD" (year-month-day)
# - valor: an integer used in an operation on the received date
# - unidad: a string taking one of the following values: "microsegundos", "milisegundos", "segundos", "minutos", "horas", "días" or "semanas"
# - operación: a string that is "+" or "-"
#
# The function should take the date string and convert it to the ``Timestamp`` type, as we have seen with the Pandas function. Then, "valor" will be added to or subtracted from it with frequency "unidad", depending on the "operación" parameter. You will need to translate the "unidad" field into frequencies understood by the Pandas function.
#
# 2.
# Modify the function to accept a list in the "valor" parameter, and have it return the DatetimeIndex that results from applying the operation to that set of values. If you like, you can rename the parameter to "valores" instead of "valor".
#
# ### EXERCISE
#
# Perform the following operations and return the day of the week of the resulting date:
#
# 1. "2020-09-15" + 15 days
# 2. January 20, 1999 at 15:24:10 + 2 minutes
# 3. March 7, 2077 01:01:01.00000001 - 1 year
# 4. "1512/02/01" at 23:00 + 5 nanoseconds
# 5. "1984-10-01" - 370 weeks
#
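# A sketch of items 1, 2, 3, and 5 (item 4 is worth trying yourself: year 1512 falls outside the range of Pandas' nanosecond-based ``Timestamp``):

```python
import pandas as pd
from dateutil.relativedelta import relativedelta

print((pd.to_datetime("2020-09-15") + pd.to_timedelta(15, 'd')).strftime('%A'))
print((pd.to_datetime("1999-01-20 15:24:10") + pd.to_timedelta(2, 'm')).strftime('%A'))
print((pd.to_datetime("2077-03-07 01:01:01.00000001") - relativedelta(years=1)).strftime('%A'))
print((pd.to_datetime("1984-10-01") - pd.to_timedelta(370, 'w')).strftime('%A'))
```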
# + [markdown] colab_type="text" id="FddBsAG_VkTq"
# In the next section, we will look more closely at manipulating time series data with the tools provided by Pandas.
# + [markdown] colab_type="text" id="YCDiBYBDVkTq"
# ## Pandas time series: indexing by time
#
# Where the Pandas time series tools really become useful is when we begin to index data by timestamps.
# For example, we can construct a ``Series`` object that has time-indexed data as follows:
# + colab={"base_uri": "https://localhost:8080/", "height": 103} colab_type="code" id="194oLXuXVkTr" jupyter={"outputs_hidden": false} outputId="4fd82729-7df9-4e9c-d94a-ebef098816d8"
index = pd.DatetimeIndex(['2014-07-04', '2014-08-04',
'2015-07-04', '2015-08-04'])
data = pd.Series([0, 1, 2, 3], index=index)
data
# + [markdown] colab_type="text" id="d6eShUseVkTt"
# Now that we have this data in a ``Series``, we can make use of any of the ``Series`` indexing patterns we discussed in previous sections, passing values that can be coerced into dates:
# + colab={"base_uri": "https://localhost:8080/", "height": 87} colab_type="code" id="nIqvs9uiVkTt" jupyter={"outputs_hidden": false} outputId="dd023173-dd6c-4ad9-b83e-25e735d3572a"
data['2014-07-04':'2015-07-04']
# + [markdown] colab_type="text" id="LV5fPY2zVkTv"
# There are additional special date-only indexing operations, such as passing a year to obtain a slice of all the data from that year:
# + colab={"base_uri": "https://localhost:8080/", "height": 69} colab_type="code" id="gYRiNBFqVkTv" jupyter={"outputs_hidden": false} outputId="80d4b1bc-7033-4d30-869f-c5c19174ddb5"
data['2015']
# + [markdown] colab_type="text" id="Eu8YjqLqVkTx"
# Later, we will see additional examples of the convenience of dates-as-indices.
# But first, it is worth looking at the available time series data structures.
# + [markdown] colab_type="text" id="-eDiBHz7VkTy"
# ## Pandas time series data structures
#
# Next, we introduce the fundamental Pandas data structures for working with time series data:
#
# - For *timestamps*, Pandas provides the ``Timestamp`` type. As mentioned before, it is essentially a replacement for Python's native ``datetime``, but it is based on the more efficient ``numpy.datetime64`` data type. The associated index structure is ``DatetimeIndex``.
# - For *time periods*, Pandas provides the ``Period`` type. This encodes a fixed-frequency interval based on ``numpy.datetime64``. The associated index structure is ``PeriodIndex``.
# - For *time deltas* or *durations*, Pandas provides the ``Timedelta`` type. ``Timedelta`` is a more efficient replacement for Python's native ``datetime.timedelta`` type, and it is based on ``numpy.timedelta64``. The associated index structure is ``TimedeltaIndex``.
# + [markdown] colab_type="text" id="L21KeuAPVkTy"
# The most important of these date/time objects are ``Timestamp`` and ``DatetimeIndex``.
# While these objects can be invoked directly by instantiating their classes, it is more common to use the ``pd.to_datetime()`` function, which can parse a wide variety of formats and infer them automatically.
#
# Passing a single date to ``pd.to_datetime()`` yields a ``Timestamp``; passing a series of dates by default yields a ``DatetimeIndex``:
# + colab={"base_uri": "https://localhost:8080/", "height": 69} colab_type="code" id="MijsFC0PVkTy" jupyter={"outputs_hidden": false} outputId="85596cc1-6c92-4733-9428-3be2374fbaaa"
from datetime import datetime
dates = pd.to_datetime([datetime(2015, 7, 3), '4th of July, 2015',
'2015-Jul-6', '07-07-2015', '20150708'])
dates
# year month day
# + [markdown] colab_type="text" id="S0H_dfZOVkT0"
# Any ``DatetimeIndex`` can be converted to a ``PeriodIndex`` with the ``to_period()`` function, adding a frequency code. Here we will use ``'D'`` to indicate daily frequency:
# + colab={"base_uri": "https://localhost:8080/", "height": 69} colab_type="code" id="BW6ILvNwVkT0" jupyter={"outputs_hidden": false} outputId="650f7c79-95a1-485b-94ce-e77544dd7366"
dates.to_period('D')
# -
# Note that, if we have daily dates, using a frequency coarser than their granularity, such as monthly, yields the same period for repeated dates within the same month:
dates.to_period('M')
# + [markdown] colab_type="text" id="JqpM8bUiVkT2"
# A ``TimedeltaIndex`` appears, for example, when one date is subtracted from another:
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="cHGzM3l8VkT2" jupyter={"outputs_hidden": false} outputId="959c800e-c85c-4afd-b442-46211f89829e"
dates - dates[0]
# -
# ### EXERCISES
#
# Now that we have learned a bit more about all this, let's do some exercises that are actually useful:
# 1. Read the yelp! reviews dataset (../../data/yelp_academic_dataset_review.csv)
# 2. Keep only the "stars" and "date" columns
# 3. Convert the "date" column to date format
# 4. Keep 1 record per day, taking the mean of stars
# 5. What is the largest gap in days with no records?
# 6. Go back to the original df and create a DatetimeIndex from the 'date' column. (Using the ``Series`` directly may not be enough; the data may need to be in a certain variable type, look at the example)
# 7. From that DatetimeIndex variable, create a new PeriodIndex variable with monthly frequency, and assign it to a new column "date_M"
# 8. Group by this variable and get the maximum, minimum, and total of "stars" per month. Which month received the most reviews?
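# The yelp CSV may not be at hand, so here is a hedged sketch of steps 3-8 on a tiny synthetic frame with the same two columns (the dates and star values are made up):

```python
import pandas as pd

df = pd.DataFrame({
    'date': ['2012-01-03', '2012-01-03', '2012-01-10', '2012-02-01'],
    'stars': [5, 3, 4, 2],
})

df['date'] = pd.to_datetime(df['date'])     # step 3: convert to date format
daily = df.groupby('date')['stars'].mean()  # step 4: one record per day, mean stars
gap = daily.index.to_series().diff().max()  # step 5: largest gap with no records
print(gap)                                  # a Timedelta of 22 days here

idx = pd.DatetimeIndex(df['date'])          # step 6: DatetimeIndex from the column
df['date_M'] = idx.to_period('M')           # step 7: monthly PeriodIndex column
print(df.groupby('date_M')['stars'].agg(['max', 'min', 'count']))  # step 8
```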
# + [markdown] colab_type="text" id="mds1iIpnVkT4"
# ### Regular sequences: ``pd.date_range()``
#
# To create regular temporal sequences, there are more convenient methods, for which Pandas offers the following functions: ``pd.date_range()`` for timestamps, ``pd.period_range()`` for periods, and ``pd.timedelta_range()`` for time deltas.
#
# We have seen that Python's ``range()`` and NumPy's ``np.arange()`` generate a sequence from a startpoint, an endpoint, and an optional step size.
#
# Similarly, ``pd.date_range()`` accepts a start date, an end date, and an optional frequency code to create a regular sequence of dates.
# By default, the frequency is one day:
# + colab={"base_uri": "https://localhost:8080/", "height": 69} colab_type="code" id="CBBzyGBaVkT4" jupyter={"outputs_hidden": false} outputId="d6e0a13d-8c9a-4b54-af91-2745a6714b27"
pd.date_range('2015-07-03', '2015-07-10')
# + [markdown] colab_type="text" id="37-YtiJyVkT6"
# Alternativamente, el rango de fechas se puede especificar con un punto de inicio y un número de períodos, en lugar de un punto final:
# + colab={"base_uri": "https://localhost:8080/", "height": 69} colab_type="code" id="Nsr5QVUjVkT6" jupyter={"outputs_hidden": false} outputId="14f8906f-d654-4b2c-eb11-3c69247d511a"
pd.date_range('2015-07-03', periods=8) # 8 días
# + [markdown] colab_type="text" id="5G35rPaNVkT8"
# El espaciado se puede modificar alterando el argumento ``freq``, que por defecto es ``D`` (diario).
# Por ejemplo, si queremos crearnos un rango de marcas de tiempo por hora:
# + colab={"base_uri": "https://localhost:8080/", "height": 104} colab_type="code" id="p7S4ivGzVkT8" jupyter={"outputs_hidden": false} outputId="5fa94b74-b049-4ad4-8317-1c05d01634b9"
pd.date_range('2015-07-03', periods=8, freq='H') # 8 horas
# + [markdown] colab_type="text" id="G1ZoEA8PVkT9"
# Para crear secuencias regulares con valores ``Period`` o ``Timedelta``, podemos utilizar las funciones ``pd.period_range()`` y ``pd.timedelta_range()``, como se muestra en los siguientes ejemplos:
# + colab={"base_uri": "https://localhost:8080/", "height": 69} colab_type="code" id="XeJ3EdGjVkT-" jupyter={"outputs_hidden": false} outputId="91206377-a0fd-4227-99bd-889407460200"
# Variando periodos de manera mensual:
pd.period_range('2015-07', periods=8, freq='M') # 8 meses, de tipo periodo
# + [markdown] colab_type="text" id="snpbJGlbVkUA"
# Y una secuencia de duraciones que aumenta de hora en hora:
# + colab={"base_uri": "https://localhost:8080/", "height": 69} colab_type="code" id="61nFPzLAVkUA" jupyter={"outputs_hidden": false} outputId="06561a3c-9d8d-4803-fc3b-145e77df25b7"
# Variando timedeltas de manera horaria:
pd.timedelta_range(0, periods=10, freq='H')
# + [markdown] colab_type="text" id="BSbceS4DVkUF"
# Todos estos métodos requieren una comprensión de los códigos de frecuencia de Pandas, que resumiremos en la siguiente sección.
# + [markdown] colab_type="text" id="NrjUcHqJVkUF"
# ## Frecuencias y Offsets
#
# Los conceptos de frecuencia y offset (temporal) son básicos para entender estas herramientas de series de tiempo de Pandas que estamos viendo.
#
# Así como hemos visto los códigos ``D`` (día) y ``H`` (hora), podemos usar dichos códigos para especificar cualquier espaciado de frecuencia deseado.
# La siguiente tabla resume los principales códigos disponibles:
# + [markdown] colab_type="text" id="NCfAVihAVkUF"
# | Code | Description | Code | Description |
# |--------|---------------------|--------|----------------------|
# | ``D`` | Calendar day | ``B`` | Business day |
# | ``W`` | Weekly | | |
# | ``M`` | Month end | ``BM`` | Business month end |
# | ``Q`` | Quarter end | ``BQ`` | Business quarter end |
# | ``A`` | Year end | ``BA`` | Business year end |
# | ``H`` | Hours | ``BH`` | Business hours |
# | ``T`` | Minutes | | |
# | ``S`` | Seconds | | |
# | ``L``  | Milliseconds        |        |                      |
# | ``U``  | Microseconds        |        |                      |
# | ``N``  | Nanoseconds         |        |                      |
# + colab={"base_uri": "https://localhost:8080/", "height": 69} colab_type="code" id="61nFPzLAVkUA" jupyter={"outputs_hidden": false} outputId="06561a3c-9d8d-4803-fc3b-145e77df25b7"
# Variando periodos:
pd.period_range('2015-07-01', periods=8, freq='M')
# + colab={"base_uri": "https://localhost:8080/", "height": 69} colab_type="code" id="61nFPzLAVkUA" jupyter={"outputs_hidden": false} outputId="06561a3c-9d8d-4803-fc3b-145e77df25b7"
# Variando marcas de tiempo:
pd.date_range("2020-1", periods=250, freq='BQ')
# + [markdown] colab_type="text" id="W-Je7pQ4VkUG"
# Las frecuencias mensuales, trimestrales y anuales se marcan todas al final del período especificado.
# Si agregamos el sufijo ``S`` a cualquiera de estos códigos, se devuelve en su lugar la primera fecha de cada período:
# + [markdown] colab_type="text" id="nLs88cVkVkUG"
# | Code    | Description            | Code    | Description            |
# |---------|------------------------|---------|------------------------|
# | ``MS``  | Month start            | ``BMS`` | Business month start   |
# | ``QS``  | Quarter start          | ``BQS`` | Business quarter start |
# | ``AS``  | Year start             | ``BAS`` | Business year start    |
# -
# Sacando el primer día laborable del año a lo largo de los próximos 20 años:
pd.date_range("2020-1", periods=20, freq='BAS')
# + [markdown] colab_type="text" id="dJll0NcWVkUG"
# Además, podemos cambiar el mes utilizado para marcar cualquier código trimestral o anual agregando un código de mes de tres letras como sufijo:
#
# - ``Q-JAN``, ``BQ-FEB``, ``QS-MAR``, ``BQS-APR``, etc.
# - ``A-JAN``, ``BA-FEB``, ``AS-MAR``, ``BAS-APR``, etc.
#
# Del mismo modo, el día de la semana que se utiliza de referencia se puede modificar agregando un código de día de la semana de tres letras:
#
# - ``W-SUN``, ``W-MON``, ``W-TUE``, ``W-WED``, etc.
#
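# Un pequeño ejemplo de frecuencia semanal anclada: fechas semanales que caen siempre en miércoles (``W-WED``; 2015-07-01 fue miércoles):

```python
import pandas as pd

# Fechas semanales ancladas en miércoles
idx = pd.date_range('2015-07-01', periods=4, freq='W-WED')
print([d.strftime('%Y-%m-%d') for d in idx])
# ['2015-07-01', '2015-07-08', '2015-07-15', '2015-07-22']
```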
# Además de esto, los códigos se pueden combinar con números para especificar otras frecuencias.
# Por ejemplo, para una frecuencia de 2 horas 30 minutos, podemos combinar los códigos de hora (``H``) y minutos (``T``) de la siguiente manera:
# + colab={"base_uri": "https://localhost:8080/", "height": 69} colab_type="code" id="_R3q6aY5VkUH" jupyter={"outputs_hidden": false} outputId="a1ae90b7-cff5-4a15-a718-fab781030d27"
pd.timedelta_range(0, periods=9, freq="2H30T") # tengo 9 timedelta separados 2H30T
# + [markdown] colab_type="text" id="9YXmLnZsVkUI"
# Todos estos códigos hacen referencia a instancias específicas de offsets de series temporales de Pandas, que se pueden encontrar en el módulo ``pd.tseries.offsets``.
#
# Por ejemplo, podemos crear un offset de día laborable directamente de la siguiente manera:
# + colab={"base_uri": "https://localhost:8080/", "height": 69} colab_type="code" id="gS9-arY7VkUI" jupyter={"outputs_hidden": false} outputId="c336756b-2cac-48b6-bf13-e06487978856"
from pandas.tseries.offsets import BDay
pd.date_range('2015-07-01', periods=5, freq=BDay()) # me devuelve 5 business days seguidos
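# Estos offsets también se pueden sumar directamente a un ``Timestamp``; por ejemplo, dos días laborables después de un viernes caen en martes:

```python
import pandas as pd
from pandas.tseries.offsets import BDay

viernes = pd.Timestamp('2015-07-03')  # 2015-07-03 fue viernes
martes = viernes + 2 * BDay()         # salta el fin de semana
print(martes.strftime('%Y-%m-%d'))    # 2015-07-07
```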
# + [markdown] colab_type="text" id="blfIUfNXVkUK"
# Si quieres profundizar sobre las frecuencias y los offsets, puedes acceder a la documentación de Pandas en la sección [DateOffset objects](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects).
# + [markdown] colab_type="text" id="sT4Kc3SVVkUL"
# ## Remuestreo, desplazamiento y ventaneo
#
# La capacidad de utilizar fechas y horas como índices para organizar y acceder a los datos de forma intuitiva es una pieza importante de las herramientas de series temporales de Pandas.
# Los beneficios de los datos indexados en general (alineación automática durante las operaciones, slicing y acceso intuitivo a los datos, etc.) siguen siendo válidos, y Pandas proporciona además varias operaciones específicas de series temporales para trabajar con ellos.
#
#
# Echaremos un vistazo a algunos de ellos aquí, usando algunos datos de precios de acciones, por ejemplo.
# Debido a que Pandas se desarrolló principalmente en un contexto financiero, incluye algunas herramientas muy específicas para datos financieros.
# Por ejemplo, el paquete complementario ``pandas-datareader`` (instalable a través de ``conda install pandas-datareader``) sabe cómo importar datos financieros de varias fuentes disponibles, incluidas Yahoo Finance, Google Finance y otras.
# Aquí cargaremos el historial de precios de cierre de Google:
# +
import pandas as pd
goog = pd.read_csv('GOOG.csv')
goog.info()
goog['Date'] = pd.to_datetime(goog['Date'])
goog.set_index('Date', inplace=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 450} colab_type="code" id="ZxqQwkINMt-O" outputId="f35ea031-d1ff-48e7-e272-ee601dfb477b"
goog
# + [markdown] colab_type="text" id="cvADoADKVkUM"
# Para simplificar, usaremos solo el precio de cierre:
# + colab={"base_uri": "https://localhost:8080/", "height": 244} colab_type="code" id="e-gSWt0AVkUN" outputId="d64431dc-8fb8-4cd6-9ac5-1a9cc57f6a7f"
goog = goog['Close']
goog
# + [markdown] colab_type="text" id="20TK9yGYVkUO"
# Podemos visualizar esto usando el método ``plot()``, para lo que primero importamos Matplotlib y realizamos su configuración previa:
# + colab={"base_uri": "https://localhost:8080/", "height": 72} colab_type="code" id="ryhoy576VkUO" jupyter={"outputs_hidden": false} outputId="264cf5ed-fd44-435d-db47-012b0a00d61c"
# %matplotlib inline
import matplotlib.pyplot as plt
# + colab={"base_uri": "https://localhost:8080/", "height": 521} colab_type="code" id="9_zVdmXnVkUQ" jupyter={"outputs_hidden": false} outputId="f7a2ba36-2cb2-4e60-ac51-bbaa7ee28b71"
goog.plot(figsize=(12,9));
# + [markdown] colab_type="text" id="WiM5xAh-VkUR"
# ### Remuestreo y conversión de frecuencias
#
# Una necesidad común para las series de datos temporales es el remuestreo a una frecuencia mayor o menor.
# Esto se puede hacer usando el método ``resample()`` o el método ``asfreq()``, que es más simple.
# La principal diferencia entre los dos es que ``resample()`` es fundamentalmente una *agregación de datos*, mientras que ``asfreq()`` es una *selección de datos*.
#
# Esto lo veremos mejor con un ejemplo: echando un vistazo al precio de cierre de Google, será interesante comparar lo que devuelven ambos métodos al reducir la frecuencia de los datos (downsampling).
#
# En este caso, vamos a remuestrear los datos al final de cada año laboral (``BA``):
# + colab={"base_uri": "https://localhost:8080/", "height": 521} colab_type="code" id="mIHieOfYVkUR" jupyter={"outputs_hidden": false} outputId="4070b93e-6bdf-4757-b6a2-ab719ace2f72"
goog.plot(alpha=0.5, style='-',figsize=(12,9))
goog.resample('BA').mean().plot(style=':') # con una función de agregación, la media, BA es final de año
goog.asfreq('BA').plot(style='--'); # nos devolverá los valores de los puntos de fin de año (por lo que perderemos la info del resto de días)
plt.legend(['input', 'resample', 'asfreq'],
loc='upper left');
# + colab={"base_uri": "https://localhost:8080/", "height": 244} colab_type="code" id="QEF--yB0P0QX" outputId="6d295cd5-13d6-42fe-8888-32a6f4673c30"
goog.resample('BA').mean()
# -
goog.asfreq('BA')
# + [markdown] colab_type="text" id="eW4c049dVkUT"
# Fíjate que, en cada punto, ``resample`` devuelve la media del año completo, mientras que ``asfreq`` devuelve el valor al final del año (perdiendo info del resto de días del año).
# + [markdown] colab_type="text" id="BiJ4-uGeVkUU"
# Para remuestrear a una frecuencia mayor (upsampling), ``resample()`` y ``asfreq()`` son prácticamente equivalentes, aunque ``resample`` tiene muchas más opciones disponibles.
#
# En este caso, el valor predeterminado de ambos métodos es dejar vacíos los puntos nuevos, es decir, rellenarlos con valores NA.
# Al igual que el método ``fillna()``, ``asfreq()`` acepta un argumento ``method`` para especificar cómo se imputan los valores.
#
# Ahora, vamos a volver a muestrear los datos de los días hábiles originales con una frecuencia diaria (es decir, incluidos los fines de semana):
# + colab={"base_uri": "https://localhost:8080/", "height": 308} colab_type="code" id="ymRsHqZNVkUU" jupyter={"outputs_hidden": false} outputId="ea201e72-b759-4251-c4fa-3ca7a0156d51"
fig, ax = plt.subplots(2, sharex=True)
data = goog.iloc[:10] # 10 primeros elementos de la serie
data.asfreq('D').plot(ax=ax[0], marker='o') # primer eje, con escala diaria, no sale el fin de semana
data.asfreq('D', method='bfill').plot(ax=ax[1], style='-o')
data.asfreq('D', method='ffill').plot(ax=ax[1], style='--o')
ax[1].legend(["back-fill", "forward-fill"]);
# ffill: rellenamos hacia adelante los nulos (es decir, con el valor que había antes del nulo), backfill / bfill: rellenamos hacia atrás (valor siguiente)
# + [markdown] colab_type="text" id="GiOGAY5EVkUV"
# La gráfica de arriba representa los datos predeterminados: los días no hábiles se dejan como valores NA y no aparecen en el gráfico.
# La gráfica inferior muestra las diferencias entre las dos estrategias para rellenar los nulos: relleno hacia adelante (ffill) y relleno hacia atrás (bfill).
# -
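# Las dos estrategias de relleno se pueden comprobar también sobre una serie mínima (ejemplo ilustrativo con solo dos fechas, un viernes y el lunes siguiente):

```python
import pandas as pd

s = pd.Series([1.0, 4.0],
              index=pd.to_datetime(['2020-01-03', '2020-01-06']))
print(s.asfreq('D').tolist())                  # [1.0, nan, nan, 4.0]
print(s.asfreq('D', method='ffill').tolist())  # [1.0, 1.0, 1.0, 4.0]
print(s.asfreq('D', method='bfill').tolist())  # [1.0, 4.0, 4.0, 4.0]
```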
# ### EJERCICIOS
#
# ### 1.
# Lee el fichero "daily-minimum-temperatures.csv" y remuestréalo para quedarte con los máximos de temperaturas por mes. Almacena este nuevo DataFrame resultante en una variable llamada ``df_temp``.
#
# Asegúrate de tener el campo temporal como índice antes de hacer cosas.
# ### 2.
#
# Ahora, remuestrea el DataFrame de la variable ``df_temp`` de nuevo como si fuera diario y rellena los nulos con el método que creas más conveniente:
# + [markdown] colab_type="text" id="uNMcE1Y8VkUW"
# ### Desplazamientos de tiempo
#
# Otra operación específica bastante común con las series temporales es el desplazamiento de datos en el tiempo.
# Pandas tiene dos métodos estrechamente relacionados para ello: ``shift()`` y ``tshift()``.
#
# En resumen, la diferencia entre ellos es que ``shift()`` *cambia los datos*, mientras que ``tshift()`` *cambia el índice*.
# En ambos casos, el cambio se especifica en múltiplos de la frecuencia.
#
# A continuación, tenemos un ejemplo de ``shift()`` y ``tshift()`` con 900 días:
# + colab={"base_uri": "https://localhost:8080/", "height": 285} colab_type="code" id="FuMl5qVyVkUW" jupyter={"outputs_hidden": false} outputId="b9062a01-a41c-4539-cb8a-dc1f6a2edf94"
fig, ax = plt.subplots(3, sharey=True, figsize=(10, 10))
# apply a frequency to the data
goog = goog.asfreq('D', method='pad') # 'pad' equivale a ffill (relleno hacia adelante)
goog.plot(ax=ax[0])
goog.shift(900).plot(ax=ax[1])
goog.tshift(900).plot(ax=ax[2])
# legends and annotations
local_max = pd.to_datetime('2007-11-05')
offset = pd.Timedelta(900, 'D')
ax[0].legend(['input'], loc=2)
ax[0].get_xticklabels()[2].set(weight='heavy', color='red')
ax[0].axvline(local_max, alpha=0.3, color='red')
ax[1].legend(['shift(900)'], loc=2)
ax[1].get_xticklabels()[2].set(weight='heavy', color='red')
ax[1].axvline(local_max + offset, alpha=0.3, color='red')
ax[2].legend(['tshift(900)'], loc=2)
ax[2].get_xticklabels()[1].set(weight='heavy', color='red')
ax[2].axvline(local_max + offset, alpha=0.3, color='red');
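# Antes de interpretar las gráficas, la misma distinción sobre una serie mínima. Nota: en versiones recientes de pandas ``tshift()`` está obsoleto y su equivalente es ``shift(freq=...)``:

```python
import pandas as pd

s = pd.Series([10, 20, 30],
              index=pd.date_range('2020-01-01', periods=3, freq='D'))
print(s.shift(1).tolist())            # [nan, 10.0, 20.0] <- se mueven los datos
print(s.shift(1, freq='D').index[0])  # el índice empieza ahora en 2020-01-02
```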
# + [markdown] colab_type="text" id="pf9ERBqJVkUY"
# Como puedes observar, ``shift(900)`` mueve los datos 900 días, eliminando parte del final del gráfico (y dejando los valores NA en el otro extremo), mientras que ``tshift(900)`` desplaza los valores del índice en 900 días, es decir, desplazamos todo y no perdemos información.
#
# Un contexto común para este tipo de desplazamiento es calcular las diferencias a lo largo del tiempo. Por ejemplo, usamos valores desplazados para calcular el retorno de la inversión de un año para las acciones de Google a lo largo del curso del dataset:
# + colab={"base_uri": "https://localhost:8080/", "height": 285} colab_type="code" id="aDMWJq8dVkUZ" jupyter={"outputs_hidden": false} outputId="71fd0def-8b05-44cb-e5e0-a97bb6e07b8d"
ROI = 100 * (goog.tshift(-365) / goog - 1)
ROI.plot()
plt.ylabel('% Return on Investment');
# + [markdown] colab_type="text" id="i6_F4ApHVkUa"
# Esto nos permite ver la tendencia general de las acciones de Google: hasta ahora, los momentos más rentables para invertir en Google han sido (como era de esperar, en retrospectiva) poco después de su salida a bolsa y en medio de la recesión de 2009.
# + [markdown] colab_type="text" id="Zs71M2QSVkUb"
# ### Ventanas temporales
#
# Las estadísticas móviles (sobre ventanas deslizantes) son un tercer tipo de operación específica de series temporales implementada por Pandas.
# Se pueden calcular mediante el método ``rolling()`` de los objetos ``Series`` y ``DataFrame``, que devuelve una vista similar a la que vimos con la operación ``groupby``.
# Esta vista pone a disposición una serie de operaciones de agregación de forma predeterminada.
#
# Por ejemplo, podríamos calcular la media móvil de un año y la desviación estándar de los precios de las acciones de Google:
# + colab={"base_uri": "https://localhost:8080/", "height": 521} colab_type="code" id="p2FjadbzVkUb" jupyter={"outputs_hidden": false} outputId="310711e6-1ee7-4937-88ac-decb6f5c5bc5"
rolling = goog.rolling(365, center=False)
data = pd.DataFrame({'input': goog,
'one-year rolling_mean': rolling.mean(),
'one-year rolling_std': rolling.std()})
ax = data.plot(style=['-', '--', ':'], figsize=(12,9))
ax.lines[0].set_alpha(0.3)
# + colab={"base_uri": "https://localhost:8080/", "height": 521} colab_type="code" id="p2FjadbzVkUb" jupyter={"outputs_hidden": false} outputId="310711e6-1ee7-4937-88ac-decb6f5c5bc5"
# Si queremos pintarlo como la media y a los lados las desviaciones típicas, podríamos hacer algo así:
rolling = goog.rolling(365)
data = pd.DataFrame({'input': goog,
'one-year rolling_mean': rolling.mean(),
'one-year rolling_std_max': rolling.mean() + rolling.std(),
'one-year rolling_std_min': rolling.mean() - rolling.std()})
ax = data.plot(style=['-', '--', ':', ':'], figsize=(10,5))
ax.lines[0].set_alpha(0.3)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="m22eGpUc__Nx" outputId="5decb452-9fc5-4802-cc26-446ac833b05f"
rolling
# + [markdown] colab_type="text" id="TpCENs-AVkUe"
# Al igual que con los groupby, los métodos ``aggregate()`` y ``apply()`` se pueden utilizar para cálculos móviles personalizados.
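# Por ejemplo, un cálculo móvil personalizado con ``apply()``: el rango (máximo menos mínimo) en una ventana de 3 puntos, sobre una serie pequeña de ejemplo:

```python
import pandas as pd

s = pd.Series([1, 3, 2, 8, 5],
              index=pd.date_range('2020-01-01', periods=5, freq='D'))
# En cada ventana de 3 valores, calculamos max - min
rango = s.rolling(3).apply(lambda v: v.max() - v.min())
print(rango.tolist())  # [nan, nan, 2.0, 6.0, 6.0]
```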
# + [markdown] colab_type="text" id="lKv3wONgVkUe"
# ## Más información
#
# En este notebook hemos visto algunas de las funcionalidades básicas de las series temporales, pero puedes investigar mucho más en la sección [Time Series / Date functionality](http://pandas.pydata.org/pandas-docs/stable/timeseries.html) de la documentación online de Pandas.
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This notebook reproduces the experiment reported in Fig. 4 (top right): model recovery error plotted against time in the robust linear regression setting, where the clean data points carry additive Gaussian noise.
# n - number of data points
# d - dimensionality
# alpha - fraction of corrupted points (n_corr = alpha * n)
# mu, sigma - parameters for any gaussian being used, feel free to change
import numpy as np
import matplotlib.pyplot as plt
from irls_lib import *
import pickle
import statsmodels.api as sm
import time
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
# +
start_time=time.time()
store_result=False
cross_validation=False
n = 10000
d = 100
alpha = 0.2
n_corr = int(alpha*n)
Idx= np.random.permutation(n)
corrIdx= Idx[0:n_corr]
cleanIdx=Idx[n_corr:n]
mu = 0
sigma = 1
X = np.random.normal(mu, sigma, (n, d))
w_star= np.random.normal(0,1, (d, 1))
w_star = w_star / np.linalg.norm(w_star)
w_adv= np.random.normal(0,1, (d, 1))
y=np.zeros(shape=(n,1))
y[cleanIdx] = np.dot(X[cleanIdx,:], w_star)
y[corrIdx] = np.dot(X[corrIdx,:], w_adv)
noise_sigma=0.1
y=y+np.random.normal(0,noise_sigma,(n,1))
# +
#------------APIS-------------#
if cross_validation:
alpha_range = np.linspace( 0.01, 0.2, 20 )
param_grid = dict( alpha = alpha_range )
cv = ShuffleSplit( n_splits = 5, test_size = 0.3, random_state = 42 )
grid = GridSearchCV( APIS( w_init = w_adv, w_star = w_star ), param_grid=param_grid, cv = cv, refit = False )
grid.fit( X, y )
best = grid.best_params_
print("The best parameters for APIS are %s with a score of %0.2f" % (grid.best_params_, grid.best_score_))
apis = APIS(alpha= best["alpha"], w_init = w_adv, w_star = w_star )
else:
apis =APIS(alpha=alpha, w_init = w_adv, w_star = w_star )
apis.fit( X, y )
l2_altproj = apis.l2
clock_altproj = apis.clock
#------------STIR-------------#
if cross_validation:
eta_range = np.linspace( 1.01, 3.01, 21 )
alpha_range = np.linspace( alpha, alpha, 1 )
# STIR does not itself use alpha as a hyperparameter in the algorithm
# but does need it to perform cross-validation since the validation sets
# are also corrupted. To avoid an unfair comparison, we offer STIR a
# handicap by giving it the true value of alpha
param_grid = dict( eta = eta_range, alpha = alpha_range )
cv = ShuffleSplit( n_splits = 5, test_size = 0.3, random_state = 42 )
grid = GridSearchCV( STIR( w_init = w_adv, w_star = w_star ), param_grid=param_grid, cv = cv, refit = False )
grid.fit( X, y )
best = grid.best_params_
print("The best parameters for STIR are %s with a score of %0.2f" % (grid.best_params_, grid.best_score_))
stir = STIR( eta = best["eta"], alpha = best["alpha"], M_init = np.power(10, 1), w_init = w_adv, w_star = w_star )
else:
stir = STIR( eta = 2, alpha = alpha, M_init = np.power(10, 1), w_init = w_adv, w_star = w_star )
stir.fit( X, y )
l2_stir = stir.l2
clock_stir = stir.clock
#------------TORRENT----------#
if cross_validation:
alpha_range = np.linspace( 0.05, 0.2, 20 )
param_grid = dict( alpha = alpha_range )
cv = ShuffleSplit( n_splits = 5, test_size = 0.3, random_state = 42 )
grid = GridSearchCV( TORRENT( w_init = w_adv, w_star = w_star ), param_grid=param_grid, cv = cv, refit = False )
grid.fit( X, y )
best = grid.best_params_
print("The best parameters for TORRENT are %s with a score of %0.2f" % (grid.best_params_, grid.best_score_))
torrent = TORRENT( alpha = best["alpha"], w_init = w_adv, w_star = w_star )
else:
torrent = TORRENT( alpha = alpha, w_init = w_adv, w_star = w_star )
torrent.fit( X, y )
l2_torrent = torrent.l2
clock_torrent = torrent.clock
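For readers without access to `irls_lib`, the hard-thresholding idea behind TORRENT can be sketched in plain numpy — an illustrative toy loop, not the library's implementation: alternately fit least squares, then refit on the points with the smallest residuals.

```python
import numpy as np

def hard_threshold_regression(X, y, alpha, iters=20):
    """Toy TORRENT-style loop: keep the (1 - alpha) * n smallest-residual
    points each round and refit least squares on them."""
    n = len(y)
    keep = n - int(alpha * n)
    w = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        residuals = np.abs(y - X @ w).ravel()
        active = np.argsort(residuals)[:keep]  # presumed-clean points
        w = np.linalg.lstsq(X[active], y[active], rcond=None)[0]
    return w

# tiny noiseless sanity check: 200 clean points + 40 grossly corrupted ones
rng = np.random.default_rng(0)
X = rng.normal(size=(240, 5))
w_star = rng.normal(size=(5, 1))
y = X @ w_star
y[:40] = 10 + rng.normal(size=(40, 1))  # corruptions far from the model
w_hat = hard_threshold_regression(X, y, alpha=0.25)  # overestimate of 40/240
print(np.linalg.norm(w_hat - w_star))
```

With noiseless clean points, once the active set contains only clean points, the least-squares refit recovers `w_star` exactly.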
# +
n = 10000
d = 100
alpha = 0.2
noise_sigma=0.1
file_name='n='+str(n)+' d='+str(d)+' alpha='+str(alpha)+' noise_sigma='+str(noise_sigma)
import matplotlib.pyplot as plt
plt.rcParams['pdf.fonttype'] = 42
plt.rcParams['ps.fonttype'] = 42
plt.figure()
plt.xlabel('Time in sec')
plt.ylabel('$||w-w^*||_2$')
plt.plot(clock_stir, l2_stir, label = 'STIR', ls='--',color='red',linewidth=4)
plt.plot(clock_torrent, l2_torrent, label = 'TORRENT',color='green',linewidth=4)
plt.plot(clock_altproj, l2_altproj, label = 'APIS', color='blue',linewidth=4)
plt.legend(loc='upper right',prop = {'size': 10}, framealpha=0.3)
plt.grid()
plt.title('n='+str(n)+', d='+str(d)+', k/n='+str(alpha)+r'$, \sigma_{noise}=$'+str(noise_sigma))
plt.xscale('log')
plt.yscale('log')
# -
print(f"Elapsed time: {time.time()-start_time:.2f} sec" )
# +
"""
18. How to convert the first character of each element in a series to uppercase?
"""
"""
Difficulty Level: L2
"""
"""
Change the first character of each word to upper case in each word of ser.
"""
"""
ser = pd.Series(['how', 'to', 'kick', 'ass?'])
"""
# Input
ser = pd.Series(['how', 'to', 'kick', 'ass?'])
# Solution 1
ser.map(lambda x: x.title())
# Solution 2
ser.map(lambda x: x[0].upper() + x[1:])
# Solution 3
pd.Series([i.title() for i in ser])
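One subtle difference between the solutions: `str.title()` also lowercases the rest of each word, while the slicing approach leaves it untouched:

```python
import pandas as pd

ser = pd.Series(['hOW'])
print(ser.map(lambda x: x.title())[0])             # How
print(ser.map(lambda x: x[0].upper() + x[1:])[0])  # HOW
```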
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Q#
# language: qsharp
# name: iqsharp
# ---
# # Deutsch-Jozsa algorithm
#
# The **Deutsch–Jozsa algorithm** quantum kata is a series of exercises designed
# to get you familiar with programming in Q#.
#
# It covers the following topics:
# * writing oracles (quantum operations which implement certain classical functions),
# * Bernstein-Vazirani algorithm for recovering the parameters of a scalar product function,
# * Deutsch-Jozsa algorithm for recognizing a function as constant or balanced, and
# * writing tests in Q#.
#
#
# Each task is wrapped in one operation preceded by the description of the task.
# Your goal is to fill in the blanks (marked with `// ...` comments)
# with some Q# code that solves the task. To verify your answer, run the cell with Ctrl/⌘+Enter.
# To begin, first prepare this notebook for execution (if you skip the first step, you'll get "Syntax does not match any known patterns" error when you try to execute Q# code in the next cells; if you skip the second step, you'll get "Invalid kata name" error):
%package Microsoft.Quantum.Katas::0.7.1905.3109
# > The package versions in the output of the cell above should always match. If you are running the Notebooks locally and the versions do not match, please install the IQ# version that matches the version of the `Microsoft.Quantum.Katas` package.
# > <details>
# > <summary><u>How to install the right IQ# version</u></summary>
# > For example, if the version of `Microsoft.Quantum.Katas` package above is 0.1.2.3, the installation steps are as follows:
# >
# > 1. Stop the kernel.
# > 2. Uninstall the existing version of IQ#:
# > dotnet tool uninstall microsoft.quantum.iqsharp -g
# > 3. Install the matching version:
# > dotnet tool install microsoft.quantum.iqsharp -g --version 0.1.2.3
# > 4. Reinstall the kernel:
# > dotnet iqsharp install
# > 5. Restart the Notebook.
# > </details>
#
%workspace reload
# ## Part I. Oracles
#
# In this section you will implement oracles defined by classical functions using the following rules:
# - a function $f\left(x_0, ..., x_{N-1}\right)$ with N bits of input $x = \left(x_0, ..., x_{N-1}\right)$ and 1 bit of output $y$
# defines an oracle which acts on N input qubits and 1 output qubit.
# - the oracle effect on qubits in computational basis states is defined as follows:
# $|x\rangle |y\rangle \to |x\rangle |y \oplus f(x)\rangle$ ($\oplus$ is addition modulo 2).
# - the oracle effect on qubits in superposition is defined following the linearity of quantum operations.
# - the oracle must act properly on qubits in all possible input states.
#
# You can read more about quantum oracles in [Q# documentation](https://docs.microsoft.com/quantum/concepts/oracles).
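# As a classical sketch (plain Python, not Q#) of the rule above on basis states: the map $(x, y) \to (x, y \oplus f(x))$ is its own inverse, which is one way to see that such an oracle is a valid reversible (unitary) operation. The function `f` below is an arbitrary illustrative choice, not one of the kata's tasks.

```python
# Classical action of the oracle on basis states: (x, y) -> (x, y XOR f(x))
def oracle(f, x, y):
    return x, y ^ f(x)

f = lambda x: x % 2  # arbitrary example function on 2-bit inputs
for x in range(4):
    for y in (0, 1):
        # applying the oracle twice returns the original basis state
        assert oracle(f, *oracle(f, x, y)) == (x, y)
print("oracle is self-inverse on basis states")
```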
# ### Task 1.1. $f(x) = 0$
#
# **Inputs:**
# 1. N qubits in an arbitrary state $|x\rangle$ (input register)
# 2. a qubit in an arbitrary state $|y\rangle$ (output qubit)
#
#
# **Goal:** transform state $|x, y\rangle$ into state $|x, y \oplus f(x)\rangle$ ($\oplus$ is addition modulo 2).
# +
%kata T11_Oracle_Zero_Test
operation Oracle_Zero (x : Qubit[], y : Qubit) : Unit {
// Since f(x) = 0 for all values of x, |y ⊕ f(x)⟩ = |y⟩.
// This means that the operation doesn't need to do any transformation to the inputs.
// Run the cell (using Ctrl/⌘ + Enter) to see that the test passes.
}
# -
# ### Task 1.2. $f(x) = 1$
#
# **Inputs:**
# 1. N qubits in an arbitrary state $|x\rangle$ (input register)
# 2. a qubit in an arbitrary state $|y\rangle$ (output qubit)
#
#
# **Goal:** transform state $|x, y\rangle$ into state $|x, y \oplus f(x)\rangle$ ($\oplus$ is addition modulo 2).
#
# <br/>
# <details>
# <summary>Need a hint? Click here</summary>
# Since $f(x) = 1$ for all values of x, $|y \oplus f(x)\rangle = |y \oplus 1\rangle = |\text{NOT } y\rangle$.
# This means that the operation needs to flip qubit y (i.e. transform $|0\rangle$ to $|1\rangle$ and vice versa).
# </details>
# +
%kata T12_Oracle_One_Test
operation Oracle_One (x : Qubit[], y : Qubit) : Unit {
// ...
}
# -
# ### Task 1.3. $f(x) = x_k$ (the value of k-th qubit)
#
# **Inputs:**
# 1. N qubits in an arbitrary state $|x\rangle$ (input register)
# 2. a qubit in an arbitrary state $|y\rangle$ (output qubit)
# 3. 0-based index of the qubit from input register ($0 \le k < N$)
#
# **Goal:** transform state $|x, y\rangle$ into state $|x, y \oplus x_k\rangle$ ($\oplus$ is addition modulo 2).
# +
%kata T13_Oracle_Kth_Qubit_Test
open Microsoft.Quantum.Diagnostics;
operation Oracle_Kth_Qubit (x : Qubit[], y : Qubit, k : Int) : Unit {
// The following line enforces the constraints on the value of k that you are given.
// You don't need to modify it. Feel free to remove it, this won't cause your code to fail.
EqualityFactB(0 <= k and k < Length(x), true, "k should be between 0 and N-1, inclusive");
// ...
}
# -
# ### Task 1.4. f(x) = 1 if x has an odd number of 1s, and 0 otherwise
#
# **Inputs:**
# 1. N qubits in an arbitrary state $|x\rangle$ (input register)
# 2. a qubit in an arbitrary state $|y\rangle$ (output qubit)
#
#
# **Goal:** transform state $|x, y\rangle$ into state $|x, y \oplus f(x)\rangle$ ($\oplus$ is addition modulo 2).
#
# <br/>
# <details>
# <summary>Need a hint? Click here</summary>
# $f(x)$ can be represented as $x_0 \oplus x_1 \oplus ... \oplus x_{N-1}$.
# </details>
# +
%kata T14_Oracle_OddNumberOfOnes_Test
operation Oracle_OddNumberOfOnes (x : Qubit[], y : Qubit) : Unit {
// ...
}
# -
# ### Task 1.5. $f(x) = \bigoplus\limits_{i=0}^{N-1} r_i x_i$ for a given bit vector r (scalar product function)
#
# **Inputs:**
# 1. N qubits in an arbitrary state $|x\rangle$ (input register)
# 2. a qubit in an arbitrary state $|y\rangle$ (output qubit)
# 3. a bit vector of length N represented as an `Int[]`.
# You are guaranteed that the qubit array and the bit vector have the same length.
#
# **Goal:** transform state $|x, y\rangle$ into state $|x, y \oplus f(x)\rangle$ ($\oplus$ is addition modulo 2).
# +
%kata T15_Oracle_ProductFunction_Test
open Microsoft.Quantum.Diagnostics;
operation Oracle_ProductFunction (x : Qubit[], y : Qubit, r : Int[]) : Unit {
// The following line enforces the constraint on the input arrays.
// You don't need to modify it. Feel free to remove it, this won't cause your code to fail.
EqualityFactI(Length(x), Length(r), "Arrays should have the same length");
// ...
}
# -
# ### Task 1.6. $f(x) = \bigoplus\limits_{i=0}^{N-1} \left(r_i x_i + (1 - r_i) (1 - x_i) \right)$ for a given bit vector r (scalar product function)
#
# **Inputs:**
# 1. N qubits in an arbitrary state $|x\rangle$ (input register)
# 2. a qubit in an arbitrary state $|y\rangle$ (output qubit)
# 3. a bit vector of length N represented as an `Int[]`.
# You are guaranteed that the qubit array and the bit vector have the same length.
#
# **Goal:** transform state $|x, y\rangle$ into state $|x, y \oplus f(x)\rangle$ ($\oplus$ is addition modulo 2).
#
# <br/>
# <details>
# <summary>Need a hint? Click here</summary>
# Since each addition is done modulo 2, you can evaluate the effect of each term independently.
# </details>
# +
%kata T16_Oracle_ProductWithNegationFunction_Test
open Microsoft.Quantum.Diagnostics;
operation Oracle_ProductWithNegationFunction (x : Qubit[], y : Qubit, r : Int[]) : Unit {
// The following line enforces the constraint on the input arrays.
// You don't need to modify it. Feel free to remove it, this won't cause your code to fail.
EqualityFactI(Length(x), Length(r), "Arrays should have the same length");
// ...
}
# -
# ### Task 1.7. $f(x) = \bigoplus\limits_{i=0}^{N-1} x_i + $ (1 if prefix of x is equal to the given bit vector, and 0 otherwise) modulo 2
#
# **Inputs:**
# 1. N qubits in an arbitrary state $|x\rangle$ (input register)
# 2. a qubit in an arbitrary state $|y\rangle$ (output qubit)
# 3. a bit vector of length $K$ represented as an `Int[]` ($1 \le K \le N$).
#
# **Goal:** transform state $|x, y\rangle$ into state $|x, y \oplus f(x)\rangle$ ($\oplus$ is addition modulo 2).
#
# > A prefix of length K of a state $|x\rangle = |x_0, ..., x_{N-1}\rangle$ is the state of its first K qubits $|x_0, ..., x_{K-1}\rangle$. For example, a prefix of length 2 of the state $|0110\rangle$ is $|01\rangle$.
#
# <br/>
# <details>
# <summary>Need a hint? Click here</summary>
# The first term is the same as in task 1.4. To implement the second term, you can use the `Controlled` functor, which allows you to perform multicontrolled gates (gates with multiple control qubits).
# </details>
# +
%kata T17_Oracle_HammingWithPrefix_Test
open Microsoft.Quantum.Diagnostics;
operation Oracle_HammingWithPrefix (x : Qubit[], y : Qubit, prefix : Int[]) : Unit {
// The following line enforces the constraint on the input arrays.
// You don't need to modify it. Feel free to remove it, this won't cause your code to fail.
let K = Length(prefix);
EqualityFactB(1 <= K and K <= Length(x), true, "K should be between 1 and N, inclusive");
// ...
}
# -
# ### Task 1.8. f(x) = 1 if x has two or three bits (out of three) set to 1, and 0 otherwise (majority function)
#
# **Inputs:**
# 1. 3 qubits in an arbitrary state $|x\rangle$ (input register)
# 2. a qubit in an arbitrary state $|y\rangle$ (output qubit)
#
#
# **Goal:** transform state $|x, y\rangle$ into state $|x, y \oplus f(x)\rangle$ ($\oplus$ is addition modulo 2).
#
# <br/>
# <details>
# <summary>Need a hint? Click here</summary>
# Represent f(x) in terms of AND and $\oplus$ operations.
# </details>
# +
%kata T18_Oracle_MajorityFunction_Test
open Microsoft.Quantum.Diagnostics;
operation Oracle_MajorityFunction (x : Qubit[], y : Qubit) : Unit {
    // The following line enforces the constraint on the input array.
    // You don't need to modify it. Feel free to remove it, this won't cause your code to fail.
    EqualityFactI(3, Length(x), "x should have exactly 3 qubits");

    // ...
}
# -
# ## Part II. Deutsch-Jozsa Algorithm
#
# In this section you will implement the Deutsch-Jozsa algorithm and run it on the oracles you've defined in part I to observe the results.
#
# This algorithm solves the following problem. You are given a quantum oracle which implements a classical function $f(x): \{0, 1\}^N \to \{0, 1\}$. You are guaranteed that the function $f$ is either constant (has the same value for all inputs) or balanced (has value 0 for half of the inputs and 1 for the other half of the inputs). The goal of the algorithm is to figure out whether the function is constant or balanced in just one oracle call.
#
# * You can read more about the Deutsch-Jozsa algorithm in [Wikipedia](https://en.wikipedia.org/wiki/Deutsch%E2%80%93Jozsa_algorithm).
# * [Lecture 5: A simple searching algorithm; the Deutsch-Jozsa algorithm](https://cs.uwaterloo.ca/~watrous/CPSC519/LectureNotes/05.pdf).
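# Classically, this problem is expensive: a deterministic algorithm must evaluate $f$ on up to $2^{N-1}+1$ inputs before it can decide. The following plain-Python sketch (hypothetical helper names, not part of the kata) illustrates that bound:

```python
from itertools import product

def is_constant_classical(f, n):
    # Worst case: after 2^(n-1) + 1 evaluations that all agree,
    # f cannot be balanced, so it must be constant.
    seen = set()
    for calls, x in enumerate(product([0, 1], repeat=n), start=1):
        seen.add(f(x))
        if len(seen) > 1:
            return False  # two different values observed -> balanced
        if calls == 2 ** (n - 1) + 1:
            return True   # more than half of the inputs agree -> constant
    return True

print(is_constant_classical(lambda x: x[0], 3))  # False (balanced)
print(is_constant_classical(lambda x: 0, 3))     # True  (constant)
```

# The Deutsch-Jozsa algorithm reaches the same verdict with a single oracle call.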
# ### Task 2.1. Deutsch-Jozsa Algorithm
#
# **Inputs:**
# 1. the number of qubits $N$ in the input register for the function f
# 2. a quantum operation which implements the oracle $|x, y\rangle \to |x, y \oplus f(x)\rangle$, where x is an $N$-qubit input register, y is a 1-qubit answer register, and f is a Boolean function
#
#
# **Output:** `true` if the function f is constant, or `false` if the function f is balanced.
# +
%kata T31_DJ_Algorithm_Test
operation DJ_Algorithm (N : Int, oracle : ((Qubit[], Qubit) => Unit)) : Bool {
    // Create a boolean variable for storing the return value.
    // You'll need to update it later, so it has to be declared as mutable.
    // ...

    // Allocate an array of N qubits for the input register x and one qubit for the answer register y.
    using ((x, y) = (Qubit[N], Qubit())) {
        // Newly allocated qubits start in the |0⟩ state.
        // The first step is to prepare the qubits in the required state before calling the oracle.
        // Each qubit of the input register has to be in the |+⟩ state.
        // ...

        // The answer register has to be in the |-⟩ state.
        // ...

        // Apply the oracle to the input register and the answer register.
        // ...

        // Apply a Hadamard gate to each qubit of the input register again.
        // ...

        // Measure each qubit of the input register in the computational basis using the M operation.
        // If any of the measurement results is One, the function implemented by the oracle is balanced.
        // ...

        // Before releasing the qubits make sure they are all in the |0⟩ state.
        // ...
    }

    // Return the answer.
    // ...
}
# -
# ### Task 2.2. Running Deutsch-Jozsa Algorithm
#
# **Goal**: Use your implementation of Deutsch-Jozsa algorithm from task 2.1 to test each of the oracles you've implemented in part I for being constant or balanced.
#
# > This is an open-ended task, and is not covered by a unit test. To run the code, execute the cell with the definition of the `Run_DeutschJozsa_Algorithm` operation first; if it compiled successfully without any errors, you can run the operation by executing the next cell (`%simulate Run_DeutschJozsa_Algorithm`).
#
# > Note that this task relies on your implementations of the previous tasks. If you are getting the "No variable with that name exists." error, you might have to execute previous code cells before retrying this task.
# +
open Microsoft.Quantum.Diagnostics;
operation Run_DeutschJozsa_Algorithm () : String {
    // You can use the EqualityFactB function to check that the return value of the DJ_Algorithm operation matches the expected value.
    EqualityFactB(DJ_Algorithm(4, Oracle_Zero), true, "f(x) = 0 not identified as constant");

    // Run the algorithm for the rest of the oracles
    // ...

    // If all tests pass, report success!
    return "Success!";
}
# -
%simulate Run_DeutschJozsa_Algorithm
# ## Part III. Bernstein–Vazirani Algorithm
#
# In this section you will implement the Bernstein-Vazirani algorithm and run it on the oracles you've defined in part I to observe the results.
#
# This algorithm solves the following problem. You are given a quantum oracle which implements a classical function $f(x): \{0, 1\}^N \to \{0, 1\}$. You are guaranteed that the function $f$ can be represented as a scalar product, i.e., there exists a bit vector $r = (r_0, ..., r_{N-1})$ such that $f(x) = \bigoplus \limits_{i=0}^{N-1} x_i r_i$. The goal of the algorithm is to reconstruct the bit vector $r$ in just one oracle call.
#
# You can read more about the Bernstein-Vazirani algorithm in ["Quantum Algorithm Implementations for Beginners"](https://arxiv.org/abs/1804.03719), section III.
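# For intuition, here is a classical counterpart (plain Python, hypothetical names, not part of the kata): $r$ can be recovered with $N$ queries, one per unit vector $e_i$, since $f(e_i) = r_i$ - the quantum algorithm needs only one query.

```python
def scalar_product_oracle(r):
    # Classical stand-in for the quantum oracle: f(x) = sum_i x_i * r_i mod 2
    return lambda x: sum(xi * ri for xi, ri in zip(x, r)) % 2

def reconstruct_classical(f, n):
    # Query f on each unit vector e_i; the answer is exactly r_i.
    return [f(tuple(1 if j == i else 0 for j in range(n))) for i in range(n)]

print(reconstruct_classical(scalar_product_oracle([1, 0, 1]), 3))  # [1, 0, 1]
```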
# ### Task 3.1. Bernstein-Vazirani Algorithm
#
# **Inputs:**
# 1. the number of qubits $N$ in the input register for the function f
# 2. a quantum operation which implements the oracle $|x, y\rangle \to |x, y \oplus f(x)\rangle$, where x is an $N$-qubit input register, y is a 1-qubit answer register, and f is a Boolean function
#
#
# **Output:** The bit vector $r$ reconstructed from the oracle.
# +
%kata T22_BV_Algorithm_Test
operation BV_Algorithm (N : Int, oracle : ((Qubit[], Qubit) => Unit)) : Int[] {
    // The algorithm is very similar to the Deutsch-Jozsa algorithm; try to implement it without hints.
    // ...
}
# -
# ### Task 3.2. Running Bernstein-Vazirani Algorithm
#
# **Goal**: Use your implementation of Bernstein-Vazirani algorithm from task 3.1 to reconstruct the hidden vector $r$ for the oracles you've implemented in part I.
#
# > This is an open-ended task, and is not covered by a unit test. To run the code, execute the cell with the definition of the `Run_BernsteinVazirani_Algorithm` operation first; if it compiled successfully without any errors, you can run the operation by executing the next cell (`%simulate Run_BernsteinVazirani_Algorithm`).
#
# > Note that this task relies on your implementations of the previous tasks. If you are getting the "No variable with that name exists." error, you might have to execute previous code cells before retrying this task.
#
# <details>
# <summary>Need a hint? Click here</summary>
# Not all oracles from part I can be represented as scalar product functions. The most generic oracle you can use in this task is Oracle_ProductFunction from task 1.5; Oracle_Zero, Oracle_Kth_Qubit and Oracle_OddNumberOfOnes are special cases of this oracle.
# </details>
# +
// Start by implementing a function AllEqualityFactI
// to check the results of applying the algorithm to each oracle in a uniform manner.
function AllEqualityFactI(actual : Int[], expected : Int[]) : Bool {
    // Check that array lengths are equal
    // ...

    // Check that the corresponding elements of the arrays are equal
    // ...

    fail "AllEqualityFactI is not implemented";
}

operation Run_BernsteinVazirani_Algorithm () : String {
    // Now use AllEqualityFactI to verify the results of the algorithm
    if (not AllEqualityFactI(BV_Algorithm(3, Oracle_Zero), [0, 0, 0])) {
        return "Incorrect result for f(x) = 0";
    }

    // Run the algorithm on the rest of the oracles
    // ...

    // If all tests pass, report success!
    return "Success!";
}
# -
%simulate Run_BernsteinVazirani_Algorithm
# ## Part IV. Come up with your own algorithm!
#
# In this section you will come up with your own algorithm to solve a problem similar to the one described in part III.
#
# The problem is formulated as follows. You are given a quantum oracle which implements a classical function $f(x): \{0, 1\}^N \to \{0, 1\}$. You are guaranteed that there exists a bit vector $r = (r_0, ..., r_{N-1})$ such that the function $f$ can be represented as follows: $f(x) = \bigoplus \limits_{i=0}^{N-1} \left( x_i r_i + (1 - x_i)(1 - r_i) \right)$. You have to reconstruct the bit vector $r$ in just one oracle call.
#
# > Note that you have implemented the oracle for this function in task 1.6.
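# Expanding each term modulo 2, $x_i r_i + (1 - x_i)(1 - r_i) \equiv 1 + x_i + r_i \pmod 2$, so $f(x) = \big(N + \sum_i x_i + \sum_i r_i\big) \bmod 2$: the function depends on $r$ only through its parity. A classical sketch (plain Python, hypothetical names) confirms that any two vectors of equal parity generate the same oracle:

```python
from itertools import product

def f(x, r):
    # f(x) = XOR over i of (x_i r_i + (1 - x_i)(1 - r_i))
    return sum(xi * ri + (1 - xi) * (1 - ri) for xi, ri in zip(x, r)) % 2

def same_oracle(r1, r2, n):
    # Two bit vectors generate the same oracle iff f agrees on all 2^n inputs.
    return all(f(x, r1) == f(x, r2) for x in product([0, 1], repeat=n))

print(same_oracle([1, 1, 0], [0, 1, 1], 3))  # True: both have even parity
print(same_oracle([1, 1, 0], [1, 1, 1], 3))  # False: parities differ
```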
# ### Task 4. Noname Algorithm
#
# **Inputs:**
# 1. the number of qubits $N$ in the input register for the function f
# 2. a quantum operation which implements the oracle $|x, y\rangle \to |x, y \oplus f(x)\rangle$, where x is an $N$-qubit input register, y is a 1-qubit answer register, and f is a Boolean function
#
# **Output:** Any bit vector $r$ that would generate the same oracle as the one you are given.
#
# <br/>
# <details>
# <summary>Need a hint? Click here</summary>
# For each oracle there are multiple bit vectors that generate it; it is sufficient to find any one of them.
# </details>
# +
%kata T41_Noname_Algorithm_Test
operation Noname_Algorithm (N : Int, oracle : ((Qubit[], Qubit) => Unit)) : Int[] {
    // The algorithm is very similar to the Bernstein-Vazirani algorithm; try to implement it without hints.
    // ...
}
| DeutschJozsaAlgorithm/DeutschJozsaAlgorithm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # The change-making problem and the knapsack problem
#
# In this lesson we will look at two algorithmic problems. Both have very simple formulations - but let us not be fooled - as it turns out that both belong to the class of NP problems. Let us examine them, starting with the slightly simpler one.
#
# # The change-making problem
#
# The change-making problem can be formulated as follows:
#
# *Suppose we are carrying out a transaction with another person. As a result, we have to pay out a certain amount of money. We open our wallet and must now pick specific denominations to 'build' the exact amount.*
#
# ## Definition
#
# Let a set of denominations $n_1, n_2, \ldots, n_n$ be given (sorted strictly increasingly, $i < j \Rightarrow n_i < n_j$), together with an amount $C$ to pay out. We look for non-negative integer coefficients $x_1,\ldots, x_n$ such that
# $$\sum_{j=1}^n n_j x_j = C,$$
# using the smallest number of coins, i.e. such that the value
# $$\sum_{j=1}^n x_j$$
# is minimal.
#
# ## Problem description
#
# As we can see, besides paying out the change itself, the problem asks to minimise the form of that change. At its root lies, of course, some form of politeness - paying with an excessive number of coins/banknotes (e.g. settling up entirely in 1-cent coins) seems inelegant and burdensome for the other party.
#
# The problem comes in two versions:
# * the classic one, with an unlimited supply of each denomination - this is the version most often considered by algorithmists, and
# * one with additional constraints - in this version only a limited number of coins may be available for each denomination, giving additional constraints of the form $x_i \leq b_i$.
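# The greedy heuristic for the constrained variant can be sketched as follows (a minimal illustration with hypothetical names; `limits[i]` is the assumed cap on the number of coins of denomination `denominations[i]`):

```python
def change_making_greedy_bounded(K, denominations, limits):
    # Take denominations from largest to smallest, but never exceed the
    # available count of any denomination.
    order = sorted(range(len(denominations)),
                   key=lambda i: denominations[i], reverse=True)
    coins = []
    for i in order:
        available = limits[i]
        while available > 0 and K >= denominations[i]:
            coins.append(denominations[i])
            K -= denominations[i]
            available -= 1
    return coins if K == 0 else None  # None: the amount cannot be paid out

print(change_making_greedy_bounded(132, [1, 2, 5, 10, 20, 50], [5, 5, 5, 5, 5, 1]))
# [50, 20, 20, 20, 20, 2]
```

# Note that, like the unbounded greedy algorithm below, this heuristic is not guaranteed to use the minimal number of coins.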
#
# What turns out to be crucial for the problem, however, is the shape of the denomination set. There are mathematical results stating that some denomination systems admit fast, linear solving algorithms. There are also denomination systems for which these algorithms fail, and finding a solution appears to have exponential complexity.
#
# An example of a 'good denomination system' is the one we use every day, i.e. 1, 2, 5, 10, 20, 50, 100, 200, 500.
# For such sets of denominations the following algorithm turns out to be optimal.
#
# ## A greedy algorithm for the change-making problem
#
# The basic algorithm for solving the problem is the so-called greedy algorithm - an algorithm that, in each situation, makes the locally best decision. Such algorithms rely on the assumption that the optimal solution arises from making the best decision at every stage of the search. As one may easily suspect,
#
# * such an algorithm generates qualitatively good solutions, but
# * it does not always obtain the best possible solution.
#
# For the change-making problem, the greedy algorithm picks the largest available denomination (so as to reduce the remaining amount as much as possible). Let us look at a simple implementation
# +
K = 132
nominaly = [1, 2, 5, 10, 20, 50]
# -
def change_making_greedy(K, nominaly):
    nominaly.sort(reverse = True)
    coins = []
    for i in nominaly:
        while K >= i:
            coins.append(i)
            K -= i
    return coins
change_making_greedy(K, nominaly)
# If we are interested in how many coins are needed to pay out the change, it is enough to add
#
len(change_making_greedy(K, nominaly))
# Note, however, that this algorithm can find a suboptimal solution. Consider the example
#
K = 8
nominaly = [1,4,5]
change_making_greedy(K,nominaly)
# Meanwhile
4+4
# ## An algorithm using dynamic programming
#
# A general technique for finding the optimal solution requires a more advanced algorithm. Good results are obtained with dynamic programming techniques. The idea is as follows:
#
# 1. We find the optimal way of paying out smaller amounts,
# 1. We build the change for larger amounts by pairing up the payouts of smaller ones.
#
# How does it work? Imagine we have to pay out the amount 100.
# The amount 100 can, for example, be split into 1+99. If we know that 99 can be paid out optimally with k coins and 1 with a single coin, then we have a candidate for paying out 100 with k+1 coins. We cannot stop there, however. We also have to check
#
# * 2 + 98,
# * 3 + 97,
# * 4 + 96,
# * ...
# * 50 + 50;
#
# and out of all these possibilities pick the one that generates the change with the minimal number of coins.
def change_making_dynamic(K, nominaly, maks_size = 1000000):
    change_making = {}
    for i in range(1,K+1):
        change_making[i] = []  # initially we know no recipe for any of the amounts
    for i in range(1,K+1):
        if i in nominaly:  # if the amount is itself an available denomination, the simplest way is to pay it with one coin
            change_making[i].append(i)
            continue  # there is no more optimal way
        # otherwise we consider all pairs and look for the smallest set of coins
        min_size = maks_size
        min_ind = 1
        for j in range(1,i):
            # len(change_making[j]) + len(change_making[i-j]) is the number of coins
            # when trying to split i into j + (i-j)
            if min_size > len(change_making[j]) + len(change_making[i-j]):
                min_size = len(change_making[j]) + len(change_making[i-j])
                min_ind = j
        # store the optimal split (any one, i.e. the first found) in the result table
        change_making[i] = change_making[min_ind].copy()
        change_making[i].extend(change_making[i-min_ind])
    return change_making
K = 20
nominaly = [1,4,5]
change_making_dynamic(K,nominaly)
# # The knapsack problem
#
# The knapsack problem is somewhat more general than the change-making problem. It is formulated as follows:
#
# *Suppose we are setting out on a hiking trip and can take only a limited number of things. As a rule, the constraint is weight - we cannot travel with a backpack that is too heavy. Yet we want the backpack to contain only the items that are most necessary/valuable to us.*
#
# ## Definition
#
# Consider a set of $n$ items, where for the $i$-th item $v_i$ denotes its value and $w_i$ its weight. We look for an assignment of values $x_i$ with $x_i \in \{ 0, 1 \}$ for every $i \in \{1,\ldots, n\}$, satisfying the capacity constraint
# $$
# \sum_{i=1}^{n} w_i \cdot x_i \leq W,
# $$
# while maximising the total value
# $$
# \sum_{i=1}^{n} v_i \cdot x_i \to \max.
# $$
#
# ## A greedy algorithm
#
# For the knapsack problem we can consider an approximate greedy algorithm. It works as follows:
#
# 1. Sort the items by decreasing value-to-weight ratio.
# 1. At each step, take the first item from the list that still fits in the knapsack.
#
#
# +
# ['item name', value, weight]
items = [ ['a', 2, 4] , ['b', 4, 2] , ['c', 2, 2], ['d', 3, 3], ['e', 4, 1] ]
size = 6
# +
import copy
def knapsack_greedy(items, size):
    inner_items = copy.deepcopy(items)
    for item in inner_items:
        item.append(item[1]/item[2])
    inner_items.sort(key=lambda x : x[3], reverse = True)
    knapsack = {}
    for item in inner_items:
        if item[2] <= size:
            size -= item[2]
            knapsack[item[0]] = item[:]
    return knapsack
# -
knapsack_greedy(items, size)
# ## An exhaustive search algorithm
#
# When the items are available in unlimited quantity, a dynamic programming algorithm analogous to the one for the change-making problem is possible. In the most commonly encountered case, however - each item available only once - the only algorithm that finds the correct solution is exhaustive search.
#
#
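# The dynamic programming algorithm mentioned above, for the variant with unlimited copies of each item, can be sketched as follows (a minimal illustration with hypothetical names; `best[c]` is the largest value achievable with capacity `c`):

```python
def knapsack_unbounded_dynamic(items, size):
    # items: list of [name, value, weight]; each item may be taken any number of times.
    best = [0] * (size + 1)
    for c in range(1, size + 1):
        for name, value, weight in items:
            if weight <= c:
                # Either keep best[c], or add this item to an optimal fill of c - weight.
                best[c] = max(best[c], best[c - weight] + value)
    return best[size]

print(knapsack_unbounded_dynamic([['a', 2, 4], ['b', 4, 2], ['e', 4, 1]], 6))  # 24
```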
# +
from itertools import chain, combinations
def powerset(iterable):
    "powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)"
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s)+1))
for i in powerset(items):
    print(i)
# -
def knapsack_full(items, size):
    max_combination = None
    max_value = 0

    def value_and_weight(combination):
        value = 0
        weight = 0
        for item in combination:
            value += item[1]
            weight += item[2]
        return value, weight

    for combination in powerset(items):
        value, weight = value_and_weight(combination)
        if weight > size:
            continue
        if value > max_value:
            max_value = value
            max_combination = combination
    return max_combination
knapsack_full(items, size)
| problem plecakowy/Problem wydawania reszty i problem plecakowy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # DataPath :: Data Update Example
# This notebook demonstrates how to perform simple data manipulations.
from deriva.core import ErmrestCatalog, get_credential
# This example uses a development server with a throw-away catalog. You *will not* have sufficient permissions to be able to run this example. This notebook is for documentation purposes only.
scheme = 'https'
hostname = 'dev.facebase.org'
catalog_number = 1
# Use DERIVA-Auth to get a `credential` or use `None` if your catalog allows anonymous access.
credential = get_credential(hostname)
# Now, connect to your catalog and the `pathbuilder` interface for the catalog.
assert scheme == 'http' or scheme == 'https', "Invalid http scheme used."
assert isinstance(hostname, str), "Hostname not set."
assert isinstance(catalog_number, int), "Invalid catalog number"
catalog = ErmrestCatalog(scheme, hostname, catalog_number, credential)
pb = catalog.getPathBuilder()
# For this example, we will create or modify entities of the "Dataset" table of a catalog that uses the FaceBase data model.
dataset = pb.isa.dataset
dataset
# ## Insert example
# Here we will insert an entity into the dataset table.
new_entity = {
'title': 'A test dataset by derivapy',
'description': 'This was created by the deriva-py API.',
'project': 311
}
entities = dataset.insert([new_entity], defaults={'id', 'accession'})
# The insert operation returns the inserted entities, which now have any system generated attributes filled in.
list(entities)
# ## Update example
# Here we will change the description for the entity we inserted and update it in the catalog.
entities[0]['description'] = 'A test dataset that was updated by derivapy'
updated_entities = dataset.update(entities)
# Similar to the insert operation, the update operation also returns the updated entities. Notice that the system-managed 'RMT' (Row Modified Timestamp) attribute has been updated too.
list(updated_entities)
# ### Update with custom correlation and targets specified
# You can also specify which columns to use to correlate the input with the existing rows in the table, and which columns should be the targets of the update. Per the ERMrest protocol, extra data in the update payload (`entities`) will be ignored. The inputs must be `iterable`s of strings or objects that implement the `__str__` method.
entities[0]['description'] = 'Yet another update using derivapy'
entities[0]['title'] = 'And a title change'
updated_entities = dataset.update(entities, [dataset.id], [dataset.description, 'title'])
list(updated_entities)
# ## Delete example
# Unlike `insert` and `update` which are performed within the context of a table, the `delete` operation is performed within the context of a data path.
#
# We know the `RID` from above, which is a single-column key for the entities in the `dataset` (and any other ERMrest) table. We can use this attribute to form a path to the newly inserted and updated entity.
#
# Note: Any filters could be used in this example; we do not have to use a key column only. We use the key only because we want to delete the specific entity which we just created. If we wanted to, we could link additional tables and apply additional filters to delete entities computed from a _complex_ path.
path = dataset.filter(dataset.RID == entities[0]['RID'])
# On successful delete, no content will be returned.
path.delete()
| docs/derivapy-datapath-update.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Initial enthalpy calculations and enthalpy modelling
#
# Experimentally, the enthalpy of adsorption can be obtained either indirectly, through the isosteric enthalpy method, or directly, using adsorption microcalorimetry. Once an enthalpy curve is calculated, a useful performance indicator is the enthalpy of adsorption at zero loading, corresponding to the initial interactions of the probe with the surface.
# pyGAPS contains two methods to determine the initial enthalpy of adsorption starting from an enthalpy curve.
#
# First, make sure the data is imported by running the import notebook.
# %run import.ipynb
# ### Initial point method
#
# The point method of determining enthalpy of adsorption is the simplest method. It just returns the first measured point in the enthalpy curve.
#
# Depending on the data, the first point method may or may not be representative of the actual value.
# +
import matplotlib.pyplot as plt
# Initial point method
isotherm = next(i for i in isotherms_calorimetry if i.material=='HKUST-1(Cu)')
res = pygaps.initial_enthalpy_point(isotherm, 'enthalpy', verbose=True)
plt.show()
isotherm = next(i for i in isotherms_calorimetry if i.material=='Takeda 5A')
res = pygaps.initial_enthalpy_point(isotherm, 'enthalpy', verbose=True)
plt.show()
# -
# ### Compound model method
# This method attempts to model the enthalpy curve by the superposition of several contributions. It is slower, as it runs a constrained minimisation algorithm with several initial starting guesses, then selects the optimal one.
# +
# Modelling method
isotherm = next(i for i in isotherms_calorimetry if i.material=='HKUST-1(Cu)')
res = pygaps.initial_enthalpy_comp(isotherm, 'enthalpy', verbose=True)
plt.show()
isotherm = next(i for i in isotherms_calorimetry if i.material=='Takeda 5A')
res = pygaps.initial_enthalpy_comp(isotherm, 'enthalpy', verbose=True)
plt.show()
| docs/examples/initial_enthalpy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ## Testing Distributions
#
# As the start of our second pass through the epicycle, we wish to refine and expand our exploratory analysis. We will compute vertex and edge features on our graphs across multiple scales and multiple datasets.
# #### Setup
# +
from scipy.stats import gaussian_kde
from ipywidgets import widgets
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
import collections
import os
# %matplotlib inline
font = {'weight' : 'bold',
'size' : 14}
import matplotlib
matplotlib.rc('font', **font)
# +
S = 10
n = 70
p = 0.4
myer = {('er'+str(s)): nx.erdos_renyi_graph(n, p) for s in range(S)}
mydd = {('dd'+str(s)): nx.duplication_divergence_graph(n, p) for s in range(S)}
mypl = {('pl'+str(s)): nx.powerlaw_cluster_graph(n, int(n/3), p) for s in range(S)}
myba = {('ba'+str(s)): nx.barabasi_albert_graph(n, int(n/3)) for s in range(S)}
myrr = {('rr'+str(s)): nx.random_regular_graph(int(n/3), n) for s in range(S)}
myws = {('ws'+str(s)): nx.watts_strogatz_graph(n, int(n/3), p) for s in range(S)}
myls = {('ls'+str(s)): nx.random_lobster(n, 2*p, p) for s in range(S)}
mm = collections.OrderedDict()
mm["Erdos Renyi"]=myer
mm["Duplication Divergence"]=mydd
mm["Power Law"]=mypl
mm["Barabasi Albert"]=myba
mm["Random Regular"]=myrr
mm["Watts Strogatz"]=myws
mm["Random Lobster"]=myls
# -
# #### Number of Non-Zero (NNZ) edge weights
nnz = collections.OrderedDict((gs, np.mean([len(nx.edges(mm[gs][key])) for key in mm[gs]])) for gs in mm)
fig = plt.figure(figsize=(12,6))
plt.bar(range(len(nnz)),nnz.values(), alpha=0.7)
plt.title('Number of Non-Zeros in Sampled Distributions')
plt.ylabel('Mean Count')
plt.xlabel('Distribution')
plt.xticks(np.arange(len(nnz))+0.4,mm.keys(), rotation=40)
plt.xlim((0, len(nnz.keys())))
plt.savefig('../figs/distribs/sample_nnz.png')
plt.show()
# #### Vertex Degree
# +
degrees = collections.OrderedDict((gs, np.array([item for sublist in [nx.degree(mm[gs][key]).values()
for key in mm[gs]] for item in sublist])) for gs in mm)
# avg_degrees = [np.mean(degrees[key]) for key in degrees]
# -
fig = plt.figure(figsize=(12,6))
plt.violinplot(degrees.values(), range(len(degrees)), points=20, widths=1, showmeans=True, showextrema=True)
plt.title('Degree Sequence in Sampled Distributions')
plt.ylabel('Degree Sequence')
plt.xlabel('Distribution')
plt.xticks(np.arange(len(degrees)),mm.keys(), rotation=40)
plt.xlim((-1, len(degrees.keys())))
plt.ylim((0, 70))
plt.savefig('../figs/distribs/sample_degree.png')
plt.show()
# #### Edge count
# e_count = collections.OrderedDict((key, len(nx.edges(mygs[key]))) for key in mygs)
e_count = collections.OrderedDict((gs, np.mean([len(nx.edges(mm[gs][key])) for key in mm[gs]])) for gs in mm)
fig = plt.figure(figsize=(12,6))
plt.bar(range(len(e_count)),e_count.values(), alpha=0.7)
plt.title('Edge Count in Sampled Distributions')
plt.ylabel('Mean Count')
plt.xlabel('Distribution')
plt.xticks(np.arange(len(nnz))+0.4,mm.keys(), rotation=40)
plt.xlim((0, len(e_count.keys())))
plt.savefig('../figs/distribs/sample_edges.png')
plt.show()
# #### Number of Local 3-cliques
three_cliques = collections.OrderedDict((gs, np.mean([len([clique for clique in
                                         nx.algorithms.clique.enumerate_all_cliques(mm[gs][key])
                                         if len(clique) == 3]) for key in mm[gs].keys()])) for gs in mm.keys())
n_three_cliques = list(three_cliques.values())
fig = plt.figure(figsize=(12,6))
plt.bar(range(len(n_three_cliques)),n_three_cliques, alpha=0.7)
plt.title('Number of local 3-cliques')
plt.ylabel('Number of local 3-cliques')
plt.xlabel('Graph')
plt.xlim((0, len(three_cliques.keys())))
plt.show()
# #### Clustering Coefficient
# ccoefs = collections.OrderedDict((key, nx.clustering(mygs[key]).values()) for key in mygs)
ccoefs = collections.OrderedDict((gs, np.array([item for sublist in [nx.clustering(mm[gs][key]).values()
for key in mm[gs]] for item in sublist])) for gs in mm)
avg_ccoefs = [np.mean(ccoefs[key]) for key in ccoefs]
fig = plt.figure(figsize=(12,6))
plt.violinplot(ccoefs.values(), range(len(ccoefs)), points=20, widths=1, showmeans=True, showextrema=True)
plt.title('Clustering Coefficient Distributions')
plt.ylabel('Clustering Coefficient')
plt.xlabel('Graph')
plt.xticks(np.arange(len(degrees)),mm.keys(), rotation=40)
# plt.xlim((-1, len(ccoefs.keys())))
plt.ylim((-0.01, 1.01))
plt.savefig('../figs/distribs/sample_cc.png')
plt.show()
# #### Scan Statistic-i
# +
i = 1

def scan_statistic(mm, i):
    # Aggregate the scan statistic over all sampled graphs of each distribution.
    ss = collections.OrderedDict()
    for gs in mm.keys():
        tmp = np.array(())
        for key in mm[gs]:
            g = mm[gs][key]
            for n in g.nodes():
                subgraph = nx.ego_graph(g, n, radius = i)
                tmp = np.append(tmp, np.sum([1 for e in subgraph.edges()]))
        ss[gs] = tmp
    return ss

ss1 = scan_statistic(mm, i)
# -
fig = plt.figure(figsize=(12,6))
plt.violinplot(ss1.values(), range(len(ss1)), points=20, widths=1, showmeans=True, showextrema=True)
plt.title('Scan Statistic-1 Distributions')
plt.ylabel('Scan Statistic-1')
plt.xlabel('Graph')
plt.xlim((-1, len(ss1.keys())))
plt.savefig('../figs/distribs/sample_ss1.png')
plt.show()
i = 2
ss2 = scan_statistic(mm, i)
fig = plt.figure(figsize=(12,6))
plt.violinplot(ss2.values(), range(len(ss2)), points=20, widths=1, showmeans=True, showextrema=True)
plt.title('Scan Statistic-2 Distributions')
plt.ylabel('Scan Statistic-2')
plt.xlabel('Graph')
plt.xlim((-1, len(ss2.keys())))
plt.show()
# #### Eigen value
# +
# Compute the normalized Laplacian of each sampled graph, then its eigenvalues sorted in decreasing order.
laplacian = collections.OrderedDict((gs, [nx.normalized_laplacian_matrix(mm[gs][key])
                                          for key in mm[gs]]) for gs in mm)
eigs = collections.OrderedDict((gs, np.array([np.sort(np.linalg.eigvals(L.A))[::-1]
                                              for L in laplacian[gs]])) for gs in laplacian)
laplacian['Erdos Renyi'][0]
fig = plt.figure(figsize=(6,6))
for key in eigs.keys():
    # dens = gaussian_kde(eigs[key])
    # x = np.linspace(0, 1.2*np.max(eigs[key]), 1000)
    plt.plot(eigs[key].T, 'o-', markersize=0.4, color='#888888', alpha=0.4)
plt.title('Eigen Values')
plt.ylabel('Eigen Value')
plt.xlabel('D')
plt.show()
# #### Betweenness Centrality
centrality = collections.OrderedDict((gs, [c for key in mm[gs]
                                           for c in nx.algorithms.betweenness_centrality(mm[gs][key]).values()])
                                     for gs in mm.keys())
fig = plt.figure(figsize=(12,6))
plt.violinplot(centrality.values(), range(len(centrality.values())), points=20, widths=1, showmeans=True, showextrema=True)
plt.title('Node Centrality Distributions')
plt.ylabel('Centrality')
plt.xlabel('Graph')
plt.xlim((-1, len(centrality.keys())))
plt.ylim((-0.001, .2 ))
plt.show()
# #### Connected Components (abandoning for now)
all_graphs = {key: mm[gs][key] for gs in mm for key in mm[gs]}
ccs = {key: list(nx.connected_component_subgraphs(all_graphs[key])) for key in all_graphs}
# nccs = {key: len(ccs[key]) for key in ccs}
# print nccs
lccs = {key: max(ccs[key], key=len) for key in ccs}
| code/testing_distributions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from collections import Counter
import json
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# %matplotlib inline
plt.rcParams['font.size'] = 12
# -
with open('results/cache/result_run_9268772.json', 'r') as f:
    pred = json.load(f)
pred[0]
with open('data/vqa/raw/v2_OpenEnded_mscoco_val2014_questions.json') as f:
    ques = json.load(f)['questions']
ques[0]
with open('data/vqa/raw/v2_mscoco_val2014_annotations.json') as f:
    anno = json.load(f)['annotations']
anno[0]
# +
pred_df = pd.DataFrame(pred)
ques_df = pd.DataFrame(ques)
anno_df = pd.DataFrame(anno)
df = pred_df.merge(ques_df, how='inner').merge(anno_df, how='inner')
df = df.reindex(columns=['image_id','question_id', 'question_type', 'question',
'answer_type', 'multiple_choice_answer', 'answers', 'answer'],
copy=False)
# df.rename(columns=lambda col: 'prediction' if col == 'answer' else col, inplace=True)
del pred, ques, anno, pred_df, ques_df, anno_df
df.head()
# +
ans_set = df['answers'].map(lambda ans: set(map(lambda a: a['answer'], ans)))
wrong_idx = [a not in s for a,s in zip(df['answer'], ans_set)]
wrong = df[wrong_idx]
wrong.head()
# -
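# A common way to score these predictions is the VQA soft-accuracy metric, where an answer
# counts as fully correct if at least 3 of the 10 human annotators gave it. A minimal sketch,
# assuming the `answers` field follows the standard VQA v2 annotation format:

```python
def vqa_accuracy(prediction, human_answers):
    """VQA v2 soft accuracy: min(#annotators who gave this answer / 3, 1)."""
    matches = sum(1 for a in human_answers if a == prediction)
    return min(matches / 3.0, 1.0)

# 2 of 10 annotators agree -> accuracy 2/3; 3 or more agree -> 1.0
vqa_accuracy("cat", ["cat", "cat"] + ["dog"] * 8)
```

# (The official evaluation also normalizes answer strings before matching;
# this sketch skips that step.)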
| validation_predictions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Import Python libraries
from typing import *
import os
#import ibm_watson
#import ibm_watson.natural_language_understanding_v1 as nlu
#import ibm_cloud_sdk_core
import pandas as pd
import spacy
import sys
from IPython.display import display, HTML
import textwrap
# And of course we need the text_extensions_for_pandas library itself.
_PROJECT_ROOT = "../.."
try:
import text_extensions_for_pandas as tp
except ModuleNotFoundError as e:
# If we're running from within the project source tree and the parent Python
# environment doesn't have the text_extensions_for_pandas package, use the
# version in the local source tree.
if not os.getcwd().endswith("market"):
raise e
if _PROJECT_ROOT not in sys.path:
sys.path.insert(0, _PROJECT_ROOT)
import text_extensions_for_pandas as tp
# Download the SpaCy model if necessary
try:
spacy.load("en_core_web_trf")
except IOError:
raise IOError("SpaCy dependency parser not found. Please run "
"'python -m spacy download en_core_web_trf', then "
"restart JupyterLab.")
if "IBM_API_KEY" not in os.environ:
raise ValueError("IBM_API_KEY environment variable not set. Please create "
"a free instance of IBM Watson Natural Language Understanding "
"(see https://www.ibm.com/cloud/watson-natural-language-understanding) "
"and set the IBM_API_KEY environment variable to your instance's "
"API key value.")
api_key = os.environ.get("IBM_API_KEY")
service_url = os.environ.get("IBM_SERVICE_URL")
# natural_language_understanding = ibm_watson.NaturalLanguageUnderstandingV1(
# version="2021-01-01",
# authenticator=ibm_cloud_sdk_core.authenticators.IAMAuthenticator(api_key)
# )
# natural_language_understanding.set_service_url(service_url)
# Github notebook gists will be this wide: ------------------>
# Screenshots of this notebook should be this wide: ----------------------------->
# +
# Code from the Github gist at https://gist.github.com/frreiss/038ac63ef20eed323a5637f9ddb2de8d
# Be sure to update this cell if the gist changes!
import pandas as pd
import text_extensions_for_pandas as tp
import ibm_watson
import ibm_watson.natural_language_understanding_v1 as nlu
import ibm_cloud_sdk_core
def find_persons_quoted_by_name(doc_url, api_key, service_url) -> pd.DataFrame:
# Ask Watson Natural Language Understanding to run its "semantic_roles"
# and "entities" models.
natural_language_understanding = ibm_watson.NaturalLanguageUnderstandingV1(
version="2021-01-01",
authenticator=ibm_cloud_sdk_core.authenticators.IAMAuthenticator(api_key)
)
natural_language_understanding.set_service_url(service_url)
nlu_results = natural_language_understanding.analyze(
url=doc_url,
return_analyzed_text=True,
features=nlu.Features(
entities=nlu.EntitiesOptions(mentions=True),
semantic_roles=nlu.SemanticRolesOptions())).get_result()
# Convert the output of Watson Natural Language Understanding to DataFrames.
dataframes = tp.io.watson.nlu.parse_response(nlu_results)
entity_mentions_df = dataframes["entity_mentions"]
semantic_roles_df = dataframes["semantic_roles"]
# Extract mentions of person names and company names
person_mentions_df = entity_mentions_df[entity_mentions_df["type"] == "Person"]
# Extract instances of subjects that made statements
quotes_df = semantic_roles_df[semantic_roles_df["action.normalized"] == "say"]
subjects_df = quotes_df[["subject.text"]].copy().reset_index(drop=True)
# Retrieve the full document text from the entity mentions output.
doc_text = entity_mentions_df["span"].array.document_text
# Use String.index() to find where the strings in "subject.text" begin
subjects_df["begin"] = pd.Series(
[doc_text.index(s) for s in subjects_df["subject.text"]], dtype=int)
# Compute end offsets and wrap the <begin, end, text> triples in a SpanArray column
subjects_df["end"] = subjects_df["begin"] + subjects_df["subject.text"].str.len()
subjects_df["span"] = tp.SpanArray(doc_text, subjects_df["begin"], subjects_df["end"])
# Align subjects with person names
execs_df = tp.spanner.contain_join(subjects_df["span"],
person_mentions_df["span"],
"subject", "person")
# Add on the document URL.
execs_df["url"] = doc_url
return execs_df[["person", "url"]]
# -
# # Part 2: Using Pandas DataFrames to analyze sentence structure
# *In this article, we show how to use Pandas DataFrames to extract useful structure from the parse trees of English-language sentences.*
#
# *Dependency parsing* is a natural language processing technique that identifies the relationships between the words that make up a sentence. We can treat these relationships as the edges of a graph.
#
# For example, here's the graph that a dependency parser produces for the sentence, "I like natural language processing":
# 
# + tags=[]
# Do not include this cell in the blog post.
# Code to generate the above image
import spacy
spacy_language_model = spacy.load("en_core_web_trf")
token_features = tp.io.spacy.make_tokens_and_features(
"I like natural language processing.", spacy_language_model)
tp.io.spacy.render_parse_tree(token_features)
# -
# This graph is always a tree, so we call it the *dependency-based parse tree* of the sentence. We often shorten the phrase "dependency-based parse tree" to **dependency parse** or **parse tree**.
#
# Every word in the sentence (including the period at the end) becomes a node of the parse tree:
# 
#
# The most important verb in the sentence
# becomes the root of the tree. We call this root node the *head* node. In this example, the head node is the verb "like".
#
# Edges in the tree connect pairs of related words:
# 
#
# Each edge is tagged with information about why the words are related. For example, the first two words in the sentence, "I" and "like", have an `nsubj` relationship. The pronoun "I" is the subject for the verb "like".
#
# Dependency parsing is useful because it lets you solve problems with very little code. The parser acts as a universal machine learning model, extracting many facts at once from the text. Pattern matching over the parse tree lets you filter this set of facts down to the ones that are relevant to your application.
# # An enterprise application of dependency parsing
#
# In a [previous article](https://medium.com/@fred.reiss/market-intelligence-with-pandas-and-ibm-watson-natural-language-understanding-a939323a31ea), we showed how to use [Watson Natural Language Understanding](https://www.ibm.com/cloud/watson-natural-language-understanding?cm_mmc=open_source_technology) to find places where a press release quotes an executive by name. In this article, we'll use dependency parsing to associate those names with **job titles**.
#
# A person's job title is a valuable piece of context. The title can tell you whether the person is an important decision maker. Titles can tell you the relationships between different employees at a company. By looking at how titles change over time, you can reconstruct a person's job history.
# + tags=[]
# Don't include this cell in the blog
# Code to generate parse tree of entire sentence
# Take a screenshot at 25% to create the png version.
quote_text = '''\
"By combining the power of AI with the flexibility and agility of hybrid cloud, \
our clients are driving innovation and digitizing their operations at a fast \
pace," said <NAME>, general manager, Data and AI, IBM.'''
tokens = tp.io.spacy.make_tokens_and_features(quote_text, spacy_language_model)
print(f"{len(tokens.index)} tokens")
tp.io.spacy.render_parse_tree(tokens)
# -
# Here's an example of how names and job titles can appear in press releases. This example is from an [IBM press release](https://newsroom.ibm.com/2020-12-02-IBM-Named-a-Leader-in-the-2020-IDC-MarketScape-For-Worldwide-Advanced-Machine-Learning-Software-Platform) from December 2020:
#
# 
#
# This sentence is 45 words long, so the entire parse tree is a bit daunting...
#
# 
#
# ...but if we zoom in on just the phrase, "<NAME>, general manager, Data and AI, IBM," some structure becomes clear:
#
# 
#
# The arrows in the diagram point "downwards", from root to leaf. The entire job title is a child of the name. There's a single edge from the head (highest) node of <NAME>'s name to the head node of his job title.
#
# The edge types in this parse tree come from the [Universal Dependencies](https://universaldependencies.org/) framework. The edge between the name and job title has the type `appos`. `appos` is short for "[appositional modifier](https://universaldependencies.org/docs/en/dep/appos.html)", or [appositive](https://owl.purdue.edu/owl/general_writing/grammar/appositives.html). An appositive is a noun that describes another noun. In this case, the noun phrase "general manager, Data and AI, IBM" describes the noun phrase "<NAME>".
#
# The pattern in the picture above happens whenever a person's job title is an appositive for that person's name. The title will be below the name in the tree, and the head nodes of the name and title will be connected by an `appos` edge. We can use this pattern to find the job title via a three-step process:
#
# 1. Look for an `appos` edge coming out of any of the parse tree nodes for the name.
# 2. The node at the other end of this edge should be the head node of the job title.
# 3. Find all the other nodes that are reachable from the head node of the job title.
#
# Remember that each node represents a word. Once you know all the nodes that make up the job title, you know all the words in the title.
#
# Step 3 here requires a [*transitive closure*](https://en.wikipedia.org/wiki/Transitive_closure) operation:
# * Start with a set of nodes consisting of just the head node
# * Look for nodes that are connected to nodes of the set. Add those nodes to the set.
# * Repeat the previous step until your set of nodes stops growing.
#
# We can implement this algorithm with Pandas DataFrames.
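# Before the DataFrame version, it can help to see the same fixed-point loop over a plain
# edge list. A minimal sketch with a hypothetical mini parse tree:

```python
def transitive_closure(start, edges):
    """Return all nodes reachable from `start` by repeatedly following edges."""
    reachable = {start}
    changed = True
    while changed:
        changed = False
        for src, dst in edges:
            if src in reachable and dst not in reachable:
                reachable.add(dst)
                changed = True
    return reachable

# Hypothetical edges: 0 -> 1 -> 2 and 0 -> 3; node 4 points into the set
# but is never reachable from 0.
transitive_closure(0, [(0, 1), (1, 2), (0, 3), (4, 2)])  # -> {0, 1, 2, 3}
```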
# # Transitive closure with Pandas
#
# We're going to use Pandas to match person names with job titles. The first thing we'll need is the locations of the person names. In our previous post, we created a function `find_persons_quoted_by_name()` that finds all the people that a news article quotes by name. If you're curious, you can find the source code [here](https://gist.github.com/frreiss/038ac63ef20eed323a5637f9ddb2de8d). The function produces a DataFrame with the location of each person name. Here's the output when you run the function over an [example press release](https://newsroom.ibm.com/2020-12-02-IBM-Named-a-Leader-in-the-2020-IDC-MarketScape-For-Worldwide-Advanced-Machine-Learning-Software-Platform):
doc_url = "https://newsroom.ibm.com/2020-12-02-IBM-Named-a-Leader-in-the-2020-IDC-MarketScape-For-Worldwide-Advanced-Machine-Learning-Software-Platform"
persons = find_persons_quoted_by_name(doc_url, api_key,
service_url)
persons
# The second thing we will need is a parse tree. We'll use the dependency parser from the [SpaCy](https://spacy.io) NLP library. Our open source library [Text Extensions for Pandas](https://ibm.biz/text-extensions-for-pandas) can convert the output of this parser into a DataFrame:
# +
import spacy
import text_extensions_for_pandas as tp
# The original document had HTML tags. Get the detagged text.
doc_text = persons["person"].array.document_text
# Run dependency parsing and convert the parse to a DataFrame.
spacy_language_model = spacy.load("en_core_web_trf")
all_token_features = tp.io.spacy.make_tokens_and_features(
doc_text, spacy_language_model)
# Drop the columns we won't need for this analysis.
tokens = all_token_features[["id", "span", "dep", "head",
"sentence"]]
tokens
# -
# This `tokens` DataFrame contains one row for every *token* in the document. The term "token" here refers to a part of the document that is a word, an abbreviation, or a piece of punctuation. The columns "id", "dep" and "head" encode the edges of the parse tree.
#
# Since we're going to be analyzing the parse tree, it's more convenient to have the nodes and edges in separate DataFrames. So let's split `tokens` into DataFrames of nodes and edges:
nodes = tokens[["id", "span"]].reset_index(drop=True)
edges = tokens[["id", "head", "dep"]].reset_index(drop=True)
nodes
edges
# We will start with the nodes that are parts of person names. To find these nodes, we need to match the person names in `person` with tokens in `nodes`.
#
# The "person" column of `persons` and the "span" column in `nodes` both hold *span* data. Spans are a common concept in natural language processing. A span represents a region of the document, usually as begin and end offsets and a reference to the document's text. The span data in these two DataFrames is stored using the `SpanDtype` extension type from Text Extensions for Pandas.
#
# Text Extensions for Pandas also includes functions for manipulating span data. We can use one of these functions, `overlap_join()`, to find all the places where a token from `nodes` overlaps with a person name from `persons`:
person_nodes = (
tp.spanner.overlap_join(persons["person"], nodes["span"],
"person", "span")
.merge(nodes)
)
person_nodes
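# Conceptually, `overlap_join` keeps every pair of spans that share at least one character
# position. A plain-Python sketch of that predicate over half-open `[begin, end)` offsets
# (illustrative only, not the library's implementation):

```python
def spans_overlap(begin1, end1, begin2, end2):
    """True if the half-open intervals [begin1, end1) and [begin2, end2) intersect."""
    return begin1 < end2 and begin2 < end1

# A token at [10, 15) overlaps a person name at [12, 30); one at [0, 5) does not.
spans_overlap(10, 15, 12, 30)  # -> True
spans_overlap(0, 5, 12, 30)    # -> False
```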
# This set of nodes defines a starting point for navigating the parse tree. Now we need to look for nodes that are on the other side of an `appos` link. Since the nodes and edges of our graph are Pandas DataFrames, we can use the Pandas `merge()` method to match edges with nodes and walk the graph. Here's a function that finds all the nodes that are one edge away from the nodes in its argument `start_nodes`:
def traverse_edges_once(start_nodes: pd.DataFrame,
edges: pd.DataFrame,
metadata_cols = ["person"]) -> pd.DataFrame:
return (
start_nodes[["person", "id"]] # Propagate original "person" span
.merge(edges, left_on="id", right_on="head",
suffixes=["_head", ""])[["person", "id"]]
.merge(nodes)
)
# Now we can find all the nodes that are reachable by traversing an `appos` link downward from part of a person name:
appos_targets = \
traverse_edges_once(person_nodes,
edges[edges["dep"] == "appos"])
appos_targets
# Each element of the "span" column of `appos_targets` holds the head node of a person's title. To find the remaining nodes of the titles, we'll do the transitive closure operation we described earlier. We use a Pandas DataFrame to store our set of selected nodes. We use the `traverse_edges_once` function to perform each step of walking the tree. Then we use `Pandas.concat()` and `DataFrame.drop_duplicates()` to add the new nodes to our selected set of nodes. The entire algorithm looks like this:
# +
# Start with the root nodes of the titles.
selected_nodes = appos_targets.copy()
# Transitive closure.
# Keep going as long as the previous round enlarged our set.
previous_num_nodes = 0
while len(selected_nodes.index) > previous_num_nodes:
# Find all the nodes that are directly reachable from
# the selected set.
addl_nodes = traverse_edges_once(selected_nodes, edges)
# Merge the new nodes into the selected set.
previous_num_nodes = len(selected_nodes.index)
selected_nodes = (pd.concat([selected_nodes, addl_nodes])
.drop_duplicates())
selected_nodes
# -
# Now we know the spans of all the words that make up each job title. The "addition" operation
# for spans is defined as:
# ```
# span1 + span2 = smallest span that contains both span1 and span2
# ```
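# In terms of offsets, this "sum" is simply the minimum begin and maximum end of the two
# operands. A minimal sketch with `(begin, end)` tuples, assuming both spans refer to the
# same document:

```python
def add_spans(span1, span2):
    """Smallest span covering both inputs, as (begin, end) offsets."""
    return (min(span1[0], span2[0]), max(span1[1], span2[1]))

# e.g. a title fragment at [100, 115) plus another at [130, 133)
add_spans((100, 115), (130, 133))  # -> (100, 133)
```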
# We can recover the span of the entire title by "adding" spans using Pandas' `groupby()` method:
# Aggregate the nodes of each title to find the span of the
# entire title.
titles = (
selected_nodes
.groupby("person")
.aggregate({"span": "sum"})
.reset_index()
.rename(columns={"span": "title"})
)
titles
# Now we have found a job title for each of the executive names in this document!
# ## Tying it all together
# Let's put all of the code we've presented so far into a single function.
# +
# Keep the contents of this cell synchronized with the gist at
# https://gist.github.com/frreiss/a731438dda4ac948beca85d3fe167ff3
import pandas as pd
import text_extensions_for_pandas as tp
def find_titles_of_persons(persons: pd.DataFrame,
spacy_language_model) -> pd.DataFrame:
"""
:param persons: DataFrame containing information about person names.
:param spacy_language_model: Loaded SpaCy language model with dependency
parsing support.
:returns: A DataFrame with a row for every title identified and two columns,
"person" and "title".
"""
def traverse_edges_once(start_nodes: pd.DataFrame, edges: pd.DataFrame,
metadata_cols = ["person"]) -> pd.DataFrame:
return (
start_nodes[["person", "id"]] # Propagate original "person" span
.merge(edges, left_on="id", right_on="head",
suffixes=["_head", ""])[["person", "id"]]
.merge(nodes)
)
if len(persons.index) == 0:
# Special case: Empty input --> empty output
return pd.DataFrame({
"person": pd.Series([], dtype=tp.SpanDtype()),
"title": pd.Series([], dtype=tp.SpanDtype()),
})
# Retrieve the document text from the person spans.
doc_text = persons["person"].array.document_text
# Run dependency parsing on the text and convert the parse to a DataFrame.
all_token_features = tp.io.spacy.make_tokens_and_features(doc_text, spacy_language_model)
# Drop the columns we won't need for this analysis.
tokens = all_token_features[["id", "span", "tag", "dep", "head", "sentence"]]
# Split the parse tree into nodes and edges and filter the edges.
nodes = tokens[["id", "span", "tag"]].reset_index(drop=True)
edges = tokens[["id", "head", "dep"]].reset_index(drop=True)
# Start with the nodes that are inside person names.
person_nodes = (
tp.spanner.overlap_join(persons["person"], nodes["span"],
"person", "span")
.merge(nodes)
)
# Step 1: Follow `appos` edges from the person names
appos_targets = traverse_edges_once(person_nodes,
edges[edges["dep"] == "appos"])
# Step 2: Transitive closure to find all tokens in the titles
selected_nodes = appos_targets.copy()
previous_num_nodes = 0
while len(selected_nodes.index) > previous_num_nodes:
# Find all the nodes that are directly reachable from our selected set.
addl_nodes = traverse_edges_once(selected_nodes, edges)
# Merge the new nodes into the selected set
previous_num_nodes = len(selected_nodes.index)
selected_nodes = (pd.concat([selected_nodes, addl_nodes])
.drop_duplicates())
# Aggregate the nodes of each title to find the span of the entire title.
titles = (
selected_nodes
.groupby("person")
.aggregate({"span": "sum"})
.reset_index()
.rename(columns={"span": "title"})
)
# As of Pandas 1.2.1, groupby() over extension types downgrades them to object
# dtype. Cast back up to the extension type.
titles["person"] = titles["person"].astype(tp.SpanDtype())
return titles
# -
# If we combine this `find_titles_of_persons()` function with the `find_persons_quoted_by_name()` function we created in our previous post, we can build a data mining pipeline. This pipeline finds the names and titles of executives in corporate press releases. Here's the output that we get if we pass a year's worth of IBM press releases through this pipeline:
# +
# Don't include this cell in the blog post.
# Load press release URLs from a file
with open("ibm_press_releases.txt", "r") as f:
lines = [l.strip() for l in f.readlines()]
ibm_press_release_urls = [l for l in lines if len(l) > 0 and l[0] != "#"]
# +
to_concat = []
for url in ibm_press_release_urls:
persons = find_persons_quoted_by_name(url, api_key,
service_url)
titles = find_titles_of_persons(persons,
spacy_language_model)
titles["url"] = url
to_concat.append(titles)
all_titles = pd.concat(to_concat).reset_index(drop=True)
all_titles
# -
# Our pipeline has processed 191 press releases, and it found the names and titles of 259 executives!
#
# To find out more about the extensions to Pandas that made this possible, check out Text Extensions for Pandas [here](https://ibm.biz/text-extensions-for-pandas).
#
# +
# Don't include this cell in the blog.
# Check the last 50 rows
all_titles[-50:]
# -
| tutorials/market/Market_Intelligence_Part2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
from network import Network, TargetNetwork
from layer import LowPassFilter
# +
np.random.seed(seed=0)
network = Network()
save_dir = "saved"
#network.load(save_dir)
# +
network.set_target_prediction_mode()
dt = 0.1
lp_filter = LowPassFilter(dt, 3)
target_values = np.random.rand(10)
values = np.random.rand(30)
train_iteration = 200
erros = []
u_targets = []
u_outputs = []
for i in range(train_iteration):
for j in range(1000):
filtered_values = lp_filter.process(values)
network.set_target_firing_rate(target_values)
network.set_input_firing_rate(filtered_values)
network.update(dt)
error = np.mean(network.layers[1].v_p_a)
u_target = network.layers[2].u_target
u_p = network.layers[2].u_p
erros.append(error)
u_targets.append(u_target)
u_outputs.append(u_p)
network.clear_target()
for i in range(100):
for j in range(1000):
filtered_values = lp_filter.process(values)
network.set_input_firing_rate(filtered_values)
network.update(dt)
u_p = network.layers[2].u_p
u_outputs.append(u_p)
u_targets = np.array(u_targets)
u_outputs = np.array(u_outputs)
# Save weights
network.save(save_dir)
# -
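# The local `layer.LowPassFilter` is not shown here; a minimal first-order (exponential)
# low-pass filter sketch, under the assumption that it behaves similarly:

```python
import numpy as np

class SimpleLowPassFilter:
    """First-order low-pass: y += (dt / tau) * (x - y)."""
    def __init__(self, dt, tau):
        self.alpha = dt / tau
        self.state = None

    def process(self, x):
        x = np.asarray(x, dtype=float)
        if self.state is None:
            self.state = np.zeros_like(x)
        self.state += self.alpha * (x - self.state)
        return self.state.copy()

f = SimpleLowPassFilter(dt=0.1, tau=3.0)
out = [f.process(np.ones(3)) for _ in range(1000)]
# With a constant input, the filtered value converges toward the input.
```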
plt.plot(erros, label="apical error")
plt.legend()
plt.show()
check_index = 8
plt.plot(u_outputs[:,check_index], label="output potential")
plt.plot(u_targets[:,check_index], label="target potential")
#plt.ylim(-1.3, 1.3)
plt.legend()
plt.show()
# +
np.random.seed(seed=0)
network = Network()
save_dir = "saved"
network.load(save_dir)
# -
# ## Check with newly loaded network
# +
network.set_target_prediction_mode()
dt = 0.1
lp_filter = LowPassFilter(dt, 3)
check_u_targets = []
check_u_outputs = []
for j in range(1000):
filtered_values = lp_filter.process(values)
network.set_input_firing_rate(filtered_values)
network.update(dt)
u_p = network.layers[2].u_p
check_u_outputs.append(u_p)
check_u_targets = np.array(u_targets)  # reuse targets recorded during training
check_u_outputs = np.array(check_u_outputs)
# -
check_index = 1
plt.plot(check_u_outputs[:,check_index], label="output potential")
plt.plot(check_u_targets[:,check_index], label="target potential")
plt.ylim(-1.3, 1.3)
plt.legend()
plt.show()
# +
data_file_path = "saved/layer1.npz"
data = np.load(data_file_path)
w_pp_bu_1 = data["w_pp_bu"] # (10, 20)
w_pp_td_1 = data["w_pp_td"] # (20, 10)
w_ip_1 = data["w_ip"] # (10,20)
w_pi_1 = data["w_pi"] # (20,10)
# -
sns.heatmap(w_pp_bu_1)
plt.show()
sns.heatmap(w_pp_td_1) # fixed (not trained)
plt.show()
sns.heatmap(w_ip_1)
plt.show()
sns.heatmap(w_pi_1)
plt.show()
sns.heatmap(-w_pp_td_1) # fixed; check whether it approximates the negative of w_ip
plt.show()
data_file_path = "saved/layer0.npz"
data = np.load(data_file_path)
w_pp_bu_0 = data["w_pp_bu"]
w_pp_td_0 = data["w_pp_td"]
sns.heatmap(w_pp_bu_0) # trained
plt.show()
sns.heatmap(w_pp_td_0) # fixed (not trained)
plt.show()
| visualize_target_prediction.ipynb |