# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# If Statements
# ===
# By allowing you to respond selectively to different situations and conditions, if statements open up whole new possibilities for your programs. In this section, you will learn how to test for certain conditions, and then respond in appropriate ways to those conditions.
# [Previous: Introducing Functions](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/introducing_functions.ipynb) |
# [Home](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/index.ipynb) |
# [Next: While Loops and Input](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/while_input.ipynb)
# Contents
# ===
# - [What is an *if* statement?](#What-is-an-*if*-statement?)
# - [Example](#Example)
# - [Logical tests](#Logical-tests)
# - [Equality](#Equality)
# - [Inequality](#Inequality)
# - [Other inequalities](#Other-inequalities)
# - [Checking if an item is in a list](#Checking-if-an-item-is-in-a-list)
# - [Exercises](#Exercises-logical)
# - [The if-elif...else chain](#The-if-elif...else-chain)
# - [Simple if statements](#Simple-if-statements)
# - [if-else statements](#if-else-statements)
# - [if-elif...else chains](#if-elif...else-chains)
# - [Exercises](#Exercises-elif)
# - [More than one passing test](#More-than-one-passing-test)
# - [True and False values](#True-and-False-values)
# - [Overall Challenges](#Overall-Challenges)
# What is an *if* statement?
# ===
# An *if* statement tests for a condition, and then responds to that condition. If the condition is true, then whatever action is listed next gets carried out. You can test for multiple conditions at the same time, and respond appropriately to each condition.
#
# Example
# ---
# Here is an example that shows a number of the desserts I like. It lists those desserts, but lets you know which one is my favorite.
# +
# A list of desserts I like.
desserts = ['ice cream', 'chocolate', 'rhubarb crisp', 'cookies']
favorite_dessert = 'apple crisp'
# Print the desserts out, but let everyone know my favorite dessert.
for dessert in desserts:
if dessert == favorite_dessert:
# This dessert is my favorite, let's let everyone know!
print("%s is my favorite dessert!" % dessert.title())
else:
# I like these desserts, but they are not my favorite.
print("I like %s." % dessert)
# -
# #### What happens in this program?
#
# - The program starts out with a list of desserts, and one dessert is identified as a favorite.
# - The for loop runs through all the desserts.
# - Inside the for loop, each item in the list is tested.
# - If the current value of *dessert* is equal to the value of *favorite_dessert*, a message is printed that this is my favorite.
# - If the current value of *dessert* is not equal to the value of *favorite_dessert*, a message is printed that I just like the dessert.
#
# You can test as many conditions as you want in an if statement, as you will see in a little bit.
# Logical Tests
# ===
# Every if statement evaluates to *True* or *False*. *True* and *False* are Python keywords, which have special meanings attached to them. You can test for the following conditions in your if statements:
#
# - [equality](#equality) (==)
# - [inequality](#inequality) (!=)
# - [other inequalities](#other_inequalities)
# - greater than (>)
# - greater than or equal to (>=)
# - less than (<)
# - less than or equal to (<=)
# - [You can test if an item is **in** a list.](#in_list)
#
# ### Whitespace
# Remember [learning about](http://introtopython.org/lists_tuples.html#pep8) PEP 8? There is a [section of PEP 8](http://www.python.org/dev/peps/pep-0008/#other-recommendations) that tells us it's a good idea to put a single space on either side of all of these comparison operators. If you're not sure what this means, just follow the style of the examples you see below.
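# As a quick illustration (an addition, not part of the original text), both of these comparisons give the same result; PEP 8 simply prefers the spaced form:

```python
# Both comparisons evaluate identically; only readability differs.
print(5>3)    # cramped, but legal
print(5 > 3)  # PEP 8 style: one space on either side of the operator
```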
# Equality
# ---
# Two items are *equal* if they have the same value. You can test for equality between numbers, strings, and a number of other objects which you will learn about later. Some of these results may be surprising, so take a careful look at the examples below.
#
# In Python, as in many programming languages, two equals signs (`==`) test for equality.
#
# **Watch out!** Be careful of accidentally using one equals sign, which can really throw things off, because a single equals sign *assigns* a value rather than testing for it!
5 == 5
3 == 5
5 == 5.0
'eric' == 'eric'
'Eric' == 'eric'
'Eric'.lower() == 'eric'.lower()
'5' == 5
'5' == str(5)
# Inequality
# ---
# Two items are *unequal* if they do not have the same value. In Python, we test for inequality using an exclamation point followed by an equals sign (`!=`).
#
# Sometimes you want to test for equality and if that fails, assume inequality. Sometimes it makes more sense to test for inequality directly.
3 != 5
5 != 5
'Eric' != 'eric'
# Other Inequalities
# ---
# ### greater than
5 > 3
# ### greater than or equal to
5 >= 3
3 >= 3
# ### less than
3 < 5
# ### less than or equal to
3 <= 5
3 <= 3
# Checking if an item is **in** a list
# ---
# You can check if an item is in a list using the **in** keyword.
vowels = ['a', 'e', 'i', 'o', 'u']
'a' in vowels
vowels = ['a', 'e', 'i', 'o', 'u']
'b' in vowels
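# A related test worth knowing (a small addition to the examples above): **not in** checks that an item is *absent* from a list.

```python
vowels = ['a', 'e', 'i', 'o', 'u']
print('b' not in vowels)  # True: 'b' is not in the list
print('a' not in vowels)  # False: 'a' is in the list
```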
# <a id="Exercises-logical"></a>
# Exercises
# ---
# #### True and False
# - Write a program that consists of at least ten lines, each of which has a logical statement on it. The output of your program should be 5 **True**s and 5 **False**s.
# - Note: You will probably need to write `print(5 > 3)`, not just `5 > 3`.
# The if-elif...else chain
# ===
# You can test whatever series of conditions you want to, and you can test your conditions in any combination you want.
# Simple if statements
# ---
# The simplest test has a single **if** statement, and a single statement to execute if the condition is **True**.
# +
dogs = ['willie', 'hootz', 'peso', 'juno']
if len(dogs) > 3:
print("Wow, we have a lot of dogs here!")
# -
# In this situation, nothing happens if the test does not pass.
# +
###highlight=[2]
dogs = ['willie', 'hootz']
if len(dogs) > 3:
print("Wow, we have a lot of dogs here!")
# -
# Notice that there are no errors. The condition `len(dogs) > 3` evaluates to False, and the program moves on to any lines after the **if** block.
# if-else statements
# ---
# Many times you will want to respond in two possible ways to a test. If the test evaluates to **True**, you will want to do one thing. If the test evaluates to **False**, you will want to do something else. The **if-else** structure lets you do that easily. Here's what it looks like:
# +
dogs = ['willie', 'hootz', 'peso', 'juno']
if len(dogs) > 3:
print("Wow, we have a lot of dogs here!")
else:
print("Okay, this is a reasonable number of dogs.")
# -
# Our results have not changed in this case, because if the test evaluates to **True** only the statements under the **if** statement are executed. The statements under **else** are only executed if the test fails:
# +
###highlight=[2]
dogs = ['willie', 'hootz']
if len(dogs) > 3:
print("Wow, we have a lot of dogs here!")
else:
print("Okay, this is a reasonable number of dogs.")
# -
# The test evaluated to **False**, so only the statement under `else` is run.
# if-elif...else chains
# ---
# Many times, you will want to test a series of conditions, rather than just an either-or situation. You can do this with a series of if-elif-else statements.
#
# There is no limit to how many conditions you can test. You always need one if statement to start the chain, and you can never have more than one else statement. But you can have as many elif statements as you want.
# +
dogs = ['willie', 'hootz', 'peso', 'monty', 'juno', 'turkey']
if len(dogs) >= 5:
print("Holy mackerel, we might as well start a dog hostel!")
elif len(dogs) >= 3:
print("Wow, we have a lot of dogs here!")
else:
print("Okay, this is a reasonable number of dogs.")
# -
# It is important to note that in situations like this, the tests are evaluated in order. In an if-elif-else chain, once a test passes, the rest of the conditions are ignored.
# +
###highlight=[2]
dogs = ['willie', 'hootz', 'peso', 'monty']
if len(dogs) >= 5:
print("Holy mackerel, we might as well start a dog hostel!")
elif len(dogs) >= 3:
print("Wow, we have a lot of dogs here!")
else:
print("Okay, this is a reasonable number of dogs.")
# -
# The first test failed, so Python evaluated the second test. That test passed, so the statement corresponding to `len(dogs) >= 3` is executed.
# +
###highlight=[2]
dogs = ['willie', 'hootz']
if len(dogs) >= 5:
print("Holy mackerel, we might as well start a dog hostel!")
elif len(dogs) >= 3:
print("Wow, we have a lot of dogs here!")
else:
print("Okay, this is a reasonable number of dogs.")
# -
# In this situation, the first two tests fail, so the statement in the else clause is executed. Note that this statement would be executed even if there are no dogs at all:
# +
###highlight=[2]
dogs = []
if len(dogs) >= 5:
print("Holy mackerel, we might as well start a dog hostel!")
elif len(dogs) >= 3:
print("Wow, we have a lot of dogs here!")
else:
print("Okay, this is a reasonable number of dogs.")
# -
# Note that you don't have to take any action at all when you start a series of if statements. You could simply do nothing in the situation that there are no dogs by replacing the `else` clause with another `elif` clause:
# +
###highlight=[8]
dogs = []
if len(dogs) >= 5:
print("Holy mackerel, we might as well start a dog hostel!")
elif len(dogs) >= 3:
print("Wow, we have a lot of dogs here!")
elif len(dogs) >= 1:
print("Okay, this is a reasonable number of dogs.")
# -
# In this case, we only print a message if there is at least one dog present. Of course, you could add a new `else` clause to respond to the situation in which there are no dogs at all:
# +
###highlight=[10,11]
dogs = []
if len(dogs) >= 5:
print("Holy mackerel, we might as well start a dog hostel!")
elif len(dogs) >= 3:
print("Wow, we have a lot of dogs here!")
elif len(dogs) >= 1:
print("Okay, this is a reasonable number of dogs.")
else:
print("I wish we had a dog here.")
# -
# As you can see, the if-elif-else chain lets you respond in very specific ways to any given situation.
# <a id="Exercises-elif"></a>
# Exercises
# ---
# #### Three is a Crowd
# - Make a list of names that includes at least four people.
# - Write an if test that prints a message about the room being crowded, if there are more than three people in your list.
# - Modify your list so that there are only two people in it. Use one of the methods for removing people from the list, don't just redefine the list.
# - Run your if test again. There should be no output this time, because there are fewer than three people in the list.
# - **Bonus:** Store your if test in a function called something like `crowd_test`.
#
# #### Three is a Crowd - Part 2
# - Save your program from *Three is a Crowd* under a new name.
# - Add an `else` statement to your if tests. If the `else` statement is run, have it print a message that the room is not very crowded.
#
# #### Six is a Mob
# - Save your program from *Three is a Crowd - Part 2* under a new name.
# - Add some names to your list, so that there are at least six people in the list.
# - Modify your tests so that
# - If there are more than 5 people, a message is printed about there being a mob in the room.
# - If there are 3-5 people, a message is printed about the room being crowded.
# - If there are 1 or 2 people, a message is printed about the room not being crowded.
# - If there are no people in the room, a message is printed about the room being empty.
# More than one passing test
# ===
# In all of the examples we have seen so far, only one test can pass. As soon as the first test passes, the rest of the tests are ignored. This is really good, because it allows our code to run more efficiently. Many times only one condition can be true, so testing every condition after one passes would be meaningless.
#
# There are situations in which you want to run a series of tests, where every single test runs. These are situations where any or all of the tests could pass, and you want to respond to each passing test. Consider the following example, where we want to greet each dog that is present:
# +
dogs = ['willie', 'hootz']
if 'willie' in dogs:
print("Hello, Willie!")
if 'hootz' in dogs:
print("Hello, Hootz!")
if 'peso' in dogs:
print("Hello, Peso!")
if 'monty' in dogs:
print("Hello, Monty!")
# -
# If we had done this using an if-elif-else chain, only the first dog that is present would be greeted:
# +
###highlight=[6,7,8,9,10,11]
dogs = ['willie', 'hootz']
if 'willie' in dogs:
print("Hello, Willie!")
elif 'hootz' in dogs:
print("Hello, Hootz!")
elif 'peso' in dogs:
print("Hello, Peso!")
elif 'monty' in dogs:
print("Hello, Monty!")
# -
# Of course, this could be written much more cleanly using lists and for loops. See if you can follow this code.
# +
dogs_we_know = ['willie', 'hootz', 'peso', 'monty', 'juno', 'turkey']
dogs_present = ['willie', 'hootz']
# Go through all the dogs that are present, and greet the dogs we know.
for dog in dogs_present:
if dog in dogs_we_know:
print("Hello, %s!" % dog.title())
# -
# This is the kind of code you should be aiming to write. It is fine to come up with code that is less efficient at first. When you notice yourself writing the same kind of code repeatedly in one program, look to see if you can use a loop or a function to make your code more efficient.
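# As one possible next step (a sketch, with a function name of our own choosing), you could wrap the loop in a function so it can be reused with any pair of lists:

```python
def greet_known_dogs(dogs_present, dogs_we_know):
    # Collect a greeting for each dog that is present and that we know.
    greetings = []
    for dog in dogs_present:
        if dog in dogs_we_know:
            greetings.append("Hello, %s!" % dog.title())
    for greeting in greetings:
        print(greeting)
    return greetings

greet_known_dogs(['willie', 'hootz'], ['willie', 'hootz', 'peso', 'monty'])
```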
# True and False values
# ===
# Every value can be evaluated as True or False. The general rule is that any non-zero or non-empty value will evaluate to True. If you are ever unsure, you can open a Python terminal and write two lines to find out if the value you are considering is True or False. Take a look at the following examples, keep them in mind, and test any value you are curious about. I am using a slightly longer test just to make sure something gets printed each time.
if 0:
print("This evaluates to True.")
else:
print("This evaluates to False.")
if 1:
print("This evaluates to True.")
else:
print("This evaluates to False.")
# Arbitrary non-zero numbers evaluate to True.
if 1253756:
print("This evaluates to True.")
else:
print("This evaluates to False.")
# Negative numbers are not zero, so they evaluate to True.
if -1:
print("This evaluates to True.")
else:
print("This evaluates to False.")
# An empty string evaluates to False.
if '':
print("This evaluates to True.")
else:
print("This evaluates to False.")
# Any other string, including a space, evaluates to True.
if ' ':
print("This evaluates to True.")
else:
print("This evaluates to False.")
# Any other string, including a space, evaluates to True.
if 'hello':
print("This evaluates to True.")
else:
print("This evaluates to False.")
# None is a special object in Python. It evaluates to False.
if None:
print("This evaluates to True.")
else:
print("This evaluates to False.")
# Overall Challenges
# ===
# #### Alien Points
# - Make a list of ten aliens, each of which is one color: 'red', 'green', or 'blue'.
# - You can shorten this to 'r', 'g', and 'b' if you want, but if you choose this option you have to include a comment explaining what r, g, and b stand for.
# - Red aliens are worth 5 points, green aliens are worth 10 points, and blue aliens are worth 20 points.
# - Use a for loop to determine the number of points a player would earn for destroying all of the aliens in your list.
# - [hint](#hint_alien_points)
# - - -
# [Previous: Introducing Functions](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/introducing_functions.ipynb) |
# [Home](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/index.ipynb) |
# [Next: While Loops and Input](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/while_input.ipynb)
# Hints
# ===
# These are placed at the bottom, so you can have a chance to solve exercises without seeing any hints.
#
# <a id="hint_alien_points"></a>
# #### Alien Points
# - After you define your list of aliens, set a variable called `current_score` or `current_points` equal to 0.
# - Inside your for loop, write a series of if tests to determine how many points to add to the current score.
# - To keep a running total, use the syntax `current_score = current_score + points`, where *points* is the number of points for the current alien.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import sys
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import tensorflow as tf
import seaborn as sns
import sklearn
from sklearn.model_selection import train_test_split
names = ['n_pregnant', 'glucose_concentration', 'blood_pressure (mm Hg)', 'skin_thickness (mm)', 'serum_insulin (mu U/ml)',
         'BMI', 'pedigree_function', 'age', 'class']
df = pd.read_csv('datasets_228_482_diabetes.csv')
df.head()
df.describe()
# determine the columns with missing values
df[df['BloodPressure'] == 0].shape
# preprocess the data
# remove the rows with missing values from relevant columns
columns = ['Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI']
for c in columns:
df[c].replace(0, np.NaN, inplace=True)
df.describe()
# drop the rows with NaN values
df.dropna(inplace=True)
df.describe()
# convert the data to numpy array
data = df.values
print(data.shape)
# extract the inputs and labels
x = data[:, 0:-1]
y = data[:, -1].astype(int)
print(x.shape)
print(y.shape)
# +
# normalize and standardize the data
from sklearn.preprocessing import StandardScaler
stand = StandardScaler().fit(x)
standardizedData = stand.transform(x)
data = pd.DataFrame(standardizedData)
data.describe()
# -
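# As a quick sanity check of what `StandardScaler` does (a sketch on a tiny made-up matrix, not the diabetes data): after standardization every column has mean ~0 and standard deviation ~1.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Tiny made-up matrix just to illustrate the transform.
toy = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
toy_scaled = StandardScaler().fit_transform(toy)
print(toy_scaled.mean(axis=0))  # approximately [0. 0.]
print(toy_scaled.std(axis=0))   # approximately [1. 1.]
```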
# ### Hyperparameter Grid search
# import the modules
from sklearn.model_selection import GridSearchCV, KFold
from tensorflow.keras.layers import Dense
from tensorflow.keras import Model, Sequential
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from tensorflow.keras.optimizers import Adam
# +
# define and run the grid search model
# set a seed
seed = 1
np.random.seed(seed)
# define the model
def MyModel(lr, n1, n2, activation, init):
model = Sequential()
model.add(Dense(n1, input_dim=8, kernel_initializer=init, activation=activation))
model.add(Dense(n2, kernel_initializer=init, activation=activation))
model.add(Dense(1, activation='sigmoid'))
optimizer = Adam(lr = lr)
loss = 'binary_crossentropy'
model.compile(loss=loss, optimizer=optimizer, metrics=['accuracy'])
return model
# create KerasClassifier model
model = KerasClassifier(build_fn=MyModel, verbose=0)
# set the grid search parameters
batch_size = [8, 16, 32]
epochs = [10, 50, 100]
lr = [0.001, 0.005, 0.01]
n1 = [4, 8, 16, 32]
n2 = [2, 4, 8, 16]
activation = ['relu', 'softmax', 'tanh', 'linear']
init = ['uniform', 'normal', 'zero']
# make a dictionary of grid search parameters
params = dict(batch_size=batch_size, epochs=epochs, activation=activation, init=init, lr=lr, n1=n1, n2=n2)
# make and fit the grid search model
gridModel = GridSearchCV(estimator=model, param_grid=params, cv=KFold(shuffle=True, random_state=seed), refit=True, verbose=10, n_jobs=8)
gridSearchResults = gridModel.fit(standardizedData, y)
# print some of the results
print('Best: {} Using: {}'.format(gridSearchResults.best_score_, gridSearchResults.best_params_))
# +
means = gridSearchResults.cv_results_['mean_test_score']
stds = gridSearchResults.cv_results_['std_test_score']
parameters = gridSearchResults.cv_results_['params']
results = list(zip(means, stds, parameters))
results = sorted(results, key = lambda x: x[0], reverse=True)
# print top 10 parameter combinations
for i in range(0, 10):
print(results[i])
# -
# make predictions using best estimator
predictions = gridModel.predict(standardizedData) > 0.5
# +
# print accuracy metrics
from sklearn.metrics import classification_report, accuracy_score
print(accuracy_score(y, predictions))
print(classification_report(y, predictions))
# -
# ### Model performs even better on the entire data set
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Programming Bootcamp 2018
# # Lesson 5 Exercises
# ---
# **Earning points (optional)**
#
# - Enter your name below.
# - Post your notebook privately to instructors on piazza **before 9:00 pm on 8/21**.
# **Name**:
# ---
# ## 1. Guess the output: dictionary practice (1pt)
#
# For the following blocks of code, first try to guess what the output will be, and then run the code yourself. Points will be given for filling in the guesses; guessing wrong won't be penalized.
# run this cell first!
fruits = {"apple":"red", "banana":"yellow", "grape":"purple"}
print fruits["banana"]
# Your guess: yellow
#
query = "apple"
print fruits[query]
# Your guess: red
#
print fruits[0]
# Your guess: Error
#
print fruits.keys()
# Your guess: ["grape", "apple", "banana"]
#
print fruits.values()
# Your guess: ["purple", "red", "yellow"]
#
for key in fruits:
print fruits[key]
# Your guess:
# purple
# red
# yellow
#
del fruits["banana"]
print fruits
# Your guess: ["grape":"purple", "apple":"red"]
#
print fruits["pear"]
# Your guess: Error
#
fruits["pear"] = "green"
print fruits["pear"]
# Your guess: green
#
fruits["apple"] = fruits["apple"] + " or green"
print fruits["apple"]
# Your guess: red or green
#
# ---
# ## 2. On your own: using dictionaries (6pts)
#
# Using the info in the table below, write code to accomplish the following tasks.
#
# | Name | Favorite Food |
# |:---------:|:-------------:|
# | Wilfred | Steak |
# | Manfred | French fries |
# | Wadsworth | Spaghetti |
# | Jeeves | Ice cream |
#
# **(A)** Create a dictionary based on the data above, where each person's name is a key, and their favorite foods are the values.
favFood = {"Wilfred":"Steak", "Manfred":"French fries", "Wadsworth":"Spaghetti", "Jeeves":"Ice cream"}
# **(B)** Using a `for` loop, go through the dictionary you created above and print each name and food combination in the format:
#
# <NAME>'s favorite food is <FOOD>
for (name, food) in favFood.items():
print name + "'s favorite food is", food
# **(C)** (1) Change the dictionary so that Wilfred's favorite food is Pizza. (2) Add a new entry for Mitsworth, whose favorite food is Tuna.
#
# Do not recreate the whole dictionary while doing these things. Edit the dictionary you created in (A) using the syntax described in the lecture.
# +
favFood["Wilfred"] = "Pizza"
favFood["Mitsworth"] = "Tuna"
for (name, food) in favFood.items():
print name + "'s favorite food is", food
# -
# **(D)** Prompt the user to input a name. Check if the name they entered is a valid key in the dictionary using an `if` statement. If the name is in the dictionary, print out the corresponding favorite food. If not, print a message saying "That name is not in our database".
# +
nameLookup = str(raw_input("Name? "))
if nameLookup in favFood.keys():
print nameLookup + "'s favorite food is", favFood[nameLookup]
else:
print "That name is not in our database"
# -
# **(E)** Print just the names in the dictionary in alphabetical order. Use the sorting example from the slides.
for name in sorted(favFood.keys()):
print name
# **(F)** Print just the names in sorted order based on their favorite food. Use the value-sorting example from the slides.
for name in sorted(favFood, key=favFood.get):
print name
# ---
# ## 3. File writing (3pts)
# **(A)** Write code that prints "Hello, world" to a file called `hello.txt`
hello = open("hello.txt", "w")
hello.write("Hello, world\n")
hello.close()
# **(B)** Write code that prints the following text to a file called `meow.txt`. It must be formatted exactly as it here (you will need to use \n and \t):
# ```
# Dear Mitsworth,
#
# Meow, meow meow meow.
#
# Sincerely,
# A friend
# ```
meow = open("meow.txt", "w")
meow.write("Dear Mitsworth,\n\n\tMeow, meow, meow, meow.\n\nSincerely,\nA friend")
meow.close()
# **(C)** Write code that reads in the gene IDs from `genes.txt` and prints the **unique** gene IDs to a **new file** called `genes_unique.txt`. (You can re-use your code or the answer sheet from lab4 for getting the unique IDs.)
# +
fileName = "genes.txt"
genes = open(fileName, 'r')
outName = "genes_unique.txt"
genes2 = open(outName, "w")
uniGenes = []
for lines in genes:
if lines not in uniGenes:
genes2.write(lines)
uniGenes.append(lines)
genes.close()
genes2.close()
# -
# ---
# ## 4. The "many counters" problem (4pts)
# **(A)** Write code that reads a file of sequences and tallies how many sequences there are of each length. Use `sequences3.txt` as input.
#
# *Hint: you can use a dictionary to keep track of all the tallies. For example:*
# +
# hint code:
tallyDict = {}
seq = "ATGCTGATCGATATA"
length = len(seq)
if length not in tallyDict:
tallyDict[length] = 1 #initialize to 1 if this is the first occurrence of the length...
else:
tallyDict[length] = tallyDict[length] + 1 #...otherwise just increment the count.
# +
inFile = "sequences3.txt"
seq = open(inFile, "r")
tallyDict = {}
for line in seq:
length = len(line)
if length not in tallyDict:
tallyDict[length] = 1
else:
tallyDict[length] = tallyDict[length] + 1
# -
# **(B)** Using the tally dictionary you created above, figure out which sequence length was the most common, and print it to the screen.
# +
maxValue = max(tallyDict.values())
for length, tally in tallyDict.items():
if tally == maxValue:
print "Most common sequence length:",length
# -
# ### Comment: 52 is not correct! (But you got the right idea. You forgot to strip the endline character. Answer is 51!)
# ---
# ## 5. Codon table (6pts)
#
# For this question, use `codon_table.txt`, which contains a list of all possible codons and their corresponding amino acids. We will be using this info to create a dictionary, which will allow us to translate a nucleotide sequence into amino acids. Each part of this question builds off the previous parts.
# **(A)** Thinkin' question (short answer, not code): If we want to create a codon dictionary and use it to translate nucleotide sequences, would it be better to use the codons or amino acids as keys?
# Your answer: Codons as keys
# **(B)** Read in `codon_table.txt` (note that it has a header line) and use it to create a codon dictionary. Then use `raw_input()` to prompt the user to enter a single codon (e.g. ATG) and print the amino acid corresponding to that codon to the screen.
# +
inFile = "codon_table.txt"
codonTable = open(inFile, "r")
first_line = codonTable.readline()
codonKey = {}
for lines in codonTable:
lines = lines.rstrip("\n")
lines = lines.split("\t")
codonKey[lines[0]] = lines[1]
codonInput = raw_input("Enter a codon:")
print "Amino acid: ", codonKey[codonInput]
# -
# **(C)** Now we will adapt the code in (B) to translate a longer sequence. Instead of prompting the user for a single codon, allow them to enter a longer sequence. First, check that the sequence they entered has a length that is a multiple of 3 (Hint: use the mod operator, %), and print an error message if it is not. If it is valid, then go on to translate every three nucleotides to an amino acid. Print the final amino acid sequence to the screen.
# +
inFile = "codon_table.txt"
codonTable = open(inFile, "r")
first_line = codonTable.readline()
codonKey = {}
for lines in codonTable:
lines = lines.rstrip("\n")
lines = lines.split("\t")
codonKey[lines[0]] = lines[1]
#for i, j in codonKey.items():
# print i, j
codonInput = raw_input("Enter a nucelotide sequence:")
codonTranslation = []
if len(codonInput)%3 != 0:
print "Not a multiple of 3 nucleotides"
else:
for i in range(0, len(codonInput), 3):
codonTranslation.append(codonKey[codonInput[i:i+3]])
AASeq = ""
for i in codonTranslation:
AASeq = AASeq + i
print "Amino Acid Sequence: ", AASeq
# -
# **(D)** Now, instead of taking user input, you will apply your translator to a set of sequences stored in a file. Read in the sequences from `sequences3.txt` (assume each line is a separate sequence), translate it to amino acids, and print it to a new file called `proteins.txt`.
# +
inFile = "codon_table.txt"
codonTable = open(inFile, "r")
first_line = codonTable.readline()
inFile = "sequences3.txt"
sequences3 = open(inFile, "r")
proteins = open("proteins.txt", "w")
#for lines in sequences3:
# print lines
codonKey = {}
for lines in codonTable:
lines = lines.rstrip("\n")
lines = lines.split("\t")
codonKey[lines[0]] = lines[1]
for lines in sequences3:
lines = lines.rstrip("\n")
codonTranslation = []
if len(lines)%3 != 0:
print "Not a multiple of 3 nucleotides"
else:
AASeq = ""
for i in range(0, len(lines), 3):
codonTranslation.append(codonKey[lines[i:i+3]])
for i in codonTranslation:
AASeq = AASeq + i
proteins.write(AASeq)
proteins.close()
codonTable.close()
sequences3.close()
# -
# ### Comment: You should add an endline character after each sequence ( each line). proteins.write(AASeq + "\n")
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Speech analysis and resynthesis with PySPTK (all-pole lattice filter using PARCOR coefficients)
from pysptk.synthesis import AllPoleLatticeDF, Synthesizer
from scipy.io import wavfile
import librosa
import numpy as np
import pysptk
import matplotlib.pyplot as plt
from IPython.display import Audio
# Speech analysis conditions
FRAME_LENGTH = 1024 # frame length (points)
HOP_LENGTH = 80 # frame shift (points)
MIN_F0 = 60 # minimum fundamental frequency (Hz)
MAX_F0 = 240 # maximum fundamental frequency (Hz)
ORDER = 20 # LPC analysis order
IN_WAVE_FILE = "in.wav" # input speech
OUT_WAVE_FILE = "out.wav" # analyzed and resynthesized speech
# Load the speech waveform
fs, x = wavfile.read(IN_WAVE_FILE)
x = x.astype(np.float64)
# Frame the waveform and apply a window
frames = librosa.util.frame(x, frame_length=FRAME_LENGTH,
                            hop_length=HOP_LENGTH).astype(np.float64).T
frames *= pysptk.blackman(FRAME_LENGTH) # windowing (Blackman window)
# Pitch extraction
pitch = pysptk.swipe(x, fs=fs, hopsize=HOP_LENGTH,
                     min=MIN_F0, max=MAX_F0, otype="pitch")
# Generate the excitation (glottal source) signal
source_excitation = pysptk.excite(pitch, HOP_LENGTH)
# Extract linear prediction coefficients by LPC analysis
lpc = pysptk.lpc(frames, ORDER)
lpc[:, 0] = np.log(lpc[:, 0]) # log gain for the synthesis filter
# Convert the LPC coefficients to PARCOR coefficients
parcor = pysptk.lpc2par(lpc)
# Create the all-pole lattice filter
synthesizer = Synthesizer(AllPoleLatticeDF(order=ORDER), HOP_LENGTH)
# Drive the filter with the excitation signal to synthesize speech
y = synthesizer.synthesis(source_excitation, parcor)
# Write out the synthesized speech
y = y.astype(np.int16)
wavfile.write(OUT_WAVE_FILE, fs, y)
# +
# Plot the original speech
n_samples = len(x)
time = np.arange(n_samples) / fs # time axis
# Create the figure (width 10, height 4)
plt.figure(figsize=(10, 4))
# x-axis label
plt.xlabel("Time (sec)")
# y-axis label
plt.ylabel("Amplitude")
# figure title
plt.title("Waveform")
# minimize margins
plt.tight_layout()
# plot the waveform
plt.plot(time, x)
# -
x = x.astype(np.int16)
Audio(x, rate=fs) # original speech
# +
# Plot the analyzed and resynthesized speech
n_samples = len(y)
time = np.arange(n_samples) / fs # time axis
# Create the figure (width 10, height 4)
plt.figure(figsize=(10, 4))
# x-axis label
plt.xlabel("Time (sec)")
# y-axis label
plt.ylabel("Amplitude")
# figure title
plt.title("Waveform")
# minimize margins
plt.tight_layout()
# plot the waveform
plt.plot(time, y)
# -
Audio(y, rate=fs) # resynthesized speech
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="fMKxA-slER58"
# # Hyperparameter Tuning with Image Classification
#
# [](https://colab.research.google.com/github/mohanen/noob_vision/blob/master/hyperparameter_tuning_tf.ipynb)
#
# > We already know how to classify images from [cnn classification](https://colab.research.google.com/github/mohanen/noob_vision/blob/master/cnn_classification_tf.ipynb)
#
# ## Hyperparameter Tuning
#
# > Choosing the set of optimal parameters from given choices
#
# We can search for the optimal value of almost any design choice, such as:
# 1. No. of layers
# 2. No. of nodes in each layer
# 3. optimizer
# 4. learning rate
# 5. loss function
# 6. and so on
#
# But the search takes more time as the number of combinations increases. If you have a powerful processing environment, you can use this approach to tune a larger set of parameters.
#
# ### Let's use the same steps as [cnn classification](https://colab.research.google.com/github/mohanen/noob_vision/blob/master/cnn_classification_tf.ipynb) with some overfitting-prevention mechanisms
#
# 1. Download and arrange dataset in file system
# 2. Load the training and validation data
# 3. Create a CNN Model with HyperParameter Tuning
# 4. Train the model
# 5. Test the model
#
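# As a quick, hypothetical cost estimate (the option lists below are illustrative, not the
# ones used later in this notebook): the size of an exhaustive grid is the product of the
# option counts, so every extra choice multiplies the number of full trainings.

```python
from itertools import product

# hypothetical search space: candidate values for each tunable knob
search_space = {
    "layers": [3, 4],
    "filters": [64, 96, 128],
    "hidden_size": [128, 160, 192],
    "dropout": [0.2, 0.4],
}

combinations = list(product(*search_space.values()))
print(len(combinations))  # 2 * 3 * 3 * 2 = 36 full trainings for an exhaustive grid
```

This multiplicative growth is why smarter strategies such as Hyperband (used below) matter: they spend few epochs on most combinations and full budgets only on the promising ones.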
# + [markdown] id="2O7rEAE8ffc-"
# ## Step 1: Download and arrange dataset in file systems
# + colab={"base_uri": "https://localhost:8080/"} id="Z-gNjcmLCHWR" outputId="e5afe121-bd8d-4ed1-9e6a-d5ed20e06e44"
import tensorflow as tf
import numpy as np
import pathlib # For folder/file related operations
import matplotlib.pyplot as plt # For plotting graphs and showing images
# Reduce precision to speed things up (~60% on TPU and ~3x on modern GPUs) - optional
tf.keras.mixed_precision.set_global_policy('mixed_float16')
# Download the dataset
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
# No separate library is needed to download and extract the archive; TensorFlow has a built-in utility, so let's use that
data_dir = tf.keras.utils.get_file(origin=dataset_url, fname="the_dataset", extract=True, cache_dir="./")
# Pathlib allows you to do some path handling operations
data_dir = pathlib.Path(data_dir)
data_dir = data_dir.parent.joinpath('flower_photos')
# list the contents of data dir
# "!" indicates shell commands
# !ls $data_dir
# + [markdown] id="L0TDkP0ofnkA"
# ## Step 2: Load the training and validation data
#
# + colab={"base_uri": "https://localhost:8080/"} id="cIguXr6ufo3_" outputId="4a2e8493-6d3d-4284-acac-c12b84e5c023"
BATCH_SIZE = 32  # Instead of feeding the whole dataset in one step, it is split into batches for faster training and better convergence
IMG_HEIGHT = 128
IMG_WIDTH = 128
IMG_CHANNELS = 3  # RGB has 3 channels, each representing the intensity of Red, Green, Blue
# To load the image from directory we can use the tensorflow api itself
# training and validation split can also be configured with the same api
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir, # path to dataset
validation_split=0.2, # percent of data reserved for validation
subset="training", # training or validation subset
    seed=123, # seed helps in shuffling images the same way for both training and validation
image_size=(IMG_HEIGHT, IMG_WIDTH),
batch_size=BATCH_SIZE
)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="validation",
seed=123,
image_size=(IMG_HEIGHT, IMG_WIDTH),
batch_size=BATCH_SIZE)
# store the class names
class_names = train_ds.class_names
# Improves tensorflow performance drastically
train_ds = train_ds.cache()
train_ds = train_ds.prefetch(tf.data.experimental.AUTOTUNE)
val_ds = val_ds.cache()
val_ds = val_ds.prefetch(tf.data.experimental.AUTOTUNE)
print (class_names)
# + [markdown] id="Tiu90x9XkvXJ"
# ## Step 3: Hyperparameter tuning - Find optimal parameters for model
#
# 1. Since tuning involves training the model to find the best parameters, we can keep the best model it finds instead of training again.
#
# 2. Alternatively, if resources or time are not enough to tune on the full dataset, you can tune on a random subset to find the optimal parameters, then train on the full dataset to get the best model, which then has both optimal parameters and good starting-point values
# + colab={"base_uri": "https://localhost:8080/"} id="1WZWNWdy3yMi" outputId="23a7326e-b728-4530-9521-32076bc3546b"
# install keras tuner if haven't already
# !pip install -q -U keras-tuner
# + colab={"base_uri": "https://localhost:8080/"} id="VgSZHtE-wdJX" outputId="eacab6c6-ab2c-4643-9b5d-72970529794b"
from tensorflow.keras import layers, models
import keras_tuner as kt  # installed as "keras-tuner"; older releases used "import kerastuner"
# The more options there are, the longer it takes to find the best combination of parameters
# So if you already know some good limits, it will help you save time
def model_builder(hp):
inputs = tf.keras.Input(shape=(IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS))
x = inputs
for i in range(hp.Int('layers', 3, 4, step=1)):
        filters = hp.Int('filters_' + str(i), 64, 128, step=32)
x = layers.Conv2D(filters=filters,
kernel_size=3,
padding='same')(x)
x = layers.BatchNormalization()(x)
x = layers.ReLU()(x)
x = layers.MaxPool2D()(x)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(
hp.Int('hidden_size', 128, 256, step=32),
activation='relu'
)(x)
x = layers.Dropout(
hp.Float('dropout', 0.2, 0.4, step=0.2, default=0.4)
)(x)
output = layers.Dense(len(class_names), activation='softmax')(x)
model = tf.keras.Model(inputs, output)
model.compile(
optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy']
)
return model
tuner = kt.Hyperband(model_builder,
objective='val_accuracy',
max_epochs=30,
hyperband_iterations=2)
tuner.search(
train_ds,
validation_data=val_ds,
epochs=30,
callbacks=[tf.keras.callbacks.EarlyStopping(patience=5)]
)
model = tuner.get_best_models(1)[0]
best_hyperparameters = tuner.get_best_hyperparameters(1)[0]
print(best_hyperparameters.values)  # dict of the winning hyperparameter values
model.summary()
# + [markdown] id="9xD6dcm7hBn2"
# ### Plot Confusion matrix for validation data
#
# It gives a visual representation of how well the model is performing
# + colab={"base_uri": "https://localhost:8080/", "height": 519} id="YF6oPXt8ma78" outputId="ac6a587c-ac16-4a2a-ce7a-ca847cee3f0d"
#Defining function for confusion matrix plot - https://analyticsindiamag.com/implementing-efficientnet-a-powerful-convolutional-neural-network/
from sklearn.metrics import confusion_matrix
from google.colab.patches import cv2_imshow
def plot_confusion_matrix(y_true, y_pred, classes,
normalize=False,
title=None,
cmap=plt.cm.Blues):
if not title:
if normalize:
title = 'Normalized confusion matrix'
else:
title = 'Confusion matrix, without normalization'
# Compute confusion matrix
cm = confusion_matrix(y_true, y_pred)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
#Print Confusion matrix
fig, ax = plt.subplots(figsize=(7,7))
im = ax.imshow(cm, interpolation='nearest', cmap=cmap)
ax.figure.colorbar(im, ax=ax)
# We want to show all ticks...
ax.set(xticks=np.arange(cm.shape[1]),
yticks=np.arange(cm.shape[0]),
xticklabels=classes, yticklabels=classes,
title=title,
ylabel='True label',
xlabel='Predicted label')
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
rotation_mode="anchor")
# Loop over data dimensions and create text annotations.
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i in range(cm.shape[0]):
for j in range(cm.shape[1]):
ax.text(j, i, format(cm[i, j], fmt),
ha="center", va="center",
color="white" if cm[i, j] > thresh else "black")
fig.tight_layout()
return ax
np.set_printoptions(precision=2)
val_ds_u = val_ds.unbatch()
val_ds_u = list(val_ds_u.as_numpy_iterator())
img_arr = np.array([image for image, label in val_ds_u])
lbl_arr = np.array([label for image, label in val_ds_u])
val_p = model.predict(img_arr)
val_r = [np.argmax(y) for y in val_p]
plot_confusion_matrix(lbl_arr, val_r, class_names)  # (y_true, y_pred, classes)
# + [markdown] id="5WNCklHdq1iN"
# ## Step 5: Test the model
#
# + colab={"base_uri": "https://localhost:8080/"} id="O-Z5PYaZq6rI" outputId="5678f08c-fcef-47cd-9c47-041fbbca8075"
# Place an url of any new images and try the results
image_url = "https://upload.wikimedia.org/wikipedia/commons/c/ce/Daisy_G%C3%A4nsebl%C3%BCmchen_Bellis_perennis_01.jpg"
# tf caches downloaded files, so make sure we remove any previously stored image
# !rm datasets/Image.png
# Download the image to local storage
image_path = tf.keras.utils.get_file('Image.png', origin=image_url, cache_dir="./")
# Load the image from local storage using tensorflow api's
img = tf.keras.preprocessing.image.load_img(
image_path,
target_size=(IMG_WIDTH, IMG_HEIGHT)
)
img_array = tf.keras.preprocessing.image.img_to_array(img)
img_array = tf.expand_dims(img_array, 0)
predictions = model.predict(img_array)
# The model's final layer already applies softmax, so predictions are probabilities;
# applying tf.nn.softmax again would flatten the confidence values
score = predictions[0]
print("This is definitely a {} - {}% confident".format(class_names[np.argmax(score)], round(100 * np.max(score))))
|
hyperparameter_tuning_tf.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Input and Output
#
# This notebook is the reference code for data input and output. Pandas can read a variety of file types using its pd.read_ methods. Let's take a look at the most common data types:
import numpy as np
import pandas as pd
# ## CSV
#
# ### CSV Input
df = pd.read_csv('example')
df
# ### CSV Output
df.to_csv('example',index=False)
# ## Excel
# Pandas can read and write Excel files. Keep in mind that this only imports data, not formulas or images; workbooks with images or macros may cause the read_excel method to crash.
# ### Excel Input
pd.read_excel('Excel_Sample.xlsx', sheet_name='Sheet1')
# ### Excel Output
df.to_excel('Excel_Sample.xlsx',sheet_name='Sheet1')
# ## HTML
#
# You may need to install html5lib, lxml, and BeautifulSoup4. In your terminal/command prompt run:
#
# conda install lxml
# conda install html5lib
# conda install BeautifulSoup4
#
# Then restart Jupyter Notebook.
# (or use pip install if you aren't using the Anaconda Distribution)
#
# Pandas can read tables from HTML. For example:
# ### HTML Input
#
# Pandas read_html function will read tables off of a webpage and return a list of DataFrame objects:
df = pd.read_html('http://www.fdic.gov/bank/individual/failed/banklist.html')
df[0]
# ____
# _____
# _____
# # SQL (Optional)
# The pandas.io.sql module provides a collection of query wrappers to both facilitate data retrieval and to reduce dependency on DB-specific API. Database abstraction is provided by SQLAlchemy if installed. In addition you will need a driver library for your database. Examples of such drivers are psycopg2 for PostgreSQL or pymysql for MySQL. For SQLite this is included in Python’s standard library by default. You can find an overview of supported drivers for each SQL dialect in the SQLAlchemy docs.
#
#
# If SQLAlchemy is not installed, a fallback is only provided for sqlite (and for mysql for backwards compatibility, but this is deprecated and will be removed in a future version). This mode requires a Python database adapter which respects the Python DB-API.
#
# See also some cookbook examples for some advanced strategies.
#
# The key functions are:
#
# * read_sql_table(table_name, con[, schema, ...])
# * Read SQL database table into a DataFrame.
# * read_sql_query(sql, con[, index_col, ...])
# * Read SQL query into a DataFrame.
# * read_sql(sql, con[, index_col, ...])
# * Read SQL query or database table into a DataFrame.
# * DataFrame.to_sql(name, con[, flavor, ...])
# * Write records stored in a DataFrame to a SQL database.
from sqlalchemy import create_engine
engine = create_engine('sqlite:///:memory:')
df.to_sql('data', engine)
sql_df = pd.read_sql('data',con=engine)
sql_df
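# The `read_sql_query` wrapper listed above runs a plain SQL query over a connection; a
# minimal sketch with Python's built-in `sqlite3` (hypothetical table and values) shows the
# round trip it wraps:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (a INTEGER, b TEXT)")
con.executemany("INSERT INTO data VALUES (?, ?)", [(1, "x"), (2, "y")])

# pd.read_sql_query("SELECT a, b FROM data WHERE a > 1", con) would return this as a DataFrame
rows = con.execute("SELECT a, b FROM data WHERE a > 1").fetchall()
print(rows)  # [(2, 'y')]
```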
# # Great Job!
|
Data Input and Output.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Evaluate, optimize, and fit a classifier <img align="right" src="../../Supplementary_data/dea_logo.jpg">
#
# * [**Sign up to the DEA Sandbox**](https://docs.dea.ga.gov.au/setup/sandbox.html) to run this notebook interactively from a browser
# * **Compatibility:** Notebook currently compatible with the `DEA Sandbox` environment
#
# ## Background
#
# Now that we've extracted training data from the ODC, and inspected it to ensure the features we selected are appropriate and useful, we can train a machine learning model. The first step is to decide which machine learning model to use. Deciding which one to pick depends on the classification task at hand. The table below provides a useful summary of the pros and cons of different models (all of which are available through [scikit-Learn](https://scikit-learn.org/stable/)). This scikit-learn [cheat sheet](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html) may also help.
#
# _Table 1: Some of the pros and cons of different classifiers available through scikit-learn_
#
# <img align="center" src="../../Supplementary_data/Scalable_machine_learning/classifier_pro_cons.png" width="700">
#
# The approach to evaluating, optimizing, and training the supervised machine learning model demonstrated in this notebook has been developed to suit the default training dataset provided. The training dataset is small, contains geospatial data, and contains only two classes (crop and non-crop).
#
# * Because the dataset is relatively small (`n=430`, as shown in the [Extract_training_data](1_Extract_training_data.ipynb) notebook), splitting the data into a training and testing set, and only training the model on the smaller training set would likely substantially degrade the quality of the model. Thus we will fit the final model on _all_ the training data.
# * Because we are fitting the model on all the data, we won't have a testing set left over to estimate the model's prediction accuracy. We therefore rely on a method called **nested k-fold cross-validation** to estimate the prediction ability of our model. This method is described further in the markdown before the code.
# * And because we are generating a binary prediction (crop/non-crop), the metrics used to evaluate the classifier are those which are well suited to binary classifications.
#
# While the approach described above works well for the default training data provided, it may not suit your own classification problem. It is advisable to research the different methods for evaluating and training a model to determine which approach is best for you.
#
# ## Description
#
# This notebook runs through evaluating, optimizing, and fitting a machine learning classifier (in the default example, a Random Forest model is used). Under each of the sub-headings you will find more information on how and why each method is used. The steps are as follows:
#
# 1. Demonstrate how to group the training data into spatial clusters to assist with Spatial K-fold Cross Validation (SKCV)
# 2. Calculate an unbiased performance estimate via nested cross-validation
# 3. Optimize the hyperparameters of the model
# 4. Fit a model to all the training data using the best hyperparameters identified in the previous step
# 5. Save the model to disk for use in the subsequent notebook, `4_Classify_satellite_data.ipynb`
# ***
# ## Getting started
#
# To run this analysis, run all the cells in the notebook, starting with the "Load packages" cell.
# ## Load packages
# +
# -- scikit-learn classifiers, uncomment the one of interest----
# from sklearn.svm import SVC
# from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
# from sklearn.naive_bayes import GaussianNB
# from sklearn.linear_model import LogisticRegression
# from sklearn.neighbors import KNeighborsClassifier
import os
import sys
import joblib
import numpy as np
import pandas as pd
from joblib import dump
import subprocess as sp
import dask.array as da
from pprint import pprint
import matplotlib.pyplot as plt
from odc.io.cgroups import get_cpu_quota
from sklearn.metrics import roc_curve, auc, balanced_accuracy_score, f1_score
from sklearn.model_selection import GridSearchCV, ShuffleSplit, KFold
# -
# ## Analysis Parameters
#
# * `training_data`: Name and location of the training data `.txt` file output from running `1_Extract_training_data.ipynb`
# * `Classifier`: This parameter refers to the scikit-learn classification model to use, first uncomment the classifier of interest in the `Load Packages` section and then enter the function name into this parameter `e.g. Classifier = SVC`
# * `metric` : A single string that denotes the scorer used to find the best parameters for refitting the estimator to evaluate the predictions on the test set. See the scoring parameter page [here](https://scikit-learn.org/stable/modules/model_evaluation.html#scoring-parameter) for a pre-defined list of options. e.g. `metric='balanced_accuracy'`
# +
training_data = "results/test_training_data.txt"
Classifier = RandomForestClassifier
metric = 'balanced_accuracy'
# -
# ### K-Fold Cross Validation Analysis Parameters
#
#
# * `outer_cv_splits` : The number of cross validation splits to use for the outer loop of the nested CV. These splits are used to estimate the accuracy of the classifier. A good default number is 5-10
# * `inner_cv_splits` : The number of cross validation splits to use for the inner loop of the nested CV - the inner loop splits are used for optimizing the hyperparameters. A good default number is 5.
# * `test_size` : This will determine what fraction of the dataset will be set aside as the testing dataset. There is a trade-off here between having a larger test set that will help us better determine the quality of our classifier, and leaving enough data to train the classifier. A good default is to set 10-20% of your dataset aside for testing purposes.
#
# +
inner_cv_splits = 5
outer_cv_splits = 5
test_size = 0.20
# -
# ### Find the number of cpus
ncpus = round(get_cpu_quota())
print('ncpus = ' + str(ncpus))
# ## Import training data
# +
# load the data
model_input = np.loadtxt(training_data)
# load the column_names
with open(training_data, 'r') as file:
header = file.readline()
column_names = header.split()[1:]
# Extract relevant indices from training data
model_col_indices = [column_names.index(var_name) for var_name in column_names[1:]]
#convert variable names into sci-kit learn nomenclature
X = model_input[:, model_col_indices]
y = model_input[:, 0]
# -
# ## Calculate an unbiased performance estimate via nested cross-validation
#
# K-fold [cross-validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics)) is a statistical method used to estimate the performance of machine learning models when making predictions on data not used during training. It is a popular method because it is conceptually straightforward and because it generally results in a less biased or less optimistic estimate of the model skill than other methods, such as a simple train/test split.
#
# This procedure can be used both when optimizing the hyperparameters of a model on a dataset, and when comparing and selecting a model for the dataset. However, when the same cross-validation procedure and dataset are used to both tune and select a model, it is likely to lead to an optimistically biased evaluation of the model performance.
#
# One approach to overcoming this bias is to nest the hyperparameter optimization procedure under the model selection procedure. This is called **nested cross-validation**. The paper [here](https://jmlr.csail.mit.edu/papers/v11/cawley10a.html) provides more context to this issue. The image below depicts how the nested cross-validation works.
#
# <img align="center" src="../../Supplementary_data/Scalable_machine_learning/nested_CV.png" width="500">
#
# The result of our nested cross-validation will be a set of accuracy scores that show how well our classifier is doing at recognising unseen data points. The default example is set up to show the `balanced_accuracy` score, the `f1` score, and the `Receiver-Operating Curve, Area Under the Curve (ROC-AUC)`. This latter metric is a robust measure of a classifier's prediction ability. [This article](https://towardsdatascience.com/understanding-auc-roc-curve-68b2303cc9c5) has a good explanation on ROC-AUC, which is a common machine learning metric.
#
# All measures return a value between 0 and 1, with a value of 1 indicating a perfect score.
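# As a hedged numeric illustration (toy labels, not this project's data): balanced accuracy
# averages the per-class recalls, so it penalizes a classifier that neglects the minority
# class even when plain accuracy looks high.

```python
import numpy as np

y_true = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])  # imbalanced: 8 positives, 2 negatives
y_pred = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 1])  # one negative sample misclassified

recall_pos = np.mean(y_pred[y_true == 1] == 1)  # 8/8 = 1.0
recall_neg = np.mean(y_pred[y_true == 0] == 0)  # 1/2 = 0.5
balanced_acc = (recall_pos + recall_neg) / 2    # 0.75
plain_acc = np.mean(y_pred == y_true)           # 9/10 = 0.9

print(balanced_acc, plain_acc)
```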
#
# To conduct the nested cross-validation, we first need to define a grid of parameters to be used in the optimization:
# * `param_grid`: a dictionary of model specific parameters to search through during hyperparameter optimization.
#
# > **Note**: the parameters in the `param_grid` object depend on the classifier being used. The default example is set up for a Random Forest classifier, to adjust the parameters to suit a different classifier, look up the important parameters under the relevant [sklearn documentation](https://scikit-learn.org/stable/supervised_learning.html).
# Create the parameter grid based on the results of random search
param_grid = {
'class_weight': ['balanced', None],
'max_features': ['auto', 'log2', None],
'n_estimators': [100, 200, 300],
'criterion': ['gini', 'entropy']
}
# +
outer_cv = KFold(n_splits=outer_cv_splits, shuffle=True,
random_state=0)
# lists to store results of CV testing
acc = []
f1 = []
roc_auc = []
i = 1
for train_index, test_index in outer_cv.split(X, y):
    print(f"Working on {i}/{outer_cv_splits} outer cv split", end='\r')
model = Classifier(random_state=1)
# index training, testing, and coordinate data
X_tr, X_tt = X[train_index, :], X[test_index, :]
y_tr, y_tt = y[train_index], y[test_index]
# inner split on data within outer split
inner_cv = KFold(n_splits=inner_cv_splits,
shuffle=True,
random_state=0)
clf = GridSearchCV(
estimator=model,
param_grid=param_grid,
scoring=metric,
n_jobs=ncpus,
refit=True,
cv=inner_cv.split(X_tr, y_tr),
)
clf.fit(X_tr, y_tr)
# predict using the best model
best_model = clf.best_estimator_
pred = best_model.predict(X_tt)
# evaluate model w/ multiple metrics
# ROC AUC
probs = best_model.predict_proba(X_tt)
probs = probs[:, 1]
fpr, tpr, thresholds = roc_curve(y_tt, probs)
auc_ = auc(fpr, tpr)
roc_auc.append(auc_)
# Overall accuracy
ac = balanced_accuracy_score(y_tt, pred)
acc.append(ac)
# F1 scores
f1_ = f1_score(y_tt, pred)
f1.append(f1_)
i += 1
# -
# #### Print the results of our model evaluation
print("=== Nested K-Fold Cross-Validation Scores ===")
print("Mean balanced accuracy: "+ str(round(np.mean(acc), 2)))
print("Std balanced accuracy: "+ str(round(np.std(acc), 2)))
print('\n')
print("Mean F1: "+ str(round(np.mean(f1), 2)))
print("Std F1: "+ str(round(np.std(f1), 2)))
print('\n')
print("Mean roc_auc: "+ str(round(np.mean(roc_auc), 3)))
print("Std roc_auc: "+ str(round(np.std(roc_auc), 2)))
print('=============================================')
# These scores represent a robust estimate of the accuracy of our classifier. However, because we are using only a subset of data to fit and optimize the models, and the total amount of training data we have is small, it is reasonable to expect these scores are a modest under-estimate of the final model's accuracy.
#
# Also, it's possible the _map_ accuracy will differ from the accuracies reported here since training data is not always a perfect representation of the data in the real world. For example, we may have purposively over-sampled from hard-to-classify regions, or the proportions of classes in our dataset may not match the proportions in the real world. This point underscores the importance of conducting a rigorous and independent map validation, rather than relying on cross-validation scores.
# ## Optimize hyperparameters
#
# Machine learning models require certain 'hyperparameters': model parameters that can be tuned to increase the prediction ability of a model. Finding the best values for these parameters is a 'hyperparameter search' or 'hyperparameter optimization'.
#
# To optimize the parameters in our model, we use [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) to exhaustively search through a set of parameters and determine the combination that will result in the highest accuracy based upon the accuracy metric defined.
#
# We'll search the same set of parameters that we defined earlier, `param_grid`.
#
# generate n_splits of train-test splits
rs = ShuffleSplit(n_splits=outer_cv_splits, test_size=test_size, random_state=0)
# +
# instantiate a GridSearchCV
clf = GridSearchCV(Classifier(),
param_grid,
scoring=metric,
verbose=1,
cv=rs.split(X, y),
n_jobs=ncpus)
clf.fit(X, y)
print('\n')
print("The most accurate combination of tested parameters is: ")
pprint(clf.best_params_)
print('\n')
print("The "+metric+" score using these parameters is: ")
print(round(clf.best_score_, 2))
# -
# ## Fit a model
#
# Using the best parameters from our hyperparameter optimization search, we now fit our model on all the data.
# Create a new model
new_model = Classifier(**clf.best_params_, random_state=1, n_jobs=ncpus)
new_model.fit(X, y)
# ## Save the model
#
# Running this cell will export the classifier as a binary `.joblib` file. This will allow for importing the model in the subsequent script, `4_Classify_satellite_data.ipynb`
#
dump(new_model, 'results/ml_model.joblib')
# ## Recommended next steps
#
# To continue working through the notebooks in this `Scalable Machine Learning on the ODC` workflow, go to the next notebook `4_Classify_satellite_data.ipynb`.
#
# 1. [Extracting training data from the ODC](1_Extract_training_data.ipynb)
# 2. [Inspecting training data](2_Inspect_training_data.ipynb)
# 3. **Evaluate, optimize, and fit a classifier (this notebook)**
# 4. [Classifying satellite data](4_Classify_satellite_data.ipynb)
# 5. [Object-based filtering of pixel classifications](5_Object-based_filtering.ipynb)
# ***
#
# ## Additional information
#
# **License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
# Digital Earth Australia data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.
#
# **Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)).
# If you would like to report an issue with this notebook, you can file one on [Github](https://github.com/GeoscienceAustralia/dea-notebooks).
#
# **Last modified:** March 2022
#
# ## Tags
# Browse all available tags on the DEA User Guide's [Tags Index](https://docs.dea.ga.gov.au/genindex.html)
# + raw_mimetype="text/restructuredtext" active=""
# **Tags**: :index:`machine learning`, :index:`SKCV`, :index:`clustering`, :index:`hyperparameters`, :index:`Random Forest`
|
Real_world_examples/Scalable_machine_learning/3_Evaluate_optimize_fit_classifier.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# import
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# ## Single-layer perceptron
# * Binary classification with targets (-1/1). Note: the delta rule did not work with (0/1) targets here
# +
def plot_decision_boundary_simple(weights):
    if(weights.shape[1]==3): # with bias: w = (w1, w2, b)
        w1 = weights[0][0]
        w2 = weights[0][1]
        b = weights[0][2]
        slope = -w1 / w2 # from w1*x1 + w2*x2 + b = 0; also well-defined when b = 0
        y_intercept = -b / w2
        axes = plt.gca() # Get the current Axes instance on the current figure
        x_vals = np.array(axes.get_xlim()) # get_xlim: Return the x-axis view limits
        y_vals = y_intercept + slope * x_vals
        plt.plot(x_vals, y_vals,'--',label='decision boundary with bias')
    if(weights.shape[1]==2): # without bias: w = (w1, w2), boundary passes through the origin
        w1 = weights[0][0]
        w2 = weights[0][1]
        slope = -w1 / w2
        axes = plt.gca()
        x_vals = np.array(axes.get_xlim())
        plt.plot(x_vals, slope * x_vals,'--',label='decision boundary without bias')
def initialize_weights(dim_row, dim_col):
W = np.random.randn(dim_row,dim_col) * 0.01
return W
# Evaluation metrics
def ratio_misclassified_examples(W,X,Y):
    # step activation with threshold 0 (the bias is already part of W and X)
    predictions = np.where(np.dot(W, X) >= 0, 1, -1)
    return np.mean(predictions != Y)

def mean_squared_error(W,X,Y):
    mse = np.mean((np.dot(W, X) - Y) ** 2)
    return mse
# -
# ### Perceptron Learning Rule
# * activation function is step-function
# * the bias b is the last entry of the weight matrix and receives a constant input of 1, so the effective step-function threshold is 0
# * difference to delta rule: calculate output y = w*x, update w += (+/-) * learn_rate * x if misclassified
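# The update step in isolation, with hypothetical numbers: a misclassified sample pulls the
# weights by learn_rate * target * x (the bias is folded into the weights, with a constant 1
# appended to the input, as in the implementation that follows).

```python
import numpy as np

w = np.array([0.5, -0.2, 0.1])  # (w1, w2, b) with the bias folded in
x = np.array([1.0, 2.0, 1.0])   # input with a constant 1 appended for the bias
target = -1
eta = 0.1

score = w @ x                   # 0.5 - 0.4 + 0.1 = 0.2 -> predicted +1, which is wrong
if np.sign(score) != target:
    w = w + eta * target * x    # move the boundary toward classifying x correctly

print(w, np.sign(w @ x))        # the sample now lands on the correct (-1) side
```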
# +
def perceptron_learning_rule(W, X, Y, learning_rate):
patterns = X
targets = Y
weights = W
flag = True
while(flag): # Make sure every instance gets classified correctly: if sample gets classified wrong then flag = True, and iterate from the beginning
flag = False
for i in range(0,patterns.shape[1]):
sample = patterns[:,i].reshape(patterns.shape[0],1)
y = np.dot(weights,sample) # output: y = w*x
            if y < 0: # the bias already contributes to y = w*x, so the step threshold is 0
                y = -1
            else:
                y = 1
if y == -1 and targets[0][i] == 1:
delta_W = learning_rate * sample.T
weights += delta_W
flag = True
elif y == 1 and targets[0][i] == -1:
delta_W = (-1) * learning_rate * sample.T
weights += delta_W
flag = True
#ratio_misclassified_examples(weights, patterns, targets)
#mean_squared_error(weights, patterns, targets)
return weights
def single_layer_perceptronrule(X, Y, learning_rate=0.001, epochs=20):
# initialize
patterns = X # features (training)
targets = Y # labels (training)
# initialize weights
W = initialize_weights(targets.shape[0],patterns.shape[0])
print(W)
# Apply perceptron_learning_rule epochs times
    # no epoch loop is needed here: perceptron_learning_rule already iterates until every sample is classified correctly
W = perceptron_learning_rule(W, patterns, targets, learning_rate)
print(W)
return W
# -
# ### Delta rule
# * difference to perceptron learning rule: calculate activation function only in the end
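# One batch step with toy numbers: the batch delta rule is gradient descent on the squared
# error 0.5 * ||W P - T||^2, so delta_W = -eta * (W P - T) P^T (the matrices below are
# hypothetical, chosen only to make the arithmetic easy to follow).

```python
import numpy as np

P = np.array([[1.0, 2.0],
              [1.0, 1.0]])   # two samples as columns, with a bias row of ones
T = np.array([[1.0, -1.0]])  # targets for the two samples
W = np.zeros((1, 2))
eta = 0.1

err = W @ P - T              # [[-1., 1.]]
W = W - eta * err @ P.T      # one batch update
print(W)                     # [[-0.1  0. ]]
```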
# +
def delta_rule_batch(W, patterns, targets, learning_rate):
delta_W = (-1)*learning_rate * np.dot((np.dot(W,patterns)-targets), patterns.T)
W += delta_W
return W
def delta_rule_sequential(W, patterns, targets, learning_rate):
    weights = W
    for i in range(0,patterns.shape[1]):
        sample = patterns[:,i].reshape(patterns.shape[0],1)
        target = targets[0][i]
        delta_W = (-1)*learning_rate * np.dot((np.dot(weights,sample)-target), sample.T)
        weights += delta_W
    return weights
def single_layer_batch_deltarule(X, Y, learning_rate=0.001, epochs=20):
# initialize
patterns = X # features (training)
targets = Y # labels (training)
# initialize weights
W = initialize_weights(targets.shape[0],patterns.shape[0])
# Apply delta-rule (batch) epochs times
for x in range(epochs):
W = delta_rule_batch(W, patterns, targets, learning_rate)
#ratio_misclassified_examples(W, patterns, targets)
#mean_squared_error(W, patterns, targets)
return W
def single_layer_sequential_deltarule(X, Y, learning_rate=0.001, epochs=20):
# initialize
patterns = X # features (training)
targets = Y # labels (training)
# initialize weights
W = initialize_weights(targets.shape[0],patterns.shape[0])
# Apply delta-rule (sequential) epochs times
for x in range(epochs):
W = delta_rule_sequential(W, patterns, targets, learning_rate)
#ratio_misclassified_examples(W, patterns, targets)
#mean_squared_error(W, patterns, targets)
return W
# +
# 3.1.1
# Generate dataset for binary classification (linearly separable dataset)
n = 100 # number of samples
mA = (2, 0.5) # mean of class A
mB = (-1.5, 0.0) # mean of class B
sigmaA = 0.5 # standard deviation of class A
sigmaB = 0.5 # standard deviation of class B
classA_x1 = np.squeeze(np.random.randn(1,n) * sigmaA + mA[0]) # class A: -1
classA_x2 = np.squeeze(np.random.randn(1,n) * sigmaA + mA[1])
classB_x1 = np.squeeze(np.random.randn(1,n) * sigmaB + mB[0]) # class B: 1
classB_x2 = np.squeeze(np.random.randn(1,n) * sigmaB + mB[1])
# Create matrices
classA = np.vstack((classA_x1,classA_x2))
classB = np.vstack((classB_x1,classB_x2))
X = np.concatenate((classA,classB), axis=1)
X = np.append(X, np.full(shape=(1,X.shape[1]),fill_value=1,dtype=X.dtype), axis=0) # append bias = row filled with 1
classA_y = np.full(shape=(n,),fill_value=-1)
classB_y = np.ones(n)
Y = np.concatenate((classA_y,classB_y)).reshape((1,(n*2)))
# Permute
indices = np.arange(X.shape[1])
permuted_indices = np.random.permutation(indices)
X = X.take(permuted_indices, axis=1)
Y = Y.take(permuted_indices, axis=1)
#print("X: ",X)
#print("Y: ",Y)
# 3.1.2
# Invoke single-layer perceptron-learning
#W = single_layer_perceptronrule(X, Y, learning_rate=0.001, epochs=20)
# Invoke single-layer batch delta-rule
#W = single_layer_batch_deltarule(X, Y, learning_rate=0.001, epochs=20)
# Invoke single-layer sequential delta-rule
W = single_layer_sequential_deltarule(X, Y, learning_rate=0.001, epochs=20)
# plot dataset with decision boundary
fig = plt.figure()
ax = plt.axes()
plt.xlim(-4, 4)
plt.ylim(-4, 4)
plt.title('dataset visualization')
plt.scatter(classA_x1, classA_x2, c="red")
plt.scatter(classB_x1, classB_x2, c="blue")
plot_decision_boundary_simple(W) # plot decision boundary
plt.xlabel('x1')
plt.ylabel('x2')
plt.show()
# -
# #### Classification of samples that are not linearly separable (part 1)
# * create a non-linearly separable dataset yourself
# * apply and compare perceptron learning, delta rule (batch) and delta rule (sequential), as in the previous part
# +
# 3.1.3 (part 1)
# dataset not linearly separable
n = 100 # number of samples
mA = (1, 0.5) # mean of class A
mB = (-1, 0.0)
sigmaA = 0.8 # standard deviation of class A
sigmaB = 0.8
classA_x1 = np.squeeze(np.random.randn(1,n) * sigmaA + mA[0]) # class A: -1
classA_x2 = np.squeeze(np.random.randn(1,n) * sigmaA + mA[1])
classB_x1 = np.squeeze(np.random.randn(1,n) * sigmaB + mB[0]) # class B: 1
classB_x2 = np.squeeze(np.random.randn(1,n) * sigmaB + mB[1])
# Create matrices
classA = np.vstack((classA_x1,classA_x2))
classB = np.vstack((classB_x1,classB_x2))
X = np.concatenate((classA,classB), axis=1)
X = np.append(X, np.full(shape=(1,X.shape[1]),fill_value=1,dtype=X.dtype), axis=0) # append bias = row filled with 1
classA_y = np.full(shape=(n,),fill_value=-1)
classB_y = np.ones(n)
Y = np.concatenate((classA_y,classB_y)).reshape((1,(n*2)))
# Permute
indices = np.arange(X.shape[1])
permuted_indices = np.random.permutation(indices)
X = X.take(permuted_indices, axis=1)
Y = Y.take(permuted_indices, axis=1)
#print("X: ",X)
#print("Y: ",Y)
# Invoke single-layer perceptron-learning: !! Infinite loop since it doesn't converge !!
#W = single_layer_perceptronrule(X, Y, learning_rate=0.001, epochs=20)
# Invoke single-layer batch delta-rule
# W = single_layer_batch_deltarule(X, Y, learning_rate=0.001, epochs=20)
# Invoke single-layer sequential delta-rule
W = single_layer_sequential_deltarule(X, Y, learning_rate=0.001, epochs=20)
# plot dataset with decision boundary
fig = plt.figure()
ax = plt.axes()
plt.xlim(-4, 4)
plt.ylim(-4, 4)
plt.title('dataset visualization')
plt.scatter(classA_x1, classA_x2, c="red")
plt.scatter(classB_x1, classB_x2, c="blue")
plot_decision_boundary_simple(W) # plot decision boundary
plt.xlabel('x1')
plt.ylabel('x2')
plt.show()
# -
# #### Classification of samples that are not linearly separable (part 2)
# +
# 3.1.3 (part 2)
# -
# ### Two-layer perceptron
# * trained with backprop
# * generalised Delta rule, a.k.a. error backpropagation algorithm
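# The generalised delta rule above can be sketched concretely. Below, a single
# hidden layer with tanh units and a squared-error loss is assumed (illustrative
# choices — the lab text does not fix the activation or the loss):

```python
import numpy as np

def backprop_epoch(X, Y, W1, W2, lr=0.01):
    """One epoch of the generalised delta rule (error backpropagation)."""
    # forward pass
    H = np.tanh(W1 @ X)                        # hidden activations
    O = np.tanh(W2 @ H)                        # output activations
    # backward pass: propagate the output error through the layers
    delta_o = (O - Y) * (1 - O ** 2)           # output-layer deltas
    delta_h = (W2.T @ delta_o) * (1 - H ** 2)  # hidden-layer deltas
    # gradient-descent weight updates
    W2 -= lr * delta_o @ H.T
    W1 -= lr * delta_h @ X.T
    return W1, W2
```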
# 3.2.1
# 3.2.2 The encoder problem
# 3.2.3 Function approximation
# ### Multi-layer perceptron
# * Scikit-learn library
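# A minimal sketch with scikit-learn's `MLPClassifier` (the toy data, hidden-layer
# size and iteration budget are illustrative choices, not taken from the lab):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# toy two-class data in the same spirit as the datasets above
rng = np.random.default_rng(0)
X = np.vstack([rng.normal((2.0, 0.5), 0.5, (100, 2)),    # class A
               rng.normal((-1.5, 0.0), 0.5, (100, 2))])  # class B
y = np.array([0] * 100 + [1] * 100)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy; close to 1.0 for separable data
```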
# ### Backup code
# Neural Network model
def two_layer_model(X, Y, n_h, epochs=20):
"""
Arguments:
X -- dataset of shape (input size, number of examples)
Y -- labels of shape (output size, number of examples)
n_h -- size of the hidden layer
epochs -- Number of iterations in gradient descent loop
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
    # get dimensions
    n_x = X.shape[0]
    n_y = Y.shape[0]
    # initialization: small random weights and zero biases (a common default)
    parameters = {
        "W1": np.random.randn(n_h, n_x) * 0.01,
        "b1": np.zeros((n_h, 1)),
        "W2": np.random.randn(n_y, n_h) * 0.01,
        "b2": np.zeros((n_y, 1)),
    }
    #for i in range(0, epochs):
    #    # forward pass, backpropagation and parameter updates go here
    return parameters
# file: Lab_1_ANNDA.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Unordered List
class Node:
def __init__(self, data):
self.data = data
self.next = None
class UnorderedList:
def __init__(self):
self.head = None
def isEmpty(self):
return self.head == None
def add(self, val):
node = Node(val)
node.next = self.head
self.head = node
# +
class Node:
def __init__(self, data):
self.data = data
self.next = None
class UnorderedList:
def __init__(self):
self.head = None
def isEmpty(self):
return self.head == None
def add(self, val):
node = Node(val)
node.next = self.head
self.head = node
def size(self):
count = 0
cur = self.head
while cur:
count += 1
cur = cur.next
return count
def search(self, data):
cur = self.head
while cur:
if cur.data == data:
return True
cur = cur.next
return False
def remove(self, data):
cur = self.head
prev = None
while cur:
if cur.data == data:
if not prev: # current is head
self.head = cur.next
else:
prev.next = cur.next
break
else:
prev = cur
cur = cur.next
def append(self, data):
node = Node(data)
if not self.head:
self.head = node
return
cur = self.head
while cur.next:
cur = cur.next
cur.next = node
def show(self):
cur = self.head
while cur:
print(cur.data, end=' ')
cur = cur.next
print()
mylist = UnorderedList()
mylist.add(3)
mylist.add(4)
mylist.add(5)
mylist.add(6)
print(mylist.size())
print(mylist.search(5))
print('before remove')
mylist.show()
mylist.remove(5)
print('after remove')
mylist.show()
print('before append')
mylist = UnorderedList()
mylist.show()
mylist.append('a')
print('after append')
mylist.show()
# -
# ## Ordered List
class OrderedList:
def __init__(self):
self.head = None
def isEmpty(self):
return self.head == None
    def size(self):
        cur = self.head
        count = 0
        while cur:
            count += 1
            cur = cur.next
        return count
def add(self, data):
node = Node(data)
if not self.head:
self.head = node
else:
cur = self.head
while cur:
if cur.next:
if cur.data < data < cur.next.data:
node.next = cur.next
cur.next = node
break
elif cur.data > data:
node.next = self.head
self.head = node
break
else:
cur = cur.next
else: # current is the last one
cur.next = node
break
def add2(self, data):
cur = self.head
prev = None
stop = False
while cur and not stop:
if cur.data > data:
stop = True
else:
prev = cur
cur = cur.next
node = Node(data)
if not prev:
node.next = self.head
self.head = node
else:
node.next = cur
prev.next = node
def remove(self, data):
cur = self.head
prev = None
while cur:
if cur.data == data:
                if not prev:  # removing the head: keep the rest of the list
                    self.head = cur.next
                    break
else:
prev.next = cur.next
break
else:
prev = cur
cur = cur.next
def search(self, data):
if self.isEmpty():
return False
cur = self.head
while cur:
if data < cur.data:
return False
if cur.data == data:
return True
cur = cur.next
return False
def show(self):
cur = self.head
while cur:
print(cur.data, end=' ')
cur = cur.next
print()
mylist = OrderedList()
mylist.add2(31)
mylist.show()
mylist.add2(77)
mylist.show()
mylist.add2(26)
print(mylist.search(77))
print(mylist.search(54))
mylist.show()
mylist.add(54)
mylist.show()
mylist.remove(31)
mylist.show()
mylist.remove(54)
mylist.show()
mylist.remove(77)
mylist.show()
mylist.remove(26)
mylist.show()
# +
class Node:
def __init__(self,initdata):
self.data = initdata
self.next = None
def getData(self):
return self.data
def getNext(self):
return self.next
def setData(self,newdata):
self.data = newdata
def setNext(self,newnext):
self.next = newnext
class UnorderedList:
def __init__(self):
self.head = None
self.tail = None
def isEmpty(self):
return self.head == None
def add(self,item):
temp = Node(item)
temp.setNext(self.head)
if self.tail == None:
self.tail = temp
self.head = temp
def size(self):
current = self.head
count = 0
while current != None:
count = count + 1
current = current.getNext()
return count
def search(self,item):
current = self.head
found = False
while current != None and not found:
if current.getData() == item:
found = True
else:
current = current.getNext()
return found
def remove(self,item):
current = self.head
previous = None
found = False
        while current != None and not found:
            if current.getData() == item:
                found = True
            else:
                previous = current
                current = current.getNext()
        if not found:  # item not in the list: nothing to remove
            return
if previous == None:
self.head = current.getNext()
else:
if current == self.tail:
self.tail = previous
previous.setNext(current.getNext())
def append(self,item):
current = self.tail
temp = Node(item)
current.setNext(temp)
self.tail = temp
def show(self):
cur = self.head
while cur:
print(cur.data, end=' ')
cur = cur.next
print()
mylist = UnorderedList()
mylist.add(31)
mylist.add(77)
mylist.add(17)
mylist.add(93)
mylist.add(26)
mylist.add(54)
print(mylist.size())
mylist.show()
mylist.remove(31)
mylist.show()
mylist.remove(77)
mylist.show()
print(mylist.tail.data)
mylist.append(1)
mylist.append(7)
mylist.append(3)
mylist.append(2)
mylist.show()
cur = mylist.tail
while cur:
print(cur.data, end=' ')
cur = cur.next
# -
# file: dsa/Basic Data Structures/List.ipynb
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .scala
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: Scala
// language: scala
// name: scala
// ---
// # Scala Singletons
//
//
// > "Singleton objects in Scala"
//
// - toc:true
// - branch: master
// - badges: false
// - comments: false
// - author: <NAME>
// - categories: [scala, classes, singletons, programming]
//
// ## <a name="overview"></a> Overview
// Often in software modeling we need to represent an entity for which it does not make sense to have more than one instance throughout program execution. We call these objects singletons. Typically, singletons are modeled using static functions. Scala does not support static functions; instead we use the ```object``` construct [1].
// ## <a name="sec1"></a> Singletons
// An object defines a single instance of a class with the features we want. For example
object Counter{
private var theCounter = 0
def getNewCounter : Int = {theCounter +=1; theCounter}
}
// When the application requires a new counter, simply calls ```Counter.getNewCounter```
println("New counter " + Counter.getNewCounter)
// The constructor of an object is executed when the object is first used [1]. If an object is never used, its constructor is, obviously, not executed [1].
// An object can have all the features of a class, including extending other
// classes or traits [1]. However, an ```object``` cannot have a constructor with parameters.
// ## <a name="refs"></a> References
// 1. <NAME>, ```Scala for the Impatient 1st Edition```
// file: _notebooks/2021-04-26-scala-singletons.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import os.path
# +
files=[]
import os
for filename in os.listdir('C:/Users/durve/Google Drive (<EMAIL>)/Hyperparameter Project/Leaderboard CSVs'):
    if filename.endswith(".csv"):  # or filename.endswith(".py")
        files.append(os.path.join('C:/Users/durve/Google Drive (<EMAIL>)/Hyperparameter Project/Leaderboard CSVs', filename))
# -
run_id = []
i=0
new_df= pd.DataFrame()
for file in files:
    base = os.path.basename(file)  # avoid shadowing the built-in id()
    run_id.append(base.split('_')[0].split('lb')[0])
    df = pd.read_csv(file, index_col=0)
    df.insert(0, 'run_id', run_id[i])
    new_df = pd.concat([new_df, df])  # DataFrame.append was removed in pandas 2.0
    i = i + 1
# Null since this is a regression dataset (AUC and log loss apply only to classification)
new_df['auc'] = None
new_df['log_loss'] = None
new_df.head()
new_df.to_csv('Leaderboard.csv',index=False)
# file: Leaderboard.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# default_exp models.utils
# -
# # Models utils
#
# > Utility functions used to build PyTorch timeseries models.
#export
from fastai.tabular.model import *
from fastai.vision.models.all import *
from tsai.imports import *
from tsai.models.layers import *
# +
#export
def get_layers(model, cond=noop, full=True):
if isinstance(model, Learner): model=model.model
if full: return [m for m in flatten_model(model) if any([c(m) for c in L(cond)])]
else: return [m for m in model if any([c(m) for c in L(cond)])]
def is_layer(*args):
def _is_layer(l, cond=args):
return isinstance(l, cond)
return partial(_is_layer, cond=args)
def is_linear(l):
return isinstance(l, nn.Linear)
def is_bn(l):
types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)
return isinstance(l, types)
def is_conv_linear(l):
types = (nn.Conv1d, nn.Conv2d, nn.Conv3d, nn.Linear)
return isinstance(l, types)
def is_affine_layer(l):
return has_bias(l) or has_weight(l)
def is_conv(l):
types = (nn.Conv1d, nn.Conv2d, nn.Conv3d)
return isinstance(l, types)
def has_bias(l):
return (hasattr(l, 'bias') and l.bias is not None)
def has_weight(l):
return (hasattr(l, 'weight'))
def has_weight_or_bias(l):
return any((has_weight(l), has_bias(l)))
# +
#export
def check_bias(m, cond=noop, verbose=False):
mean, std = [], []
for i,l in enumerate(get_layers(m, cond=cond)):
if hasattr(l, 'bias') and l.bias is not None:
b = l.bias.data
mean.append(b.mean())
std.append(b.std())
pv(f'{i:3} {l.__class__.__name__:15} shape: {str(list(b.shape)):15} mean: {b.mean():+6.3f} std: {b.std():+6.3f}', verbose)
return np.array(mean), np.array(std)
def check_weight(m, cond=noop, verbose=False):
mean, std = [], []
for i,l in enumerate(get_layers(m, cond=cond)):
if hasattr(l, 'weight') and l.weight is not None:
w = l.weight.data
mean.append(w.mean())
std.append(w.std())
pv(f'{i:3} {l.__class__.__name__:15} shape: {str(list(w.shape)):15} mean: {w.mean():+6.3f} std: {w.std():+6.3f}', verbose)
return np.array(mean), np.array(std)
# -
#export
def get_nf(m):
"Get nf from model's first linear layer in head"
return get_layers(m[-1], is_linear)[0].in_features
# +
#export
def ts_splitter(m):
"Split of a model between body and head"
return L(m.backbone, m.head).map(params)
def transfer_weights(model, weights_path:Path, device:torch.device=None, exclude_head:bool=True):
"""Utility function that allows to easily transfer weights between models.
Taken from the great self-supervised repository created by <NAME>.
https://github.com/KeremTurgutlu/self_supervised/blob/d87ebd9b4961c7da0efd6073c42782bbc61aaa2e/self_supervised/utils.py"""
device = ifnone(device, default_device())
state_dict = model.state_dict()
new_state_dict = torch.load(weights_path, map_location=device)
matched_layers = 0
unmatched_layers = []
for name, param in state_dict.items():
if exclude_head and 'head' in name: continue
if name in new_state_dict:
matched_layers += 1
input_param = new_state_dict[name]
if input_param.shape == param.shape: param.copy_(input_param)
else: unmatched_layers.append(name)
else:
unmatched_layers.append(name)
pass # these are weights that weren't in the original model, such as a new head
if matched_layers == 0: raise Exception("No shared weight names were found between the models")
else:
if len(unmatched_layers) > 0:
print(f'check unmatched_layers: {unmatched_layers}')
else:
print(f"weights from {weights_path} successfully transferred!\n")
def build_ts_model(arch, c_in=None, c_out=None, seq_len=None, d=None, dls=None, device=None, verbose=False,
pretrained=False, weights_path=None, exclude_head=True, cut=-1, init=None, **kwargs):
device = ifnone(device, default_device())
if dls is not None:
c_in = ifnone(c_in, dls.vars)
c_out = ifnone(c_out, dls.c)
seq_len = ifnone(seq_len, dls.len)
d = ifnone(d, dls.d)
if is_listy(d) and len(d) == 2:
if 'custom_head' not in kwargs.keys():
kwargs['custom_head'] = partial(lin_3d_head, d=d)
else:
kwargs['custom_head'] = partial(kwargs['custom_head'], d=d)
if sum([1 for v in ['RNN_FCN', 'LSTM_FCN', 'RNNPlus', 'LSTMPlus', 'GRUPlus', 'InceptionTimePlus',
'GRU_FCN', 'OmniScaleCNN', 'mWDN', 'TST', 'XCM', 'MLP', 'MiniRocket']
if v in arch.__name__]):
pv(f'arch: {arch.__name__}(c_in={c_in} c_out={c_out} seq_len={seq_len} device={device}, kwargs={kwargs})', verbose)
model = arch(c_in, c_out, seq_len=seq_len, **kwargs).to(device=device)
elif 'xresnet' in arch.__name__ and not '1d' in arch.__name__:
pv(f'arch: {arch.__name__}(c_in={c_in} c_out={c_out} device={device}, kwargs={kwargs})', verbose)
model = (arch(c_in=c_in, n_out=c_out, **kwargs)).to(device=device)
elif 'rocket' in arch.__name__.lower():
pv(f'arch: {arch.__name__}(c_in={c_in} seq_len={seq_len} device={device}, kwargs={kwargs})', verbose)
model = (arch(c_in=c_in, seq_len=seq_len, **kwargs)).to(device=device)
else:
pv(f'arch: {arch.__name__}(c_in={c_in} c_out={c_out} device={device}, kwargs={kwargs})', verbose)
model = arch(c_in, c_out, **kwargs).to(device=device)
    try:
        model[0], model[1]
        subscriptable = True
    except Exception:
        subscriptable = False
    if hasattr(model, "head_nf"): head_nf = model.head_nf
    else:
        try: head_nf = get_nf(model)
        except Exception: head_nf = None
if not subscriptable and 'Plus' in arch.__name__:
model = nn.Sequential(*model.children())
model.backbone = model[:cut]
model.head = model[cut:]
if pretrained and not ('xresnet' in arch.__name__ and not '1d' in arch.__name__):
assert weights_path is not None, "you need to pass a valid weights_path to use a pre-trained model"
transfer_weights(model, weights_path, exclude_head=exclude_head, device=device)
if init is not None:
apply_init(model[1] if pretrained else model, init)
setattr(model, "head_nf", head_nf)
setattr(model, "__name__", arch.__name__)
return model
build_model = build_ts_model
create_model = build_ts_model
@delegates(TabularModel.__init__)
def build_tabular_model(arch, dls, layers=None, emb_szs=None, n_out=None, y_range=None, device=None, **kwargs):
if device is None: device = default_device()
if layers is None: layers = [200,100]
emb_szs = get_emb_sz(dls.train_ds, {} if emb_szs is None else emb_szs)
if n_out is None: n_out = get_c(dls)
assert n_out, "`n_out` is not defined, and could not be inferred from data, set `dls.c` or pass `n_out`"
if y_range is None and 'y_range' in kwargs: y_range = kwargs.pop('y_range')
model = arch(emb_szs, len(dls.cont_names), n_out, layers, y_range=y_range, **kwargs).to(device=device)
if hasattr(model, "head_nf"): head_nf = model.head_nf
else: head_nf = get_nf(model)
setattr(model, "__name__", arch.__name__)
if head_nf is not None: setattr(model, "head_nf", head_nf)
return model
create_tabular_model = build_tabular_model
@delegates(XResNet.__init__)
def build_tsimage_model(arch, c_in=None, c_out=None, dls=None, pretrained=False, device=None, verbose=False, init=None, **kwargs):
device = ifnone(device, default_device())
if dls is not None:
c_in = ifnone(c_in, dls.vars)
c_out = ifnone(c_out, dls.c)
model = arch(pretrained=pretrained, c_in=c_in, n_out=c_out, **kwargs).to(device=device)
setattr(model, "__name__", arch.__name__)
if init is not None:
apply_init(model[1] if pretrained else model, init)
return model
def count_parameters(model, trainable=True):
if trainable: return sum(p.numel() for p in model.parameters() if p.requires_grad)
else: return sum(p.numel() for p in model.parameters())
# -
from tsai.data.external import get_UCR_data
from tsai.data.features import get_ts_features
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, split_data=False)
ts_features_df = get_ts_features(X, y)
from tsai.data.tabular import get_tabular_dls
from tsai.models.TabModel import TabModel
cat_names = None
cont_names = ts_features_df.columns[:-2]
y_names = 'target'
tab_dls = get_tabular_dls(ts_features_df, cat_names=cat_names, cont_names=cont_names, y_names=y_names, splits=splits)
tab_model = build_tabular_model(TabModel, dls=tab_dls)
b = first(tab_dls.train)
test_eq(tab_model(*b[:-1]).shape, (64,6))
a = 'MLSTM_FCN'
if sum([1 for v in ['RNN_FCN', 'LSTM_FCN', 'GRU_FCN', 'OmniScaleCNN', 'Transformer', 'mWDN'] if v in a]): print(1)
#export
def get_clones(module, N):
return nn.ModuleList([deepcopy(module) for i in range(N)])
m = nn.Conv1d(3,4,3)
get_clones(m, 3)
#export
def split_model(m): return m.backbone, m.head
#export
def seq_len_calculator(seq_len, **kwargs):
t = torch.rand(1, 1, seq_len)
return nn.Conv1d(1, 1, **kwargs)(t).shape[-1]
seq_len = 345
kwargs = dict(kernel_size=5, stride=5)
seq_len_calculator(seq_len, **kwargs)
#export
def change_model_head(model, custom_head, **kwargs):
r"""Replaces a model's head by a custom head as long as the model has a head, head_nf, c_out and seq_len attributes"""
model.head = custom_head(model.head_nf, model.c_out, model.seq_len, **kwargs)
return model
# +
# export
def naive_forecaster(o, split, horizon=1):
if is_listy(horizon):
_f = []
for h in horizon:
_f.append(o[np.asarray(split)-h])
return np.stack(_f)
return o[np.asarray(split) - horizon]
def true_forecaster(o, split, horizon=1):
o_true = o[split]
if is_listy(horizon):
o_true = o_true[np.newaxis].repeat(len(horizon), 0)
return o_true
# -
a = np.random.rand(20).cumsum()
split = np.arange(10, 20)
a, naive_forecaster(a, split, 1), true_forecaster(a, split, 1)
#hide
out = create_scripts(); beep(out)
# file: nbs/100b_models.utils.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + language="bash"
#
# cd ../improvement-prediction
#
# if test -f /Users/fchirigati/projects/dataset-ranking/use-cases/data/training-records-features; then
# rm /Users/fchirigati/projects/dataset-ranking/use-cases/data/training-records-features
# fi
#
# cat > .params_feature_generation.json << EOL
# {
# "learning_data_filename": "/Users/fchirigati/projects/dataset-ranking/use-cases/data/training-records",
# "augmentation_learning_data_filename": "/Users/fchirigati/projects/dataset-ranking/use-cases/data/training-records-features",
# "file_dir": "data/",
# "cluster": false,
# "hdfs_address": "http://gray01.poly.edu:50070",
# "hdfs_user": "fsc234"
# }
# EOL
#
# ./run-spark-client
# +
import pandas as pd
features = pd.read_csv(
'/Users/fchirigati/projects/dataset-ranking/use-cases/data/training-records-features',
header=None,
names=[
'query',
'target',
'candidate',
'query_num_of_columns',
'query_num_of_rows',
'query_row_column_ratio',
'query_max_mean',
'query_max_outlier_percentage',
'query_max_skewness',
'query_max_kurtosis',
'query_max_unique',
'candidate_num_of_columns',
'candidate_num_rows',
'candidate_row_column_ratio',
'candidate_max_mean',
'candidate_max_outlier_percentage',
'candidate_max_skewness',
'candidate_max_kurtosis',
'candidate_max_unique',
'query_target_max_pearson',
'query_target_max_spearman',
'query_target_max_covariance',
'query_target_max_mutual_info',
'candidate_target_max_pearson',
'candidate_target_max_spearman',
'candidate_target_max_covariance',
'candidate_target_max_mutual_info',
'max_pearson_difference',
'containment_fraction',
'decrease_in_mae',
'decrease_in_mse',
'decrease_in_medae',
'gain_in_r2_score'
]
)
features.to_csv(
'/Users/fchirigati/projects/dataset-ranking/use-cases/data/training-records-features',
index=False
)
# -
# file: use-cases/generate-features-for-use-cases.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a href="https://qworld.net" target="_blank" align="left"><img src="../qworld/images/header.jpg" align="left"></a>
# $ \newcommand{\bra}[1]{\langle #1|} $
# $ \newcommand{\ket}[1]{|#1\rangle} $
# $ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
# $ \newcommand{\dot}[2]{ #1 \cdot #2} $
# $ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
# $ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
# $ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
# $ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
# $ \newcommand{\mypar}[1]{\left( #1 \right)} $
# $ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
# $ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
# $ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
# $ \newcommand{\onehalf}{\frac{1}{2}} $
# $ \newcommand{\donehalf}{\dfrac{1}{2}} $
# $ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
# $ \newcommand{\vzero}{\myvector{1\\0}} $
# $ \newcommand{\vone}{\myvector{0\\1}} $
# $ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $
# $ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
# $ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
# $ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
# $ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $
# $ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
# $ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
# $ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
# $ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
# $ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $
# $ \newcommand{\greenbit}[1] {\mathbf{{\color{green}#1}}} $
# $ \newcommand{\bluebit}[1] {\mathbf{{\color{blue}#1}}} $
# $ \newcommand{\redbit}[1] {\mathbf{{\color{red}#1}}} $
# $ \newcommand{\brownbit}[1] {\mathbf{{\color{brown}#1}}} $
# $ \newcommand{\blackbit}[1] {\mathbf{{\color{black}#1}}} $
# <font style="font-size:28px;" align="left"><b> Reflections</b></font>
# <br>
# _prepared by <NAME>_
# <br><br>
# [<img src="../qworld/images/watch_lecture.jpg" align="left">](https://youtu.be/nzj7kw1Ycms)
# <br><br><br>
# _We use certain tools from python library "<b>matplotlib.pyplot</b>" for drawing. Check the notebook [Python: Drawing](../python/Python06_Drawing.ipynb) for the list of these tools._
# We start with a very basic reflection.
# <h3> Z-gate (operator) </h3>
#
# The identity operator $ I = \mymatrix{cc}{1 & 0 \\ 0 & 1} $ does not affect the computation.
#
# What about the following operator?
#
# $ Z = \Z $.
#
# It is very similar to the identity operator.
#
# Consider the quantum state $ \ket{u} = \myvector{ \frac{3}{5} \\ \frac{4}{5} } $.
#
# We calculate the new quantum state after applying $ Z $ to $ \ket{u} $:
#
# $ \ket{u'} = Z \ket{u} = \Z \myvector{ \frac{3}{5} \\ \frac{4}{5} } = \myrvector{ \frac{3}{5} \\ -\frac{4}{5} } $.
#
# We draw both states below.
# +
# %run quantum.py
draw_qubit()
draw_quantum_state(3/5,4/5,"u")
draw_quantum_state(3/5,-4/5,"u'")
show_plt()
# -
# When we apply $ Z $ to the state $ \ket{u'} $, we obtain the state $\ket{u}$ again: $ \Z \myrvector{ \frac{3}{5} \\ -\frac{4}{5} } = \myvector{ \frac{3}{5} \\ \frac{4}{5} } $.
#
# It is easy to see that the operator $Z$ is a reflection and its **line of reflection** is the $x$-axis.
#
# Remark that applying the same reflection twice on the unit circle does not make any change.
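# This involution property is easy to verify numerically with plain NumPy
# (independent of the quantum.py helpers):

```python
import numpy as np

Z = np.array([[1, 0],
              [0, -1]])
u = np.array([3/5, 4/5])

u_prime = Z @ u        # reflection of |u> over the x-axis
u_back = Z @ u_prime   # reflecting once more restores |u>

print(np.allclose(u_prime, [3/5, -4/5]))  # True
print(np.allclose(u_back, u))             # True
print(np.allclose(Z @ Z, np.eye(2)))      # Z is its own inverse: True
```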
# <h3> Task 1 </h3>
#
# Create a quantum circuit with 5 qubits.
#
# Apply h-gate (Hadamard operator) to each qubit.
#
# Apply z-gate ($Z$ operator) to randomly picked qubits. (i.e., $ mycircuit.z(qreg[i]) $)
#
# Apply h-gate to each qubit.
#
# Measure each qubit.
#
# Execute your program 1000 times.
#
# Compare the outcomes of the qubits affected by z-gates and the outcomes of the qubits not affected by z-gates.
#
# Does z-gate change the outcome?
#
# Why?
# +
# import all necessary objects and methods for quantum circuits
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
# import randrange for random choices
from random import randrange
#
# your code is here
#
# -
# [click for our solution](Q48_Reflections_Solutions.ipynb#task1)
# <h3> Hadamard operator </h3>
# Is Hadamard operator a reflection? If so, what is its line of reflection?
#
# Remember the following transitions.
#
# $ H \ket{0} = \hadamard \vzero = \stateplus = \ket{+} ~~~$ and $~~~ H \ket{+} = \hadamard \stateplus = \vzero = \ket{0} $.
#
# $ H \ket{1} = \hadamard \vone = \stateminus = \ket{-} ~~~$ and $~~~ H \ket{-} = \hadamard \stateminus = \vone = \ket{1} $.
# +
# %run quantum.py
draw_qubit()
sqrttwo=2**0.5
draw_quantum_state(1,0,"")
draw_quantum_state(1/sqrttwo,1/sqrttwo,"|+>")
# +
# %run quantum.py
draw_qubit()
sqrttwo=2**0.5
draw_quantum_state(0,1,"")
draw_quantum_state(1/sqrttwo,-1/sqrttwo,"|->")
# -
# <h3> Hadamard - geometrical interpretation </h3>
# Hadamard operator is a reflection and its line of reflection is represented below.
#
# It is the line obtained by rotating the $x$-axis by $ \frac{\pi}{8} $ radians in the counter-clockwise direction.
# +
# %run quantum.py
draw_qubit()
sqrttwo=2**0.5
draw_quantum_state(1,0,"")
draw_quantum_state(1/sqrttwo,1/sqrttwo,"|+>")
draw_quantum_state(0,1,"")
draw_quantum_state(1/sqrttwo,-1/sqrttwo,"|->")
# line of reflection for Hadamard
from matplotlib.pyplot import arrow
arrow(-1.109,-0.459,2.218,0.918,linestyle='dotted',color='red')
# drawing the angle with |0>-axis
from matplotlib.pyplot import gca, text
from matplotlib.patches import Arc
gca().add_patch( Arc((0,0),0.4,0.4,angle=0,theta1=0,theta2=22.5) )
text(0.09,0.015,'.',fontsize=30)
text(0.25,0.03,'\u03C0/8')
gca().add_patch( Arc((0,0),0.4,0.4,angle=0,theta1=22.5,theta2=45) )
text(0.075,0.065,'.',fontsize=30)
text(0.21,0.16,'\u03C0/8')
show_plt()
# -
# <h3> Task 2 </h3>
#
# Randomly create a quantum state and multiply it with Hadamard matrix to find its reflection.
#
# Draw both states.
#
# Repeat the task for a few times.
# +
# %run quantum.py
draw_qubit()
# line of reflection for Hadamard
from matplotlib.pyplot import arrow
arrow(-1.109,-0.459,2.218,0.918,linestyle='dotted',color='red')
#
# your code is here
#
# -
# [click for our solution](Q48_Reflections_Solutions.ipynb#task2)
# <h3> Task 3 </h3>
#
# Find the matrix representing the reflection over the line $y=x$.
#
# <i>Hint: Think about the reflections of the points $ \myrvector{0 \\ 1} $, $ \myrvector{-1 \\ 0} $, and $ \myrvector{-\sqrttwo \\ \sqrttwo} $ over the line $y=x$.</i>
#
# Randomly create a quantum state and multiply it with this matrix to find its reflection over the line $y = x$.
#
# Draw both states.
#
# Repeat the task for a few times.
# +
# %run quantum.py
draw_qubit()
# the line y=x
from matplotlib.pyplot import arrow
arrow(-1,-1,2,2,linestyle='dotted',color='red')
#
# your code is here
#
# draw_quantum_state(x,y,"name")
# -
# [click for our solution](Q48_Reflections_Solutions.ipynb#task3)
# <h3>Reflection Operators</h3>
# As we have observed, the following operators are reflections on the unit circle.
# <b> Z operator:</b> $ Z = \mymatrix{rr}{ 1 & 0 \\ 0 & -1 } $. The line of reflection is $x$-axis.
# <b> NOT operator:</b> $ X = \mymatrix{rr}{ 0 & 1 \\ 1 & 0 } $. The line of reflection is $y=x$.
# <b> Hadamard operator:</b> $ H = \hadamard $. The line of reflection is $y= \frac{\sin(\pi/8)}{\cos(\pi/8)} x$.
#
# It is the line passing through the origin making an angle $ \pi/8 $ radians with $x$-axis.
# <b>Arbitrary reflection operator:</b> Let $ \theta $ be the angle of the line of reflection. Then, the matrix form of the reflection is as follows:
#
# $$ Ref(\theta) = \mymatrix{rr}{ \cos(2\theta) & \sin(2\theta) \\ \sin(2\theta) & -\cos(2\theta) } . $$
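# A quick numerical check (plain NumPy) that $Z$, $X$, and $H$ are the special
# cases $Ref(0)$, $Ref(\pi/4)$, and $Ref(\pi/8)$:

```python
import numpy as np

def reflection(theta):
    # matrix of the reflection over the line through the origin at angle theta
    return np.array([[np.cos(2 * theta),  np.sin(2 * theta)],
                     [np.sin(2 * theta), -np.cos(2 * theta)]])

Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

print(np.allclose(reflection(0), Z))          # True: x-axis
print(np.allclose(reflection(np.pi / 4), X))  # True: line y = x
print(np.allclose(reflection(np.pi / 8), H))  # True: Hadamard's line
```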
# ---
#
# <h3> Extra: Task 4 </h3>
#
# The matrix forms of rotations and reflections are similar to each other.
#
# Represent $ Ref(\theta) $ as a combination of a basic reflection operator (i.e., $X$, $H$, or $Z$) and rotation $ R(\theta) $.
# <h3> Extra: Task 5 </h3>
#
# Randomly pick the angle $\theta$.
#
# Draw the line of reflection with the unit circle.
#
# Construct the corresponding reflection matrix.
#
# Randomly create a quantum state and multiply it with this matrix to find its reflection.
#
# Draw both states.
#
# Repeat the task for a few times.
# +
# %run quantum.py
draw_qubit()
#
# your code is here
#
# line of reflection
# from matplotlib.pyplot import arrow
# arrow(x,y,dx,dy,linestyle='dotted',color='red')
#
#
# draw_quantum_state(x,y,"name")
|
quantum-with-qiskit/Q48_Reflections.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Write a Python Script that captures images from your webcam video stream
# Extracts all Faces from the image frame (using haarcascades)
# Stores the Face information into numpy arrays
# 1. Read and show video stream, capture images
# 2. Detect Faces and show bounding box (haarcascade)
# 3. Flatten the largest face image(gray scale) and save in a numpy array
# 4. Repeat the above for multiple people to generate training data
import cv2
import numpy as np
#Init Camera
cap = cv2.VideoCapture(0)
# Face Detection
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_alt.xml")
skip = 0
face_data = []
dataset_path = './data/'
file_name = input("Enter the name of the person : ")
while True:
ret,frame = cap.read()
if ret==False:
continue
faces = face_cascade.detectMultiScale(frame,1.3,5)
if len(faces)==0:
continue
faces = sorted(faces,key=lambda f:f[2]*f[3])
    #Pick the last face (because it is the largest face according to area f[2]*f[3])
for face in faces[-1:]:
x,y,w,h = face
cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,255),2)
#Extract (Crop out the required face) : Region of Interest
offset = 10
face_section = frame[y-offset:y+h+offset,x-offset:x+w+offset]
face_section = cv2.resize(face_section,(100,100))
skip += 1
if skip%10==0:
face_data.append(face_section)
print(len(face_data))
cv2.imshow("Frame",frame)
cv2.imshow("Face Section",face_section)
key_pressed = cv2.waitKey(1) & 0xFF
if key_pressed == ord('q'):
break
# Convert our face list array into a numpy array
face_data = np.asarray(face_data)
face_data = face_data.reshape((face_data.shape[0],-1))
print(face_data.shape)
# Save this data into file system
np.save(dataset_path+file_name+'.npy',face_data)
print("Data successfully saved at "+dataset_path+file_name+'.npy')
cap.release()
cv2.destroyAllWindows()
# +
# Recognise Faces using some classification algorithm - like Logistic, KNN, SVM etc.
# 1. load the training data (numpy arrays of all the persons)
# x- values are stored in the numpy arrays
# y-values we need to assign for each person
# 2. Read a video stream using opencv
# 3. extract faces out of it
# 4. use knn to find the prediction of face (int)
# 5. map the predicted id to name of the user
# 6. Display the predictions on the screen - bounding box and name
import cv2
import numpy as np
import os
########## KNN CODE ############
def distance(v1, v2):
    # Euclidean
return np.sqrt(((v1-v2)**2).sum())
def knn(train, test, k=5):
dist = []
for i in range(train.shape[0]):
# Get the vector and label
ix = train[i, :-1]
iy = train[i, -1]
# Compute the distance from test point
d = distance(test, ix)
dist.append([d, iy])
# Sort based on distance and get top k
dk = sorted(dist, key=lambda x: x[0])[:k]
# Retrieve only the labels
labels = np.array(dk)[:, -1]
# Get frequencies of each label
output = np.unique(labels, return_counts=True)
# Find max frequency and corresponding label
index = np.argmax(output[1])
return output[0][index]
################################
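# A quick sanity check of the `knn` helper above on toy data (the two helpers are restated so this snippet runs standalone; `toy_train` is made up for illustration — each row is [feature_1, feature_2, label]):

```python
import numpy as np

# Restated helpers so the snippet runs on its own
def distance(v1, v2):
    return np.sqrt(((v1 - v2) ** 2).sum())

def knn(train, test, k=5):
    dist = []
    for i in range(train.shape[0]):
        ix, iy = train[i, :-1], train[i, -1]
        dist.append([distance(test, ix), iy])
    dk = sorted(dist, key=lambda x: x[0])[:k]
    labels = np.array(dk)[:, -1]
    values, counts = np.unique(labels, return_counts=True)
    return values[np.argmax(counts)]

toy_train = np.array([[0.0, 0.0, 0],
                      [0.1, 0.0, 0],
                      [5.0, 5.0, 1],
                      [5.1, 5.0, 1]])
# A point near the first cluster should get label 0
print(knn(toy_train, np.array([0.05, 0.0]), k=3))  # -> 0.0
```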
#Init Camera
cap = cv2.VideoCapture(0)
# Face Detection
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_alt.xml")
skip = 0
dataset_path = './data/'
face_data = []
labels = []
class_id = 0 # Labels for the given file
names = {} #Mapping btw id - name
# Data Preparation
for fx in os.listdir(dataset_path):
if fx.endswith('.npy'):
#Create a mapping btw class_id and name
names[class_id] = fx[:-4]
print("Loaded "+fx)
data_item = np.load(dataset_path+fx)
face_data.append(data_item)
#Create Labels for the class
target = class_id*np.ones((data_item.shape[0],))
class_id += 1
labels.append(target)
face_dataset = np.concatenate(face_data,axis=0)
face_labels = np.concatenate(labels,axis=0).reshape((-1,1))
print(face_dataset.shape)
print(face_labels.shape)
trainset = np.concatenate((face_dataset,face_labels),axis=1)
print(trainset.shape)
# Testing
while True:
ret,frame = cap.read()
if ret == False:
continue
faces = face_cascade.detectMultiScale(frame,1.3,5)
if(len(faces)==0):
continue
for face in faces:
x,y,w,h = face
#Get the face ROI
offset = 10
face_section = frame[y-offset:y+h+offset,x-offset:x+w+offset]
face_section = cv2.resize(face_section,(100,100))
#Predicted Label (out)
out = knn(trainset,face_section.flatten())
#Display on the screen the name and rectangle around it
pred_name = names[int(out)]
cv2.putText(frame,pred_name,(x,y-10),cv2.FONT_HERSHEY_SIMPLEX,1,(255,0,0),2,cv2.LINE_AA)
cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,255),2)
cv2.imshow("Faces",frame)
key = cv2.waitKey(1) & 0xFF
if key==ord('q'):
break
cap.release()
cv2.destroyAllWindows()
# -
|
.ipynb_checkpoints/Selfie Training Data-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7.9 64-bit
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/christianwarmuth/openhpi-kipraxis/blob/main/Woche%203/3_6_Mehr_Daten%2C_bessere_Ergebnisse.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="fr1OlX9YZPt6"
# ## 0. Install all packages
# + id="0uTRddOPZRrb"
# Insert your Kaggle credentials here (without quotation marks)
# %env KAGGLE_USERNAME=openhpi
# %env KAGGLE_KEY=das_ist_der_key
# + id="hRSgrairar7O"
# !pip3 install skorch
# + id="fac57d66"
import pandas as pd
import numpy as np
from bs4 import BeautifulSoup
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from skorch import NeuralNetClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from torch import nn
import torch
# + id="NIH4HySUawpl"
class NeuralNetModule(nn.Module):
def __init__(self, num_inputs, num_units=20, nonlin=nn.ReLU()):
super(NeuralNetModule, self).__init__()
self.nonlin = nonlin
self.dense0 = nn.Linear(num_inputs, num_units)
self.dropout = nn.Dropout(0.2)
self.dense1 = nn.Linear(num_units, num_units)
self.output = nn.Linear(num_units, 2)
self.softmax = nn.Softmax(dim=-1)
def forward(self, X, **kwargs):
X = self.nonlin(self.dense0(X))
X = self.dropout(X)
X = self.nonlin(self.dense1(X))
X = self.softmax(self.output(X))
return X
# + id="b77c3599"
def evaluate_clf(test_X, labels, clf):
predictions = clf.predict(test_X)
report = classification_report(labels, predictions)
print(report)
# + [markdown] id="1eb217d9"
# ## Download Dataset
# + [markdown] id="00fc0b24"
# ### Manually
# via https://www.kaggle.com/columbine/imdb-dataset-sentiment-analysis-in-csv-format
# + [markdown] id="4f172699"
# ### Via API
#
# Add the kaggle.json file.
# Save it as ~/.kaggle/kaggle.json on Linux, OSX, or other UNIX-based operating systems, and as C:\Users\<Windows-username>\.kaggle\kaggle.json on Windows.
#
# See https://www.kaggle.com/docs/api or https://github.com/Kaggle/kaggle-api
#
# Example:
# ~/.kaggle/kaggle.json
#
# {"username":"openHPI","key":"das_ist_der_key"}
# + id="c1bef49c"
# !pip3 install kaggle
# !kaggle datasets download -d columbine/imdb-dataset-sentiment-analysis-in-csv-format
# + id="LQa1I8ZNftBH"
import zipfile
with zipfile.ZipFile("imdb-dataset-sentiment-analysis-in-csv-format.zip", 'r') as zip_ref:
zip_ref.extractall("")
import os
os.rename('Train.csv','sentiment.csv')
# + id="dd78c83a"
def read_and_parse_train_df(nrows):
df = pd.read_csv("sentiment.csv", nrows=nrows)
# preprocessing
df["text"] = df["text"].apply(lambda x: x.lower())
df["text"] = df["text"].apply(lambda x: x.replace("\'", ""))
    df["text"] = df["text"].apply(lambda x: BeautifulSoup(x, "html.parser").text)
return df
def read_and_parse_test_df():
df = pd.read_csv("sentiment.csv")
df = df.tail(1000)
# preprocessing
df["text"] = df["text"].apply(lambda x: x.lower())
df["text"] = df["text"].apply(lambda x: x.replace("\'", ""))
    df["text"] = df["text"].apply(lambda x: BeautifulSoup(x, "html.parser").text)
return df
# + id="9d5f85c9"
test_df = read_and_parse_test_df()
# + [markdown] id="705ef7b1"
# # 3.6 More Data, Better Results
# + id="3a72820d"
for nrows in [100, 500, 1_000, 5_000, 10_000, 49_000]:
print(f"#{nrows}")
print("-" * 53)
train_df = read_and_parse_train_df(nrows)
vectorizer = TfidfVectorizer()
train_X = vectorizer.fit_transform(train_df["text"]).astype(np.float32)
test_X = vectorizer.transform(test_df["text"]).astype(np.float32)
neural_net = NeuralNetClassifier(
module=NeuralNetModule,
module__num_inputs = len(vectorizer.vocabulary_),
max_epochs=10,
optimizer=torch.optim.Adam,
iterator_train__shuffle=True,
verbose=0
)
neural_net.fit(train_X, train_df['label'])
evaluate_clf(test_X, test_df["label"], neural_net)
print("\n"*2)
|
Woche 3/3_6_Mehr_Daten,_bessere_Ergebnisse.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # The Theta Model
#
# The Theta model of Assimakopoulos & Nikolopoulos (2000) is a simple method for forecasting that involves fitting two $\theta$-lines, forecasting the lines using a Simple Exponential Smoother, and then combining the forecasts from the two lines to produce the final forecast. The model is implemented in steps:
#
#
# 1. Test for seasonality
# 2. Deseasonalize if seasonality detected
# 3. Estimate $\alpha$ by fitting a SES model to the data and $b_0$ by OLS.
# 4. Forecast the series
# 5. Reseasonalize if the data was deseasonalized.
#
# The seasonality test examines the ACF at the seasonal lag $m$. If this lag is significantly different from zero, then the data is deseasonalized using `statsmodels.tsa.seasonal_decompose` with either a multiplicative method (the default) or an additive one.
#
# The parameters of the model are $b_0$ and $\alpha$ where $b_0$ is estimated from the OLS regression
#
# $$
# X_t = a_0 + b_0 (t-1) + \epsilon_t
# $$
#
# and $\alpha$ is the SES smoothing parameter in
#
# $$
# \tilde{X}_t = (1-\alpha) X_t + \alpha \tilde{X}_{t-1}
# $$
#
# The forecasts are then
#
# $$
# \hat{X}_{T+h|T} = \frac{\theta-1}{\theta} \hat{b}_0
# \left[h - 1 + \frac{1}{\hat{\alpha}}
# - \frac{(1-\hat{\alpha})^T}{\hat{\alpha}} \right]
# + \tilde{X}_{T+h|T}
# $$
#
# Ultimately $\theta$ only plays a role in determining how much the trend is damped. If $\theta$ is very large, then the forecast of the model is identical to that from an Integrated Moving Average with a drift,
#
# $$
# X_t = X_{t-1} + b_0 + (\alpha-1)\epsilon_{t-1} + \epsilon_t.
# $$
#
# Finally, the forecasts are reseasonalized if needed.
#
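# The recipe above can be sketched numerically. This is a minimal illustration, not the statsmodels implementation: it assumes the series is already deseasonalized and takes $\alpha$ as given, while $b_0$ is estimated by the OLS regression described above.

```python
import numpy as np

def theta_forecast(x, h, theta=2.0, alpha=0.5):
    """h-step theta forecast for an already-deseasonalized series x.

    alpha (the SES smoothing parameter) is assumed given here;
    b0 is estimated by the OLS regression of X_t on (t - 1).
    """
    x = np.asarray(x, dtype=float)
    T = len(x)
    b0 = np.polyfit(np.arange(T), x, 1)[0]    # slope of X_t on (t - 1)
    ses = x[0]                                # SES recursion from the text
    for xt in x[1:]:
        ses = (1 - alpha) * xt + alpha * ses
    trend = (theta - 1) / theta * b0 * (h - 1 + 1 / alpha
                                        - (1 - alpha) ** T / alpha)
    return trend + ses

x = np.arange(1.0, 11.0)                      # a noiseless linear trend
print(theta_forecast(x, h=1))
```

# With the default $\theta = 2$, half the trend is damped away; as $\theta \to \infty$ the full drift $b_0$ appears in each additional forecast step.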
# This module is based on:
#
# * Assimakopoulos, V., & Nikolopoulos, K. (2000). The theta model: a decomposition
#    approach to forecasting. International Journal of Forecasting, 16(4), 521-530.
# * <NAME>., & <NAME>. (2003). Unmasking the Theta method.
# International Journal of Forecasting, 19(2), 287-290.
# * <NAME>., <NAME>., <NAME>., & <NAME>.
# (2015). The optimized theta method. arXiv preprint arXiv:1503.03529.
# ## Imports
#
# We start with the standard set of imports and some tweaks to the default matplotlib style.
import numpy as np
import pandas as pd
import pandas_datareader as pdr
import matplotlib.pyplot as plt
import seaborn as sns
plt.rc("figure",figsize=(16,8))
plt.rc("font",size=15)
plt.rc("lines",linewidth=3)
sns.set_style("darkgrid")
# ## Load some Data
#
# We will first look at housing starts using US data. This series is clearly seasonal but does not have a clear trend over the same period.
reader = pdr.fred.FredReader(["HOUST"], start="1980-01-01", end="2020-04-01")
data = reader.read()
housing = data.HOUST
housing.index.freq = housing.index.inferred_freq
ax = housing.plot()
# We first specify the model without any options and fit it. The summary shows that the data was deseasonalized using the multiplicative method. The drift is modest and negative, and the smoothing parameter is fairly low.
from statsmodels.tsa.forecasting.theta import ThetaModel
tm = ThetaModel(housing)
res = tm.fit()
print(res.summary())
# The model is first and foremost a forecasting method. Forecasts are produced using the `forecast` method of the fitted model. Below we produce a hedgehog plot by forecasting 2-years ahead every 2 years.
#
# **Note**: the default $\theta$ is 2.
forecasts = {"housing": housing}
for year in range(1995, 2020, 2):
sub = housing[:str(year)]
res = ThetaModel(sub).fit()
fcast = res.forecast(24)
forecasts[str(year)] = fcast
forecasts = pd.DataFrame(forecasts)
ax = forecasts["1995":].plot(legend=False)
children = ax.get_children()
children[0].set_linewidth(4)
children[0].set_alpha(0.3)
children[0].set_color("#000000")
ax.set_title("Housing Starts")
plt.tight_layout(pad=1.0)
# We could alternatively fit the log of the data. Here it makes more sense to force the deseasonalizing to use the additive method, if needed. We also fit the model parameters using MLE. This method fits the IMA
#
# $$ X_t = X_{t-1} + \gamma\epsilon_{t-1} + \epsilon_t $$
#
# where $\hat{\alpha} = \min(\hat{\gamma}+1, 0.9998)$ using `statsmodels.tsa.SARIMAX`. The parameters are similar although the drift is closer to zero.
tm = ThetaModel(np.log(housing), method="additive")
res = tm.fit(use_mle=True)
print(res.summary())
# The forecast only depends on the forecast trend component,
# $$
# \hat{b}_0
# \left[h - 1 + \frac{1}{\hat{\alpha}}
# - \frac{(1-\hat{\alpha})^T}{\hat{\alpha}} \right],
# $$
#
# the forecast from the SES (which does not change with the horizon), and the seasonal component. These three components are available using the `forecast_components` method. This allows forecasts to be constructed using multiple choices of $\theta$ using the weight expression above.
res.forecast_components(12)
# ## Personal Consumption Expenditure
#
# We next look at personal consumption expenditure. This series has a clear seasonal component and a drift.
reader = pdr.fred.FredReader(["NA000349Q"], start="1980-01-01", end="2020-04-01")
pce = reader.read()
pce.columns = ["PCE"]
_ = pce.plot()
# Since this series is always positive, we model the $\ln$.
mod = ThetaModel(np.log(pce))
res = mod.fit()
print(res.summary())
# Next we explore differences in the forecast as $\theta$ changes. When $\theta$ is close to 1, the drift is nearly absent. As $\theta$ increases, the drift becomes more obvious.
forecasts = pd.DataFrame({"ln PCE":np.log(pce.PCE),
"theta=1.2": res.forecast(12, theta=1.2),
"theta=2": res.forecast(12),
"theta=3": res.forecast(12, theta=3),
"No damping": res.forecast(12, theta=np.inf)
})
_ = forecasts.tail(36).plot()
plt.title("Forecasts of ln PCE")
plt.tight_layout(pad=1.0)
# Finally, `plot_predict` can be used to visualize the predictions and prediction intervals which are constructed assuming the IMA is true.
ax = res.plot_predict(24, theta=2)
# We conclude by producing a hedgehog plot using 2-year non-overlapping samples.
ln_pce = np.log(pce.PCE)
forecasts = {"ln PCE": ln_pce}
for year in range(1995,2020,3):
sub = ln_pce[:str(year)]
res = ThetaModel(sub).fit()
fcast = res.forecast(12)
forecasts[str(year)] = fcast
forecasts = pd.DataFrame(forecasts)
ax = forecasts["1995":].plot(legend=False)
children = ax.get_children()
children[0].set_linewidth(4)
children[0].set_alpha(0.3)
children[0].set_color("#000000")
ax.set_title("ln PCE")
plt.tight_layout(pad=1.0)
|
v0.12.1/examples/notebooks/generated/theta-model.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Adding Polynomial Features to Logistic Regression
import numpy as np
import matplotlib.pyplot as plt
# +
np.random.seed(666)
X = np.random.normal(0, 1, size=(200,2))
# The points roughly form a circle: points inside the circle are class 1, points outside are class 0
y = np.array(X[:,0]**2 + X[:,1]**2 < 1.5, dtype='int')
# -
plt.scatter(X[y==0,0], X[y==0,1], color='r')
plt.scatter(X[y==1,0], X[y==1,1], color='b')
plt.show()
# ## Using logistic regression
from playML.logistic_regression import LogisticRegression
# ### Without polynomial features
log_reg = LogisticRegression()
log_reg.fit(X,y)
log_reg.score(X,y)
def plot_decision_boundary(model, axis):
    """Plot an irregular decision boundary"""
x0, x1 = np.meshgrid(
np.linspace(axis[0], axis[1], int((axis[1] - axis[0])*100)).reshape(1,-1),
np.linspace(axis[2], axis[3], int((axis[3] - axis[2])*100)).reshape(1,-1)
)
X_new= np.c_[x0.ravel(), x1.ravel()]
y_predict = model.predict(X_new)
zz = y_predict.reshape(x0.shape)
from matplotlib.colors import ListedColormap
custom_cmap = ListedColormap(['#EF9A9A','#FFF59D','#90CAF9'])
    plt.contourf(x0, x1, zz, cmap=custom_cmap)
plot_decision_boundary(log_reg,[-4,4,-4,4])
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
plt.show()
# Clearly many points are misclassified, so the score is only about 0.6
# ### Adding polynomial features
# +
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import StandardScaler
def PolynomialLogisticRegression(degree):
return Pipeline([
('poly', PolynomialFeatures(degree=degree)),
('std_scaler', StandardScaler()),
('log_reg', LogisticRegression())
])
# -
poly_log_reg = PolynomialLogisticRegression(degree=2)
poly_log_reg.fit(X,y)
poly_log_reg.score(X,y)
plot_decision_boundary(poly_log_reg,[-4,4,-4,4])
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
plt.show()
# ### What happens if we increase the polynomial degree?
poly_log_reg2 = PolynomialLogisticRegression(degree=20)
poly_log_reg2.fit(X,y)
poly_log_reg2.score(X,y)
plot_decision_boundary(poly_log_reg2,[-4,4,-4,4])
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
plt.show()
|
c6_logistic_regression/06_Polynomial_Features_in_Logistic_Regression.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="yEDk9UXhez8U"
import tensorflow as tf
import os
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)
import pandas as pd
import requests
from bs4 import BeautifulSoup
main_url = "https://www.dhlottery.co.kr/gameResult.do?method=byWin"
basic_url = "https://www.dhlottery.co.kr/gameResult.do?method=byWin&drwNo="
def GetLast():
resp = requests.get(main_url)
soup = BeautifulSoup(resp.text, "lxml")
result = str(soup.find("meta", {"id" : "desc", "name" : "description"})['content']) # meta
s_idx = result.find(" ")
e_idx = result.find("회")
return int(result[s_idx + 1 : e_idx])
def Crawler(s_count, e_count, fp):
for i in range(s_count , e_count + 1):
crawler_url = basic_url + str(i)
resp = requests.get(crawler_url)
soup = BeautifulSoup(resp.text, "html.parser")
text = soup.text
s_idx = text.find(" 당첨결과")
s_idx = text.find("당첨번호", s_idx) + 4
e_idx = text.find("보너스", s_idx)
numbers = text[s_idx:e_idx].strip().split()
s_idx = e_idx + 3
e_idx = s_idx + 3
bonus = text[s_idx:e_idx].strip()
s_idx = text.find("1등", e_idx) + 2
e_idx = text.find("원", s_idx) + 1
e_idx = text.find("원", e_idx)
money1 = text[s_idx:e_idx].strip().replace(',','').split()[2]
s_idx = text.find("2등", e_idx) + 2
e_idx = text.find("원", s_idx) + 1
e_idx = text.find("원", e_idx)
money2 = text[s_idx:e_idx].strip().replace(',','').split()[2]
s_idx = text.find("3등", e_idx) + 2
e_idx = text.find("원", s_idx) + 1
e_idx = text.find("원", e_idx)
money3 = text[s_idx:e_idx].strip().replace(',','').split()[2]
s_idx = text.find("4등", e_idx) + 2
e_idx = text.find("원", s_idx) + 1
e_idx = text.find("원", e_idx)
money4 = text[s_idx:e_idx].strip().replace(',','').split()[2]
s_idx = text.find("5등", e_idx) + 2
e_idx = text.find("원", s_idx) + 1
e_idx = text.find("원", e_idx)
money5 = text[s_idx:e_idx].strip().replace(',','').split()[2]
line = str(i) + ',' + numbers[0] + ',' + numbers[1] + ',' + numbers[2] + ',' + numbers[3] + ',' + numbers[4] + ',' + numbers[5] + ',' + bonus + ',' + money1 + ',' + money2 + ',' + money3 + ',' + money4 + ',' + money5
print(line)
line += '\n'
fp.write(line)
last = GetLast()
fp = open('lotto.csv', 'w')
Crawler(1, last, fp)
fp.close()
import numpy as np
dataset = np.loadtxt("./lotto.csv", delimiter=",")
def numbers2ohbin(numbers):
ohbin = np.zeros(45)
for i in range(6):
ohbin[int(numbers[i])-1] = 1
return ohbin
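# A quick standalone check of the one-hot encoding above (the function is restated so the snippet runs on its own): ball number k sets position k-1 of a length-45 vector.

```python
import numpy as np

# Restated one-hot encoder so the snippet runs standalone
def numbers2ohbin(numbers):
    ohbin = np.zeros(45)
    for i in range(6):
        ohbin[int(numbers[i]) - 1] = 1
    return ohbin

v = numbers2ohbin([1, 7, 13, 19, 25, 45])
print(v.sum())                   # -> 6.0
print(np.where(v == 1)[0] + 1)   # the six chosen ball numbers, recovered
```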
#dataset = dataset[:-1]
total_count = len(dataset)
print('total_count {0}'.format(total_count))
numbers = dataset[:, 1:7]
samples = list(map(numbers2ohbin, numbers))
x_train = samples[0:total_count-1]
y_train = samples[1:total_count]
'''
x_val = x_train
y_val = y_train
x_test = x_train
x_test = y_train
x_train = samples[0:700]
y_train = samples[1:701]
x_val = samples[700:800]
y_val = samples[701:801]
x_test = samples[800:total_count-1]
y_test = samples[801:total_count]
'''
from __future__ import absolute_import, division, print_function, unicode_literals
# !pip install tensorflow-gpu==2.8.0-rc1
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import models
model = keras.Sequential([
keras.layers.LSTM(128, batch_input_shape=(1, 1, 45), return_sequences=False, stateful=True),
keras.layers.Dense(45, activation='sigmoid')
])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print('Train...')
train_loss = []
train_acc = []
val_loss = []
val_acc = []
for epoch in range(100):
mean_train_loss = []
mean_train_acc = []
model.reset_states()
for i in range(len(x_train)):
xs = x_train[i].reshape(1, 1, 45)
ys = y_train[i].reshape(1, 45)
loss, acc = model.train_on_batch(xs, ys)
mean_train_loss.append(loss)
mean_train_acc.append(acc)
train_loss.append(np.mean(mean_train_loss))
train_acc.append(np.mean(mean_train_acc))
'''
mean_val_loss = []
mean_val_acc = []
for i in range(len(x_val)):
xs = x_val[i].reshape(1, 1, 45)
ys = y_val[i].reshape(1, 45)
loss, acc = model.test_on_batch(xs, ys)
mean_val_loss.append(loss)
mean_val_acc.append(acc)
val_loss.append(np.mean(mean_val_loss))
val_acc.append(np.mean(mean_val_acc))
print('epoch {0:2d} train acc {1:0.3f} loss {2:0.3f} val acc {3:0.3f} loss {4:0.3f}'.format(epoch, np.mean(mean_train_acc), np.mean(mean_train_loss), np.mean(mean_val_acc), np.mean(mean_val_loss)))
'''
print('epoch {0:2d} train acc {1:0.3f} loss {2:0.3f}'.format(epoch, np.mean(mean_train_acc), np.mean(mean_train_loss)))
# %matplotlib inline
import matplotlib.pyplot as plt
fig, loss_ax = plt.subplots()
acc_ax = loss_ax.twinx()
loss_ax.plot(train_loss, 'y', label='train loss')
loss_ax.plot(val_loss, 'r', label='val loss')
acc_ax.plot(train_acc, 'b', label='train acc')
acc_ax.plot(val_acc, 'g', label='val acc')
loss_ax.set_xlabel('epoch')
loss_ax.set_ylabel('loss')
acc_ax.set_ylabel('accuracy')
loss_ax.legend(loc='upper left')
acc_ax.legend(loc='lower left')
plt.show()
model.save('model.h5')
import numpy as np
from tensorflow.keras import models
model = models.load_model('model.h5')
mean_prize = [ np.mean(dataset[87:, 8]),
np.mean(dataset[87:, 9]),
np.mean(dataset[87:, 10]),
np.mean(dataset[87:, 11]),
np.mean(dataset[87:, 12])]
print(mean_prize)
def calc_reward(true_numbers, true_bonus, pred_numbers):
count = 0
for ps in pred_numbers:
if ps in true_numbers:
count += 1
if count == 6:
return mean_prize[0], count
elif count == 5 and true_bonus in pred_numbers:
return mean_prize[1], count
elif count == 5:
return mean_prize[2], count
elif count == 4:
return mean_prize[3], count
elif count == 3:
return mean_prize[4], count
return 0, count
def gen_numbers_from_probability(nums_prob):
ball_box = []
for n in range(45):
ball_count = int(nums_prob[n] * 100 + 1)
        ball = np.full((ball_count), n+1)  # ball numbering starts from 1
ball_box += list(ball)
selected_balls = []
while True:
if len(selected_balls) == 6:
break
ball_index = np.random.randint(len(ball_box), size=1)[0]
ball = ball_box[ball_index]
#print('{0} {1} {2}'.format(len(ball_box), ball_index, ball))
if ball not in selected_balls:
selected_balls.append(ball)
return selected_balls
model.reset_states()
gi = 1
rewards = []
for i in range(len(x_train)):
xs = x_train[i].reshape(1, 1, 45)
ys_pred = model.predict_on_batch(xs)
sum_rewards = 0
print('No.{0:3d} True Numbers {1}'.format(gi+1, dataset[gi,1:7]))
for n in range(10):
numbers = gen_numbers_from_probability(ys_pred[0])
reward, count = calc_reward(dataset[gi,1:7], dataset[gi,7], numbers)
print('{0:2d} {1:15,d} {2:4d} {3}'.format(n, int(reward), count, numbers))
sum_rewards += reward
print('Total Reward: {0:15,d}'.format(int(sum_rewards)))
rewards.append(sum_rewards)
gi += 1
'''
for i in range(len(x_val)):
xs = x_val[i].reshape(1, 1, 45)
ys_pred = model.predict_on_batch(xs)
sum_rewards = 0
for n in range(10):
numbers = gen_numbers_from_probability(ys_pred[0])
sum_rewards += calc_reward(dataset[gi,1:7], dataset[gi,7], numbers)
print('{0:4d} {1} {2} {3:.1f}'.format(gi, dataset[gi,1:7], numbers, sum_rewards))
rewards.append(sum_rewards)
gi += 1
for i in range(len(x_test)):
xs = x_test[i].reshape(1, 1, 45)
ys_pred = model.predict_on_batch(xs)
sum_rewards = 0
for n in range(10):
numbers = gen_numbers_from_probability(ys_pred[0])
sum_rewards += calc_reward(dataset[gi,1:7], dataset[gi,7], numbers)
print('{0:4d} {1} {2} {3:.1f}'.format(gi, dataset[gi,1:7], numbers, sum_rewards))
rewards.append(sum_rewards)
gi += 1
'''
# %matplotlib inline
import matplotlib.pyplot as plt
plt.plot(rewards)
plt.ylabel('rewards')
plt.show()
print('receive numbers')
xs = samples[-1].reshape(1, 1, 45)
ys_pred = model.predict_on_batch(xs)
list_numbers = []
for n in range(10):
numbers = gen_numbers_from_probability(ys_pred[0])
    numbers.sort()  # sort the numbers
print('{0} : {1}'.format(n, numbers))
list_numbers.append(numbers)
print('rewards check')
total_rewards = 0
for n in range(len(list_numbers)):
    reward, count = calc_reward([19,32,37,40,41,43], 45, list_numbers[n])
print('{0} {1:15,d}'.format(count, int(reward)))
total_rewards += reward
print('Total {0:15,d}'.format(int(total_rewards)))
|
_posts/lotto_tcp_one_20220303.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Patent annotation: A novel dataset to highlight patent passages
#
# #### Types of data samples
#
# - *neutral_samples*: Here the paragraphs just above 'Advantageous Effects of Invention' are collected.
# To keep the sequence lengths nearly uniform, only 3 paragraphs are collected.
#
# - *positive_samples*: All paragraphs from section/heading 'Advantageous Effects of Invention'
# are collected.
#
# - *negative samples*: Paragraphs under the section 'Technical Problem' are collected.
#
#
# - Some example patents containing the tags of interest mentioned above are (just for reference):
# - You can open Google patents in advanced mode and search for below patents and look for above tags.
# - US10842211B2
# - US10842310B2
# - US10842344B2
#
#
#
# ### Get the statistics
#
# Raw data can be downloaded from: https://bulkdata.uspto.gov/data/patent/grant/redbook/fulltext/2020/
#
# 1. Traverse USPTO bulk data files directory.
# 2. Each xml file contains the patents (nearly 8k) granted in one week; there are 52 such xml files per year. Each weekly xml file is nested: it contains nearly 8k patents, each embedded as its own xml document.
# 3. For each week, find the total number of patents and the number of pos/neg/neutral samples found, and write these statistics to a csv file.
#
# #### For instance:
#
# - Year 2020 contains week_1.xml, week_2.xml, ...week_52.xml (total 52 xml files)
# - week_1.xml is a nested file, meaning it further contains nearly 8k .xml files, one per patent.
#
# #### Download the files from USPTO:
# - use this url: https://bulkdata.uspto.gov/data/patent/grant/redbook/fulltext/2020/
# - save it to a directory, for instance: "/home/renuk/Documents/USPTO_BULK_DATA/patents/2020/"
# - Now you can execute the below codes.
# +
def stats(filename):
#home_dir = "/home/renuk/Documents/USPTO_BULK_DATA/patents/2020/"
#file_path =home_dir+filename
total_neutral_samples = 0 #total_positive_samples , total_negative_samples, total_neutral_samples
total_publications = 0
xml_text = html.unescape(open(filename, 'r').read())
weekly_filename = filename.split('/')
weekly_filename = weekly_filename[-1]
print(weekly_filename)
for patent in xml_text.split("<?xml version=\"1.0\" encoding=\"UTF-8\"?>"):
if patent is None or patent == "":
continue
patent_text = patent
bs = BeautifulSoup(patent, "lxml")
#try:
fwu_neutral = bs.find('heading',text='Solution to Problem') #fwu_advantages, fwu_problems, fwu_neutral
#Advantageous Effects of Invention --- for positive samples
#Technical Problem --- for negative samples
        #Solution to Problem --- for neutral samples; the paragraphs (2/3) just above the 'Advantageous Effects of Invention' section can also be used for neutral samples
##### search for application or grant
application = bs.find('us-patent-application')
if application is None: # If no application, search for grant
application = bs.find('us-patent-grant')
title = "None"
#### Search for its title
try:
title = application.find('invention-title').text
except Exception as e:
#print("no title found")
title = ""
#print("patent is:", title)
#### Search for publication number
try:
#publication_country = application.find('publication-reference').find('country').text
#publication_doc_number = application.find('publication-reference').find('doc-number').text
publication_kind = application.find('publication-reference').find('kind').text
#publication_num = publication_country+publication_doc_number+publication_kind
except Exception as e:
#publication_num = ""
publication_kind = ""
try:
publication_num = application['file'].split("-")[0]
except Exception as e:
publication_num = ""
publication_num = publication_num+publication_kind
#print(publication_num)
try:
application_type = application.find('application-reference')['appl-type']
except Exception as e:
application_type =''
#print(application_type)
if publication_num:
total_publications +=1
if fwu_neutral: #fwu_advantages, fwu_problems, fwu_neutral
total_neutral_samples +=1
return weekly_filename, total_publications, total_neutral_samples
# +
import time
import pprint
import os, glob
import sys
import html
import datetime
import pandas as pd
from bs4 import BeautifulSoup
import re
from datetime import datetime
path = "/home/renuk/Documents/USPTO_BULK_DATA/patents/2020/" #directory containing weekly xml files (i.e., 52 files per year)
all_files = glob.glob(os.path.join(path, "ipg*.xml"))
df = pd.DataFrame(columns=['weekly_filename', 'total_publications', 'total_neutral_samples'])
for filename in all_files:
weekly_filename, total_publications, total_neutral_samples = stats(filename)
df = df.append({'weekly_filename':weekly_filename, 'total_publications': total_publications,
'total_neutral_samples':total_neutral_samples},ignore_index=True)
df.to_csv("/home/renuk/Documents/USPTO_BULK_DATA/patents/2020_neutral_stats.csv")
# -
# ### Traverse 52 XML files and access positive, negative and neutral samples.
# +
# import the following
import os, glob
import pandas as pd
import time
import pprint
import os
import sys
import html
import datetime
import pandas as pd
from bs4 import BeautifulSoup
import re
from datetime import datetime
# -
# #### Note:
# - change the name of the 4th column according to your sample of interest, e.g. 'positive_text' for the positive label.
# - change the tags accordingly, e.g. fwu_advantages = bs.find('heading', text='Advantageous Effects of Invention') for positive labels.
# - go through each line of code and change variable names wherever necessary.
# - The code below traverses each weekly XML file, looks for the labels and, if found, collects the paragraphs from that particular section or tag.
# - For every weekly XML file, a separate CSV file with the data is stored automatically, e.g. '/home/renuk/Documents/USPTO_BULK_DATA/patents/2020/ipg200211.csv'
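# The section-grabbing logic below can be sketched independently of BeautifulSoup. This is a minimal, hypothetical helper (not part of the original pipeline) that uses only `re` to collect the `<p>` paragraphs immediately following a given `<heading>`:

```python
import re

def paragraphs_after_heading(xml_text, heading):
    """Collect the text of <p>...</p> lines that directly follow a <heading>."""
    lines = xml_text.splitlines()
    paras = []
    for i, line in enumerate(lines):
        if '>' + heading + '<' in line:
            j = i + 1
            # keep taking lines while they are paragraph elements
            while j < len(lines) and lines[j].lstrip().startswith('<p'):
                paras.append(re.sub(r'<[^>]+>', '', lines[j]).strip())
                j += 1
            break  # only the first occurrence, as in the pipeline below
    return paras

sample = """<heading>Technical Problem</heading>
<p id="p1">The device overheats.</p>
<p id="p2">Battery life is short.</p>
<heading>Solution to Problem</heading>"""
print(paragraphs_after_heading(sample, 'Technical Problem'))
```

# The real code works on the raw line list instead, but the control flow (find heading, then consume following `<p` lines) is the same.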
# ### Case 1: Fetch negative samples from 2020
def get_xml_data(filename):
#home_dir = "/home/renuk/Documents/USPTO_BULK_DATA/patents/2020/"
#file_path =home_dir+filename
csv_dir = "/home/renuk/Documents/USPTO_BULK_DATA/patents/2020_neg_updated/"
df = pd.DataFrame(columns=['publication_number', 'patent_title', 'appl_type',
'negative_text'])
total_negative_samples = 0
total_publications = 0
xml_text = html.unescape(open(filename, 'r').read())
for patent in xml_text.split("<?xml version=\"1.0\" encoding=\"UTF-8\"?>"):
if patent is None or patent == "":
continue
patent_text = patent
bs = BeautifulSoup(patent, "lxml")
#try:
fwu_problems = bs.find('heading',text='Technical Problem')
#except Exception as e:
##### search for application or grant
application = bs.find('us-patent-application')
if application is None: # If no application, search for grant
application = bs.find('us-patent-grant')
title = "None"
#### Search for its title
try:
title = application.find('invention-title').text
except Exception as e:
#print("no title found")
title = ""
#print("patent is:", title)
#### Search for publication number
try:
#publication_country = application.find('publication-reference').find('country').text
#publication_doc_number = application.find('publication-reference').find('doc-number').text
publication_kind = application.find('publication-reference').find('kind').text
#publication_num = publication_country+publication_doc_number+publication_kind
except Exception as e:
#publication_num = ""
publication_kind = ""
try:
publication_num = application['file'].split("-")[0]
except Exception as e:
publication_num = ""
publication_num = publication_num+publication_kind
#print(publication_num)
try:
application_type = application.find('application-reference')['appl-type']
except Exception as e:
application_type =''
#print(application_type)
if publication_num:
total_publications +=1
if fwu_problems:
total_negative_samples +=1
text = patent_text.splitlines()
adv = []
found = False
for start, line in enumerate(text):
    if '>Technical Problem<' in line and not found:
        found = True  # collect paragraphs after the first occurrence only
        for j in range(20):  # take up to 20 consecutive <p> lines
            if start + 1 < len(text) and '<p' in text[start+1]:
                adv.append(text[start+1])
                start = start+1
            else:
                break
        df = df.append({'publication_number':publication_num, 'patent_title': title,
                        'appl_type':application_type,
                        'negative_text':adv},ignore_index=True)
#file = filename
#name = file.split('.')
#name =name[0]+'.csv'
#name = name[-1].split('.')[0]
csv = filename.split('/')[-1].split('.')[0]+'_negative.csv'
csv_filename = csv_dir+csv
df.to_csv(csv_filename) # a separate CSV file is created for each week; you can merge them later
print('total publications in',filename,total_publications )
print('total negative samples in ', filename,total_negative_samples )
df.shape
print("------------")
# +
# call get_xml_data from here.
from datetime import datetime
begin = datetime.now()
current_time = begin.strftime("%H:%M:%S")
print("Current Time =", current_time)
path = "/home/renuk/Documents/USPTO_BULK_DATA/patents/2020/"
all_files = glob.glob(os.path.join(path, "ipg*.xml"))
for filename in all_files:
get_xml_data(filename)
end = datetime.now()
current_time = end.strftime("%H:%M:%S")
print("Current Time =", current_time)
# -
# #### Merge all weekly csv files to one yearly file
# +
import os, glob
import pandas as pd
path = "/home/renuk/Documents/USPTO_BULK_DATA/patents/2020_neg_updated/"
all_files = glob.glob(os.path.join(path, "ipg*.csv"))
df_from_each_file = (pd.read_csv(f, sep=',') for f in all_files)
df_merged = pd.concat(df_from_each_file, ignore_index=True)
df_merged.to_csv( "/home/renuk/Documents/USPTO_BULK_DATA/patents/2020_neg_updated/2020_neg.csv")
# -
neg = pd.read_csv('/home/renuk/Documents/USPTO_BULK_DATA/patents/2020_neg_updated/2020_neg.csv')
neg.shape
neg.head(10)
neg['negative_text'][1]
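# Note that `to_csv` stores the list-valued `negative_text` column as its string representation, so after `read_csv` each cell comes back as a string, not a list. A small sketch (the cell value below is illustrative, not taken from the real CSV) of recovering the list with `ast.literal_eval`:

```python
import ast

# A list written to CSV round-trips as its repr string:
cell = """['<p id="p1">First paragraph</p>', '<p id="p2">Second paragraph</p>']"""
paragraphs = ast.literal_eval(cell)  # parse it back into a real Python list
print(len(paragraphs))
```

# Applied to the real data this would be e.g. `ast.literal_eval(neg['negative_text'][1])`.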
# ### Case 2: Fetch neutral samples from 2020
#
def get_xml_data(filename):
#home_dir = "/home/renuk/Documents/USPTO_BULK_DATA/patents/2020/"
#file_path =home_dir+filename
csv_dir = "/home/renuk/Documents/USPTO_BULK_DATA/patents/2020_neutral_updated/"
df = pd.DataFrame(columns=['publication_number', 'patent_title', 'appl_type',
'neutral_text'])
total_neutral_samples = 0
total_publications = 0
xml_text = html.unescape(open(filename, 'r').read())
for patent in xml_text.split("<?xml version=\"1.0\" encoding=\"UTF-8\"?>"):
if patent is None or patent == "":
continue
patent_text = patent
bs = BeautifulSoup(patent, "lxml")
#try:
fwu_solutions = bs.find('heading',text='Solution to Problem')
#except Exception as e:
##### search for application or grant
application = bs.find('us-patent-application')
if application is None: # If no application, search for grant
application = bs.find('us-patent-grant')
title = "None"
#### Search for its title
try:
title = application.find('invention-title').text
except Exception as e:
#print("no title found")
title = ""
#print("patent is:", title)
#### Search for publication number
try:
#publication_country = application.find('publication-reference').find('country').text
#publication_doc_number = application.find('publication-reference').find('doc-number').text
publication_kind = application.find('publication-reference').find('kind').text
#publication_num = publication_country+publication_doc_number+publication_kind
except Exception as e:
#publication_num = ""
publication_kind = ""
try:
publication_num = application['file'].split("-")[0]
except Exception as e:
publication_num = ""
publication_num = publication_num+publication_kind
#print(publication_num)
try:
application_type = application.find('application-reference')['appl-type']
except Exception as e:
application_type =''
#print(application_type)
if publication_num:
total_publications +=1
if fwu_solutions:
total_neutral_samples +=1
text = patent_text.splitlines()
adv = []
found = False
for start, line in enumerate(text):
    if '>Solution to Problem<' in line and not found:
        found = True  # collect paragraphs after the first occurrence only
        for j in range(20):  # take up to 20 consecutive <p> lines
            if start + 1 < len(text) and '<p' in text[start+1]:
                adv.append(text[start+1])
                start = start+1
            else:
                break
        df = df.append({'publication_number':publication_num, 'patent_title': title,
                        'appl_type':application_type,
                        'neutral_text':adv},ignore_index=True)
#file = filename
#name = file.split('.')
#name =name[0]+'.csv'
#name = name[-1].split('.')[0]
csv = filename.split('/')[-1].split('.')[0]+'_neutral.csv'
csv_filename = csv_dir+csv
df.to_csv(csv_filename) # a separate CSV file is created for each week; you can merge them later
print('total publications in',filename,total_publications )
print('total neutral samples in ', filename,total_neutral_samples )
df.shape
print("------------")
# +
# call get_xml_data from here.
from datetime import datetime
begin = datetime.now()
current_time = begin.strftime("%H:%M:%S")
print("Current Time =", current_time)
path = "/home/renuk/Documents/USPTO_BULK_DATA/patents/2020/"
all_files = glob.glob(os.path.join(path, "ipg*.xml"))
for filename in all_files:
get_xml_data(filename)
end = datetime.now()
current_time = end.strftime("%H:%M:%S")
print("Current Time =", current_time)
# -
# #### Merge all weekly csv files to one yearly file
# +
import os, glob
import pandas as pd
path = "/home/renuk/Documents/USPTO_BULK_DATA/patents/2020_neutral_updated/"
all_files = glob.glob(os.path.join(path, "ipg*.csv"))
df_from_each_file = (pd.read_csv(f, sep=',') for f in all_files)
df_merged = pd.concat(df_from_each_file, ignore_index=True)
df_merged.to_csv( "/home/renuk/Documents/USPTO_BULK_DATA/patents/2020_neutral_updated/2020_neutral.csv")
# -
df_neutral = pd.read_csv('/home/renuk/Documents/USPTO_BULK_DATA/patents/2020_neutral_updated/2020_neutral.csv')
df_neutral.head(5)
df_neutral.shape
df_neutral['neutral_text'][1]
# ##### Display a patent containing the above neutral sample, from 'https://patents.google.com/patent/US10758063B2/en?oq=US10758063B2'
from IPython import display
display.Image("neutral_sample.png")
|
Jupyter notebook/Patent_annotation_dataset_related.ipynb
|
% ---
% jupyter:
% jupytext:
% text_representation:
% extension: .m
% format_name: light
% format_version: '1.5'
% jupytext_version: 1.14.4
% kernelspec:
% display_name: Octave
% language: octave
% name: octave
% ---
t = 1:0.1:100;
% +
%plot inline
% -
setenv("GNUTERM", "X11")
% +
%plot -f svg
% -
plot(t, sin(t/4))
|
Untitled.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/camilorey/material_clases/blob/main/insercion_cliente_banco_class_A.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="rl9LVsW5DjlK" outputId="523d0391-2386-4a6d-d67c-f6a504131240"
# to be able to upload files
from google.colab import files
import io  # to extract the file's contents
# to process the data
import pandas as Pandas
# to connect to a database
import psycopg2 as Psycopg2
# + [markdown] id="njoEz0WSD5Zx"
# Upload the Excel file to Google Colab.
# + colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "<KEY>", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 251} id="wxLKIstfD77X" outputId="eedd90d6-4968-4c22-c409-6f44c42a33b0"
# open the upload dialog for the Excel file with the client information
subidorDeArchivos = files.upload()
# convert the uploaded file into a DataFrame
# remember to use the same file name in the next line, otherwise the Colab will not work
dataset = Pandas.read_excel(io.BytesIO(subidorDeArchivos['informacionClientes.xlsx']))
# take a peek at the file to see what we have
dataset.head()
# + [markdown] id="0aAIPNReFG2Y"
# We need to extract the information from the rows of the DataFrame.
# + id="RQTM6PTvFJ9Q"
insertEnCliente = "INSERT INTO cliente(cedula,estrato,sexo,edad,saldo) VALUES "
listaInserts = []
for idx,fila in dataset.iterrows():
cc = "'"+str(fila['cedula'])+"'"
est = str(fila['estrato'])
sex = "'"+fila['sexo']+"'"
ed = str(fila['edad'])
sald = str(fila['saldo'])
listaInserts.append(insertEnCliente+ '('+cc+','+est+','+sex+','+ed+','+sald+')')
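# A note on the loop above: concatenating values into the SQL string breaks on names containing quotes and is vulnerable to SQL injection. A minimal sketch of the parameterized alternative (the rows below are made up; with a live connection one would call `cursorDB.executemany(insert_sql, params)` and let the driver do the quoting):

```python
# Parameterized INSERT: placeholders instead of string concatenation.
insert_sql = ("INSERT INTO cliente(cedula, estrato, sexo, edad, saldo) "
              "VALUES (%s, %s, %s, %s, %s)")

filas = [  # stand-in for dataset.iterrows()
    {'cedula': '123', 'estrato': 3, 'sexo': 'F', 'edad': 24, 'saldo': 1500.0},
    {'cedula': '456', 'estrato': 2, 'sexo': 'M', 'edad': 31, 'saldo': 980.5},
]
# one tuple of parameters per row, in column order
params = [(f['cedula'], f['estrato'], f['sexo'], f['edad'], f['saldo'])
          for f in filas]
print(params[0])
```

# psycopg2 substitutes the `%s` placeholders safely at execution time, so no manual quoting of text columns is needed.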
# + [markdown] id="mtP2ifFqIFq_"
# What the resulting inserts look like
# + colab={"base_uri": "https://localhost:8080/"} id="Uds9ARR9IHnG" outputId="f2e99585-3b9c-464a-8c38-a3d3f6acd051"
listaInserts[0:5]
# + [markdown] id="MPmwb6kCJUN9"
# my database credentials
# + id="CjcmkVtoJWI_"
servidor_db = 'ruby.db.elephantsql.com'
nombre_db = 'nhxutetr'
usuario_db = 'nhxutetr'
password_db = '<KEY>'
# + [markdown] id="0ugNHjH3J0gl"
# The block of instructions for talking to the database in Python: a try/except block.
# + colab={"base_uri": "https://localhost:8080/"} id="OZU4t9xkJ6Hs" outputId="9bc38887-5e81-4af7-85fb-e5eab8ea81c4"
conexionDB = None
try:  # try these instructions
    # open a connection to the database
    conexionDB = Psycopg2.connect(host=servidor_db, database=nombre_db, user=usuario_db, password=password_db)
    print("All good, we were able to connect to the database")
    cursorDB = conexionDB.cursor()
    numInsert = 0
    for insert in listaInserts:
        cursorDB.execute(insert)
    # make the changes permanent in the database
    conexionDB.commit()
    conexionDB.close()
except (Exception, Psycopg2.DatabaseError) as Error:
    print("Something went wrong")
    print(Error)
# + [markdown] id="3Rha7JjAPpp4"
# A generic method for running queries against the DB.
# + id="kwUEap9JPnyp"
def busquedaComoDataFrame(queryBusqueda):
    conexionDB = None
    resultado = None  # DataFrame that will hold the query result
    try:
        # connect to the database (this instruction is always the same)
        conexionDB = Psycopg2.connect(host=servidor_db, database=nombre_db, user=usuario_db, password=password_db)
        # confirm that we are connected to the database
        print("connection to the database established")
        # run the query and load the result straight into the DataFrame
        resultado = Pandas.read_sql_query(queryBusqueda, conexionDB)
        print("result obtained, all good")
        # now close the connection
        conexionDB.close()
    except (Exception, Psycopg2.DatabaseError) as Error:
        print("An error has occurred")
        print(Error)
    finally:
        if conexionDB is not None:
            conexionDB.close()
    # however the DB access went, we need to return the result
    return resultado
# + id="9r1M8TXtPuGe"
queryJovenes = """SELECT cliente.cedula,
cliente.edad,
cliente.estrato
FROM cliente
WHERE cliente.edad <25;"""
# + colab={"base_uri": "https://localhost:8080/"} id="6xE_onHEP5N1" outputId="5201ad65-e08e-4138-a88e-9f486fbbf3c3"
info_jovenes = busquedaComoDataFrame(queryJovenes)
# + colab={"base_uri": "https://localhost:8080/", "height": 402} id="kxwjNUZlP_9R" outputId="f0b8ff39-122a-4c69-e0c1-e05c5eb6b3bd"
info_jovenes
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="RgPTksf3QWc8" outputId="3e2d687c-a7f5-444d-9536-0f46e770d757"
info_jovenes['edad'].plot(linewidth=0.5)
# + colab={"base_uri": "https://localhost:8080/"} id="YEHQ4v6DQbcl" outputId="56c23a46-6c27-4db3-a788-9c79fd1aea1d"
print("promedio de edad jóvenes", info_jovenes['edad'].mean())
# + id="sSMrHbaPwKqz"
from sklearn.linear_model import LinearRegression
regresor = LinearRegression()
# + id="mjEQByofzURg"
# note: X_train, y_train and X_test are assumed to be defined in earlier cells
regresor.fit(X_train,y_train)
# + id="lrEMmCAF2WEh"
y_prediccion = regresor.predict(X_test)
|
insercion_cliente_banco_class_A.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # networkm
#
# > Network Models based on `networkx` MultiDiGraph objects endowed with dynamics. `Graph Functions` contains various manipulation tools for graph objects. `Model Functions` contains methods of simulating these graphs as coupled, time-delayed differential equations with noise, with current support for Boolean functions / Boolean networks. `Network Class` contains the culmination of these functions in a single `BooleanNetwork` class object for pipelined analysis. `PUF Functions` analyzes these networks in the context of physically unclonable functions. Accelerated with `numba`.
# ## Install
# `pip install networkm`
# ## How to use
import networkm
from networkm import *
# This package models Boolean Networks with dynamics of the form
#
# $\tau_{i}\frac{dx_{i}}{dt}=-x_{i}(t)+f_{i}[pred_{i}(t-delays)]+noise$
#
# where tau is a time-constant, f is a logical function, $pred_{i}$ are the nodes flowing into node $x_{i}$ after some time-delay along each edge, and the noise term is random.
# We can quickly simulate entire distributions of complex networks and analyze their statistics:
bne=BooleanNetworkEnsemble(classes=3,instances=3,challenges=3,repeats=3,
g = (nx.random_regular_graph,3,256),f=XOR,tau=(rng.normal,1,0.1),a=np.inf,
edge_replacements=dict(a=np.inf,tau=(rng.normal,0.5,0.05),
f=MPX,delay=(rng.random,0,0.5)),
delay=(rng.random,0,1),dt=0.01,T=25,noise=0.01,hold=(rng.normal,1,0.1),
decimation=None)
plot_mu(bne.data)
plot_lya(bne.data)
bne[0].plot_graph()
# Or start more simply, and consider a Ring Oscillator / Repressilator: https://en.wikipedia.org/wiki/Ring_oscillator , https://en.wikipedia.org/wiki/Repressilator. Real-world implications in e.g. circuit design and gene-regulatory networks.
#
# The system is a ring of 3 nodes, each of which connects to its left neighbor, and the nodes cyclically invert each other.
g=ring(N=3,left=True,right=False,loop=False)
print_graph(g)
# We model this with the simplest case, as follows. We give each node the NOT function. This function is executed differentially with a time-constant of 1. The node receives its neighbors state instantly; we put no explicit time-delays along edges, and include no noise. We initialize one node to 1, and hold all nodes at their steady-state value from this configuration for the default value of one time-constant. Then they are released and have the following dynamics:
ro=BooleanNetwork(g=ring(N=3,left=True,right=False,loop=False),
f=NOT,
tau=1,
delay=0,
noise=0,
a=np.inf, #this makes the derivative integer-valued; see `sigmoid` function
init=[1,0,0],
hold=None,
edge_replacements=None,
T=15,
dt=0.01
)
fig_params(reset=True)
ro.plot_timeseries()
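# The same qualitative trajectory can be reproduced without the library. A minimal forward-Euler sketch under stated assumptions (hard NOT via rounding, corresponding to a = infinity; `np.roll` is chosen here as the "left neighbor" orientation, which may be mirrored relative to `ring`):

```python
import numpy as np

# Forward-Euler integration of tau * dx/dt = -x + NOT(left neighbor), tau = 1
dt, T = 0.01, 15.0
x = np.array([1.0, 0.0, 0.0])  # init=[1, 0, 0]
traj = [x.copy()]
for _ in range(int(T / dt)):
    inputs = np.roll(x, 1)           # each node reads one neighbor around the ring
    target = 1.0 - np.round(inputs)  # hard Boolean NOT of the input
    x = x + dt * (-x + target)       # relax toward the logic target
    traj.append(x.copy())
traj = np.array(traj)
print(traj.shape)
```

# Because a 3-ring of inverters has no consistent Boolean fixed point, the states keep chasing each other and the trajectory oscillates, as in the plot above.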
# The dynamics are not restricted to the edges of a hypercube, allowing Boolean Networks to explore regions of the analog phase space in ways that traditional models forcing only binary valued states don't capture:
ro.plot_3d()
# For a more complex example we consider a "Hybrid Boolean Network" composed of a multiplexer - which forces initial conditions using a clock - connected to a chaotic ring network, which executes the XOR function. This has real-world implications in e.g. cryptography and physically unclonable functions (PUF) as an HBN-PUF - see https://ieeexplore.ieee.org/document/9380284.
#
# More explicitly, we consider a 16-node ring where each node executes the 3-input XOR function on itself and its two neighbors. We include noise, time-delays, different rise/fall times for tau, and replace each node with itself plus a multiplexer that sets the initial condition and copies the state thereafter, with its own set of dynamical constants.
b=BooleanNetwork(g = nx.random_regular_graph(3,16),
a = (rng.normal,15,5),
tau = (rng.normal,[0.5,0.4],[0.1,0.05]),
f = XOR,
delay = (rng.random,0,1),
edge_replacements = dict(
delay = (rng.normal,0.5,0.1),
a = (rng.normal,15,5),
tau = (rng.normal,[0.2,0.15],[0.05,0.05]),
f = MPX
),
T = 15,
dt = 0.01,
noise = 0.01,
init = None,
hold = None,
plot=False,
)
b.plot_graph()
# We can quickly analyze differences between e.g randomly shuffled attributes and noise:
chal=b.random_init()
x,x0,y,y0=b.integrate(init=chal,noise=0.),\
b.integrate(init=chal,noise=0.1),\
b.query(instances=1,challenges=[chal],repeats=1,noise=0)[0,0,0],\
b.integrate(noise=0.1) # parameters have been shuffled by query
plot_comparison(x,x0,y,y0,i=0)
sidis.refresh()
|
index.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
import math
x0 = 895000
y0 = 577000
t0 = 0
tmax = 1
a = 0.34
b = 0.93
c = 0.54
h = 0.29
a2 = 0.31
b2 = 0.88
c2 = 0.41
h2 = 0.41
def P(t):
p=2*math.sin(t)
return p
def Q(t):
q=math.cos(t)+3
return q
def P2(t):
p=2*math.sin(2*t)
return p
def Q2(t):
q=math.cos(t)+3
return q
def f(y, t):
y1, y2 = y
return [-a*y1 - b*y2 + P(t), -c*y1 - h*y2 + Q(t) ]
def f2(y, t):
y1, y2 = y
return [-a2*y1 - b2*y2 + P2(t), -c2*y1*y2 - h2*y2 + Q2(t) ]
t = np.linspace( 0, tmax, num = 100)
y0 = [x0, y0]
w1 = odeint(f, y0, t)
y11 = w1[:,0]
y21 = w1[:,1]
fig = plt.figure(facecolor='white')
plt.plot(t, y11, t, y21, linewidth=2)
plt.ylabel("x, y")
plt.xlabel("t")
plt.grid(True)
plt.show()
fig.savefig('03.png', dpi = 600)
w1 = odeint(f2, y0, t)
y12 = w1[:,0]
y22 = w1[:,1]
fig2 = plt.figure(facecolor='white')
plt.plot(t, y12, t, y22, linewidth=2)
plt.ylabel("x, y")
plt.xlabel("t")
plt.grid(True)
plt.show()
fig2.savefig('04.png', dpi = 600)
# -
|
lab-03/lab03-sadautov.ipynb
|
# ##### Copyright 2021 Google LLC.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# # coloring_ip
# <table align="left">
# <td>
# <a href="https://colab.research.google.com/github/google/or-tools/blob/master/examples/notebook/contrib/coloring_ip.ipynb"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/colab_32px.png"/>Run in Google Colab</a>
# </td>
# <td>
# <a href="https://github.com/google/or-tools/blob/master/examples/contrib/coloring_ip.py"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/github_32px.png"/>View source on GitHub</a>
# </td>
# </table>
# First, you must install [ortools](https://pypi.org/project/ortools/) package in this colab.
# !pip install ortools
# +
# Copyright 2010 <NAME> <EMAIL>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Simple coloring problem using MIP in Google CP Solver.
Inspired by the GLPK:s model color.mod
'''
COLOR, Graph Coloring Problem
Written in GNU MathProg by <NAME> <<EMAIL>>
Given an undirected loopless graph G = (V, E), where V is a set of
nodes, E <= V x V is a set of arcs, the Graph Coloring Problem is to
find a mapping (coloring) F: V -> C, where C = {1, 2, ... } is a set
of colors whose cardinality is as small as possible, such that
F(i) != F(j) for every arc (i,j) in E, that is adjacent nodes must
be assigned different colors.
'''
Compare with the MiniZinc model:
http://www.hakank.org/minizinc/coloring_ip.mzn
This model was created by <NAME> (<EMAIL>)
Also see my other Google CP Solver models:
http://www.hakank.org/google_or_tools/
"""
import sys
from ortools.linear_solver import pywraplp
# Create the solver.
sol = 'CBC'  # choose 'GLPK' or 'CBC'; in the original script this came from the command line
print('Solver: ', sol)
if sol == 'GLPK':
# using GLPK
solver = pywraplp.Solver('CoinsGridGLPK',
pywraplp.Solver.GLPK_MIXED_INTEGER_PROGRAMMING)
else:
# Using CBC
solver = pywraplp.Solver('CoinsGridCLP',
pywraplp.Solver.CBC_MIXED_INTEGER_PROGRAMMING)
#
# data
#
# max number of colors
# [we know that 4 suffices for normal maps]
nc = 5
# number of nodes
n = 11
# set of nodes
V = list(range(n))
num_edges = 20
#
# Neighbours
#
# This data correspond to the instance myciel3.col from:
# http://mat.gsia.cmu.edu/COLOR/instances.html
#
# Note: 1-based (adjusted below)
E = [[1, 2], [1, 4], [1, 7], [1, 9], [2, 3], [2, 6], [2, 8], [3, 5], [3, 7],
[3, 10], [4, 5], [4, 6], [4, 10], [5, 8], [5, 9], [6, 11], [7, 11],
[8, 11], [9, 11], [10, 11]]
#
# declare variables
#
# x[i,c] = 1 means that node i is assigned color c
x = {}
for v in V:
for j in range(nc):
x[v, j] = solver.IntVar(0, 1, 'v[%i,%i]' % (v, j))
# u[c] = 1 means that color c is used, i.e. assigned to some node
u = [solver.IntVar(0, 1, 'u[%i]' % i) for i in range(nc)]
# number of colors used, to minimize
obj = solver.Sum(u)
#
# constraints
#
# each node must be assigned exactly one color
for i in V:
solver.Add(solver.Sum([x[i, c] for c in range(nc)]) == 1)
# adjacent nodes cannot be assigned the same color
# (and adjust to 0-based)
for i in range(num_edges):
for c in range(nc):
solver.Add(x[E[i][0] - 1, c] + x[E[i][1] - 1, c] <= u[c])
# objective
objective = solver.Minimize(obj)
#
# solution
#
solver.Solve()
print()
print('number of colors:', int(solver.Objective().Value()))
print('colors used:', [int(u[i].SolutionValue()) for i in range(nc)])
print()
for v in V:
print('v%i' % v, ' color ', end=' ')
for c in range(nc):
if int(x[v, c].SolutionValue()) == 1:
print(c)
print()
print('WallTime:', solver.WallTime())
if sol == 'CBC':
print('iterations:', solver.Iterations())
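# A solver-independent sanity check: whatever assignment comes back, we can verify that it is a proper coloring. A minimal sketch using a greedy coloring of the same myciel3 edge list (greedy is proper by construction, though it may use more colors than the optimum):

```python
# Greedy proper coloring of the myciel3 instance (1-based edge list).
E = [[1, 2], [1, 4], [1, 7], [1, 9], [2, 3], [2, 6], [2, 8], [3, 5], [3, 7],
     [3, 10], [4, 5], [4, 6], [4, 10], [5, 8], [5, 9], [6, 11], [7, 11],
     [8, 11], [9, 11], [10, 11]]
n = 11
adj = {v: set() for v in range(n)}
for a, b in E:
    adj[a - 1].add(b - 1)
    adj[b - 1].add(a - 1)

color = {}
for v in range(n):
    used = {color[u] for u in adj[v] if u in color}      # colors of colored neighbors
    color[v] = min(c for c in range(n) if c not in used)  # smallest unused color

assert all(color[a - 1] != color[b - 1] for a, b in E)  # proper coloring
print('colors used:', len(set(color.values())))
```

# The same properness check can be run on the `x[v, c]` solution values returned by the MIP.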
|
examples/notebook/contrib/coloring_ip.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #### <NAME>, <NAME>
import pandas as pd
df = pd.read_excel('drug-dataset.xlsx')
df.head()
from scipy import stats
df['exp_BP_diff'] = df["After_exp_BP"] - df['Before_exp_BP']
stats.shapiro(df['exp_BP_diff'])
# ##### As the second element of the tuple above (the p-value) is less than 0.05, we can conclude that our distribution is not normal
# ##### Since the number of observations is larger than 30, we can rely on the Central Limit Theorem
#
#
import matplotlib.pyplot as plt
plt.hist(df["exp_BP_diff"])
# +
import numpy as np
from scipy.stats import norm
# Generate some data for this demonstration.
#data = norm.rvs(10.0, 2.5, size=500)
# Fit a normal distribution to the data:
mu, std = norm.fit(df["exp_BP_diff"])
# Plot the histogram.
plt.hist(df["exp_BP_diff"], bins=10, normed=True, alpha=0.5, color='y')
# Plot the PDF.
xmin, xmax = plt.xlim()
x = np.linspace(xmin, xmax, 100)
p = norm.pdf(x, mu, std)
plt.plot(x, p, 'k', linewidth=2)
title = "Fit results: mu = %.2f, std = %.2f" % (mu, std)
plt.title(title)
plt.show()
# -
# ### Point Estimate:
# Obtaining a sample of size 20 from our data :
#
df['exp_BP_diff'].sample(n=20, random_state=1)
sample = df['exp_BP_diff'].sample(n=20, random_state=1)
# #### Sample Mean
sample.mean()
# #### sample standard deviation
sample.std()
# ### Confidence Interval Estimate
# +
import scipy.stats as st
st.t.interval(0.95, len(sample)-1, loc=np.mean(sample), scale=st.sem(sample))
# +
alpha = 0.05  # significance level = 5%
n = len(sample)  # sample size
s2 = np.var(sample, ddof=1)  # sample variance
dof = n - 1  # degrees of freedom (renamed so the DataFrame `df` is not shadowed)
upper = (n - 1) * s2 / stats.chi2.ppf(alpha / 2, dof)
lower = (n - 1) * s2 / stats.chi2.ppf(1 - alpha / 2, dof)
(lower, upper)
# -
# #### T-test:
# T-test for mean
# +
from scipy.stats import ttest_ind
stats.ttest_1samp(sample, 35)
# -
# The assumption is that the mean of the population is greater than 35; since the p-value is less than alpha, it can be concluded that our null hypothesis is not valid
# #### T test for two subpopulation :
# +
df = pd.read_excel('drug-dataset.xlsx')
df['exp_BP_diff'] = df["After_exp_BP"] - df['Before_exp_BP']
sample = df.sample(n=24, random_state=1)
# -
# separating the dataframe by gender, male and female:
score_female = sample[sample["Gender"] == "F"]["exp_BP_diff"]
score_male = sample[sample['Gender'] == 'M']["exp_BP_diff"]
stats.ttest_ind(score_female, score_male)
# We assumed that the means of the two samples (male and female) are equal, but as the p-value indicates, they are not
# ### F Test :
stats.f_oneway(score_female, score_male)
#
# We assumed that the variances of the two samples are equal, but the p-value indicates that they are not (note that scipy's `f_oneway` actually performs a one-way ANOVA, which compares group means)
|
statistical-modeling.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] deletable=true editable=true
# # Transform and split DWPCs, assess performance
# + deletable=true editable=true
import itertools
import bz2
import pandas
import numpy
import sklearn.metrics
from scipy.special import logit
# + deletable=true editable=true
unperm_name = 'wikidata-v0.1'
# + deletable=true editable=true
feature_df = pandas.read_table('data/matrix/features.tsv.bz2')
feature_type_df = pandas.read_table('data/matrix/feature-type.tsv')
# + deletable=true editable=true
feature_df.head(2)
# + deletable=true editable=true
feature_type_df.head(2)
# + deletable=true editable=true
def transform_dwpcs(x, scaler):
x = numpy.array(x)
return numpy.arcsinh(x / scaler)
transformed_df = feature_df.copy()
dwpc_features = feature_type_df.query("feature_type == 'dwpc'").feature
degree_features = feature_type_df.query("feature_type == 'degree'").feature
feature_to_scaler = dict(zip(feature_type_df.feature, feature_type_df.unperm_mean))
for column in dwpc_features:
transformed_df[column] = transform_dwpcs(transformed_df[column], feature_to_scaler[column])
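# Why `arcsinh`? It behaves like `x / scaler` near zero but grows only logarithmically for large values, taming the heavy right tail of DWPCs while mapping zero to zero. A quick numerical illustration:

```python
import numpy

x = numpy.array([0.0, 0.01, 1.0, 100.0, 10000.0])
y = numpy.arcsinh(x)
# near zero: arcsinh(x) ~ x;  for large x: arcsinh(x) ~ log(2x)
print(y)
```

# Unlike `log(x)`, no pseudocount is needed for the zero-valued DWPCs, and the transform stays monotone.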
# + deletable=true editable=true
column_names = list()
columns = list()
for metapath in dwpc_features:
df = pandas.pivot_table(transformed_df, values=metapath, index=['compound_id', 'disease_id'], columns='hetnet')
df = df[df[unperm_name].notnull()]
dwpc = df.iloc[:, 0]
pdwpc = df.iloc[:, 1:].mean(axis='columns')
rdwpc = dwpc - pdwpc
for column in dwpc, pdwpc, rdwpc:
columns.append(column)
for feature_type in 'dwpc', 'pdwpc', 'rdwpc':
column_names.append('{}_{}'.format(feature_type, metapath))
split_df = pandas.concat(columns, axis='columns')
split_df.columns = column_names
split_df.reset_index(inplace=True)
# + deletable=true editable=true
split_df.head(2)
# + deletable=true editable=true
base_df = feature_df.query("hetnet == @unperm_name").copy()
base_df.insert(8, 'prior_logit', logit(base_df['prior_prob']))
for metaedge in degree_features:
    base_df['degree_{}'.format(metaedge)] = numpy.arcsinh(base_df[metaedge])
base_df.drop(
['hetnet', 'primary', 'prior_prob'] + list(degree_features) + list(dwpc_features),
axis='columns', inplace=True)
transformed_df = base_df.merge(split_df)
transformed_df.head(2)
# + deletable=true editable=true
path = 'data/matrix/wikidata-v0.1/transformed-features.tsv.bz2'
with bz2.open(path, 'wt') as write_file:
transformed_df.to_csv(write_file, sep='\t', index=False, float_format='%.5g')
# + [markdown] deletable=true editable=true
# ### Compute performance
# + deletable=true editable=true
transformed_df = transformed_df.dropna(axis=1)
transformed_df.head(2)
# + deletable=true editable=true
rows = list()
for column in transformed_df.columns[transformed_df.columns.str.contains('dwpc')]:
feature_type, metapath = column.split('_', 1)
auroc = sklearn.metrics.roc_auc_score(transformed_df.status, transformed_df[column])
rows.append([feature_type + '_auroc', metapath, auroc])
auroc_df = pandas.DataFrame(rows, columns=['feature_type', 'metapath', 'auroc'])
auroc_df = auroc_df.pivot_table(values='auroc', index='metapath', columns='feature_type').reset_index()
auroc_df.head(2)
# + deletable=true editable=true
primary_auroc_df = pandas.read_table('data/feature-performance/primary-aurocs.tsv')
primary_auroc_df = primary_auroc_df.rename(columns={'feature': 'metapath', 'auroc_permuted': 'pdwpc_primary_auroc', 'pval_auroc': 'pval_delta_auroc'})
primary_auroc_df = primary_auroc_df[['metapath', 'nonzero', 'pdwpc_primary_auroc', 'delta_auroc', 'pval_delta_auroc']]
auroc_df = auroc_df.merge(primary_auroc_df)
auroc_df.head(2)
# + deletable=true editable=true
auroc_df.to_csv('data/feature-performance/auroc.tsv', sep='\t', index=False, float_format='%.5g')
# + deletable=true editable=true
#auroc_df.sort_values('rdwpc_auroc', ascending = False)
idx = ~auroc_df.metapath.str.contains('CduftD') & ~auroc_df.metapath.str.contains('DduftC')
auroc_df[idx].sort_values('rdwpc_auroc', ascending=False).head()
# + [markdown] deletable=true editable=true
# ## Visualization Sandbox
# + deletable=true editable=true
# %matplotlib inline
import seaborn
# + deletable=true editable=true
seaborn.jointplot(transformed_df['pdwpc_CpiwPeGgaDso>D'], transformed_df['rdwpc_CpiwPeGgaDso>D'], alpha = 0.1);
# + deletable=true editable=true
seaborn.jointplot(transformed_df['pdwpc_CpiwPeGgaD'], transformed_df['rdwpc_CpiwPeGgaD'], alpha = 0.1);
# + deletable=true editable=true
seaborn.jointplot(auroc_df['dwpc_auroc'], auroc_df['pdwpc_auroc'], alpha = 0.1);
# + deletable=true editable=true
seaborn.jointplot(auroc_df['delta_auroc'], auroc_df['rdwpc_auroc'], alpha = 0.1);
# + deletable=true editable=true
|
all-features/5.5-transplit-DWPCs.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7.12 ('ed')
# language: python
# name: python3
# ---
import tensorflow as tf
import numpy as np
mirrored_strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0","/gpu:1"])
print('Number of devices: {}'.format(mirrored_strategy.num_replicas_in_sync))
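A typical next step (sketched here, not taken from this notebook; the model and optimizer are illustrative) is to build the model inside `strategy.scope()` so its variables are mirrored across the strategy's devices:

```python
import tensorflow as tf

# Sketch only: variables created inside strategy.scope() are replicated
# across the strategy's devices; with no GPUs available, MirroredStrategy
# falls back to the CPU. The tiny model below is purely illustrative.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer='adam', loss='mse')
```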
# +
# Default flag settings
flag = {}
flag['model_name'] = 'efficientdet-d0'
flag['logdir'] = '/tmp/deff/'
flag['runmode'] = 'dry'
flag['trace_filename'] = None
flag['threads'] = 0
flag['bm_runs'] = 10
flag['tensorrt'] = None
flag['delete_logdir'] = True
flag['freeze'] = False
flag['use_xla'] = False
flag['batch_size'] = 1
flag['ckpt_path'] = None
flag['export_ckpt'] = None
flag['hparams'] = ''
flag['input_image'] = None
flag['output_image_dir'] = None
flag['input_video'] = None
flag['output_video'] = None
flag['line_thickness'] = None
flag['max_boxes_to_draw'] = 100
flag['min_score_thresh'] = 0.4
flag['nms_method'] = 'hard'
flag['saved_model_dir'] = '/tmp/saved_model'
flag['tfile_path'] = None
# -
flag['runmode']
|
efficientdet/p_multi_gpu_ed_train.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="BwaBCc1EpyLj"
# # In this notebook, we're going to cover some of the most fundamental concepts of tensors using TensorFlow
#
# More specifically, we're going to cover:
# * Introduction to tensors
# * Getting information from tensors
# * Manipulating tensors
# * Tensors & NumPy
# * Using @tf.function (a way to speed up your regular Python functions)
# * Using GPUs with TensorFlow (or TPUs)
# * Exercises to try for yourself!
#
# See full course materials on GitHub: https://github.com/mrdbourke/tensorflow-deep-learning/
# + [markdown] id="6nyqXj0zqArZ"
# ## Introduction to Tensors
# + colab={"base_uri": "https://localhost:8080/"} id="P1O0-4L5qgr1" outputId="de0d30ff-a6e2-4600-b7c4-e9540604b0ee"
# Import TensorFlow
import tensorflow as tf
print(tf.__version__)
# + colab={"base_uri": "https://localhost:8080/"} id="fCovQKKHqyft" outputId="8753063f-2e7e-456a-d2dc-973feac33925"
# Create tensors with tf.constant()
scalar = tf.constant(7)
scalar
# + colab={"base_uri": "https://localhost:8080/"} id="_1ooIMQfrFeQ" outputId="e3595968-5e55-4717-ebab-809f54111f58"
# Check the number of dimensions of a tensor (ndim stands for number of dimensions)
scalar.ndim
# + colab={"base_uri": "https://localhost:8080/"} id="oInbmHAWreFO" outputId="d3204cc7-4e07-45b3-f263-7b8623eb6125"
# Create a vector
vector = tf.constant([10, 10])
vector
# + colab={"base_uri": "https://localhost:8080/"} id="d9knHLcLrm2N" outputId="34d73a43-4d08-499e-b78e-9d395d7eff58"
# Check the dimension of our vector
vector.ndim
# + colab={"base_uri": "https://localhost:8080/"} id="OPSffnFgryen" outputId="ba547f4b-d8cc-4888-8872-d859b05f550f"
# Create a matrix (has more than 1 dimension)
matrix = tf.constant([[10, 7],
[7, 10]])
matrix
# + colab={"base_uri": "https://localhost:8080/"} id="ygKiGAHpr9yI" outputId="f8b97dd1-97ae-4e10-e246-9ad883209719"
matrix.ndim
# + colab={"base_uri": "https://localhost:8080/"} id="QtK9aDsnsIx3" outputId="615bc82f-37c3-4963-928d-39db9dbb1682"
# Create another matrix
another_matrix = tf.constant([[10., 7.],
[3., 2.],
[8., 9.]], dtype=tf.float16) # specify the data type with dtype parameter
another_matrix
# + colab={"base_uri": "https://localhost:8080/"} id="ma51NiJAsrH9" outputId="c4afe9d7-c730-48a5-fde4-0239bf36c132"
# What's the number of dimensions of another_matrix?
another_matrix.ndim
# + colab={"base_uri": "https://localhost:8080/"} id="hv8MJE2ms62_" outputId="4190125f-4bfe-470f-d284-f3176df9f2e3"
# Let's create a tensor
tensor = tf.constant([[[1, 2, 3,],
[4, 5, 6]],
[[7, 8, 9],
[10, 11, 12]],
[[13, 14, 15],
[16, 17, 18]]])
tensor
# + colab={"base_uri": "https://localhost:8080/"} id="1R8Aoic0tUvM" outputId="38fa85ae-cba6-467d-f9ad-079c743bed42"
tensor.ndim
# + [markdown] id="x0OG8U6htdV5"
# What we've created so far:
#
# * Scalar: a single number
# * Vector: a number with direction (e.g. wind speed and direction)
# * Matrix: a 2-dimensional array of numbers
# * Tensor: an n-dimensional array of numbers (where n can be any number; a 0-dimensional tensor is a scalar, a 1-dimensional tensor is a vector)
# + [markdown] id="iaXQoV17ux78"
# ### Creating tensors with `tf.Variable`
# + colab={"base_uri": "https://localhost:8080/"} id="H8Td180Ft916" outputId="a46afbc2-bd47-4d7a-b8ec-52b91c0d7417"
# Create the same tensor with tf.Variable() as above
changeable_tensor = tf.Variable([10, 7])
unchangeable_tensor = tf.constant([10, 7])
changeable_tensor, unchangeable_tensor
# + colab={"base_uri": "https://localhost:8080/", "height": 196} id="cq4p8PdzuxZD" outputId="cca13872-e8c6-4175-a98e-c7e6049c34f4"
# Let's try to change one of the elements in our changeable tensor
changeable_tensor[0] = 7
changeable_tensor
# + colab={"base_uri": "https://localhost:8080/"} id="PtR3x_QbvmCu" outputId="85b28c89-5794-434f-e5a3-316362b4d675"
# How about we try .assign()
changeable_tensor[0].assign(7)
changeable_tensor
# + colab={"base_uri": "https://localhost:8080/", "height": 196} id="H5u69J3cv4Gj" outputId="fa2ade7e-db08-431b-f6aa-d9787cea7699"
# Let's try to change our unchangeable tensor
unchangeable_tensor[0].assign(7)
unchangeable_tensor
# + [markdown] id="APQ3QpdSwqhM"
# 🔑 **Note:** Rarely in practice will you need to decide whether to use `tf.constant` or `tf.Variable` to create tensors, as TensorFlow does this for you. However, if in doubt, use `tf.constant` and change it later if needed.
# + [markdown] id="PAnaGWqAwAMD"
# ### Creating random tensors
#
# Random tensors are tensors of some arbitrary size which contain random numbers.
# + id="m4_XSR4_xT9M"
# Create two random (but the same) tensors
random_1 = tf.random.Generator.from_seed(7) # set seed for reproducibility
random_1 = random_1.normal(shape=(3, 2))
random_2 = tf.random.Generator.from_seed(7)
random_2 = random_2.normal(shape=(3, 2))
# Are they equal?
random_1, random_2, random_1 == random_2
# + [markdown] id="dsmVteVgymQJ"
# ### Shuffle the order of elements in a tensor
# + colab={"base_uri": "https://localhost:8080/"} id="bhzaeMlzz-_4" outputId="6d390dac-c719-4b32-9d1a-7d1a15661248"
# Shuffle a tensor (valuable for when you want to shuffle your data so the inherent order doesn't affect learning)
not_shuffled = tf.constant([[10, 7],
[3, 4],
[2, 5]])
# Shuffle our non-shuffled tensor
tf.random.shuffle(not_shuffled)
# + colab={"base_uri": "https://localhost:8080/"} id="4yj_yuzO1WiP" outputId="7c27e284-4fce-4bd0-e619-953502150ddf"
# Shuffle our non-shuffled tensor
tf.random.set_seed(42)
tf.random.shuffle(not_shuffled, seed=42)
# + colab={"base_uri": "https://localhost:8080/"} id="w66JOweN0m1k" outputId="fe753c6f-7963-43be-94c8-032e7d5f12f8"
not_shuffled
# + [markdown] id="fzxSvIOX16vu"
# 🛠 **Exercise:** Read through TensorFlow documentation on random seed generation: https://www.tensorflow.org/api_docs/python/tf/random/set_seed and practice writing 5 random tensors and shuffle them.
#
# It looks like if we want our shuffled tensors to be in the same order, we've got to use the global level random seed as well as the operation level random seed:
#
# > Rule 4: "If both the global and the operation seed are set: Both seeds are used in conjunction to determine the random sequence."
# + colab={"base_uri": "https://localhost:8080/"} id="JuLv22oM29PO" outputId="42ad086f-e757-463a-802b-acd553332661"
tf.random.set_seed(42) # global level random seed
tf.random.shuffle(not_shuffled, seed=42) # operation level random seed
# + [markdown] id="rgWIke8k3s2Z"
# ### Other ways to make tensors
# + colab={"base_uri": "https://localhost:8080/"} id="PWD-PYnu4UPK" outputId="31240c2f-9374-4764-d86d-90595711a605"
# Create a tensor of all ones
tf.ones([10, 7])
# + colab={"base_uri": "https://localhost:8080/"} id="B2fDzsaC4f2w" outputId="4dd03c26-ae58-4364-d7f7-a37ab9853ebd"
# Create a tensor of all zeroes
tf.zeros(shape=(3, 4))
# + [markdown] id="wU3a8ghB40fl"
# ### Turn NumPy arrays into tensors
#
# The main difference between NumPy arrays and TensorFlow tensors is that tensors can be run on a GPU (much faster for numerical computing).
# + colab={"base_uri": "https://localhost:8080/"} id="EfpBPRcX4rBK" outputId="eb9b4d35-f517-45ed-8640-cbf60a73e85e"
# You can also turn NumPy arrays into tensors
import numpy as np
numpy_A = np.arange(1, 25, dtype=np.int32) # create a NumPy array between 1 and 25
numpy_A
# X = tf.constant(some_matrix) # capital for matrix or tensor
# y = tf.constant(vector) # non-capital for vector
# + colab={"base_uri": "https://localhost:8080/"} id="7Cwftwrf5Teh" outputId="d46d9657-68e4-4512-b11f-0fcba2c0e050"
A = tf.constant(numpy_A, shape=(3, 8))
B = tf.constant(numpy_A)
A, B
# + colab={"base_uri": "https://localhost:8080/"} id="ixr6dI665eV7" outputId="8ea461d5-d2dd-4e5b-892c-b3e01cc396f2"
3 * 8
# + colab={"base_uri": "https://localhost:8080/"} id="tWehwgwY6QKq" outputId="8b7ae354-9995-4aac-ee52-1d915874db1c"
A.ndim
# + [markdown] id="4fxbyvOG5pTE"
# ### Getting information from tensors
#
# When dealing with tensors you probably want to be aware of the following attributes:
# * Shape
# * Rank
# * Axis or dimension
# * Size
# + colab={"base_uri": "https://localhost:8080/"} id="tT_QG2dg6NHC" outputId="01802615-33a9-4ce7-e79f-43482c08edfa"
# Create a rank 4 tensor (4 dimensions)
rank_4_tensor = tf.zeros(shape=[2, 3, 4, 5])
rank_4_tensor
# + colab={"base_uri": "https://localhost:8080/"} id="y39mIgfhaiTi" outputId="0a5ff4c7-ab13-4cc8-d40e-5b73ddccde60"
rank_4_tensor.shape, rank_4_tensor.ndim, tf.size(rank_4_tensor)
# + colab={"base_uri": "https://localhost:8080/"} id="tQXtQmH1a4iM" outputId="ac63cd43-d31e-4561-c172-992887e8efb6"
2 * 3 * 4 * 5
# + colab={"base_uri": "https://localhost:8080/"} id="4DyWKdLXbRo2" outputId="12563639-ec53-42ec-b77a-afdae3f8cad6"
# Get various attributes of our tensor
print("Datatype of every element:", rank_4_tensor.dtype)
print("Number of dimensions (rank):", rank_4_tensor.ndim)
print("Shape of tensor:", rank_4_tensor.shape)
print("Elements along the 0 axis:", rank_4_tensor.shape[0])
print("Elements along the last axis:", rank_4_tensor.shape[-1])
print("Total number of elements in our tensor:", tf.size(rank_4_tensor))
print("Total number of elements in our tensor:", tf.size(rank_4_tensor).numpy())
# + [markdown] id="1Ny7YAQrb5PO"
# ### Indexing tensors
#
# Tensors can be indexed just like Python lists.
#
# + colab={"base_uri": "https://localhost:8080/"} id="XQ8Rm6n-c26C" outputId="90bdf46e-3449-4bb1-f602-b94ff15b2628"
some_list = [1, 2, 3, 4]
some_list[:2]
# + colab={"base_uri": "https://localhost:8080/"} id="TPFfp_VDcoda" outputId="72e04fcc-acdf-40fb-88a7-a9a08101888c"
# Get the first 2 elements of each dimension
rank_4_tensor[:2, :2, :2, :2]
# + colab={"base_uri": "https://localhost:8080/"} id="MwHr9nwMdR9W" outputId="a53b8a21-5b09-41c6-eb86-a5d1c7d0251a"
some_list[:1]
# + colab={"base_uri": "https://localhost:8080/"} id="cd9gm76Udh9g" outputId="83bcb8b0-0d25-4779-94a2-9ab9398562e7"
rank_4_tensor.shape
# + colab={"base_uri": "https://localhost:8080/"} id="uDdoWqG0c8oO" outputId="14f0dae5-5de9-4a2f-ad66-30494803c04b"
# Get the first element from each dimension from each index except for the final one
rank_4_tensor[:1, :1, :1, :]
# + colab={"base_uri": "https://localhost:8080/"} id="Tx2nNGgqdYxr" outputId="d0e59a54-5ba7-4473-9ee3-2d60c57d9593"
# Create a rank 2 tensor (2 dimensions)
rank_2_tensor = tf.constant([[10, 7],
[3, 4]])
rank_2_tensor.shape, rank_2_tensor.ndim
# + colab={"base_uri": "https://localhost:8080/"} id="OcuRCghyeDaK" outputId="96064275-b92c-4b96-adbd-7d93fe068ea8"
rank_2_tensor
# + colab={"base_uri": "https://localhost:8080/"} id="yRqri6-ReErh" outputId="59314461-8e1a-42b6-ebe3-09a29cef8765"
some_list, some_list[-1]
# + colab={"base_uri": "https://localhost:8080/"} id="OMpMoXawd7vG" outputId="286b0fc3-d151-472e-875f-962fa7f8d8fd"
# Get the last item of each row of our rank 2 tensor
rank_2_tensor[:, -1]
# + colab={"base_uri": "https://localhost:8080/"} id="7Jg8DxA3eMbt" outputId="fda49b16-b283-4c39-b404-fa3c60292ac8"
# Add in extra dimension to our rank 2 tensor
rank_3_tensor = rank_2_tensor[..., tf.newaxis]
rank_3_tensor
# + colab={"base_uri": "https://localhost:8080/"} id="YLuLQdIZeeM1" outputId="9ac41455-3dc6-4208-8628-b6dd62da4c56"
# Alternative to tf.newaxis
tf.expand_dims(rank_2_tensor, axis=-1) # "-1" means expand the final axis
# + colab={"base_uri": "https://localhost:8080/"} id="CuzEEpMifAW6" outputId="fc98fc1f-b55a-4b99-8858-0263241facce"
# Expand the 0-axis
tf.expand_dims(rank_2_tensor, axis=0) # expand the 0-axis
# + colab={"base_uri": "https://localhost:8080/"} id="FzEwBgWnfOuG" outputId="ffb160a0-1401-4acc-f1ca-0feb9dbd5ea8"
rank_2_tensor
# + [markdown] id="_iEntn_WfXVS"
# ### Manipulating tensors (tensor operations)
#
# **Basic operations**
#
# `+`, `-`, `*`, `/`
# + colab={"base_uri": "https://localhost:8080/"} id="iPYE1HZ0fg9b" outputId="ac40d8fc-300f-4529-fffe-6aba78b230c8"
# You can add values to a tensor using the addition operator
tensor = tf.constant([[10, 7],
[3, 4]])
tensor + 10
# + colab={"base_uri": "https://localhost:8080/"} id="_nBLVKFJgfPe" outputId="54de4ed9-825e-48b3-b6e9-d36f7b7e55e9"
# Original tensor is unchanged
tensor
# + colab={"base_uri": "https://localhost:8080/"} id="YMrp823igljC" outputId="d887707f-404d-4a13-a2a3-75549f16cc29"
# Multiplication also works
tensor * 10
# + colab={"base_uri": "https://localhost:8080/"} id="3qpGsHvegyyl" outputId="5a26a76f-4000-4d97-b371-1161da139f82"
# Subtraction if you want
tensor - 10
# + colab={"base_uri": "https://localhost:8080/"} id="n4i7wpMeg2XW" outputId="d34453f3-a06b-460b-846b-c6403998c74a"
# We can use the tensorflow built-in function too
tf.multiply(tensor, 10)
# + colab={"base_uri": "https://localhost:8080/"} id="I34D6u5ihEg-" outputId="259d2dbd-35b6-45da-ac32-604a2a037118"
tensor
# + [markdown] id="pkXWTuXIhY1A"
# **Matrix multiplication**
#
# In machine learning, matrix multiplication is one of the most common tensor operations.
#
# There are two rules our tensors (or matrices) need to fulfil if we're going to matrix multiply them:
#
# 1. The inner dimensions must match
# 2. The resulting matrix has the shape of the outer dimensions
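The two rules above can be checked directly on tensor shapes:

```python
import tensorflow as tf

# Rule check: (3, 2) @ (2, 4) works because the inner dimensions
# (2 and 2) match, and the result takes the outer dimensions (3, 4).
a = tf.ones(shape=(3, 2))
b = tf.ones(shape=(2, 4))
result = tf.matmul(a, b)
print(result.shape)  # (3, 4)
```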
# + colab={"base_uri": "https://localhost:8080/"} id="GP28yRH2hic0" outputId="a239f5e0-2be0-44b3-d5e9-466fde894776"
# Matrix multiplication in tensorflow
print(tensor)
tf.matmul(tensor, tensor)
# + colab={"base_uri": "https://localhost:8080/"} id="EeLqJcEkj1um" outputId="fbe528dc-4f93-45c6-8b07-a6b7674ff7f4"
tensor, tensor
# + colab={"base_uri": "https://localhost:8080/"} id="wl3zUdR4jy4J" outputId="9889ee37-f84d-447a-b9d6-6aef2302ac55"
tensor * tensor
# + colab={"base_uri": "https://localhost:8080/"} id="M800SrPpjmc3" outputId="fb189e4b-bce0-4f5a-f48f-9d88ceb8d657"
# Matrix multiplication with Python operator "@"
tensor @ tensor
# + colab={"base_uri": "https://localhost:8080/"} id="FSZHcT0ukGma" outputId="fec1aabb-ff24-41c9-97a1-11b6ec394aa2"
tensor.shape
# + colab={"base_uri": "https://localhost:8080/"} id="kRY_Vit8kLCl" outputId="4c64386a-2cda-46aa-9b91-d051da849150"
# Create a (3, 2) tensor
X = tf.constant([[1, 2],
[3, 4],
[5, 6]])
# Create another (3, 2) tensor
Y = tf.constant([[7, 8],
[9, 10],
[11, 12]])
X, Y
# + colab={"base_uri": "https://localhost:8080/", "height": 237} id="UQxfdLn-ket6" outputId="e477c92d-a0c4-467a-88fe-2ebc8f48da74"
# Try to matrix multiply tensors of same shape
tf.matmul(X, Y)
# + colab={"base_uri": "https://localhost:8080/"} id="OCKaZBGSqJio" outputId="55b8bff6-6de3-45ca-f90a-ad49f2442d03"
Y
# + colab={"base_uri": "https://localhost:8080/"} id="Fjme1G2ek4x_" outputId="75ad0bdf-3792-418b-a81f-071aa4d4f29c"
# Let's change the shape of Y
tf.reshape(Y, shape=(2, 3))
# + colab={"base_uri": "https://localhost:8080/"} id="T2N4KI12qXmJ" outputId="e8c91d61-314d-48ed-ab54-d78ae16dfef4"
X.shape, tf.reshape(Y, shape=(2, 3)).shape
# + colab={"base_uri": "https://localhost:8080/"} id="p7AVnWqeqQHZ" outputId="6310497f-a433-4b0e-cdbd-f4485dfc7820"
# Try to matrix multiply X by reshaped Y
X @ tf.reshape(Y, shape=(2, 3))
# + colab={"base_uri": "https://localhost:8080/"} id="w2mQBcS9qi3V" outputId="1104ada4-e000-4b77-d047-026d789ea8fa"
tf.matmul(X, tf.reshape(Y, shape=(2, 3)))
# + colab={"base_uri": "https://localhost:8080/"} id="IrPdpX8Fqxts" outputId="08cf5509-2b33-4d63-ced9-28813c539194"
tf.reshape(X, shape=(2, 3)).shape, Y.shape
# + colab={"base_uri": "https://localhost:8080/"} id="h-JtkXgLqpUC" outputId="a7d0d096-4e64-4266-9b21-49e46d7b91c0"
# Try changing the shape of X instead of Y
tf.matmul(tf.reshape(X, shape=(2, 3)), Y)
# + colab={"base_uri": "https://localhost:8080/"} id="PlpxV41OrKk1" outputId="8eb8cb0d-911a-4045-aa7e-55ced71b4d37"
# Can do the same with transpose
X, tf.transpose(X), tf.reshape(X, shape=(2, 3))
# + colab={"base_uri": "https://localhost:8080/"} id="e4Hfk7W0rruC" outputId="8382d254-1c47-40b7-8a60-61588cc5d0ee"
# Try matrix multiplication with transpose rather than reshape
tf.matmul(tf.transpose(X), Y)
# + [markdown] id="vtJbz8X8kkvr"
# 📖 **Resource:** Info and example of matrix multiplication: https://www.mathsisfun.com/algebra/matrix-multiplying.html
# + [markdown] id="DwbKgBUisf-U"
# **The dot product**
#
# Matrix multiplication is also referred to as the dot product.
#
# You can perform matrix multiplication using:
# * `tf.matmul()`
# * `tf.tensordot()`
# * `@`
# + colab={"base_uri": "https://localhost:8080/"} id="YudW8PBMtpRk" outputId="f5978cff-5423-41f6-f153-9173811757d7"
X, Y
# + colab={"base_uri": "https://localhost:8080/"} id="WXt0pYMftLrC" outputId="4af8483e-4905-4c02-8c0d-d77191e1f447"
# Perform the dot product on X and Y (requires X or Y to be transposed)
tf.tensordot(tf.transpose(X), Y, axes=1)
# + colab={"base_uri": "https://localhost:8080/"} id="CcVXO0FLtt2G" outputId="82aca054-0759-41a8-a1d5-a63ac73b9dc2"
# Perform matrix multiplication between X and Y (transposed)
tf.matmul(X, tf.transpose(Y))
# + colab={"base_uri": "https://localhost:8080/"} id="PA4IMdztuEwc" outputId="d52fed3f-3a52-4863-83a9-baab3f2bdc7d"
# Perform matrix multiplication between X and Y (reshaped)
tf.matmul(X, tf.reshape(Y, shape=(2, 3)))
# + colab={"base_uri": "https://localhost:8080/"} id="AkVTgrLVuPY_" outputId="3a69d19c-6a57-4780-da7e-6bcd9c105ed2"
# Check the values of Y, reshape Y and transposed Y
print("Normal Y:")
print(Y, "\n") # "\n" is for newline
print("Y reshaped to (2, 3):")
print(tf.reshape(Y, (2, 3)), "\n")
print("Y transposed:")
print(tf.transpose(Y))
# + colab={"base_uri": "https://localhost:8080/"} id="BmgqBtU6vCK9" outputId="1974e439-7424-478c-e351-2a35a64e234e"
tf.matmul(X, tf.transpose(Y))
# + [markdown] id="4S9nm79nusb3"
# Generally, when performing matrix multiplication on two tensors and one of the axes doesn't line up, you will transpose (rather than reshape) one of the tensors to satisfy the matrix multiplication rules.
# + [markdown] id="ph6Ed401vcvO"
# ### Changing the datatype of a tensor
# + colab={"base_uri": "https://localhost:8080/", "height": 35} id="NeOtrv74wMA1" outputId="15744386-01ec-4e3d-8411-4d4b3d32a46a"
tf.__version__
# + colab={"base_uri": "https://localhost:8080/"} id="fIK7ucC-vsTC" outputId="d61b36e8-0916-42ab-9e36-81250ed4d9de"
# Create a new tensor with default datatype (float32)
B = tf.constant([1.7, 7.4])
B, B.dtype
# + colab={"base_uri": "https://localhost:8080/"} id="83tVK_PZwHgA" outputId="4e5b26ec-62a7-4eb2-f3a5-35c6e6a3c5cc"
C = tf.constant([7, 10])
C, C.dtype
# + colab={"base_uri": "https://localhost:8080/"} id="sa2Z2OpPwKii" outputId="da77e54d-3c25-4dae-aa03-f0fd12d7482a"
# Change from float32 to float16 (reduced precision)
D = tf.cast(B, dtype=tf.float16)
D, D.dtype
# + colab={"base_uri": "https://localhost:8080/"} id="BRoPkDzRw7t1" outputId="80094860-e8e9-4f80-d3cb-5642d8856168"
# Change from int32 to float32
E = tf.cast(C, dtype=tf.float32)
E
# + colab={"base_uri": "https://localhost:8080/"} id="u-4XZL_YxVr8" outputId="ed3f7af2-5051-4949-e692-48b195f53a72"
E_float16 = tf.cast(E, dtype=tf.float16)
E_float16
# + [markdown] id="aMKLfoICxd9E"
# ### Aggregating tensors
#
# Aggregating tensors = condensing them from multiple values down to a smaller amount of values.
# + colab={"base_uri": "https://localhost:8080/"} id="kB6W42J6xnLV" outputId="2dbc4abe-9d1c-494c-a5c3-77669ef3c779"
# Get the absolute values
D = tf.constant([-7, -10])
D
# + colab={"base_uri": "https://localhost:8080/"} id="XdvJQPlaNsFN" outputId="f6e75a8f-5490-4f63-a9b3-936ba2c8668d"
# Get the absolute values
tf.abs(D)
# + [markdown] id="wDerTCUPNwXn"
# Let's go through the following forms of aggregation:
# * Get the minimum
# * Get the maximum
# * Get the mean of a tensor
# * Get the sum of a tensor
# + colab={"base_uri": "https://localhost:8080/"} id="SXbYeL--ODKq" outputId="4452a8a6-b957-41af-d73d-22b33eac83ef"
# Create a random tensor with values between 0 and 100 of size 50
E = tf.constant(np.random.randint(0, 100, size=50))
E
# + colab={"base_uri": "https://localhost:8080/"} id="gqhaeooSOX-1" outputId="ea7b9b12-730a-40b2-fde9-00196ebf2b86"
tf.size(E), E.shape, E.ndim
# + colab={"base_uri": "https://localhost:8080/"} id="hGXDigFWObwy" outputId="b573debb-9ae2-4bfc-f1f0-26de38ca3b70"
# Find the minimum
tf.reduce_min(E)
# + colab={"base_uri": "https://localhost:8080/"} id="fHGtgyRgOlny" outputId="33865bb4-073a-460e-b803-98233b19df10"
# Find the maximum
tf.reduce_max(E)
# + colab={"base_uri": "https://localhost:8080/"} id="rYpG079IOsKN" outputId="f97d16c8-afd3-4ebc-dba6-501800dbfd7e"
# Find the mean
tf.reduce_mean(E)
# + colab={"base_uri": "https://localhost:8080/"} id="oa03eQSJOxHc" outputId="997d0936-8634-485e-e697-939f97284b36"
# Find the sum
tf.reduce_sum(E)
# + [markdown] id="FuvRPHtEO4a9"
# 🛠 **Exercise:** With what we've just learned, find the variance and standard deviation of our `E` tensor using TensorFlow methods.
# + colab={"base_uri": "https://localhost:8080/", "height": 179} id="K9wXGYSoO1sz" outputId="ee86aff9-fbf5-47d0-ba70-93213a110be2"
# Find the variance of our tensor
tf.reduce_variance(E) # won't work...
# + colab={"base_uri": "https://localhost:8080/"} id="K2eCZ9C7PsEu" outputId="f88daeca-eacc-4eb5-ae99-66c5c56c569e"
# To find the variance of our tensor, we need access to tensorflow_probability
import tensorflow_probability as tfp
tfp.stats.variance(E)
# + colab={"base_uri": "https://localhost:8080/"} id="hOtwBveqP64D" outputId="8df9353e-119e-43be-a8a4-d1eb1f37aaac"
# Find the standard deviation
tf.math.reduce_std(tf.cast(E, dtype=tf.float32))
# + colab={"base_uri": "https://localhost:8080/", "height": 321} id="FaYXVHxLQPcc" outputId="f4f247ba-fa58-4fc6-8eb7-985db05df3c7"
tf.math.reduce_std(E)
# + colab={"base_uri": "https://localhost:8080/"} id="fijFqd6GTfMP" outputId="7730cfe2-3379-42ec-bf44-347f3ec61f91"
# Find the variance of our E tensor
tf.math.reduce_variance(tf.cast(E, dtype=tf.float32))
# + [markdown] id="P_QXWLEMQsN_"
# ### Find the positional maximum and minimum
#
#
# + colab={"base_uri": "https://localhost:8080/"} id="s3EWHf4aTNEH" outputId="18058aa7-4f46-415b-9024-5aed81bc754b"
# Create a new tensor for finding positional minimum and maximum
tf.random.set_seed(42)
F = tf.random.uniform(shape=[50])
F
# + colab={"base_uri": "https://localhost:8080/"} id="mJ0gpH8hUZ9w" outputId="4fe6c884-9abe-476f-9569-c17308f5ca24"
# Find the positional maximum
tf.argmax(F)
# + colab={"base_uri": "https://localhost:8080/"} id="C7arV_9sUzBH" outputId="76cbf570-84e8-473a-e41b-07a6fd7dc80c"
# Index on our largest value position
F[tf.argmax(F)]
# + colab={"base_uri": "https://localhost:8080/"} id="ynkj9cUMU3oZ" outputId="ddbeff3c-eede-4906-b64c-40412714cf71"
# Find the max value of F
tf.reduce_max(F)
# + colab={"base_uri": "https://localhost:8080/"} id="Ut1IdVaIU8SH" outputId="324d1ae4-81d6-451a-d671-9590cffdf6b7"
# Check for equality
F[tf.argmax(F)] == tf.reduce_max(F)
# + colab={"base_uri": "https://localhost:8080/"} id="7ax9HjPiVD-P" outputId="598d47a4-a4a6-483d-f374-7fd3aa2cd67f"
# Find the positional minimum
tf.argmin(F)
# + colab={"base_uri": "https://localhost:8080/"} id="4k4CBnWGVYv5" outputId="627aa510-67a8-460a-e2c4-afab470cc713"
# Find the minimum using the positional minimum index
F[tf.argmin(F)]
# + [markdown] id="sbhmL_h2V9ZV"
# ### Squeezing a tensor (removing all single dimensions)
# + colab={"base_uri": "https://localhost:8080/"} id="JvnxCrg4WKuu" outputId="69fdd598-4c5d-4ac1-b846-89b199a21bae"
# Create a tensor to get started
tf.random.set_seed(42)
G = tf.constant(tf.random.uniform(shape=[50]), shape=(1, 1, 1, 1, 50))
G
# + colab={"base_uri": "https://localhost:8080/"} id="E5CtnUE2WUp2" outputId="542a9439-f8cb-4f98-f562-6cd2d7a100a2"
G.shape
# + colab={"base_uri": "https://localhost:8080/"} id="tbxE6MhnWhXj" outputId="a22f13dc-6f1f-4550-acd9-1408731e4822"
# Squeezing removes all single dimensions from a tensor
G_squeezed = tf.squeeze(G)
G_squeezed, G_squeezed.shape
# + [markdown] id="p_-OcZGkWq3O"
# ### One-hot encoding tensors
# + colab={"base_uri": "https://localhost:8080/"} id="kisv2t0-XNBQ" outputId="fbf6485f-7097-4771-9938-74eccbc85a4b"
# Create a list of indices
some_list = [0, 1, 2, 3] # could be red, green, blue, purple
# One hot encode our list of indices
tf.one_hot(some_list, depth=4)
# + colab={"base_uri": "https://localhost:8080/"} id="GsVof4cxXyUR" outputId="fca7ae35-aa8e-45dc-98de-dff61058405e"
# Specify custom values for one hot encoding
tf.one_hot(some_list, depth=4, on_value="yo I love deep learning", off_value="I also like to dance")
# + [markdown] id="bW-XeuhTZE6I"
# ### Squaring, log, square root
# + colab={"base_uri": "https://localhost:8080/"} id="Nt_oRKPRZju3" outputId="4f028c60-f0e2-4eb1-e6ba-ebd0b0a41ce3"
# Create a new tensor
H = tf.range(1, 10)
H
# + colab={"base_uri": "https://localhost:8080/"} id="d9XHH6o4Zn4B" outputId="25a97d6e-31d1-4b49-b422-47b1c28cc4f8"
# Square it
tf.square(H)
# + colab={"base_uri": "https://localhost:8080/", "height": 287} id="G79OZ8baaBSd" outputId="17f25e1f-6e22-4e91-cce0-856d80119175"
# Find the squareroot (will error, method requires non-int type)
tf.sqrt(H)
# + colab={"base_uri": "https://localhost:8080/"} id="Wd_s8tJrZq3r" outputId="c4a02b59-f4d0-479b-db8b-0f464e503e3e"
# Find the squareroot
tf.sqrt(tf.cast(H, dtype=tf.float32))
# + colab={"base_uri": "https://localhost:8080/"} id="jzS6Uy51Z0m-" outputId="420f8f95-8967-45c1-b2bb-0cb42cd0dee6"
# Find the log
tf.math.log(tf.cast(H, dtype=tf.float32))
# + [markdown] id="WFNQmz5oaPQV"
# ### Tensors and NumPy
#
# TensorFlow interacts beautifully with NumPy arrays.
#
# 🔑 **Note:** One of the main differences between a TensorFlow tensor and a NumPy array is that a TensorFlow tensor can be run on a GPU or TPU (for faster numerical processing).
# + colab={"base_uri": "https://localhost:8080/"} id="NGHsXAhlsU2e" outputId="ee70334c-3216-4453-9e5f-6d0469b866c8"
# Create a tensor directly from a NumPy array
J = tf.constant(np.array([3., 7., 10.]))
J
# + colab={"base_uri": "https://localhost:8080/"} id="7dwCbmQ5smvD" outputId="01298a2d-7079-4280-b9f6-5c113170f299"
# Convert our tensor back to a NumPy array
np.array(J), type(np.array(J))
# + colab={"base_uri": "https://localhost:8080/"} id="Stvjy-cqsvOo" outputId="e8e969c9-7bbe-4646-cdc7-bbc94f9a78e5"
# Convert tensor J to a NumPy array
J.numpy(), type(J.numpy())
# + colab={"base_uri": "https://localhost:8080/"} id="Nb0G4hXWs5BG" outputId="816c9f22-f1a7-4d68-a04a-62bb7bdba079"
# The default types of each are slightly different
numpy_J = tf.constant(np.array([3., 7., 10.]))
tensor_J = tf.constant([3., 7., 10.])
# Check the datatypes of each
numpy_J.dtype, tensor_J.dtype
# + [markdown] id="bjz3D0XEtYvw"
# ### Finding access to GPUs
# + colab={"base_uri": "https://localhost:8080/"} id="n0BuebkbuauS" outputId="1142cacf-04dc-4546-8d75-641a492fbbaf"
import tensorflow as tf
tf.config.list_physical_devices("GPU")
# + colab={"base_uri": "https://localhost:8080/"} id="wHBLuTiMuvmy" outputId="13b795f0-36de-4676-ca73-dcf2055d7526"
# !nvidia-smi
# + [markdown] id="1nZlExzSvxJy"
# > 🔑 **Note:** If you have access to a CUDA-enabled GPU, TensorFlow will automatically use it whenever possible.
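One way to confirm where a tensor was actually placed is to inspect its `device` string (the exact format of the string may vary by TensorFlow version):

```python
import tensorflow as tf

# Check which device a tensor landed on; on a GPU machine the device
# string ends in "GPU:0", otherwise "CPU:0".
x = tf.constant([1.0, 2.0, 3.0])
print(x.device)
```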
|
00_tensorflow_fundamentals.ipynb
|
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .cpp
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: C++
// language: ''
// name: cling
// ---
// # C++ kernel with Cling
//
// [Cling](https://cern.ch/cling) is an interpreter for C++.
//
// Yup, this is a thing. No tab completion support (yet).
class Rectangle {
private:
double w;
double h;
public:
Rectangle(double w_, double h_) {
w = w_;
h = h_;
}
double area(void) {
return w * h;
}
double perimeter(void) {
return 2 * (w + h);
}
};
Rectangle r = Rectangle(5, 4);
r.area();
Rectangle(12,23).perimeter();
r.methodmissing(); // calling an undefined method; Cling reports the error interactively
|
python/jupyter/cling/cling.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # State of Data Science
# ### Business Understanding
# Using Kaggle DS-ML Survey dataset, I will explore the following questions:
# 1. Explore how levels of education and majors vary across countries, especially the US and India
# 2. What is the most common education level and how it relates to years of experience?
# 3. What are the most common job responsibilities for each job role in India?
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
sns.set(style="darkgrid")
# ### Data Understanding
# First row contains question numbers for each question, so skipping that as we only need question text as column headers
df = pd.read_csv('multipleChoiceResponses.csv', skiprows=1)
df.head().T
df.columns.tolist()
# *Since we will only be using a few columns, we take a subset for further analysis*
df_relevant = df.iloc[:, 1:21]
df_relevant.head().T
df_relevant.columns.tolist()
# Renaming the columns appropriately
df_relevant.columns = ['Gender','Gender_self_describe','Age','Country','Highest_education','UG_major','Title','Title_text',
'Industry','Industry_text','Experience','Annual_compensation','Use_ML_models',
'Activities_Analyze','Activities_ML_service','Activities_Data_infra','Activities_Prototyping',
'Activities_Research','Activities_None','Activities_Other']
df_relevant = df_relevant.drop(columns=['Gender_self_describe','Title_text','Industry_text'])
df_relevant = df_relevant[df_relevant.Country.isin(df_relevant.Country.value_counts().index[:3])]
df_relevant.head()
plt.figure(figsize=(8,5))
sns.countplot(df_relevant.Gender)
plt.title('Gender Distribution of Data Scientists')
plt.figure(figsize=(8,5))
sns.countplot(x="Country", hue="Gender", data=df_relevant[df_relevant.Gender.isin(df_relevant.Gender.value_counts().index[:2])])
plt.title('Distribution of Data Scientists by Country and Gender')
# Males outnumber females in the DS and ML field by a significant margin, and this gap widens from the US to India and China
# ### 1. Explore how levels of education and majors vary across countries, especially the US and India
# Analyzing Educational levels. Need to map values to better work with data.
df_relevant.Highest_education.unique()
highest_edu_dict = {'Doctoral degree':'Doctoral', 'Bachelor’s degree':'Bachelors', 'Master’s degree':'Masters',
'Professional degree':'Professional',
'Some college/university study without earning a bachelor’s degree':'College_dropout',
'I prefer not to answer':'No_answer', 'No formal education past high school':'HS'}
df_relevant.Highest_education.replace(highest_edu_dict, inplace=True)
df_relevant.fillna('No_answer', inplace=True)
plt.figure(figsize=(15,5))
sns.countplot(df_relevant.Highest_education)
plt.title('Educational levels of Data Scientists')
plt.figure(figsize=(10,5))
sns.countplot(x="Highest_education", hue="Country", data=df_relevant[df_relevant.Gender.isin(df_relevant.Gender.value_counts().index[:2])])
plt.title('Distribution of Data Scientists by Country and Educational level')
df_percent_count = (df_relevant
.groupby('Country')['Highest_education']
.value_counts(normalize=True)
.mul(100)
.rename('percent')
.reset_index())
chart = df_percent_count.pipe((sns.catplot,'data'), x='Highest_education',y='percent',hue='Country',kind='bar')
chart.set_xticklabels(rotation=90)
chart.fig.suptitle('Education levels by Country')
df_relevant.UG_major.unique()
major_dict = {'Other':'Other', 'Engineering (non-computer focused)':'Engg_NonCS',
'Computer science (software engineering, etc.)':'CS',
'Social sciences (anthropology, psychology, sociology, etc.)':'Social',
'Mathematics or statistics':'Math', 'Physics or astronomy':'Physics',
'Information technology, networking, or system administration':'IT',
'A business discipline (accounting, economics, finance, etc.)':'Business',
'Environmental science or geology':'Environment',
'Medical or life sciences (biology, chemistry, medicine, etc.)':'Medicine',
'I never declared a major':'None','Humanities (history, literature, philosophy, etc.)':'Humanities',
'No_answer':'None', 'Fine arts or performing arts':'Arts'}
df_relevant.UG_major.replace(major_dict, inplace=True)
plt.figure(figsize=(15,5))
sns.countplot(df_relevant.UG_major)
df_relevant.UG_major.value_counts()
df_relevant_major = df_relevant.copy()
df_relevant_major = df_relevant_major[df_relevant_major.UG_major.isin(df_relevant_major.UG_major.value_counts().index[:5])]
df_percent_major = (df_relevant_major
.groupby('Country')['UG_major']
.value_counts(normalize=True)
.mul(100)
.rename('percent')
.reset_index())
chart = df_percent_major.pipe((sns.catplot,'data'), x='UG_major',y='percent',hue='Country',kind='bar')
chart.set_xticklabels(rotation=90)
chart.fig.suptitle('UG Major by Country')
# ### 2. How are levels of education related to job roles?
df_relevant.Country.unique()
df_relevant.Title.value_counts()
# +
plt.pie(df_relevant.Title.value_counts()[:10], labels=df_relevant.Title.value_counts().index[:10],
autopct='%1.1f%%', shadow=True, startangle=140)
plt.axis('equal')
plt.show()
# -
df_relevant.groupby(['Title'])['Highest_education'].describe()['top']
pd.merge(df_relevant[df_relevant.Country=='United States of America'].groupby(['Title'])['Highest_education'].describe()['top'],
df_relevant.groupby(['Title'])['Highest_education'].describe()['top'], left_index=True, right_index=True,
suffixes=('_US', '_Overall'), how='outer').to_clipboard()
# *We observe that most roles require a Masters degree, and even more roles require a Masters in the US*
sns.countplot(df_relevant.groupby('Title')['Highest_education'].describe()['top'])
df_relevant.groupby('Title')['Highest_education'].value_counts(normalize=True).to_excel('JobTitle_by_Education.xlsx')
# ### 3. What are the responsibilities for each role?
df_relevant[['Activities_Analyze','Activities_ML_service','Activities_Data_infra','Activities_Prototyping',
'Activities_Research','Activities_None','Activities_Other']]
df_activities = df_relevant.copy()
for col in ['Activities_Analyze','Activities_ML_service','Activities_Data_infra','Activities_Prototyping',
'Activities_Research','Activities_None','Activities_Other']:
print(df_relevant[col].unique())
# Encode the columns as numbers by mapping 'No_answer' to 0 and any answer to 1
def activities_map(c):
return int(c!='No_answer')
df_activities[['Activities_Analyze','Activities_ML_service','Activities_Data_infra','Activities_Prototyping',
'Activities_Research','Activities_None','Activities_Other']] = df_activities[['Activities_Analyze','Activities_ML_service','Activities_Data_infra','Activities_Prototyping',
'Activities_Research','Activities_None','Activities_Other']].applymap(activities_map)
df_activities[['Activities_Analyze','Activities_ML_service','Activities_Data_infra','Activities_Prototyping',
'Activities_Research','Activities_None','Activities_Other']].head()
df_relevant[['Activities_Analyze','Activities_ML_service','Activities_Data_infra','Activities_Prototyping',
'Activities_Research','Activities_None','Activities_Other']].head()
plt.figure(figsize=(10, 9))
sns.heatmap(df_activities.groupby('Title')[['Activities_Analyze','Activities_ML_service','Activities_Data_infra',
'Activities_Prototyping', 'Activities_Research','Activities_None',
'Activities_Other']].mean().T[['Data Scientist','Data Engineer','Data Analyst',
'Software Engineer','Business Analyst','DBA/Database Engineer',
'Research Scientist']], cmap='coolwarm')
# ### Conclusion
# 1. We looked at the educational levels and identified that most Data Scientists have college degrees; while they come from diverse majors, CS and Engineering majors dominate. A significant number of Data Scientists in the US have Business and Maths majors in their undergrad.
# 2. We looked at the educational levels for different job titles and observe that for most roles candidates generally have a Masters degree, and the preference for candidates with a Masters degree is more marked in the US.
# 3. We looked at the responsibilities associated with each title and built a profile for each job title. We find that Data Scientists most often build prototypes, use ML services to derive insights for products, and analyze data.
|
2_Intro_to_DS/Kaggle-DS-Survey_blog.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import requests
from bs4 import BeautifulSoup
import pandas as pd
import traceback
import statsmodels.api as sm
from matplotlib import pyplot as plt
import numpy as np
from sklearn.linear_model import LinearRegression
data_AF = pd.read_excel(r'C:\Users\com\[MSB535]\data\Breau Data\Asiana Airlines Inc. Freight.xlsx', index_col=0)
data_AP = pd.read_excel(r'C:\Users\com\[MSB535]\data\Breau Data\Asiana Airlines Inc. Passenger.xlsx', index_col=0)
data_DF = pd.read_excel(r'C:\Users\com\[MSB535]\data\Breau Data\Delta Air Lines Inc. Freight.xlsx', index_col=0)
data_DP = pd.read_excel(r'C:\Users\com\[MSB535]\data\Breau Data\Delta Air Lines Inc. Passenger.xlsx', index_col=0)
data_KF = pd.read_excel(r'C:\Users\com\[MSB535]\data\Breau Data\Korean Air Lines Co. Ltd. Freight.xlsx', index_col=0)
data_KP = pd.read_excel(r'C:\Users\com\[MSB535]\data\Breau Data\Korean Air Lines Co. Ltd. Passenger.xlsx', index_col=0)
data_CF = pd.read_excel(r'C:\Users\com\[MSB535]\data\Breau Data\Cathay Pacific Airways Ltd. Freight.xlsx', index_col=0)
data_CP = pd.read_excel(r'C:\Users\com\[MSB535]\data\Breau Data\Cathay Pacific Airways Ltd. Passenger.xlsx', index_col=0)
data_EF = pd.read_excel(r'C:\Users\com\[MSB535]\data\Breau Data\Eva Airways Corporation Freight.xlsx', index_col=0)
data_EP = pd.read_excel(r'C:\Users\com\[MSB535]\data\Breau Data\Eva Airways Corporation Passenger.xlsx', index_col=0)
data_LF = pd.read_excel(r'C:\Users\com\[MSB535]\data\Breau Data\Lufthansa German Airlines Freight.xlsx', index_col=0)
data_LP = pd.read_excel(r'C:\Users\com\[MSB535]\data\Breau Data\Lufthansa German Airlines Passenger.xlsx', index_col=0)
data_A = data_AF
data_A['Passenger'] = data_AP['Passenger'].values
data_D = data_DF
data_D['Passenger'] = data_DP['Passenger'].values
data_K = data_KF
data_K['Passenger'] = data_KP['Passenger'].values
data_C = data_CF
data_C['Passenger'] = data_CP['Passenger'].values
data_E = data_EF
data_E['Passenger'] = data_EP['Passenger'].values
data_L = data_LF
data_L['Passenger'] = data_LP['Passenger'].values
data = [data_A,data_D,data_K,data_C, data_E, data_L]
for i in range(len(data)):
data[i]['Wprice'] = data[i]['Price']/data[i]['Price'].mean()
data[i]['WFreight'] = data[i]['Freight']/data[i]['Freight'].mean()
data[i]['WPassenger'] = data[i]['Passenger']/data[i]['Passenger'].mean()
data[i]['datetime'] = pd.to_datetime(data[i].index.values)
def plot_df(df, x, y, title="", xlabel='Date', ylabel='Value', dpi=100,colors = 'color'):
plt.figure(figsize=(16,5), dpi=dpi)
plt.plot(x, y, color=colors)
plt.gca().set(title=title, xlabel=xlabel, ylabel=ylabel)
plt.xticks(np.arange(0,len(x),step = 30),x[np.arange(0,len(x),step = 30)])
plt.show()
len(data[0])+len(data[1])+len(data[2])+len(data[3])+len(data[4])+len(data[5])
data[0].info()
what = 'Freight'
plt.figure(figsize=(16,5), dpi=100)
plt.scatter(data[0]['datetime'].values, data[0][what], label = 'Asiana')
plt.scatter(data[1]['datetime'].values, data[1][what], label = 'Delta')
plt.scatter(data[2]['datetime'].values, data[2][what], label = 'Korean')
plt.scatter(data[3]['datetime'].values, data[3][what], label = 'Cathay')
plt.scatter(data[4]['datetime'].values, data[4][what], label = 'Eva')
plt.scatter(data[5]['datetime'].values, data[5][what], label = 'Lufthansa')
plt.gca().set(title='Cumulative Case-Freight')
plt.legend()
plt.show()
what = 'Passenger'
plt.figure(figsize=(16,5), dpi=100)
plt.scatter(data[0]['datetime'].values, data[0][what], label = 'Asiana')
#plt.scatter(data[1]['datetime'].values, data[1][what], label = 'Delta')
plt.scatter(data[2]['datetime'].values, data[2][what], label = 'Korean')
plt.scatter(data[3]['datetime'].values, data[3][what], label = 'Cathay')
plt.scatter(data[4]['datetime'].values, data[4][what], label = 'Eva')
plt.scatter(data[5]['datetime'].values, data[5][what], label = 'Lufthansa')
plt.gca().set(title='Cumulative Case-Passenger')
plt.legend()
plt.show()
what = 'WFreight'
plt.figure(figsize=(16,5), dpi=100)
plt.scatter(data[0]['datetime'].values, data[0][what], label = 'Asiana')
plt.scatter(data[1]['datetime'].values, data[1][what], label = 'Delta')
plt.scatter(data[2]['datetime'].values, data[2][what], label = 'Korean')
plt.scatter(data[3]['datetime'].values, data[3][what], label = 'Cathay')
plt.scatter(data[4]['datetime'].values, data[4][what], label = 'Eva')
plt.scatter(data[5]['datetime'].values, data[5][what], label = 'Lufthansa')
plt.gca().set(title='Cumulative Case-Deweighted Freight')
plt.legend()
plt.show()
what = 'WPassenger'
plt.figure(figsize=(16,5), dpi=100)
plt.scatter(data[0]['datetime'].values, data[0][what], label = 'Asiana')
plt.scatter(data[1]['datetime'].values, data[1][what], label = 'Delta')
plt.scatter(data[2]['datetime'].values, data[2][what], label = 'Korean')
plt.scatter(data[3]['datetime'].values, data[3][what], label = 'Cathay')
plt.scatter(data[4]['datetime'].values, data[4][what], label = 'Eva')
plt.scatter(data[5]['datetime'].values, data[5][what], label = 'Lufthansa')
plt.gca().set(title='Cumulative Case-Deweighted Passenger')
plt.legend()
plt.show()
for i in range(len(data)):  # aggregate across all airlines
if i == 0:
WFreight_all = data[i]['WFreight'].values
WPassenger_all = data[i]['WPassenger'].values
WPrice_all = data[i]['Wprice'].values
else:
WFreight_all = np.concatenate((WFreight_all,data[i]['WFreight'].values))
WPassenger_all = np.concatenate((WPassenger_all,data[i]['WPassenger'].values))
WPrice_all = np.concatenate((WPrice_all,data[i]['Wprice'].values))
line = LinearRegression()
y_value = WPrice_all
x_value = WFreight_all
line.fit(x_value.reshape(-1,1),y_value)
plt.figure(figsize=(16,5), dpi=100)
plt.plot(data[0]['WFreight'].values, data[0]['Wprice'].values, 'o',label = 'Asiana')
plt.plot(data[1]['WFreight'].values, data[1]['Wprice'].values, 'o',label = 'Delta')
plt.plot(data[2]['WFreight'].values, data[2]['Wprice'].values, 'o', label = 'Korean')
plt.plot(data[3]['WFreight'].values, data[3]['Wprice'].values, 'o', label = 'Cathay')
plt.plot(data[4]['WFreight'].values, data[4]['Wprice'].values, 'o', label = 'Eva')
plt.plot(data[5]['WFreight'].values, data[5]['Wprice'].values, 'o', label = 'Lufthansa')
plt.plot(x_value,line.predict(x_value.reshape(-1,1)), color = 'green')
plt.title('Cumulative Case_Cargo(Deweighted)', fontsize = 15)
plt.xlabel('Cargo', fontsize = 15)
plt.ylabel('Stock Price', fontsize = 15)
plt.legend()
plt.show()
x_value = sm.add_constant(x_value,has_constant='add')
model = sm.OLS(y_value,x_value)
result=model.fit()
result.summary()
line = LinearRegression()
y_value = WPrice_all
x_value = WPassenger_all
line.fit(x_value.reshape(-1,1),y_value)
plt.figure(figsize=(16,5), dpi=100)
plt.plot(data[0]['WPassenger'].values, data[0]['Wprice'].values, 'o',label = 'Asiana')
plt.plot(data[1]['WPassenger'].values, data[1]['Wprice'].values, 'o',label = 'Delta')
plt.plot(data[2]['WPassenger'].values, data[2]['Wprice'].values, 'o', label = 'Korean')
plt.plot(data[3]['WPassenger'].values, data[3]['Wprice'].values, 'o', label = 'Cathay')
plt.plot(data[4]['WPassenger'].values, data[4]['Wprice'].values, 'o', label = 'Eva')
plt.plot(data[5]['WPassenger'].values, data[5]['Wprice'].values, 'o', label = 'Lufthansa')
plt.plot(x_value,line.predict(x_value.reshape(-1,1)), color = 'green')
plt.title('Cumulative Case_Passenger(Deweighted)', fontsize = 15)
plt.xlabel('Passenger', fontsize = 15)
plt.ylabel('Stock Price', fontsize = 15)
plt.legend()
plt.show()
x_value = sm.add_constant(x_value,has_constant='add')
model = sm.OLS(y_value,x_value)
result=model.fit()
result.summary()
data[3][data[3]['WFreight']<0.7]
|
Cumulative Data.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/junwin/HousePricesTensorflow/blob/main/housepPricePrediction2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + cellView="form" id="AVV2e0XKbJeX"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="sUtoed20cRJJ"
# # Using Tensorflow and Keras to predict house prices using a .csv file of house data
#
# 
# + [markdown] id="C-3Xbt0FfGfs"
# This walkthrough shows the process of using deep learning to predict house prices using Tensorflow, Keras and a .csv file of sales data.
#
# The topics covered are:
#
# * Loading data from a .csv
# * splitting data into training, validation and test sets
# * Handling mixed types of data
# * Normalising data
# * Handling categorical features
# * Creating a deep learning model
# * Analysing the results
# * Storing and reloading models
#
#
#
# + [markdown] id="7G5_-fCKyu-q"
# ## Acknowledgement
# This short walkthrough borrows from the excellent tutorial on loading .csv files using Keras.
# https://www.tensorflow.org/tutorials/load_data/csv
# + [markdown] id="fgZ9gjmPfSnK"
# ## Setup
# + id="baYFZMW_bJHh"
import pandas as pd
import numpy as np
# Make numpy values easier to read.
np.set_printoptions(precision=3, suppress=True)
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing
from tensorflow.keras import regularizers
#from sklearn import preprocessing
from sklearn.model_selection import train_test_split
# + [markdown] id="B87Rd1SOUv02"
# ## Loading data from a .csv
# The size of the sales data is relatively small, so we can process it in memory. We will use a Pandas dataframe to do any manipulation.
# + colab={"base_uri": "https://localhost:8080/", "height": 226} id="GS-dBMpuYMnz" outputId="9d60321b-6130-4f5d-8216-a6a4fee19ca0"
url="https://junwin.github.io/data/housepriceclean2.csv"
housePrices=pd.read_csv(url).sample(frac=1)
# We will not process the ClosedDate column - so remove it
housePrices.pop('ClosedDate')
housePrices.head()
# + [markdown] id="Az17SrZWMoN8"
# A benefit of using Pandas is that it is easy to experiment with the data, for example, selecting a subset of rows to help understand what affects the accuracy of the model.
#
# + id="y6eKJbdMPZJT"
#housePrices = housePrices.loc[(housePrices['Zip'] == 60076) & (housePrices['Type'] == 'SFH')]
# + [markdown] id="XoY69vs9Mazt"
# In our processing, we will treat strings as categories. Although Zip code is numeric, we cannot treat it as a number in the model, so let's convert it to a string.
# + colab={"base_uri": "https://localhost:8080/"} id="nngEWwnpx6eJ" outputId="a0506047-f179-4b92-c184-db0ab2086dd3"
housePrices['Zip'] = housePrices['Zip'].astype(str)
housePrices.dtypes
# + [markdown] id="_iS70QzjvbSg"
# We now split the data loaded from the .csv into
# three sets (training data, validation data and test data).
# + colab={"base_uri": "https://localhost:8080/"} id="yPhMIKlWmWo_" outputId="eaa24a1e-64e7-4b04-ff91-87e6e34bd1b8"
train, test = train_test_split(housePrices, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
# + [markdown] id="44RzokpwQbAJ"
# Next, we will separate the training and validation dataframes into features and targets and do a rough normalisation of price.
# + id="D8rCGIK1ZzKx"
housePrices_features = train.copy()
housePrices_labels = housePrices_features.pop('SoldPr')
housePrices_labels = housePrices_labels/100000
val_features = val.copy()
val_labels = val.pop('SoldPr')
val_labels = val_labels/100000
# + colab={"base_uri": "https://localhost:8080/"} id="3VshL1mDKSf6" outputId="7c65fe1a-a165-40b9-c17d-658cf22407b2"
print(housePrices_features.dtypes)
print(housePrices_labels.dtypes)
# + [markdown] id="urHOwpCDYtcI"
# It is difficult to use the data we have to train a model "as is" because the feature data contains different types and numeric ranges.
#
# It is possible to write code to transform the original dataset; however, we would then need to deploy that extra code, and when using the model to predict a price we would need to apply the same transformations to the input data.
#
# A reasonable solution is to embed any transformations into the model, so it is simple to deploy and use.
#
# Here we will build a model using the Keras functional API. The functional API uses symbolic tensors, so we will first create a set of these tensors matching the names and types of the feature data.
#
#
# + colab={"base_uri": "https://localhost:8080/"} id="5WODe_1da3yw" outputId="438c856c-3d61-4564-c8d5-c4837ce328f2"
inputs = {}
for name, column in housePrices_features.items():
dtype = column.dtype
if dtype == object:
dtype = tf.string
else:
dtype = tf.float32
inputs[name] = tf.keras.Input(shape=(1,), name=name, dtype=dtype)
inputs
# + [markdown] id="aaheJFmymq8l"
# We will use Tensorflow's normalization pre-processing for each numeric field. Notice that we call the adapt method with the training data to set up the normaliser before use.
# + colab={"base_uri": "https://localhost:8080/"} id="wPRC_E6rkp8D" outputId="d07d5c2f-f181-49d7-fe67-7dd3055a0d90"
numeric_inputs = {name:input for name,input in inputs.items()
if input.dtype==tf.float32}
x = layers.Concatenate()(list(numeric_inputs.values()))
norm = preprocessing.Normalization()
norm.adapt(np.array(housePrices[numeric_inputs.keys()]))
all_numeric_inputs = norm(x)
all_numeric_inputs
# + [markdown] id="-JoR45Uj712l"
# Collect all the symbolic preprocessing results, to concatenate them later.
# + id="M7jIJw5XntdN"
preprocessed_inputs = [all_numeric_inputs]
# + [markdown] id="r0Hryylyosfm"
# String fields first use a `StringLookup` layer to map each string to its index in a vocabulary built from the distinct data strings. Then, using that index and default settings, TensorFlow's `CategoryEncoding` pre-processing layer generates a one-hot vector.
#
# Together with the numeric field handling, we then transform the input data.
# + id="79fi1Cgan2YV"
for name, input in inputs.items():
if input.dtype == tf.float32:
continue
lookup = preprocessing.StringLookup(vocabulary=np.unique(housePrices_features[name]))
one_hot = preprocessing.CategoryEncoding(max_tokens=lookup.vocab_size())
x = lookup(input)
x = one_hot(x)
preprocessed_inputs.append(x)
# + [markdown] id="Wnhv0T7itnc7"
# Given the preprocessed inputs (for normalisation and categorisation) and the corresponding input fields, we can build a model that preprocesses the input data. We can visualise the model, as shown below.
# + colab={"base_uri": "https://localhost:8080/", "height": 525} id="XJRzUTe8ukXc" outputId="7c43d279-442c-40cf-c4d4-324f70befb8c"
preprocessed_inputs_cat = layers.Concatenate()(preprocessed_inputs)
housePrices_preprocessing = tf.keras.Model(inputs, preprocessed_inputs_cat)
tf.keras.utils.plot_model(model = housePrices_preprocessing , rankdir="LR", dpi=72, show_shapes=True)
# + [markdown] id="PNHxrNW8vdda"
# This `model` just contains the input preprocessing. You can run it to see what it does to your data. Keras models don't automatically convert Pandas `DataFrames` because it's not clear if it should be converted to one tensor or to a dictionary of tensors. So convert it to a dictionary of tensors:
# + id="5YjdYyMEacwQ"
housePrices_features_dict = {name: np.array(value)
for name, value in housePrices_features.items()}
# + [markdown] id="0nKJYoPByada"
# We can run some sample data through the preprocessing, to validate we are getting the expected results - you should be able to see the normalised data and one-hot vectors.
# + colab={"base_uri": "https://localhost:8080/"} id="SjnmU8PSv8T3" outputId="4bf7b766-0371-48fc-872c-84bd2ae5a597"
features_dict = {name:values[:1] for name, values in housePrices_features_dict.items()}
housePrices_preprocessing(features_dict)
# + [markdown] id="qkBf4LvmzMDp"
# Now build the model based on the Keras functional API on top of this.
# We will typically make changes to the layers defined in the housePrices_model function. In this case, a pipeline is built that takes the inputs, pre-processes them and then uses the main model to make predictions.
# + id="coIPtGaCzUV7"
def housePrices_model(preprocessing_head, inputs):
body = tf.keras.Sequential([
layers.Dense(128,activation='relu', kernel_regularizer=regularizers.l2(0.001)),
layers.Dense(64,activation='relu'),
#layers.Dense(64,activation='relu'),
layers.Dense(1)
])
preprocessed_inputs = preprocessing_head(inputs)
result = body(preprocessed_inputs)
model = tf.keras.Model(inputs, result)
model.compile(loss='mse', optimizer='adam', metrics=['mae'])
return model
housePrices_model = housePrices_model(housePrices_preprocessing, inputs)
# + [markdown] id="LK5uBQQF2KbZ"
# When you train the model, pass the dictionary of features as `x`, and the label as `y`.
# + id="VpMHKYTqomAL"
val_features_dict = {name: np.array(value)
for name, value in val.items()}
history_1 = housePrices_model.fit(x=housePrices_features_dict, y=housePrices_labels,epochs=250,
validation_data=(val_features_dict, val_labels))
# + [markdown] id="PxDJzn1LtK0c"
# It is crucial to visualise the metrics produced during training; this quickly indicates how fast the model converges and flags potential underfitting or overfitting.
# + colab={"base_uri": "https://localhost:8080/", "height": 573} id="v52vc1fw49Um" outputId="4afc78c8-a765-403a-9faa-4b4546ad49c6"
# Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
import matplotlib.pyplot as plt
train_loss = history_1.history['mae']
val_loss = history_1.history['val_mae']
epochs = range(1, len(train_loss) + 1)
plt.plot(epochs, train_loss, 'g.', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# Exclude the first few epochs so the graph is easier to read
SKIP = 20
plt.plot(epochs[SKIP:], train_loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# + [markdown] id="LxgJarZk3bfH"
# By implementing pre-processing as part of the model, we can now save the model and deploy it elsewhere as a simple package.
# + id="Ay-8ymNA2ZCh"
housePrices_model.save('test')
reloaded = tf.keras.models.load_model('test')
# + id="ojBbrVDFYnnR"
# ! zip -r test.zip test
# + id="Qm6jMTpD20lK"
features_dict = {name:values[:1] for name, values in housePrices_features_dict.items()}
before = housePrices_model(features_dict)
after = reloaded(features_dict)
assert abs(before - after) < 1e-3
print(before)
print(after)
# + [markdown] id="WA-S29Bfu7BF"
# Finally, we can run predictions on our own data.
# + colab={"base_uri": "https://localhost:8080/"} id="pLOZuDVtNIow" outputId="8540bac1-c30f-4e7f-852c-7880b25dffd7"
houseData_own2 = {'Type': np.array(['SFH', 'SFH', 'SFH', 'Condo', 'Condo']),
'houseEra': np.array(['recent', '19A', '20A', '20A', '19B']),
'Area': np.array([8410, 1400, 1500, 1500, 1600]),
'Zip': np.array(['60062', '60062', '60076', '60076', '60202']),
'Rooms': np.array([16, 6, 7, 7, 7]),
'FullBaths': np.array([6.0, 2.0, 2.0, 2.5, 2.0]),
'HalfBaths': np.array([0.0, 1.0, 1.0, 0.0, 0.0]),
'BsmtBth': np.array(['Yes', 'No', 'No', 'No', 'No']),
'Beds': np.array([5, 3, 3, 3, 3]),
'BsmtBeds': np.array([1.0, 0.0, 0.0, 0.0, 0.0]),
'GarageSpaces': np.array([3, 2, 0, 0, 0]) }
ans = reloaded.predict(houseData_own2)
print(ans)
|
housepPricePrediction2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: mlsa
# language: python
# name: mlsa
# ---
# # Workshop DL01: Deep Neural Networks
#
# ## Agenda:
# - Introduction to deep learning
# - Apply DNN to MNIST dataset and IEEE fraud dataset
#
# For this workshop we will talk about deep learning algorithms and train DNN models on two datasets. We will first start with an easier dataset as a demonstration, namely the MNIST dataset. Then we will move on to the harder IEEE fraud detection dataset, so we can also compare the performance of traditional supervised learning and deep learning models.
#
# ## Exercises:
# - Think about what features to include in your model and how they should be represented.
# - Think about when we should use traditional models such as Random Forest or deep learning models. (hint: think about the dataset size)
# ### Deep Learning Paradigm:
# There is no clear-cut definition of deep learning, as many people tend to have different definitions, but the most popular opinion is that deep learning can be defined as neural networks with more than 2 hidden layers; the term later broadened to include unsupervised neural architectures.
# <img src="paradigm.png">
#
# ### Deep Learning Application:
# | Neural Network | Application |
# | :------------: | :---------: |
# |Standard NN|Real Estate, Online Advertising|
# |Convolutional NN|Photo Tagging|
# |Recurrent NN|Speech Recognition, Machine Translation|
# |Unsupervised NN|Fraud Detection|
# ### So what is a neural network?
# Recall that in the classification workshop we introduced a GLM model called the **Perceptron**. This is the basic structural building block of a neural network (NN).
#
# | Perceptron | Activation |
# |:----------:| :--------: |
# |<img src="perceptron.png">|<img src="activation.png">|
#
# Notations:
# - $g$: activation function (the activation function captures the non-linearities in the data)
# - $X$: input matrix
# - $w_0$: bias
# - $W$: weight matrix
#
# An **NN** can then be viewed as a stack of multi-output perceptrons. For example, here the hidden layer is a perceptron with 4 outputs and the output layer is a perceptron with 2 outputs:
# <img src="SNN.png" width="500">
# Note that the superscripts indicate the layer number, e.g. $W^{(1)}$ is the weight matrix for the first layer.
#
# Vectorization representation:
# $$ Z = w_0^{(1)} + X^TW^{(1)}$$
# $$ Y = g(w_0^{(2)} + g(Z)^TW^{(2)})$$
# There are many conventions of representing NN, but they mean the same thing.
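# As a concrete illustration (a hypothetical sketch with random weights and a sigmoid activation applied to the hidden layer's output), the forward pass above can be written in a few lines of NumPy:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=3)                         # one sample with 3 input features
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)  # hidden layer: 4 units
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)  # output layer: 2 units

z = b1 + x @ W1                     # Z = w_0^(1) + X^T W^(1)
y = sigmoid(b2 + sigmoid(z) @ W2)   # output of the 2-layer network
print(y.shape)
```

# Tracking the shapes at each step ((3,) -> (4,) -> (2,)) is a good way to check the matrix dimensions.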
# A **deep neural network (DNN)** is an extension of a single-layer NN with more than 2 hidden layers. For example, some DNNs can have up to hundreds of layers:
# <img src="DNN.png" width="500">
#
# Note that the input and output layers are not counted in the number of layers of an NN.
#
# Suppose we have a $l$-layer NN then the vectorization representation is:
# $$ Z^{(k)} = w_0^{(k)} + g(Z^{(k-1)})^TW^{(k)}, \quad \text{where } k = 1,2,\dots,l-1$$
# $$ Y = g(w_0^{(l)} + g(Z^{(l-1)})^TW^{(l)})$$
#
# Getting the matrix dimensions correct can be quite tricky.
# ### Then how do we train a NN?
# First we need to define a **cost function** then update the parameters ($W,w_0$) by minimising the cost. This is known as **back propagation**, whereas computing output is known as **forward propagation**.
#
# Considering a sample size of $n$, the cost function is a function of the predicted outputs ($\hat{Y^{[i]}}=f(X^{[i]},W)$) and the actual outputs ($Y^{[i]}$):
# $$J(\hat{Y},W) = \frac{1}{n} \sum_{i=1}^n{L(\hat{Y^{[i]}},Y^{[i]})}$$
# where $L$ could be:
# - binary cross entropy loss: $-\big(Y^{[i]}\log{(\hat{Y^{[i]}})}+(1-Y^{[i]})\log{(1-\hat{Y^{[i]}})}\big)$
# - mean squared error loss: $(Y^{[i]}-\hat{Y^{[i]}})^2$
#
# After we define a cost function, we need a way to update the parameters to minimise the cost. There are many ways, but let's start with the most basic one, namely **gradient descent**. As the name suggests, we move the parameters gradually in the opposite direction of the gradient, because a function decreases fastest if one steps from a point in the direction opposite to the gradient at that point. So we update the parameters like this:
# $$W_{new} = W_{old} - \alpha \frac{\partial J(W_{old})}{\partial W}$$
# where $\alpha$ is the learning rate.
#
# Graph Representation:
#
# |Step 1|Step 2|
# |:----:|:----:|
# |<img src="step1.png">|<img src="step2.png">|
#
# |Step 3|Step 4|
# |:----:|:----:|
# |<img src="step3.png">|<img src="step4.png">|
#
#
# Notice that $J$ is not a direct function of $W$, so differentiating it with respect to $W$ is not straightforward, and we need to use the Chain Rule:
# $$\frac{\partial J(\hat{Y^{[i]}},W)}{\partial W} = \frac{\partial J(\hat{Y^{[i]}},W)}{\partial \hat{Y^{[i]}}}\frac{\partial \hat{Y^{[i]}}}{\partial Z^{[i]}}\frac{\partial Z^{[i]}}{\partial W}$$
#
#
# Image Source: ©MIT 6.S191: Introduction to Deep Learning [introtodeeplearning.com](http://introtodeeplearning.com/)
# ### Alternative ways of training NNs:
# We have only covered one way of training a NN, which is gradient descent. In practice, there are many other concerns as well. For example, the choice of learning rate $\alpha$ has a great impact on whether we can find the true minimum. Too small an $\alpha$ converges slowly and can get stuck in local minima, whereas too big an $\alpha$ will overshoot and might not converge. We can try different values of $\alpha$, which could take a long time, or we can use an **adaptive learning rate** (Momentum, RMSProp, Adam, Adagrad, Adadelta, learning rate decay etc.).
#
# Other than choosing learning rate, there are many popular options to improve training or fitting. Here we provide a list for you to consider:
# - Train a small size of data each time: Minibatch Training, Stochastic Gradient Descent
# - Normalisation of input, Batch Normalisation
# - Regularization: add $l_1, l_2$ penalties to the cost function, dropout, early stopping, data augmentation
# - Parallelize computation, use of GPU/TPU
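# As an illustration of the first item above (minibatch training), here is a minimal NumPy sketch of minibatch gradient descent on made-up linear data (the data, learning rate and batch size are all invented for the example):

```python
import numpy as np

# minibatch training sketch: shuffle the data each epoch and update on small batches
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5])           # noise-free toy linear target
w = np.zeros(3)
alpha, batch_size = 0.1, 20

for epoch in range(50):
    idx = rng.permutation(len(X))            # reshuffle each epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        err = X[batch] @ w - y[batch]
        w -= alpha * X[batch].T @ err / len(batch)   # gradient of MSE on the batch

print(np.round(w, 3))                        # approaches [1, -2, 0.5]
```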
#
# Now enough with the jibber jabber and let's train a DNN!
# ### Example 1: MNIST Data with Feed Forward Neural Networks
# Our task is to label each image with the digit it belongs to. For any image classification task, we first have to understand how images are stored on devices. In this example, all images are 28x28 pixels, hence the input layer dimension is 784 (28*28). For demonstration, we use a feedforward neural network with 2 hidden layers here. Of course, there are much more complicated neural architectures, such as CNNs.
# <img src="MnistExamples.png">
# +
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import os
import matplotlib.pyplot as plt
from matplotlib import pyplot
import seaborn as sns
import pandas as pd
import tensorflow as tf
from tensorflow import keras
#Keras Packages
#from tensorflow.python import keras
from keras.layers import Input, Dense, Lambda
from keras import optimizers
from keras.utils import plot_model, to_categorical
from keras.callbacks import ModelCheckpoint, TensorBoard
from keras import regularizers
from keras.models import Model, load_model, save_model
from keras.datasets import mnist
print(tf.__version__)
print(tf.keras.__version__)
# -
# #### Step 1: Importing the data and Visualization
# +
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print("Training input data shape: {}".format(x_train.shape))
#This implies there are 60,000 images of 28x28 pixels
print("Testing input data shape: {}".format(x_test.shape))
#This implies there are 10,000 images of 28x28 pixels
for i in range(9):
    pyplot.subplot(330 + 1 + i)
    pyplot.imshow(x_train[i], cmap=pyplot.get_cmap('gray'))
pyplot.show()
sns.countplot(y_train)
# -
# #### Step 2: Data preparation
# +
input_vector_dimension = 28*28
num_classes = 10
x_train = x_train.reshape(x_train.shape[0], input_vector_dimension).astype('float32') / 255
print("Training data shape after reshaping: {}".format(x_train.shape))
x_test = x_test.reshape(x_test.shape[0], input_vector_dimension).astype('float32') / 255
print("Testing data shape: {}".format(x_test.shape))
#Why divide by 255? It rescales the 0-255 pixel intensities into [0, 1], which helps training converge
y_train = to_categorical(y_train, num_classes)
print("Output training data: {}".format(y_train.shape))
y_test = to_categorical(y_test, num_classes)
print("Output testing data: {}".format(y_test.shape))
# -
# #### Step 3: Designing the neural architecture
# +
#We will use the Keras Functional API, not the Sequential model API
input_layer = Input(shape=(input_vector_dimension,))
hidden_layer_1 = Dense(50, activation='relu',activity_regularizer=regularizers.l1(10e-5))(input_layer)
hidden_layer_2 = Dense(50, activation='relu')(hidden_layer_1)
output_layer = Dense(10, activation='softmax')(hidden_layer_2)
nn_model = Model(inputs=input_layer, outputs=output_layer)
nn_model.summary()
# plot_model(nn_model, 'my_first_model.png',show_shapes=True)
# Use Colab when running the above command, so much less trouble
# -
# 
# #### Step 4: Train the model
# +
# specifying the training configuration
nn_model.compile(loss='categorical_crossentropy',
optimizer=optimizers.RMSprop(),
metrics=['accuracy'])
# training
history = nn_model.fit(x_train, y_train,
batch_size=20,
epochs=5,
validation_split=0.1,verbose=1)
test_scores = nn_model.evaluate(x_test, y_test)
print('Test loss:', test_scores[0])
print('Test accuracy:', test_scores[1])
# -
# #### Step 5: Monitoring Performance
# Lastly, save the trained model so you can load it later
#save_model(nn_model, 'nn_1_model.h5',overwrite=True)
#nn_model.save('nn__model.h5')
#model = keras.models.load_model('nn_model.h5')
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['training', 'validation'], loc='best')
plt.show()
# ### Example 2: IEEE Data
# read csv file into a dataframe
X_train = pd.read_csv('X_train.csv')
Y_train = pd.read_csv('Y_train.csv')
from sklearn.model_selection import train_test_split
train_size = int(0.8*X_train.shape[0])
test_size = X_train.shape[0]-train_size
X_train, X_test, Y_train, Y_test = train_test_split(
X_train, Y_train, train_size=train_size, test_size=test_size, random_state=4)
X_train.shape
# +
# putting everything together
input_layer = Input(shape=(X_train.shape[1],))
hidden_layer_1 = Dense( 250, activation='relu',activity_regularizer=regularizers.l1(10e-5))(input_layer)
hidden_layer_2 = Dense( 200, activation='relu')(hidden_layer_1)
hidden_layer_3 = Dense( 150, activation='relu')(hidden_layer_2)
hidden_layer_4 = Dense( 100, activation='relu')(hidden_layer_3)
hidden_layer_5 = Dense( 50, activation='relu')(hidden_layer_4)
output_layer = Dense(1, activation='sigmoid')(hidden_layer_5)
nn_model = Model(inputs=input_layer, outputs=output_layer)
nn_model.summary()
nn_model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(),
metrics=['accuracy'])
#Define training parameters
history = nn_model.fit(X_train, Y_train,
batch_size=64,
epochs=20,
validation_split=0.1)
test_scores = nn_model.evaluate(X_test, Y_test, verbose=0)
print('Test loss:', test_scores[0])
print('Test accuracy:', test_scores[1])
nn_model.save('nn_model.h5')
# +
from sklearn.metrics import classification_report, confusion_matrix
yhat_probs = nn_model.predict(X_test)
# the model has a single sigmoid output, so threshold the probabilities at 0.5
# (np.argmax over axis=1 of an (n, 1) array would always return 0)
yhat_classes = (yhat_probs > 0.5).astype('int32').ravel()
yhat_classes[0:5]
#Y_test.dtype
matrix = confusion_matrix(Y_test.values, yhat_classes)
print(matrix)
# -
|
Deep-Learning/workshop-DL01-DeepNeuralNetworks-old.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Two Sample Testing
import numpy as np
from scipy.stats import ttest_1samp, ttest_ind,mannwhitneyu,levene,shapiro,wilcoxon
from statsmodels.stats.power import ttest_power
import pandas as pd
# ### Independent samples
weight = np.array([
# sugar consumption in grams and status (0=diabetic, 1=non-diabetic)
[9.31, 0],
[7.76, 0],
[6.98, 1],
[7.88, 1],
[8.49, 1],
[10.05, 1],
[8.80, 1],
[10.88, 1],
[6.13, 1],
[7.90, 1],
[11.51, 0],
[12.59, 0],
[7.05, 1],
[11.85, 0],
[9.99, 0],
[7.48, 0],
[8.79, 0],
[8.69, 1],
[9.68, 0],
[8.58, 1],
[9.19, 0],
[8.11, 1]])
# The above dataset contains 2 samples from 2 different populations - Diabetic and Non-Diabetic. These 2 populations are independent of each other, hence we apply an unpaired test, splitting the dataset into 2 samples (Diabetic and Non-Diabetic)
#
# Diabetic - [9.31, 7.76, 11.51, 12.59, 11.85, 9.99, 7.48, 8.79, 9.68, 9.19]
#
# Non-Diabetic - [6.98, 7.88, 8.49, 10.05, 8.8, 10.88, 6.13, 7.9, 7.05, 8.69, 8.58, 8.11]
diabetic = weight[weight[:,1]==0][:,0]
non_diabetic = weight[weight[:,1]==1][:,0]
diabetic, non_diabetic
# **Formulate the Hypothesis**
#
# H<sub>0</sub> : μ<sub>1</sub> = μ<sub>2</sub> (μ<sub>1</sub> - μ<sub>2</sub> = 0)<br/>
# H<sub>a</sub> : μ<sub>1</sub> $\neq$ μ<sub>2</sub> (μ<sub>1</sub> < μ<sub>2</sub>, μ<sub>1</sub> > μ<sub>2</sub>)<br/>
# This test assumes the variances are equal between the two different samples
#
# ttest_ind is used for parametric unpaired samples
ttest_ind(diabetic, non_diabetic)
# p_value < 0.05 (significance level). Hence we reject the null hypothesis; we have sufficient evidence to support the alternative hypothesis
# mannwhitneyu is used for non-parametric unpaired samples
mannwhitneyu(diabetic, non_diabetic)
# p_value < 0.05 (significance level). Hence we reject the null hypothesis; we have sufficient evidence to support the alternative hypothesis
# ### Dependent Samples
# pre and post-Exercise food energy intake
intake = np.array([
[5460, 3980],
[5230, 4890],
[5640, 3885],
[6180, 5160],
[6390, 5645],
[6512, 4650],
[6765, 6109],
[7515, 5975],
[7515, 6790],
[8230, 6970],
[8770, 7335],
])
# The above intake dataset contains 2 samples (pre, post) derived from a single population. These 2 samples are dependent on each other, hence we apply a paired test on this dataset.
pre = intake[:,0]
post = intake[:,1]
# **Formulate the Hypothesis**
#
# H<sub>0</sub> : μ<sub>1</sub> = μ<sub>2</sub> (μ<sub>1</sub> - μ<sub>2</sub> = 0)<br/>
# H<sub>a</sub> : μ<sub>1</sub> $\neq$ μ<sub>2</sub> (μ<sub>1</sub> < μ<sub>2</sub>, μ<sub>1</sub> > μ<sub>2</sub>)<br/>
# This test assumes the variances are equal between the two different samples
#
# +
# For paired t-test of parametric samples
ttest_1samp(post-pre, popmean=0)
# under H0 both sample means are equal, hence the difference in popmean is zero
# -
# For a paired test of non-parametric samples (Wilcoxon signed-rank)
wilcoxon(post-pre)
# Both the parametric and non-parametric tests for this paired sample suggest that we should reject the null hypothesis in favour of the alternative hypothesis
# ### Test for the variances
# H<sub>0</sub> = The variances of both the samples are equal<br/>
# H<sub>a</sub> = The variances are not equal
levene(diabetic,non_diabetic)
levene(pre,post)
# The large p_values suggest that we fail to reject the null hypothesis, which says that both samples have equal variances
# ### Test for the Shape of the population
# H<sub>0</sub> = The sample comes from a normal population<br/>
# H<sub>a</sub> = The sample doesn't come from a normal distribution
t_statistic, p_value = shapiro(diabetic)
t_statistic, p_value
t_statistic, p_value = shapiro(non_diabetic)
t_statistic, p_value
t_statistic, p_value = shapiro(pre)
t_statistic, p_value
t_statistic, p_value = shapiro(post)
t_statistic, p_value
# The large p_values in all cases suggest that we fail to reject the null hypothesis, which says that the samples from both datasets come from normal populations
# ### Power of the test (1-β)
# To calculate the power of the test we need the delta value, which requires S<sub>pooled</sub> (Pooled Standard Deviation) to be calculated for a two sample test
#
# <img src="https://latex.codecogs.com/gif.latex?\Delta&space;=&space;\frac{\bar{X_{1}}-\bar{X_{2}}}{S_{pooled}}" />
# Where
# <img src="https://latex.codecogs.com/gif.latex?S_{pooled}&space;=&space;\sqrt{\frac{\left&space;(&space;n_{1}-1&space;\right&space;)S_{1}^2&space;+&space;\left&space;(&space;n_{2}-1&space;\right&space;)S_{2}^2}{\left&space;(&space;n_{1}-1&space;\right&space;)+\left&space;(&space;n_{2}-1&space;\right&space;)}}" />
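# The pooled standard deviation above can be wrapped in a small helper (a sketch with made-up data; note that $S^2$ in the formula is the sample variance, i.e. `ddof=1` in NumPy):

```python
import numpy as np

def cohens_delta(a, b):
    """Effect size: difference of means over the pooled standard deviation."""
    n1, n2 = len(a), len(b)
    s_pooled = np.sqrt(((n1 - 1) * np.var(a, ddof=1) + (n2 - 1) * np.var(b, ddof=1))
                       / ((n1 - 1) + (n2 - 1)))
    return (np.mean(a) - np.mean(b)) / s_pooled

# toy check: two shifted samples with identical spread
a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
b = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
print(round(cohens_delta(a, b), 4))
```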
delta = (np.mean(pre) - np.mean(post)) / np.sqrt(((11-1)*np.var(pre, ddof=1) + (11-1)*np.var(post, ddof=1)) / ((11-1)+(11-1)))
delta
ttest_power(delta, nobs=11, alpha=0.05, alternative="two-sided")
# The power of the test (1-β) is the probability of rejecting the null hypothesis when the null hypothesis is indeed false
|
CourseContent/04-Advanced.Statistics/Week3/SM4 - Two Sample Testing.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="deadly-copper"
# ## <Project type: Multi-Step LSTM Time Series Forecasting>
# * Attempts to solve this as univariate time series forecasting using the sliding-window concept
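# The sliding-window idea can be sketched on a toy series before applying it to the real data (a minimal NumPy sketch; the numbers are made up and `seq_len` is shortened for readability):

```python
import numpy as np

# build overlapping windows of length seq_len + 1 from a 1-D series;
# the first seq_len values are the input, the last value is the target
series = np.arange(10, dtype=float)   # toy stand-in for one sample's open prices
seq_len = 3
windows = np.array([series[i:i + seq_len + 1]
                    for i in range(len(series) - seq_len)])
x = windows[:, :-1].reshape(-1, seq_len, 1)   # LSTM input: (samples, timesteps, features)
y = windows[:, -1]                            # next-step target

print(x.shape, y.shape)   # (7, 3, 1) (7,)
```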
# + id="optical-region"
# Run on Colab
# 2. Trial notes
# ReduceLROnPlateau() factor 0.1, patience 3
# Changed the model's second layer to a bidirectional layer (NLP book p.217) <- this seems to be the key change
# + [markdown] id="moving-alloy"
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="SGv1StdHGyz9" outputId="c61388bd-3c2b-400d-a6b8-33470d94a1c1"
from google.colab import drive
drive.mount('/content/drive')
# + id="robust-parameter"
#Import Library
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM, TimeDistributed, Dropout, Bidirectional#,Reshape, Flatten
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tqdm import tqdm
import warnings
warnings.filterwarnings("ignore")
# + id="sweet-prayer"
# Read Data
data_path = '/content/drive/MyDrive/Dacon_Data/[DACON]Bit_Trader/data'
train_x_df = pd.read_csv(data_path + "/train_x_df.csv")
train_y_df = pd.read_csv(data_path + "/train_y_df.csv")
test_x_df = pd.read_csv(data_path + "/test_x_df.csv")
# + id="scenic-lotus"
# function that converts a 2-D input dataframe into a 3-D numpy array
def df2d_to_array3d(df_2d):
    feature_size = df_2d.iloc[:,2:].shape[1]
    time_size = len(df_2d.time.value_counts())
    sample_size = len(df_2d.sample_id.value_counts())
    array_3d = df_2d.iloc[:,2:].values.reshape([sample_size, time_size, feature_size])
    return array_3d
# convert the 2-D DFs into 3-D arrays for LSTM training
train_x_array = df2d_to_array3d(train_x_df) #(1380, 1380, 10)
train_y_array = df2d_to_array3d(train_y_df) #(1380, 120, 10)
test_x_array = df2d_to_array3d(test_x_df) #(529, 1380, 10)
# + id="handled-creature"
# Model design (op1: many-to-one model, op2: many-to-many model (reshape the output, set return_sequences = True, and move the window by the output size instead of 1))
# An LSTM with return_sequences=True that is Bidirectional and wrapped in TimeDistributed() becomes a bidirectional many-to-many model. https://m.blog.naver.com/chunjein/221589656211
# To use TimeDistributed(): add model.add(TimeDistributed(Dense(1, activation='sigmoid'))) to the model layers
def build_model():
    seq_len = 120
    model = Sequential()
    # stateful = True inside the first LSTM() layer can actually hurt performance
    model.add(LSTM(80, activation='tanh', return_sequences= True, input_shape = [seq_len, 1]))
    model.add(Bidirectional(LSTM(50, activation='tanh')))
    model.add(Dense(1)) # 120 open values are fed in to predict the single 121st open value
    model.compile(optimizer = 'adam', loss = 'mse', metrics = ['mse'])
    return model
# + colab={"base_uri": "https://localhost:8080/"} id="three-harmony" outputId="ae56c017-540a-4657-8d7b-0f94e7a089e3"
build_model().summary()
# + id="recognized-catalyst"
# LearningRateScheduler(scheduler) implementation
# from tensorflow.keras.callbacks import LearningRateScheduler
# # create the scheduler function (keep lr as-is if epoch is below 5, otherwise decay it)
# def scheduler(epoch, learning_rate):
# if epoch < 5:
# return learning_rate
# else:
# return learning_rate * tf.math.exp(-0.1)
# lr = LearningRateScheduler(scheduler)
# + colab={"base_uri": "https://localhost:8080/"} id="friendly-plaza" jupyter={"outputs_hidden": true} outputId="db92f0a0-6b30-462b-817a-eab00f48308e" tags=[]
# Auto-regressive prediction on test_x_array, recording the results in test_pred_array
# 1) build test_pred_array {a 3-D array collecting the predictions (529 2-D arrays of 120*1)}
test_pred_array = np.zeros([len(test_x_array), 120, 1])
# 2) define early stopping & reduceLR (see https://www.dacon.io/competitions/official/235709/codeshare/2453?page=1&dtype=recent)
early_stop = tf.keras.callbacks.EarlyStopping(monitor='loss', patience= 10, mode = 'auto')
reduceLR = tf.keras.callbacks.ReduceLROnPlateau(monitor='loss', factor=0.1, patience= 3, mode = 'auto', min_delta = 0.0001, min_lr=0)
# 3) build time-series windows from test_x_array -> build the dataset -> train the model ||| predict -> record in test_pred_array -> drop the first value of window_3d -> merge test_pred_array and window_3d -> feed into model.predict() to predict -> ***
ep = 30
bs = 120
# train each of the 529 samples by idx: the for loop runs 529 times
for idx in tqdm(range(test_x_array.shape[0])): # runs 529 times
    seq_len = 120
    sequence_length = seq_len + 1
    windows = []
    for index in range(1380 - sequence_length):
        windows.append(test_x_array[idx, :, 1][index: index + sequence_length])
    # build the x_test, y_test datasets
    windows = np.array(windows) # 2-D array of 1259 * 121
    x_test = windows[:, :-1]
    x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))
    y_test = windows[:, -1]
    model = build_model()
    history = model.fit(x_test, y_test, epochs= ep, batch_size= bs, verbose=0, shuffle = True, callbacks=[early_stop, reduceLR])
    # does shuffle=True help? A marginal improvement at best. https://machinelearningmastery.com/stateful-stateless-lstm-time-series-forecasting-python/
    print('sample_id : ', idx)
    print('loss : ', history.history['loss'][-1])
    # print('mse : ', history.history['mse'][-1]) #<- to also print the metrics set in model.compile()
    print('lr : ', round(model.optimizer.lr.numpy(), 5)) #<- print the adjusted learning rate
    # extract the last window of test_x_array, reshape it to 3-D, and feed it to the LSTM model to predict
    window = windows[-1, :-1] # windows.shape (1259, 121), window.shape (120, )
    window_3d = np.reshape(window, (1, window.shape[0], 1)) # (1, 120, 1)
    for m in range(120):
        # feed window_3d into model.predict()
        pred = model.predict(window_3d)
        # record the first-minute prediction (of the 120 minutes) in test_pred_array
        test_pred_array[idx, m, :] = pred
        # build window_3d_2nd by dropping the first minute of window_3d
        window_3d_2nd = window_3d[0, 1:, :] # 119 values
        # reshape pred_target (each newly predicted value) from 1-D to 2-D
        pred_target = test_pred_array[idx, m, :]
        pred_target = np.reshape(pred_target, (pred_target.shape[0], 1))
        # merge window_3d_2nd and pred_target to rebuild the window_3d fed to the model
        window_3d = np.concatenate((window_3d_2nd, pred_target), axis=0)
        window_3d = window_3d.T
        window_3d = np.reshape(window_3d, (window_3d.shape[0], window_3d.shape[1], 1))
# + colab={"base_uri": "https://localhost:8080/", "height": 279} id="viral-accreditation" outputId="d612e1a8-19a3-40ad-dd77-36577abe2d8e" tags=[]
# visualize the loss
plt.plot(history.history['loss'], 'b-', label='loss')
plt.xlabel('Epoch')
plt.legend()
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="portuguese-blake" outputId="2e851826-3bd2-41bf-e903-2f02a23496e2"
# check the shape of test_pred_array, which holds the 2-hour predictions for each test_x_array sample
print(test_pred_array.shape)
# + id="corrected-coast"
# save and load the model
model.save('./my_model.h5')
model = tf.keras.models.load_model('./my_model.h5')
# + colab={"base_uri": "https://localhost:8080/"} id="periodic-mechanism" outputId="614893a3-bbe3-4329-d9e5-eadf49f2ee67"
# Build the buy-time / buy-ratio table
# 1) reshape test_pred_array from 3-D to 2-D
pred_array_2d = np.zeros([test_pred_array.shape[0], 120])
for idx in tqdm(range(test_pred_array.shape[0])):
    pred_array_2d[idx, :] = test_pred_array[idx, :, 0]
# 2) define a function that interprets the predictions and fills in the submission table
def array_to_submission(pred_array):
    submission = pd.DataFrame(np.zeros([pred_array.shape[0], 2], np.int64),
                              columns=['buy_quantity', 'sell_time'])
    submission = submission.reset_index()
    sell_price = []
    for idx, sell_time in enumerate(np.argmax(pred_array, axis=1)):
        sell_price.append(pred_array[idx, sell_time])
    sell_price = np.array(sell_price)
    submission.loc[:, 'buy_quantity'] = ((1*1*(sell_price/1)*0.9995*0.9995) > 1.08)*1 # see 주가 손실계산.png in the DACON-Bit_Trader folder
    submission['sell_time'] = np.argmax(pred_array, axis=1)
    submission.columns = ['sample_id', 'buy_quantity', 'sell_time']
    return submission, sell_price
final_submission, forecasted_max = array_to_submission(pred_array_2d)
# + id="literary-dairy"
# save final_submission as a csv file
final_submission.to_csv('./submission.csv', index = False)
# + colab={"base_uri": "https://localhost:8080/"} id="moderate-vermont" outputId="236bd132-dd10-4c54-baca-9d0eaa371a44"
# collect the highest prediction for each sample
forecasted_max
# + colab={"base_uri": "https://localhost:8080/"} id="missing-penny" outputId="91f9c696-f2f2-4024-ba7b-3576e790e515"
# Out of all 300 samples, a rise of 108%+ above the buy price (minute 1380), with fees taken into account, is predicted in _ cases.
final_submission.buy_quantity.value_counts()
# + colab={"base_uri": "https://localhost:8080/"} id="conservative-domain" outputId="38dc327c-49e3-4ce1-d465-5e10286befc5"
# cases predicted (ignoring fees) to rise 108%+ above the buy price; collect those predictions
forecasted_max[forecasted_max >= 1.08]
# + [markdown] id="manufactured-contractor"
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="acoustic-patrol" outputId="3105b74b-1918-4628-dbf5-77959291e483"
# Model evaluation: assess the test_x prediction procedure by comparing predictions on an input (train_x) against the actual values (train_y_array)
train_pred_array = np.zeros([1, 120, 1])
sample = 100 # an arbitrary sample id for evaluation
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience= 10, mode = 'auto')
reduceLR = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience= 3, min_lr = 0)
ep = 30
bs = 120
# build time-series windows from the train_x_array data
# only the sample idx is trained, so the for loop runs once
for idx in range(sample, sample+1):
    seq_len = 120 # same concept as window_size
    sequence_length = seq_len + 1
    windows = []
    for index in range(1380 - sequence_length):
        windows.append(train_x_array[idx, :, 1][index: index + sequence_length])
    # build the x_train, y_train data
    windows = np.array(windows) # 2-D array of 1259 * 121
    x_train = windows[:, :-1]
    x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 1))
    y_train = windows[:, -1]
    # fit (with early stopping & reduceLR applied)
    model = build_model()
    history = model.fit(x_train, y_train, validation_split=0.1, epochs = ep, batch_size = bs, verbose = 2, callbacks = [early_stop, reduceLR])
    print('sample_id : ', idx)
    print('loss : ', history.history['loss'][-1])
    # print('mse : ', history.history['mse'][-1]) #<- to also print the metrics set in model.compile()
    print('lr : ', round(model.optimizer.lr.numpy(), 5)) #<- print the adjusted learning rate
    # extract the last window of train_x_array, reshape it to 3-D, and feed it to the LSTM model to predict
    window = windows[-1, :-1]
    window_3d = np.reshape(window, (1, window.shape[0], 1))
    for m in range(120):
        # feed window_3d into model.predict()
        pred = model.predict(window_3d)
        # record the first-minute prediction (of the 120 minutes) in train_pred_array
        train_pred_array[:, m, :] = pred
        # build window_3d_2nd by dropping the first minute of window_3d
        window_3d_2nd = window_3d[0, 1:, :] # 119 values
        # reshape pred_target (each newly predicted value) from 1-D to 2-D
        pred_target = train_pred_array[:, m, :]
        pred_target = np.reshape(pred_target, (pred_target.shape[0], 1))
        # merge window_3d_2nd and pred_target to rebuild the window_3d fed to the model
        # shaping it so it fits predict() gives two benefits: 1. the loop is possible, 2. once the window moves past the last real values of test_x, new windows can be built purely from predictions
        window_3d = np.concatenate((window_3d_2nd, pred_target), axis=0)
        window_3d = window_3d.T
        window_3d = np.reshape(window_3d, (window_3d.shape[0], window_3d.shape[1], 1))
# + colab={"base_uri": "https://localhost:8080/", "height": 265} id="basic-township" outputId="a371be58-4f2e-4e66-a5a5-8032d62a15c4"
# Visualize the training performance on the train sample
# 1) define a plotting helper that shows the input series and output series contiguously
def plot_series(x_series, y_series):
    plt.plot(x_series, label = 'input_series')
    plt.plot(np.arange(len(x_series), len(x_series)+len(y_series)),
             y_series, label = 'output_series')
    plt.axhline(1, c = 'red')
    plt.legend()
# 2) after training the model on the x_series of sample_id `sample` from the train data, infer y_series
x_series = train_x_array[sample,:,1]
y_series = train_y_array[sample,:,1]
plot_series(x_series, y_series)
plt.plot(np.arange(1380, 1380+120), train_pred_array[0,:,0], label = 'prediction')
plt.legend()
plt.show()
# + colab={"base_uri": "https://localhost:8080/"} id="opened-superior" outputId="f9af6298-b0e7-44e1-c07c-516884f5e09c"
train_pred_array
# + id="blocked-press"
def df2d_to_answer(df_2d):
    # from valid_y_df, return a 2-D array
    # of size [number of samples, 120 minutes]
    # containing the open price information
    feature_size = df_2d.iloc[:,2:].shape[1]
    time_size = len(df_2d.time.value_counts())
    sample_size = len(df_2d.sample_id.value_counts())
    sample_index = df_2d.sample_id.value_counts().index
    array_2d = df_2d.open.values.reshape([sample_size, time_size])
    sample_index = list(sample_index)
    return array_2d, sample_index
def COIN(y_df, submission, df2d_to_answer = df2d_to_answer):
    # restore only the open-price data from the 2-D dataframe as an array
    # store the sample_id information in the index
    y_array, index = df2d_to_answer(y_df)
    # reselect submission by that index
    submission = submission.set_index(submission.columns[0])
    submission = submission.iloc[index, :]
    # the initial investment is 10000 dollars
    total_momey = 10000 # dollars
    total_momey_list = []
    # the very first sample_id value
    start_index = submission.index[0]
    for row_idx in submission.index:
        sell_time = submission.loc[row_idx, 'sell_time']
        buy_price = y_array[row_idx - start_index, 0]
        sell_price = y_array[row_idx - start_index, sell_time]
        buy_quantity = submission.loc[row_idx, 'buy_quantity'] * total_momey
        residual = total_momey - buy_quantity
        ratio = sell_price / buy_price
        total_momey = buy_quantity * ratio * 0.9995 * 0.9995 + residual
        total_momey_list.append(total_momey)
    return total_momey, total_momey_list
# + colab={"base_uri": "https://localhost:8080/", "height": 279} id="previous-kuwait" outputId="09204b86-6a92-4e82-bee7-d7a785960740"
# visualize the loss
plt.plot(history.history['loss'], 'b-', label='loss')
plt.plot(history.history['val_loss'], 'r--', label='val_loss')
plt.xlabel('Epoch')
plt.legend()
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 325} id="front-pharmacy" outputId="c3a91c0c-37a9-42c8-f824-6a886772b63c"
total_momey, total_momey_list = COIN(train_y_df,
train_pred_array)
print(total_momey)
# + [markdown] id="intermediate-pencil"
# ---
# + id="structural-carter"
# If the train_x_df data is to be used for training:
# how to filter train_x_df by sample_id
train_x_df = train_x_df[train_x_df.sample_id < 300]
train_y_df = train_y_df[train_y_df.sample_id < 300]
|
Trials/Bit_Trader_Trial4(16510).ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/MikhailKuklin/exploratory_data_analysis_pandas/blob/main/2021_05_31_exploratory_data_analysis_pandas_partI.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="EOBZn7-b-4DD"
# load necessary library/modules
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
# + id="98jJHIYU_W3A"
# open the database file - csv in this case
df = pd.read_csv('https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv')
# + colab={"base_uri": "https://localhost:8080/"} id="7Ry2CT33_gIn" outputId="c6256fcb-4a3c-435e-cb85-ca2b76651e54"
# basic info
df.info()
# + colab={"base_uri": "https://localhost:8080/"} id="NW-HXiFR_uuL" outputId="09c81f95-2b05-4f2f-e9e2-3c38d5642880"
# get names of all columns
df.columns
# + colab={"base_uri": "https://localhost:8080/", "height": 615} id="YwsMcynr5S4o" outputId="5e50e6a6-d0eb-49ab-b37b-465f01b56fa6"
# rename column
df.rename(columns={"location": "country"}) #inplace=True if permanent
# + colab={"base_uri": "https://localhost:8080/"} id="Uzy73au0AEpI" outputId="c7daa37a-0efd-4bd5-c3ee-b2bf53f73a6f"
# get a range of rows
df.index
# + colab={"base_uri": "https://localhost:8080/"} id="hyDICZr7ylM9" outputId="095c856c-990e-4af7-a66f-7f44907839ae"
# get number of columns and rows
df.shape
# + colab={"base_uri": "https://localhost:8080/"} id="Q30cvqkMysQj" outputId="24c3ee8c-75fa-4bcf-b582-b1da850cde42"
# check if the database has duplicates
df.drop_duplicates().shape # for a particular column: df.drop_duplicates(subset=['location'])
# + colab={"base_uri": "https://localhost:8080/", "height": 615} id="Orl487pn_7I6" outputId="6d802099-2379-432b-be8b-0db41c403d4b"
# check table
#df.head(50)
df.tail(100)
#df.describe()
# + colab={"base_uri": "https://localhost:8080/"} id="Eax726obVsnl" outputId="84064069-283d-4dcc-ac2e-b0216d5925ed"
# check if columns contains missing values
df.isna().sum()
# + colab={"base_uri": "https://localhost:8080/"} id="tH2K6JYq2Lae" outputId="03299037-4329-40d5-dc6f-bb3e5406d93f"
# load only one column
df_ocol = pd.read_csv('https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv')['new_vaccinations']
print(f"Number of rows (location-days) with vaccinations given: {np.sum(df_ocol > 0)}")
# + colab={"base_uri": "https://localhost:8080/"} id="xgp_PzevnQnd" outputId="dc56ec39-b840-4c6b-fbd0-98eb44363ee8"
# average
df.mean()
# + colab={"base_uri": "https://localhost:8080/"} id="SpHudZBU3JKA" outputId="b82e99f6-f2fb-4e7e-fbd4-28ae5bc3dd9a"
# extract average of two columns
df[['new_deaths','new_cases']].mean()
# + colab={"base_uri": "https://localhost:8080/"} id="0xEMyxYPsyLA" outputId="7693d8d0-0341-4ffe-9e66-3e6bc8c17e7f"
# extract average of same columns for Asia only
df[df.continent=='Asia'][['new_deaths','new_cases']].mean()
# + colab={"base_uri": "https://localhost:8080/"} id="IvUJC9F72cKV" outputId="54256b90-9f78-423f-c516-511222faa8b4"
# extract average of same columns for Europe and Asia
df[df.continent.isin(['Europe','Asia'])][['new_deaths','new_cases']].mean()
# + colab={"base_uri": "https://localhost:8080/", "height": 118} id="FG73h6Tw6pWQ" outputId="8ad419dd-e52d-4a09-971f-7d358812e1c9"
# select data only for 2021-06-01 and Zimbabwe
df[(df.location=='Zimbabwe') & (df.date.isin(["2021-06-01"]))]
# + colab={"base_uri": "https://localhost:8080/", "height": 362} id="VbDqW4GWprb5" outputId="155aadc6-0fa1-4f43-a9de-8957cc3fd2f8"
# slicing
df[df.continent=='Asia'][['new_deaths','new_cases']][20000:20010]
# + colab={"base_uri": "https://localhost:8080/", "height": 312} id="zezQS_7hFJbI" outputId="cfe80568-2dbd-4a2e-8734-4f2b1467b1bf"
# add a new column
df.loc[df.new_vaccinations > 0, 'vaccines_given'] = 'yes'
df.loc[df.new_vaccinations <= 0, 'vaccines_given'] = 'no'
df.head()
# + colab={"base_uri": "https://localhost:8080/"} id="ZeUjhH7kGi7-" outputId="4e7277ea-c842-48e3-d38f-9ce5203a1478"
# count values in a new column
df[df.location=='Finland'].vaccines_given.value_counts()
# + colab={"base_uri": "https://localhost:8080/"} id="xflM-XukpL7j" outputId="eae54f35-d65f-4ceb-d167-42d23dddbad0"
# unique
df['continent'].unique() # if we want to get only number - nunique()
# + colab={"base_uri": "https://localhost:8080/"} id="5Ve6waeBZ7hq" outputId="8db277f4-35c2-407d-c602-b546e5d57d94"
# drop all of the rows with a missing value
df['continent'].dropna().unique()
# + colab={"base_uri": "https://localhost:8080/", "height": 422} id="lJN4Oevvnxze" outputId="2dff4732-6083-4d0a-9df4-e31655d096eb"
# sort values
df[['continent','location','new_cases_per_million']].sort_values('new_cases_per_million') # ascending=False
# + id="c4toixIpTH0_"
# create dataframe only for Europe
df_europe=df[df.continent=='Europe']
# + colab={"base_uri": "https://localhost:8080/"} id="RxEEuMBVUUwP" outputId="495b103d-6427-46fa-a59c-7397ee0dc4ae"
# show total cases in France
df_europe[df_europe.location=='France']['total_cases'].max()
# + colab={"base_uri": "https://localhost:8080/"} id="eYj6pFH5riQ7" outputId="7f5bbb31-6e36-433a-c5d0-f816477fe97d"
# shorter command
df[(df.continent=='Europe') & (df.location=='France')]['total_cases'].max()
# + colab={"base_uri": "https://localhost:8080/"} id="ZSQZSFp75QxO" outputId="c1fe6cd4-2c18-4bfe-862d-320e02af8085"
# sum of new cases in Europe
df_europe[df_europe.continent=='Europe']['new_cases'].sum()
# + colab={"base_uri": "https://localhost:8080/"} id="GRsM-21J1bUJ" outputId="cfc55203-dcfa-4086-9f26-1ab71601d621"
# number of cases in european countries
countries=df_europe.location.unique()
for country in countries:
    print(f"{country}:{df_europe[df_europe.location==country]['total_cases'].max()}")
# + colab={"base_uri": "https://localhost:8080/"} id="w_L6EYewlIrN" outputId="0cfe7f08-0bab-4a27-8d0e-cc5a701b6df4"
# extract number of vaccinated people per hundred in each continent
mlist1=df['continent']
mlist1.dropna(inplace=True)
mlist2=mlist1.unique()
for cont in mlist2:
    tmp=df[df.continent==cont]
    print(f"People vaccinated per hundred in {cont} is {tmp['people_vaccinated_per_hundred'].max():.0f}")
# + colab={"base_uri": "https://localhost:8080/"} id="a1fjPR8DlzO8" outputId="80845644-f1d8-4bad-8211-c908c719446d"
# extract number of vaccinated people per hundred in European countries
european_countries = df[df.continent=='Europe']['location'].dropna().unique()
for country in european_countries:
    tmp = df[df.location==country]
    print(f"People vaccinated per hundred in {country} is {tmp['people_vaccinated_per_hundred'].max():.0f}")
# + colab={"base_uri": "https://localhost:8080/", "height": 875} id="2-8ZNdrQWAiM" outputId="a7f6f1ac-70ce-40dc-b4a1-52f1cd4cc8b9"
# bar plot EU countries vs. new cases
plt.figure(figsize=(25,10))
sns.barplot(x='location',y='new_cases', data=df_europe)
plt.xlabel('Country', fontsize=30)
plt.xticks(fontsize=24,rotation='90')
plt.ylabel('New cases', fontsize=30)
plt.yticks(fontsize=24)
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 399} id="is0YiigyAKTx" outputId="f0d26138-918c-453a-d75f-0c99cdf2a1f6"
# bar plot continent vs. new cases
plt.figure(figsize=(13,6))
sns.barplot(x='continent',y='new_cases', data=df)
plt.xlabel('Continent', fontsize=18)
plt.xticks(fontsize=14)
plt.ylabel('New cases', fontsize=18)
plt.yticks(fontsize=14)
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 636} id="jALBoBfsisv4" outputId="c53390ad-814a-42e4-e165-cc3f9aa0e0f8"
df['year'] = pd.DatetimeIndex(df['date']).year
df['month_year'] = pd.to_datetime(df['date']).dt.to_period('M')
df_sort=df.sort_values('month_year')
plt.figure(figsize=(25,10))
sns.barplot(x='continent', y='new_cases', hue='month_year', data=df_sort, palette='CMRmap_r')
plt.xlabel('Continent', fontsize=30)
plt.xticks(fontsize=24)
plt.ylabel('New cases', fontsize=30)
plt.yticks(fontsize=24)
plt.legend(fontsize=20)
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 404} id="xhQLdumXBrKV" outputId="c5011f7c-fa68-4443-ee66-06934668a4f2"
# bar plot continent vs. total vaccinations per hundred
plt.figure(figsize=(13,6))
sns.barplot(x='continent',y='total_vaccinations_per_hundred', data=df)
plt.xlabel('Continent', fontsize=18)
plt.xticks(fontsize=14)
plt.ylabel('Total vaccinations per hundred', fontsize=18)
plt.yticks(fontsize=14)
plt.show()
# + id="1durp79IC0d2" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="b86ddd91-bc29-4b6d-c37d-294e31959767"
# correlations
correlat=df.corr()
correlat
#correlat>0.9
# + colab={"base_uri": "https://localhost:8080/"} id="XZSLcVzZKTpF" outputId="e1fc4c90-8b99-46c7-95e7-6a695e872455"
# correlation between two columns
col1=df['new_cases_per_million']
col2=df['total_vaccinations_per_hundred']
col1.corr(col2)
# + colab={"base_uri": "https://localhost:8080/"} id="ZFxAM4uGCVrE" outputId="e396818a-4ce7-406b-ac57-e89b257be9f4"
# show the country with the lowest share of male smokers in the world
country_ml_min=df['location'].loc[df['male_smokers'].idxmin()]
print(f"{country_ml_min} has the lowest share of male smokers: {df['male_smokers'].min()}%")
# + colab={"base_uri": "https://localhost:8080/"} id="DCVyiV2TEdih" outputId="9fbd363c-5c43-48af-b4ff-f3b0b261600e"
# show the country with the highest share of male smokers in the world
country_ml_max=df['location'].loc[df['male_smokers'].idxmax()]
print(f"{country_ml_max} has the highest share of male smokers: {df['male_smokers'].max()}%")
# + colab={"base_uri": "https://localhost:8080/"} id="dk9juADkHzr3" outputId="cbb2ac5e-faec-4c58-96a1-6452ea7405c2"
# show the country with the highest share of female smokers in the world
country_fl_max=df['location'].loc[df['female_smokers'].idxmax()]
print(f"{country_fl_max} has the highest share of female smokers: {df['female_smokers'].max()}%")
# + colab={"base_uri": "https://localhost:8080/"} id="Sqy-j-19H8JY" outputId="6805a125-da70-4b43-884e-03da95b14aa0"
# show the country with the lowest share of female smokers in the world
country_fl_min=df['location'].loc[df['female_smokers'].idxmin()]
print(f"{country_fl_min} has the lowest share of female smokers: {df['female_smokers'].min()}%")
# + colab={"base_uri": "https://localhost:8080/"} id="zG4cJPm9ggln" outputId="565f5f78-5ebb-4404-d300-f929913a6c0a"
# show country with the highest number of vaccinated people per hundred in the world
country_pvh_max=df['location'].loc[df['people_vaccinated_per_hundred'].idxmax()]
print(f"{country_pvh_max} has the highest number of vaccinated people:{df[df.location==country_pvh_max]['people_vaccinated_per_hundred'].mean():.0f}%")
2021_05_31_exploratory_data_analysis_pandas_partI.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import numpy as np
import scipy.stats
import matplotlib.pylab as plt
import os, sys
sys.path.insert(0, "../")
import geepee.aep_models as aep
import geepee.ep_models as ep
# %matplotlib inline
np.random.seed(42)
import pdb
# +
# We first define several utility functions
def kink_true(x):
fx = np.zeros(x.shape)
for t in range(x.shape[0]):
xt = x[t]
if xt < 4:
fx[t] = xt + 1
else:
fx[t] = -4*xt + 21
return fx
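# kink_true is a piecewise-linear map that is continuous at x = 4 (both branches give 5), hence the "kink". A scalar spot check of the two branches:

```python
def kink_scalar(x):
    # same piecewise rule as kink_true, for a single point
    return x + 1 if x < 4 else -4 * x + 21

print(kink_scalar(3.0))  # left branch:  3 + 1 = 4.0
print(kink_scalar(5.0))  # right branch: -20 + 21 = 1.0
print(kink_scalar(4.0))  # at the kink:  -16 + 21 = 5.0
```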
def kink(T, process_noise, obs_noise, xprev=None):
if xprev is None:
xprev = np.random.randn()
y = np.zeros([T, ])
x = np.zeros([T, ])
xtrue = np.zeros([T, ])
for t in range(T):
if xprev < 4:
fx = xprev + 1
else:
fx = -4*xprev + 21
xtrue[t] = fx
x[t] = fx + np.sqrt(process_noise)*np.random.randn()
xprev = x[t]
y[t] = x[t] + np.sqrt(obs_noise)*np.random.randn()
return xtrue, x, y
def plot(model):
# make prediction on some test inputs
N_test = 200
x_test = np.linspace(-4, 6, N_test)
x_test = np.reshape(x_test, [N_test, 1])
zu = model.sgp_layer.zu
mu, vu = model.predict_f(zu)
mf, vf = model.predict_f(x_test)
my, vy = model.predict_y(x_test)
C = model.get_hypers()['C']
# plot function
fig = plt.figure(figsize=(16,10))
ax = fig.add_subplot(111)
ax.plot(x_test[:,0], kink_true(x_test[:,0]), '-', color='k')
ax.plot(C[0,0]*x_test[:,0], my[:,0], '-', color='r', label='y')
ax.fill_between(
C[0,0]*x_test[:,0],
my[:,0] + 2*np.sqrt(vy[:, 0, 0]),
my[:,0] - 2*np.sqrt(vy[:, 0, 0]),
alpha=0.2, edgecolor='r', facecolor='r')
ax.plot(zu, mu, 'ob')
ax.plot(x_test[:,0], mf[:,0], '-', color='b', label='f, alpha=%.2f' % alpha)
ax.fill_between(
x_test[:,0],
mf[:,0] + 2*np.sqrt(vf[:,0]),
mf[:,0] - 2*np.sqrt(vf[:,0]),
alpha=0.2, edgecolor='b', facecolor='b')
ax.plot(
model.emi_layer.y[0:model.N-1],
model.emi_layer.y[1:model.N],
'r+', alpha=0.5)
mx, vx = model.get_posterior_x()
ax.plot(mx[0:model.N-1], mx[1:model.N], 'og', alpha=0.3)
ax.set_xlabel(r'$x_{t-1}$')
ax.set_ylabel(r'$x_{t}$')
ax.set_xlim([-4, 6])
ax.legend(loc='lower center')
import pprint
pp = pprint.PrettyPrinter(indent=4)
keys = ['ls', 'sf', 'zu', 'sn', 'C', 'R']
params_dict = {}
for key in keys:
params_dict[key] = opt_hypers[key]
pp.pprint(params_dict)
def plot_latent(model, latent_true):
# make prediction on some test inputs
N_test = 200
x_test = np.linspace(-4, 6, N_test)
x_test = np.reshape(x_test, [N_test, 1])
zu = model.sgp_layer.zu
mu, vu = model.predict_f(zu)
mf, vf = model.predict_f(x_test)
# plot function
fig = plt.figure(figsize=(16,10))
ax = fig.add_subplot(111)
ax.plot(x_test[:,0], kink_true(x_test[:,0]), '-', color='k')
ax.plot(zu, mu, 'ob')
ax.plot(x_test[:,0], mf[:,0], '-', color='b', label='f, alpha=%.2f' % alpha)
ax.fill_between(
x_test[:,0],
mf[:,0] + 2*np.sqrt(vf[:,0]),
mf[:,0] - 2*np.sqrt(vf[:,0]),
alpha=0.2, edgecolor='b', facecolor='b')
ax.plot(
latent_true[0:model.N-1],
latent_true[1:model.N],
'r+', alpha=0.5)
mx, vx = model.get_posterior_x()
ax.plot(mx[0:model.N-1], mx[1:model.N], 'og', alpha=0.3)
ax.set_xlabel(r'$x_{t-1}$')
ax.set_ylabel(r'$x_{t}$')
ax.set_xlim([-4, 6])
ax.legend(loc='lower center')
# plot function
fig = plt.figure(figsize=(16,10))
ax = fig.add_subplot(111)
mx, vx = model.get_posterior_x()
ax.plot(np.arange(model.N), mx, '-g', alpha=0.5)
ax.fill_between(
np.arange(model.N),
mx[:,0] + 2*np.sqrt(vx[:,0]),
mx[:,0] - 2*np.sqrt(vx[:,0]),
alpha=0.3, edgecolor='g', facecolor='g')
ax.plot(np.arange(model.N), latent_true, 'r+', alpha=0.5)
ax.set_xlabel(r'$t$')
ax.set_ylabel(r'$x_{t}$')
ax.set_xlim([0, model.N])
ax.legend(loc='lower center')
se = (latent_true - mx[:, 0])**2
mse = np.mean(se)
se_std = np.std(se)/np.sqrt(se.shape[0])
ll = -0.5 * (latent_true - mx[:, 0])**2/vx[:, 0] -0.5*np.log(2*np.pi*vx[:, 0])
mll = np.mean(ll)
ll_std = np.std(ll)/np.sqrt(ll.shape[0])
print 'se %.3f +/- %.3f' % (mse, se_std)
print 'll %.3f +/- %.3f' % (mll, ll_std)
# +
# generate a dataset from the kink function above
T = 200
process_noise = 0.2
obs_noise = 0.1
(xtrue, x, y) = kink(T, process_noise, obs_noise)
y_train = np.reshape(y, [y.shape[0], 1])
# init hypers
alpha = 0.5
Dlatent = 1
Dobs = 1
M = 10
C = 1*np.ones((1, 1))
R = np.ones(1)*np.log(obs_noise)/2
lls = np.reshape(np.log(2), [Dlatent, ])
lsf = np.reshape(np.log(2), [1, ])
zu = np.linspace(-2, 5, M)
zu = np.reshape(zu, [M, 1])
lsn = np.log(process_noise)/2
params = {'ls': lls, 'sf': lsf, 'sn': lsn, 'R': R, 'C': C, 'zu': zu}
# +
# create AEP model
model = aep.SGPSSM(y_train, Dlatent, M,
lik='Gaussian', prior_mean=0, prior_var=1000)
hypers = model.init_hypers(y_train)
for key in params.keys():
hypers[key] = params[key]
model.update_hypers(hypers, alpha)
# optimise
model.optimise(method='L-BFGS-B', alpha=alpha, maxiter=3000, reinit_hypers=False)
opt_hypers = model.get_hypers()
plot(model)
# create EP model
model_ep = ep.SGPSSM(y_train, Dlatent, M,
lik='Gaussian', prior_mean=0, prior_var=1000)
model_ep.update_hypers(opt_hypers)
# run EP
model_ep.inference(no_epochs=50, alpha=alpha, parallel=True, decay=0.99)
plot(model_ep)
# +
# create AEP model
model = aep.SGPSSM(y_train, Dlatent, M,
lik='Gaussian', prior_mean=0, prior_var=1000)
hypers = model.init_hypers(y_train)
for key in params.keys():
hypers[key] = params[key]
model.update_hypers(hypers, alpha)
# optimise
model.set_fixed_params(['C'])
model.optimise(method='L-BFGS-B', alpha=alpha, maxiter=3000, reinit_hypers=False)
opt_hypers = model.get_hypers()
plot(model)
# create EP model
model_ep = ep.SGPSSM(y_train, Dlatent, M,
lik='Gaussian', prior_mean=0, prior_var=1000)
model_ep.update_hypers(opt_hypers)
# run EP
model_ep.inference(no_epochs=100, alpha=alpha, parallel=True, decay=0.99)
plot(model_ep)
plot_latent(model, xtrue)
plot_latent(model_ep, xtrue)
# -
examples/notebooks/gpssm_kink_aep.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from dataclasses import dataclass, field
@dataclass
class Program(object):
weight: int
dependents: set = field(default_factory=set)
_total_weight: int = field(init=False, default=None)
def get_total_weight(self, programs):
if self._total_weight is None:
self._total_weight = (
self.weight +
sum(programs[d].get_total_weight(programs)
for d in self.dependents))
return self._total_weight
def read_programs(lines):
programs = {}
for line in lines:
if not line.strip():
continue
name_weight, arrow, dependents = line.partition(' -> ')
name, weight = name_weight.split(' (')
programs[name] = Program(
int(weight.rstrip(')\n ')),
{d.strip() for d in dependents.split(',') if d.strip()})
return programs
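# read_programs leans on str.partition, which always returns a 3-tuple even when the separator is absent, so leaf lines need no special case. One line of each kind:

```python
# a line with children
line = 'fwft (72) -> ktlj, cntj, xhth'
name_weight, arrow, dependents = line.partition(' -> ')
name, weight = name_weight.split(' (')
print(name, int(weight.rstrip(')')))                # fwft 72
print({d.strip() for d in dependents.split(',')})   # {'ktlj', 'cntj', 'xhth'}

# a leaf line has no ' -> ', so partition returns '' for the tail
leaf, _, deps = 'pbga (66)'.partition(' -> ')
print(repr(deps))  # ''
```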
# +
import aocd
data = aocd.get_data(day=7, year=2017)
programs = read_programs(data.splitlines())
# +
def find_root(program):
roots = program.keys() - set.union(*(v.dependents for v in program.values()))
return roots.pop()
test = read_programs('''\
pbga (66)
xhth (57)
ebii (61)
havc (66)
ktlj (57)
fwft (72) -> ktlj, cntj, xhth
qoyq (66)
padx (45) -> pbga, havc, qoyq
tknk (41) -> ugml, padx, fwft
jptl (61)
ugml (68) -> gyxo, ebii, jptl
gyxo (61)
cntj (57)
'''.splitlines())
assert find_root(test) == 'tknk'
# +
from collections import Counter
tests = {
'ugml': 251,
'padx': 243,
'fwft': 243
}
for name, expected in tests.items():
assert test[name].get_total_weight(test) == expected
def correct_balance(name, programs):
program = programs[name]
weights = {}
for d in program.dependents:
weight = programs[d].get_total_weight(programs)
weights.setdefault(weight, []).append(d)
if len(weights) == 1:
# balanced, no adjustment needed
return 0
# imbalanced
imbalanced, target = weights
if len(weights[imbalanced]) != 1:
imbalanced, target = target, imbalanced
# Check for balance in the child nodes
sub_program_name = weights[imbalanced][0]
sub_correction = correct_balance(sub_program_name, programs)
if sub_correction:
return sub_correction
return programs[sub_program_name].weight + (target - imbalanced)
assert correct_balance('tknk', test) == 60
# -
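# A subtlety in correct_balance above: `imbalanced, target = weights` unpacks a dict, which yields its *keys*, so with exactly two distinct weight groups the two totals land in the two names; the length check then ensures the singleton group is the one labelled imbalanced. In miniature:

```python
weights = {251: ['ugml'], 243: ['padx', 'fwft']}
imbalanced, target = weights       # dict unpacking yields the keys
if len(weights[imbalanced]) != 1:  # the odd one out is the singleton group
    imbalanced, target = target, imbalanced
print(imbalanced, target)  # 251 243
```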
root = find_root(programs)
print('Part 1:', root)
print('Part 2:', correct_balance(root, programs))
2017/Day 07.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="EJJ4qgvpjiPe" outputId="8cfd39c6-840c-4352-f4d5-ff5b629cec82" tags=[]
# !pip install git+git://github.com/geopandas/geopandas.git
# !pip install earthpy
# !pip install descartes # required for plotting polygons in geopandas
# + id="520px5NJjiPi"
# import necessary packages
# %matplotlib inline
import os
import matplotlib.pyplot as plt
import geopandas as gpd
import earthpy as et
import pandas as pd
from tqdm.notebook import tqdm
pd.options.mode.chained_assignment = None # default='warn'
# + [markdown] id="BsaCV0AnnaK0"
# # Understanding the Shape File
# + id="zEuGeCmUnSg-"
# Load the shape file from github
# !wget -Nq https://raw.githubusercontent.com/kaushiktandon/COVID-19-Vaccine-Allocation/master/shape/la_shape.zip
# !unzip -nq la_shape.zip -d shape
# + [markdown] id="UX8wRnJJnYzJ"
# This part will help us understand what the Los Angeles County shapefile contains
# + colab={"base_uri": "https://localhost:8080/", "height": 228} id="TM5dDMk7e8HY" outputId="d38953ca-f9e1-46ab-9d25-cf624815e39e" tags=[]
# Let's take a look at the shapefile
neighborhood_shapes = gpd.read_file("shape/la_shape.shp") # L.A. County neighborhoods shapefile
print(neighborhood_shapes.head()) # view the data attribute table
print("Shape: ", neighborhood_shapes.shape)
print(neighborhood_shapes.columns)
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="WUhSC3TQs61f" outputId="1ebf9e75-e1da-4060-d1cf-d9d8505da15f" tags=[]
neighborhood_shapes.plot()
# + [markdown] id="KnfaTujAC0Mn"
# # Understanding the Social Explorer Data
#
# + id="1jd-XQ1-jiPq"
# !wget --directory-prefix=data/ -Nq https://raw.githubusercontent.com/kaushiktandon/COVID-19-Vaccine-Allocation/master/data/SocialExplorerData_ACS2018_5YearEstimate_CA_CensusTract.csv
social_explorer_filename = 'data/SocialExplorerData_ACS2018_5YearEstimate_CA_CensusTract.csv'
census_data = pd.read_csv(social_explorer_filename, skiprows=[1])
# Convert FIPS column to string for future use
census_data['FIPS'] = census_data['FIPS'].apply(str)
# + colab={"base_uri": "https://localhost:8080/", "height": 837} id="Rqz8QmIOfISd" outputId="2bbee691-aab8-4d1e-aa23-d5439c36f2b5"
census_data.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 516} id="0Jqx8dY1kUkT" outputId="73acc8d8-427d-46a9-d51c-a187d7cf3bda"
census_data.describe()
# + [markdown] id="GpRx9eCJOxQc"
# ## Getting centroid latitude and longitude of each census tract
# + colab={"base_uri": "https://localhost:8080/", "height": 203} id="DLjHjG9LP3AO" outputId="8e3e9d13-cc05-4878-fe71-010ca7e6b576"
# !wget --directory-prefix=data/ -Nq https://raw.githubusercontent.com/kaushiktandon/COVID-19-Vaccine-Allocation/master/data/Census_Tract_LA_Geocoding.csv
census_tract_info_filename = 'data/Census_Tract_LA_Geocoding.csv'
census_tract_centroid_info = pd.read_csv(census_tract_info_filename)
# Format GEOID in the same way as SocialExplorer
census_tract_centroid_info["GEOID"] = census_tract_centroid_info["GEOID"].str[10:].str.strip()
census_tract_centroid_info.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 886, "referenced_widgets": ["6d124d1d7ca642d1ab18c7486e811a42", "7aa39623d5754edda0e163f587ed7906", "5f3eea7aa423426eb389d32a1dd473e8", "38919a9061a3477ca9003330fa1eaa7a", "b2d0ad72603e4af2bc3e07c11912bcf0", "af3b57cfa6234673adaeba96f68faf88", "e790a023e209488082c931becafc2279", "2590b4aab0d44c87b283da2638cb98eb"]} id="XiN4XWlxQYBY" outputId="8cd23bab-2afb-4c93-8e74-04fcac770859"
for idx, census_tract in tqdm(census_tract_centroid_info.iterrows(), total=census_tract_centroid_info.shape[0]):
found_index = census_data.index[census_data['FIPS'] == census_tract["GEOID"]]
census_data.at[found_index, 'Latitude'] = census_tract['Latitude']
census_data.at[found_index, 'Longitude'] = census_tract['Longitude']
census_data.at[found_index, 'Possible Neighborhood'] = census_tract['Neighborhood']
census_data.head()
# + id="w3EgHcpYmRaH"
# These two census tracts seem to no longer exist? Unsure, but will manually add their latitude and longitude
broken1 = census_data.index[census_data['Name of Area'] == 'Census Tract 9901']
census_data.at[broken1, 'Latitude'] = 33.99919090
census_data.at[broken1, 'Longitude'] = -118.79218760
census_data.at[broken1, 'Possible Neighborhood'] = 'Unknown'
broken2 = census_data.index[census_data['Name of Area'] == 'Census Tract 9902']
census_data.at[broken2, 'Latitude'] = 33.91358680
census_data.at[broken2, 'Longitude'] = -118.49286930
census_data.at[broken2, 'Possible Neighborhood'] = 'Unknown'
# Verify no more invalid rows
import math
for idx, row in census_data.iterrows():
if (math.isnan(row['Latitude'])):
print(row['Name of Area'], row['Latitude'], row['Longitude'])
# + [markdown] id="pnWw1Zl5tdS3"
# The "Possible Neighborhood" column is from this new datasource (https://usc.data.socrata.com/widgets/atat-mmad). We will use the shapefile and polygon intersection to determine which census tract falls into which neighborhood and use this "Possible Neighborhood" to validate.
#
# There are 262 "possible neighborhoods" versus 272 unique names from the shapefile.
#
# The 10 missing are:
#
# 1. Desert View Highlands
# 2. Green Valley
# 3. Hasley Canyon
# 4. Lake Hughes
# 5. Littlerock
# 6. North El Monte
# 7. South Diamond Bar
# 8. Sun Village
# 9. <NAME>de
# 10. West San Dimas
#
# As a result, validation may not be perfect. Regardless, we want to use the shapefile to maintain consistency with the boundaries used by the USC Covid Risk Model.
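# The assignment below hinges on a point-in-polygon test (shapely's polys.contains(Point(lng, lat))). The underlying idea is even-odd ray casting — a pure-Python sketch that ignores the boundary edge cases shapely handles:

```python
def point_in_polygon(x, y, poly):
    """Even-odd ray casting: count crossings of a ray going right from (x, y)."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # does this edge straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(point_in_polygon(1, 1, square))   # True
print(point_in_polygon(3, 1, square))   # False
```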
# + [markdown] id="N24QlvuImUYo"
# # Polygon Intersection
# + colab={"base_uri": "https://localhost:8080/", "height": 67, "referenced_widgets": ["2a61988983ef460a95eac9eab7f9f60a", "5fbba3d5d8054da2998297bb591126d7", "1c6412c603df455d9f65d8f200ce01bf", "536d5a498f9d448d94e46bc6167d95e6", "48a9b4237d664abda7ea3db18cabee48", "119aae2d8e4749f5bbac038e0aea1abe", "44933650180b4bb1bc3e5c23c403fd53", "58ca8644bfce4332917d2e4e213f4a72"]} id="qdGPNEKWmw-_" outputId="f4d30bdf-9344-4ca4-c537-91b397b1acc8"
from shapely.geometry import Point, Polygon
neighborhood_shapes = neighborhood_shapes[['name','geometry']]
neighborhood_census_data = pd.DataFrame(neighborhood_shapes['name'])
for feature in census_data.columns[14:]:
if ("%" in feature): continue
neighborhood_census_data[feature] = 0
polys = neighborhood_shapes['geometry']
for idx, row in tqdm(census_data.iterrows(), total=census_data.shape[0]):
mask = polys.contains(Point(row['Longitude'], row['Latitude']))
# This Census Tract from the Census Data belongs to this particular neighborhood
neighborhood_row = neighborhood_shapes[mask]
if (len(neighborhood_row) == 0):
# These do not fall inside any polygon, but still have data. Since there are only a few, this will be done manually.
if (row['Name of Area'] == 'Census Tract 1247'):
neighborhood_row = neighborhood_shapes[neighborhood_shapes['name'] == 'Sherman Oaks']
elif (row['Name of Area'] == 'Census Tract 9800.33'):
neighborhood_row = neighborhood_shapes[neighborhood_shapes['name'] == 'San Pedro']
else:
continue
neighborhood = neighborhood_row.iloc[0]
# Validate neighborhoods
if (neighborhood['name'] != row['Possible Neighborhood']):
print(row, neighborhood)
# Add the value for each feature
for feature in row[14:-3].items():
if ("%" in feature[0]): continue
previous_value = neighborhood_census_data.loc[neighborhood_row.index, feature[0]].iloc[0]
neighborhood_census_data.at[neighborhood_row.index, feature[0]] = previous_value + feature[1]
# + id="ocp37BFCDaiR"
neighborhood_census_data['Population Density (Per Sq. Mile)'] = neighborhood_census_data['Total Population'] / neighborhood_census_data['Area (Land)']
neighborhood_census_data['Average Household Size'] = (neighborhood_census_data['Households10: 1-Person Household'] + 2 * neighborhood_census_data['Households10: 2-Person Household'] + 3 * neighborhood_census_data['Households10: 3-Person Household'] + 4 * neighborhood_census_data['Households10: 4-Person Household'] + 5 * neighborhood_census_data['Households10: 5-Person Household'] + 6 * neighborhood_census_data['Households10: 6-Person Household'] + 7 * neighborhood_census_data['Households10: 7-or-More Person Household']) / neighborhood_census_data['Households10:']
# Cannot be recomputed easily
neighborhood_census_data['Median Household Income (In 2018 Inflation Adjusted Dollars)'] = 0
# Calculate percentages for relevant features
# We can use the order of the columns and their names to our advantage. If the name ends in a semicolon, it is the denominator.
# If the name solely contains a :, this means it is a subset of the previously seen denominator column.
denominator_column = None
for key, column in neighborhood_census_data.items():  # iteritems() was removed in pandas 2.0
if (key[0] == '%'):
continue
elif (key[-1] == ':'):
denominator_column = column
continue
elif (':' in key):
neighborhood_census_data["% " + key] = 100.0 * column / denominator_column
continue
else:
# These keys do not need percentages
continue
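# The naming convention driving the loop above, in isolation: a name ending in ':' is a denominator, a name containing ':' elsewhere is one of its numerators, and column order matters. A toy run with hypothetical column names and made-up counts:

```python
columns = [                        # (name, value) in column order
    ('Households:', 100),          # denominator
    ('Households: 1-Person', 40),  # numerator under the denominator above
    ('Households: 2-Person', 60),
    ('Total Population', 250),     # no ':' -> no percentage computed
]

percentages = {}
denominator = None
for name, value in columns:
    if name.endswith(':'):
        denominator = value        # remember the most recent denominator
    elif ':' in name:
        percentages['% ' + name] = 100.0 * value / denominator

print(percentages)  # {'% Households: 1-Person': 40.0, '% Households: 2-Person': 60.0}
```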
# + colab={"base_uri": "https://localhost:8080/", "height": 475} id="XDfYr9J3Kf9X" outputId="c80aed29-2e72-4954-dece-637759365e39"
neighborhood_census_data.head()
# + id="9EY-wNc5Kinn"
# Remove rows with missing data
neighborhood_census_data = neighborhood_census_data[neighborhood_census_data["Total Population"] != 0]
# Missing data is actually 0; fillna() returns a copy, so assign the result back
neighborhood_census_data = neighborhood_census_data.fillna(0)
neighborhood_census_data.to_csv('social_explorer_processed_data.csv', encoding='utf-8', index=False)
code/social_explorer_data_processing.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Answers to Exercises
# **Exercise 1**
np.sum(big_array[big_array % 2 == 1])
# **Exercise 2**
random[1:5, 2:5] + np.eye(3, 3)
# **Exercise 3**
# +
big_part = random[:3, :5].reshape((1, 15))
negative_1 = random[3][5]
small_part = np.array([negative_1]) #this just puts -1 into an array so it can be appended to the vector big_part
combined = np.append(big_part, small_part)
square = combined.reshape((4, 4))
square_mean = np.mean(square)
square_mean
# -
# **Exercise 4**
# +
x = np.array([3.0, 5.0, 8.0])
y = np.array([3.0, 5.0, 8.0])
#Square x here:
x = x ** 2
#Square y here:
for i in range(len(y)):
y[i] = y[i] ** 2
# -
# **Exercise 5:**
prices.sum(axis=0)
cheapest_store = "C"
# **Exercise 6:**
vector = np.array([3, 1, -3])
an_array = np.array([[-2, -1, 3], [-3, 0, 3], [-3, -1, 4]])
identity = an_array + vector #does this look like np.eye(3, 3)?
identity
# **Exercise 7:**
x = np.array([-7, 7])
# **Exercise 8:**
y[np.sqrt(x) == 2]
# **Exercise 9:**
theta_hat = np.linalg.solve((X.T @ X), (X.T @ Y))
loss = np.dot(np.linalg.norm(Y - (X @theta_hat)), np.linalg.norm(Y - (X @ theta_hat)))
# **Exercise 10:**
# +
male = titanic[titanic["Sex"] == "male"]
male_avg = male["Fare"].mean()
male_avg
female = titanic[titanic["Sex"] == "female"]
female_avg = female["Fare"].mean()
female_avg
# -
# **Exercise 11:**
survived = titanic[titanic["Survived"] == 1]
age_survived = survived["Age"].mean()
age_survived
died = titanic[titanic["Survived"] == 0]
age_died = died["Age"].mean()
age_died
# **Exercise 12:**
class_1 = titanic[titanic["Pclass"] == 1]
survived_1 = len(class_1[class_1["Survived"] == 1])
total_1 = len(class_1)
proportion_1 = survived_1 / total_1
class_2 = titanic[titanic["Pclass"] == 2]
survived_2 = len(class_2[class_2["Survived"] == 1])
total_2 = len(class_2)
proportion_2 = survived_2 / total_2
class_3 = titanic[titanic["Pclass"] == 3]
survived_3 = len(class_3[class_3["Survived"] == 1])
total_3 = len(class_3)
proportion_3 = survived_3 / total_3
Lectures/Lecture3_NumpyPandas_Visualizations/Lecture3pt1/Numpy Pandas Homework/Numpy and Pandas Homework (Answers).ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
g_key = "<KEY>"
# -
city = pd.read_csv(r'C:\Users\harol\Desktop\Python API\output_data\cities.csv')
city.head()
gmaps.configure(api_key=g_key)
locations = city[["Latitude", "Longitude"]].astype(float)
humidity = city["Humidity"].astype(float)
# +
fig = gmaps.figure()
heat_layer = gmaps.heatmap_layer(locations, weights=humidity,
dissipating=False, max_intensity=100,
point_radius = 1)
fig.add_layer(heat_layer)
fig
# -
myweather = city.loc[(city['Temperature'] >= 30) & (city['Temperature'] <= 100) &
(city['Windspeed'] <= 40)]
myweather
hotel = myweather
hotel.reset_index(drop=True, inplace=True)
hotel
# +
hotel['Hotel Name'] = ''
base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
params = {
"radius": 50000,
"rankby": "prominence",
"types": "lodging",
"keyword": "hotel",
"key": g_key,
}
for index, row in hotel.iterrows():
    lat = row['Latitude']
    lng = row['Longitude']
    city_name = row['City']  # avoid shadowing the `city` DataFrame loaded above
    params["location"] = f'{lat},{lng}'
    response = requests.get(base_url, params=params).json()
    results = response['results']
    try:
        print(f"Closest {city_name} hotel is {results[0]['name']}.")
        hotel.loc[index, 'Hotel Name'] = results[0]['name']
    except (KeyError, IndexError):
        print(f"Missing field/result for {city_name}... skipping.")
# -
hotel = hotel[hotel['Hotel Name'] != '']  # the column was initialised to '', so notna() would drop nothing
hotel = hotel.rename(columns={"Latitude": "Lat", "Longitude": "Lng"})
hotel
locations = hotel[["Lat", "Lng"]].astype(float)
hotel_list=hotel["Hotel Name"].to_list()
country_list=hotel["Country"].to_list()
city_list=hotel["City"].to_list()
# +
hotel_layer = gmaps.symbol_layer(
locations, fill_color='rgba(0, 150, 0, 0.4)',
stroke_color='rgba(0, 0, 150, 0.4)', scale=2,
info_box_content=[
f"Nearest Hotel: {hotel}, Country: {country}, City: {city}"
for hotel, country, city
in zip(hotel_list, country_list, city_list)
]
)
fig = gmaps.figure()
fig.add_layer(hotel_layer)
fig.add_layer(heat_layer)
fig
# -
Vacationpy.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# [explaining article](https://www.fcodelabs.com/2019/03/18/Sum-Tree-Introduction/)
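# The core idea from the linked article in miniature: leaves hold priorities, internal nodes hold subtree sums, an update propagates a delta up to the root, and sampling descends by comparing a prefix-sum s against the left child. A pure-Python sketch using the same array layout as the optimized versions below (leaves at indices capacity-1 .. 2*capacity-2):

```python
class TinySumTree:
    def __init__(self, capacity):
        self.capacity = capacity
        self.tree = [0.0] * (2 * capacity - 1)  # leaves live at [capacity-1:]

    def update(self, leaf, value):
        idx = leaf + self.capacity - 1
        change = value - self.tree[idx]
        while True:                  # propagate the delta up to the root
            self.tree[idx] += change
            if idx == 0:
                break
            idx = (idx - 1) // 2

    def retrieve(self, s):
        parent = 0
        while parent < self.capacity - 1:   # descend until we hit a leaf
            left = 2 * parent + 1
            if s <= self.tree[left]:
                parent = left
            else:
                s -= self.tree[left]        # skip the left subtree's mass
                parent = left + 1
        return parent - (self.capacity - 1)

tree = TinySumTree(4)
for leaf, p in enumerate([1.0, 2.0, 3.0, 4.0]):
    tree.update(leaf, p)
print(tree.tree[0])        # total priority: 10.0
print(tree.retrieve(6.5))  # leaf 3, whose priority 4.0 owns the range (6, 10]
```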
# %load_ext cython
# %matplotlib inline
import matplotlib.pyplot as plot
import numba
import numpy as np
import seaborn as sns
import cProfile
import pstats
# benchmark workload: fill, sample, update, repeat
def buffer_benchmark(factory, max_frame, batch_size):
buffer = factory()
for frame in range(max_frame):
buffer.add('frame%d' % frame, 1.0)
if len(buffer) > batch_size:
indices, weights, objects = buffer.sample(batch_size)
assert len(indices) == len(weights) == len(objects) == batch_size, 'invalid output of sample()'
weights = np.clip(weights + np.random.random(weights.shape) - 0.5, 0.01, 1.0)
buffer.update(indices, weights)
class NaivePrioritizedBuffer(object):
def __init__(self, capacity, prob_alpha=0.6):
self.prob_alpha = prob_alpha
self.capacity = capacity
self.buffer = []
self.pos = 0
self.priorities = np.zeros((capacity,), dtype=np.float32)
def add(self, obj, p):
max_prio = np.max(self.priorities) if self.buffer else 1.0
if len(self.buffer) < self.capacity:
self.buffer.append(obj)
else:
self.buffer[self.pos] = obj
self.priorities[self.pos] = max_prio
self.pos = (self.pos + 1) % self.capacity
def sample(self, n, beta=0.4):
if len(self.buffer) == self.capacity:
prios = self.priorities
else:
prios = self.priorities[: self.pos]
probs = prios ** self.prob_alpha
probs /= probs.sum()
indices = np.random.choice(len(self.buffer), n, p=probs)
samples = [self.buffer[idx] for idx in indices]
weights = (len(self.buffer) * probs[indices]) ** (-beta)
weights /= weights.max()
return indices, weights, samples
def update(self, indices, priorities):
for idx, prio in zip(indices, priorities):
self.priorities[idx] = prio
def __len__(self):
return len(self.buffer)
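# The alpha/beta machinery in sample() above, stripped to its essentials: priorities are sharpened by alpha into a sampling distribution, and importance-sampling weights (N*P)^(-beta), normalized by their maximum, down-weight the frequently sampled items. With made-up priorities:

```python
prios = [1.0, 2.0, 4.0]
alpha, beta, N = 0.5, 0.4, len(prios)

probs = [p ** alpha for p in prios]
total = sum(probs)
probs = [p / total for p in probs]           # sampling distribution

weights = [(N * p) ** (-beta) for p in probs]
max_w = max(weights)
weights = [w / max_w for w in weights]       # normalize so the max weight is 1

print(probs)    # higher priority -> higher sampling probability
print(weights)  # higher probability -> lower importance weight
```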
factory = lambda: NaivePrioritizedBuffer(10000, 0.6)
# %timeit buffer_benchmark(factory, 10000, 32)
# %timeit buffer_benchmark(factory, 20000, 32)
# %timeit buffer_benchmark(factory, 50000, 32)
# %timeit buffer_benchmark(factory, 100000, 32)
# most of sumtree code derived from [github](https://github.com/chuyangliu/Snake/blob/master/snake/util/sumtree.py):
# + magic_args="-c=-O3" language="cython"
# # cython: boundscheck=False
# import numpy as np
# cimport numpy as np
#
# cdef class SumTree:
# cpdef float[:] tree
# cpdef np.ndarray nd_tree
# cpdef size_t capacity
# cpdef size_t pos
# cpdef size_t count
# def __init__(self, capacity : size_t):
# self.nd_tree = np.zeros((capacity * 2 - 1, ), dtype=np.float32)
# self.tree = self.nd_tree
# self.capacity = capacity
# self.pos = 0
# self.count = 0
#
# def get_one(self, s : float) -> [size_t, float]:
# cdef size_t parent = 0
# cdef size_t left, right
# with nogil:
# while parent < self.capacity - 1:
# left = parent * 2 + 1
# right = left + 1
# if s <= self.tree[left]:
# parent = left
# else:
# s -= self.tree[left]
# parent = right
# return parent - self.capacity + 1, self.tree[parent]
#
# def add_one(self, value : float) -> size_t:
# cdef size_t idx = self.pos
# self.update_one(idx, value)
# self.pos = (self.pos + 1) % self.capacity
# self.count = min(self.count + 1, self.capacity)
# return idx
#
# def update_one(self, idx : size_t, value : float):
# idx = idx + self.capacity - 1
# cdef float[:] tree = self.tree
# cdef float change = value - tree[idx]
# with nogil:
# while True:
# tree[idx] += change
# if 0 == idx:
# break
# idx = (idx - 1) // 2
#
# def sum_total(self) -> float:
# return self.tree[0]
#
# def __len__(self) -> size_t:
# return self.count
#
# class SumTreePrioritizedBufferCython(object):
# def __init__(self, capacity : size_t, alpha : float):
# self.capacity = capacity
# self.tree = SumTree(capacity)
# self.data = np.empty(capacity, dtype=object)
#
# def __len__(self) -> size_t:
# return len(self.tree)
#
# def add(self, obj : object, p : float):
# idx = self.tree.add_one(p)
# self.data[idx] = obj
#
# def update(self, indices, priorities):
# for i in range(indices.size):
# idx, prio = indices[i], priorities[i]
# self.tree.update_one(int(idx), float(prio))
#
# def sample(self, n : size_t):
# segment = self.tree.sum_total() / float(n)
# a = np.arange(n) * segment
# b = a + segment
# s = np.random.uniform(a, b)
# indices = np.zeros(n, dtype=np.uint32)
# weights = np.empty(n, dtype=np.float32)
# samples = np.empty(n, dtype=object)
# for i in range(n):
# idx, prio = self.tree.get_one(s[i])
# indices[i] = idx
# weights[i] = prio
# samples[i] = self.data[idx]
# return indices, weights, samples
# -
factory = lambda: SumTreePrioritizedBufferCython(10000, 0.6)
# %timeit buffer_benchmark(factory, 10000, 32)
# %timeit buffer_benchmark(factory, 20000, 32)
# %timeit buffer_benchmark(factory, 50000, 32)
# %timeit buffer_benchmark(factory, 100000, 32)
class SumTreePrioritizedBufferNumba(object):
def __init__(self, capacity, prob_alpha=0.6):
self.capacity, self.prob_alpha = capacity, prob_alpha
self.tree = np.zeros(capacity * 2 - 1, dtype=np.float32)
self.data = np.empty(capacity, dtype=object)
self.pos, self.len = 0, 0
@numba.jit
    def _update(self, idx, value):
        tree = self.tree
        # compute only the difference here; the leaf itself is updated by the
        # first loop iteration (assigning tree[idx] = value first would apply
        # the change twice)
        change = value - tree[idx]
        while True:
            tree[idx] += change
            if 0 == idx:
                break
            idx = (idx - 1) // 2
@numba.jit
def _retrieve(self, s):
tree_idx, parent = None, 0
while True:
if parent >= self.capacity - 1:
tree_idx = parent
break
left = parent * 2 + 1
right = left + 1
if s <= self.tree[left]:
parent = left
else:
s -= self.tree[left]
parent = right
return tree_idx
@numba.jit
def add(self, obj, p=1):
idx = self.pos + self.capacity - 1
self.data[self.pos] = obj
self._update(idx, p)
self.pos = (self.pos + 1) % self.capacity
self.len = min(self.len + 1, self.capacity)
@numba.jit
def sample(self, n, beta=0.4):
segment = self.tree[0] / n
a = np.arange(n) * segment
b = a + segment
s = np.random.uniform(a, b)
indices = np.zeros(n, dtype=np.int32)
weights = np.empty(n, dtype=np.float32)
samples = np.empty(n, dtype=object)
for i in range(n):
idx = self._retrieve(s[i])
indices[i] = idx
weights[i] = self.tree[idx]
samples[i] = self.data[idx - self.capacity + 1]
return indices, weights, samples
@numba.jit
def update(self, indices, priorities):
for idx, prio in zip(indices, priorities):
self._update(idx, prio)
def __len__(self):
return self.len
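# Before benchmarking, the tree arithmetic can be sanity-checked with a minimal
# pure-NumPy version of the same update/retrieve scheme (a standalone sketch for
# this check only; the function names below are not part of the classes above):

```python
import numpy as np

def make_tree(capacity):
    # flat binary sum tree: internal nodes first, the `capacity` leaves last
    return np.zeros(capacity * 2 - 1, dtype=np.float64)

def tree_update(tree, capacity, data_idx, value):
    idx = data_idx + capacity - 1
    change = value - tree[idx]
    while True:                 # propagate the difference up to the root
        tree[idx] += change
        if idx == 0:
            break
        idx = (idx - 1) // 2

def tree_retrieve(tree, capacity, s):
    parent = 0
    while parent < capacity - 1:
        left = parent * 2 + 1
        if s <= tree[left]:
            parent = left
        else:
            s -= tree[left]
            parent = left + 1
    return parent - capacity + 1  # data index of the sampled leaf

tree = make_tree(4)
for i, p in enumerate([1.0, 2.0, 3.0, 4.0]):
    tree_update(tree, 4, i, p)
print(tree[0])                      # root holds the total priority: 10.0
print(tree_retrieve(tree, 4, 4.0))  # cumulative sums [1, 3, 6, 10] -> leaf 2
```

# The invariant being exercised is the one both classes rely on: the root equals
# the sum of all leaf priorities, and retrieving with a value s picks the leaf
# whose cumulative-priority segment contains s.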
pr = cProfile.Profile()
pr.enable()
buffer_benchmark(factory, 10000, 32)
pr.disable()
st = pstats.Stats(pr).sort_stats('tottime')
st.print_stats()
factory = lambda: SumTreePrioritizedBufferNumba(10000, 0.6)
# %timeit buffer_benchmark(factory, 10000, 32)
# %timeit buffer_benchmark(factory, 20000, 32)
# %timeit buffer_benchmark(factory, 50000, 32)
# %timeit buffer_benchmark(factory, 100000, 32)
factory = lambda: SumTreePrioritizedBufferCython(100000, 0.6)
# %timeit buffer_benchmark(factory, 100000, 32)
# %timeit buffer_benchmark(factory, 200000, 32)
# %timeit buffer_benchmark(factory, 500000, 32)
# %timeit buffer_benchmark(factory, 1000000, 32)
factory = lambda: SumTreePrioritizedBufferNumba(100000, 0.6)
# %timeit buffer_benchmark(factory, 100000, 32)
# %timeit buffer_benchmark(factory, 200000, 32)
# %timeit buffer_benchmark(factory, 500000, 32)
# %timeit buffer_benchmark(factory, 1000000, 32)
factory = lambda: NaivePrioritizedBuffer(100000, 0.6)
# %timeit buffer_benchmark(factory, 100000, 32)
# %timeit buffer_benchmark(factory, 200000, 32)
# %timeit buffer_benchmark(factory, 500000, 32)
# %timeit buffer_benchmark(factory, 1000000, 32)
|
misc/sum_tree.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
#
# Compute ICA on MEG data and remove artifacts
# ============================================
#
# ICA is fit to MEG raw data.
# The sources matching the ECG and EOG are automatically found and displayed.
# Subsequently, artifact detection and rejection quality are assessed.
#
# +
# Authors: <NAME> <<EMAIL>>
# <NAME> <<EMAIL>>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne.preprocessing import ICA
from mne.preprocessing import create_ecg_epochs, create_eog_epochs
from mne.datasets import sample
# -
# Setup paths and prepare raw data.
#
#
# +
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, None, fir_design='firwin') # already lowpassed @ 40
# For the sake of example we annotate first 10 seconds of the recording as
# 'BAD'. This part of data is excluded from the ICA decomposition by default.
# To turn this behavior off, pass ``reject_by_annotation=False`` to
# :meth:`mne.preprocessing.ICA.fit`.
raw.set_annotations(mne.Annotations([0], [10], 'BAD'))
# -
# 1) Fit ICA model using the FastICA algorithm.
# Other available choices are ``picard`` or ``infomax``.
#
# <div class="alert alert-info"><h4>Note</h4><p>The default method in MNE is FastICA, which along with Infomax is
# one of the most widely used ICA algorithms. Picard is a
# new algorithm that is expected to converge faster than FastICA and
# Infomax, especially when the aim is to recover accurate maps with
# a low tolerance parameter, see [1]_ for more information.</p></div>
#
# We pass a float value between 0 and 1 to select n_components based on the
# percentage of variance explained by the PCA components.
#
#
# +
ica = ICA(n_components=0.95, method='fastica', random_state=0, max_iter=100)
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=False,
stim=False, exclude='bads')
# low iterations -> does not fully converge
ica.fit(raw, picks=picks, decim=3, reject=dict(mag=4e-12, grad=4000e-13))
# maximum number of components to reject
n_max_ecg, n_max_eog = 3, 1 # here we don't expect horizontal EOG components
# -
# 2) identify bad components by analyzing latent sources.
#
#
# +
title = 'Sources related to %s artifacts (red)'
# generate ECG epochs; detection uses phase statistics
ecg_epochs = create_ecg_epochs(raw, tmin=-.5, tmax=.5, picks=picks)
ecg_inds, scores = ica.find_bads_ecg(ecg_epochs, method='ctps')
ica.plot_scores(scores, exclude=ecg_inds, title=title % 'ecg', labels='ecg')
show_picks = np.abs(scores).argsort()[::-1][:5]
ica.plot_sources(raw, show_picks, exclude=ecg_inds, title=title % 'ecg')
ica.plot_components(ecg_inds, title=title % 'ecg', colorbar=True)
ecg_inds = ecg_inds[:n_max_ecg]
ica.exclude += ecg_inds
# detect EOG by correlation
eog_inds, scores = ica.find_bads_eog(raw)
ica.plot_scores(scores, exclude=eog_inds, title=title % 'eog', labels='eog')
show_picks = np.abs(scores).argsort()[::-1][:5]
ica.plot_sources(raw, show_picks, exclude=eog_inds, title=title % 'eog')
ica.plot_components(eog_inds, title=title % 'eog', colorbar=True)
eog_inds = eog_inds[:n_max_eog]
ica.exclude += eog_inds
# -
# 3) Assess component selection and unmixing quality.
#
#
# +
# estimate average artifact
ecg_evoked = ecg_epochs.average()
ica.plot_sources(ecg_evoked, exclude=ecg_inds) # plot ECG sources + selection
ica.plot_overlay(ecg_evoked, exclude=ecg_inds) # plot ECG cleaning
eog_evoked = create_eog_epochs(raw, tmin=-.5, tmax=.5, picks=picks).average()
ica.plot_sources(eog_evoked, exclude=eog_inds) # plot EOG sources + selection
ica.plot_overlay(eog_evoked, exclude=eog_inds) # plot EOG cleaning
# check the amplitudes do not change
ica.plot_overlay(raw) # EOG artifacts remain
# +
# To save an ICA solution you can say:
# ica.save('my_ica.fif')
# You can later load the solution by saying:
# from mne.preprocessing import read_ica
# read_ica('my_ica.fif')
# Apply the solution to Raw, Epochs or Evoked like this:
# ica.apply(epochs)
# -
# References
# ----------
# .. [1] <NAME>, <NAME>, <NAME> (2018). Faster Independent Component
# Analysis by Preconditioning With Hessian Approximations.
# IEEE Transactions on Signal Processing 66:4040–4049
#
#
|
stable/_downloads/5b60b1d2b62e31ceb3bcab30d238498d/plot_ica_from_raw.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:root]
# language: python
# name: conda-root-py
# ---
# +
# #!/usr/bin/env python
# make sure to install these packages before running:
# pip install pandas
# pip install sodapy
import pandas as pd
from sodapy import Socrata
# Unauthenticated client only works with public data sets. Note 'None'
# in place of application token, and no username or password:
# client = Socrata("data.virginia.gov", None)
# Example authenticated client (needed for non-public datasets):
client = Socrata("data.virginia.gov",
"NngCh4cIYdkg2eNP2DqZs1iAq",
username="<EMAIL>",
password="<PASSWORD>")
# +
# Results returned as JSON from the API and converted to a Python list of
# dictionaries by sodapy (capped here at 100,000 rows by ``limit``).
results = client.get("bre9-aqqr", limit=100000)
# Convert to pandas DataFrame
results_df = pd.DataFrame.from_records(results)
# -
results_df.head()
results_df.shape
results_df.to_csv("cases.csv", index=False)
results_df.info()
client = Socrata("data.virginia.gov",
"NngCh4cIYdkg2eNP2DqZs1iAq",
username="<EMAIL>",
password="<PASSWORD>",
timeout=10)
# +
vaccines = client.get("28k2-x2rj", limit=500000)
# Convert to pandas DataFrame
vaccine_df = pd.DataFrame.from_records(vaccines)
# -
vaccine_df.head()
vaccine_df.shape
vaccine_df.to_csv("vaccine.csv", index=False)
vaccine_df.info()
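# ``client.get`` returns at most ``limit`` rows per call, so a dataset larger than
# one request allows can be fetched in pages with the ``offset`` parameter. A
# generic sketch of that loop (``get_page`` is a stand-in for a call such as
# ``client.get(dataset_id, ...)``; the fake endpoint below only illustrates the
# paging logic without hitting the network):

```python
def fetch_all(get_page, page_size=1000):
    # accumulate records page by page; a short (or empty) page signals the end
    records, offset = [], 0
    while True:
        page = get_page(limit=page_size, offset=offset)
        records.extend(page)
        if len(page) < page_size:
            return records
        offset += page_size

# illustrate with a fake endpoint serving 2500 records
fake_data = [{"id": i} for i in range(2500)]
fake_get = lambda limit, offset: fake_data[offset:offset + limit]
print(len(fetch_all(fake_get)))  # 2500
```

# With a real client you would pass something like
# ``lambda limit, offset: client.get("bre9-aqqr", limit=limit, offset=offset)``.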
# +
# Create an SQLite database
from sqlalchemy import create_engine
engine = create_engine('sqlite:///test.db')
# create table for each df in DB
results_df.to_sql("case_data", con=engine, if_exists='replace', index=False)
vaccine_df.to_sql("vaccine_data", con=engine, if_exists='replace', index=False)
# -
|
Notebooks/Data Acquisition.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="eTcp-kGZ-ETm" colab_type="text"
# # 2A Assignment
# + [markdown] id="7VvnsDDl-MQE" colab_type="text"
#
# <p style="text-align: justify;">
#
# <a href="http://hmkcode.github.io/images/ai/backpropagation.png">
# <img class="size-full wp-image-315 aligncenter" src="http://hmkcode.github.io/images/ai/backpropagation.png" alt="get-location" />
# </a>
# <br>
# If you are building your own neural network, you will definitely need to understand how to train it.
# Backpropagation is a commonly used technique for training neural networks. There are many resources explaining the technique,
# but this post will explain backpropagation with a concrete example in very detailed, colorful steps.
# </p>
#
# ## Overview
#
# In this post, we will build a neural network with three layers:
#
# - **Input** layer with two input neurons
# - One **hidden** layer with two neurons
# - **Output** layer with a single neuron
#
#
#
#
# ## Weights, weights, weights
#
# Neural network training is about finding weights that minimize prediction error. We usually start our training with a set of randomly generated weights. Then, backpropagation is used to update the weights in an attempt to correctly map arbitrary inputs to outputs.
#
# Our initial weights will be as follows:
# `w1 = 0.22`, `w2 = 0.24`, `w3 = 0.42`, `w4 = 0.16`, `w5 = 0.28` and `w6 = 0.30`
#
#
# ## Dataset
#
# Our dataset has one sample with two inputs and one output.
#
#
# Our single sample is as follows: `inputs=[2, 3]` and `output=[1]`.
#
#
# ## Forward Pass
#
# We will use the given weights and inputs to predict the output. Inputs are multiplied by weights; the results are then passed forward to the next layer.
#
#
#
# $\begin{bmatrix}2 & 3\end{bmatrix}$ . $\begin{bmatrix}0.22 & 0.42\\0.24 & 0.16\end{bmatrix}$ = $\begin{bmatrix}1.16 & 1.32\end{bmatrix}$ . $\begin{bmatrix}0.28 \\0.30\end{bmatrix}$ = 0.7208
#
#
# ## Calculating Error
#
# Now, it's time to find out how our network performed by calculating the difference between the actual output and the predicted one. It's clear that our network output, or **prediction**, is not even close to the **actual output**. We can calculate the difference, or the error, as follows.
#
#
# Error = $\frac{1}{2}$ ${(0.7208 - 1.0)^2}$ = 0.03898
#
# ## Reducing Error
#
# The main goal of training is to reduce the **error**, or the difference between **prediction** and **actual output**. Since the **actual output** is constant, "not changing", the only way to reduce the error is to change the **prediction** value. The question now is, how to change the **prediction** value?
#
# By decomposing **prediction** into its basic elements we can find that **weights** are the variable elements affecting **prediction** value. In other words, in order to change **prediction** value, we need to change **weights** values.
#
#
#
# > The question now is **how to change\update the weights value so that the error is reduced?**
# > The answer is **Backpropagation!**
#
#
# ## **Backpropagation**
#
# **Backpropagation**, short for "backward propagation of errors", is a mechanism used to update the **weights** using [gradient descent](https://en.wikipedia.org/wiki/Gradient_descent). It calculates the gradient of the error function with respect to the neural network's weights. The calculation proceeds backwards through the network.
#
# > **Gradient descent** is an iterative optimization algorithm for finding the minimum of a function; in our case we want to minimize the error function. To find a local minimum of a function using gradient descent, one takes steps proportional to the negative of the gradient of the function at the current point.
#
#
# For example, to update `w6`, we take the current `w6` and subtract the partial derivative of **error** function with respect to `w6`. Optionally, we multiply the derivative of the **error** function by a selected number to make sure that the new updated **weight** is minimizing the error function; this number is called ***learning rate***.
#
#
#
#
# The derivative of the error function is evaluated by applying the chain rule as follows
#
#
# So to update `w6` we can apply the following formula
#
#
# Similarly, we can derive the update formula for `w5` and any other weights existing between the output and the hidden layer.
#
#
# However, when moving backward to update `w1`, `w2`, `w3` and `w4` existing between the input and hidden layer, the partial derivative of the error function with respect to `w1`, for example, will be as follows.
#
# We can find the update formula for the remaining weights `w2`, `w3` and `w4` in the same way.
#
# In summary, the update formulas for all weights are as follows:
#
# We can rewrite the update formulas in matrix form as follows
#
# ## Backward Pass
#
# Using derived formulas we can find the new **weights**.
#
# > **Learning rate:** a hyperparameter, which means we must choose its value manually.
#
#
# `delta = -0.27920`
# <br>
# `a = 0.05`
# <br>
# $\begin{bmatrix} W5 \\ W6 \end{bmatrix}$ = $\begin{bmatrix} 0.28 \\ 0.30 \end{bmatrix}$ - 0.05(-0.27920)$\begin{bmatrix}1.16 \\ 1.32 \end{bmatrix}$ = $\begin{bmatrix} 0.28 \\ 0.30 \end{bmatrix}$ - $\begin{bmatrix} -0.01619 \\ -0.01843 \end{bmatrix}$ = $\begin{bmatrix} 0.29619 \\ 0.31843 \end{bmatrix}$
# <br>
# <br>
# $\begin{bmatrix} W1 & W3\\ W2 & W4 \end{bmatrix}$ = $\begin{bmatrix} 0.22 & 0.42\\ 0.24 & 0.16 \end{bmatrix}$ - (0.05)(-0.27920)$\begin{bmatrix} 2 \\3 \end{bmatrix}$. $\begin{bmatrix} 0.28 & 0.30 \end{bmatrix}$ = $\begin{bmatrix} 0.22 & 0.42\\ 0.24 & 0.16 \end{bmatrix}$ - $\begin{bmatrix} -0.00782 & -0.00838 \\ -0.01173 & -0.01256 \end{bmatrix}$ = $\begin{bmatrix} 0.22782 & 0.42838\\0.25173 & 0.17256 \end{bmatrix}$
# <br>
# <br>
# Now, using the new **weights**, we repeat the forward pass
# <br>
# <br>
# $\begin{bmatrix}2 & 3\end{bmatrix}$ . $\begin{bmatrix}0.22782 & 0.42838\\0.25173 & 0.17256\end{bmatrix}$ = $\begin{bmatrix}1.2108 & 1.3744\end{bmatrix}$ . $\begin{bmatrix}0.29619 \\0.31843\end{bmatrix}$ = 0.7963
# <br>
# <br>
# We can notice that the **prediction** `0.7963` is a little bit closer to the **actual output** than the previously predicted one `0.7208`. We can repeat the same process of backward and forward passes until the **error** is close or equal to zero.
#
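# The arithmetic of the worked example above can be verified in a few lines of
# NumPy. This is only a check of the hand calculation (the text rounds to five
# decimals, so last digits may differ slightly):

```python
import numpy as np

x = np.array([2.0, 3.0])
W1 = np.array([[0.22, 0.42],
               [0.24, 0.16]])
W2 = np.array([0.28, 0.30])
lr, y = 0.05, 1.0

h = x @ W1                    # hidden layer: [1.16, 1.32]
pred = h @ W2                 # prediction: 0.7208
error = 0.5 * (pred - y)**2   # ~0.03898
delta = pred - y              # -0.2792

# gradient-descent update, as derived above
W2_new = W2 - lr * delta * h
W1_new = W1 - lr * delta * np.outer(x, W2)
print(pred, error)
print(W2_new)                 # ~[0.29619, 0.31843]
pred2 = (x @ W1_new) @ W2_new
print(pred2)                  # ~0.796, closer to the actual output 1.0
```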
# + [markdown] id="EioRgwhRd-yX" colab_type="text"
# # Assignment 2B
# + id="58yWhd32eEzC" colab_type="code" colab={}
import numpy as np
# + id="59E1OLBOge_E" colab_type="code" colab={}
# Initialize the weights
weights = {}
weights['w1'] = 0.22
weights['w2'] = 0.24
weights['w3'] = 0.42
weights['w4'] = 0.16
weights['w5'] = 0.28
weights['w6'] = 0.30
# Input
inp = {}
inp['i1'] = 2
inp['i2'] = 3
# + id="YgqOjdP0sGfS" colab_type="code" colab={}
# Forward Propagation
def forward_prop(inp, weights):
input_val = np.array([inp['i1'], inp['i2']])
layer1 = np.array([[weights['w1'], weights['w3'] ],[weights['w2'], weights['w4']]])
out1 = np.matmul(input_val, layer1)
layer2 = np.array([weights['w5'], weights['w6']])
out2 = np.matmul(out1, layer2)
return out2
# + id="QrJRbMPRhikg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="84898d06-2fa5-48cb-81ed-2ddbad3b0b13"
# Initial Forward Prop
val = forward_prop(inp, weights)
print(f'The initial prediction value is: {val}')
# + id="3F6roHBNhlLE" colab_type="code" colab={}
# Backpropagation
def back_prop(inp, weights, lr, y_pred, y_true):
    error = 0.5 * (y_pred - y_true)**2
    delta = y_pred - y_true
    # hidden-layer outputs must be computed with the current weights,
    # before any of them are overwritten below
    h1 = inp['i1']*weights['w1'] + inp['i2']*weights['w2']
    h2 = inp['i1']*weights['w3'] + inp['i2']*weights['w4']
    weights['w1'] -= lr * delta * inp['i1'] * weights['w5']
    weights['w2'] -= lr * delta * inp['i2'] * weights['w5']
    weights['w3'] -= lr * delta * inp['i1'] * weights['w6']
    weights['w4'] -= lr * delta * inp['i2'] * weights['w6']
    weights['w5'] -= lr * delta * h1
    weights['w6'] -= lr * delta * h2
    print(f'delta = {delta}\nlearning rate = {lr}\nerror = {error}')
    print('The updated weights are: ')
    for name in weights:
        print(f'{name} = {weights[name]}')
    return weights
# + id="TaLAfKUxsLH1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 194} outputId="8ed0c732-3bf8-4e4d-ff9b-8154b9e057b8"
weights = back_prop(inp, weights, 0.05, val, 1.0)
val = forward_prop(inp, weights)
print(f'The forward prop value now is: {val}')
# + id="yqk1MlgytmVT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 194} outputId="74288d02-62ed-4f4c-8b1b-f8ae8e688724"
weights = back_prop(inp, weights, 0.05, val, 1.0)
val = forward_prop(inp, weights)
print(f'The forward prop value now is: {val}')
# + id="Lz4zmoACts17" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 194} outputId="5b140756-bb76-45b4-b4ca-90d1adb178c6"
weights = back_prop(inp, weights, 0.05, val, 1.0)
val = forward_prop(inp, weights)
print(f'The forward prop value now is: {val}')
# + id="zLk3lTRzttWj" colab_type="code" colab={}
|
Assignment 2/Assignment_2A_2B.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # if, elif, else Statements
#
# <code>if</code> statements in Python allow us to tell the computer to perform alternative actions based on a certain set of conditions.
#
# Verbally, we can imagine we are telling the computer:
#
# "Hey if this case happens, perform some action"
#
# We can then expand the idea further with <code>elif</code> and <code>else</code> statements, which allow us to tell the computer:
#
# "Hey if this case happens, perform some action. Else, if another case happens, perform some other action. Else, if *none* of the above cases happened, perform this action."
#
# Let's go ahead and look at the syntax format for <code>if</code> statements to get a better idea of this:
#
# if case1:
# perform action1
# elif case2:
# perform action2
# else:
# perform action3
# ## First Example
#
# Let's see a quick example of this:
if True:
print('It was true!')
# Let's add in some else logic:
# +
x = False
if x:
print('x was True!')
else:
print('I will be printed in any case where x is not true')
print('Here it is')
# -
# ### Multiple Branches
#
# Let's get a fuller picture of how far <code>if</code>, <code>elif</code>, and <code>else</code> can take us!
#
# We write this out in a nested structure. Take note of how the <code>if</code>, <code>elif</code>, and <code>else</code> statements line up in the code. This can help you see which <code>if</code> is related to which <code>elif</code> or <code>else</code> statements.
#
# We'll reintroduce a comparison syntax for Python.
# +
loc = 'Bank'
if loc == 'Auto Shop':
print('Welcome to the Auto Shop!')
elif loc == 'Bank':
print('Welcome to the bank!')
else:
print('Where are you?')
# -
# Note how the nested <code>if</code> statements are each checked until a True boolean causes the nested code below it to run. You should also note that you can put in as many <code>elif</code> statements as you want before you close off with an <code>else</code>.
# ## if-elif-else Exercise
# Write code to report the weather condition for the following scenarios:
#
# 1.) Sunny and Hot (temp is higher than 30)
# 2.) Sunny and Moderate (temp is between 20 and 30)
# 3.) Rainy and Cold (temp is between 5 and 20)
# 4.) Rainy and Very Cold (temp is less than 5)
# +
# Write your code here
temp=int(input('Enter temperature '))
if temp>30:
print('Sunny and Hot ')
elif 20<=temp<=30:
print('Sunny and Moderate ')
elif 5<=temp<20:
print('Rainy and Cold ')
elif temp<5:
print('Rainy and Very Cold ')
# -
# # for Loops
#
# A <code>for</code> loop acts as an iterator in Python; it goes through items that are in a *sequence* or any other iterable item. Objects that we've learned about that we can iterate over include strings, lists, tuples, and even built-in iterables for dictionaries, such as keys or values.
#
# We've already seen the <code>for</code> statement a little bit in past lectures but now let's formalize our understanding.
#
# Here's the general format for a <code>for</code> loop in Python:
#
# for item in object:
# statements to do stuff
# The variable name used for the item is completely up to the coder, so use your best judgment for choosing a name that makes sense and you will be able to understand when revisiting your code. This item name can then be referenced inside your loop, for example if you wanted to use <code>if</code> statements to perform checks.
#
# Let's go ahead and work through several examples of <code>for</code> loops using a variety of data object types. We'll start simple and build more complexity later on.
#
# ## Example 1
# Iterating through a list
# We'll learn how to automate this sort of list in the next lecture
list1 = [1,2,3,4,5,6,7,8,9,10]
list1
for num in list1:
print(num)
# Great! Hopefully this makes sense. Now let's add an <code>if</code> statement to check for even numbers. We'll first introduce a new concept here--the modulo.
# ### Modulo
# The modulo allows us to get the remainder in a division and uses the % symbol. For example:
17 % 5
# This makes sense since 17 divided by 5 is 3 remainder 2. Let's see a few more quick examples:
# 3 Remainder 1
10 % 3
# 2 no remainder
4 % 2
# Notice that if a number is fully divisible with no remainder, the result of the modulo call is 0. We can use this to test for even numbers, since if a number modulo 2 is equal to 0, that means it is an even number!
#
# Back to the <code>for</code> loops!
#
# ## Example 2
# Let's print only the even numbers from that list!
for num in list1:
if num % 2 == 0:
print(num)
# We could have also put an <code>else</code> statement in there:
for num in list1:
if num % 2 == 0:
print(num)
else:
print('Odd number')
# ## Example 3
list1=['w','o','r','d']
print(list1)
# +
result=''
for elements in list1:
result +=elements
# -
# Please notice here that <code>result +=elements</code> is equivalent to <code>result=result+elements</code>
print(result)
# ## For loop Exercise
# +
#create empty list
lit=[]
#write for loop to assign each letter of "result" separately to list
for i in result:
lit=lit+[i]
# -
#print list
print(lit)
# # while Loops
#
# The <code>while</code> statement in Python is one of the most general ways to perform iteration. A <code>while</code> statement will repeatedly execute a single statement or group of statements as long as the condition is true. The reason it is called a 'loop' is because the code statements are looped through over and over again until the condition is no longer met.
#
# The general format of a while loop is:
#
# while test:
# code statements
# else:
# final code statements
#
# Let’s look at a few simple <code>while</code> loops in action.
# Notice how many times the print statements occurred and how the <code>while</code> loop kept going until the condition became False, which occurred once x==10. It's important to note that once this occurred the code stopped. Also see how we could add an <code>else</code> statement:
# +
x = 0
while x < 10:
print('x is currently: ',x)
print(' x is still less than 10, adding 1 to x')
x+=1
else:
print('All Done!')
# -
# # break, continue, pass
#
# We can use <code>break</code>, <code>continue</code>, and <code>pass</code> statements in our loops to add additional functionality for various cases. The three statements are defined by:
#
# break: Breaks out of the current closest enclosing loop.
# continue: Goes to the top of the closest enclosing loop.
# pass: Does nothing at all.
#
#
# Thinking about <code>break</code> and <code>continue</code> statements, the general format of the <code>while</code> loop looks like this:
#
# while test:
# code statement
# if test:
# break
# if test:
# continue
#     else:
#         final code statements
#
# <code>break</code> and <code>continue</code> statements can appear anywhere inside the loop’s body, but we will usually put them further nested in conjunction with an <code>if</code> statement to perform an action based on some condition.
#
# Let's go ahead and look at some examples!
# +
x = 0
while x < 10:
print('x is currently: ',x)
print(' x is still less than 10, adding 1 to x')
x+=1
if x==3:
print('x==3')
else:
print('continuing...')
        continue
print('is this executed?')
# -
# Note how we have a printed statement when x==3, and a continue being printed out as we continue through the outer while loop. Let's put in a break once x ==3 and see if the result makes sense:
# +
x = 0
while x < 10:
print('x is currently: ',x)
print(' x is still less than 10, adding 1 to x')
x+=1
if x==3:
print('Breaking because x==3')
break
else:
print('continuing...')
continue
# -
# Note how the other <code>else</code> statement wasn't reached and continuing was never printed!
#
# After these brief but simple examples, you should feel comfortable using <code>while</code> statements in your code.
#
# **A word of caution however! It is possible to create an infinitely running loop with <code>while</code> statements.**
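# The examples above covered <code>break</code> and <code>continue</code>. The third statement, <code>pass</code>, really does nothing at all, which makes it useful as a placeholder for code you haven't written yet:

```python
def not_implemented_yet():
    pass  # stub body; the function runs and returns None

for x in [1, 2, 3]:
    pass  # the loop runs to completion, doing nothing on each iteration

print(not_implemented_yet())  # None
```

# Without <code>pass</code>, an empty function body or loop body would be a syntax error.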
# ## While Loop Exercise
# Write code to check if an input number is greater than, less than, or equal to 50. The loop should run continuously, and the command **exit** should end the loop.
# Hint: you need to use the input() function and cast the value to 'int' for the comparisons, but don't cast when checking for 'exit'.
#
# The code for taking input and casting is as following:
#
# <code>x=input("Enter your value: ")</code>
#write code here
while True:
    x=input("Enter your value: ")
if x=='exit':
print('breaking ')
break
else:
x=int(x)
if x==50:
print('equal to 50 ')
elif x<50:
print('less than 50 ')
else:
print('greater than 50')
continue
# # Methods
#
# Methods are essentially functions built into objects. Methods perform specific actions on an object and can also take arguments, just like a function. This lecture will serve as just a brief introduction to methods and get you thinking about overall design methods that we will touch back upon when we reach OOP in the course.
#
# Methods are in the form:
#
# object.method(arg1,arg2,etc...)
#
# You'll later see that we can think of methods as having an argument 'self' referring to the object itself. You can't see this argument but we will be using it later on in the course during the OOP lectures.
#
# Let's take a quick look at what an example of the various methods a list has:
# Create a simple list
lst = [1,2,3,4,5]
# Fortunately, with IPython and the Jupyter Notebook we can quickly see all the possible methods using the Tab key.
# The common methods for a list are:
#
# * append
# * count
# * extend
# * insert
# * pop
# * remove
# * reverse
# * sort
#
# Let's try out a few of them:
lst.append(6)
lst
# Check how many times 2 shows up in the list
lst.count(2)
lst.clear()
lst
# You can always use Tab in the Jupyter Notebook to get a list of all available methods, and Shift+Tab to get help on a method.
# The methods for a string are:
#
# * lower
# * upper
# * replace
# * split
# * count
# * join
# * find
# * endswith
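# A few of the methods listed above in action:

```python
s = 'hello world'
print(s.replace('world', 'there'))  # 'hello there'
print(s.find('world'))              # 6: index of the first match
print(s.endswith('world'))          # True
print('-'.join(['a', 'b', 'c']))    # 'a-b-c'
print(s.count('l'))                 # 3
```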
# ### Home Task
# Make a string having two words, starting with the same letter and separated by a space, like **'<NAME>'**, and perform the following tasks using methods
#declare a string
name='<NAME>'
# Hint: s.split('Letter to split with', maxsplit=no. of splits)
# split the string
name.split()
# apply upper() method on the string
name.upper()
# +
# apply lower() method on the string
name.lower()
# -
# # Functions
#
# ## Introduction to Functions
#
# **So what is a function?**
#
# Formally, a function is a useful device that groups together a set of statements so they can be run more than once. They can also let us specify parameters that can serve as inputs to the functions.
#
# On a more fundamental level, functions allow us to not have to repeatedly write the same code again and again.
#
# Functions will be one of most basic levels of reusing code in Python, and it will also allow us to start thinking of program design.
# ## def Statements
#
# Let's see how to build out a function's syntax in Python. It has the following form:
def name_of_function(arg1,arg2):
'''
This is where the function's Document String (docstring) goes
'''
name_of_function(1,2)
# We begin with <code>def</code>, then a space followed by the name of the function. Try to keep names relevant, for example len() is a good name for a length() function. Also be careful with names; you wouldn't want to give a function the same name as a [built-in function in Python](https://docs.python.org/2/library/functions.html) (such as len).
#
# Next comes a pair of parentheses containing the arguments, separated by commas. These arguments are the inputs for your function. You'll be able to use these inputs in your function and reference them. After this you put a colon.
#
# The important step is that you must indent to begin the code inside your function correctly.
# ### Example 1: A simple print 'hello' function
def say_hello():
print('hello')
# **Call the Function**
say_hello()
# ### Example 2: A simple greeting function
# Let's write a function that greets people with their name.
def greeting(name):
print('Hello',(name))
greeting('Jose')
# ## Using return
# Let's see some examples that use a <code>return</code> statement. <code>return</code> allows a function to *return* a result that can then be stored as a variable, or used in whatever manner a user wants.
#
# ### Example 3: Addition function
def add_num(num1,num2):
    return num1 + num2
# Can save as variable due to return
result = add_num(4,5)
result
# ### Example 4: Function with default arguments
def my_power(num,n=2):
'''
Method to get power of number
'''
return num**n
my_power(3,3)
# # Functions exercise
# Write a function to check if a number is even or odd.
#define the function
def even_odd(num):
return num%2
#call the function
if even_odd(40):
print('odd')
else:
print('even')
# # Built-in Functions in Python
# We will discuss some built-in functions provided to us by Python. They are as follows:
# * Filter
# * Map
# * Lambda
# * Range
# * Min, Max
# * Enumerate
#
# Let's have a look at the following examples and see how they work
# ## map function
#
# The **map** function allows you to "map" a function to an iterable object. That is to say you can quickly call the same function to every item in an iterable, such as a list. For example:
def square(num):
return num**2
my_nums = [1,2,3,4,5]
new_num=[]
for i in my_nums:
new_num.append(square(i))
new_num
# **Using the map() function:**
map(square,my_nums)
#printing results
list(map(square,my_nums))
# ## filter function
#
# The filter function returns an iterator yielding those items of an iterable for which function(item)
# is true. That means you need to filter with a function that returns either True or False. Pass that function into filter (along with your iterable) and you will get back only the results that return True when passed to the function.
def check_even(num):
return num % 2 == 0
nums = [0,1,2,3,4,5,6,7,8,9,10]
filter(check_even,nums)
list(filter(check_even,nums))
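A side note not covered above: passing `None` as the function is a common idiom — `filter(None, iterable)` keeps only the truthy items:

```python
mixed = [0, 1, '', 'a', None, 3]
# None as the predicate means "keep items that are truthy"
print(list(filter(None, mixed)))  # -> [1, 'a', 3]
```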
# ## lambda expression
#
# lambda expressions allow us to create "anonymous" functions. This basically means we can quickly make ad-hoc functions without needing to properly define a function using def.
#
# **lambda's body is a single expression, not a block of statements.**
#
# Consider the following function:
def square(num):
result = num**2
return result
square(2)
# We could actually even write this all on one line.
lambda num: num ** 2
# assigning the lambda to a name lets us call it like a regular function
square = lambda num: num ** 2
# Let's repeat some of the examples from above with a lambda expression
list(map(lambda num: num ** 2, my_nums))
# ### Task
#
# Repeat the **filter()** function example with the **lambda()** functions
# write the code here
number=[1,2,3,4,5]
list(filter(lambda n:n%2==0,number))
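A common real-world use of ad-hoc lambdas, beyond map and filter, is as the `key` argument of sorting functions:

```python
pairs = [(1, 'b'), (2, 'a'), (3, 'c')]
# sort the tuples by their second element
print(sorted(pairs, key=lambda p: p[1]))  # -> [(2, 'a'), (1, 'b'), (3, 'c')]
```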
# ## range
#
# The range function allows you to quickly *generate* a list of integers, this comes in handy a lot, so take note of how to use it! There are 3 parameters you can pass, a start, a stop, and a step size. Let's see some examples:
range(0,11)
# Note that this is a **generator**-like object, so to actually get a list out of it, we need to cast it to a list with **list()**. What is a generator? It's a special type of object that generates its values on demand instead of storing them all in memory.
list(range(0,11))
list(range(0,11,2))
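To see the lazy behavior for yourself: a large range costs almost no memory, yet still supports `len()` and indexing, and can be consumed directly by functions like `sum()`:

```python
r = range(0, 1_000_000)      # no million-element list is built here
print(len(r), r[10])         # ranges support len() and indexing lazily
print(sum(range(0, 11, 2)))  # 0+2+4+6+8+10 -> 30
```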
# ## enumerate
#
# enumerate is a very useful function to use with for loops. Let's imagine the following situation:
# +
index_count = 0
for letter in 'abcde':
print("At index " + str(index_count) +" the letter is " + str(letter))
index_count += 1
# -
# Keeping track of how many loops you've gone through is so common that enumerate was created, so you don't need to worry about creating and updating an index_count or loop_count variable yourself.
# +
# Notice the tuple unpacking!
for i,letter in enumerate('abcde'):
print("At index " + str(i) +" the letter is " + str(letter))
# -
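One extra detail worth knowing: enumerate accepts an optional start value, which is handy for 1-based numbering:

```python
# count from 1 instead of 0
for i, letter in enumerate('abc', start=1):
    print(i, letter)
```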
# ## min and max
#
# Quickly check the minimum or maximum of a list with these functions.
mylist = [10,20,30,40,100]
min(mylist)
max(mylist)
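min and max work on any comparable items, not just numbers, and both accept a key function for custom comparisons — a small sketch:

```python
words = ['pear', 'fig', 'banana']
print(min(words))            # alphabetical minimum -> 'banana'
print(max(words, key=len))   # longest word -> 'banana'
```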
# ## random
#
# Python comes with a built in random library. There are a lot of functions included in this random library, so we will only show you two useful functions for now.
from random import shuffle
shuffle(mylist)
mylist
from random import randint
randint(0,100)
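Because these functions are pseudo-random, seeding the generator makes results reproducible, which is useful for debugging. A minimal sketch:

```python
import random

random.seed(0)                # fix the seed
first = random.randint(0, 100)
random.seed(0)                # same seed -> same sequence of values
second = random.randint(0, 100)
print(first == second)        # -> True
```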
# ## input
#
# This function is used in Python to take input from the user.
a=input('Enter Something into this box: ')
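Note that input() always returns a string, even if the user types digits, so convert explicitly when you need a number. The literal "42" below stands in for a user's typed response:

```python
raw = "42"          # stands in for input('Enter a number: ')
assert isinstance(raw, str)  # input() gives back a string
number = int(raw)   # explicit conversion to an integer
print(number + 1)   # -> 43
```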
Python Basics/Control Flow and Functions.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import numpy as np
import pandas as pd
import sematch
import math
import sys
import time
import nltk
from nltk import word_tokenize
from nltk.corpus import wordnet as wn
from nltk.corpus import brown
from tqdm import tqdm, tqdm_notebook
from collections import Counter
import warnings
warnings.filterwarnings('ignore')
# +
def get_best_synset_pair(word_1, word_2):
"""
Choose the pair with highest path similarity among all pairs.
Mimics pattern-seeking behavior of humans.
"""
synsets_1 = wn.synsets(word_1)
synsets_2 = wn.synsets(word_2)
if len(synsets_1) == 0 or len(synsets_2) == 0:
return None, None
else:
max_sim = -1.0
best_pair = None, None
for synset_1 in synsets_1:
for synset_2 in synsets_2:
sim = wn.path_similarity(synset_1, synset_2)
if sim > max_sim:
max_sim = sim
best_pair = synset_1, synset_2
return best_pair
def length_dist(synset_1, synset_2):
"""
Return a measure of the length of the shortest path in the semantic
ontology (Wordnet in our case as well as the paper's) between two
synsets.
"""
l_dist = sys.maxint
if synset_1 is None or synset_2 is None:
return 0.0
if synset_1 == synset_2:
# if synset_1 and synset_2 are the same synset return 0
l_dist = 0.0
else:
wset_1 = set([str(x.name()) for x in synset_1.lemmas()])
wset_2 = set([str(x.name()) for x in synset_2.lemmas()])
if len(wset_1.intersection(wset_2)) > 0:
# if synset_1 != synset_2 but there is word overlap, return 1.0
l_dist = 1.0
else:
# just compute the shortest path between the two
l_dist = synset_1.shortest_path_distance(synset_2)
if l_dist is None:
l_dist = 0.0
# normalize path length to the range [0,1]
return math.exp(-ALPHA * l_dist)
def hierarchy_dist(synset_1, synset_2):
"""
Return a measure of depth in the ontology to model the fact that
nodes closer to the root are broader and have less semantic similarity
than nodes further away from the root.
"""
h_dist = sys.maxint
if synset_1 is None or synset_2 is None:
return h_dist
if synset_1 == synset_2:
# return the depth of one of synset_1 or synset_2
h_dist = max([x[1] for x in synset_1.hypernym_distances()])
else:
# find the max depth of least common subsumer
hypernyms_1 = {x[0]:x[1] for x in synset_1.hypernym_distances()}
hypernyms_2 = {x[0]:x[1] for x in synset_2.hypernym_distances()}
lcs_candidates = set(hypernyms_1.keys()).intersection(
set(hypernyms_2.keys()))
if len(lcs_candidates) > 0:
lcs_dists = []
for lcs_candidate in lcs_candidates:
lcs_d1 = 0
if hypernyms_1.has_key(lcs_candidate):
lcs_d1 = hypernyms_1[lcs_candidate]
lcs_d2 = 0
if hypernyms_2.has_key(lcs_candidate):
lcs_d2 = hypernyms_2[lcs_candidate]
lcs_dists.append(max([lcs_d1, lcs_d2]))
h_dist = max(lcs_dists)
else:
h_dist = 0
return ((math.exp(BETA * h_dist) - math.exp(-BETA * h_dist)) /
(math.exp(BETA * h_dist) + math.exp(-BETA * h_dist)))
def word_similarity(word_1, word_2):
synset_pair = get_best_synset_pair(word_1, word_2)
return (length_dist(synset_pair[0], synset_pair[1]) *
hierarchy_dist(synset_pair[0], synset_pair[1]))
######################### sentence similarity ##########################
def most_similar_word(word, word_set):
"""
Find the word in the joint word set that is most similar to the word
passed in. We use the algorithm above to compute word similarity between
the word and each word in the joint word set, and return the most similar
word and the actual similarity value.
"""
max_sim = -1.0
sim_word = ""
for ref_word in word_set:
sim = word_similarity(word, ref_word)
if sim > max_sim:
max_sim = sim
sim_word = ref_word
return sim_word, max_sim
def info_content(lookup_word):
"""
Uses the Brown corpus available in NLTK to calculate a Laplace
smoothed frequency distribution of words, then uses this information
to compute the information content of the lookup_word.
"""
global N
if N == 0:
# poor man's lazy evaluation
for sent in brown.sents():
for word in sent:
word = word.lower()
if not brown_freqs.has_key(word):
brown_freqs[word] = 0
brown_freqs[word] = brown_freqs[word] + 1
N = N + 1
lookup_word = lookup_word.lower()
n = 0 if not brown_freqs.has_key(lookup_word) else brown_freqs[lookup_word]
return 1.0 - (math.log(n + 1) / math.log(N + 1))
def semantic_vector(words, joint_words, info_content_norm):
"""
Computes the semantic vector of a sentence. The sentence is passed in as
a collection of words. The size of the semantic vector is the same as the
size of the joint word set. The elements are 1 if a word in the sentence
already exists in the joint word set, or the similarity of the word to the
most similar word in the joint word set if it doesn't. Both values are
further normalized by the word's (and similar word's) information content
if info_content_norm is True.
"""
sent_set = set(words)
semvec = np.zeros(len(joint_words))
i = 0
for joint_word in joint_words:
if joint_word in sent_set:
# if word in union exists in the sentence, s(i) = 1 (unnormalized)
semvec[i] = 1.0
if info_content_norm:
semvec[i] = semvec[i] * math.pow(info_content(joint_word), 2)
else:
# find the most similar word in the joint set and set the sim value
sim_word, max_sim = most_similar_word(joint_word, sent_set)
semvec[i] = PHI if max_sim > PHI else 0.0
if info_content_norm:
semvec[i] = semvec[i] * info_content(joint_word) * info_content(sim_word)
i = i + 1
return semvec
def semantic_similarity(sentence_1, sentence_2, info_content_norm):
"""
Computes the semantic similarity between two sentences as the cosine
similarity between the semantic vectors computed for each sentence.
"""
words_1 = nltk.word_tokenize(sentence_1)
words_2 = nltk.word_tokenize(sentence_2)
joint_words = set(words_1).union(set(words_2))
vec_1 = semantic_vector(words_1, joint_words, info_content_norm)
vec_2 = semantic_vector(words_2, joint_words, info_content_norm)
return np.dot(vec_1, vec_2.T) / (np.linalg.norm(vec_1) * np.linalg.norm(vec_2))
######################### word order similarity ##########################
def word_order_vector(words, joint_words, windex):
"""
Computes the word order vector for a sentence. The sentence is passed
in as a collection of words. The size of the word order vector is the
same as the size of the joint word set. The elements of the word order
vector are the position mapping (from the windex dictionary) of the
word in the joint set if the word exists in the sentence. If the word
does not exist in the sentence, then the value of the element is the
position of the most similar word in the sentence as long as the similarity
is above the threshold ETA.
"""
wovec = np.zeros(len(joint_words))
i = 0
wordset = set(words)
for joint_word in joint_words:
if joint_word in wordset:
# word in joint_words found in sentence, just populate the index
wovec[i] = windex[joint_word]
else:
# word not in joint_words, find most similar word and populate
# word_vector with the thresholded similarity
sim_word, max_sim = most_similar_word(joint_word, wordset)
if max_sim > ETA:
wovec[i] = windex[sim_word]
else:
wovec[i] = 0
i = i + 1
return wovec
def word_order_similarity(sentence_1, sentence_2):
"""
Computes the word-order similarity between two sentences as the normalized
difference of word order between the two sentences.
"""
words_1 = nltk.word_tokenize(sentence_1)
words_2 = nltk.word_tokenize(sentence_2)
joint_words = list(set(words_1).union(set(words_2)))
windex = {x[1]: x[0] for x in enumerate(joint_words)}
r1 = word_order_vector(words_1, joint_words, windex)
r2 = word_order_vector(words_2, joint_words, windex)
return 1.0 - (np.linalg.norm(r1 - r2) / np.linalg.norm(r1 + r2))
######################### overall similarity ##########################
def similarity(sentence_1, sentence_2, info_content_norm):
"""
Calculate the semantic similarity between two sentences. The last
parameter is True or False depending on whether information content
normalization is desired or not.
"""
return DELTA * semantic_similarity(sentence_1, sentence_2, info_content_norm) + \
(1.0 - DELTA) * word_order_similarity(sentence_1, sentence_2)
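The semantic_similarity function above reduces to a cosine similarity between the two semantic vectors (computed there with numpy). A dependency-free sketch of that core operation — the `cosine` helper is mine, not part of the notebook:

```python
import math

def cosine(v1, v2):
    # dot product divided by the product of the vector norms
    dot = sum(a * b for a, b in zip(v1, v2))
    norm1 = math.sqrt(sum(a * a for a in v1))
    norm2 = math.sqrt(sum(b * b for b in v2))
    return dot / (norm1 * norm2)

print(cosine([1.0, 0.0], [1.0, 0.0]))  # identical directions -> 1.0
print(cosine([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```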
# +
ALPHA = 0.2
BETA = 0.45
ETA = 0.4
PHI = 0.2
DELTA = 0.85
brown_freqs = dict()
N = 0
src = '/media/w/1c392724-ecf3-4615-8f3c-79368ec36380/DS Projects/Kaggle/Quora/scripts/features/'
trdf = pd.read_csv(src + 'df_train_spacylemmat_fullclean.csv').iloc[:, :-1]
tedf = pd.read_csv(src + 'df_test_spacylemmat_fullclean.csv').iloc[:, 4:]
trdfs = pd.read_csv(src + 'df_train_lemmatfullcleanSTEMMED.csv').iloc[:, :-1]
tedfs = pd.read_csv(src + 'df_test_lemmatfullcleanSTEMMED.csv').iloc[:, 4:]
trdf.fillna('NULL', inplace = True)
tedf.fillna('NULL', inplace = True)
trdfs.fillna('NULL', inplace = True)
tedfs.fillna('NULL', inplace = True)
# -
t = time.time()
trdf = trdf.iloc[:100, :]
X = pd.DataFrame()
X['nltk_article_similarity'] = trdf.apply(lambda x: similarity(x['question1'], x['question2'], False), axis = 1)
X['nltk_article_similarity_brown'] = trdf.apply(lambda x: similarity(x['question1'], x['question2'], True), axis = 1)
time.time() - t
features/.ipynb_checkpoints/Extraction - NLTK Similarity Article 22.05-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="wIXqUgrcicAw"
# ## *Description and instructions*
#
# The implementation is designed to be run from Google Colab, but it can be run locally as long as the stated requirements are installed.
#
# A set of directories needs to be set up to interact with the functions, since the results of segmenting or applying filters are saved in different folders and read by the corresponding next step. Two preprocessing passes were applied for better control over the quality of the output.
#
# These directories must be adapted to the appropriate paths for whoever uses the notebook.
#
# The required directories are:
#
# - *in_dir*: contains the images to process.
# - *imagenes_segmentadas*: contains the images after extracting the regions that contain text
# - *imagenes_Preproceso1*: contains the images after applying a first preprocessing pass
# - *imagenes_Preproceso2*: contains the images after applying a second preprocessing pass
# - *archivos_texto*: contains the text files of the transcriptions obtained
#
# In addition, the paths to the .csv files containing information about the transcriptions (to measure extraction quality) and the known entities must be specified.
#
# - *JusticIA_DatosTranscripciones.csv*
# - *civilservants.csv*
# - *organizations.csv*
# - *places.csv*
# - *prosecuted.csv*
# + [markdown] id="0tP1KqyRYHtM"
# # Installing the required libraries
# These libraries are listed in the `requirements.txt` file.
#
# **NOTE:** If you use Colab, restart the runtime after the installation.
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="R9c-2vWSM3BB" outputId="d0a51f46-9aee-47a0-9fac-125cd05671bc"
# Prerequisites for installing Kraken
# !pip uninstall tensorflow -y
# !pip install tensorflow==2.3.1
# !pip uninstall Keras -y
# !pip install Keras==2.2.4
# !pip uninstall scikit-learn -y
# !pip install scikit-learn==0.19.2
# !pip uninstall scikit-image -y
# !pip install scikit-image==0.18.2
# Kraken for image segmentation
# !pip install kraken
# Linux packages required to use tesseract
# ! apt install tesseract-ocr
# ! apt install libtesseract-dev
# Python libraries
# Text extraction
# ! pip install pytesseract
# Image manipulation
# ! pip install Pillow
# ! pip install opencv-python
# Entity identification
# !pip install rapidfuzz
# Date identification
# !pip install datefinder
# For evaluating the quality of the extraction
# !pip install python-Levenshtein
# + [markdown] id="XBxZU4BFP4bh"
# # Mounting the required folders in Colab
# + colab={"base_uri": "https://localhost:8080/"} id="OKdpsRJF24h6" outputId="6c014c6a-bc61-4f6a-e02f-032231c2b120"
from google.colab import drive
drive.mount('/content/drive')
# + colab={"base_uri": "https://localhost:8080/"} id="tJcQv05VQT0K" outputId="03a47ed4-2f4f-433a-f72c-fa5904989446"
# ! git clone https://github.com/Hackaton-JusticIA-2021/gato-encerrado-Hackathon-RIIAA-2021
# + id="VUcqsF5MpBgn"
import importlib
# load the modules from their paths
spec = importlib.util.spec_from_file_location("extraccion_texto", "/content/gato-encerrado-Hackathon-RIIAA-2021/modules/extraccion_texto.py")
extraccion_texto = importlib.util.module_from_spec(spec)
spec.loader.exec_module(extraccion_texto)
spec = importlib.util.spec_from_file_location("fuzzy_search", "/content/gato-encerrado-Hackathon-RIIAA-2021/modules/fuzzy_search.py")
fuzzy_search = importlib.util.module_from_spec(spec)
spec.loader.exec_module(fuzzy_search)
spec = importlib.util.spec_from_file_location("proceso_imagen", "/content/gato-encerrado-Hackathon-RIIAA-2021/modules/preproceso_imagen.py")
preproceso_imagen = importlib.util.module_from_spec(spec)
spec.loader.exec_module(preproceso_imagen)
# + [markdown] id="5Zaubo7MZ36n"
# We clone the GitHub repository to access the Python libraries created for the project.
#
# **Note**: The folders are stored in Drive temporarily.
# + [markdown] id="QNa9zcEicvDu"
# Access the images that we are going to process.
# + id="6necXLWiBoXt"
in_dir='/content/gato-encerrado-Hackathon-RIIAA-2021/Evaluacion/Reto2'
# + [markdown] id="7zH9EPYliEyG"
# If the folders do not exist, they are created using pathlib.
# + id="KyLAuvRFDfVq"
#folder names
#folder where we save the segmented image blocks
imagenes_segmentadas = '/content/drive/My Drive/Hackaton/Hackathon RIIAA 2021 (CNB)/Implementacion/ImagenesSegmentadas'
#folder where we save the first preprocessing pass
imagenes_Preproceso1 = '/content/drive/My Drive/Hackaton/Hackathon RIIAA 2021 (CNB)/Implementacion/Preproceso1'
#folder where we save the second preprocessing pass
imagenes_Preproceso2 = '/content/drive/My Drive/Hackaton/Hackathon RIIAA 2021 (CNB)/Implementacion/Preproceso2'
#folder where we save the text files
archivos_texto = '/content/drive/My Drive/Hackaton/Hackathon RIIAA 2021 (CNB)/Implementacion/TextFiles'
#folder where we save the resulting database
output_dir = '/content/drive/My Drive/Hackaton/Hackathon RIIAA 2021 (CNB)/Implementacion/extraccion.csv'
# + id="MePyhhEgEvBK"
from pathlib import Path
Path(imagenes_segmentadas).mkdir(parents=True, exist_ok=True)
Path(imagenes_Preproceso1).mkdir(parents=True, exist_ok=True)
Path(imagenes_Preproceso2).mkdir(parents=True, exist_ok=True)
Path(archivos_texto).mkdir(parents=True, exist_ok=True)
# + [markdown] id="YVFBSRHBhP0z"
# # Pipeline for text extraction on the evaluation images.
#
# This code block performs the preprocessing and text extraction on the images of the evaluation set.
# + colab={"base_uri": "https://localhost:8080/"} id="6WnHVvzFdjw9" outputId="61e8e61e-38bd-4e76-e79a-6ba5854d6d7d"
import pandas as pd
import os
files_0=os.listdir(in_dir)
files=[]
for file in files_0:
if file[-4:]=='.JPG':
files.append(file)
resultado={}
for i,file in enumerate(files):
name=str(Path(in_dir)/file)
output_path=preproceso_imagen.segmentar_orientar(img_path=name,output_dir=imagenes_segmentadas)
output_path=preproceso_imagen.guardar_imagen(output_path, filtro=preproceso_imagen.pillow_image_detail, filtro_name="", output_dir=imagenes_Preproceso1, pillow=True)
output_path=preproceso_imagen.guardar_imagen(output_path, filtro=preproceso_imagen.noise_removal, filtro_name="", output_dir=imagenes_Preproceso2, pillow=False)
texto=extraccion_texto.guardar_texto(output_path,archivos_texto)
key=Path(name).name
resultado[key]=texto
print(i)
df_transcripciones=pd.DataFrame(resultado,index=['texto']).T
df_transcripciones.to_csv(output_dir)
print(df_transcripciones)
# + [markdown] id="KFPzyawAjLOw"
# # Evaluation
# The following function measures the performance of the text extraction against the manual transcriptions.
# The path to the transcription database must be specified.
# It evaluates on a random sample of images of size n.
# It takes the following input arguments:
#
# #inputs:
#
# **main_root**: Path where the set of images we evaluate on is located.
#
# **output_dir**: Path where the resulting files are saved.
#
# **df_root**: Path where our transcription database is located.
#
# **n_samples**: Size of the sample the process is evaluated on.
#
# **psm**: Parameter of the text extractor.
#
# **random_state**: Seed to control the sampling.
#
# # output:
#
# The function returns a number between 0 and 1, where 0 corresponds to a perfect transcription (compared with the labeled transcription) and 1 to a transcription that does not match the real one at all.
#
# **Note:** The test presented here corresponds to the evaluation of the manual transcriptions.
#
#
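The score computed below is the python-Levenshtein `distance` normalized by the length of the reference text. A dependency-free sketch of the same measure, with hypothetical helper names of my own, for illustration only:

```python
def levenshtein(a, b):
    # classic dynamic-programming edit distance, computed row by row
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def normalized_distance(pred, truth):
    # 0.0 means a perfect transcription; larger values are worse
    return levenshtein(pred, truth) / len(truth)

print(levenshtein("kitten", "sitting"))  # -> 3
```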
# + id="ebHxrZE-GHuS"
import random
from PIL import Image, ImageFilter
from PIL import ImageEnhance
from Levenshtein import distance
import pandas as pd
import numpy as np
def evalua(main_root,output_dir,df_root,n_samples=10,psm=6,random_state=0,):
    #read the transcription database
df_transcripciones=pd.read_csv(df_root)
df_transcripciones.index=df_transcripciones.NombreArchivo
    #folder where we save the segmented image blocks
imagenes_segmentadas = str(Path(output_dir)/'ImagenesSegmentadas')
    #folder where we save the first preprocessing pass
imagenes_Preproceso1 = str(Path(output_dir)/'Preproceso1')
    #folder where we save the second preprocessing pass
imagenes_Preproceso2 = str(Path(output_dir)/'Preproceso2')
    #folder where we save the text files
archivos_texto = str(Path(output_dir)/'ArchivosTexto')
Path(imagenes_segmentadas).mkdir(parents=True, exist_ok=True)
Path(imagenes_Preproceso1).mkdir(parents=True, exist_ok=True)
Path(imagenes_Preproceso2).mkdir(parents=True, exist_ok=True)
Path(archivos_texto).mkdir(parents=True, exist_ok=True)
files_0=os.listdir(main_root)
    #check that the files are .JPG images
files=[]
for file in files_0:
if file[-4:]=='.JPG':
files.append(file)
print('Por evaluar sobre ',len(files),' de ',len(files_0),' imágenes')
random.seed(random_state)
sub_set=random.choices(files,k=n_samples)
resultado={}
distances=[]
for i,file in enumerate(sub_set):
name=str(Path(main_root)/file)
        #processing pipeline
output_path=preproceso_imagen.segmentar_orientar(img_path=name,output_dir=imagenes_segmentadas)
output_path=preproceso_imagen.guardar_imagen(output_path, filtro=preproceso_imagen.pillow_image_detail, filtro_name="", output_dir=imagenes_Preproceso1, pillow=True)
output_path=preproceso_imagen.guardar_imagen(output_path, filtro=preproceso_imagen.noise_removal, filtro_name="", output_dir=imagenes_Preproceso2, pillow=False)
texto=extraccion_texto.guardar_texto(output_path,archivos_texto)
key=Path(name).name
resultado[key]=texto
        #ground-truth text
try:
texto_real=df_transcripciones.loc[key]['Texto']
distances.append(distance(texto,texto_real)/len(texto_real))
print(i)
except:
print('Archivo no encontrado en la base de datos')
df=pd.DataFrame(resultado,index=['texto']).T
df.to_csv(str(Path(output_dir)/'valuate_data.csv'))
return np.mean(distances)
# + colab={"base_uri": "https://localhost:8080/"} id="PMfjhU4kKShA" outputId="bd9525b3-a1b0-4a56-fd55-1af6cfdd7ca6"
in_dir="/content/drive/My Drive/Hackathon RIIAA 2021 (CNB)/Datos - Hackathon JusticIA/Fichas_manual/"
save_dir='/content/drive/My Drive/Hackaton/Hackathon RIIAA 2021 (CNB)/Implementacion/'
df_dir='/content/drive/My Drive/Hackathon RIIAA 2021 (CNB)/Datos - Hackathon JusticIA/JusticIA_DatosTranscripciones.csv'
evalua(in_dir,save_dir,df_dir,n_samples=1)
# + [markdown] id="15kVS7ZePy9S"
# # Pipeline for entity identification.
#
# The following code blocks correspond to the method proposed for challenge 2A, which consists of finding matches for entities (people, places, dates and organizations) in the texts obtained from the image transcriptions.
#
# To run this you need the fuzzy_search module from the modules folder of the master repository. This module contains functions that use the Levenshtein distance to find matches between text strings.
#
# The df_salida object corresponds to the output.
#
#
# + id="VF5No_SHKfL7" colab={"base_uri": "https://localhost:8080/", "height": 515} outputId="7c813753-0653-4444-adac-0d1162ba138d"
import pandas as pd
import numpy as np
### If you want to read this dataframe from disk, uncomment the following line
#df_transcripciones=pd.read_csv('/content/drive/MyDrive/Hackaton/Hackathon RIIAA 2021 (CNB)/Implementacion/extraccion.csv')
df_transcripciones.columns=['NombreArchivo','Texto']
### Reading the entity lists (Note: it is important that the lists keep the same format, i.e. column names, as the ones provided to us at the start)
organizaciones = pd.read_csv("/content/gato-encerrado-Hackathon-RIIAA-2021/Evaluacion/organizations.csv", error_bad_lines=False)
organizaciones.columns = ["Organización"]
lugares = pd.read_csv("/content/gato-encerrado-Hackathon-RIIAA-2021/Evaluacion/places.csv", error_bad_lines=False).drop(["PLACES "],axis=1)
lugares["Lugar"] = lugares.index
lugares = lugares.reset_index(drop=True)
servidores = pd.read_csv("/content/gato-encerrado-Hackathon-RIIAA-2021/Evaluacion/civilservants.csv", error_bad_lines=False)
servidores.columns = ["Servidor Público"]
enjuiciados = pd.read_csv("/content/gato-encerrado-Hackathon-RIIAA-2021/Evaluacion/prosecuted.csv", error_bad_lines=False)
enjuiciados.columns = ["Enjuiciados"]
### If you want to run on a small sample of the lists, uncomment the following line; to generate the complete database, do nothing.
#listas = [organizaciones.loc[0:100],lugares.loc[0:100],servidores.loc[0:100],enjuiciados.loc[0:100]]
listas = [organizaciones,lugares,servidores,enjuiciados]
## Parameter to regulate the strictness of the search
threshold = 70
df_salida = fuzzy_search.obtener_matchs_df(df_transcripciones, listas,threshold ).sort_values("NombreArchivo").append(fuzzy_search.obtener_df_expedientes_fechas(df_transcripciones)).sort_values('NombreArchivo').reset_index(drop=True)
df_salida.to_csv('/content/drive/My Drive/Hackaton/Hackathon RIIAA 2021 (CNB)/Implementacion/Entidades_identificadas.csv')
df_salida
# + [markdown] id="pl--MC64Vuze"
# # Searching for an entity in all the index cards
#
# The following cells present an entity search tool. As **input** parameters it receives the **name of the entity to search for**, the **transcriptions dataframe** to search in, and a **threshold** parameter that regulates how strict the query is: threshold < 60 is a lax search with a higher probability of false positives, and threshold > 60 means a stricter search in which some information may be lost.
#
# As **output** it returns a dataframe with all the cards in which matches are found, and a word cloud of the most frequent words in the transcriptions where the entity appears.
# + id="dMITu-JC6hBM" colab={"base_uri": "https://localhost:8080/", "height": 707} outputId="261c64f0-439e-4d54-dadb-67ed3b505516"
name = "Acapulco"
fuzzy_search.find_matchs_word_cloud(name,df_transcripciones,threshold=50)
# + id="kq5HrNjwY4T3"
implementacion_preprocesamiento_extraccion.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.0.1
# language: julia
# name: julia-1.0
# ---
# # Ordinary Differential Equation Solvers: Runge-Kutta Methods
# ## <NAME>
#
# So what's an <i>Ordinary Differential Equation</i>?
#
# Differential Equation means we have some equation (or equations) that have derivatives in them.
#
# The <i>ordinary</i> part differentiates them from <i>partial</i> differential equations (the ones with curly $\partial$ derivatives). Here, we only have one <b>independent</b> variable (let's call it $t$), and one or more <b>dependent</b> variables (let's call them $x_1, x_2, ...$). In partial differential equations, we can have more than one independent variable.
#
# This ODE can either be written as a system of the form
# $$
# \frac{d x_1}{dt}=f_1(t,x_1,x_2,...,x_n)
# $$
# $$
# \frac{d x_2}{dt}=f_2(t,x_1,x_2,...,x_n)
# $$
# ...
# $$
# \frac{d x_n}{dt}=f_n(t,x_1,x_2,...,x_n)
# $$
# or a single n'th order ODE of the form
# $$
# f_n(t,x) \frac{d^n x}{dt^n}+...+f_1(t,x) \frac{dx}{dt}+f_0(t,x)=0
# $$
# that can be rewritten in terms of a system of first order equations by performing variable substitutions such as
# $$
# \frac{d^i x}{dt^i}=x_i
# $$
#
# Though STEM students such as I have probably spent thousands of hours poring over ways to analytically solve both ordinary and partial differential equations, unfortunately, the real world is rarely so kind as to provide us with an exactly solvable differential equation. At least for interesting problems.
#
# We can sometimes approximate the real world as an exactly solvable situation, but for the situation we are interested in, we have to turn to numerics. I'm not saying those thousand different analytic methods are for nothing. We need an idea ahead of time of what the differential equation should be doing, to tell if it's misbehaving or not. We can't just blindly plug and chug.
#
# Today will be about introducing four different methods based on Taylor expansion to a specific order, also known as Runge-Kutta Methods. We can improve these methods with adaptive stepsize control, but that is a topic for another time, just like the other modern types of solvers such as Richardson extrapolation and predictor-corrector.
#
# Nonetheless, to work with ANY computational differential equation solver, you need to understand the fundamentals of routines like Euler and Runge-Kutta, their error propagation, and where they can go wrong. Otherwise, you might misinterpret the results of more advanced methods.
#
# <b>WARNING:</b> If you need to solve a troublesome differential equation for a research problem, use a package, like [DifferentialEquations](https://github.com/JuliaDiffEq/DifferentialEquations.jl). These packages have much better error handling and optimization.
#
# Let's first add our plotting package and colors.
using Plots
plotlyjs()
# We will be benchmarking our solvers on one of the simplest and most common ODE's,
#
# \begin{equation}
# \frac{d}{d t}x=x \;\;\;\;\;\;\; x(t)=C e^t
# \end{equation}
#
# Though this only has one dependent variable, we want to structure our code so that we can accommodate a series of dependent variables, $x_1,x_2,...,x_n$, and their associated derivative functions. Therefore, we create a function for each dependent variable, and then `push` it into an array declared as type `Function`.
function f1(t::Float64,x::Array{Float64,1})
return x[1]
end
f=Function[]
push!(f,f1)
# ### Euler's Method
# <img src="../images/ODE/graphic.png" width="400px" style="float: left; margin: 20px"/>
# First published in Euler's <i>Institutionum calculi integralis</i> in 1768, this method gets a lot of mileage, and if you are to understand anything, this method is it.
#
# We march along with step size $h$, and at each new point, calculate the slope. The slope gives us our new direction to travel for the next $h$.
#
# We can determine the error from the Taylor expansion of the function
# $$
# x_{n+1}=x_n+h f(x_n,t) + \mathcal{O}(h^2).
# $$
# In case you haven't seen it before, the notation $\mathcal{O}(x)$ stands for "errors of the order $x$".
# Summing over the entire interval, we accumulate error according to
# $$N\mathcal{O}(h^2)= \frac{x_f-x_0}{h}\mathcal{O}(h^2)=\mathcal{O}(h),$$
# making this a <b>first order</b> method. Generally, if a technique is $n$th order in the Taylor expansion for one step, it's $(n-1)$th order over the interval.
function Euler(f::Array{Function,1},t0::Float64,x::Array{Float64,1},h::Float64)
d=length(f)
xp=copy(x)
for ii in 1:d
xp[ii]+=h*f[ii](t0,x)
end
return t0+h,xp
end
# ## Implicit Method or Backward Euler
#
#
# If $f(t,x)$ has a form that is invertible, we can form a specific expression for the next step. For example, if we use our exponential,
# \begin{equation}
# x_{n+1}=x_n+ h f(t_{n+1},x_{n+1})
# \end{equation}
# \begin{equation}
# x_{n+1}-h x_{n+1}=x_n
# \end{equation}
# \begin{equation}
# x_{n+1}=\frac{x_n}{1-h}
# \end{equation}
#
# This expression varies for each differential equation and only exists if the function is invertible.
# +
function Implicit(f::Array{Function,1},t0::Float64,x::Array{Float64,1},h::Float64)
    return t0+h,x./(1-h)
end
# -
# ## 2nd Order Runge-Kutta
#
# So in the Euler Method, we could just make more, tinier steps to achieve more precise results. Here, we make <i>better</i> steps. Each step itself takes more work than a step in the first order methods, but we win by having to perform fewer steps.
#
# This time, we are going to work with the Taylor expansion up to second order,
# \begin{equation}
# x_{n+1}=x_n+h f(t_n,x_n) + \frac{h^2}{2} f^{\prime}(t_n,x_n)+ \mathcal{O} (h^3).
# \end{equation}
#
# Define
# \begin{equation}
# k_1=f(t_n,x_n),
# \end{equation}
# so that we can write down the derivative of our $f$ function, and the second derivative (curvature), of our solution,
# \begin{equation}
# f^{\prime}(t_n,x_n)=\frac{f(t_n+h/2,x_n+h k_1/2)-k_1}{h/2}+\mathcal{O}(h^2).
# \end{equation}
# Plugging this expression back into our Taylor expansion, we get a new expression for $x_{n+1}$,
# \begin{equation}
# x_{n+1}=x_n+hf(t_n+h/2,x_n+h k_1/2)+\mathcal{O}(h^3)
# \end{equation}
#
# We can also interpret this technique as using the slope at the center of the interval, instead of the start.
function RK2(f::Array{Function,1},t0::Float64,x::Array{Float64,1},h::Float64)
d=length(f)
xp=copy(x)
xk1=copy(x)
for ii in 1:d
xk1[ii]+=f[ii](t0,x)*h/2
end
for ii in 1:d
xp[ii]+=f[ii](t0+h/2,xk1)*h
end
return t0+h,xp
end
# ## 4th Order Runge-Kutta
# Wait! Where's 3rd order? There exists a 3rd order method, but I only just heard about it while fact-checking for this post. RK4 is your dependable, multi-purpose workhorse, so we are going to skip right to it.
#
# $$
# k_1= f(t_n,x_n)
# $$
# $$
# k_2= f(t_n+h/2,x_n+h k_1/2)
# $$
# $$
# k_3 = f(t_n+h/2, x_n+h k_2/2)
# $$
# $$
# k_4 = f(t_n+h,x_n+h k_3)
# $$
# $$
# x_{n+1}=x_n+\frac{h}{6}\left(k_1+2 k_2+ 2k_3 +k_4 \right)
# $$
# I'm not going to prove here that the method is fourth order, but we will see numerically that it is.
#
# <i>Note:</i> I premultiply the $k_i$'s by $h$ in my code to reduce the number of multiplications by $h$.
function RK4(f::Array{Function,1},t0::Float64,x::Array{Float64,1},h::Float64)
d=length(f)
hk1=zeros(Float64,length(x))
hk2=zeros(Float64,length(x))
hk3=zeros(Float64,length(x))
hk4=zeros(Float64,length(x))
for ii in 1:d
hk1[ii]=h*f[ii](t0,x)
end
for ii in 1:d
hk2[ii]=h*f[ii](t0+h/2,x+hk1/2)
end
for ii in 1:d
hk3[ii]=h*f[ii](t0+h/2,x+hk2/2)
end
for ii in 1:d
hk4[ii]=h*f[ii](t0+h,x+hk3)
end
return t0+h,x+(hk1+2*hk2+2*hk3+hk4)/6
end
# This next function merely iterates over a certain number of steps for a given method.
function Solver(f::Array{Function,1},Method::Function,t0::Float64,
x0::Array{Float64,1},h::Float64,N::Int64)
d=length(f)
ts=zeros(Float64,N+1)
xs=zeros(Float64,d,N+1)
ts[1]=t0
xs[:,1]=x0
for i in 2:(N+1)
ts[i],xs[:,i]=Method(f,ts[i-1],xs[:,i-1],h)
end
return ts,xs
end
# +
N=1000
xf=10
t0=0.
x0=[1.]
dt=(xf-t0)/N
tEU,xEU=Solver(f,Euler,t0,x0,dt,N);
tIm,xIm=Solver(f,Implicit,t0,x0,dt,N);
tRK2,xRK2=Solver(f,RK2,t0,x0,dt,N);
tRK4,xRK4=Solver(f,RK4,t0,x0,dt,N);
xi=tEU
yi=exp(xi);
errEU=reshape(xEU[1,:],N+1)-yi
errIm=reshape(xIm[1,:],N+1)-yi
errRK2=reshape(xRK2[1,:],N+1)-yi;
errRK4=reshape(xRK4[1,:],N+1)-yi;
# -
plot(x=tEU,y=xEU[1,:],Geom.point,
Theme(highlight_width=0pt,default_color=green,
default_point_size=3pt))
# +
lEU=layer(x=tEU,y=xEU[1,:],Geom.point,
Theme(highlight_width=0pt,default_color=green,
default_point_size=3pt))
lIm=layer(x=tIm,y=xIm[1,:],Geom.point,
Theme(highlight_width=0pt,default_color=yellow,
default_point_size=3pt))
lRK2=layer(x=tRK2,y=xRK2[1,:],Geom.point,
Theme(highlight_width=0pt,default_color=cyan,
default_point_size=2pt))
lRK4=layer(x=tRK4,y=xRK4[1,:],Geom.point,
Theme(highlight_width=0pt,default_color=violet,
default_point_size=4pt))
lp=layer(x->e^x,-.1,10,Geom.line,Theme(default_color=red))
plot(lp,lEU,lIm,lRK2,lRK4,
Guide.manual_color_key("Legend",["Euler","Implicit","RK2","RK4","Exact"],
[green,yellow,cyan,violet,red]),
Coord.cartesian(xmin=9.5,xmax=10.1))
# +
lEU=layer(x=xi,y=errEU,Geom.point,
Theme(highlight_width=0pt,default_color=green,
default_point_size=1pt))
lIm=layer(x=xi,y=errIm,Geom.point,
Theme(highlight_width=0pt,default_color=yellow,
default_point_size=1pt))
lRK2=layer(x=xi,y=errRK2,Geom.point,
Theme(highlight_width=0pt,default_color=cyan,
default_point_size=1pt))
lRK4=layer(x=xi,y=errRK4,Geom.point,
Theme(highlight_width=0pt,default_color=violet,
default_point_size=1pt))
plot(lEU,lIm,lRK2,lRK4,Scale.y_asinh,
Guide.manual_color_key("Legend",["Euler","Implicit","RK2","RK4"],
[green,yellow,cyan,violet]))
# -
# ## Scaling of the Error
#
# I claimed above that the error scales as $h$, $h^2$, or $h^4$. Rather than just talk, let's demonstrate the relationship numerically. For a variety of step sizes, the code below calculates the final error for each method; then we plot the errors and see how they scale.
# +
t0=0.
tf=1.
dx=tf-t0
x0=[1.]
dt=collect(.001:.0001:.01)
correctans=exp(tf)
errfEU=zeros(dt)
errfIm=zeros(dt)
errfRK2=zeros(dt)
errfRK4=zeros(dt)
for ii in 1:length(dt)
N=round(Int,dx/dt[ii])
dt[ii]=dx/N
tEU,xEU=Solver(f,Euler,t0,x0,dt[ii],N);
tIm,xIm=Solver(f,Implicit,t0,x0,dt[ii],N);
tRK2,xRK2=Solver(f,RK2,t0,x0,dt[ii],N);
tRK4,xRK4=Solver(f,RK4,t0,x0,dt[ii],N);
errfEU[ii]=xEU[1,end]-correctans
errfIm[ii]=xIm[1,end]-correctans
errfRK2[ii]=xRK2[1,end]-correctans
errfRK4[ii]=xRK4[1,end]-correctans
end
# -
plot(x=dt,y=errfEU)
# +
lEU=layer(x=dt,y=errfEU,Geom.point,
Theme(highlight_width=0pt,default_color=green,
default_point_size=1pt))
lIm=layer(x=dt,y=errfIm,Geom.point,
Theme(highlight_width=0pt,default_color=yellow,
default_point_size=1pt))
lRK2=layer(x=dt,y=errfRK2*10^5,Geom.point,
Theme(highlight_width=0pt,default_color=cyan,
default_point_size=1pt))
lRK4=layer(x=dt,y=errfRK4*10^10,Geom.point,
Theme(highlight_width=0pt,default_color=violet,
default_point_size=1pt))
tempEU(x)=(errfEU[end]*x/.01)
tempIm(x)=(errfIm[end]*x/.01)
#le=layer([tempEU,tempIm],0,.01,Geom.line,Theme(default_color=base01))
le=layer(tempEU,0,.01,Geom.line,Theme(default_color=base01))
lei=layer(tempIm,0,.01,Geom.line,Theme(default_color=base01))
temp2(x)=(errfRK2[end]*(x/.01)^2*10^5)
l2=layer(temp2,0,.01,Geom.line,Theme(default_color=base00))
temp4(x)=(errfRK4[end]*(x/.01)^4*10^10)
l4=layer(temp4,0,.01,Geom.line,Theme(default_color=base00))
xl=Guide.xlabel("h")
ylrk2=Guide.ylabel("Error e-5")
ylrk4=Guide.ylabel("Error e-10")
yl=Guide.ylabel("Error")
pEU=plot(lEU,lIm,le,lei,xl,yl,Guide.title("Euler and Implicit, linear error"))
p2=plot(lRK2,l2,xl,ylrk2,Guide.title("RK2, error h^2"))
p4=plot(lRK4,l4,xl,ylrk4,Guide.title("RK4, error h^4"))
vstack(hstack(p2,p4),pEU)
# -
# ## Arbitrary Order
# While I have presented four concrete examples, many more exist. For any choice of variables $\alpha_i, \beta_{ij}, a_i$ that fulfill
# $$
# x_{n+1}=x_n+h\left(\sum_{i=1}^s a_i k_i \right)+ \mathcal{O}(h^{p+1})
# $$
# with
# $$
# k_i=f\left(t_n+\alpha_i h,x_n+h\left(\sum_{j=1}^s \beta_{ij} k_j \right) \right)
# $$
# we have a Runge-Kutta method of order $p$, where $s \geq p$. A Butcher tableau records a consistent set of coefficients.
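# To make the tableau idea concrete, here is a hedged Python sketch of a generic <i>explicit</i> Runge-Kutta step driven by coefficient arrays. The classic RK4 tableau below is standard; the function itself is only an illustration, not the `diffeq.jl` implementation.

```python
import math

def rk_step(f, t, x, h, alpha, beta, a):
    """One explicit Runge-Kutta step for a scalar ODE x' = f(t, x), defined by
    a Butcher tableau: nodes alpha, stage weights beta (strictly lower
    triangular), and output weights a."""
    s = len(a)
    k = []
    for i in range(s):
        # each stage only depends on the k's already computed (explicit method)
        xi = x + h * sum(beta[i][j] * k[j] for j in range(i))
        k.append(f(t + alpha[i] * h, xi))
    return x + h * sum(w * ki for w, ki in zip(a, k))

# The classic RK4 tableau.
alpha = [0.0, 0.5, 0.5, 1.0]
beta = [[], [0.5], [0.0, 0.5], [0.0, 0.0, 1.0]]
a = [1/6, 1/3, 1/3, 1/6]

# Integrate x' = x from t = 0 to 1 in 100 steps; the answer should be close to e.
x, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    x = rk_step(lambda t, x: x, t, x, h, alpha, beta, a)
    t += h
print(x)
```

# Feeding it the RK4 tableau reproduces the fourth order behaviour demonstrated above.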
#
#
# These routines are also present in the M4 folder in a module named `diffeq.jl`. For later work, you may simply import the module.
#
# Stay tuned for when we apply these routines to the stiff van der Pol equations!
|
Numerics_Prog/Runge-Kutta-Methods.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
def toautoml_csv(remote_path):
    """Reshape the annotation CSV into the column layout AutoML expects."""
    import pandas as pd
    csv_input = pd.read_csv('/Users/seyran/Documents/GitHub/children_book_data/concatenated_annotations.csv')
    csv_input.insert(0, 'set', 'UNASSIGNED')  # let AutoML split train/validation/test
    csv_input.insert(1, 'path', remote_path)
    csv_input["path"] = csv_input["path"] + csv_input["id"]  # full gs:// URI per image
    csv_input = csv_input.drop('id', axis=1)
    # Pad with empty columns so the fields line up with AutoML's expected layout.
    csv_input.insert(5, 'empty5', '')
    csv_input.insert(6, 'empty6', '')
    csv_input.insert(9, 'empty9', '')
    csv_input.insert(10, 'empty10', '')
    csv_input.to_csv('output.csv', index=False)
remote_path= "gs://depth-286210/children_book_images/"
toautoml_csv(remote_path)
|
toautoml_csv.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Export a dataset to Excel
# This notebook shows how a dataset or transaction from ART-DECOR can easily be exported to Excel.
#
# First some overhead: importing the necessary libraries, pandas in this case.
import pandas as pd
# ## Retrieving the DECOR transaction
# DECOR datasets and transactions can be found at:
# https://decor.nictiz.nl/decor/services/ProjectIndex
#
# Choose a dataset or transaction and copy the URI below. The HTML version can be used, since pandas reads HTML tables just fine. The 'name' is used later for the Excel file. Optionally remove the '&hidecolumns=45ghijklmnop' part from the URI to show all columns of the dataset.
#
# Be patient after running the notebook cell; retrieval can take a while. The second table is then read from the HTML (the first table, html[0], holds the page titles). The HTML is imported into a pandas DataFrame - comparable to an Excel worksheet.
#
# transaction.head() shows the first rows of the retrieved dataset.
name = 'Demo'
uri ='https://decor.nictiz.nl/decor/services/RetrieveTransaction?id=2.16.840.1.113883.3.1937.99.62.3.4.2&effectiveDate=2012-09-05T16:59:35&language=nl-NL&ui=nl-NL&version=2019-10-10T11:18:31&format=html'
html = pd.read_html(uri)
transaction = html[1] # First table is overhead
transaction.head()
# ## Clean up column names and set the index
# The column names as retrieved are not what we want to see in Excel, so we:
# * take only the first line of each column header if there are multiple lines (can happen with long headers)
# * split on '[' and keep the first part
# * use the dataset ID as the index of the DataFrame
transaction.columns = [col.splitlines()[0] for col in transaction.columns]
transaction.columns = [col.split("[")[0] for col in transaction.columns]
transaction.set_index('ID', drop=True, inplace=True)
transaction.head()
# ## Dropping columns
# We don't want every column in the Excel file. The columns in the list below are removed from the DataFrame.
dropcols = ['Mandatory', 'Conf', 'Max', 'Datatype CC',
'Eigenschappen', 'Voorbeeld', 'Codes',
'Context', 'Bron', 'Rationale', 'Operationalisaties', 'Opmerking',
'Mapping', 'Conditie', 'Status', 'Community', 'Terminologie',
'Waardelijst','Ouderconcept', 'Erft van']
transaction = transaction.drop(labels=dropcols, axis='columns')
transaction.head()
# ## Adjusting the cardinality notation
# I'm personally not a fan of the long-winded 1..* notation, so I convert to the more elegant ?, 1, +, * notation.
transaction['Card'] = transaction['Card'].map({'0 … 1': '?', '1 … 1': '1', '1 … *': '+', '0 … *': '*'})
transaction.head()
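# A hedged aside with toy data: `Series.map` with a dict returns `NaN` for any value not in the dict, so an unexpected cardinality such as `'1 … 2'` would be silently blanked. Chaining `fillna` keeps the original value as a fallback:

```python
import pandas as pd

card = pd.Series(['0 … 1', '1 … 1', '1 … 2'])  # toy data; '1 … 2' is unmapped
mapping = {'0 … 1': '?', '1 … 1': '1', '1 … *': '+', '0 … *': '*'}

# Plain map: unmapped values become NaN.
print(card.map(mapping).tolist())               # ['?', '1', nan]

# map + fillna: unmapped values fall back to the original string.
print(card.map(mapping).fillna(card).tolist())  # ['?', '1', '1 … 2']
```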
# ## Save as Excel
# Now that the dataset looks the way we want, we save it as Excel.
#
# The formatting in Excel is minimal. For a more extensive set of options, see:
# https://xlsxwriter.readthedocs.io/working_with_pandas.html
writer = pd.ExcelWriter('Example_01_' + name + '.xlsx')
transaction.to_excel(writer, sheet_name=name)
writer.save()
|
01_Exporteer_een_dataset_naar_Excel.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import csv
import cv2
from tkinter import *
from tkinter import Tk, Menu, Canvas
from PIL import Image, ImageTk
from tkinter import filedialog
data=pd.read_csv("features.csv")
data_new=pd.read_csv("features.csv",na_values=['?'])
data_new.dropna(inplace=True)
predictions=data_new['Prediction']
data_new
features_raw = data_new[['Number of contour','Area','Perimeter']]
from sklearn.model_selection import train_test_split
predict_class = predictions.apply(lambda x: 0 if x == 0 else 1)
np.random.seed(1234)
X_train, X_test, y_train, y_test = train_test_split(features_raw, predict_class, train_size=0.80, random_state=1)
# Show the results of the split
#print ("Training set has {} samples.".format(X_train.shape[0]))
#print ("Testing set has {} samples.".format(X_test.shape[0]))
import sklearn
from sklearn import svm
C = 1.0
svc = svm.SVC(kernel='linear',C=C,gamma=2)
svc.fit(X_train, y_train)
from sklearn.metrics import fbeta_score
print (X_test)
predictions_test = svc.predict(X_test)
predictions_test
def imgReader(img):
data=pd.read_csv("features.csv")
data_new=pd.read_csv("features.csv",na_values=['?'])
data_new.dropna(inplace=True)
predictions=data_new['Prediction']
data_new
features_raw = data_new[['Number of contour','Area','Perimeter']]
from sklearn.model_selection import train_test_split
predict_class = predictions.apply(lambda x: 0 if x == 0 else 1)
np.random.seed(1234)
csvTitle = [['Number of contour','Area', 'Perimeter', 'Pred']]
csvData = []
#HSV
image = cv2.imread(img)
#while(1):
# Convert BGR to HSV
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
# define range of blue color in HSV
lower_blue = np.array([80,120,70])
upper_blue = np.array([305,300,220])
# Threshold the HSV image to get only blue colors
mask = cv2.inRange(hsv, lower_blue, upper_blue)
# Bitwise-AND mask and original image
res = cv2.bitwise_and(image,image, mask= mask)
#cv2.imshow('frame',frame)
cv2.imwrite("/home/mcaadmin/blueinfected/Im001_1.png",res)
#cv2.imshow('res',res)
#BINARY
image = cv2.imread("/home/mcaadmin/blueinfected/Im001_1.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = 20
    binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)[1]  # maxval must fit in 8-bit range
cv2.imwrite("/home/mcaadmin/binaryinfected/Im001_1.png",binary)
#feature
image, contours, hierarchy = cv2.findContours(binary, cv2.RETR_EXTERNAL,
cv2.CHAIN_APPROX_SIMPLE)
#img = cv2.drawContours(image, contours, -1, (0,0, 255), 1)
numOfContours = len(contours) #number of contours
print("numOfContours:",numOfContours)
area = []
perimeter=[]
count = 0
eccentricity=[]
for count in range(numOfContours) :
cv2.drawContours(binary, contours, -1, (20,255,60), 1) #draw contours
cnt = contours[count]
area.append(cv2.contourArea(cnt))
perim = cv2.arcLength(cnt,True)
perimeter.append(perim)
count+=1
a=max(area)
print("area:",a)
for i in range(numOfContours) :
if area[i]==a:
k=i
#X_train, X_test, y_train, y_test = train_test_split(features_raw, predict_class, train_size=0.80, random_state=1)
num=numOfContours
area=a
peri=perimeter[k]
d=pd.read_csv("new.csv")
d_new=pd.read_csv("new.csv",na_values=['?'])
d_new.dropna(inplace=True)
x=[num,area,peri]
X_train1=features_raw
y_train1=predict_class
X_test1=d_new[['Number of contour','Area','Perimeter']]
# Show the results of the split
print ("Training set has {} samples.".format(X_train1.shape[0]))
print ("Testing set has {} samples.".format(X_test1.shape[0]))
import sklearn
from sklearn import svm
C = 1.0
svc = svm.SVC(kernel='linear',C=C,gamma=2)
svc.fit(X_train1, y_train1)
from sklearn.metrics import fbeta_score
print (X_test1)
predictions_test1 = svc.predict(X_test1)
predictions_test1
csvData.append([num,area, peri, int(predictions_test1)])
with open('new.csv', 'w') as csvFile:
writer = csv.writer(csvFile)
writer.writerows(csvTitle)
writer.writerows(csvData)
if(predictions_test1[0]==1):
        print("leukemia detected***")
else:
print("Normal***")
#gui code
#imgname = "dataset/10_left.jpeg"
width = 430
height = 320
# Function definitions
def deleteImage(canvas):
canvas.delete("all")
return
def quitProgram():
gui.destroy()
# Main window
gui = Tk()
# setting the size of the window
canvas = Canvas(gui, width = 800, height =600)
canvas.pack()
# BROWSE BUTTON CODE
def browsefunc():
global imgname
imgname = filedialog.askopenfilename()
pathLabel = Label(gui, text=imgname, anchor=N )
pathLabel.pack()
# Inside the main gui window
# Creating an object containing an image
# A canvas with borders that adapt to the image within it
def loadImage():
width = 430
height = 320
img = Image.open(imgname)
im2 = img.resize((width, height), Image.NEAREST)
#width, height = img.size
filename = ImageTk.PhotoImage(im2)
canvas.image = filename # <--- keep reference of your image
canvas.create_image(200,0,anchor=N,image=filename)
canvas.pack()
#RUNNING PROGRAM
def run():
imgReader(imgname)
outData=pd.read_csv("new.csv")
count = int(outData['Number of contour'])
area = int(outData['Area'])
peri = int(outData['Perimeter'])
pred = int(outData['Pred'])
#countLabel = Label(gui, text = count, anchor = S)
#areaLabel = Label(gui, text = area, anchor = S)
#periLabel = Label(gui, text = peri, anchor = S)
if(pred):
outLabel = Label(gui, text = "Leukemia Detected", anchor = S)
else:
outLabel = Label(gui, text = "Normal", anchor = S)
#countLabel.pack()
#areaLabel.pack()
#periLabel.pack()
outLabel.pack()
# Menu bar
menubar = Menu(gui)
# Adding a cascade to the menu bar:
filemenu = Menu(menubar, tearoff=0)
menubar.add_cascade(label="Files", menu=filemenu)
# Adding an option to browse for an image
filemenu.add_command(label="browse", command=browsefunc)
# Adding a load image button to the cascade menu "File"
filemenu.add_command(label="Load an image", command=loadImage)
# Adding a running program to the cascade menu "File"
filemenu.add_command(label="run code", command=run)
filemenu.add_separator()
#Adding an option to quit the window
filemenu.add_command(label="Quit", command=quitProgram)
menubar.add_separator()
#menubar.add_cascade(label="?")
# Display the menu bar
gui.config(menu=menubar)
gui.title("LEUKEMIA DETECTION FROM IMAGE")
gui.mainloop()
# -
|
Untitled7.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import glob
import cv2
import os
import matplotlib.pyplot as plt
def load_images(image_folder):
imgs = dict()
for filename in glob.glob(os.path.join("data/input/", image_folder, "*.JPG")):
file = os.path.basename(filename)
imgs[file] = cv2.imread(filename)
return imgs
imgs = load_images("orebro1")
plt.imshow(imgs["DSC_0688.JPG"])
|
00-Data-Loader.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction to Python
# Purpose: To begin practicing and working with Python
# *Step 1: Hello world*
# print(Hello World)  # missing quotes: this line is a SyntaxError; the fixed version is below
print('Hello World')
# *Step 2: Print a User Friendly Greeting*
# +
#Input your name here after the equal sign and in quotations
user_name = 'Hawley'
# -
print('Hello,', user_name)
print('It is nice to meet you!')
# *Step 3: The area of a circle*
import numpy as np
# +
# Define your radius
r = 3
area = np.pi * r ** 2
print(area)
# +
# Define your radius
r = 3
area = np.pi * r ** 2
area
# -
# *Step 3b: Doing it in a function*
def find_area(r):
return np.pi*r**2
find_area(3)
# *Step 3c: Doing more with a function, faster!*
numberlist = list(range(0, 10))
numberlist
for numbers in numberlist:
    area = find_area(numbers)
print('Radius:', numbers, 'Area:', area)
# *Step 4: Intro to Plotting*
import matplotlib.pyplot as plt
# %matplotlib inline
fig, ax = plt.subplots()
ax.set(xlim=(-1, 1), ylim=(-1, 1))
a_circle = plt.Circle((0, 0), 0.5)
ax.add_artist(a_circle)
for numbers in numberlist:
fig, ax = plt.subplots()
ax.set(xlim=(-10, 10), ylim=(-10, 10))
a_circle = plt.Circle((0, 0), numbers/2)
ax.add_artist(a_circle)
# # Recap Activity:
# Are you feeling more comfortable with Python now?
# Why yes? or no?
|
intro-to-python/intro-to-python.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
from matplotlib import pyplot as plt
from CDDF_analysis.qso_loader_dr16q import QSOLoaderDR16Q
from CDDF_analysis.make_dr16q_plots import *
# -
# # Examples for QSOLoader, a plotting class
# Welcome to this example notebook! If you are interested in the DR16 DLA catalogue we have, but not in modifying the model (which is in MATLAB) or running it yourself, this is the right place!
#
# For people who prefer to look into the mathematics in the model, please refer to the README in the repo.
#
# And you can always find all of the plotting functions for the paper in CDDF_analysis/make_dr16q_plots.py
# - All of the data products of the GP DLA catalogue can be found at: [**download link**](https://drive.google.com/drive/folders/1uaaUVLpCdhSVC7rGFN_C5TuvQZLk2cyn?usp=sharing) (the same as the one in the README)
# - It is required to have DR9's DLA labels as prior:
#
# ```bash
# # in shell
# cd data/scripts
# ./download_catalogs.sh
# ```
#
# - DR16Q catalogue from SDSS is required. If you do not want to download it by yourselves, run the shell script:
#
# ```bash
# # in shell
# cd data/scripts
# ./download_catalogs_dr16q.sh
# ```
#
# ## Loading into the class
# - `preloaded_file` : a file for all of the raw spectra (filtered by the condition given in the paper).
# - `catalogue_file` : a processed MATLAB catalogue for sightlines in DR16Q
# - `processed_file` : the GP DLA catalogue in MATLAB format
# - `learned_file` : the learned GP null model
# - `dla_concordance`: a text file for the DR9 DLA concordance catalogue
# - `los_concordance`: a text file for the sightlines of the DR9 concordance catalogue
# - `snrs_file` : a file contains the SNRs of each spectrum
# - `sub_dla` : whether you have a subDLA model in the processed_file
# - `sample_file` : the Quasi-Monte Carlo samples used for computing the likelihood of the DLA model
# - `occams_razor` : a regularization factor to prevent the DLA model overfit the data. Fixed to 1/30
# - `dist_file` : the raw DR16Q catalogue from SDSS
qsos_dr16q = QSOLoaderDR16Q(
preloaded_file="data/dr16q/processed/preloaded_qsos.mat",
catalogue_file="data/dr16q/processed/catalog.mat",
processed_file="data/dr16q/processed/processed_qsos_multi_meanflux_dr16q_full_int_small.mat",
learned_file="data/dr16q/processed/learned_qso_model_lyseries_variance_wmu_boss_dr16q_minus_dr12q_gp_851-1421.mat",
dla_concordance="data/dla_catalogs/dr9q_concordance/processed/dla_catalog",
los_concordance="data/dla_catalogs/dr9q_concordance/processed/los_catalog",
snrs_file="data/dr16q/processed/snrs_qsos_multi_meanflux_dr16q.mat",
sub_dla=True,
sample_file="data/dr12q/processed/dla_samples_a03_30000.mat",
occams_razor=1 / 30, tau_sample_file="data/dr12q/processed/tau_0_samples_30000.mat",
dist_file="data/dr16q/distfiles/DR16Q_v4.fits")
# ## Plot the spectrum
# +
# plot the first spectrum in the DLA catalogue!
nspec = 0
qsos_dr16q.plot_this_mu(nspec)
plt.title(
'thingID = {}; z = {:.2g}, zPCA = {:.2g}, zPIPE = {:.2g}, zVI = {:.2g}; source_z = {}; IS_QSO_FINAL = {}; CLASS_PERSON = {} (30: BALQSO, 50: Blazar)'.format(
qsos_dr16q.thing_ids[nspec],
qsos_dr16q.z_qsos[nspec],
qsos_dr16q.hdu[1].data["Z_PCA"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["Z_PIPE"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["Z_VI"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["SOURCE_Z"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["IS_QSO_FINAL"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["CLASS_PERSON"][qsos_dr16q.test_real_index][nspec])
)
plt.ylim(-1, 5)
plt.tight_layout()
# -
# ## Plot spectrum with DLAs
#
# Let's say we want a DLA spectrum with high SNR and from zQSO > 3
all_nspecs = np.where( (qsos_dr16q.p_dlas > 0.90) & ( qsos_dr16q.snrs > 3) & ( qsos_dr16q.z_qsos > 3) )[0]
# +
# plot the first one from the selection criteria posed above
nspec = all_nspecs[0]
qsos_dr16q.plot_this_mu(nspec)
plt.title(
'thingID = {}; z = {:.2g}, zPCA = {:.2g}, zPIPE = {:.2g}, zVI = {:.2g}; source_z = {}; IS_QSO_FINAL = {}; CLASS_PERSON = {} (30: BALQSO, 50: Blazar)'.format(
qsos_dr16q.thing_ids[nspec],
qsos_dr16q.z_qsos[nspec],
qsos_dr16q.hdu[1].data["Z_PCA"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["Z_PIPE"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["Z_VI"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["SOURCE_Z"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["IS_QSO_FINAL"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["CLASS_PERSON"][qsos_dr16q.test_real_index][nspec])
)
plt.ylim(-1, 5)
plt.tight_layout()
# -
# Let's say we want the predictions only for those with logNHI > 20.3
all_nspecs = np.where( (qsos_dr16q.p_dlas > 0.90) & ( qsos_dr16q.snrs > 3) & ( qsos_dr16q.z_qsos > 3) & (qsos_dr16q.all_log_nhis.max(axis=1) >= 20.3))[0]
# +
# plot the first one from the selection criteria posed above
nspec = all_nspecs[0]
qsos_dr16q.plot_this_mu(nspec)
plt.title(
'thingID = {}; z = {:.2g}, zPCA = {:.2g}, zPIPE = {:.2g}, zVI = {:.2g}; source_z = {}; IS_QSO_FINAL = {}; CLASS_PERSON = {} (30: BALQSO, 50: Blazar)'.format(
qsos_dr16q.thing_ids[nspec],
qsos_dr16q.z_qsos[nspec],
qsos_dr16q.hdu[1].data["Z_PCA"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["Z_PIPE"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["Z_VI"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["SOURCE_Z"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["IS_QSO_FINAL"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["CLASS_PERSON"][qsos_dr16q.test_real_index][nspec])
)
plt.ylim(-1, 5)
plt.tight_layout()
# -
# ## Plot with the predictions with the CNN model from Parks(2018)
all_nspecs = np.where( (qsos_dr16q.p_dlas > 0.90) & ( qsos_dr16q.snrs > 3) & ( qsos_dr16q.z_qsos > 3) )[0]
# +
# plot the first one from the selection criteria posed above
nspec = all_nspecs[0]
qsos_dr16q.plot_this_mu(nspec, Parks=True)
plt.title(
'thingID = {}; z = {:.2g}, zPCA = {:.2g}, zPIPE = {:.2g}, zVI = {:.2g}; source_z = {}; IS_QSO_FINAL = {}; CLASS_PERSON = {} (30: BALQSO, 50: Blazar)'.format(
qsos_dr16q.thing_ids[nspec],
qsos_dr16q.z_qsos[nspec],
qsos_dr16q.hdu[1].data["Z_PCA"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["Z_PIPE"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["Z_VI"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["SOURCE_Z"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["IS_QSO_FINAL"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["CLASS_PERSON"][qsos_dr16q.test_real_index][nspec])
)
plt.ylim(-1, 5)
plt.tight_layout()
# -
# ## Plot the Conditional GP (continuum prediction based on metal emissions)
all_nspecs = np.where( (qsos_dr16q.p_dlas > 0.90) & ( qsos_dr16q.snrs > 3) & ( qsos_dr16q.z_qsos > 3) & (qsos_dr16q.all_log_nhis.max(axis=1) >= 20.3))[0]
# +
# plot the first one from the selection criteria posed above
nspec = all_nspecs[0]
qsos_dr16q.plot_this_mu(nspec, Parks=True, conditional_gp=True)
plt.title(
'thingID = {}; z = {:.2g}, zPCA = {:.2g}, zPIPE = {:.2g}, zVI = {:.2g}; source_z = {}; IS_QSO_FINAL = {}; CLASS_PERSON = {} (30: BALQSO, 50: Blazar)'.format(
qsos_dr16q.thing_ids[nspec],
qsos_dr16q.z_qsos[nspec],
qsos_dr16q.hdu[1].data["Z_PCA"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["Z_PIPE"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["Z_VI"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["SOURCE_Z"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["IS_QSO_FINAL"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["CLASS_PERSON"][qsos_dr16q.test_real_index][nspec])
)
plt.ylim(-1, 5)
plt.tight_layout()
# -
# ## Plot subDLA (logNHI = 19.5 ~ 20) candidates
all_nspecs = np.where( (qsos_dr16q.model_posteriors[:, 1] > 0.9999) & ( qsos_dr16q.snrs > 4) )[0]
# +
# plot the first one from the selection criteria posed above
nspec = all_nspecs[0]
qsos_dr16q.plot_this_mu(nspec)
plt.title(
'thingID = {}; z = {:.2g}, zPCA = {:.2g}, zPIPE = {:.2g}, zVI = {:.2g}; source_z = {}; IS_QSO_FINAL = {}; CLASS_PERSON = {} (30: BALQSO, 50: Blazar)'.format(
qsos_dr16q.thing_ids[nspec],
qsos_dr16q.z_qsos[nspec],
qsos_dr16q.hdu[1].data["Z_PCA"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["Z_PIPE"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["Z_VI"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["SOURCE_Z"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["IS_QSO_FINAL"][qsos_dr16q.test_real_index][nspec],
qsos_dr16q.hdu[1].data["CLASS_PERSON"][qsos_dr16q.test_real_index][nspec])
)
plt.ylim(-1, 5)
plt.tight_layout()
# -
# ## Plot the kernel function
# ref: CDDF_analysis.make_dr16q_plots.do_procedure_plots
# +
min_lambda = qsos_dr16q.GP.min_lambda
max_lambda = qsos_dr16q.GP.max_lambda
scale = np.shape(qsos_dr16q.GP.C)[0] / (max_lambda - min_lambda)
# some wavelengths for emission lines to plot
lyg_wavelength = 972
nv_wavelength = 1240.81
oi_wavelength = 1305.53
cii_wavelength = 1335.31
siv_wavelength = 1399.8
# +
# plotting covariance matrix
fig, ax = plt.subplots(figsize=(8, 8))
im = ax.imshow(qsos_dr16q.GP.C, origin="lower")
ax.set_xticks(
[
(lyman_limit - min_lambda) * scale,
(lyg_wavelength - min_lambda) * scale,
(lyb_wavelength - min_lambda) * scale,
(lya_wavelength - min_lambda) * scale,
(nv_wavelength - min_lambda) * scale,
(oi_wavelength - min_lambda) * scale,
(cii_wavelength - min_lambda) * scale,
(siv_wavelength - min_lambda) * scale,
],
)
ax.set_xticklabels(
[
r"Ly $\infty$",
r"Ly $\gamma$",
r"Ly $\beta$",
r"Ly $\alpha$",
r"NV",
r"OI",
r"CII",
r"SIV",
],
rotation=45,
)
ax.set_yticks(
[
(lyman_limit - min_lambda) * scale,
(lyg_wavelength - min_lambda) * scale,
(lyb_wavelength - min_lambda) * scale,
(lya_wavelength - min_lambda) * scale,
(nv_wavelength - min_lambda) * scale,
(oi_wavelength - min_lambda) * scale,
(cii_wavelength - min_lambda) * scale,
(siv_wavelength - min_lambda) * scale,
]
)
ax.set_yticklabels(
[
r"Ly $\infty$",
r"Ly $\gamma$",
r"Ly $\beta$",
r"Ly $\alpha$",
r"NV",
r"OI",
r"CII",
r"SIV",
]
)
fig.colorbar(im, ax=ax, fraction=0.046, pad=0.04)
plt.tight_layout()
# -
# ## Generate DLA JSON catalogue
dla_json = qsos_dr16q.generate_json_catalogue(outfile="predictions_DLA_DR16Q.json")
# the format looks like this!
dla_json[12]
|
notebooks/Examples for QSOLoader.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Deep Q-Network implementation
#
# This notebook shamelessly demands that you implement a DQN - an approximate q-learning algorithm with experience replay and target networks - and see if it works any better this way.
#XVFB will be launched if you run on a server
import os
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY"))==0:
# !bash ../xvfb start
# %env DISPLAY=:1
# __Frameworks__ - we'll accept this homework in any deep learning framework. This particular notebook was designed for tensorflow, but you will find it easy to adapt it to almost any python-based deep learning framework.
import gym
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
# ### Let's play some old videogames
# 
#
# This time we're gonna apply approximate q-learning to an atari game called Breakout. It's not the hardest thing out there, but it's definitely way more complex than anything we tried before.
#
# ### Processing game image
#
# Raw atari images are large, 210x160x3 by default. However, we don't need that level of detail in order to learn from them.
#
# We can thus save a lot of time by preprocessing game image, including
# * Resizing to a smaller shape, 64 x 64
# * Converting to grayscale
# * Cropping irrelevant image parts (top & bottom)
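# One possible sketch of that preprocessing, using plain NumPy so it stands alone (the crop margins and nearest-neighbour resize here are illustrative guesses - the exercise below asks you to write your own version with imresize, opencv, skimage, PIL, or keras):

```python
import numpy as np

def preprocess(img, crop_top=34, crop_bottom=16, out_hw=(64, 64)):
    """Toy preprocessing sketch: crop, grayscale, nearest-neighbour resize,
    scale to [0, 1] float32. The crop margins are guesses, not Breakout-exact."""
    img = img[crop_top:img.shape[0] - crop_bottom]   # drop score bar / bottom strip
    gray = img.mean(axis=-1)                         # RGB -> grayscale
    rows = np.linspace(0, gray.shape[0] - 1, out_hw[0]).astype(int)
    cols = np.linspace(0, gray.shape[1] - 1, out_hw[1]).astype(int)
    small = gray[np.ix_(rows, cols)]                 # nearest-neighbour index sampling
    return (small / 255.0).astype(np.float32)[None]  # shape (1, 64, 64)

frame = np.random.randint(0, 256, size=(210, 160, 3), dtype=np.uint8)
obs = preprocess(frame)
print(obs.shape, obs.dtype)
```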
# +
from gym.core import ObservationWrapper
from gym.spaces import Box
from scipy.misc import imresize
class PreprocessAtari(ObservationWrapper):
def __init__(self, env):
"""A gym wrapper that crops, scales image into the desired shapes and optionally grayscales it."""
ObservationWrapper.__init__(self,env)
self.img_size = (1, 64, 64)
self.observation_space = Box(0.0, 1.0, self.img_size)
def _observation(self, img):
"""what happens to each observation"""
# Here's what you need to do:
# * crop image, remove irrelevant parts
# * resize image to self.img_size
# (use imresize imported above or any library you want,
# e.g. opencv, skimage, PIL, keras)
# * cast image to grayscale
# * convert image pixels to (0,1) range, float32 type
<Your code here>
return <...>
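One possible way to fill in `_observation`, sketched as a standalone function using only numpy. The crop bounds here are a Breakout-specific guess, and the nearest-neighbour resize is just one option (`imresize`, opencv, skimage, or PIL work just as well):

```python
import numpy as np

def preprocess_frame(img, out_hw=(64, 64), crop_rows=(34, 194)):
    """Crop, grayscale, resize (nearest neighbour) and scale an RGB frame to [0, 1]."""
    img = img[crop_rows[0]:crop_rows[1], :, :]     # drop score bar and bottom border (a guess)
    gray = img.mean(axis=-1)                       # grayscale by averaging channels
    h, w = gray.shape
    rows = np.arange(out_hw[0]) * h // out_hw[0]   # nearest-neighbour row indices
    cols = np.arange(out_hw[1]) * w // out_hw[1]
    small = gray[rows][:, cols]
    return (small / 255.0).astype('float32')[None, :, :]   # add channel axis -> (1, 64, 64)
```

Whatever implementation you choose, the formal tests below (shape, dtype, value range) should pass on its output.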
# +
import gym
#spawn game instance for tests
env = gym.make("BreakoutDeterministic-v0") #create raw env
env = PreprocessAtari(env)
observation_shape = env.observation_space.shape
n_actions = env.action_space.n
env.reset()
obs, _, _, _ = env.step(env.action_space.sample())
#test observation
assert obs.ndim == 3, "observation must be [batch, time, channels] even if there's just one channel"
assert obs.shape == observation_shape
assert obs.dtype == 'float32'
assert len(np.unique(obs))>2, "your image must not be binary"
assert 0 <= np.min(obs) and np.max(obs) <=1, "convert image pixels to (0,1) range"
print("Formal tests seem fine. Here's an example of what you'll get.")
plt.title("what your network gonna see")
plt.imshow(obs[0, :, :],interpolation='none',cmap='gray');
# -
# ### Frame buffer
#
# Our agent can only process one observation at a time, so we gotta make sure it contains enough information to find optimal actions. For instance, the agent has to react to moving objects, so it must be able to measure an object's velocity.
#
# To do so, we introduce a buffer that stores 4 last images. This time everything is pre-implemented for you.
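The idea behind the pre-implemented `FrameBuffer` can be sketched in a few lines: keep a deque of the last `n` frames and concatenate them along the channel axis. This is a simplified stand-in for illustration, not the actual `framebuffer` module:

```python
import numpy as np
from collections import deque

class FrameStackSketch:
    """Keep the n most recent frames; observations are their channel-wise concatenation."""
    def __init__(self, n_frames=4, frame_shape=(1, 64, 64)):
        # start with all-zero frames so the observation shape is fixed from step one
        self.frames = deque([np.zeros(frame_shape, 'float32')] * n_frames, maxlen=n_frames)

    def push(self, frame):
        self.frames.append(frame.astype('float32'))
        return np.concatenate(list(self.frames), axis=0)   # shape (n_frames, h, w)
```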
# +
from framebuffer import FrameBuffer
def make_env():
env = gym.make("BreakoutDeterministic-v4")
env = PreprocessAtari(env)
env = FrameBuffer(env, n_frames=4, dim_order='pytorch')
return env
env = make_env()
env.reset()
n_actions = env.action_space.n
state_dim = env.observation_space.shape
# +
for _ in range(50):
obs, _, _, _ = env.step(env.action_space.sample())
plt.title("Game image")
plt.imshow(env.render("rgb_array"))
plt.show()
plt.title("Agent observation (4 frames top to bottom)")
plt.imshow(obs.reshape([-1, state_dim[2]]));
# -
# ### Building a network
#
# We now need to build a neural network that can map images to state q-values. This network will be called on every agent's step so it better not be resnet-152 unless you have an array of GPUs. Instead, you can use strided convolutions with a small number of features to save time and memory.
#
# You can build any architecture you want, but for reference, here's something that will more or less work:
# 
import torch, torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
class DQNAgent(nn.Module):
def __init__(self, state_shape, n_actions, epsilon=0):
"""A simple DQN agent"""
nn.Module.__init__(self)
self.epsilon = epsilon
self.n_actions = n_actions
img_c, img_w, img_h = state_shape
# Define your network body here. Please make sure agent is fully contained here
<YOUR CODE>
def forward(self, state_t):
"""
takes agent's observation (Variable), returns qvalues (Variable)
:param state_t: a batch of 4-frame buffers, shape = [batch_size, 4, h, w]
Hint: if you're running on GPU, use state_t.cuda() right here.
"""
# Use your network to compute qvalues for given state
qvalues = <YOUR CODE>
assert isinstance(qvalues, Variable) and qvalues.requires_grad, "qvalues must be a torch variable with grad"
assert len(qvalues.shape) == 2 and qvalues.shape[0] == state_t.shape[0] and qvalues.shape[1] == n_actions
return qvalues
def get_qvalues(self, states):
"""
like forward, but works on numpy arrays, not Variables
"""
states = Variable(torch.FloatTensor(np.asarray(states)))
qvalues = self.forward(states)
return qvalues.data.cpu().numpy()
def sample_actions(self, qvalues):
"""pick actions given qvalues. Uses epsilon-greedy exploration strategy. """
epsilon = self.epsilon
batch_size, n_actions = qvalues.shape
random_actions = np.random.choice(n_actions, size=batch_size)
best_actions = qvalues.argmax(axis=-1)
should_explore = np.random.choice([0, 1], batch_size, p = [1-epsilon, epsilon])
return np.where(should_explore, random_actions, best_actions)
agent = DQNAgent(state_dim, n_actions, epsilon=0.5)
# Now let's try out our agent to see if it raises any errors.
def evaluate(env, agent, n_games=1, greedy=False, t_max=10000):
""" Plays n_games full games. If greedy, picks actions as argmax(qvalues). Returns mean reward. """
rewards = []
for _ in range(n_games):
s = env.reset()
reward = 0
for _ in range(t_max):
qvalues = agent.get_qvalues([s])
action = qvalues.argmax(axis=-1)[0] if greedy else agent.sample_actions(qvalues)[0]
s, r, done, _ = env.step(action)
reward += r
if done: break
rewards.append(reward)
return np.mean(rewards)
evaluate(env, agent, n_games=1)
# ### Experience replay
# For this assignment, we provide you with an experience replay buffer. If you implemented an experience replay buffer in last week's assignment, you can copy-paste it here __to get 2 bonus points__.
#
# 
# #### The interface is fairly simple:
# * `exp_replay.add(obs, act, rw, next_obs, done)` - saves (s,a,r,s',done) tuple into the buffer
# * `exp_replay.sample(batch_size)` - returns observations, actions, rewards, next_observations and is_done for `batch_size` random samples.
# * `len(exp_replay)` - returns number of elements stored in replay buffer.
# +
from replay_buffer import ReplayBuffer
exp_replay = ReplayBuffer(10)
for _ in range(30):
exp_replay.add(env.reset(), env.action_space.sample(), 1.0, env.reset(), done=False)
obs_batch, act_batch, reward_batch, next_obs_batch, is_done_batch = exp_replay.sample(5)
assert len(exp_replay) == 10, "experience replay size should be 10 because that's what maximum capacity is"
# +
def play_and_record(agent, env, exp_replay, n_steps=1):
"""
Play the game for exactly n steps, record every (s,a,r,s', done) to replay buffer.
Whenever game ends, add record with done=True and reset the game.
It is guaranteed that env has done=False when passed to this function.
PLEASE DO NOT RESET ENV UNLESS IT IS "DONE"
:returns: return sum of rewards over time
"""
# initial state
s = env.framebuffer
# Play the game for n_steps as per instructions above
<YOUR CODE>
return <mean rewards>
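One way to structure the loop above, sketched against a generic interface — `agent_act`, `env`, and `buffer_add` are stand-ins for the real agent, environment, and replay buffer:

```python
def play_and_record_sketch(agent_act, env, buffer_add, s, n_steps=100):
    """Play exactly n_steps, store every (s, a, r, s', done) tuple, reset only when done.
    Returns (total reward collected, state to resume from)."""
    total_reward = 0.0
    for _ in range(n_steps):
        a = agent_act(s)
        s_next, r, done, _ = env.step(a)
        buffer_add(s, a, r, s_next, done)
        total_reward += r
        s = env.reset() if done else s_next   # restart the game only when it is done
    return total_reward, s
```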
# +
# testing your code. This may take a minute...
exp_replay = ReplayBuffer(20000)
play_and_record(agent, env, exp_replay, n_steps=10000)
# if you're using your own experience replay buffer, some of those tests may need correction.
# just make sure you know what your code does
assert len(exp_replay) == 10000, "play_and_record should have added exactly 10000 steps, "\
"but instead added %i" % len(exp_replay)
is_dones = list(zip(*exp_replay._storage))[-1]
assert 0 < np.mean(is_dones) < 0.1, "Please make sure you restart the game whenever it is 'done' and record the is_done correctly into the buffer."\
"Got %f is_done rate over %i steps. [If you think it's your tough luck, just re-run the test]"%(np.mean(is_dones), len(exp_replay))
for _ in range(100):
obs_batch, act_batch, reward_batch, next_obs_batch, is_done_batch = exp_replay.sample(10)
assert obs_batch.shape == next_obs_batch.shape == (10,) + state_dim
assert act_batch.shape == (10,), "actions batch should have shape (10,) but is instead %s"%str(act_batch.shape)
assert reward_batch.shape == (10,), "rewards batch should have shape (10,) but is instead %s"%str(reward_batch.shape)
assert is_done_batch.shape == (10,), "is_done batch should have shape (10,) but is instead %s"%str(is_done_batch.shape)
assert all(int(i) in (0, 1) for i in is_dones), "is_done should be strictly True or False"
assert all(0 <= a < n_actions for a in act_batch), "actions should be within [0, n_actions)"
print("Well done!")
# -
# ### Target networks
#
# We also employ the so called "target network" - a copy of neural network weights to be used for reference Q-values:
#
# The network itself is an exact copy of the agent network, but its parameters are not trained. Instead, they are copied over from the agent's actual network every so often.
#
# $$ Q_{reference}(s,a) = r + \gamma \cdot \max _{a'} Q_{target}(s',a') $$
#
# 
#
#
target_network = DQNAgent(state_dim, n_actions)
# This is how you can load weights from agent into target network
target_network.load_state_dict(agent.state_dict())
# ### Learning with... Q-learning
# Here we write a function similar to `agent.update` from tabular q-learning.
# Compute Q-learning TD error:
#
# $$ L = { 1 \over N} \sum_i [ Q_{\theta}(s,a) - Q_{reference}(s,a) ] ^2 $$
#
# With Q-reference defined as
#
# $$ Q_{reference}(s,a) = r(s,a) + \gamma \cdot max_{a'} Q_{target}(s', a') $$
#
# Where
# * $Q_{target}(s',a')$ denotes q-value of next state and next action predicted by __target_network__
# * $s, a, r, s'$ are current state, action, reward and next state respectively
# * $\gamma$ is a discount factor defined two cells above.
#
#
# __Note 1:__ there's an example input below. Feel free to experiment with it before you write the function.
# __Note 2:__ compute_td_loss is a source of 99% of bugs in this homework. If reward doesn't improve, it often helps to go through it line by line [with a rubber duck](https://rubberduckdebugging.com/).
# +
def compute_td_loss(states, actions, rewards, next_states, is_done, gamma = 0.99, check_shapes = False):
""" Compute td loss using torch operations only. Use the formula above. """
states = Variable(torch.FloatTensor(states)) # shape: [batch_size, c, h, w]
actions = Variable(torch.LongTensor(actions)) # shape: [batch_size]
rewards = Variable(torch.FloatTensor(rewards)) # shape: [batch_size]
next_states = Variable(torch.FloatTensor(next_states)) # shape: [batch_size, c, h, w]
is_done = Variable(torch.FloatTensor(is_done.astype('float32'))) # shape: [batch_size]
is_not_done = 1 - is_done
#get q-values for all actions in current states
predicted_qvalues = agent(states)
# compute q-values for all actions in next states
predicted_next_qvalues = target_network(next_states)
#select q-values for chosen actions
predicted_qvalues_for_actions = predicted_qvalues[range(len(actions)), actions]
# compute V*(next_states) using predicted next q-values
next_state_values = < YOUR CODE >
assert next_state_values.dim() == 1 and next_state_values.shape[0] == states.shape[0], "must predict one value per state"
# compute "target q-values" for loss - it's what's inside square parentheses in the above formula.
# at the last state use the simplified formula: Q(s,a) = r(s,a) since s' doesn't exist
# you can multiply next state values by is_not_done to achieve this.
target_qvalues_for_actions = <YOUR CODE>
#mean squared error loss to minimize
loss = torch.mean((predicted_qvalues_for_actions - target_qvalues_for_actions.detach()) ** 2 )
if check_shapes:
assert predicted_next_qvalues.data.dim() == 2, "make sure you predicted q-values for all actions in next state"
assert next_state_values.data.dim() == 1, "make sure you computed V(s') as maximum over just the actions axis and not all axes"
assert target_qvalues_for_actions.data.dim() == 1, "there's something wrong with target q-values, they must be a vector"
return loss
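The two placeholders inside `compute_td_loss` boil down to the formula above. Here it is in numpy form, as a sketch you can check your torch version against:

```python
import numpy as np

def td_targets(rewards, next_qvalues, is_done, gamma=0.99):
    """Q_reference(s, a) = r + gamma * max_a' Q_target(s', a'), zeroed at terminal states."""
    next_state_values = next_qvalues.max(axis=-1)            # V*(s')
    return rewards + gamma * next_state_values * (1.0 - is_done)
```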
# +
# sanity checks
obs_batch, act_batch, reward_batch, next_obs_batch, is_done_batch = exp_replay.sample(10)
loss = compute_td_loss(obs_batch, act_batch, reward_batch, next_obs_batch, is_done_batch, gamma=0.99,
check_shapes=True)
loss.backward()
assert isinstance(loss, Variable) and tuple(loss.data.size()) == (1,), "you must return scalar loss - mean over batch"
assert np.any(next(agent.parameters()).grad.data.numpy() != 0), "loss must be differentiable w.r.t. network weights"
# -
# ### Main loop
#
# It's time to put everything together and see if it learns anything.
# +
from tqdm import trange
from IPython.display import clear_output
import matplotlib.pyplot as plt
from pandas import DataFrame
moving_average = lambda x, **kw: DataFrame({'x':np.asarray(x)}).x.ewm(**kw).mean().values
# %matplotlib inline
mean_rw_history = []
td_loss_history = []
# -
exp_replay = ReplayBuffer(10**5)
play_and_record(agent, env, exp_replay, n_steps=10000);
opt = < your favorite optimizer. Default to adam if you don't have one >
# +
for i in trange(10**5):
# play
play_and_record(agent, env, exp_replay, 10)
# train
< sample data from experience replay>
loss = < compute TD loss >
< minimize loss by gradient descent >
td_loss_history.append(loss.data.cpu().numpy()[0])
# adjust agent parameters
if i % 500 == 0:
agent.epsilon = max(agent.epsilon * 0.99, 0.01)
mean_rw_history.append(evaluate(make_env(), agent, n_games=3))
#Load agent weights into target_network
<YOUR CODE>
if i % 100 == 0:
clear_output(True)
print("buffer size = %i, epsilon = %.5f" % (len(exp_replay), agent.epsilon))
plt.figure(figsize=[12, 4])
plt.subplot(1,2,1)
plt.title("mean reward per game")
plt.plot(mean_rw_history)
plt.grid()
assert not np.isnan(td_loss_history[-1])
plt.subplot(1,2,2)
plt.title("TD loss history (moving average)")
plt.plot(moving_average(np.array(td_loss_history), span=100, min_periods=100))
plt.grid()
plt.show()
# -
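The `< sample / compute / minimize >` placeholders in the loop follow the standard torch pattern; here is a minimal sketch, with `opt`, `compute_loss`, and `batch` as stand-ins for your optimizer, `compute_td_loss`, and a sampled replay batch:

```python
import torch

def train_step(opt, compute_loss, batch):
    """One gradient step: zero grads, compute the loss on the batch, backprop, update."""
    opt.zero_grad()
    loss = compute_loss(*batch)
    loss.backward()
    opt.step()
    return float(loss.detach())
```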
assert np.mean(mean_rw_history[-10:]) > 10.
print("That's good enough for tutorial.")
# __ How to interpret plots: __
#
#
# This ain't supervised learning, so don't expect anything to improve monotonically.
# * __ TD loss __ is the MSE between agent's current Q-values and target Q-values. It may slowly increase or decrease, that's ok. The "not ok" behavior includes going NaN or staying at exactly zero before the agent has perfect performance.
# * __ mean reward__ is the expected sum of r(s,a) the agent gets over the full game session. It will oscillate, but on average it should get higher over time (after a few thousand iterations...).
# * In a basic q-learning implementation it takes 5-10k steps to "warm up" the agent before it starts to get better.
# * __ buffer size__ - this one is simple. It should go up and cap at max size.
# * __ epsilon__ - agent's willingness to explore. If you see that the agent's already at 0.01 epsilon before its average reward is above 0 - __ it means you need to increase epsilon__. Set it back to some 0.2 - 0.5 and decrease the pace at which it goes down.
# * Also please ignore first 100-200 steps of each plot - they're just oscillations because of the way moving average works.
#
# At first your agent will lose quickly. Then it will learn to suck less and at least hit the ball a few times before it loses. Finally it will learn to actually score points.
#
# __Training will take time.__ A lot of it actually. An optimistic estimate is to say it's gonna start winning (average reward > 10) after 20k steps.
#
# But hey, long training time isn't _that_ bad:
# 
#
# ### Video
agent.epsilon=0 # Don't forget to reset epsilon back to previous value if you want to go on training
#record sessions
import gym.wrappers
env_monitor = gym.wrappers.Monitor(make_env(),directory="videos",force=True)
sessions = [evaluate(env_monitor, agent, n_games=1) for _ in range(100)]
env_monitor.close()
# +
#show video
from IPython.display import HTML
import os
video_names = list(filter(lambda s:s.endswith(".mp4"),os.listdir("./videos/")))
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format("./videos/"+video_names[-1])) #this may or may not be _last_ video. Try other indices
# -
# ## Assignment part I (5 pts)
#
# We'll start by implementing a target network to stabilize training.
#
# To do that you can use PyTorch's `load_state_dict`, as shown above.
#
# We recommend thoroughly debugging your code on simple tests before applying it in atari dqn.
# ## Bonus I (2+ pts)
#
# Implement and train double q-learning.
#
# This task consists of
# * Implementing __double q-learning__ or __dueling q-learning__ or both (see tips below)
# * Training a network till convergence
# * Full points will be awarded if your network gets average score of >=10 (see "evaluating results")
# * Higher score = more points as usual
# * If you're running out of time, it's okay to submit a solution that hasn't converged yet and updating it when it converges. _Lateness penalty will not increase for second submission_, so submitting first one in time gets you no penalty.
#
#
# #### Tips:
# * Implementing __double q-learning__ shouldn't be a problem if you already have target networks in place.
# * You will probably need `torch.argmax` to select best actions
# * Here's an original [article](https://arxiv.org/abs/1509.06461)
#
# * __Dueling__ architecture is also quite straightforward if you have standard DQN.
# * You will need to change network architecture, namely the q-values layer
# * It must now contain two heads: V(s) and A(s,a), both dense layers
# * You should then add them up via elemwise sum layer.
# * Here's an [article](https://arxiv.org/pdf/1511.06581.pdf)
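The core of double q-learning fits in one function. Here is a numpy sketch: the online network picks the action, the target network evaluates it:

```python
import numpy as np

def double_q_targets(rewards, next_q_online, next_q_target, is_done, gamma=0.99):
    """r + gamma * Q_target(s', argmax_a' Q_online(s', a')), zeroed at terminal states."""
    best_actions = next_q_online.argmax(axis=-1)
    next_values = next_q_target[np.arange(len(best_actions)), best_actions]
    return rewards + gamma * next_values * (1.0 - is_done)
```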
# ## Bonus II (5+ pts): Prioritized experience replay
#
# In this section, you're invited to implement prioritized experience replay
#
# * You will probably need to provide a custom data structure
# * Whenever new transitions are collected, store their observations, actions, rewards and done flags in your data structure
# * You can now sample such transitions in proportion to the error (see [article](https://arxiv.org/abs/1511.05952)) for training.
#
# It's probably more convenient to explicitly declare inputs for "sample observations", "sample actions" and so on to plug them into q-learning.
#
# Prioritized (and even normal) experience replay should greatly reduce the number of game sessions you need to play in order to achieve good performance.
#
# While its effect on runtime is limited for atari, more complicated envs (further in the course) will certainly benefit from it.
#
# Prioritized experience replay only supports off-policy algorithms, so please enforce `n_steps=1` in your q-learning reference computation (default is 10).
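A minimal custom data structure for proportional prioritization might look like this — a sketch without a sum-tree, so sampling is O(n), which is fine for prototyping but slow at large capacities:

```python
import numpy as np

class ProportionalReplaySketch:
    """Minimal proportional prioritized replay:
    transition i is drawn with probability p_i^alpha / sum_j p_j^alpha."""
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.storage, self.priorities = [], []

    def add(self, transition, priority=1.0):
        if len(self.storage) >= self.capacity:   # drop the oldest transition
            self.storage.pop(0)
            self.priorities.pop(0)
        self.storage.append(transition)
        self.priorities.append(float(priority))

    def sample(self, batch_size):
        probs = np.asarray(self.priorities) ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.storage), size=batch_size, p=probs)
        return [self.storage[i] for i in idx], idx

    def update_priorities(self, idx, new_priorities):
        for i, p in zip(idx, new_priorities):    # e.g. p = |TD error| + eps
            self.priorities[i] = float(p)
```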
|
week4_approx_rl/homework_pytorch.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
import os
from collections import defaultdict
# %matplotlib inline
# # 150 Evals
# +
best_results = {'25evals': defaultdict(list),
'50evals': defaultdict(list),
'final': defaultdict(list)}
best_results_test = {'25evals': defaultdict(list),
'50evals': defaultdict(list),
'final': defaultdict(list)}
method_map = {'nei':'BoTorch-qNEI',
'gs': 'GS-3,6,9',
'gs_sampling': 'GS-3,6,9-s',
'ei': 'BO-1',
'lcb': 'GPyOpt-LCB',
'lcb_logregret': 'GPyOpt-LCB(log(Regret))',
'rs':'RS-1',
'rs3': 'Random Search x3',
'rs5': 'Random Search x5',
'ei3': 'GPyOpt-EI x3',
'ei5': 'GPyOpt-EI x5',
'nei_logregret':'BoTorch-qNEI(log(Regret))',
'asha':'ASHA'}
skip_results = ["gs-20201006", "gs-20201007"]
evals_used = defaultdict(list)
for fname in os.listdir("results/"):
if any(fname.startswith(n) for n in skip_results):
continue
if fname.endswith(".csv"):
method_name = method_map[fname.split("-")[0]]
if fname.split("-")[0] == 'nei_logregret':
continue
evals = fname.split("_")[-1][:-4]
df = pd.read_csv(f"results/{fname}")
if 'evals' in df.columns:
evals_used[method_name].append(df['evals'].mean())
best_results[evals][method_name].append(np.mean(df['mean_reward']))
# -
np.mean(evals_used['GS-3,6,9'])
np.mean(evals_used['GS-3,6,9-s'])
validdf = pd.concat([pd.DataFrame({'Method': [k]*len(v), 'EPE':v}) for k, v in best_results['final'].items()]).sort_values(by='Method')
# +
sns.set()
fig, ax = plt.subplots(ncols=2, figsize=(12, 6))
sns.boxplot(data=validdf,
x='Method',
y='EPE',
ax=ax[0],
showmeans=True,
meanprops={"marker":"*","markerfacecolor":"white", "markeredgecolor":"black"})
ax[0].set_title("Cartpole PPO")
ax[0].set_xlabel("")
ax[0].set_ylabel("Reward")
# ax[0].set_ylim([7, 20])
plt.sca(ax[0])
plt.xticks(rotation=90)
# -
validdf.to_csv('../csv-results/cartpole-valid.csv')
validdf.groupby('Method').agg(['count', 'mean', 'median', 'var']).sort_values(by=('EPE', 'mean'), ascending=True)
meandf = pd.DataFrame({'25': df25.groupby('method').agg(['mean'])['value']['mean'],
'50': df50.groupby('method').agg(['mean'])['value']['mean'],
'100': finaldf.groupby('method').agg(['mean'])['value']['mean']})
meandf = meandf.reindex(['25', '50', '100'], axis=1)
print(meandf.to_latex(float_format=lambda x: '%.1f' % x))
|
distribution-of-outcomes/cartpole/eval.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
train = pd.read_csv('sign_mnist_train.csv')
test = pd.read_csv('sign_mnist_test.csv')
train.head()
train.shape
labels = train['label'].values
train.drop('label', axis = 1,inplace=True)
labels.shape
from sklearn.preprocessing import LabelBinarizer
label_binrizer = LabelBinarizer()
labels = label_binrizer.fit_transform(labels)
labels
labels.shape
images = train.values
images = np.array([np.reshape(i, (28, 28)) for i in images])
images.shape
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(images, labels, test_size = 0.3, random_state = 0)
import keras
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPooling2D, Flatten, Dropout
batch_size = 128
num_classes = 24
epochs = 50
x_train = x_train /255
x_train.shape
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)
x_train.shape
x_test = x_test / 255
x_test = x_test.reshape(x_test.shape[0], 28, 28, 1)
model = Sequential()
model.add(Conv2D(64, kernel_size=(3,3), activation = 'relu', input_shape=(28, 28 ,1) ))
model.add(MaxPooling2D(pool_size = (2, 2)))
model.add(Conv2D(64, kernel_size = (3, 3), activation = 'relu'))
model.add(MaxPooling2D(pool_size = (2, 2)))
model.add(Conv2D(64, kernel_size = (3, 3), activation = 'relu'))
model.add(MaxPooling2D(pool_size = (2, 2)))
model.add(Flatten())
model.add(Dense(128, activation = 'relu'))
model.add(Dropout(0.20))
model.add(Dense(num_classes, activation = 'softmax'))
model.compile(loss = keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adam(),
metrics=['accuracy'])
history = model.fit(x_train, y_train, validation_data = (x_test, y_test), epochs=epochs, batch_size=batch_size)
model.save('CNN_1.h5')
test_labels = test['label']
test.drop('label', axis = 1, inplace = True)
test_images = test.values
test_images = np.array([np.reshape(i, (28, 28)) for i in test_images])
test_images = np.array([i.flatten() for i in test_images])
test_labels = label_binrizer.fit_transform(test_labels)
test_images = test_images.reshape(test_images.shape[0], 28, 28, 1)
test_images.shape
y_pred = model.predict(test_images)
from sklearn.metrics import accuracy_score
accuracy_score(test_labels, y_pred.round())
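To turn one-hot predictions back into the original class labels, `LabelBinarizer.inverse_transform` can be used. A small self-contained illustration — the labels here are made up, not the sign-language classes:

```python
import numpy as np
from sklearn.preprocessing import LabelBinarizer

lb = LabelBinarizer()
onehot = lb.fit_transform([3, 7, 9, 3])   # rows are one-hot over the sorted classes [3, 7, 9]
decoded = lb.inverse_transform(onehot)    # back to the original labels
```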
|
sign_language_recognisation_ASL.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Global Land Model
# Diagnostics
# Steps to create the diagnostics table for GLM runs
# +
import process_GLM
if __name__ == '__main__':
process_GLM.do_diagnostics_table()
# +
import os

import process_GLM
# NOTE: `constants`, `logger` and `glm` below are assumed to be provided by the surrounding project.
for exp in constants.glm_experiments:
logger.info('do_diagnostic_plots_mgt: ' + exp)
obj = glm(name_exp=exp,
path_input=constants.path_glm_input + os.sep + exp,
time_period='past',
out_path=constants.path_glm_output + os.sep + exp + os.sep + 'management')
# Plot maps of management: biofuels/wood/harv
obj.make_glm_mgt_maps(2015 - obj.start_yr)
# -
|
LUH2/GLM/GLM.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: igsn-minimal
# language: python
# name: igsn-minimal
# ---
# ### Load up test samples
#
# Let's load up some live examples from SESAR
# +
import requests
from tqdm import tqdm
USER_CODES = {'ODP', 'IEUHM', 'IELCZ', 'NHB', 'HRV'}
def get_igsns(limit=10, pagesize=100, user_code='ODP'):
"""
Get me some IGSNs from SESAR.
Hits the endpoint to discover the list of IGSNs for a given user code (this is reaaaaaly slow....)
Parameters:
limit - the total number of IGSN samples to get
pagesize - the pagesize to generate these at. SESAR's server is reaaaaallly slow so
make this a bigger number if you can.
user_code - a valid user code - see USER_CODES for valid values
"""
# check user code
user_code = user_code or 'ODP'
if user_code not in USER_CODES:
raise ValueError(f'Unknown user code {user_code}, valid values are {USER_CODES}')
# Paginate over results and accumulate values
igsns, page, length = [], 1, 0
with tqdm(desc=f'Getting IGSNs...', total=limit) as pbar:
while length < limit:
response = requests.get(
f'https://app.geosamples.org/samples/user_code/{user_code}',
{'page_no': page, 'limit': min(limit - length, pagesize)},
headers={'Accept': 'application/json'}
)
if response.ok:
new_igsns = response.json()['igsn_list']
igsns.extend(new_igsns)
pbar.update(len(new_igsns))
length += len(new_igsns)
page += 1
else:
break
return igsns
# -
igsns = get_igsns(user_code='HRV', pagesize=500, limit=2500)
igsns[:2]
len(igsns)
# Now we can resolve the IGSNs to get the actual data for them
# +
import json

import pandas as pd

def resolve(*igsns, host=None, accept=None):
"""
Resolve some IGSNs and get their data
Parameters:
*igsns - a list of IGSN URIs to resolve
host - the host to use for resolution (e.g. igsn.org). If None, defaults to
using handle.net (hdl.handle.net/20273)
accept - the format to accept. If None defaults to application/json.
Returns:
if accept == application/json, then this will return Python objects (using
requests' `.json()` method), otherwise a list of text strings containing the
content of the responses.
"""
# If no host, default to handle.net
host = host or 'hdl.handle.net/10273'
# Loop through and load IGSNs
data = []
for igsn in tqdm(igsns, total=len(igsns), ncols=50):
response = requests.get(
f'http://{host}/{igsn}',
headers={"Accept": accept or 'application/json'}
)
if response.ok:
if accept == 'application/json':
data.append(response.json())
else:
data.append(response.text)
else:
print(f'Failed to get {igsn} (HTTP status {response.status_code}), skipping')
return data
def igsn_to_dataframe(igsn_jsons):
"Convert IGSN data to a dataframe"
return pd.DataFrame.from_records([
content['sample'] for content in igsn_jsons
])
# -
# Let's download some IGSN data from SESAR and do something with it
# +
## Uncomment to download data
# xml_data = resolve(*igsns, accept='text/xml')
json_data = resolve(*igsns, accept='application/json')
# Dump to file for later
with open('../data/sesar_igsns.json', 'w') as sink:
json.dump(json_data, sink)
# with open('../data/sesar_igsns.xml', 'w') as sink:
# sink.writelines(xml_data)
# with open('../data/sesar_igsns.xml', 'r') as src:
# xml_data = [
# l for l in src.readlines()
# ]
# with open('../data/sesar_igsns.json', 'r') as src:
# json_data = json.load(src)
df = igsn_to_dataframe(json_data)
# -
igsn = igsns[0]
igsn
# Resolving an IGSN via igsn.org doesn't work with content negotiation
# +
import requests
igsn = "HRV003L17"
host = "igsn.org"
resp = requests.get(
f'http://{host}/{igsn}',
headers={"Accept": "application/json"}
)
# -
print(resp.request.headers)
print(resp.content)
# In contrast going directly through handle works ok
# +
host = "hdl.handle.net/10273"
requests.get(
f'http://{host}/{igsn}',
headers={"Accept": "application/json"}
).text
# -
# Let's generate some data with our schema
# +
import json
schema_file = '../data/schema.igsn.org/json/v0.0.1/registry.json'
with open(schema_file, 'r', encoding='utf-8') as src:
schema = json.load(src)
pjson(schema)
# -
from faker import Faker
# Schemas are key, value pairs. We care about the `type`
faker = Faker()
faker.providers  # inspect the registered providers
def generate_fakes(schema, faker=None):
    """
    Creates a data generator based on a JSONSchema instance
    """
    faker = faker or Faker()
    while True:
        result = {}
        for key, value in schema.get('properties', {}).items():
            # crude mapping from JSONSchema types to fake values; extend as needed
            if value.get('type') == 'integer':
                result[key] = faker.pyint()
            elif value.get('type') == 'number':
                result[key] = faker.pyfloat()
            else:
                result[key] = faker.word()
        yield result
fakes = generate_fakes(schema)
next(fakes)
# Now let's use these against our schema to see how well they work
#
# ### Validating IGSN metadata against our JSONSchema
#
# This is some testing against the hand-rolled JSONSchema info we have for IGSN
# +
import json
from highlight import pprint, pjson # a couple of pretty-printers for code
with open('../data/schema.igsn.org/json/v0.0.1/registry.json', 'r') as src:
registry_schema = json.load(src)
pjson(registry_schema['properties'])
# -
from jsonschema import validate
pjson(data[0])
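`jsonschema.validate` raises a `ValidationError` when an instance doesn't match the schema. A minimal self-contained example — the schema below is a made-up toy, not the IGSN registry schema:

```python
from jsonschema import validate, ValidationError

schema = {
    "type": "object",
    "properties": {"igsn": {"type": "string"}},
    "required": ["igsn"],
}

validate(instance={"igsn": "HRV003L17"}, schema=schema)   # passes silently

try:
    validate(instance={}, schema=schema)                  # missing the required key
    ok = False
except ValidationError:
    ok = True
```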
|
jupyter/schema_validation.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# # Time-Series Forecasting
#
# In simple terms, data points collected sequentially at regular intervals over a period of time are termed time-series data. A time series whose mean and variance are constant is called a stationary time series.
# Time series tend to have a linear relationship between lagged variables, which is called autocorrelation. Hence a time series' historic data can be modelled to forecast future data points without involving any other independent variables; such models are generally known as time-series forecasting models. Key application areas include sales forecasting, economic forecasting, stock market forecasting, etc.
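The lag-k autocorrelation mentioned above can be computed directly; a small numpy illustration:

```python
import numpy as np

def autocorr(x, lag=1):
    """Lag-k autocorrelation: correlation of a centered series with itself shifted by `lag`."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))
```

A strongly trending series has lag-1 autocorrelation close to 1, which is exactly the structure ARIMA-style models exploit.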
# +
import warnings
warnings.filterwarnings('ignore')
from IPython.display import Image
Image(filename='../Chapter 3 Figures/Time_Series.png', width=500)
# -
# # Autoregressive Integrated Moving Average (ARIMA)
# +
import pandas as pd
import numpy as np
import matplotlib.pylab as plt
# %matplotlib inline
from scipy import stats
import statsmodels.api as sm
from statsmodels.graphics.api import qqplot
from statsmodels.tsa.stattools import adfuller
# function to calculate MAE, RMSE
from sklearn.metrics import mean_absolute_error, mean_squared_error
# -
# Let's predict sales data using ARIMA
# Data Source: <NAME> (1976), in file: data/anderson14, Description: Monthly sales of company X Jan ’65 – May ’71 C. Cahtfield
df = pd.read_csv('Data/TS.csv')
ts = pd.Series(list(df['Sales']), index=pd.to_datetime(df['Month'],format='%Y-%m'))
# +
from statsmodels.tsa.seasonal import seasonal_decompose
decomposition = seasonal_decompose(ts)
trend = decomposition.trend
seasonal = decomposition.seasonal
residual = decomposition.resid
plt.subplot(411)
plt.title('Time Series - Decomposed')
plt.plot(ts, label='Original')
plt.legend(loc='best')
plt.subplot(412)
plt.plot(trend, label='Trend')
plt.legend(loc='best')
plt.subplot(413)
plt.plot(seasonal,label='Seasonality')
plt.legend(loc='best')
plt.tight_layout()
# -
# # Checking for stationarity
# Let's split the data into train and test. Since it's a time series, let's use 1965 to 1968 for training and the remaining years for testing.
#
# The business forecasting edition by <NAME> Wichern recommends a minimum of 4 years of data, depending on its regularity. If the seasonal pattern is regular, 3 years of data would be sufficient.
s_test = adfuller(ts, autolag='AIC')
# extract p value from test results
print("p value > 0.05 means data is non-stationary: ", s_test[1])
# ### Remove stationarity
# +
# log transform to remove variability
ts_log = np.log(ts)
ts_log.dropna(inplace=True)
s_test = adfuller(ts_log, autolag='AIC')
print("Log transform stationary check p value: ", s_test[1])
# +
#Take first difference:
ts_log_diff = ts_log - ts_log.shift()
ts_log_diff.dropna(inplace=True)
s_test = adfuller(ts_log_diff, autolag='AIC')
print("First order difference stationary check p value: ", s_test[1])
# +
# a moving average smooths the series
moving_avg = ts_log.rolling(2).mean()
fig, (ax1, ax2) = plt.subplots(1, 2, figsize = (10,3))
ax1.set_title('First order difference')
ax1.tick_params(axis='x', labelsize=7)
ax1.tick_params(axis='y', labelsize=7)
ax1.plot(ts_log_diff)
ax2.plot(ts_log)
ax2.set_title('Log vs Moving Avg')
ax2.tick_params(axis='x', labelsize=7)
ax2.tick_params(axis='y', labelsize=7)
ax2.plot(moving_avg, color='red')
plt.tight_layout()
# -
# ### Autocorrelation test
#
# We determined that the log of the time series requires at least one order of differencing to stationarize. Now let's plot the ACF and PACF charts for the first-order differenced log series.
# +
fig, (ax1, ax2) = plt.subplots(1, 2, figsize = (10,3))
# ACF chart
fig = sm.graphics.tsa.plot_acf(ts_log_diff.values.squeeze(), lags=20, ax=ax1)
# draw 95% confidence interval line
ax1.axhline(y=-1.96/np.sqrt(len(ts_log_diff)),linestyle='--',color='gray')
ax1.axhline(y=1.96/np.sqrt(len(ts_log_diff)),linestyle='--',color='gray')
ax1.set_xlabel('Lags')
# PACF chart
fig = sm.graphics.tsa.plot_pacf(ts_log_diff, lags=20, ax=ax2)
# draw 95% confidence interval line
ax2.axhline(y=-1.96/np.sqrt(len(ts_log_diff)),linestyle='--',color='gray')
ax2.axhline(y=1.96/np.sqrt(len(ts_log_diff)),linestyle='--',color='gray')
ax2.set_xlabel('Lags')
# -
# The PACF plot has significant spikes only at the first lags, meaning that the higher-order autocorrelations are effectively explained by the lag-1 and lag-2 autocorrelations.
#
# p = 2 i.e., the lag value where the PACF chart crosses the upper confidence interval for the first time
#
# q = 2 i.e., the lag value where the ACF chart crosses the upper confidence interval for the first time
# +
# build model
model = sm.tsa.ARIMA(ts_log, order=(2,0,2))
results_ARIMA = model.fit(disp=-1)
ts_predict = results_ARIMA.predict()
# Evaluate model
print "AIC: ", results_ARIMA.aic
print "BIC: ", results_ARIMA.bic
print "Mean Absolute Error: ", mean_absolute_error(ts_log.values, ts_predict.values)
print "Root Mean Squared Error: ", np.sqrt(mean_squared_error(ts_log.values, ts_predict.values))
# check autocorrelation
print "Durbin-Watson statistic :", sm.stats.durbin_watson(results_ARIMA.resid.values)
# -
# The usual practice is to build several models with different p and q and select the one with the smallest AIC, BIC, MAE, and RMSE values.
#
# Now let's increase p to 3 and see whether the result changes.
# +
model = sm.tsa.ARIMA(ts_log, order=(3,0,2))
results_ARIMA = model.fit(disp=-1)
ts_predict = results_ARIMA.predict()
plt.title('ARIMA Prediction - order(3,0,2)')
plt.plot(ts_log, label='Actual')
plt.plot(ts_predict, 'r--', label='Predicted')
plt.xlabel('Year-Month')
plt.ylabel('Sales')
plt.legend(loc='best')
print "AIC: ", results_ARIMA.aic
print "BIC: ", results_ARIMA.bic
print "Mean Absolute Error: ", mean_absolute_error(ts_log.values, ts_predict.values)
print "Root Mean Squared Error: ", np.sqrt(mean_squared_error(ts_log.values, ts_predict.values))
# check autocorrelation
print "Durbin-Watson statistic :", sm.stats.durbin_watson(results_ARIMA.resid.values)
# -
# ### Let's try with one level differencing
# +
model = sm.tsa.ARIMA(ts_log, order=(3,1,2))
results_ARIMA = model.fit(disp=-1)
ts_predict = results_ARIMA.predict()
# Correction for differencing
predictions_ARIMA_diff = pd.Series(ts_predict, copy=True)
predictions_ARIMA_diff_cumsum = predictions_ARIMA_diff.cumsum()
predictions_ARIMA_log = pd.Series(ts_log.iloc[0], index=ts_log.index)
predictions_ARIMA_log = predictions_ARIMA_log.add(predictions_ARIMA_diff_cumsum,fill_value=0)
plt.title('ARIMA Prediction - order(3,1,2)')
plt.plot(ts_log, label='Actual')
plt.plot(predictions_ARIMA_log, 'r--', label='Predicted')
plt.xlabel('Year-Month')
plt.ylabel('Sales')
plt.legend(loc='best')
print "AIC: ", results_ARIMA.aic
print "BIC: ", results_ARIMA.bic
print "Mean Absolute Error: ", mean_absolute_error(ts_log_diff.values, ts_predict.values)
print "Root Mean Squared Error: ", np.sqrt(mean_squared_error(ts_log_diff.values, ts_predict.values))
# check autocorrelation
print "Durbin-Watson statistic :", sm.stats.durbin_watson(results_ARIMA.resid.values)
# -
# In the above chart we can see that the model over-predicts in some places, and the AIC and BIC values are higher than for the previous model. Note: AIC/BIC can be positive or negative; in either case, the model with the lower value is preferred.
#
# ### Predicting the future values
#
# The order (p=3, d=0, q=2) gives the smallest values on the evaluation metrics, so let's use it as the final model to predict future values, i.e., June 1971 through May 1972.
# +
# final model
model = sm.tsa.ARIMA(ts_log, order=(3,0,2))
results_ARIMA = model.fit(disp=-1)
# predict future values
ts_predict = results_ARIMA.predict('1971-06-01', '1972-05-01')
plt.title('ARIMA Future Value Prediction - order(3,0,2)')
plt.plot(ts_log, label='Actual')
plt.plot(ts_predict, 'r--', label='Predicted')
plt.xlabel('Year-Month')
plt.ylabel('Sales')
plt.legend(loc='best')
|
jupyter_notebooks/machine_learning/ebook_mastering_ml_in_6_steps/Chapter_3_Code/Code/Time_Series.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:pcmdi_metrics_dev] *
# language: python
# name: conda-env-pcmdi_metrics_dev-py
# ---
# # Parallel Coordinate Plot: Real Case Example -- Mean Climate
#
# - Generate a static image of Parallel coordinate plot using Matplotlib, for mean climate metrics.
# - Author: <NAME> (2021-07)
# - Last update: 2021.08
#
# ## 1. Read data from JSON files
#
# Input data for the parallel coordinate plot is expected as a set of (stacked, or a list of) 2-d numpy arrays, with lists of strings for the x- and y-axis labels.
#
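# A minimal illustration of that expected input shape; the model and metric names below are invented purely for illustration.

```python
import numpy as np

# rows: models (one marker/line each), columns: metrics (one vertical axis each)
data = np.array([[0.8, 1.2, 0.5],
                 [0.9, 1.0, 0.7]])
metric_names = ['pr RMSE', 'tas RMSE', 'psl RMSE']   # x-axis labels
model_names = ['Model-A', 'Model-B']                 # legend entries
print(data.shape)
```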
# ### 1.1 Provide PMP output JSON files
import glob
import os
import numpy as np
# PMP output files downloadable from the [PMP results archive](https://github.com/PCMDI/pcmdi_metrics_results_archive).
url = ("https://github.com/PCMDI/pcmdi_metrics_results_archive/" +
"raw/main/metrics_results/mean_climate/cmip6/historical/v20210811/cmip6.historical.regrid2.2p5x2p5.v20210811.tar.gz")
# +
import requests
r = requests.get(url, allow_redirects=True)
filename = url.split('/')[-1]
with open(filename, 'wb') as file:
file.write(r.content)
# -
# Uncompress PMP output archive file
# +
import tarfile
# open file
with tarfile.open(filename) as file:
# extracting file
os.makedirs('json_files', exist_ok=True)
file.extractall('./json_files')
# -
# Check JSON files
mip = 'cmip6'
data_version = "v20210811"
json_dir = './json_files/'
json_list = sorted(glob.glob(os.path.join(json_dir, '*' + mip + '*' + data_version + '.json')))
for json_file in json_list:
print(json_file.split('/')[-1])
# ### 1.2 Extract data from JSON files
#
# Use `read_mean_clim_json_files` function to extract data from the above JSON files.
#
# #### Parameters
# - `json_list`: list of string, where each element is for path/file for PMP output JSON files
# - `stats`: list of string, where each element is statistic to extract from the JSON. Optional
# - `regions`: list of string, where each element is region to extract from the JSON. Optional
# - `mip`: string, category for mip, e.g., 'cmip6'. Optional
# - `debug`: bool, default=False, enable few print statements to help debug
#
# #### Returns
# - `df_dict`: dictionary with a `[stat][season][region]` hierarchy, each leaf storing a pandas DataFrame of metric numbers (rows: models, columns: variables, i.e., a 2-d array)
# - `var_list`: list of string, all variables from JSON files
# - `var_unit_list`: list of string, all variables and its units from JSON files
# - `regions`: list of string, regions
# - `stats`: list of string, statistics
# +
from pcmdi_metrics.graphics import read_mean_clim_json_files
df_dict, var_list, var_unit_list, regions, stats = read_mean_clim_json_files(
json_list, mip=mip)
# -
print('var_list:', var_list)
print('var_unit_list:', var_unit_list)
print("len(var_list:", len(var_list))
print('regions:', regions)
print('stats:', stats)
df_dict['rms_xyt']['ann']['global']
# +
# Simple re-order variables
if 'zg-500' in var_list and 'sfcWind' in var_list:
var_list.remove('zg-500')
idx_sfcWind = var_list.index('sfcWind')
var_list.insert(idx_sfcWind+1, 'zg-500')
print("var_list:", var_list)
print("len(var_list:", len(var_list))
# -
data = df_dict['rms_xyt']['ann']['global'][var_list].to_numpy()
model_names = df_dict['rms_xyt']['ann']['global']['model'].tolist()
metric_names = ['\n['.join(var_unit.split(' [')) for var_unit in var_unit_list]
model_highlights = ['E3SM-1-0', 'E3SM-1-1', 'E3SM-1-1-ECA']
print('data.shape:', data.shape)
print('len(metric_names): ', len(metric_names))
print('len(model_names): ', len(model_names))
df_dict['rms_xyt']['ann']['global'][var_list].columns
# ## 2. Plot
from pcmdi_metrics.graphics import parallel_coordinate_plot
# Parameters
# ----------
# - `data`: 2-d numpy array for metrics
# - `metric_names`: list, names of metrics for individual vertical axes (axis=1)
# - `model_names`: list, name of models for markers/lines (axis=0)
# - `model_highlights`: list, default=None, List of models to highlight as lines
# - `fig`: `matplotlib.figure` instance to which the parallel coordinate plot is plotted. If not provided, use current axes or create a new one. Optional.
# - `ax`: `matplotlib.axes.Axes` instance to which the parallel coordinate plot is plotted. If not provided, use current axes or create a new one. Optional.
# - `figsize`: tuple (two numbers), default=(15,5), image size
# - `show_boxplot`: bool, default=True, show box-and-whiskers plot
# - `show_violin`: bool, default=True, show violin plot
# - `title`: string, default=None, plot title
# - `identify_all_models`: bool, default=True. Show and identify all models using markers
# - `xtick_labelsize`: number, fontsize for x-axis tick labels (optional)
# - `ytick_labelsize`: number, fontsize for y-axis tick labels (optional)
# - `colormap`: string, default='viridis', matplotlib colormap
# - `logo_rect`: sequence of float. The dimensions [left, bottom, width, height] of the new Axes. All quantities are in fractions of figure width and height. Optional
# - `logo_off`: bool, default=False, turn off PMP logo
#
# Return
# ------
# - `fig`: matplotlib component for figure
# - `ax`: matplotlib component for axis
fig, ax = parallel_coordinate_plot(data, metric_names, model_names, model_highlights,
title='Mean Climate: RMS_XYT, ANN, Global',
figsize=(21, 7),
colormap='tab20',
xtick_labelsize=10,
logo_rect=[0.8, 0.8, 0.15, 0.15])
fig.text(0.99, -0.45, 'Data version\n'+data_version, transform=ax.transAxes,
fontsize=12, color='black', alpha=0.6, ha='right', va='bottom',)
# Save figure as an image file
fig.savefig('mean_clim_parallel_coordinate_plot_'+data_version+'.png', facecolor='w', bbox_inches='tight')
# +
# Add Watermark
ax.text(0.5, 0.5, 'Example', transform=ax.transAxes,
fontsize=100, color='black', alpha=0.6,
ha='center', va='center', rotation='0')
# Save figure as an image file
fig.savefig('mean_clim_parallel_coordinate_plot_example.png', facecolor='w', bbox_inches='tight')
|
pcmdi_metrics/graphics/parallel_coordinate_plot/parallel_coordinate_plot_mean_clim.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: py310
# language: python
# name: py310
# ---
# # Models with Vector Inputs
#
# This is a basic tutorial on supervised models with vector inputs (the input is a vector of aggregated features).
#
# We approach the problem from the optimization perspective and demonstrate how a series of increasingly complex models can be built using a generic optimizer (stochastic gradient descent (SGD)).
#
# ## Data
#
# This notebook uses only generated datasets; there are no external dependencies.
# +
import tensorflow as tf
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Dense, Input, Concatenate, Embedding, Dot
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import colors
from matplotlib import rc, cm
plt.rcParams.update({'pdf.fonttype': 'truetype'})
# -
# # Step 1: Linear Regression
#
# We start with building a basic one-dimensional linear regression model using SGD.
# +
m = 1 # dimensions
n = 500 # samples
x = np.random.randn(n, m)
c = 2 * np.ones(m)
y = np.dot(x, c.T) + 0.5 * np.random.randn(n) + 5
model = tf.keras.Sequential([
tf.keras.layers.Dense(1, activation='linear')
])
model.compile(optimizer = 'rmsprop',
loss = tf.keras.losses.MeanSquaredError(),
metrics=['mse'])
model.fit(x, y, epochs=256, verbose=0)
model.summary()
test_loss, test_mse = model.evaluate(x, y, verbose=2)
print('\nTest MSE:', test_mse)
y_hat = model.predict(x)
fig, ax = plt.subplots(1, 2, figsize=(10, 5))
ax[0].scatter(x, y, c='C0', alpha=0.3)
ax[1].scatter(x, y_hat, c='C1', alpha=0.3)
for i in (0,1):
ax[i].grid(True)
ax[i].set_xlabel('x')
ax[i].set_ylabel('y')
# -
# # Step 2: Classification with a Linear Boundary
# +
m = 2 # dimensions
n = 200 # samples per class
#
# Input data
#
x1 = np.random.randn(n, m) + np.array((2, 1))
x2 = np.random.randn(n, m) + np.array((0, -1))
x3 = np.random.randn(n, m) + np.array((-4, 0))
#
# Plot input data
#
extent = [-5, 5, -5, 5]
fig, ax = plt.subplots(1, figsize=(6, 6))
ax.scatter(x1[:, 0], x1[:, 1], label='Class 1', alpha=0.8, marker='+')
ax.scatter(x2[:, 0], x2[:, 1], label='Class 2', alpha=0.8, marker='+')
ax.scatter(x3[:, 0], x3[:, 1], label='Class 3', alpha=0.8, marker='+')
ax.set_xlabel('x1')
ax.set_ylabel('x2')
ax.set_xlim(extent[0], extent[1])
ax.set_ylim(extent[2], extent[3])
ax.grid(True)
ax.legend()
#
# Define and fit the model
#
x = np.vstack([x1, x2, x3])
y = np.hstack([np.zeros(n), np.zeros(n)+1, np.zeros(n)+2])
model = tf.keras.Sequential([
tf.keras.layers.Dense(3, activation='linear')
])
model.compile(optimizer = 'adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(x, y, epochs=150, verbose=0)
model.summary()
#
# Plot class probabilities
#
mesh_steps = 50
x_mesh, y_mesh = np.meshgrid(np.linspace(extent[0], extent[1], mesh_steps), np.linspace(extent[2], extent[3], mesh_steps))
mesh = np.vstack([x_mesh.ravel(), y_mesh.ravel()]).T
probability_model = tf.keras.Sequential([model,
tf.keras.layers.Softmax()])
xi = (x1, x2, x3)
fig, ax = plt.subplots(2, 3, figsize=(9, 6))
class_probs = np.zeros([3, mesh_steps, mesh_steps])
for i in range(3):
class_scores = model.predict(mesh)[:, i].reshape(mesh_steps, mesh_steps)
ax[0, i].imshow(class_scores, origin='lower', extent=extent, cmap='viridis')
ax[0, i].scatter(xi[i][:, 0], xi[i][:, 1], alpha=0.3, color='w', marker='+')
ax[0, i].set_xlim(extent[0], extent[1])
ax[0, i].set_ylim(extent[2], extent[3])
class_probs[i] = probability_model.predict(mesh)[:, i].reshape(mesh_steps, mesh_steps)
ax[1, i].imshow(class_probs[i], origin='lower', extent=extent, cmap='viridis')
ax[1, i].scatter(xi[i][:, 0], xi[i][:, 1], alpha=0.3, color='w', marker='+')
ax[1, i].set_xlim(extent[0], extent[1])
ax[1, i].set_ylim(extent[2], extent[3])
ax[1, i].set_title(f'Class {i+1} prob')
plt.tight_layout()
fig, ax = plt.subplots(1, figsize=(3, 3))
cmap = colors.ListedColormap(['C0', 'C1', 'C2'])
plt.imshow(np.argmax(class_probs, axis=0), origin='lower', extent=extent, cmap=cmap)
plt.tight_layout()
# -
# # Step 3: Classification with Nonlinear Boundaries
# +
def two_spirals(n_points, noise=1.0):
n = np.sqrt(np.random.rand(n_points,1)) * 360 * (2*np.pi)/360
d1x = -np.cos(n)*n + np.random.rand(n_points,1) * noise
d1y = np.sin(n)*n + np.random.rand(n_points,1) * noise
return (np.vstack((np.hstack((d1x, d1y)),np.hstack((-d1x, -d1y)))),
np.hstack((np.zeros(n_points),np.ones(n_points))))
x, y = two_spirals(1000)
extent = [-6, 6, -6, 6]
fix, ax = plt.subplots(1, figsize=(6, 6))
ax.set_title('Training set')
ax.plot(x[y==0, 0], x[y==0, 1], '+', label='Class 1')
ax.plot(x[y==1, 0], x[y==1, 1], '+', label='Class 2')
ax.legend()
ax.grid(True)
ax.set_xlim(extent[0], extent[1])
ax.set_ylim(extent[2], extent[3])
plt.savefig('nonlinear-dataset.pdf')
model = tf.keras.Sequential([
tf.keras.layers.Dense(8, activation='relu', name='layer_1'),
tf.keras.layers.Dense(8, activation='relu', name='layer_2'),
tf.keras.layers.Dense(2, activation='relu', name='layer_3'),
tf.keras.layers.Dense(1, activation='sigmoid', name='output')
])
model.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
model.fit(x, y, epochs=150, batch_size=10, verbose=0)
mesh_steps = 100
x_mesh, y_mesh = np.meshgrid(np.linspace(extent[0], extent[1], mesh_steps), np.linspace(extent[2], extent[3], mesh_steps))
mesh = np.vstack([x_mesh.ravel(), y_mesh.ravel()]).T
prob_mesh = model.predict(mesh).reshape(mesh_steps, mesh_steps)
fig, ax = plt.subplots(1, figsize=(6, 6))
ax.set_title('Predicted class')
ax.imshow(prob_mesh, origin='lower', extent=extent)
ax.scatter(x[y==0, 0], x[y==0, 1], alpha=0.5, color='C0', marker='+')
ax.scatter(x[y==1, 0], x[y==1, 1], alpha=0.5, color='C1', marker='+')
ax.set_xlim(extent[0], extent[1])
ax.set_ylim(extent[2], extent[3])
plt.savefig('nonlinear-decision.pdf')
model_map_layer = tf.keras.Model(model.input, model.get_layer('layer_3').output)
x_mapped = model_map_layer.predict(x)
fig, ax = plt.subplots(1, figsize=(6, 6))
ax.set_title('Mapped space')
ax.scatter(x_mapped[y==0, 0], x_mapped[y==0, 1], marker='+')
ax.scatter(x_mapped[y==1, 0], x_mapped[y==1, 1], marker='+')
ax.grid(True)
plt.savefig('nonlinear-mapping.pdf')
# -
# # Step 4: Lookup Embeddings
#
# In this section, we demonstrate how sparse features (high-cardinality categorical or one-hot) can be transformed into dense vectors and then used in regression or classification models.
#
# For the sake of illustration, we consider the problem of modeling interactions between two entities. Many practical problems can be reduced to this form. For example, one needs to model interactions between users and products in a recommendation engine.
#
# We start with an interaction matrix that includes scores for pairs of entities, and train a model that predicts unknown scores. Each entity ID is mapped to a low-dimensional embedding using a lookup table.
# +
#
# Input data: interaction matrix
#
_ = np.nan
R = [[_, 5, 3, _, 4, _, 2, _, 1, _], #
[4, 5, _, 4, 4, 1, _, 2, 3, _], # entity A
[5, 4, 5, _, _, 2, 3, _, 4, _], # group 1
[_, _, 5, 5, 3, _, 1, 3, 2, 1], #
[3, 5, 5, _, 5, _, 1, 2, _, 2], # _____________
[1, _, 2, _, 2, 4, _, 5, _, 4], #
[_, 2, 1, 1, _, 5, 5, _, 3, _], # entity A
[2, _, 3, 1, 1, _, 4, _, _, 5], # group 2
[3, 1, 1, _, _, 4, _, 5, 5, 3], #
[2, 2, _, _, 2, _, 5, 4, 5, _]] #
#| | |
#| entity B | entity B |
#| group 1 | group 2 |
#
# Convert an interaction matrix into an array of samples (a_id, b_id, label)
#
def matrix_to_samples(R):
x = []
for u in range(R.shape[0]):
for j in range(R.shape[1]):
if not np.isnan(R[u, j]):
x.append([u, j, R[u, j]])
return np.array(x)
R_norm = np.array(R)
xy = matrix_to_samples(R_norm)
x1, x2, y = xy[:, 0].astype(int), xy[:, 1].astype(int), xy[:, 2]
#
# Visualize the input data
#
n_a, n_b = R_norm.shape  # defined here because it is used below (also set again before the model)
fig, ax = plt.subplots(1, figsize=(6, 6))
ax.set_title('Input data')
ax.scatter(xy[:, 0] + 1, xy[:, 1] + 1, s = 20 * xy[:, 2], c='k')
ax.set_yticks(range(1, n_a + 1))
ax.set_xticks(range(1, n_b + 1))
ax.grid(True)
#
# Model specification
#
n_a, n_b = R_norm.shape
embedding_dim = 2
input_a = Input(shape=(1,))
input_b = Input(shape=(1,))
embedding_a = Embedding(input_dim=n_a, output_dim=embedding_dim, name='a_embedding')(input_a)
embedding_b = Embedding(input_dim=n_b, output_dim=embedding_dim, name='b_embedding')(input_b)
combined = Dot(axes=2)([embedding_a, embedding_b])
score = Dense(1, activation='linear')(combined)
model = Model(inputs=[input_a, input_b], outputs=score)
#
# Model training
#
opt = tf.keras.optimizers.Adam(learning_rate=0.01)
model.compile(optimizer=opt, loss='mean_absolute_error')
history = model.fit([x1, x2], y, batch_size=1, epochs=30, verbose=0, validation_data=([x1, x2], y))
fig, ax = plt.subplots(1, figsize=(6, 3))
ax.plot(history.history['loss']);
ax.set_xlabel('Training epoch')
ax.set_ylabel('Loss')
ax.grid(True)
#
# Impute and visualize the unknown values
#
fig, ax = plt.subplots(1, figsize=(6, 6))
ax.set_title('Input data')
ax.scatter(xy[:, 0] + 1, xy[:, 1] + 1, s = 20 * xy[:, 2], c='k')
ax.set_yticks(range(1, n_a + 1))
ax.set_xticks(range(1, n_b + 1))
ax.grid(True)
imputed_xy = []
for ai in range(n_a):
for bi in range(n_b):
if np.isnan(R_norm[ai, bi]):
score = model.predict([np.array([ai]), np.array([bi])])[0, 0, 0]
imputed_xy.append([ai, bi, score])
imputed_xy = np.array(imputed_xy)
ax.scatter(imputed_xy[:, 0] + 1, imputed_xy[:, 1] + 1, s = 20 * imputed_xy[:, 2], c='r')
#
# Visualize embedding space
#
a_embedding_model = tf.keras.Model(model.input, model.get_layer('a_embedding').output)
a_embeddings = []
scores = []
for ai in range(n_a):
bi = 1 # any specific entity B
e = a_embedding_model.predict([np.array([ai]), np.array([bi])])[0, 0]
score = model.predict([np.array([ai]), np.array([bi])])[0, 0]
a_embeddings.append(e)
scores.append(score)
a_embeddings = np.array(a_embeddings)
scores = np.array(scores)
fig, ax = plt.subplots(1, figsize=(6, 6))
ax.set_title('Embedding space')
ax.scatter(a_embeddings[:, 0], a_embeddings[:, 1], s=10*np.power(scores, 2), c='k')
for i in range(n_a):
ax.annotate(i, (a_embeddings[i, 0], a_embeddings[i, 1]), xytext=(a_embeddings[i, 0], a_embeddings[i, 1] + 0.05))
ax.grid(True)
# -
|
_basic-components/regression/vector-models.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:root] *
# language: python
# name: conda-root-py
# ---
# # Structured natural language processing with Pandas and spaCy (code)
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import spacy
nlp = spacy.load("en_core_web_sm")
sns.set_style("whitegrid")
from IPython.display import display, HTML
display(HTML("<style>.container { width:90% !important; }</style>"))
# -
data_loc_gh = "https://gist.githubusercontent.com/JakubPetriska/060958fd744ca34f099e947cd080b540/raw/963b5a9355f04741239407320ac973a6096cd7b6/quotes.csv"
df = pd.read_csv(data_loc_gh)
df.columns = df.columns.str.lower()
df["doc_id"] = df.index
df.head()
df.shape
docs = list(nlp.pipe(df.quote))
doc = nlp(df.quote[819])
print(doc)
spacy.displacy.render(doc, style="ent")
#doc = nlp(df.quote[0])
spacy.displacy.render(doc, style="dep")
doc = nlp(df.quote[819])
spacy.displacy.render(doc, style="ent")
doc_nouns = list(doc.noun_chunks)
print(doc_nouns)
[(i, i.label_) for i in doc.ents]
[(i, i.ent_type_, i.is_stop) for i in doc]
type(docs)
# +
def extract_tokens_plus_meta(doc:spacy.tokens.doc.Doc):
"""Extract tokens and metadata from individual spaCy doc."""
return [
(i.text, i.i, i.lemma_, i.ent_type_, i.tag_,
i.dep_, i.pos_, i.is_stop, i.is_alpha,
i.is_digit, i.is_punct) for i in doc
]
def tidy_tokens(docs):
"""Extract tokens and metadata from list of spaCy docs."""
cols = [
"doc_id", "token", "token_order", "lemma",
"ent_type", "tag", "dep", "pos", "is_stop",
"is_alpha", "is_digit", "is_punct"
]
meta_df = []
for ix, doc in enumerate(docs):
meta = extract_tokens_plus_meta(doc)
meta = pd.DataFrame(meta)
meta.columns = cols[1:]
meta = meta.assign(doc_id = ix).loc[:, cols]
meta_df.append(meta)
return pd.concat(meta_df)
# -
tidy_docs = tidy_tokens(docs)
tidy_docs.head(11)
tidy_docs.groupby("doc_id").size().hist(figsize=(14, 7), color="red", alpha=.4, bins=50);
tidy_docs.query("ent_type != ''").ent_type.value_counts()
tidy_docs.query("is_stop == False & is_punct == False").lemma.value_counts().head(10).plot(kind="barh", figsize=(24, 14), alpha=.7)
plt.yticks(fontsize=20)
plt.xticks(fontsize=20);
(tidy_docs
.groupby("doc_id")
.apply(lambda x: x.assign(
prev_token = lambda x: x.token.shift(1),
next_token = lambda x: x.token.shift(-1))
)
.reset_index(drop=True)
.query("tag == 'POS'")
.loc[:, ["doc_id", "prev_token", "token", "next_token"]]
)
|
_notebooks/structured-nlp.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## 1. Read in the data and transform latitude and longitude to x-y coordinate
import os
import pandas as pd
import math
import matplotlib.pyplot as plt
import numpy as np
from numpy import nanmax,argmax, unravel_index
import scipy.stats as stat
from sklearn.cluster import KMeans
from scipy.spatial.distance import cdist
from datetime import timedelta,datetime
from dateutil import tz
import time
from scipy.spatial.distance import pdist, squareform
np.set_printoptions(threshold=np.inf)
path = 'C:/Users/glius/Google Drive/Gaussian Process/gps-iphonex/2019-04-0910'
frames = [pd.read_csv(os.path.join(path, f)) for f in os.listdir(path)]
data = pd.concat(frames, ignore_index=True)
data.shape
data.head(10)
UTC = [datetime.strptime(i, '%Y-%m-%dT%H:%M:%S.%f') for i in data['UTC time']]
EST = [UTC[i]-timedelta(hours=4) for i in range(data.shape[0])]
year = [EST[i].year for i in range(data.shape[0])]
month = [EST[i].month for i in range(data.shape[0])]
day = [EST[i].day for i in range(data.shape[0])]
hour = [EST[i].hour for i in range(data.shape[0])]
data['year'] = pd.Series(year)
data['month'] = pd.Series(month)
data['day'] = pd.Series(day)
data['hour'] = pd.Series(hour)
data.head(10)
## convert the angles from degrees to radians
def xytransform(data):
timestamp = np.array(data["timestamp"]-min(data["timestamp"]))/1000
latitude = np.array(data["latitude"])/180*math.pi
longitude = np.array(data["longitude"])/180*math.pi
lam_min=min(latitude)
lam_max=max(latitude)
phi_min=min(longitude)
phi_max=max(longitude)
R=6.371*10**6
d1=(lam_max-lam_min)*R
d2=(phi_max-phi_min)*R*math.sin(math.pi/2-lam_max)
d3=(phi_max-phi_min)*R*math.sin(math.pi/2-lam_min)
w1=(latitude-lam_min)/(lam_max-lam_min)
w2=(longitude-phi_min)/(phi_max-phi_min)
x=np.array(w1*(d3-d2)/2+w2*(d3*(1-w1)+d2*w1))
y=np.array(w1*d1*math.sin(math.acos((d3-d2)/(2*d1))))
return {'t':timestamp,'x':x,'y':y,'year':np.array(data['year']),'month':np.array(data['month']),
'day':np.array(data['day']),'hour':np.array(data['hour'])}
txy = xytransform(data)
plt.plot(txy['x'],txy['y'],'r.')
plt.show()
# ## 2. Define small functions used in SOGP function
# +
## parameter 1: kernel K0
## x = [t,g], where t is timestamp in seconds and g is x/y in meters
## period: 1 day = 24*60*60 = 86400s 1 week = 86400s*7 = 604800s
a1 = 10
b1 = 10
a2 = 70
b2 = 70
g = 50
def K0(x1,x2):
k1 = np.exp(-((abs(x1[0]-x2[0]))%86400)/a1)*np.exp(-np.floor(abs(x1[0]-x2[0])/86400)/b1)
k2 = np.exp(-((abs(x1[0]-x2[0]))%604800)/a2)*np.exp(-np.floor(abs(x1[0]-x2[0])/604800)/b2)
k3 = np.exp(-abs(x1[1]-x2[1])/200)
return 0.35*k1+0.15*k2+0.5*k3
## similarity matrix between bv's
def update_K(bv,t,K,X):
if t==0:
mat = np.array([1])
else:
d = np.shape(K)[0]
row = np.ones(d)
column = np.ones([d+1,1])
if X.ndim==1:
for i in range(d):
row[i] = column[i,0] = K0(X[t],X[bv[i]])
else:
for i in range(d):
row[i] = column[i,0] = K0(X[t,:],X[bv[i],:])
mat = np.hstack([np.vstack([K,row]),column])
return mat
## similarity vector between the t'th input with all bv's, t starts from 0 here
def update_k(bv,t,X):
d = len(bv)
if d==0:
out = np.array([0])
if d>=1:
out = np.zeros(d)
if X.ndim==1:
for i in range(d):
out[i] = K0(X[t],X[bv[i]])
else:
for i in range(d):
out[i] = K0(X[t,:],X[bv[i],:])
return out
def update_e_hat(Q,k):
if np.shape(Q)[0]==0:
out = np.array([0])
else:
out = np.dot(Q,k)
return out
def update_gamma(k,e_hat):
return 1-np.dot(k,e_hat)
def update_q(t,k,alpha,sigmax,Y):
if t==0:
out = Y[t]/sigmax
else:
out = (Y[t]-np.dot(k,alpha))/sigmax
return out
def update_s_hat(C,k,e_hat):
return np.dot(C,k)+e_hat
def update_eta(gamma,sigmax):
r = -1/sigmax
return 1/(1+gamma*r)
def update_alpha_hat(alpha,q,eta,s_hat):
return alpha+q*eta*s_hat
def update_c_hat(C,sigmax,eta,s_hat):
r = -1/sigmax
return C+r*eta*np.outer(s_hat,s_hat)
def update_s(C,k):
if np.shape(C)[0]==0:
s = np.array([1])
else:
temp = np.dot(C,k)
s = np.append(temp,1)
return s
def update_alpha(alpha,q,s):
T_alpha = np.append(alpha,0)
new_alpha = T_alpha + q*s
return new_alpha
def update_c(C,sigmax,s):
d = np.shape(C)[0]
if d==0:
U_c = np.array([0])
else:
U_c = np.hstack([np.vstack([C,np.zeros(d)]),np.zeros([d+1,1])])
r = -1/sigmax
new_c = U_c+r*np.outer(s,s)
return new_c
def update_Q(Q,gamma,e_hat):
d = np.shape(Q)[0]
if d==0:
out = np.array([1])
else:
temp = np.append(e_hat,-1)
new_Q = np.hstack([np.vstack([Q,np.zeros(d)]),np.zeros([d+1,1])])
out = new_Q + 1/gamma*np.outer(temp,temp)
return out
def update_alpha_vec(alpha,Q,C):
t = len(alpha)-1
return alpha[:t]-alpha[t]/(C[t,t]+Q[t,t])*(Q[t,:t]+C[t,:t])
def update_c_mat(C,Q):
t = np.shape(C)[0]-1
return C[:t,:t]+np.outer(Q[t,:t],Q[t,:t])/Q[t,t]-np.outer(Q[t,:t]+C[t,:t],Q[t,:t]+C[t,:t])/(Q[t,t]+C[t,t])
def update_q_mat(Q):
t = np.shape(Q)[0]-1
return Q[:t,:t]-np.outer(Q[t,:t],Q[t,:t])/Q[t,t]
def update_s_mat(k_mat,s_mat,index,Q):
k_mat = (k_mat[index,:])[:,index]
s_mat = (s_mat[index,:])[:,index]
step1 = k_mat-k_mat.dot(s_mat).dot(k_mat)
step2 = (step1[:d,:])[:,:d]
step3 = Q - Q.dot(step2).dot(Q)
return step3
# -
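# A quick sanity check of the composite kernel's daily periodicity, restating `K0` with the parameter values chosen above so the snippet is self-contained: two observations exactly one day apart score higher than two observations half a day apart.

```python
import numpy as np

a1, b1, a2, b2 = 10, 10, 70, 70  # same values as chosen above

def K0(x1, x2):
    # x = [t, g]: t is a timestamp in seconds, g is a coordinate in meters
    dt = abs(x1[0] - x2[0])
    k1 = np.exp(-(dt % 86400) / a1) * np.exp(-np.floor(dt / 86400) / b1)    # daily component
    k2 = np.exp(-(dt % 604800) / a2) * np.exp(-np.floor(dt / 604800) / b2)  # weekly component
    k3 = np.exp(-abs(x1[1] - x2[1]) / 200)                                  # spatial component
    return 0.35 * k1 + 0.15 * k2 + 0.5 * k3

same_hour_next_day = K0([0.0, 0.0], [86400.0, 0.0])  # exactly one day apart
half_day_apart = K0([0.0, 0.0], [43200.0, 0.0])      # twelve hours apart
print(same_hour_next_day, half_day_apart)
```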
# ## 3. Define SOGP function and naive GP function
# +
def SOGP(X,Y,sigma2,tol,d):
n = len(Y)
Q = []
C = []
alpha = []
bv = []
I = 0 ## an indicator shows if it is the first time that the number of bvs hits d
for i in range(n):
k = update_k(bv,i,X)
if np.shape(C)[0]==0:
sigmax = 1+sigma2
else:
sigmax = 1+sigma2+k.dot(C).dot(k)
q = update_q(i,k,alpha,sigmax,Y)
r = -1/sigmax
e_hat = update_e_hat(Q,k)
gamma = update_gamma(k,e_hat)
if gamma<tol:
s = update_s_hat(C,k,e_hat)
eta = update_eta(gamma,sigmax)
alpha = update_alpha_hat(alpha,q,eta,s)
C = update_c_hat(C,sigmax,eta,s)
else:
s = update_s(C,k)
alpha = update_alpha(alpha,q,s)
C = update_c(C,sigmax,s)
Q = update_Q(Q,gamma,e_hat)
bv = np.array(np.append(bv,i),dtype=int)
if len(bv)>=d:
I = I + 1
if I==1:
K = np.zeros([d,d])
if X.ndim==1:
for i in range(d):
for j in range(d):
K[i,j] = K0(X[bv[i]],X[bv[j]])
else:
for i in range(d):
for j in range(d):
K[i,j] = K0(X[bv[i],:],X[bv[j],:])
S = np.linalg.inv(np.linalg.inv(C)+K)
if len(bv)>d:
alpha_vec = update_alpha_vec(alpha,Q,C)
c_mat = update_c_mat(C,Q)
q_mat = update_q_mat(Q)
s_mat = np.hstack([np.vstack([S,np.zeros(d)]),np.zeros([d+1,1])])
s_mat[d,d] = 1/sigma2
k_mat = update_K(bv,i,K,X)
eps = np.zeros(d)
for j in range(d):
eps[j] = alpha_vec[j]/(q_mat[j,j]+c_mat[j,j])-s_mat[j,j]/q_mat[j,j]+np.log(1+c_mat[j,j]/q_mat[j,j])
loc = np.where(eps == np.min(eps))[0]
bv = np.array(np.delete(bv,loc),dtype=int)
if loc==0:
index = np.append(np.arange(1,d+1),0)
else:
index = np.append(np.append(np.arange(0,loc),np.arange(loc+1,d+1)),loc)
alpha = update_alpha_vec(alpha[index],(Q[index,:])[:,index],(C[index,:])[:,index])
C = update_c_mat((C[index,:])[:,index],(Q[index,:])[:,index])
Q = update_q_mat((Q[index,:])[:,index])
S = update_s_mat(k_mat,s_mat,index,Q)
K = (k_mat[index[:d],:])[:,index[:d]]
output = {'bv':bv, 'alpha':alpha}
return output
## X must be time (one-dimensional) for this function
def SOGP_pred(new, result, X):
bv = result['bv']
alpha = result['alpha']
d = len(bv)
if X.ndim==1:
k0 = [K0(new,X[bv[i]]) for i in range(d)]
else:
k0 = [K0(new,X[bv[i],:]) for i in range(d)]
pred = np.dot(alpha,k0)
return pred
def naive_GP(X,Y,sigma2,new):
n = len(Y)
Kmat = np.zeros([n,n])
if X.ndim==1:
k0 = np.array([K0(new,X[i]) for i in range(n)])
for i in range(n):
for j in range(n):
Kmat[i,j]= K0(X[i],X[j])
else:
k0 = np.array([K0(new,X[i,:]) for i in range(n)])
for i in range(n):
for j in range(n):
Kmat[i,j]= K0(X[i,:],X[j,:])
pred = k0.dot(np.linalg.inv(Kmat+np.eye(n)*sigma2)).dot(Y)
return pred
# -
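The `naive_GP` function above computes the standard GP posterior mean, pred = k0ᵀ(K + σ²I)⁻¹Y. A self-contained sketch of the same formula, with an assumed squared-exponential kernel standing in for the notebook's `K0` (which is defined earlier and may differ):

```python
import numpy as np

def rbf(a, b, ell=1.0):
    # assumed squared-exponential kernel; the notebook's K0 may differ
    return np.exp(-0.5 * (a - b) ** 2 / ell ** 2)

def gp_pred(X, Y, sigma2, new):
    # posterior mean: k0^T (K + sigma2*I)^{-1} Y, as in naive_GP
    n = len(Y)
    K = np.array([[rbf(X[i], X[j]) for j in range(n)] for i in range(n)])
    k0 = np.array([rbf(new, X[i]) for i in range(n)])
    return k0 @ np.linalg.solve(K + sigma2 * np.eye(n), Y)
```

With negligible noise the posterior mean interpolates the training points, which makes a convenient sanity check.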
# ## 4. Define rectangle methods function
# +
gap = 180 ## 3 min, 180s
window = 2 ## smoothing window
def gps_smoothing(t,x,y,hour):
n = len(t)
new_x = np.zeros(n)
new_y = np.zeros(n)
start_time = 0
start = 0 ## index
for i in np.arange(1,n):
if i == n-1:
end = n-1 ## index
for j in np.arange(start, end+1):
if j < start+window:
new_x[j] = np.mean(x[start:(j+window+1)])
new_y[j] = np.mean(y[start:(j+window+1)])
elif j > end-window:
new_x[j] = np.mean(x[(j-window):(end+1)])
new_y[j] = np.mean(y[(j-window):(end+1)])
else:
new_x[j] = np.mean(x[(j-window):(j+window+1)])
new_y[j] = np.mean(y[(j-window):(j+window+1)])
elif t[i]- start_time >= gap:
end = i-1
for j in np.arange(start, end+1):
if j < start+window:
new_x[j] = np.mean(x[start:(j+window+1)])
new_y[j] = np.mean(y[start:(j+window+1)])
elif j > end-window:
new_x[j] = np.mean(x[(j-window):(end+1)])
new_y[j] = np.mean(y[(j-window):(end+1)])
else:
new_x[j] = np.mean(x[(j-window):(j+window+1)])
new_y[j] = np.mean(y[(j-window):(j+window+1)])
start = i
start_time = t[start]
return {'t':t,'x':new_x,'y':new_y,'hour':hour}
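Inside each segment, `gps_smoothing` applies a centered moving average whose window is clipped at the segment boundaries. The smoother in isolation (a hypothetical helper, not part of the notebook):

```python
import numpy as np

def centered_moving_average(v, window=2):
    # same edge handling as gps_smoothing: the averaging window is
    # clipped at the start and end of the segment
    n = len(v)
    out = np.empty(n)
    for j in range(n):
        lo = max(0, j - window)
        hi = min(n, j + window + 1)
        out[j] = np.mean(v[lo:hi])
    return out
```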
def combine_flight(t,x,y,w):
mov = [0,0,0,0,0,0,0]
h = len(t)
break_points = np.array([0,h-1],dtype=int)
coverage = 0
while coverage<h:
if len(break_points)==2:
## y = beta1*x + beta0
beta1 = (y[h-1]-y[0])/(x[h-1]-x[0]+0.0001)
beta0 = y[0]-beta1*x[0]
## line ax+by+c=0
a = beta1
b = -1
c = beta0
d = abs(a*x+b*y+np.ones(h)*c)/np.sqrt(a**2+b**2+0.0001)
if sum(d-w>0)==0:
coverage = h
mov = np.vstack((mov,np.array([t[0],t[h-1],x[0],x[h-1],y[0],y[h-1],1])))
else:
loc = np.where(d == np.max(d))[0][0]
break_points = np.append(break_points,loc)
if len(break_points)>2:
break_points = np.sort(break_points)
beta1 = [(y[break_points[i+1]]-y[break_points[i]])/(x[break_points[i+1]]-x[break_points[i]]+0.0001) for i in range(len(break_points)-1)]
beta0 = [y[break_points[i]]-beta1[i]*x[break_points[i]] for i in range(len(break_points)-1)]
a = beta1
b = -1
c = beta0
d = np.array([])
for i in range(len(break_points)-1):
for j in np.arange(break_points[i],break_points[i+1]):
d = np.append(d,abs(a[i]*x[j]+b*y[j]+c[i])/np.sqrt(a[i]**2+b**2+0.0001))
d = np.append(d,0)
if sum(d-w>0)==0:
coverage = h
for i in range(len(break_points)-1):
mov = np.vstack((mov,np.array([t[break_points[i]],t[break_points[i+1]],x[break_points[i]],x[break_points[i+1]],
y[break_points[i]],y[break_points[i+1]],1])))
else:
loc = np.where(d == np.max(d))[0][0]
break_points = np.append(break_points,loc)
mov = np.delete(mov,0,0)
return mov
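`combine_flight` is essentially the Ramer-Douglas-Peucker idea: it keeps splitting at the point farthest from the current chord until every point lies within `w` of some segment, using the point-to-line distance d = |ax + by + c| / sqrt(a² + b²). A sketch of that distance, with the same 0.0001 guard against near-vertical chords:

```python
import numpy as np

def point_line_distance(px, py, x0, y0, x1, y1):
    # line through (x0,y0)-(x1,y1) written as a*x + b*y + c = 0,
    # with a = slope, b = -1, c = intercept, as in combine_flight
    beta1 = (y1 - y0) / (x1 - x0 + 0.0001)
    beta0 = y0 - beta1 * x0
    a, b, c = beta1, -1.0, beta0
    return abs(a * px + b * py + c) / np.sqrt(a ** 2 + b ** 2)
```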
## r: pause radius w: width of rectangle min_pause: minimum time for a pause
def rectangle(t,x,y,r,w,min_pause):
n = len(t)
start_time = 0
start = np.array([0],dtype=int) ## index
end = np.array([],dtype=int) ## index
for i in np.arange(1,n):
if i == n-1:
end = np.append(end,n-1)
elif t[i]-start_time >= gap:
start_time = t[i]
end = np.append(end, i-1)
start = np.append(start, i)
m = len(start)
all_trace = np.array([0,0,0,0,0,0,0])
for i in range(m):
trace_t_start = np.array([])
trace_t_end = np.array([])
trace_x_start = np.array([])
trace_x_end = np.array([])
trace_y_start = np.array([])
trace_y_end = np.array([])
trace_status = np.array([],dtype=int)
start_t = t[start[i]]
start_x = x[start[i]]
start_y = y[start[i]]
pause_timer = 0
for j in np.arange(start[i]+1,end[i]+1):
current_t = t[j]
current_x = x[j]
current_y = y[j]
d = np.sqrt((current_x-start_x)**2+(current_y-start_y)**2)
if d <= r:
pause_timer = current_t-start_t
if current_t == t[end[i]]:
trace_t_start = np.append(trace_t_start,start_t)
trace_x_start = np.append(trace_x_start,start_x)
trace_y_start = np.append(trace_y_start,start_y)
trace_t_end = np.append(trace_t_end,current_t)
trace_x_end = np.append(trace_x_end,current_x)
trace_y_end = np.append(trace_y_end,current_y)
trace_status = np.append(trace_status,0)
if d > r:
if pause_timer >= min_pause:
trace_t_start = np.append(trace_t_start,start_t)
trace_x_start = np.append(trace_x_start,start_x)
trace_y_start = np.append(trace_y_start,start_y)
trace_t_end = np.append(trace_t_end,t[j-1])
trace_x_end = np.append(trace_x_end,x[j-1])
trace_y_end = np.append(trace_y_end,y[j-1])
trace_status = np.append(trace_status,0)
pause_timer = 0
start_t = t[j-1]
start_x = x[j-1]
start_y = y[j-1]
if pause_timer < min_pause:
trace_t_start = np.append(trace_t_start,start_t)
trace_x_start = np.append(trace_x_start,start_x)
trace_y_start = np.append(trace_y_start,start_y)
trace_t_end = np.append(trace_t_end,current_t)
trace_x_end = np.append(trace_x_end,current_x)
trace_y_end = np.append(trace_y_end,current_y)
trace_status = np.append(trace_status,1)
pause_timer = 0
start_t = current_t
start_x = current_x
start_y = current_y
k = len(trace_t_start)
if k>=1:
t0 = np.array([trace_t_start[0],trace_t_end[0]])
x0 = np.array([trace_x_start[0],trace_x_end[0]])
y0 = np.array([trace_y_start[0],trace_y_end[0]])
if k==1:
if trace_status[0]==0:
mov = np.array([trace_t_start[0],trace_t_end[0],(trace_x_start[0]+trace_x_end[0])/2,
(trace_x_start[0]+trace_x_end[0])/2,(trace_y_start[0]+trace_y_end[0])/2,
(trace_y_start[0]+trace_y_end[0])/2,0])
all_trace = np.vstack((all_trace,mov))
else:
mov = np.array([trace_t_start[0],trace_t_end[0],trace_x_start[0],trace_x_end[0],trace_y_start[0],trace_y_end[0],1])
all_trace = np.vstack((all_trace,mov))
else:
for j in range(k):
if j!=k-1:
if trace_status[j]==0:
mov = np.array([trace_t_start[j],trace_t_end[j],(trace_x_start[j]+trace_x_end[j])/2,
(trace_x_start[j]+trace_x_end[j])/2,(trace_y_start[j]+trace_y_end[j])/2,
(trace_y_start[j]+trace_y_end[j])/2,0])
all_trace = np.vstack((all_trace,mov))
t0 = np.array([trace_t_start[j],trace_t_end[j]])
x0 = np.array([(trace_x_start[j]+trace_x_end[j])/2,(trace_x_start[j]+trace_x_end[j])/2])
y0 = np.array([(trace_y_start[j]+trace_y_end[j])/2,(trace_y_start[j]+trace_y_end[j])/2])
elif trace_status[j]==1 and trace_status[j+1]==1:
t0 = np.append(t0, trace_t_end[j+1])
x0 = np.append(x0, trace_x_end[j+1])
y0 = np.append(y0, trace_y_end[j+1])
if j+1==k-1:
mov = combine_flight(t0,x0,y0,w)
all_trace = np.vstack((all_trace,mov))
elif trace_status[j]==1 and trace_status[j+1]==0:
if j+1==k-1:
if trace_t_end[j+1]-trace_t_start[j+1]<min_pause:
t0 = np.append(t0, trace_t_end[j+1])
x0 = np.append(x0, trace_x_end[j+1])
y0 = np.append(y0, trace_y_end[j+1])
mov = combine_flight(t0,x0,y0,w)
all_trace = np.vstack((all_trace,mov))
else:
mov = combine_flight(t0,x0,y0,w)
all_trace = np.vstack((all_trace,mov))
else:
mov = combine_flight(t0,x0,y0,w)
all_trace = np.vstack((all_trace,mov))
t0 = np.array([trace_t_start[j+1],trace_t_end[j+1]])
x0 = np.array([(trace_x_start[j+1]+trace_x_end[j+1])/2,(trace_x_start[j+1]+trace_x_end[j+1])/2])
y0 = np.array([(trace_y_start[j+1]+trace_y_end[j+1])/2,(trace_y_start[j+1]+trace_y_end[j+1])/2])
if j==k-1:
if trace_status[j]==1 and trace_status[j-1]==0:
mov = np.array([trace_t_start[j],trace_t_end[j],trace_x_start[j],trace_x_end[j],trace_y_start[j],trace_y_end[j],1])
all_trace = np.vstack((all_trace,mov))
elif trace_status[j]==0 and trace_t_end[j]-trace_t_start[j]>=min_pause:
mov = np.array([trace_t_start[j],trace_t_end[j],(trace_x_start[j]+trace_x_end[j])/2,
(trace_x_start[j]+trace_x_end[j])/2,(trace_y_start[j]+trace_y_end[j])/2,
(trace_y_start[j]+trace_y_end[j])/2,0])
all_trace = np.vstack((all_trace,mov))
return np.delete(all_trace,0,0)
# -
# ## 5. Pause/home detection
def plot_observed(data,r,w,min_pause):
txy = xytransform(data)
new_txy = gps_smoothing(txy['t'],txy['x'],txy['y'],txy['hour'])
traj = rectangle(new_txy['t'],new_txy['x'],new_txy['y'],r,w,min_pause)
grid_lv = 20
x_grid = int(np.ceil(max(new_txy['x'])/grid_lv))
y_grid = int(np.ceil(max(new_txy['y'])/grid_lv))
freq_mat = np.zeros((x_grid,y_grid))
hour_dist = {}
for i in range(x_grid):
for j in range(y_grid):
name = str(i)+','+str(j)
hour_dist[name] = []
for i in range(len(new_txy['t'])):
x_loc = int(np.floor(new_txy['x'][i]/grid_lv))
y_loc = int(np.floor(new_txy['y'][i]/grid_lv))
freq_mat[x_loc,y_loc] = freq_mat[x_loc,y_loc]+1
name = str(x_loc)+','+str(y_loc)
hour_dist[name].append(new_txy['hour'][i])
for i in range(np.shape(traj)[0]):
plt.plot([traj[i,2], traj[i,3]], [traj[i,4], traj[i,5]], 'k-', lw=1)
flat = freq_mat.flatten()
flat.sort()
pause_num = 10
home_percentage = np.empty(pause_num)
for i in range(pause_num):
val = flat[-(i+1)]
index_tuple = np.where(freq_mat == val)
index_array = np.array(index_tuple)
name = str(index_array[0][0])+','+str(index_array[1][0])
hour_array = np.array(hour_dist[name])
home_percentage[i]=sum((hour_array>19).astype(int)+(hour_array<9).astype(int))/len(hour_array)
marker = 15*np.log(val)/np.log(flat[-1])
plt.plot((index_array[0][0]+0.5)*grid_lv, (index_array[1][0]+0.5)*grid_lv, 'rp', markersize=marker)
loc = np.where(home_percentage == max(home_percentage))[0]
val = flat[-(loc[0]+1)]
index_tuple = np.where(freq_mat == val)
index_array = np.array(index_tuple)
return([(index_array[0][0]+0.5)*grid_lv, (index_array[1][0]+0.5)*grid_lv])
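`plot_observed` picks the home location as the frequently visited grid cell with the largest share of observations outside 09:00-19:00. The night-share criterion on its own (a hypothetical helper):

```python
import numpy as np

def night_share(hours):
    # fraction of observations before 9am or after 7pm,
    # the home-detection criterion used in plot_observed
    h = np.asarray(hours)
    return np.mean((h > 19) | (h < 9))
```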
home_loc = plot_observed(data,r=5,w=5,min_pause=30)
# ## 6. Code for imputation
def select_traj(data,r,w,min_pause,sigma2,tol,d):
txy = xytransform(data)
new_txy = gps_smoothing(txy['t'],txy['x'],txy['y'],txy['hour'])
traj = rectangle(new_txy['t'],new_txy['x'],new_txy['y'],r,w,min_pause)
X = np.transpose(np.vstack([new_txy['t'],new_txy['x']]))
Y = new_txy['y']
result1 = SOGP(X,Y,sigma2,tol,d)['bv']
X = np.transpose(np.vstack([new_txy['t'],new_txy['y']]))
Y = new_txy['x']
result2 = SOGP(X,Y,sigma2,tol,d)['bv']
index = np.unique(np.append(result1,result2))
selected_time = new_txy['t'][index]
m = np.shape(traj)[0]
selected_index = np.array([],dtype=int)
for i in range(m):
if any(np.array(selected_time - traj[i,0]>=0)*np.array(traj[i,1] - selected_time>=0)==True):
selected_index = np.append(selected_index,i)
selected_traj = traj[selected_index,:]
return {'original':traj, 'selected':selected_traj,'bv_index':index}
def create_tables(traj, selected_traj):
n = np.shape(traj)[0]
m = np.shape(selected_traj)[0]
index = [selected_traj[i,6]==1 for i in range(m)]
flight_table = selected_traj[index,:]
index = [selected_traj[i,6]==0 for i in range(m)]
pause_table = selected_traj[index,:]
mis_table = np.zeros(6)
for i in range(n-1):
if traj[i+1,0]!=traj[i,1]:
mov = np.array([traj[i,1],traj[i+1,0],traj[i,3],traj[i+1,2],traj[i,5],traj[i+1,4]])
mis_table = np.vstack((mis_table,mov))
mis_table = np.delete(mis_table,0,0)
return {'flight':flight_table,'pause':pause_table,'mis':mis_table}
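`create_tables` marks a missing interval wherever two consecutive trajectory rows do not touch in time. The same gap-finding logic on plain interval pairs (a hypothetical helper):

```python
def find_gaps(intervals):
    # (end_i, start_{i+1}) for every pair of consecutive intervals
    # that leave a hole, mirroring the mis_table construction
    gaps = []
    for i in range(len(intervals) - 1):
        if intervals[i + 1][0] != intervals[i][1]:
            gaps.append((intervals[i][1], intervals[i + 1][0]))
    return gaps
```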
def impute_trajectory(method,traj_dict):
radius=50
selected_traj = traj_dict['selected']
traj = traj_dict['original']
tables = create_tables(traj, selected_traj)
flight_table = tables['flight']
pause_table = tables['pause']
mis_table = tables['mis']
distortions = []
mat = pause_table[:,[2,4]]
for k in range(1,10):
kmeanModel = KMeans(n_clusters=k).fit(mat)
kmeanModel.fit(mat)
distortions.append(sum(np.min(cdist(mat, kmeanModel.cluster_centers_, 'euclidean'), axis=1)) / mat.shape[0])
vec = np.array(distortions[:-1])-np.array(distortions[1:])
num = [i for i, x in enumerate(vec) if x<30][0]+1 ## 30 here is very arbitrary
kmeans = KMeans(n_clusters=num)
kmeans = kmeans.fit(mat)
centroids = kmeans.cluster_centers_
mis_start = mis_table[:,0];mis_end = mis_table[:,1]
mis_x0 = mis_table[:,2];mis_x1 = mis_table[:,3]
mis_y0 = mis_table[:,4];mis_y1 = mis_table[:,5]
t_list_pause=(pause_table[:,0]+pause_table[:,1])/2
x_list_pause=(pause_table[:,2]+pause_table[:,3])/2
y_list_pause=(pause_table[:,4]+pause_table[:,5])/2
t_list_flight=(flight_table[:,0]+flight_table[:,1])/2
x_list_flight=(flight_table[:,2]+flight_table[:,3])/2
y_list_flight=(flight_table[:,4]+flight_table[:,5])/2
imp_start=np.array([]);imp_end=np.array([])
imp_t=np.array([]);imp_s=np.array([])
imp_x0=np.array([]);imp_x1=np.array([])
imp_y0=np.array([]);imp_y1=np.array([])
t_list_m=(selected_traj[:,0]+selected_traj[:,1])/2
x_list_m=(selected_traj[:,2]+selected_traj[:,3])/2
y_list_m=(selected_traj[:,4]+selected_traj[:,5])/2
obs_start = selected_traj[:,0]; obs_end = selected_traj[:,1]
obs_x0 = selected_traj[:,2]; obs_x1 = selected_traj[:,3]
obs_y0 = selected_traj[:,4]; obs_y1 = selected_traj[:,5]
a1 = 10; b1 = 10; a2 = 70; b2 = 70; g = 200
def K1(method_,current_t_,current_x_,current_y_,t_list=[],x_list=[],y_list=[]):
if method_=="TL":
return np.exp(-abs(t_list-current_t_)/86400/b1)
if method_=="GL":
d = np.sqrt((current_x_-x_list)**2+(current_y_-y_list)**2)
return np.exp(-d/g)
if method_=="GLC":
k1 = np.exp(-(abs(t_list-current_t_)%86400)/a1)*np.exp(-np.floor(abs(t_list-current_t_)/86400)/b1)
k2 = np.exp(-(abs(t_list-current_t_)%604800)/a2)*np.exp(-np.floor(abs(t_list-current_t_)/604800)/b2)
d = np.sqrt((current_x_-x_list)**2+(current_y_-y_list)**2)
k3 = np.exp(-d/g)
return 0.35*k1+0.15*k2+0.5*k3
def I_flight(method_,centroids_,current_t_,current_x_,current_y_,dest_t,dest_x,dest_y):
K_ = K1(method_,current_t_,current_x_,current_y_,t_list_m,x_list_m,y_list_m)
temp1 = K_[selected_traj[:,6]==1]
temp0 = K_[selected_traj[:,6]==0]
temp1[::-1].sort()
temp0[::-1].sort()
w1 = np.mean(temp1[:10])
w0 = np.mean(temp0[:10])
p = w1/(w1+w0)
d_dest = np.sqrt((dest_x-current_x_)**2+(dest_y-current_y_)**2)
v_dest = d_dest/(dest_t-current_t_+0.0001)
s1 = int(d_dest<3000)*int(v_dest>1.5)
s2 = int(d_dest>=3000)*int(v_dest>14)
s3 = np.sqrt((centroids_[:,0]-current_x_)**2+(centroids_[:,1]-current_y_)**2)<radius
s4 = np.sqrt((centroids_[:,0]-dest_x)**2+(centroids_[:,1]-dest_y)**2)<radius
if s1+s2==0 and sum(s3)==0 and sum(s4)==0:
out = stat.bernoulli.rvs(p,size=1)[0]
elif s1+s2==0 and sum(s3)==1 and sum(s4)==0:
out = stat.bernoulli.rvs(max(0,p-0.4),size=1)[0]
elif s1+s2==0 and sum(s3)==0 and sum(s4)==1:
out = stat.bernoulli.rvs(min(1,p+0.4),size=1)[0]
elif s1+s2==0 and sum(s3)==1 and sum(s4)==1 and sum(s3==s4)==len(s3):
out = stat.bernoulli.rvs(max(0,p-0.6),size=1)[0]
elif s1+s2==0 and sum(s3)==1 and sum(s4)==1 and sum(s3==s4)!=len(s3):
out = stat.bernoulli.rvs(max(0,p-0.2),size=1)[0]
elif d_dest<radius:
out = stat.bernoulli.rvs(max(0,p-0.6),size=1)[0]
elif s1+s2>0:
out = 1
else:
out = stat.bernoulli.rvs(p,size=1)[0]
return out
for i in range(mis_table.shape[0]):
current_t=mis_start[i]
current_x=mis_x0[i]
current_y=mis_y0[i]
vec_x=[];vec_y=[] ## record delta x,y
while current_t < mis_end[i]:
I = I_flight(method,centroids,current_t,current_x,current_y,mis_end[i],mis_x1[i],mis_y1[i])
if I==1:
## change this function
d2all = np.sqrt((current_x-x_list_m)**2+(current_y-y_list_m)**2)>200
abnormal = int(sum(d2all)==len(x_list_m))
weight= K1("GLC",current_t,current_x,current_y,t_list_flight,x_list_flight,y_list_flight)
normalize_w=weight/sum(weight)
flight_index=np.random.choice(flight_table.shape[0], p=normalize_w)
delta_t=(flight_table[flight_index,1]-flight_table[flight_index,0])
delta_x=(flight_table[flight_index,3]-flight_table[flight_index,2])
delta_y=(flight_table[flight_index,5]-flight_table[flight_index,4])
d_imp = np.sqrt(delta_x**2+delta_y**2)
d_act = np.sqrt((mis_x0[i]-mis_x1[i])**2+(mis_y0[i]-mis_y1[i])**2)
ratio = d_act/(d_imp+0.0001)
mov1 = np.array([mis_x1[i]-current_x,mis_y1[i]-current_y])
mov2 = np.array([delta_x,delta_y])
inner_prod = mov1.dot(mov2)
u = stat.bernoulli.rvs(normalize_w[flight_index],size=1)[0]
if (inner_prod<0 and u==0) or abnormal==1:
delta_x = 0.5*(mis_x1[i]-current_x)/(np.linalg.norm(mis_x1[i]-current_x)+0.0001)*np.linalg.norm(delta_x)-0.5*delta_x
delta_y = 0.5*(mis_y1[i]-current_y)/(np.linalg.norm(mis_y1[i]-current_y)+0.0001)*np.linalg.norm(delta_y)-0.5*delta_y
if ratio<=1:
delta_t = 0.8*delta_t*ratio
delta_x = 0.8*(delta_x*ratio+mis_x1[i]-current_x)/2
delta_y = 0.8*(delta_y*ratio+mis_y1[i]-current_y)/2
if(current_t+delta_t>=mis_end[i]):
temp=delta_t
delta_t=mis_end[i]-current_t
delta_x=delta_x*delta_t/temp
delta_y=delta_y*delta_t/temp
vec_x.append(delta_x)
vec_y.append(delta_y)
imp_start = np.append(imp_start,current_t)
current_t=current_t+delta_t
imp_end = np.append(imp_end,current_t)
imp_t = np.append(imp_t,delta_t)
imp_s = np.append(imp_s,1)
imp_x0 = np.append(imp_x0,current_x)
current_x=(mis_end[i]-current_t)/(mis_end[i]-mis_start[i])*(mis_x0[i]+sum(vec_x))+(current_t-mis_start[i])/(mis_end[i]-mis_start[i])*mis_x1[i]
imp_x1 = np.append(imp_x1,current_x)
imp_y0 = np.append(imp_y0,current_y)
current_y=(mis_end[i]-current_t)/(mis_end[i]-mis_start[i])*(mis_y0[i]+sum(vec_y))+(current_t-mis_start[i])/(mis_end[i]-mis_start[i])*mis_y1[i]
imp_y1 = np.append(imp_y1,current_y)
if I==0:
weight= K1("GLC",current_t,current_x,current_y,t_list_pause,x_list_pause,y_list_pause)
normalize_w=weight/sum(weight)
pause_index=np.random.choice(pause_table.shape[0], p=normalize_w)
R = np.random.uniform(1,5,size=1)[0]
delta_x=0
delta_y=0
s = 0
delta_t=(pause_table[pause_index,1]-pause_table[pause_index,0])*R
if(current_t+delta_t>=mis_end[i]):
delta_t=mis_end[i]-current_t
delta_x=mis_x1[i]-current_x
delta_y=mis_y1[i]-current_y
if delta_x**2+delta_y**2>100:
s = 1
vec_x.append(delta_x)
vec_y.append(delta_y)
imp_start = np.append(imp_start,current_t)
current_t=current_t+delta_t
imp_end = np.append(imp_end,current_t)
imp_t = np.append(imp_t,delta_t)
imp_s = np.append(imp_s,s)
imp_x0 = np.append(imp_x0,current_x)
current_x = current_x + delta_x
imp_x1 = np.append(imp_x1,current_x)
imp_y0 = np.append(imp_y0,current_y)
current_y = current_y + delta_y
imp_y1 = np.append(imp_y1,current_y)
imp_table=np.stack([imp_start,imp_end,imp_x0,imp_x1,imp_y0,imp_y1,imp_s], axis=1)
return imp_table
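The number of pause clusters in `impute_trajectory` is chosen by an elbow heuristic: fit KMeans for k = 1..9 and take the first k at which the drop in distortion falls below a threshold (30 in the notebook, admittedly arbitrary). The selection rule in isolation, given a distortion curve:

```python
import numpy as np

def elbow_k(distortions, threshold=30.0):
    # first k (1-based) whose distortion drop is below the threshold,
    # as in impute_trajectory's cluster-count selection
    drops = np.array(distortions[:-1]) - np.array(distortions[1:])
    return [i for i, x in enumerate(drops) if x < threshold][0] + 1
```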
# ## 7. Apply the method to the data
r=5;w=5;min_pause=30;sigma2=2;tol=0.05;d=200
traj_dict = select_traj(data,r,w,min_pause,sigma2,tol,d)
imp_table = impute_trajectory('GLC',traj_dict)
np.shape(imp_table)[0]
np.savetxt('imp_table_compare.txt', imp_table)
np.savetxt('orig_table_compare.txt', traj_dict['original'])
imp_table = np.loadtxt('imp_table_compare.txt')
orig_table = np.loadtxt('orig_table_compare.txt')
imp_table
def xy_w_newt(data):
time_temp = [datetime.strptime(i, '%m/%d/%Y %H:%M') for i in data['time']]
timestamp = [time.mktime(i.timetuple()) for i in time_temp]
start = 0
count = 0
for i in range(len(timestamp)-1):
if timestamp[i+1]-timestamp[start]==0:
count = count + 1
if timestamp[i+1]-timestamp[start]>0 and count>0:
for j in range(count):
timestamp[i-j] = timestamp[start]+60/(count+1)*(count-j)
count = 0
start = i+1
if timestamp[i+1]-timestamp[start]>0 and count==0:
start = i+1
latitude = np.array(data["latitude"])/180*math.pi
longitude = np.array(data["longitude"])/180*math.pi
lam_min=min(latitude)
lam_max=max(latitude)
phi_min=min(longitude)
phi_max=max(longitude)
R=6.371*10**6
d1=(lam_max-lam_min)*R
d2=(phi_max-phi_min)*R*math.sin(math.pi/2-lam_max)
d3=(phi_max-phi_min)*R*math.sin(math.pi/2-lam_min)
w1=(latitude-lam_min)/(lam_max-lam_min)
w2=(longitude-phi_min)/(phi_max-phi_min)
x=np.array(w1*(d3-d2)/2+w2*(d3*(1-w1)+d2*w1))
y=np.array(w1*d1*math.sin(math.acos((d3-d2)/(2*d1))))
return {'t':timestamp,'x':x,'y':y}
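The planar projection in `xy_w_newt` can be sanity-checked against great-circle distances. A standard haversine helper (an assumption, not used by the notebook) with the same Earth radius:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # great-circle distance in meters between two (lat, lon) points
    R = 6.371e6  # same Earth radius as in xy_w_newt
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))
```

One degree of latitude is about 111.2 km, which the planar x/y distances should roughly reproduce for nearby points.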
## plot 04-09/04-10
breakpoint = (1554782400000.0 + 24*60*60*1000-min(data["timestamp"]))/1000
imp_09 = imp_table[imp_table[:,1]<=breakpoint,:]
imp_10 = imp_table[imp_table[:,1]>breakpoint,:]
true09 = xy_w_newt(pd.read_csv('C:/Users/glius/Google Drive/Gaussian Process/gps-primetracker/2019-04-09.csv'))
true10 = xy_w_newt(pd.read_csv('C:/Users/glius/Google Drive/Gaussian Process/gps-primetracker/2019-04-10.csv'))
plt.plot(txy['x'][txy['t']<=breakpoint],txy['y'][txy['t']<=breakpoint],'r.')
plt.plot(true09['x'],true09['y'],'b.')
for i in range(np.shape(imp_09)[0]):
plt.plot([imp_09[i,2], imp_09[i,3]], [imp_09[i,4], imp_09[i,5]], 'k-', lw=1)
plt.show()
plt.plot(txy['x'][txy['t']>breakpoint],txy['y'][txy['t']>breakpoint],'r.')
plt.plot(true10['x'],true10['y'],'b.')
for i in range(np.shape(imp_10)[0]):
plt.plot([imp_10[i,2], imp_10[i,3]], [imp_10[i,4], imp_10[i,5]], 'k-', lw=1)
plt.show()
# ## 8. Create daily summary statistics
a1 = datetime.strptime(np.array(data['UTC time'])[0], '%Y-%m-%dT%H:%M:%S.%f')
b1 = a1 - timedelta(hours=4) ### +timedelta(days=1) for general use, but not needed in this case
c1 = datetime(b1.year,b1.month,b1.day,0,0)
g1 = time.mktime(c1.timetuple())
f1 = datetime.utcfromtimestamp(g1) - timedelta(hours=4)
a2 = datetime.strptime(np.array(data['UTC time'])[-1], '%Y-%m-%dT%H:%M:%S.%f')
b2 = a2 + timedelta(hours=4) ### -timedelta(days=1)
c2 = datetime(b2.year,b2.month,b2.day,0,0)
g2 = time.mktime(c2.timetuple())
f2 = datetime.utcfromtimestamp(g2) - timedelta(hours=4)
f1,f1.strftime('%Y-%m-%d'),f2,f2.strftime('%Y-%m-%d')
num_days = int((g2-g1)/24/60/60)  ## g1, g2 come from time.mktime and are in seconds
summary_stats = pd.DataFrame(columns=['date', 'duration_home', 'pause_time','missing_time','dist_traveled','mean_f_len','sd_f_len',
'mean_f_time','sd_f_time','radius','max_dist_from_home','max_diameter',
'num_sig_loc','loc_entropy'])
for j in range(num_days):
T = g1*1000 + j*24*60*60*1000  ## g1 is in seconds; the timestamps compared below are in milliseconds
sub_index = (imp_table[:,0]*1000+min(data["timestamp"])>=T)*(imp_table[:,1]*1000+min(data["timestamp"])<T+24*60*60*1000)
sub_index_orig = (orig_table[:,0]*1000+min(data["timestamp"])>=T)*(orig_table[:,1]*1000+min(data["timestamp"])<T+24*60*60*1000)
# add a row at the top and one at the bottom (fill the small gaps between hour 0 and the
# true start point, and between the true end point and hour 24)
sub_data = imp_table[sub_index,:]
sub_data_orig = orig_table[sub_index_orig,:]
missing_t = 24-sum(sub_data_orig[:,1]-sub_data_orig[:,0])/60/60
bottom = [sub_data[-1,1],(T+24*60*60*1000-min(data["timestamp"]))/1000,sub_data[-1,3],sub_data[-1,3],
sub_data[-1,5],sub_data[-1,5],0]
top = [(T-min(data["timestamp"]))/1000,sub_data[0,0],sub_data[0,2],sub_data[0,2],
sub_data[0,4],sub_data[0,4],0]
sub_data = np.vstack((top,sub_data,bottom))
pause_set = sub_data[sub_data[:,6]==0,:]
flight_set = sub_data[sub_data[:,6]==1,:]
duration = 0
for i in range(sub_data.shape[0]):
d1 = np.sqrt((home_loc[0]-sub_data[i,2])**2+(home_loc[1]-sub_data[i,4])**2)
d2 = np.sqrt((home_loc[0]-sub_data[i,3])**2+(home_loc[1]-sub_data[i,5])**2)
if d1<100 and d2<100:
duration = duration + sub_data[i,1] - sub_data[i,0]
pause_time = sum(pause_set[:,1]-pause_set[:,0])
dist = 0
d_vec = []
t_vec = []
for i in range(flight_set.shape[0]):
d = np.sqrt((flight_set[i,2]-flight_set[i,3])**2+(flight_set[i,4]-flight_set[i,5])**2)
d_vec.append(d)
t_vec.append(flight_set[i,1]-flight_set[i,0])
dist = dist + d
centroid_x = 0
centroid_y = 0
radius = 0
for i in range(sub_data.shape[0]):
centroid_x = centroid_x + (sub_data[i,1]-sub_data[i,0])/(24*60*60)*sub_data[i,2]
centroid_y = centroid_y + (sub_data[i,1]-sub_data[i,0])/(24*60*60)*sub_data[i,4]
for i in range(sub_data.shape[0]):
d = np.sqrt((sub_data[i,2]-centroid_x)**2+(sub_data[i,4]-centroid_y)**2)
radius = radius + (sub_data[i,1]-sub_data[i,0])/(24*60*60)*d
dist_from_home = []
for i in range(sub_data.shape[0]):
d = np.sqrt((sub_data[i,2]-home_loc[0])**2+(sub_data[i,4]-home_loc[1])**2)
dist_from_home.append(d)
D = pdist(sub_data[:,[2,4]])
D = squareform(D)
N, [I_row, I_col] = np.nanmax(D), np.unravel_index(np.argmax(D), D.shape)
distortions = []
mat = pause_set[:,[2,4]]
for k in range(1,10):
kmeanModel = KMeans(n_clusters=k).fit(mat)
kmeanModel.fit(mat)
distortions.append(sum(np.min(cdist(mat, kmeanModel.cluster_centers_, 'euclidean'), axis=1)) / mat.shape[0])
vec = np.array(distortions[:-1])-np.array(distortions[1:])
num = [i for i, x in enumerate(vec) if x<30][0]+1 ## 30 here is very arbitrary
kmeans = KMeans(n_clusters=num)
kmeans = kmeans.fit(mat)
centroids = kmeans.cluster_centers_
n_centroid = centroids.shape[0]
t_at_centroid = np.zeros(n_centroid)
for i in range(pause_set.shape[0]):
for j in range(n_centroid):
d = np.sqrt((pause_set[i,2]-centroids[j,0])**2+(pause_set[i,4]-centroids[j,1])**2)
if d < 100:
t_at_centroid[j] = t_at_centroid[j] + pause_set[i,1] - pause_set[i,0]
p = t_at_centroid/sum(t_at_centroid)
entropy = -sum(p*np.log(p+0.00001))
f = datetime.utcfromtimestamp(T/1000) - timedelta(hours=4)
new_line = [f.strftime('%Y-%m-%d'),duration/60/60,pause_time/60/60,missing_t,dist,np.mean(d_vec),
np.std(d_vec),np.mean(t_vec),np.std(t_vec),radius,max(dist_from_home), N,n_centroid,entropy]
summary_stats.loc[-1] = new_line
summary_stats.index = summary_stats.index + 1 # shifting index
summary_stats = summary_stats.sort_index()
summary_stats
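The `radius` statistic in the loop above is a time-weighted radius of gyration: the mean distance from the time-weighted centroid, with each row weighted by its duration. In isolation (a hypothetical helper; the loop weights by duration/86400, which is equivalent when the day is fully covered):

```python
import numpy as np

def radius_of_gyration(durations, xs, ys):
    # duration-weighted mean distance from the duration-weighted centroid
    w = np.asarray(durations, dtype=float)
    w = w / w.sum()
    xs, ys = np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)
    cx, cy = np.sum(w * xs), np.sum(w * ys)
    return np.sum(w * np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2))
```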
## compare with ground truth
r=20;w=20;min_pause=30
orig09=rectangle(true09['t'],true09['x'],true09['y'],r,w,min_pause)
orig10=rectangle(true10['t'],true10['x'],true10['y'],r,w,min_pause)
# +
## check if orig09 is complete
count = 0
comp = []
n = orig09.shape[0]
for i in range(n-1):
if orig09[i+1,0]!=orig09[i,1]:
count = count + 1
d = np.sqrt((orig09[i,3]-orig09[i+1,2])**2+(orig09[i,5]-orig09[i+1,4])**2)
if d > 20:
comp.append([orig09[i,1],orig09[i+1,0],orig09[i,3],orig09[i+1,2],orig09[i,5],orig09[i+1,4],1])
if d <=20:
comp.append([orig09[i,1],orig09[i+1,0],orig09[i,3],orig09[i+1,2],orig09[i,5],orig09[i+1,4],0])
t1 = time.mktime(datetime(2019, 4, 9, 0, 0).timetuple())
t2 = time.mktime(datetime(2019, 4, 10, 0, 0).timetuple())
comp.append([t1,orig09[0,0],orig09[0,3],orig09[0,2],orig09[0,5],orig09[0,4],0])
comp.append([orig09[n-1,1],t2,orig09[n-1,3],orig09[n-1,2],orig09[n-1,5],orig09[n-1,4],0])
n = orig10.shape[0]
for i in range(n-1):
if orig10[i+1,0]!=orig10[i,1]:
count = count + 1
d = np.sqrt((orig10[i,3]-orig10[i+1,2])**2+(orig10[i,5]-orig10[i+1,4])**2)
if d > 20:
comp.append([orig10[i,1],orig10[i+1,0],orig10[i,3],orig10[i+1,2],orig10[i,5],orig10[i+1,4],1])
if d <=20:
comp.append([orig10[i,1],orig10[i+1,0],orig10[i,3],orig10[i+1,2],orig10[i,5],orig10[i+1,4],0])
t1 = time.mktime(datetime(2019, 4, 10, 0, 0).timetuple())
t2 = time.mktime(datetime(2019, 4, 11, 0, 0).timetuple())
comp.append([t1,orig10[0,0],orig10[0,3],orig10[0,2],orig10[0,5],orig10[0,4],0])
comp.append([orig10[n-1,1],t2,orig10[n-1,3],orig10[n-1,2],orig10[n-1,5],orig10[n-1,4],0])
comp = np.array(comp)
# -
complete = np.vstack((orig09,orig10,comp))
complete.shape
num_days = 2
summary_stats = pd.DataFrame(columns=['date', 'duration_home', 'pause_time','missing_time','dist_traveled','mean_f_len','sd_f_len',
'mean_f_time','sd_f_time','radius','max_dist_from_home','max_diameter',
'num_sig_loc','loc_entropy'])
for j in range(num_days):
if j==0:
home_loc = [orig09[0,2],orig09[0,4]]
if j==1:
home_loc = [orig10[0,2],orig10[0,4]]
T = g1 + j*24*60*60
sub_index = (complete[:,0]>=T)*(complete[:,1]<T+24*60*60)
sub_data = complete[sub_index,:]
pause_set = sub_data[sub_data[:,6]==0,:]
flight_set = sub_data[sub_data[:,6]==1,:]
duration = 0
for i in range(sub_data.shape[0]):
d1 = np.sqrt((home_loc[0]-sub_data[i,2])**2+(home_loc[1]-sub_data[i,4])**2)
d2 = np.sqrt((home_loc[0]-sub_data[i,3])**2+(home_loc[1]-sub_data[i,5])**2)
if d1<100 and d2<100:
duration = duration + sub_data[i,1] - sub_data[i,0]
pause_time = sum(pause_set[:,1]-pause_set[:,0])
dist = 0
d_vec = []
t_vec = []
for i in range(flight_set.shape[0]):
d = np.sqrt((flight_set[i,2]-flight_set[i,3])**2+(flight_set[i,4]-flight_set[i,5])**2)
d_vec.append(d)
t_vec.append(flight_set[i,1]-flight_set[i,0])
dist = dist + d
centroid_x = 0
centroid_y = 0
radius = 0
for i in range(sub_data.shape[0]):
centroid_x = centroid_x + (sub_data[i,1]-sub_data[i,0])/(24*60*60)*sub_data[i,2]
centroid_y = centroid_y + (sub_data[i,1]-sub_data[i,0])/(24*60*60)*sub_data[i,4]
for i in range(sub_data.shape[0]):
d = np.sqrt((sub_data[i,2]-centroid_x)**2+(sub_data[i,4]-centroid_y)**2)
radius = radius + (sub_data[i,1]-sub_data[i,0])/(24*60*60)*d
dist_from_home = []
for i in range(sub_data.shape[0]):
d = np.sqrt((sub_data[i,2]-home_loc[0])**2+(sub_data[i,4]-home_loc[1])**2)
dist_from_home.append(d)
D = pdist(sub_data[:,[2,4]])
D = squareform(D)
N, [I_row, I_col] = np.nanmax(D), np.unravel_index(np.argmax(D), D.shape)
distortions = []
mat = pause_set[:,[2,4]]
for k in range(1,10):
kmeanModel = KMeans(n_clusters=k).fit(mat)
kmeanModel.fit(mat)
distortions.append(sum(np.min(cdist(mat, kmeanModel.cluster_centers_, 'euclidean'), axis=1)) / mat.shape[0])
vec = np.array(distortions[:-1])-np.array(distortions[1:])
num = [i for i, x in enumerate(vec) if x<30][0]+1 ## 30 here is very arbitrary
kmeans = KMeans(n_clusters=num)
kmeans = kmeans.fit(mat)
centroids = kmeans.cluster_centers_
n_centroid = centroids.shape[0]
t_at_centroid = np.zeros(n_centroid)
for i in range(pause_set.shape[0]):
for j in range(n_centroid):
d = np.sqrt((pause_set[i,2]-centroids[j,0])**2+(pause_set[i,4]-centroids[j,1])**2)
if d < 100:
t_at_centroid[j] = t_at_centroid[j] + pause_set[i,1] - pause_set[i,0]
p = t_at_centroid/sum(t_at_centroid)
entropy = -sum(p*np.log(p+0.00001))
f = datetime.utcfromtimestamp(T) - timedelta(hours=4)
new_line = [f.strftime('%Y-%m-%d'),duration/60/60,pause_time/60/60,0,dist,np.mean(d_vec),
np.std(d_vec),np.mean(t_vec),np.std(t_vec),radius,max(dist_from_home), N,n_centroid,entropy]
summary_stats.loc[-1] = new_line
summary_stats.index = summary_stats.index + 1 # shifting index
summary_stats = summary_stats.sort_index()
summary_stats
# gps_imputation_4.15 (beiwe vs primetracker).ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="OUDB6ugZjeTM" colab_type="code" outputId="578d2dc5-6336-4cdd-fbad-25a65bf48070" executionInfo={"status": "ok", "timestamp": 1576337988538, "user_tz": -120, "elapsed": 75490, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "16586829010922884016"}} colab={"base_uri": "https://localhost:8080/", "height": 143}
# Install and import TensorFlow
# !pip install -q tensorflow==2.0.0
import tensorflow as tf
print(tf.__version__)
# + id="5Yro8yTMj-Z2" colab_type="code" colab={}
# other imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# + id="S2D0fbfykdqB" colab_type="code" outputId="084382c5-4a04-4b8c-f7a2-5fa4e94a93e4" executionInfo={"status": "ok", "timestamp": 1576337996762, "user_tz": -120, "elapsed": 3850, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "16586829010922884016"}} colab={"base_uri": "https://localhost:8080/", "height": 215}
# get the data in colab
# !wget https://raw.githubusercontent.com/lazyprogrammer/machine_learning_examples/master/tf2.0/moore.csv
# + id="Van5soEIkuoj" colab_type="code" colab={}
# load in the data
data = pd.read_csv('moore.csv',header=None).values
data[0]
X = data[:,0].reshape(-1,1) # make it a 2D array of size NxD where D=1
Y = data[:,1]
# + id="_rkDxf8CwtWr" colab_type="code" outputId="78acc590-f87f-42cf-da64-b15a549018d7" executionInfo={"status": "ok", "timestamp": 1576338002027, "user_tz": -120, "elapsed": 1015, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "16586829010922884016"}} colab={"base_uri": "https://localhost:8080/", "height": 294}
# Plot the data (it looks exponential)
plt.scatter(X,Y)
# + id="Dlo8IDTOxgi2" colab_type="code" outputId="f793aff4-f211-4971-ca91-ddca88be8736" executionInfo={"status": "ok", "timestamp": 1576338004602, "user_tz": -120, "elapsed": 1225, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "16586829010922884016"}} colab={"base_uri": "https://localhost:8080/", "height": 283}
# Since we want a linear model, we take the log of Y
Y = np.log(Y)
plt.scatter(X,Y)
# + id="qVJEEU6nyCve" colab_type="code" colab={}
# We also center the X data so the values are not too large
# we could scale it too, but then we'd have to reverse the transformation later
X = X - X.mean()
# + id="hH_vHDiIyjun" colab_type="code" colab={}
# We create the Tensorflow model
model = tf.keras.models.Sequential([
tf.keras.layers.Input(shape=(1,)),
tf.keras.layers.Dense(1)
])
model.compile(optimizer=tf.keras.optimizers.SGD(0.001,0.9),loss='mse')
# model.compile(optimizer='adam',loss='mse')
# + id="ARHYz3_RzYt-" colab_type="code" colab={}
# Learning Rate Scheduler
def schedule(epoch, lr):
if(epoch >=50):
return 0.0001
return 0.001
scheduler = tf.keras.callbacks.LearningRateScheduler(schedule)
# + id="o0M82mvsz-NM" colab_type="code" outputId="58b088e7-3d48-4316-c1b2-72decab54ba0" executionInfo={"status": "ok", "timestamp": 1576338014307, "user_tz": -120, "elapsed": 4880, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "16586829010922884016"}} colab={"base_uri": "https://localhost:8080/", "height": 1000}
# Train the model
r = model.fit(X,Y,epochs=200,callbacks=[scheduler])
# + id="-kGtiyQG0Uej" colab_type="code" outputId="56005d90-74dc-42e5-d661-3e6bbe7bbc8d" executionInfo={"status": "ok", "timestamp": 1576338036609, "user_tz": -120, "elapsed": 1099, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "16586829010922884016"}} colab={"base_uri": "https://localhost:8080/", "height": 283}
# Plot the Loss
plt.plot(r.history['loss'],label='loss')
# + id="ZNoTF6nI0r2_" colab_type="code" outputId="df3bd241-dbf1-458c-cb24-93eb44f0d019" executionInfo={"status": "ok", "timestamp": 1576338037926, "user_tz": -120, "elapsed": 455, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "16586829010922884016"}} colab={"base_uri": "https://localhost:8080/", "height": 53}
# Get the Slope of the line
# The slope of the line is related to the doubling rate of the transistor count
print(model.layers) # note: there is only 1 layer here; the Input layer does not count
print(model.layers[0].get_weights())
# + id="c2oidfHs2NRb" colab_type="code" colab={}
# The weight (W) is a 2D array and the bias (b) is a 1D array
# the slope of the line is
a = model.layers[0].get_weights()[0][0,0]
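# For a Dense(1) layer on 1-D input, the kernel has shape (1, 1) and the bias
# has shape (1,), so the slope is weights[0][0, 0]. A NumPy sketch of the same
# computation, with hypothetical weight values:

```python
import numpy as np

W = np.array([[0.35]])  # hypothetical (1 x 1) kernel
b = np.array([2.0])     # hypothetical (1,) bias
x = np.array([[3.0]])   # one sample, one feature
yhat = x.dot(W) + b     # (1 x 1) x (1 x 1) + (1,) -> (1, 1)
print(yhat.shape)         # (1, 1)
print(float(yhat[0, 0]))  # 0.35 * 3.0 + 2.0 = 3.05
```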
# + id="d9ZHFUtl2uPU" colab_type="code" outputId="3d8c0345-df6b-4311-8cb8-eddd0ab27334" executionInfo={"status": "ok", "timestamp": 1576338042423, "user_tz": -120, "elapsed": 756, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "16586829010922884016"}} colab={"base_uri": "https://localhost:8080/", "height": 35}
print("Time to double: ", np.log(2)/a)
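# Why log(2)/a is the doubling time: if log(Y) = a*t + b, then Y doubles
# whenever t increases by log(2)/a. A quick check with a hypothetical slope:

```python
import numpy as np

a_demo = 0.35                      # hypothetical fitted slope
d = np.log(2) / a_demo             # implied doubling time
y = lambda t: np.exp(a_demo * t)   # the exponential the fitted line corresponds to
print(np.isclose(y(10.0 + d), 2 * y(10.0)))  # True: y doubles over an interval d
```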
# + [markdown] id="jkVfX-cxEbtK" colab_type="text"
# # Part 2: Make Predictions
# + id="N6mapa1MEgmV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 283} outputId="c2f24f3b-23ce-4486-d964-9d491c560d13" executionInfo={"status": "ok", "timestamp": 1576338142819, "user_tz": -120, "elapsed": 1047, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "16586829010922884016"}}
# Make sure the line fits our data
Yhat = model.predict(X).flatten()
plt.scatter(X,Y)
plt.plot(X, Yhat)
# + id="g-DNEKcuEktp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="75ef3f5e-f6c0-483f-bf20-65835be8bb11" executionInfo={"status": "ok", "timestamp": 1576338317995, "user_tz": -120, "elapsed": 1001, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "16586829010922884016"}}
# Manual Calculation
# get the weights
w,b = model.layers[0].get_weights()
# Ensure X is a 2D column vector (N x 1) for the matrix product below
X = X.reshape(-1,1)
# (N x 1) x (1 x 1) + (1) -> (N x 1)
Yhat2 = (X.dot(w)+b).flatten()
# don't use == for floating-point comparisons; np.allclose checks equality within a tolerance
np.allclose(Yhat,Yhat2)
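# Why == is avoided for floats: binary floating point cannot represent most
# decimal fractions exactly, so exact equality can fail even when two results
# agree to ~1e-16. A classic example:

```python
import numpy as np

x = 0.1 + 0.2
print(x == 0.3)            # False: 0.1 + 0.2 is actually 0.30000000000000004
print(np.isclose(x, 0.3))  # True: equal within a small relative/absolute tolerance
```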
# + id="IZccCIgcFd0U" colab_type="code" colab={}
|
TF2.0_Linear_Regression.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7.9 (ds-venv)
# language: python
# name: python3-dsvenv
# ---
# + pycharm={"name": "#%%\n"}
# # %cd ..\
# + pycharm={"name": "#%%\n"}
import pandas as pd
import numpy as np
from base_trading.backtest import BaseTrader
# + pycharm={"name": "#%%\n"}
base_trader = BaseTrader()
data = pd.read_csv("data/BTC-USD.csv", index_col=0)
data.Date = pd.to_datetime(data.Date)
data = base_trader.execute(data)
# + pycharm={"name": "#%%\n"}
data.iloc[-1]
# + pycharm={"name": "#%%\n"}
init_cash = 10000
# + pycharm={"name": "#%%\n"}
stats = pd.DataFrame(index=["Buy & Hold", "Base Trading"])
n = len(data) - 1
stats["Start"] = data.loc[0, "Date"]
stats["End"] = data.loc[n, "Date"]
stats["Duration"] = stats["End"] - stats["Start"]
stats["Initial Cash"] = init_cash
for strategy in ["Buy & Hold", "Base Trading"]:
stats.loc[strategy, "Ending Cash"] = round(data.loc[n, strategy])
stats["Total Profit"] = stats["Ending Cash"] - stats["Initial Cash"]
stats["Total Return (%)"] = stats["Total Profit"] / stats["Initial Cash"] * 100
stats["Benchmark Return (%)"] = stats.loc["Buy & Hold", "Total Return (%)"]
stats.loc["Buy & Hold", "Daily Return (%)"] = np.average(data["Market Return"][2:])
stats.loc["Buy & Hold", "Daily Volatility (%)"] = np.std(data["Market Return"][2:])
stats.loc["Base Trading", "Daily Return (%)"] = np.average(data["Strategy Return"][2:])
stats.loc["Base Trading", "Daily Volatility (%)"] = np.std(data["Strategy Return"][2:])
stats["Sharpe Ratio"] = (stats["Daily Return (%)"] - 0.001) / stats["Daily Volatility (%)"]
stats = stats.T.reset_index()
stats.rename(columns={"index": "Metrics"}, inplace=True)
stats
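# The simplified Sharpe ratio above is (mean daily return - risk-free rate)
# divided by daily volatility. A self-contained version of that calculation,
# with hypothetical daily returns and the same 0.001 risk-free rate:

```python
import numpy as np

returns = np.array([0.5, -0.2, 0.3, 0.1, -0.1])  # hypothetical daily returns (%)
sharpe = (returns.mean() - 0.001) / returns.std()
print(round(sharpe, 3))  # mean 0.12, population std sqrt(0.0656)
```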
# + pycharm={"name": "#%%\n"}
np.std(data["Market Return"][2:])
# + pycharm={"name": "#%%\n"}
stats["End"] - stats["Start"]
# + pycharm={"name": "#%%\n"}
stats
# + pycharm={"name": "#%%\n"}
data["Buy and Hold"]
# + pycharm={"name": "#%%\n"}
data.columns
|
notebooks/simulate_port.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#Importing the required libraries
from bs4 import BeautifulSoup
import requests
# # 1. Write a python program to display all the header tags from ‘en.wikipedia.org/wiki/Main_Page’.
# +
#Sending get request to webpage server to get the source code of it
page=requests.get("https://en.wikipedia.org/wiki/Main_Page")
page
#Webpage source code
page.content
#Parsing the source code by using BeautifulSoup and by html.parser
soup=BeautifulSoup(page.content,'html.parser')
soup
#Extracting all header tags from the page
header_tags=soup.find_all('span',class_="mw-headline")
header_tags
#Extracting the text from these tags one by one by looping over them with a for loop
Header=[] #Empty list
for i in header_tags:
Header.append(i.text.replace('\xa0...', ''))
Header
#Saving the tags in a dataframe
import pandas as pd
tags=pd.DataFrame(Header)
tags
# -
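# The find_all-then-loop pattern above can be exercised on a small inline HTML
# snippet (no network needed), which also shows the loop written as a list
# comprehension:

```python
from bs4 import BeautifulSoup

# Two hypothetical headline spans, mimicking the Wikipedia markup scraped above
html = '<span class="mw-headline">News</span><span class="mw-headline">Sports</span>'
soup = BeautifulSoup(html, 'html.parser')
headers = [tag.text for tag in soup.find_all('span', class_='mw-headline')]
print(headers)  # ['News', 'Sports']
```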
# # 2. Write a python program to display IMDB’s Top rated 100 movies’ data (i.e. Name, IMDB rating, Year of release) and save it in form of a CSV file.
# +
#Sending get request to webpage server to get the source code of it
page=requests.get("https://www.imdb.com/search/title/?count=100&groups=top_100&sort=user_rating")
page
#Webpage source code
page.content
#Parsing the source code by using BeautifulSoup and by html.parser
soup=BeautifulSoup(page.content,'html.parser')
soup
#Extracting all the tags having movie names
names=soup.find_all('h3',class_="lister-item-header")
names[0:10]
#Extracting the text alone from the tags
movies=[] #Empty list
for i in names:
for j in i.find_all('a'): #As we need only the movie names but not the year and index no
movies.append(j.text)
movies[0:10]
#Extracting all the tags having ratings
rating=soup.find_all('strong')
rating[0:10]
#Extracting only the rating
ratings=[] #Empty list
for i in rating:
ratings.append(i.get_text().replace('Detailed', " ").replace('User Rating', " "))
ratings[:10]
#Extracting all the tags having the year of release
year=soup.find_all('span',class_="lister-item-year text-muted unbold")
year[0:10]
#Extracting only the year value
release_year=[] #empty list
for i in year:
release_year.append(i.get_text().replace('(I) ', " "))
release_year[:10]
#Checking the length of each of the list obtained
print(len(movies),len(ratings),len(release_year))
#Creating a new dataframe with the data we extracted
import pandas as pd
IMDB_100=pd.DataFrame({})
IMDB_100['Movie_name']=movies
IMDB_100['Rating']=ratings[2:]
IMDB_100['Year_of_Release']=release_year
#Checking the dataframe
IMDB_100
# -
#Saving the data into a csv file
IMDB_100.to_csv('IMDB_top100movies.csv')
# # 3. Write a python program to display IMDB’s Top rated 100 Indian movies’ data (i.e. Name, IMDB rating, Year of release) and save it in form of a CSV file.
# +
#Sending get request to webpage server to get the source code of it
page=requests.get("https://www.imdb.com/india/top-rated-indian-movies/?pf_rd_m=A2FGELUUNOQJNL&pf_rd_p=461131e5-5af0-4e50-bee2-223fad1e00ca&pf_rd_r=KW9XY49QFKX5SR23QR6F&pf_rd_s=center-1&pf_rd_t=60601&pf_rd_i=india.toprated&ref_=fea_india_ss_toprated_india_tr_india250_hd")
page
#Webpage source code
page.content
#Parsing the source code by using BeautifulSoup and by html.parser
soup=BeautifulSoup(page.content,'html.parser')
soup
#Extracting all the tags having movie names
names=soup.find_all('td',class_='titleColumn')
names[0:10]
#Extracting the text alone from the tags
movies=[] #Empty list
for i in names:
for j in i.find_all('a'): #As we need only the movie names but not the year and index no
movies.append(j.text)
movies[0:10]
#Extracting all the tags having ratings
ratings=soup.find_all('strong')
ratings[0:10]
#Extracting only the rating
rating=[] #Empty list
for i in ratings:
rating.append(i.get_text())
rating[:10]
#Extracting all the tags having the year of release
year=soup.find_all('span',class_="secondaryInfo")
year[0:10]
#Extracting only the year value
release_year=[] #empty list
for i in year:
release_year.append(i.get_text())
release_year[:10]
#Checking the length of each of the list obtained
print(len(movies),len(rating),len(release_year))
#Creating a new dataframe with the data we extracted
import pandas as pd
IMDB_Indian100=pd.DataFrame({})
IMDB_Indian100['Movie_name']=movies[:100]
IMDB_Indian100['Rating']=rating[:100]
IMDB_Indian100['Year_of_Release']=release_year[:100]
#Checking the dataframe
IMDB_Indian100
# -
#Saving the data into a csv file
IMDB_Indian100.to_csv('IMDB_top100_IndianMovies.csv')
# # 4. Write a python program to scrap book name, author name, genre and book review of any 5 books from ‘www.bookpage.com’.
# +
#Sending get request to webpage server to get the source code of it
page=requests.get("https://bookpage.com/reviews")
page
#Webpage source code
page.content
#Parsing the source code by using BeautifulSoup and by html.parser
soup=BeautifulSoup(page.content,'html.parser')
soup
#Extracting all the tags having book name
book_names=soup.find_all('h4',class_='italic')
book_names[:5]
#Extracting the text alone from the tags
book_title=[] #Empty list
for i in book_names:
for j in i.find_all('a'): #As we need only the movie names but not the year and index no
book_title.append(j.text)
book_title[:5]
#Extracting all the tags having the author name
author=soup.find_all('p',class_="sans bold")
author[:5]
#Extracting only the text from the tags
authors=[] #Empty list
for i in author:
authors.append(i.get_text().replace('\n', " "))
authors[:5]
#Extracting all the tags having the genre of the book
genre=soup.find_all('p',class_="genre-links hidden-phone")
genre[:5]
#Extracting only the text from the tags
book_genre=[] #Empty list
for i in genre:
book_genre.append(i.get_text().replace('\n', ""))
book_genre[:5]
#Extracting the tags having book review
review=soup.find_all('p',class_="excerpt")
review[:5]
#Extracting only the text from the tags
book_review=[] #Empty list
for i in review:
book_review.append(i.get_text().replace('\n', '').replace('\xa0', ''))
book_review[:5]
#Checking the length of each of the list obtained
print(len(book_title),len(authors),len(book_genre),len(book_review))
#Creating a new dataframe with the data we extracted
import pandas as pd
books=pd.DataFrame({})
books['Title']=book_title[:5]
books['Author']=authors[:5]
books['Genre']=book_genre[:5]
books['Review']=book_review[:5]
#Checking the dataframe
books
# -
# # 5. Write a python program to scrape cricket rankings from ‘www.icc-cricket.com’.
# i) Top 10 ODI teams in men’s cricket along with the records for matches, points and rating.
# +
#Sending get request to webpage server to get the source code of it
page=requests.get("https://www.icc-cricket.com/rankings/mens/team-rankings/odi")
page
#Webpage source code
page.content
#Parsing the source code by using BeautifulSoup and by html.parser
soup=BeautifulSoup(page.content,'html.parser')
soup
#Extracting all the tags having team names
team=soup.find_all('span',class_="u-hide-phablet")
team[:10]
#Extracting the text alone from the tags
teams=[] #Empty list
for i in team:
teams.append(i.get_text())
teams[:10]
#Extracting the tags having the number of matches for Rank1 team
match=soup.find_all('td',class_="rankings-block__banner--matches")
match
#Saving matches to matches list
Matches=[]
for i in match:
Matches.append(i.get_text().replace('\n',''))
Matches
#Extracting remaining matches
match=soup.find_all('td',class_="table-body__cell u-center-text")
match[:9]
del match[1::2] #As it extracted other column data, we are deleting it
for i in match:
Matches.append(i.get_text().replace('\n',''))
Matches[:10]
#Extracting the tags having the points for Rank1 team
points=soup.find_all('td',class_="rankings-block__banner--points")
points
#Saving points to points list
Points=[]
for i in points:
Points.append(i.get_text().replace('\n',''))
Points
#Extracting remaining points
points=soup.find_all('td',class_="table-body__cell u-center-text")
points[:9]
del points[::2] #As other data are obtained, we are deleting them
#Saving points of respective teams
for i in points:
Points.append(i.get_text().replace('\n',''))
Points[:10]
#Extracting the rating from tags of rank1 team
rating=soup.find_all('td',class_="rankings-block__banner--rating u-text-right")
rating
#Saving rating to rating list
Rating=[]
for i in rating:
Rating.append(i.get_text().replace('\n','').strip())
Rating
#Extracting other team ratings data
rating=soup.find_all('td',attrs={'class':'table-body__cell u-text-right rating'})
rating[:9]
#Saving other teams ratings
for i in rating:
Rating.append(i.get_text().replace('\n',''))
Rating=Rating[0:10] #Saving only first 10 ratings
Rating
#Checking the length of each of the list obtained
print(len(teams),len(Matches),len(Points),len(Rating))
#Creating a new dataframe with the data we extracted
import pandas as pd
Team_Rankings=pd.DataFrame({})
Team_Rankings['Team']=teams[:10]
Team_Rankings['Matches']=Matches[:10]
Team_Rankings['Points']=Points[:10]
Team_Rankings['Rating']=Rating
#Checking the dataframe
Team_Rankings
# -
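# A note on the del match[1::2] / del points[::2] trick used above: del with an
# extended slice removes every other element in place, which is how the
# interleaved Matches and Points cells are separated. A tiny illustration with
# hypothetical cell values:

```python
cells = ['m1', 'p1', 'm2', 'p2', 'm3', 'p3']  # hypothetical interleaved row cells
matches = list(cells)
del matches[1::2]   # drop indices 1, 3, 5 -> the points entries
points = list(cells)
del points[::2]     # drop indices 0, 2, 4 -> the matches entries
print(matches)      # ['m1', 'm2', 'm3']
print(points)       # ['p1', 'p2', 'p3']
```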
# ii) Top 10 ODI Batsmen in men along with the records of their team and rating.
# +
#Sending get request to webpage server to get the source code of it
page=requests.get("https://www.icc-cricket.com/rankings/mens/player-rankings/odi/batting")
page
#Webpage source code
page.content
#Parsing the source code by using BeautifulSoup and by html.parser
soup=BeautifulSoup(page.content,'html.parser')
soup
#Extracting the tag having player name for Rank1
name=soup.find_all('div',class_="rankings-block__banner--name-large")
name
#Extracting the text alone from tag
names=[] #Empty list
for i in name:
names.append(i.get_text())
names
#Extracting the tag having player names
name=soup.find_all('td',class_="table-body__cell rankings-table__name name")
name[:9]
#Extracting the text from tags
for i in name:
names.append(i.get_text().replace('\n', ' '))
names[:10]
#Extracting the tag for team for rank1
team=soup.find_all('div',class_="rankings-block__banner--nationality")
team
#Extracting the text alone from tag
teams=[] #Empty list
for i in team:
teams.append(i.get_text().replace('\n', ''))
teams
#Extracting the tags for other teams
team=soup.find_all('span',class_="table-body__logo-text")
team[:9]
#Extracting the text alone from tags
for i in team:
teams.append(i.get_text())
teams[:10]
#Extracting the tag having rating for rank1
rating=soup.find_all('div',class_="rankings-block__banner--rating")
rating
#Extracting the text alone from tag
ratings=[] #Empty list
for i in rating:
ratings.append(i.get_text())
ratings
#Extracting the tag for ratings of other players
rating=soup.find_all('td',class_="table-body__cell rating")
rating[:9]
#Extracting the text alone from tags
for i in rating:
ratings.append(i.get_text())
ratings[:10]
#Extracting the tags having career best rating for rank1
career_best=soup.find_all('span',class_="rankings-block__career-best-text")
career_best
#Extracting the text alone from tag
career_best_rating=[] #Empty list
for i in career_best:
career_best_rating.append(i.get_text().replace('\n', ' ').replace(' ',''))
career_best_rating
#Extracting the tags for remaining players career best rating
career_best=soup.find_all('td',class_="table-body__cell u-text-right u-hide-phablet")
career_best[:9]
#Extracting the text alone from tags
for i in career_best:
career_best_rating.append(i.get_text().replace('\n', ' ').replace(' ',' '))
career_best_rating[:10]
#Checking the length of each of the list obtained
print(len(names),len(teams),len(ratings),len(career_best_rating))
#Creating a new dataframe with the data we extracted
import pandas as pd
top10=pd.DataFrame({})
top10['Name']=names[:10]
top10['Team']=teams[:10]
top10['Rating']=ratings[:10]
top10['Career_best_rating']=career_best_rating[:10]
#Checking the dataframe
top10
# -
# iii) Top 10 ODI bowlers along with the records of their team and rating.
# +
#Sending get request to webpage server to get the source code of it
page=requests.get("https://www.icc-cricket.com/rankings/mens/player-rankings/odi/bowling")
page
#Webpage source code
page.content
#Parsing the source code by using BeautifulSoup and by html.parser
soup=BeautifulSoup(page.content,'html.parser')
soup
#Extracting the tag having player name for Rank1
name=soup.find_all('div',class_="rankings-block__banner--name-large")
name
#Extracting the text alone from tag
names=[] #Empty list
for i in name:
names.append(i.get_text())
names
#Extracting the tag having player names
name=soup.find_all('td',class_="table-body__cell rankings-table__name name")
name[:9]
#Extracting the text from tags
for i in name:
names.append(i.get_text().replace('\n', ' '))
names[:10]
#Extracting the tag for team for rank1
team=soup.find_all('div',class_="rankings-block__banner--nationality")
team
#Extracting the text alone from tag
teams=[] #Empty list
for i in team:
teams.append(i.get_text().replace('\n', ''))
teams
#Extracting the tags for other teams
team=soup.find_all('span',class_="table-body__logo-text")
team[:9]
#Extracting the text alone from tags
for i in team:
teams.append(i.get_text())
teams[:10]
#Extracting the tag having rating for rank1
rating=soup.find_all('div',class_="rankings-block__banner--rating")
rating
#Extracting the text alone from tag
ratings=[] #Empty list
for i in rating:
ratings.append(i.get_text())
ratings
#Extracting the tag for ratings of other players
rating=soup.find_all('td',class_="table-body__cell rating")
rating[:9]
#Extracting the text alone from tags
for i in rating:
ratings.append(i.get_text())
ratings[:10]
#Extracting the tags having career best rating for rank1
career_best=soup.find_all('span',class_="rankings-block__career-best-text")
career_best
#Extracting the text alone from tag
career_best_rating=[] #Empty list
for i in career_best:
career_best_rating.append(i.get_text().replace('\n', ' ').replace(' ',''))
career_best_rating
#Extracting the tags for remaining players career best rating
career_best=soup.find_all('td',class_="table-body__cell u-text-right u-hide-phablet")
career_best[:9]
#Extracting the text alone from tags
for i in career_best:
career_best_rating.append(i.get_text().replace('\n', ' ').replace(' ',' '))
career_best_rating[:10]
#Checking the length of each of the list obtained
print(len(names),len(teams),len(ratings),len(career_best_rating))
#Creating a new dataframe with the data we extracted
import pandas as pd
top10=pd.DataFrame({})
top10['Name']=names[:10]
top10['Team']=teams[:10]
top10['Rating']=ratings[:10]
top10['Career_best_rating']=career_best_rating[:10]
#Checking the dataframe
top10
# -
# # 6. Write a python program to scrape cricket rankings from ‘www.icc-cricket.com’.
# i) Top 10 ODI teams in women’s cricket along with the records for matches, points and rating.
# +
#Sending get request to webpage server to get the source code of it
page=requests.get("https://www.icc-cricket.com/rankings/womens/team-rankings/odi")
page
#Webpage source code
page.content
#Parsing the source code by using BeautifulSoup and by html.parser
soup=BeautifulSoup(page.content,'html.parser')
soup
#Extracting all the tags having team names
team=soup.find_all('span',class_="u-hide-phablet")
team[:10]
#Extracting the text alone from the tags
teams=[] #Empty list
for i in team:
teams.append(i.get_text())
teams[:10]
#Extracting the tags having the number of matches for Rank1 team
match=soup.find_all('td',class_="rankings-block__banner--matches")
match
#Saving matches to matches list
Matches=[]
for i in match:
Matches.append(i.get_text().replace('\n',''))
Matches
#Extracting remaining matches
match=soup.find_all('td',class_="table-body__cell u-center-text")
match[:9]
del match[1::2] #As it extracted other column data, we are deleting it
for i in match:
Matches.append(i.get_text().replace('\n',''))
Matches[:10]
#Extracting the tags having the points for Rank1 team
points=soup.find_all('td',class_="rankings-block__banner--points")
points
#Saving points to points list
Points=[]
for i in points:
Points.append(i.get_text().replace('\n',''))
Points
#Extracting remaining points
points=soup.find_all('td',class_="table-body__cell u-center-text")
points[:9]
del points[::2] #As other data are obtained, we are deleting them
#Saving points of respective teams
for i in points:
Points.append(i.get_text().replace('\n',''))
Points[:10]
#Extracting the rating from tags of rank1 team
rating=soup.find_all('td',class_="rankings-block__banner--rating u-text-right")
rating
#Saving rating to rating list
Rating=[]
for i in rating:
Rating.append(i.get_text().replace('\n','').strip())
Rating
#Extracting other team ratings data
rating=soup.find_all('td',attrs={'class':'table-body__cell u-text-right rating'})
rating[:9]
#Saving other teams ratings
for i in rating:
Rating.append(i.get_text().replace('\n',''))
Rating=Rating[0:10] #Saving only first 10 ratings
Rating
#Checking the length of each of the list obtained
print(len(teams),len(Matches),len(Points),len(Rating))
#Creating a new dataframe with the data we extracted
import pandas as pd
Team_Rankings=pd.DataFrame({})
Team_Rankings['Team']=teams[:10]
Team_Rankings['Matches']=Matches[:10]
Team_Rankings['Points']=Points[:10]
Team_Rankings['Rating']=Rating
#Checking the dataframe
Team_Rankings
# -
# ii) Top 10 ODI Batswomen in women along with the records of their team and rating.
# +
#Sending get request to webpage server to get the source code of it
page=requests.get("https://www.icc-cricket.com/rankings/womens/player-rankings/odi/batting")
page
#Webpage source code
page.content
#Parsing the source code by using BeautifulSoup and by html.parser
soup=BeautifulSoup(page.content,'html.parser')
soup
#Extracting the tag having player name for Rank1
name=soup.find_all('div',class_="rankings-block__banner--name-large")
name
#Extracting the text alone from tag
names=[] #Empty list
for i in name:
names.append(i.get_text())
names
#Extracting the tag having player names
name=soup.find_all('td',class_="table-body__cell rankings-table__name name")
name[:9]
#Extracting the text from tags
for i in name:
names.append(i.get_text().replace('\n', ' '))
names[:10]
#Extracting the tag for team for rank1
team=soup.find_all('div',class_="rankings-block__banner--nationality")
team
#Extracting the text alone from tag
teams=[] #Empty list
for i in team:
teams.append(i.get_text().replace('\n', ''))
teams
#Extracting the tags for other teams
team=soup.find_all('span',class_="table-body__logo-text")
team[:9]
#Extracting the text alone from tags
for i in team:
teams.append(i.get_text())
teams[:10]
#Extracting the tag having rating for rank1
rating=soup.find_all('div',class_="rankings-block__banner--rating")
rating
#Extracting the text alone from tag
ratings=[] #Empty list
for i in rating:
ratings.append(i.get_text())
ratings
#Extracting the tag for ratings of other players
rating=soup.find_all('td',class_="table-body__cell rating")
rating[:9]
#Extracting the text alone from tags
for i in rating:
ratings.append(i.get_text())
ratings[:10]
#Extracting the tags having career best rating for rank1
career_best=soup.find_all('span',class_="rankings-block__career-best-text")
career_best
#Extracting the text alone from tag
career_best_rating=[] #Empty list
for i in career_best:
career_best_rating.append(i.get_text().replace('\n', ' ').replace(' ',''))
career_best_rating
#Extracting the tags for remaining players career best rating
career_best=soup.find_all('td',class_="table-body__cell u-text-right u-hide-phablet")
career_best[:9]
#Extracting the text alone from tags
for i in career_best:
career_best_rating.append(i.get_text().replace('\n', ' ').replace(' ',' '))
career_best_rating[:10]
#Checking the length of each of the list obtained
print(len(names),len(teams),len(ratings),len(career_best_rating))
#Creating a new dataframe with the data we extracted
import pandas as pd
top10=pd.DataFrame({})
top10['Name']=names[:10]
top10['Team']=teams[:10]
top10['Rating']=ratings[:10]
top10['Career_best_rating']=career_best_rating[:10]
#Checking the dataframe
top10
# -
# iii) Top 10 women’s ODI all-rounders along with the records of their team and rating.
# +
#Sending get request to webpage server to get the source code of it
page=requests.get("https://www.icc-cricket.com/rankings/womens/player-rankings/odi/all-rounder")
page
#Webpage source code
page.content
#Parsing the source code by using BeautifulSoup and by html.parser
soup=BeautifulSoup(page.content,'html.parser')
soup
#Extracting the tag having player name for Rank1
name=soup.find_all('div',class_="rankings-block__banner--name-large")
name
#Extracting the text alone from tag
names=[] #Empty list
for i in name:
names.append(i.get_text())
names
#Extracting the tag having player names
name=soup.find_all('td',class_="table-body__cell rankings-table__name name")
name[:9]
#Extracting the text from tags
for i in name:
names.append(i.get_text().replace('\n', ' '))
names[:10]
#Extracting the tag for team for rank1
team=soup.find_all('div',class_="rankings-block__banner--nationality")
team
#Extracting the text alone from tag
teams=[] #Empty list
for i in team:
teams.append(i.get_text().replace('\n', ''))
teams
#Extracting the tags for other teams
team=soup.find_all('span',class_="table-body__logo-text")
team[:9]
#Extracting the text alone from tags
for i in team:
teams.append(i.get_text())
teams[:10]
#Extracting the tag having rating for rank1
rating=soup.find_all('div',class_="rankings-block__banner--rating")
rating
#Extracting the text alone from tag
ratings=[] #Empty list
for i in rating:
ratings.append(i.get_text())
ratings
#Extracting the tag for ratings of other players
rating=soup.find_all('td',class_="table-body__cell rating")
rating[:9]
#Extracting the text alone from tags
for i in rating:
ratings.append(i.get_text())
ratings[:10]
#Extracting the tags having career best rating for rank1
career_best=soup.find_all('span',class_="rankings-block__career-best-text")
career_best
#Extracting the text alone from tag
career_best_rating=[] #Empty list
for i in career_best:
career_best_rating.append(i.get_text().replace('\n', ' ').replace(' ',''))
career_best_rating
#Extracting the tags for remaining players career best rating
career_best=soup.find_all('td',class_="table-body__cell u-text-right u-hide-phablet")
career_best[:9]
#Extracting the text alone from tags
for i in career_best:
career_best_rating.append(i.get_text().replace('\n', ' ').replace(' ',' '))
career_best_rating[:10]
#Checking the length of each of the list obtained
print(len(names),len(teams),len(ratings),len(career_best_rating))
#Creating a new dataframe with the data we extracted
import pandas as pd
top10=pd.DataFrame({})
top10['Name']=names[:10]
top10['Team']=teams[:10]
top10['Rating']=ratings[:10]
top10['Career_best_rating']=career_best_rating[:10]
#Checking the dataframe
top10
# -
# # 8. Write a python program to extract information about the local weather from the National Weather Service website of USA, https://www.weather.gov/ for the city, San Francisco. You need to extract data about 7 day extended forecast display for the city. The data should include period, short description, temperature and description.
# +
#Sending get request to webpage server to get the source code of it
page=requests.get("https://forecast.weather.gov/MapClick.php?lat=37.7749&lon=-122.4194#.YFhT9K8zZPY")
page
#Webpage source code
page.content
#Parsing the source code by using BeautifulSoup and by html.parser
soup=BeautifulSoup(page.content,'html.parser')
soup
#Extracting all the tags having period data
period=soup.find_all('p',class_="period-name")
period
#Extracting the text alone from the tags
Period=[] #Empty list
for i in period:
Period.append(i.get_text().replace('Night', ' Night'))
Period
#Extracting the tags having short description
short_desc=soup.find_all('p',class_="short-desc")
short_desc
#Extracting the text alone from the tags
sd=[] #Empty list
for i in short_desc:
sd.append(i.get_text())
sd
#Extracting the tags having temperature
Temp=soup.find_all('p',class_="temp")
Temp
#Extracting the text alone from the tags
temp=[] #Empty list
for i in Temp:
temp.append(i.text)
temp
#Extracting the tags having long description
desc=soup.find_all('div',class_="col-sm-10 forecast-text")
desc
#Extracting the text from tags
description=[] #Empty list
for i in desc:
description.append(i.get_text())
description
#Checking the length of each of the list obtained
print(len(Period),len(sd),len(temp),len(description))
#Creating a new dataframe with the data we extracted
import pandas as pd
weather=pd.DataFrame({})
weather['Period']=Period
weather['Short_Description']=sd
weather['Temperature']=temp
weather['Detailed_Description']=description[:9]
#Checking the dataframe
weather
# -
# # 7. Write a python program to scrape details of all the mobile phones under Rs. 20,000 listed on Amazon.in. The scraped data should include Product Name, Price, Image URL and Average Rating.
#Sending get request to webpage server to get the source code of it
page=requests.get("https://www.amazon.in/s?k=mobile+phones+under+20000&ref=nb_sb_noss")
page
#Webpage source code
page.content
#Parsing the source code by using BeautifulSoup and by html.parser
soup=BeautifulSoup(page.content,'html.parser')
soup
#Extracting all tags having the product name
product=soup.find_all('span',class_="a-size-medium a-color-base a-text-normal")
product
#Extracting the text alone from the tags
Product=[] #Empty list
for i in product:
Product.append(i.get_text())
Product
#Extracting the tags having price
price=soup.find_all('span',class_="a-price-whole")
price
#Extracting the text from tags
Price=[] #Empty list
for i in price:
Price.append(i.get_text())
Price
#Extracting the tags having the Image URL
url=soup.find_all('img',class_="s-image")
url
#Extracting the text alone from tag
Image=[] #Empty list
for i in url:
Image.append(i['src'])
Image
#Extracting the tags having Average Rating
rating=soup.find_all('span',class_="a-icon-alt")
rating
#Extracting the text alone from tag
Rating=[] #Empty list
for i in rating:
Rating.append(i.get_text().replace('amp;',' '))
Rating
#Checking the length of the data obtained
print(len(Product),len(Price),len(Image),len(Rating))
#Creating a new dataframe with the data we extracted
import pandas as pd
mobile=pd.DataFrame({})
mobile['Product_Name']=Product[:15]
mobile['Price']=Price
mobile['Image_Url']=Image[:15]
mobile['Average_Rating']=Rating[:15]
#Checking the dataframe
mobile
|
Webscraping/Webscraping_Assignment1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This course requires a basic understanding of linear algebra, statistics, and programming. This (ungraded) quiz lets you (and us) assess whether you have a good basis for taking this course. If you are comfortable with the following questions (they are supposed to be trivial for I590 students), you will not have too much difficulty. If you find them challenging, you may need to put in a lot of effort throughout the course. Of course, we will do our best to help you if you are willing to learn.
# 1. $\vec{a} = \begin{bmatrix} 1 \\ 3 \\ 4 \end{bmatrix}$ and $\vec{b} = \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}$. Calculate $\left| \vec{a} \right| $, $\vec{a} \cdot \vec{b}$, and $\vec{a} + \vec{b}$.
#
# 1. $A = \begin{bmatrix}3 & 1 & 0\\2 & 1 & 5\end{bmatrix}$, calculate $A A^\top$.
#
# 1. What is the definition and characteristics of **eigenvector** and **eigenvalue**?
#
# 1. What is the definition of **mean**, **median**, and **standard deviation**?
#
# 1. What would be the result of the following Python code? Run the code and check whether the result is the same as yours.
#
# ```
# alist = [1,2,3]
# anotherlist = alist
# anotherlist[0] = 5
# print(alist)
# ```
#
# 1. What is the result of the following Python code? Run the code and check whether the result is the same as yours.
#
# ```
# print("5" + "10")
# print(10*"5")
# ```
#
# 1. Write a function using any language to calculate the mean, median, and standard deviation of a given list.
#
# 1. You have salary data for the Acme Corporation. The data are just numbers, not associated with the names of employees. Which Python data structure (among list, dictionary, and set) would you use to store these numbers? Justify your answer.
#
# 1. What are the differences between a list and a dictionary? Explain when one should be used versus the other. Show some use cases where a list is much more efficient than a dictionary, and vice versa.
#
# 1. In your script `myscript.py`, you want to use a function `donothing()` defined in a module called `usefulfunctions`. How can you import this module (assuming it's installed or in the same directory) and use this function?
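# For the programming questions above, one possible reference sketch of a mean/median/standard-deviation function (the function name and the population-variance convention are my own choices):

```python
import math

def describe(values):
    """Return (mean, median, population standard deviation) of a list."""
    n = len(values)
    mean = sum(values) / n
    ordered = sorted(values)
    mid = n // 2
    # Even-length lists average the two middle elements.
    median = ordered[mid] if n % 2 else (ordered[mid - 1] + ordered[mid]) / 2
    std = math.sqrt(sum((x - mean) ** 2 for x in values) / n)
    return mean, median, std

print(describe([1, 2, 3, 4]))  # (2.5, 2.5, ~1.118)
```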
|
syllabus/self-assessment.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import pandas as pd
import numpy as np
import os
import math
import graphlab
import graphlab as gl
import graphlab.aggregate as agg
# Data paths -- run only the block matching your machine.
# '钢炮' workstation:
path = '/home/zongyi/bimbo_data/'
sf = gl.SFrame.read_csv(path + 'train.csv', verbose=False)
town = gl.SFrame.read_csv(path + 'town_state.csv', verbose=False)
# Mac:
path = '/Users/zonemercy/jupyter_notebook/bimbo_data/'
sf = gl.SFrame.read_csv(path + 'train.csv', verbose=False)
town = gl.SFrame.read_csv(path + 'town_state.csv', verbose=False)
# +
sf['return_rate'] = sf['Dev_uni_proxima'] / ( sf['Dev_uni_proxima'] + sf['Demanda_uni_equil'] )
re_lag = sf.groupby(key_columns=['Semana','Cliente_ID','Producto_ID'], operations={'re_lag':agg.MEAN('return_rate')})
# re_lag['Semana'] = re_lag['Semana'] + 1
# +
lag = re_lag.copy()
re_lag.remove_column('re_lag')
# Join the mean return rate of each of the previous 5 weeks as lag features
# (the joined columns come out as re_lag, re_lag.1, ..., re_lag.4).
for _ in range(5):
    lag['Semana'] = lag['Semana'].apply(lambda x: x + 1)
    re_lag = re_lag.join(lag, on=['Cliente_ID', 'Producto_ID', 'Semana'], how='outer')
# -
re_lag.rename({'re_lag':'re_lag1','re_lag.1':'re_lag2','re_lag.2':'re_lag3','re_lag.3':'re_lag4','re_lag.4':'re_lag5'})
re_train=re_lag[(re_lag['Semana']>5)&(re_lag['Semana']<10)]
re_train.save(path+'re_lag_train.csv',format='csv')
re_test=re_lag[(re_lag['Semana']>9)&(re_lag['Semana']<12)]
re_test.save(path+'re_lag_test.csv',format='csv')
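# GraphLab Create has been discontinued; a rough pandas equivalent of the lag-feature construction above, on a made-up toy frame (column names follow the notebook, but the `re_lag{k}` naming is mine):

```python
import pandas as pd

df = pd.DataFrame({
    "Semana":      [3, 3, 4],
    "Cliente_ID":  [1, 2, 1],
    "Producto_ID": [7, 7, 7],
    "return_rate": [0.2, 0.4, 0.6],
})

# Mean return rate per (week, client, product).
re_lag = (df.groupby(["Semana", "Cliente_ID", "Producto_ID"], as_index=False)
            ["return_rate"].mean().rename(columns={"return_rate": "re_lag"}))

# Attach the value observed k weeks earlier as feature re_lag{k}.
out = re_lag.drop(columns="re_lag")
for k in range(1, 6):
    shifted = re_lag.copy()
    shifted["Semana"] += k
    shifted = shifted.rename(columns={"re_lag": f"re_lag{k}"})
    out = out.merge(shifted, on=["Cliente_ID", "Producto_ID", "Semana"], how="outer")
```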
|
Bimbo/.ipynb_checkpoints/re_lag-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="RMhGdYHuOZM8"
# # Deep Dreams (with Caffe)
#
# Credits: Forked from [DeepDream](https://github.com/google/deepdream) by Google
#
# This notebook demonstrates how to use the [Caffe](http://caffe.berkeleyvision.org/) neural network framework to produce "dream" visuals shown in the [Google Research blog post](http://googleresearch.blogspot.ch/2015/06/inceptionism-going-deeper-into-neural.html).
#
# It'll be interesting to see what imagery people are able to generate using the described technique. If you post images to Google+, Facebook, or Twitter, be sure to tag them with **#deepdream** so other researchers can check them out too.
#
# ## Dependencies
# This notebook is designed to have as few dependencies as possible:
# * Standard Python scientific stack: [NumPy](http://www.numpy.org/), [SciPy](http://www.scipy.org/), [PIL](http://www.pythonware.com/products/pil/), [IPython](http://ipython.org/). Those libraries can also be installed as a part of one of the scientific packages for Python, such as [Anaconda](http://continuum.io/downloads) or [Canopy](https://store.enthought.com/).
# * [Caffe](http://caffe.berkeleyvision.org/) deep learning framework ([installation instructions](http://caffe.berkeleyvision.org/installation.html)).
# * Google [protobuf](https://developers.google.com/protocol-buffers/) library that is used for Caffe model manipulation.
# + cellView="both" colab_type="code" id="Pqz5k4syOZNA"
# imports and basic notebook setup
from io import BytesIO  # cStringIO was Python 2 only
import numpy as np
import scipy.ndimage as nd
import PIL.Image
from IPython.display import clear_output, Image, display
from google.protobuf import text_format
import caffe
# If your GPU supports CUDA and Caffe was built with CUDA support,
# uncomment the following to run Caffe operations on the GPU.
# caffe.set_mode_gpu()
# caffe.set_device(0) # select GPU device if multiple devices exist
def showarray(a, fmt='jpeg'):
    a = np.uint8(np.clip(a, 0, 255))
    f = BytesIO()
    PIL.Image.fromarray(a).save(f, fmt)
    display(Image(data=f.getvalue()))
# + [markdown] colab_type="text" id="AeF9mG-COZNE"
# ## Loading DNN model
# In this notebook we are going to use a [GoogLeNet](https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet) model trained on [ImageNet](http://www.image-net.org/) dataset.
# Feel free to experiment with other models from the Caffe [Model Zoo](https://github.com/BVLC/caffe/wiki/Model-Zoo). One particularly interesting [model](http://places.csail.mit.edu/downloadCNN.html) was trained on the [MIT Places](http://places.csail.mit.edu/) dataset; it produced many of the visuals in the [original blog post](http://googleresearch.blogspot.ch/2015/06/inceptionism-going-deeper-into-neural.html).
# + cellView="both" colab_type="code" id="i9hkSm1IOZNR"
model_path = '../caffe/models/bvlc_googlenet/' # substitute your path here
net_fn = model_path + 'deploy.prototxt'
param_fn = model_path + 'bvlc_googlenet.caffemodel'
# Patching model to be able to compute gradients.
# Note that you can also manually add "force_backward: true" line to "deploy.prototxt".
model = caffe.io.caffe_pb2.NetParameter()
text_format.Merge(open(net_fn).read(), model)
model.force_backward = True
open('tmp.prototxt', 'w').write(str(model))
net = caffe.Classifier('tmp.prototxt', param_fn,
mean = np.float32([104.0, 116.0, 122.0]), # ImageNet mean, training set dependent
channel_swap = (2,1,0)) # the reference model has channels in BGR order instead of RGB
# a couple of utility functions for converting to and from Caffe's input image layout
def preprocess(net, img):
return np.float32(np.rollaxis(img, 2)[::-1]) - net.transformer.mean['data']
def deprocess(net, img):
return np.dstack((img + net.transformer.mean['data'])[::-1])
# + [markdown] colab_type="text" id="UeV_fJ4QOZNb"
# ## Producing dreams
# + [markdown] colab_type="text" id="9udrp3efOZNd"
# Making the "dream" images is very simple. Essentially it is just a gradient ascent process that tries to maximize the L2 norm of activations of a particular DNN layer. Here are a few simple tricks that we found useful for getting good images:
# * offset image by a random jitter
# * normalize the magnitude of gradient ascent steps
# * apply ascent across multiple scales (octaves)
#
# First we implement a basic gradient ascent step function, applying the first two tricks:
# + cellView="both" colab_type="code" id="pN43nMsHOZNg"
def objective_L2(dst):
dst.diff[:] = dst.data
def make_step(net, step_size=1.5, end='inception_4c/output',
jitter=32, clip=True, objective=objective_L2):
'''Basic gradient ascent step.'''
src = net.blobs['data'] # input image is stored in Net's 'data' blob
dst = net.blobs[end]
ox, oy = np.random.randint(-jitter, jitter+1, 2)
src.data[0] = np.roll(np.roll(src.data[0], ox, -1), oy, -2) # apply jitter shift
net.forward(end=end)
objective(dst) # specify the optimization objective
net.backward(start=end)
g = src.diff[0]
# apply normalized ascent step to the input image
src.data[:] += step_size/np.abs(g).mean() * g
src.data[0] = np.roll(np.roll(src.data[0], -ox, -1), -oy, -2) # unshift image
if clip:
bias = net.transformer.mean['data']
src.data[:] = np.clip(src.data, -bias, 255-bias)
# + [markdown] colab_type="text" id="nphEdlBgOZNk"
# Next we implement an ascent through different scales. We call these scales "octaves".
# + cellView="both" colab_type="code" id="ZpFIn8l0OZNq"
def deepdream(net, base_img, iter_n=10, octave_n=4, octave_scale=1.4,
end='inception_4c/output', clip=True, **step_params):
# prepare base images for all octaves
octaves = [preprocess(net, base_img)]
for i in range(octave_n-1):
octaves.append(nd.zoom(octaves[-1], (1, 1.0/octave_scale,1.0/octave_scale), order=1))
src = net.blobs['data']
detail = np.zeros_like(octaves[-1]) # allocate image for network-produced details
for octave, octave_base in enumerate(octaves[::-1]):
h, w = octave_base.shape[-2:]
if octave > 0:
# upscale details from the previous octave
h1, w1 = detail.shape[-2:]
detail = nd.zoom(detail, (1, 1.0*h/h1,1.0*w/w1), order=1)
src.reshape(1,3,h,w) # resize the network's input image size
src.data[0] = octave_base+detail
for i in range(iter_n):
make_step(net, end=end, clip=clip, **step_params)
# visualization
vis = deprocess(net, src.data[0])
if not clip: # adjust image contrast if clipping is disabled
vis = vis*(255.0/np.percentile(vis, 99.98))
showarray(vis)
print(octave, i, end, vis.shape)
clear_output(wait=True)
# extract details produced on the current octave
detail = src.data[0]-octave_base
# returning the resulting image
return deprocess(net, src.data[0])
# + [markdown] colab_type="text" id="QrcdU-lmOZNx"
# Now we are ready to let the neural network reveal its dreams! Let's take a [cloud image](https://commons.wikimedia.org/wiki/File:Appearance_of_sky_for_weather_forecast,_Dhaka,_Bangladesh.JPG) as a starting point:
# + cellView="both" colab_type="code" executionInfo id="40p5AqqwOZN5" outputId="f62cde37-79e8-420a-e448-3b9b48ee1730" pinned=false
img = np.float32(PIL.Image.open('sky1024px.jpg'))
showarray(img)
# + [markdown] colab_type="text" id="Z9_215_GOZOL"
# Running the next code cell starts the detail generation process. You may see how new patterns start to form, iteration by iteration, octave by octave.
# + cellView="both" colab_type="code" executionInfo id="HlnVnDTlOZOL" outputId="425dfc83-b474-4a69-8386-30d86361bbf6" pinned=false
_=deepdream(net, img)
# + [markdown] colab_type="text" id="Rp9kOCQTOZOQ"
# The complexity of the details generated depends on which layer's activations we try to maximize. Higher layers produce complex features, while lower ones enhance edges and textures, giving the image an impressionist feeling:
# + cellView="both" colab_type="code" executionInfo id="eHOX0t93OZOR" outputId="0de0381c-4681-4619-912f-9b6a2cdec0c6" pinned=false
_=deepdream(net, img, end='inception_3b/5x5_reduce')
# + [markdown] colab_type="text" id="rkzHz9E8OZOb"
# We encourage readers to experiment with layer selection to see how it affects the results. Execute the next code cell to see the list of different layers. You can modify the `make_step` function to make it follow some different objective, say to select a subset of activations to maximize, or to maximize multiple layers at once. There is a huge design space to explore!
# + cellView="both" colab_type="code" id="OIepVN6POZOc"
net.blobs.keys()
# + [markdown] colab_type="text" id="vs2uUpMCOZOe"
# What if we feed the `deepdream` function its own output, after applying a little zoom to it? It turns out that this leads to an endless stream of impressions of the things that the network saw during training. Some patterns fire more often than others, suggestive of basins of attraction.
#
# We will start the process from the same sky image as above, but after some iteration the original image becomes irrelevant; even random noise can be used as the starting point.
# + cellView="both" colab_type="code" id="IB48CnUfOZOe"
# !mkdir frames
frame = img
frame_i = 0
# + cellView="both" colab_type="code" id="fj0E-fKDOZOi"
h, w = frame.shape[:2]
s = 0.05 # scale coefficient
for i in range(100):
frame = deepdream(net, frame)
PIL.Image.fromarray(np.uint8(frame)).save("frames/%04d.jpg"%frame_i)
frame = nd.affine_transform(frame, [1-s,1-s,1], [h*s/2,w*s/2,0], order=1)
frame_i += 1
# + [markdown] colab_type="text" id="XzZGGME_OZOk"
# Be careful running the code above, it can bring you into very strange realms!
# + cellView="both" colab_type="code" executionInfo id="ZCZcz2p1OZOt" outputId="d3773436-2b5d-4e79-be9d-0f12ab839fff" pinned=false
Image(filename='frames/0029.jpg')
# -
# ## Controlling dreams
#
# The image detail generation method described above tends to produce some patterns more often than others. One easy way to improve the diversity of the generated images is to tweak the optimization objective. Here we show just one of many ways to do that. Let's use one more input image. We'll call it a "*guide*".
guide = np.float32(PIL.Image.open('flowers.jpg'))
showarray(guide)
# Note that the neural network we use was trained on images downscaled to 224x224. High-resolution images may need to be downscaled so that the network can pick up their features. The image we use here is already small enough.
#
# Now we pick some target layer and extract guide image features.
end = 'inception_3b/output'
h, w = guide.shape[:2]
src, dst = net.blobs['data'], net.blobs[end]
src.reshape(1,3,h,w)
src.data[0] = preprocess(net, guide)
net.forward(end=end)
guide_features = dst.data[0].copy()
# Instead of maximizing the L2-norm of current image activations, we try to maximize the dot-products between activations of current image, and their best matching correspondences from the guide image.
# +
def objective_guide(dst):
x = dst.data[0].copy()
y = guide_features
ch = x.shape[0]
x = x.reshape(ch,-1)
y = y.reshape(ch,-1)
A = x.T.dot(y) # compute the matrix of dot-products with guide features
dst.diff[0].reshape(ch,-1)[:] = y[:,A.argmax(1)] # select ones that match best
_=deepdream(net, img, end=end, objective=objective_guide)
# -
# This way we can affect the style of generated images without using a different training set.
|
deep-learning/deep-dream/dream.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:pythonengineer_env]
# language: python
# name: conda-env-pythonengineer_env-py
# ---
# # The Asterisk (`*`) in Python and its use cases
# The asterisk sign (`*`) can be used for different cases in Python:
# - Multiplication and power operations
# - Creation of list, tuple, or string with repeated elements
# - `*args` , `**kwargs` , and keyword-only parameters
# - Unpacking lists/tuples/dictionaries for function arguments
# - Unpacking containers
# - Merging containers into list / Merge dictionaries
# ## Multiplication and power operations
# +
# multiplication
result = 7 * 5
print(result)
# power operation
result = 2 ** 4
print(result)
# -
# ## Creation of list, tuple, or string with repeated elements
# +
# list
zeros = [0] * 10
onetwos = [1, 2] * 5
print(zeros)
print(onetwos)
# tuple
zeros = (0,) * 10
onetwos = (1, 2) * 5
print(zeros)
print(onetwos)
# string
A_string = "A" * 10
AB_string = "AB" * 5
print(A_string)
print(AB_string)
# -
# ## `*args` , `**kwargs` , and keyword-only arguments
# - Use `*args` for variable-length arguments
# - Use `**kwargs` for variable-length keyword arguments
# - Use `*,` followed by more function parameters to enforce keyword-only arguments
# +
def my_function(*args, **kwargs):
for arg in args:
print(arg)
for key in kwargs:
print(key, kwargs[key])
my_function("Hey", 3, [0, 1, 2], name="Alex", age=8)
# Parameters after '*' or '*identifier' are keyword-only parameters and may only be passed using keyword arguments.
def my_function2(name, *, age):
print(name)
print(age)
# my_function2("Michael", 5) --> this would raise a TypeError
my_function2("Michael", age=5)
# -
# ## Unpacking for function arguments
# - Lists/tuples/sets/strings can be unpacked into function arguments with one `*` if the length matches the parameters.
# - Dictionaries can be unpacked with two `**` if the length and the keys match the parameters.
# +
def foo(a, b, c):
print(a, b, c)
# length must match
my_list = [1, 2, 3]
foo(*my_list)
my_string = "ABC"
foo(*my_string)
# length and keys must match
my_dict = {'a': 4, 'b': 5, 'c': 6}
foo(**my_dict)
# -
# ## Unpacking containers
# Unpack the elements of a list, tuple, or set into single and multiple remaining elements.
# Note that multiple elements are combined in a list, even if the unpacked container is a tuple or a set.
# +
numbers = (1, 2, 3, 4, 5, 6, 7, 8)
*beginning, last = numbers
print(beginning)
print(last)
print()
first, *end = numbers
print(first)
print(end)
print()
first, *middle, last = numbers
print(first)
print(middle)
print(last)
# -
# ## Merge iterables into a list / Merge dictionaries
# This is possible since Python 3.5 thanks to PEP 448 (https://www.python.org/dev/peps/pep-0448/).
# +
# dump iterables into a list and merge them
my_tuple = (1, 2, 3)
my_set = {4, 5, 6}
my_list = [*my_tuple, *my_set]
print(my_list)
# merge two dictionaries with dict unpacking
dict_a = {'one': 1, 'two': 2}
dict_b = {'three': 3, 'four': 4}
dict_c = {**dict_a, **dict_b}
print(dict_c)
# -
# But be careful with the following merging solution. It does not work if the dictionary has any non-string keys:
# https://stackoverflow.com/questions/38987/how-to-merge-two-dictionaries-in-a-single-expression/39858#39858
# +
dict_a = {'one': 1, 'two': 2}
dict_b = {3: 3, 'four': 4}
dict_c = dict(dict_a, **dict_b)
print(dict_c)
# this works:
# dict_c = {**dict_a, **dict_b}
# -
# Recommended further readings:
# - https://treyhunner.com/2018/10/asterisks-in-python-what-they-are-and-how-to-use-them/
# - https://treyhunner.com/2016/02/how-to-merge-dictionaries-in-python/
|
Programming/python/advanced-python/19-The Asterisk(*).ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/magotheinnocent/magotheinnocent/blob/master/chatbot.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="AF5CcrM3NWzf"
#description: This is a 'smart' chatbot program
# + colab={"base_uri": "https://localhost:8080/"} id="inqAIRC4Nyv1" outputId="f98bfa95-58e4-4446-b63e-ef9caaa91959"
pip install nltk
# + colab={"base_uri": "https://localhost:8080/"} id="syaA_KczN87o" outputId="a39923fa-c445-4caa-81d6-17067bc2ef30"
pip install newspaper3k
# + id="L497XfX_OxBI"
#import the libraries
from newspaper import Article
import random
import string
import nltk
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np
import warnings
warnings.filterwarnings('ignore')
# + colab={"base_uri": "https://localhost:8080/"} id="nhgAuTpQPmZ2" outputId="11423163-d91c-496a-b4c0-f2637e9b343e"
#Download the punkt package
nltk.download('punkt', quiet=True)
# + id="JuAFxvH0Pzs2"
#Get the article
article = Article('https://www.mayoclinic.org/diseases-conditions/chronic-kidney-disease/symptoms-causes/syc-20354521')
article.download()
article.parse()
article.nlp()
corpus = article.text
# + colab={"base_uri": "https://localhost:8080/"} id="EUYm4hzRQdq_" outputId="cf2e4678-d402-4fa6-8b34-ceb9ebcd3680"
#Print the articles text
print(corpus)
# + id="7U5sWPXFQoSq"
# Tokenization
text = corpus
sentence_list = nltk.sent_tokenize(text)
# + colab={"base_uri": "https://localhost:8080/"} id="izhnPndIQ3lz" outputId="b553fc5e-7069-4842-ee82-602786398668"
print(sentence_list)
# + id="MtdTkn3UQ9b4"
# Function to return a random greeting response to a user's greeting
def greeting_response(text):
text=text.lower()
#Bot response
bot_greetings = ['howdy', 'hi', 'hey', 'hello', 'hey there', 'hola']
#users greetings
user_greetings = ['hi', 'hey', 'hello', 'hey there', 'hola', 'greetings', 'wassup']
for word in text.split():
if word in user_greetings:
return random.choice(bot_greetings)
# + id="UVbN0R01SzA5"
# Return the indices that would sort list_var in descending order
def index_sort(list_var):
length = len(list_var)
list_index = list(range(0,length))
x= list_var
for i in range(length):
for j in range(length):
if x[list_index[i]] > x[list_index[j]]:
#swap
temp = list_index[i]
list_index[i] = list_index[j]
list_index[j] = temp
return list_index
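# The bubble sort above is an O(n^2) descending argsort. A sketch of the same idea with NumPy, which is already imported in this notebook (tie ordering may differ from the loop version):

```python
import numpy as np

def index_sort_np(list_var):
    # np.argsort is ascending, so reverse for descending similarity.
    return list(np.argsort(list_var)[::-1])

print(index_sort_np([0.1, 0.9, 0.4]))  # [1, 2, 0]
```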
# + id="2R0UNJIOR6FP"
#create the bot response
def bot_response(user_input):
user_input = user_input.lower()
sentence_list.append(user_input)
bot_response = ''
cm = CountVectorizer().fit_transform(sentence_list)
similarity_scores = cosine_similarity(cm[-1], cm)
similarity_scores_list = similarity_scores.flatten()
index = index_sort(similarity_scores_list)
index = index[1:]
response_flag = 0
j = 0
for i in range(len(index)):
if similarity_scores_list[index[i]] > 0.0:
bot_response = bot_response+ ' '+sentence_list[index[i]]
response_flag = 1
j = j+1
if j>2:
break
if response_flag == 0:
bot_response = bot_response +' '+"I apologize, I don't understand."
sentence_list.remove(user_input)
return bot_response
# + colab={"base_uri": "https://localhost:8080/", "height": 286} id="dhtB60chW2PB" outputId="7f08f93a-526e-45a5-fb2d-3864d7e9906b"
#start the chat
print('Doc bot: I am Doctor Bot, or Doc Bot for short. I will answer your queries about chronic kidney disease. If you want to exit, type "bye".')
exit_list = ['exit', 'see you later', 'bye', 'quit', 'break']
while(True):
user_input = input()
if user_input.lower() in exit_list:
print('Doc bot: chat with you later!')
break
else :
if greeting_response(user_input) != None:
print('Doc bot: '+greeting_response(user_input))
else:
print('Doc bot: '+bot_response(user_input))
|
chatbot.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:pythondata]
# language: python
# name: conda-env-pythondata-py
# ---
# ### Find cost of single family homes with given zip codes.
import json
import requests
import pandas as pd
import csv
from config_zillow import api_key
import time
datafile1 = "../data/zipcodes.csv"
df1 = pd.read_csv(datafile1)
len(df1)
# ### Grab zipcode information.
df1.head(10)
# ### Grab information about single family homes from Zillow.
# +
url = f"http://www.zillow.com/webservice/GetDemographics.htm?zws-id={api_key}&zip="
zip_code = df1["zip"]
new_url = []
for x in zip_code:
n_url = f"{url}{x}"
new_url.append(n_url)
text_container = []
for y in new_url:
time.sleep(0.5)
try:
response = requests.get(y)
response = response.text
text_container.append(response)
except:
print ("Not available")
# -
df = pd.DataFrame({
"Text": text_container
})
df.to_csv("../data/dumpfile.csv")
zip_code = []
bbc = []
for x in text_container:
try:
ts = "</city><zip>"
aa = x.split(ts)
bb = aa[1].split("<")
bbc.append(aa[1])
zip_co = bb[0]
zip_code.append(zip_co)
except:
continue
cost_home = []
for r in bbc:
s = 'Median Single Family Home Value</name><values><nation><value type="USD">'
t = 'Median Single Family Home Value</name><values><zip><value type="USD">'
try:
aaa = r.split(t)
bbb = aaa[1].split("<")
home_co = bbb[0]
cost_home.append(home_co)
except:
aaaa = r.split(s)
bbbb = aaaa[1].split("<")
home_co = bbbb[0]
cost_home.append(home_co)
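# Splitting the raw XML on literal markers is fragile. A sketch of the same extraction with `xml.etree.ElementTree`, run against a minimal made-up payload; the tag layout is inferred from the split markers above and may not match the real response exactly:

```python
import xml.etree.ElementTree as ET

# Made-up response fragment shaped like the markers used above.
sample = """
<response>
  <region><city>Austin</city><zip>78701</zip></region>
  <attribute>
    <name>Median Single Family Home Value</name>
    <values><zip><value type="USD">412000</value></zip></values>
  </attribute>
</response>
"""

root = ET.fromstring(sample)
zip_code = root.findtext(".//region/zip")
value = root.findtext(".//attribute/values/zip/value")
print(zip_code, value)  # 78701 412000
```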
# +
data = pd.DataFrame({
"Zip Code": zip_code,
"Cost of Single Family Home": cost_home
})
data["Zip Code"] = data["Zip Code"].astype("int")
data["Cost of Single Family Home"] = data["Cost of Single Family Home"].astype("float64")
data.to_csv("../data/Single_Family_Home_Cost.csv")
data.head(20)
# -
# ### Grab information about single family homes from realtor.com csv file.
# +
datafile2 = "../data/realtor_info.csv"
df2 = pd.read_csv(datafile2)
df2["Median Listing Price"] = df2["Median Listing Price"].astype("float64")
df2["ZipCode"] = df2["ZipCode"].astype("int")
df2 = df2[["ZipCode", "Median Listing Price"]]
df2 = df2.rename(columns ={"ZipCode": "Zip Code"})
df2.head()
# -
# ### Merge the two tables. Find Average.
data1 = pd.merge(data, df2, on = "Zip Code", how = "left")
data1.head()
data1["Median Listing Price"] = data1["Median Listing Price"].fillna(0)
data1.head(300)
# ### Conclusion: do not use this merged table (data1), and do not bother with the "df2" table: its prices differ substantially from Zillow's, and the dataset is missing information for many zip codes.
#
# # USE "data" table or "Single_Family_Home_Cost.csv"
|
src/get_single_family_cost.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Initialisation Cell
# You should always put imported modules here
from math import *
import numpy as np
import numpy.testing as nt
import numpy.linalg as LA
from matplotlib import pyplot as plt
np.set_printoptions(suppress=True, precision=7)
# + [markdown] deletable=false editable=false nbgrader={"checksum": "c35e3ff63cb1ec8fa911a6a2f13b1a1e", "grade": false, "grade_id": "cell-f88fea7cef2649c6", "locked": true, "schema_version": 1, "solution": false}
# # CDES Honours - Test 1
#
#
# ## Instructions
#
# * Read all the instructions carefully.
# * The test consists of **50 Marks**, with **two** hours available.
# * The written section is to be answered in the book provided.
# * You must only access Moodle tests and NOT Moodle.
# * The programming section is to be answered within this Jupyter notebook and resubmitted to Moodle.
# * Do not rename the notebook, simply answer the questions and resubmit the file to Moodle.
# * The moodle submission link will expire at exactly **11:15** and **NO** late submission will be accepted. Please make sure you submit timeously!
# * You may use the **Numpy** help file at: https://docs.scipy.org/doc/numpy-1.15.4/reference/
# * **NB!!!** Anyone caught using Moodle (and its file storing), flash drives or external notes will be awarded zero and reported to the legal office.
# + [markdown] deletable=false editable=false nbgrader={"checksum": "1ae5a65d1f61948f4dd5931cc6cee088", "grade": false, "grade_id": "cell-edf1cfb9194ed999", "locked": true, "schema_version": 1, "solution": false}
# ## Written Section
#
# * Answer the following questions in the answer book provided.
#
#
# ### Question 1 (15 Marks)
#
#
# 1. Consider the following equation:
# $$
# v_t = a v_{xx}.
# $$
#
# 1. Set up an **implicit** scheme using a forward difference approximation for time and a Crank-Nicolson difference approximation for the spatial variable. (5 marks)
# 2. Using the von Neumann stability analysis, discuss the stability of the scheme given in (A). You may assume that $a >0$. (10 marks)
#
# ### Question 2 (5 Marks)
#
# 2. Given the equation produced by the operation $P = \dfrac{\partial}{\partial t} + \alpha\dfrac{\partial}{\partial x}$:
#
# $$
# Pu = u_t + \alpha u_{x}, \qquad \alpha > 0.
# $$
#
# Evaluate the consistency of the FTFS scheme with the difference operator $P_{\Delta t, \Delta x}$.
#
# + [markdown] deletable=false editable=false nbgrader={"checksum": "8a49f75b147546310c4220b1f5677b5a", "grade": false, "grade_id": "cell-8d0b2b333cc0caf3", "locked": true, "schema_version": 1, "solution": false}
# ## Programming Section
#
# ### Question 1
#
# Given the heat equation where:
# $$
# u_t = u_{xx}, \quad 0 < x < 1, \quad t \geq 0, \quad u(0, t) = u(1, t) = 0, \quad u(x, 0) = \sin(\pi x),
# $$
# and:
# $$
# \Delta x = 0.1, \quad \Delta t = 0.0005.
# $$
# Note: the analytical solution is:
# $$
# u(x, t) = e^{-\pi^2 t}\sin(\pi x)
# $$
#
# Implement an FTCS scheme to find an approximation at $t = 0.5$. The function should return the solution matrix $U$.
#
# + deletable=false nbgrader={"checksum": "c165af7d270727d201b17d1ac56e572c", "grade": false, "grade_id": "cell-0dbefe2a1ec6fa65", "locked": false, "schema_version": 1, "solution": true}
def heat_eq(dt, dx, tf, f, D, alpha, beta):
# YOUR CODE HERE
raise NotImplementedError()
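# For self-study after attempting the question, a minimal FTCS sketch (not the graded solution; the grid spacing on [0, 1], the row-per-time-level layout of U, and the Dirichlet boundary handling via alpha/beta are assumptions matched to the visible unit test):

```python
import numpy as np

def heat_eq_ftcs(dt, dx, tf, f, D, alpha, beta):
    """FTCS for u_t = D u_xx on [0, 1]; rows of U are time levels."""
    x = np.arange(0.0, 1.0 + dx / 2, dx)
    nt = int(round(tf / dt))
    r = D * dt / dx**2                      # stable for r <= 1/2
    U = np.zeros((nt + 1, x.size))
    U[0] = f(x)
    U[0, 0], U[0, -1] = alpha, beta
    for n in range(nt):
        U[n + 1, 1:-1] = U[n, 1:-1] + r * (U[n, 2:] - 2 * U[n, 1:-1] + U[n, :-2])
        U[n + 1, 0], U[n + 1, -1] = alpha, beta
    return U
```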
# + deletable=false editable=false nbgrader={"checksum": "2aed94a2b9effd0e38bc519b8de28acd", "grade": true, "grade_id": "cell-1cecedc20bff111f", "locked": true, "points": 1, "schema_version": 1, "solution": false}
# Run this test cell to check your code
# Do not delete this cell
# 1 mark
# Unit test
dx = 0.1
dt = 0.005
tf = 0.5
f = lambda x: np.sin(np.pi * x)
D = 1
alpha = 0
beta = 0
tans = np.array([0. , 0.309017 , 0.5877853, 0.809017 , 0.9510565, 1. ,
0.9510565, 0.809017 , 0.5877853, 0.309017 , 0. ])
ans = heat_eq(dt, dx, tf, f, D, alpha, beta)
nt.assert_array_almost_equal(tans, ans[0])
print('Test case passed!!!')
# + deletable=false editable=false nbgrader={"checksum": "dd4959a42acf1c143097ee58a85eebc0", "grade": true, "grade_id": "cell-f4fc955c9d57585a", "locked": true, "points": 9, "schema_version": 1, "solution": false}
# Hidden test
# No output will be produced
# 9 marks
# + [markdown] deletable=false editable=false nbgrader={"checksum": "97db8b11b2f76a825443f199d8a35b0b", "grade": false, "grade_id": "cell-cd569b4bdd821e8e", "locked": true, "schema_version": 1, "solution": false}
# ### Question 2
#
# Implement a Crank-Nicolson scheme on the heat equation given in Question 1 and find an approximation to $t = 0.5$. The function should return the solution matrix $U$.
#
# + deletable=false nbgrader={"checksum": "82fbedb7ed68686c00a9e5dcdb48a4a8", "grade": false, "grade_id": "cell-587bca95350529d1", "locked": false, "schema_version": 1, "solution": true}
def crankNicolson(dt, dx, tf, f, D, alpha, beta):
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"checksum": "3af9dbbb92cb73253eb9cf69fac990ff", "grade": true, "grade_id": "cell-107dc991ce9e3f86", "locked": true, "points": 1, "schema_version": 1, "solution": false}
# Run this test cell to check your code
# Do not delete this cell
# 1 mark
# Unit test
dx = 0.1
dt = 0.005
tf = 0.5
f = lambda x: np.sin(np.pi * x)
D = 1
alpha = 0
beta = 0
tans = np.array([0. , 0.309017 , 0.5877853, 0.809017 , 0.9510565, 1. ,
0.9510565, 0.809017 , 0.5877853, 0.309017 , 0. ])
ans = crankNicolson(dt, dx, tf, f, D, alpha, beta)
nt.assert_array_almost_equal(tans, ans[0])
print('Test case passed!!!')
# + deletable=false editable=false nbgrader={"checksum": "4fe551acb0617f0264ff9c4801db156e", "grade": true, "grade_id": "cell-f6fcf1058afb0340", "locked": true, "points": 9, "schema_version": 1, "solution": false}
# Hidden test
# No output will be produced
# 9 marks
# + [markdown] deletable=false editable=false nbgrader={"checksum": "d0841ccbffefd3430afd694658143b39", "grade": false, "grade_id": "cell-55fde39746f66279", "locked": true, "schema_version": 1, "solution": false}
# ### Question 3
#
# 1. Write a function that computes the absolute error and relative error (across the whole vector) at the final time `tf` of the two schemes in Questions 1 and 2. The function should also compute the norm at `tf` of the absolute difference between the approximate solution (of each scheme) and the true solution. The function should return 6 outputs, namely: `errExpAbs, errExpRel, errCNAbs, errCNRel, normExp, normCN`.
#
#
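# One reasonable reading of the requested quantities (an interpretation only, since the marking scheme is hidden): with true solution $u$ and approximation $\hat{u}$ at `tf`, the absolute error vector has entries $|u_i - \hat{u}_i|$, the relative error vector has entries $|u_i - \hat{u}_i| / |u_i|$ (undefined where $u_i = 0$, e.g. at the boundaries here), and the norm outputs are $\lVert u - \hat{u} \rVert$ for each scheme.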
# + deletable=false nbgrader={"checksum": "1d8125ef0d208fbcdd668f8c23de466e", "grade": false, "grade_id": "cell-8aa4ba40059ed31f", "locked": false, "schema_version": 1, "solution": true}
dx = 0.1
dt = 0.005
tf = 0.5
f = lambda x: np.sin(np.pi * x)
D = 1
alpha = 0
beta = 0
def errorFunction():
# YOUR CODE HERE
raise NotImplementedError()
# + deletable=false editable=false nbgrader={"checksum": "2ef6075fb1d886256b0f31329ee89735", "grade": true, "grade_id": "cell-b1824c2d2b507320", "locked": true, "points": 1, "schema_version": 1, "solution": false}
# Run this test cell to check your code
# Do not delete this cell
# 1 mark
# Unit test
ans = errorFunction()
# test output1 is vector
assert(ans[0].shape[0] > 1)
# test output2 is vector
assert(ans[1].shape[0] > 1)
# test output3 is vector
assert(ans[2].shape[0] > 1)
# test output4 is vector
assert(ans[3].shape[0] > 1)
# test output5 is scalar
assert(ans[4].shape == ())
# test output6 is scalar
assert(ans[5].shape == ())
print('Test case passed!!!')
# + deletable=false editable=false nbgrader={"checksum": "6a833ffd56a7fb63ebc4b7a53d2fe9bc", "grade": true, "grade_id": "cell-7648629763573407", "locked": true, "points": 1, "schema_version": 1, "solution": false}
# Hidden test
# No output will be produced
# 1 mark
# + deletable=false editable=false nbgrader={"checksum": "e00861733a4c1a5f395d1542e301f884", "grade": true, "grade_id": "cell-bded96bd354558ff", "locked": true, "points": 1, "schema_version": 1, "solution": false}
# Hidden test
# No output will be produced
# 1 mark
# + deletable=false editable=false nbgrader={"checksum": "2aff0b0c68c640d461b44cfd1bd8c499", "grade": true, "grade_id": "cell-231e1685f24f87b8", "locked": true, "points": 1, "schema_version": 1, "solution": false}
# Hidden test
# No output will be produced
# 1 mark
# + deletable=false editable=false nbgrader={"checksum": "9fd7534dacc809d4ed105ef38a64d132", "grade": true, "grade_id": "cell-e29e2e535603db60", "locked": true, "points": 1, "schema_version": 1, "solution": false}
# Hidden test
# No output will be produced
# 1 mark
# + deletable=false editable=false nbgrader={"checksum": "73298761943ac0f95eb3e024c21b35b8", "grade": true, "grade_id": "cell-1d5d79525191d64f", "locked": true, "points": 1, "schema_version": 1, "solution": false}
# Hidden test
# No output will be produced
# 1 mark
# + deletable=false editable=false nbgrader={"checksum": "922e8400acd40805a2cc4b81520a62ed", "grade": true, "grade_id": "cell-97582bdb056a1180", "locked": true, "points": 1, "schema_version": 1, "solution": false}
# Hidden test
# No output will be produced
# 1 mark
# + [markdown] deletable=false editable=false nbgrader={"checksum": "e4abd6885209d4ed5751a731ce7a91eb", "grade": false, "grade_id": "cell-2462c299c32ce0c2", "locked": true, "schema_version": 1, "solution": false}
# 2. Using the information of Question 3(1), plot the absolute and relative errors for both schemes on the same set of axes.
# + deletable=false nbgrader={"checksum": "7daf7a89cefc33ad9f03d6120810fac6", "grade": true, "grade_id": "cell-0f0850ce96969a59", "locked": false, "points": 3, "schema_version": 1, "solution": true}
# 3 marks
# YOUR CODE HERE
raise NotImplementedError()
|
2019/release/Test1/test1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # DSB 2018 Dataset Prep
# +
"""
dataset_prep.ipynb
Dataset preparation.
Written by <NAME>
Licensed under the MIT License (see LICENSE for details)
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import sys
import time
import numpy as np
import matplotlib.pyplot as plt
import visualize
from dataset import DSB18Dataset, _DEFAULT_DS_NUCLEI_TRAIN_OPTIONS
# -
# TODO: You MUST set dataset_root to the correct path on your machine!
if sys.platform.startswith("win"):
dataset_root = "E:/datasets/dsb18"
else:
dataset_root = '/media/EDrive/datasets/dsb18'
# ## Load dataset (using default options)
# Load the DSB 2018 dataset and create the additional data necessary to train our models. Note that this is done automatically but only **once**, the first time the object is created. After this, the dataset object will always use the additional information it has generated.
options = _DEFAULT_DS_NUCLEI_TRAIN_OPTIONS.copy()
options['sampling'] = 'normal'
start = time.time()
ds = DSB18Dataset(ds_root = dataset_root, options = options)
end = time.time()
m, s = divmod(end - start, 60)
h, m = divmod(m, 60)
print('Preparation took {:2.0f}h{:2.0f}m{:2.0f}s for {} images.'.format(h, m, s, ds.train_size))
# Display dataset configuration
ds.print_config()
|
dataset_prep.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Physics 256
# ## Lecture 03 - Python Containers
# <img src="http://upload.wikimedia.org/wikipedia/commons/d/df/Container_01_KMJ.jpg" width=400px>
import style
style._set_css_style('../include/bootstrap.css')
# ## Last Time
#
# [Notebook Link: 02_BasicPython.ipynb](./02_BasicPython.ipynb)
#
# - got python up and running
# - interpreter
# - idle
# - canopy
# - iPython Notebook
# - reserved Words
# - variable types and assignment
# - strings
# - getting help
#
# ## Today
#
# Python containers: lists
# ## Warm Up
# 1.
# <div class="span4 alert alert-info">
# What is the output of:
# <code>
# i,j = 1,1
# i += j + j*5
# print(i)
# </code>
# <ol class='a'>
# <li> 1 </li>
# <li>6</li>
# <li>7</li>
# <li>none/other/error</li>
# </ol>
# </div>
#
# 2. <!-- alert-warning with alert-success, alert-info or alert-danger -->
# <div class="span alert alert-success">
# Write a program that compares integer and float division of 9 and 4.
# Print the formatted result (integer or float with 2 decimal places) to the screen.
# </div>
#
# <!-- print('9//4 = %d while 9.0/4 = %6.3f' % (9//4,9.0/4)) -->
# ## Lists
#
# Lists are the basic container type in python.
#
# ### Creation
# + jupyter={"outputs_hidden": false}
# create using square brackets []
l = [1,2,3,4,5]
print(l)
# + jupyter={"outputs_hidden": false}
# concatenating lists with +
[1,2,3] + [4,5]
# + jupyter={"outputs_hidden": false}
# repeating elements in a list with *
[1,2,3]*3
# -
# ### Ranges
# range({start},stop+1,{step})
# + jupyter={"outputs_hidden": false}
# we can easily create an integer sequence with range
print(list(range(5)))
print(list(range(2,7)))
print(list(range(7,2,-1)))
# -
# ### Indexing
# + jupyter={"outputs_hidden": false}
# retrieving an element
l = [1,2,3,4,5]
l[0]
# + jupyter={"outputs_hidden": false}
# setting an element
l[1] = 8
print(l)
# + jupyter={"outputs_hidden": false}
# be careful to stay in bounds
l[10]
# -
# <div class="span alert alert-danger">
# They can hold nearly anything and are `0`-indexed. Be careful if you are coming from MATLAB.
# </div>
# <div class="span alert alert-success">
# <h3> List Challenge</h3>
# Create a list of all baseball teams in the AL East.
# <ul>
# <li> Blue Jays </li>
# <li> Red Sox </li>
# <li> Orioles </li>
# <li> Yankees </li>
# <li> Rays </li>
# </ul>
# <br/>
# Print out the first and last element in the list.
# </div>
#
# <!--
# aleast = 'Blue_Jays Red_<NAME>'
# AL_east = aleast.split()
# AL_east[0] = AL_east[0].replace('_',' ')
# AL_east[1] = AL_east[1].replace('_',' ')
# print(AL_east[0],'\t',AL_east[4])
# print(AL_east[0],'\t',AL_east[-1])
# -->
# ### Negative indices
# + jupyter={"outputs_hidden": false}
# negative indices count backwards from the end of the list
l = [1,2,3,4,5]
print(l[-1])
print(l[-2])
# -
# ### Mixed types
# + jupyter={"outputs_hidden": false}
# a great benefit of python lists is that they are general
# containers that can store mixed types
l = [10,'eleven', [12,13]]
# + jupyter={"outputs_hidden": false}
l[0]
# + jupyter={"outputs_hidden": false}
l[2]
# + jupyter={"outputs_hidden": false}
# multiple indices are used to retrieve elements in nested lists
l[2][0]
# + jupyter={"outputs_hidden": false}
# length of a list
len(l)
# + jupyter={"outputs_hidden": false}
# deleting an object from a list
del l[2]
l
# + jupyter={"outputs_hidden": false}
# testing for membership
l = [10,11,12,13,14]
13 in l
# + jupyter={"outputs_hidden": false}
13 not in l
# -
# <div class="span alert alert-danger">
# List assignment using '=' does not copy, but adds an additional handle to the list.
# This is a common GOTCHA for new python programmers!
# </div>
# ### Slicing
#
# alist[{start}:{end}{:step}]
# + jupyter={"outputs_hidden": false}
# slices extract a portion of a list that goes up to but does
# not include the upper bound. Mathematically this corresponds
# to [lower,upper)
l = [10,11,12,13,14]
l[1:3]
# + jupyter={"outputs_hidden": false}
# negative indices also work
l[1:-2]
# -
# <div class="span alert alert-danger">
# A slice of length 1 is still a list!
# </div>
# + jupyter={"outputs_hidden": false}
print(l[-4:-3])
print(l[-4])
print(l[1])
# + jupyter={"outputs_hidden": false}
# omitted boundaries are assumed to be either the start or end of the list
l[:3]
# + jupyter={"outputs_hidden": false}
l[-2:]
# + jupyter={"outputs_hidden": false}
# we can use strides to extract even
l[::2]
# + jupyter={"outputs_hidden": false}
# odd
l[1::2]
# + jupyter={"outputs_hidden": false}
# or any equally spaced elements
l[::3]
# -
# <div class="row">
# <div class="span alert alert-info">
# Question: What is the output of:
# <code>
# grades_256 = [99,90,79,85,88]
# grades_256[1:-1:2]
# </code>
# <ol class="a">
# <li>[90,85] </li>
# <li>[99,79,88] </li>
# <li>[79,90] </li>
# <li> none/other/error</li>
# </ol>
# </div>
# </div>
# + jupyter={"outputs_hidden": false}
# -
# ### List operations
# + jupyter={"outputs_hidden": false}
# counting the number of times an item appears
l = [10,10,10,11]
l.count(10)
# + jupyter={"outputs_hidden": false}
# append an item to a list
l = [10,11,12,13,14,13]
l.append(15)
l
# + jupyter={"outputs_hidden": false}
# insert an item into a list
l.insert(3,12.5)
l
# + jupyter={"outputs_hidden": false}
# remove a specified item from a list
l.remove(13)
l
# + jupyter={"outputs_hidden": false}
# get the index of a list element (a value not in the list raises ValueError)
print(l.index(14))
print(l.index(17.653))
# + jupyter={"outputs_hidden": false}
# remove the last item and return it
l = [12,23,45,67]
print(l.pop())
print(l)
print(l.pop())
print(l)
# + jupyter={"outputs_hidden": false}
help(list.pop)
# + jupyter={"outputs_hidden": false}
# reverse the list in place
l = [10,11,12,13,14]
l.reverse()
l
# + jupyter={"outputs_hidden": false}
# sort the list, a custom comparison can be used
l.sort()
l
# + jupyter={"outputs_hidden": false}
help(list.sort)
# -
# ### Extend vs. Append
# + jupyter={"outputs_hidden": false}
# append adds an item to a list
l = [1,2,3,4]
l.append([5,6])
l
# + jupyter={"outputs_hidden": false}
# while extend concatenates lists
l = [1,2,3,4]
l.extend([5,6])
l
# + jupyter={"outputs_hidden": false}
# you can extend with any iterable
l.extend('abc')
l
# + jupyter={"outputs_hidden": false}
# a list can easily be used as a stack
stack = [3,4,5]
stack.append(6)
stack.append(7)
stack
# + jupyter={"outputs_hidden": false}
stack.pop()
stack
# + jupyter={"outputs_hidden": false}
stack.pop()
stack.pop()
stack
# -
# ### Lists are mutable
# They can be changed in place
# + jupyter={"outputs_hidden": false}
l = [10,11,12,13,14]
l[1:3] = [5,6]
# -
# <div class="row">
# <div class="span4 alert alert-info">
# Question: What is the output of: `print(l)`
# <ol class="a">
# <li>[5,6]</li>
# <li>[10,5,6,13,14]</li>
# <li>[5,6,12,13,14]</li>
# <li> None of the above</li>
# </ol>
# </div>
# </div>
# ### List Assignment
# Assignment for lists provides a new *handle* for the data
# + jupyter={"outputs_hidden": false}
l1 = [1,2,3,4,5]
# + jupyter={"outputs_hidden": false}
# using '=' adds a new name for a list stored in memory
l2 = l1
print(l1)
print(l2)
# + jupyter={"outputs_hidden": false}
# changes to l1 will also change l2
l1[2] = -1
print(l1)
print(l2)
# + jupyter={"outputs_hidden": false}
# we can get around this by using the list() method to make a copy
l3 = list(l1)
l1[0] = -7
print(l1)
print(l2)
print(l3)
# + jupyter={"outputs_hidden": false}
from IPython.display import Image
Image(url='https://raw.githubusercontent.com/agdelma/IntroCompPhysics/master/Notebooks/data/lists.png')
# -
# <div class="span alert alert-danger">
# Be careful when creating a new list from an old one: plain assignment does not copy, so updates made through the new name also change the original list. If in doubt, make a copy.
# </div>
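One subtlety the warning above glosses over (an aside, not part of the original lecture): `list()` makes only a *shallow* copy, so nested lists are still shared between the copy and the original. `copy.deepcopy` copies nested structures too. A minimal sketch:

```python
import copy

# list() makes a shallow copy: top-level elements are copied,
# but nested lists are still shared with the original.
a = [1, [2, 3]]
b = list(a)
b[0] = 99        # safe: rebinding a top-level element does not affect a
b[1].append(4)   # shared: this mutates the nested list inside a too

print(a)  # [1, [2, 3, 4]]
print(b)  # [99, [2, 3, 4]]

# copy.deepcopy() copies nested structures as well
c = copy.deepcopy(a)
c[1].append(5)
print(a)  # unchanged: [1, [2, 3, 4]]
print(c)  # [1, [2, 3, 4, 5]]
```

For flat lists of numbers or strings, as in this lecture, `list()` (or a full slice `l[:]`) is all you need.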
# ### Strings as lists
#
# Strings can be treated as lists and indexed/sliced, but they are immutable!
# + jupyter={"outputs_hidden": false}
s = 'abcde'
print(s[1:3])
s[1:3] = 'xy'
# + jupyter={"outputs_hidden": false}
# here's how we do it
s = s[:1] + 'xy' + s[3:]
print(s)
# + jupyter={"outputs_hidden": true}
|
4-assets/BOOKS/Jupyter-Notebooks/03_Containers.ipynb
|
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.1.0
# language: julia
# name: julia-1.1
# ---
# # Introduction to DataFrames
# **[<NAME>](http://bogumilkaminski.pl/about/), July 16, 2019**
#
# Let's get started by loading the `DataFrames` package.
using DataFrames, Random
# ## Constructors and conversion
# ### Constructors
#
# In this section, you'll see many ways to create a `DataFrame` using the `DataFrame()` constructor.
#
# First, we could create an empty DataFrame,
DataFrame()
# Or we could call the constructor using keyword arguments to add columns to the `DataFrame`.
DataFrame(A=1:3, B=rand(3), C=randstring.([3,3,3]))
# We can create a `DataFrame` from a dictionary, in which case keys from the dictionary will be sorted to create the `DataFrame` columns.
x = Dict("A" => [1,2], "B" => [true, false], "C" => ['a', 'b'])
DataFrame(x)
# Rather than explicitly creating a dictionary first, as above, we could pass `DataFrame` arguments with the syntax of dictionary key-value pairs.
#
# Note that in this case, we use symbols to denote the column names and arguments are not sorted. For example, `:A`, the symbol, produces `A`, the name of the first column here:
DataFrame(:A => [1,2], :B => [true, false], :C => ['a', 'b'])
# Here we create a `DataFrame` from a vector of vectors, and each vector becomes a column.
DataFrame([rand(3) for i in 1:3])
# Passing a vector of scalars to the `DataFrame` constructor is not allowed.
DataFrame(rand(3))
# Instead, use a transposed vector if you have a vector of scalars (this way you effectively pass a two-dimensional array to the constructor, which is supported).
DataFrame(permutedims([1, 2, 3]))
# or pass a vector of `NamedTuple`s
v = [(a=1, b=2), (a=3, b=4)]
DataFrame(v)
# Pass a second argument to give the columns names in case you pass a vector of vectors.
DataFrame([1:3, 4:6, 7:9], [:A, :B, :C])
# Alternatively you can pass a `NamedTuple` of vectors:
n = (a=1:3, b=11:13)
DataFrame(n)
# Here we create a `DataFrame` from a matrix,
DataFrame(rand(3,4))
# and here we do the same but also pass column names.
DataFrame(rand(3,4), Symbol.('a':'d'))
# We can also construct an uninitialized DataFrame.
#
# Here we pass column types, names and number of rows; we get `missing` in column :C because `Any >: Missing`.
DataFrame([Int, Float64, Any], [:A, :B, :C], 1)
# Here we create a `DataFrame` where `:C` is `#undef`
DataFrame([Int, Float64, String], [:A, :B, :C], 1)
# To initialize a `DataFrame` with column names, but no rows use
DataFrame([Int, Float64, String], [:A, :B, :C])
# or
DataFrame(A=Int[], B=Float64[], C=String[])
# Finally, we can create a `DataFrame` by copying an existing `DataFrame`.
#
# Note that `copy` also copies the vectors.
x = DataFrame(a=1:2, b='a':'b')
y = copy(x)
(x === y), isequal(x, y), (x.a == y.a), (x.a === y.a)
# Calling `DataFrame` on a `DataFrame` object works like `copy`.
x = DataFrame(a=1:2, b='a':'b')
y = DataFrame(x)
(x === y), isequal(x, y), (x.a == y.a), (x.a === y.a)
# You can avoid copying of columns of a data frame by passing `copycols=false` keyword argument or using `DataFrame!` constructor.
x = DataFrame(a=1:2, b='a':'b')
y = DataFrame(x, copycols=false)
(x === y), isequal(x, y), (x.a == y.a), (x.a === y.a)
x = DataFrame(a=1:2, b='a':'b')
y = DataFrame!(x)
(x === y), isequal(x, y), (x.a == y.a), (x.a === y.a)
# The same rule applies to other constructors
a = [1, 2, 3]
df1 = DataFrame(a=a)
df2 = DataFrame(a=a, copycols=false)
df1.a === a, df2.a === a
# You can create a similar uninitialized `DataFrame` based on an original one:
similar(x)
similar(x, 0) # number of rows in a new DataFrame passed as a second argument
# You can also create a new `DataFrame` from `SubDataFrame` or `DataFrameRow` (discussed in detail later in the tutorial)
sdf = view(x, [1,1], :)
typeof(sdf)
DataFrame(sdf)
dfr = x[1, :]
DataFrame(dfr)
# ### Conversion to a matrix
#
# Let's start by creating a `DataFrame` with two rows and two columns.
x = DataFrame(x=1:2, y=["A", "B"])
# We can create a matrix by passing this `DataFrame` to `Matrix`.
Matrix(x)
# This would work even if the `DataFrame` had some `missing`s:
x = DataFrame(x=1:2, y=[missing,"B"])
Matrix(x)
# In the two previous matrix examples, Julia created matrices with elements of type `Any`. We can see more clearly that the type of matrix is inferred when we pass, for example, a `DataFrame` of integers to `Matrix`, creating a 2D `Array` of `Int64`s:
x = DataFrame(x=1:2, y=3:4)
Matrix(x)
# In this next example, Julia correctly identifies that `Union` is needed to express the type of the resulting `Matrix` (which contains `missing`s).
x = DataFrame(x=1:2, y=[missing,4])
Matrix(x)
# Note that we can't force a conversion of `missing` values to `Int`s!
Matrix{Int}(x)
# ### Conversion to `NamedTuple` related tabular structures
x = DataFrame(x=1:2, y=["A", "B"])
using Tables
# First we convert a `DataFrame` into a `NamedTuple` of vectors
ct = Tables.columntable(x)
# Next we convert it into a vector of `NamedTuples`
rt = Tables.rowtable(x)
# We can perform the conversions back to a `DataFrame` using a standard constructor call:
DataFrame(ct)
DataFrame(rt)
# ### Handling of duplicate column names
#
# We can pass the `makeunique` keyword argument to allow passing duplicate names (they get deduplicated)
df = DataFrame(:a=>1, :a=>2, :a_1=>3; makeunique=true)
# Otherwise, duplicates are not allowed.
df = DataFrame(:a=>1, :a=>2, :a_1=>3)
# Finally, observe that `nothing` is not printed when displaying a `DataFrame`:
DataFrame(x=[1, nothing], y=[nothing, "a"])
|
01_constructors.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from dateandtimeparser import DateParser
# +
S1 = 'Today is 10/12/16 and tomorrow is January 01 2019.'
S2 = 'entries are due by January 04, 2017 at 8:00pm'
S3 = 'created 01/15/2005 by ACME Inc. and associates.'
S4 = 'Dec.01.2015 - Dec 31 2015 - Dec,12,2015 - Dec,12,2015 - Dec,12th,2015 - DEC122014 '
texts = [S1, S2, S3, S4]
# -
for text in texts:
TEXT = f"TEXT: {text}\n"
try:
parsed_dates = DateParser(text=text, start_year=2000, end_year=2020) # pass your own custom formats, or just leave it.
print(TEXT)
for date in parsed_dates.date:
print(date)
print("-"*100)
print("*"*100)
except Exception:
print(TEXT)
print("EXTRACTION FAILED.\n")
print("*"*100)
|
notebooks/lib-usage-notebook.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# # 📝 Exercise M4.02
#
# The goal of this exercise is to build an intuition on what will be the
# parameters' values of a linear model when the link between the data and the
# target is non-linear.
#
# First, we will generate such non-linear data.
#
# <div class="admonition tip alert alert-warning">
# <p class="first admonition-title" style="font-weight: bold;">Tip</p>
# <p class="last"><tt class="docutils literal">np.random.RandomState</tt> allows you to create a random number generator that can
# later be used to get deterministic results.</p>
# </div>
# + vscode={"languageId": "python"}
import numpy as np
# Set the seed for reproducibility
rng = np.random.RandomState(0)
# Generate data
n_sample = 100
data_max, data_min = 1.4, -1.4
len_data = (data_max - data_min)
data = rng.rand(n_sample) * len_data - len_data / 2
noise = rng.randn(n_sample) * .3
target = data ** 3 - 0.5 * data ** 2 + noise
# -
# <div class="admonition note alert alert-info">
# <p class="first admonition-title" style="font-weight: bold;">Note</p>
# <p class="last">To ease the plotting, we will create a Pandas dataframe containing the data
# and target</p>
# </div>
# + vscode={"languageId": "python"}
import pandas as pd
full_data = pd.DataFrame({"data": data, "target": target})
# + vscode={"languageId": "python"}
import seaborn as sns
_ = sns.scatterplot(data=full_data, x="data", y="target", color="black",
alpha=0.5)
# -
# We observe that the link between the data `data` and vector `target` is
# non-linear. For instance, `data` could represent the years of
# experience (normalized) and `target` the salary (normalized). Therefore, the
# problem here would be to infer the salary given the years of experience.
#
# Using the function `f` defined below, find both the `weight` and the
# `intercept` that you think will lead to a good linear model. Plot both the
# data and the predictions of this model.
# + vscode={"languageId": "python"}
def f(data, weight=0, intercept=0):
target_predict = weight * data + intercept
return target_predict
# + vscode={"languageId": "python"}
# solution
predictions = f(data, weight=1.2, intercept=-0.2)
# + tags=["solution"] vscode={"languageId": "python"}
ax = sns.scatterplot(data=full_data, x="data", y="target", color="black",
alpha=0.5)
_ = ax.plot(data, predictions)
# -
# Compute the mean squared error for this model
# + vscode={"languageId": "python"}
# solution
from sklearn.metrics import mean_squared_error
error = mean_squared_error(target, f(data, weight=1.2, intercept=-0.2))
print(f"The MSE is {error}")
# -
# Train a linear regression model on this dataset.
#
# <div class="admonition warning alert alert-danger">
# <p class="first admonition-title" style="font-weight: bold;">Warning</p>
# <p class="last">In scikit-learn, by convention <tt class="docutils literal">data</tt> (also called <tt class="docutils literal">X</tt> in the scikit-learn
# documentation) should be a 2D matrix of shape <tt class="docutils literal">(n_samples, n_features)</tt>.
# If <tt class="docutils literal">data</tt> is a 1D vector, you need to reshape it into a matrix with a
# single column if the vector represents a feature or a single row if the
# vector represents a sample.</p>
# </div>
# + vscode={"languageId": "python"}
from sklearn.linear_model import LinearRegression
# solution
linear_regression = LinearRegression()
data_2d = data.reshape(-1, 1)
linear_regression.fit(data_2d, target)
# -
# Compute predictions from the linear regression model and plot both the data
# and the predictions.
# + vscode={"languageId": "python"}
# solution
predictions = linear_regression.predict(data_2d)
# + tags=["solution"] vscode={"languageId": "python"}
ax = sns.scatterplot(data=full_data, x="data", y="target", color="black",
alpha=0.5)
_ = ax.plot(data, predictions)
# -
# Compute the mean squared error
# + vscode={"languageId": "python"}
# solution
error = mean_squared_error(target, predictions)
print(f"The MSE is {error}")
|
notebooks/M4 Linear Models - C2 Non-linear Features - S1 ex M4.02.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Day 1
# ### Saving Santa with R stats
# ##### Fuel Counter-upper
# +
#create a function to calculate the fuel requirements
fuel_requirements <- function(mass_list){
result <- floor(mass_list/3)-2
return(result)
}
#test to verify that it is calculating things correctly
fuel_requirements(12) == 2
fuel_requirements(14) == 2
fuel_requirements(1969) == 654
fuel_requirements(100756) == 33583
# -
#one more sanity check
test <- c(12,14,16)
fuel_requirements(test)
#input data
input_list <- c(81157,
80969,
113477,
81295,
70537,
90130,
123804,
94276,
139327,
123719,
107814,
122142,
61204,
135309,
62810,
85750,
132568,
76450,
122948,
124649,
102644,
80055,
60517,
125884,
125708,
99051,
137158,
100450,
55239,
66758,
123848,
88711,
113047,
125528,
59285,
103978,
93047,
98038,
143019,
92031,
54353,
115597,
105629,
80411,
134966,
135473,
77357,
65776,
71096,
66926,
97853,
80349,
141914,
127221,
102492,
143587,
111493,
84711,
59826,
135652,
103334,
138211,
65088,
82244,
95011,
78760,
56691,
62070,
146134,
81650,
76904,
98838,
89629,
59950,
50390,
78616,
99731,
53831,
81273,
103980,
58485,
137684,
142457,
111050,
141916,
55567,
141945,
100794,
136425,
77911,
137114,
77450,
132048,
143066,
136805,
114135,
61565,
67286,
85512,
137493)
#answer for part 1
sum(fuel_requirements(input_list))
#create a function to calculate the additional fuel with a while loop
additional_fuel <- function(modules){
addl_fuel <- 0
while (modules >= 0){
modules <- fuel_requirements(modules)
if (modules > 0){
addl_fuel <- addl_fuel+modules
}
}
return(addl_fuel)
}
#Calculate the total fuel with both functions
total_fuel <- 0
for (item in input_list){
total_fuel <- additional_fuel(item)+total_fuel
}
total_fuel
|
Day 1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Singular Spectrum Analysis
#
# ## Original formulation
#
# REF: https://en.wikipedia.org/wiki/Singular_spectrum_analysis
#
# Additional REF: https://www.sciencedirect.com/science/article/abs/pii/S105120041830530X
# +
import numpy as np
import soundfile as sf
from scipy.linalg import svd
from scipy.stats import zscore
import matplotlib.pyplot as plt
# %matplotlib inline
# -
# ### Loading signal
# +
# load bioacoustic signal (an anuran call recorded into the rainforest)
x, fs = sf.read('hylaedactylus_1_44khz.flac')
x = zscore(x) # signal normalization
plt.figure(figsize=(18,3))
plt.plot(x)
plt.ylabel('Amplitude [V]')
plt.xlabel('Time [sec]')
plt.title('Original signal')
plt.show()
# -
# ### Step 1: embedding
# +
L = 20 # time lag for the autocorrelation matrix
N = len(x)
K = N-L+1
X = np.zeros((L,K))
for i in range(L):
X[i,:] = x[i:K+i] # building trajectory matrix
print("Dimensions of trajectory matrix (x,y):", X.shape) # dimensions of trajectory matrix
# -
# ### Step 2: SVD of the autocorrelation matrix
S = np.dot(X,X.T) # Equivalent to autocorrelation matrix
U, d, _ = svd(S) # decomposition
V = np.dot(X.T,U)
# #### Singular spectrum visualization
plt.figure(figsize=(15,4))
plt.plot((d/np.sum(d))*100,'--o') # d => eigenvalues, Normalized singular spectrum
plt.title('Singular Spectrum')
plt.xlabel('Eigenvalue Number')
plt.ylabel('Eigenvalue (%)')
plt.show()
# ### Step 3: grouping
c = [0,1,2,3] # Selection of components to generate the reconstruction.
Vt = V.T
rca = np.dot(U[:,c],Vt[c,:])
# ### Step 4: reconstruction
# +
X_hat = np.zeros((L,N))
for i in range(L):
X_hat[i,i:K+i] = rca[i,:] # instead of averaging diagonals, we make a shift and take the averages by columns.
y = np.mean(X_hat, axis=0) # final reconstruction
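As a small self-contained check of the shift-and-average trick above (a toy signal rather than the recording; an aside, not part of the original analysis): averaging the columns of the shifted matrix recovers the interior samples exactly, but attenuates the first and last $L-1$ samples, where fewer than $L$ rows contribute while the mean still divides by $L$.

```python
import numpy as np

x = np.arange(1.0, 9.0)   # toy signal: 1..8
L = 3
N = len(x)
K = N - L + 1

# trajectory matrix, exactly as in Step 1
X = np.zeros((L, K))
for i in range(L):
    X[i, :] = x[i:K + i]

# shift-and-column-average "reconstruction" of X itself
X_hat = np.zeros((L, N))
for i in range(L):
    X_hat[i, i:K + i] = X[i, :]
y = np.mean(X_hat, axis=0)

# interior samples are recovered exactly
assert np.allclose(y[L - 1:N - L + 1], x[L - 1:N - L + 1])
# edge samples are attenuated (classical diagonal averaging would
# instead divide by the number of terms actually present)
assert y[0] == x[0] / L
```

The effect is negligible for a long recording like the one used here, but worth keeping in mind when `L` is large relative to the signal length.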
# +
print('RMS Error: ',np.sqrt(np.sum(np.power(x-y,2))))
plt.figure(figsize=(18,4))
plt.plot(x, label='Original')
plt.plot(y, 'red', label='Reconstruction')
plt.xlim([25800,27000])
plt.ylabel('Amplitude [V]')
plt.xlabel('Time [sec]')
plt.title('Segment of signal overlapped with reconstruction')
plt.legend(loc='lower left')
plt.show()
plt.figure(figsize=(18,4))
plt.plot(x, label='Original')
plt.plot(y,'red', label='Reconstruction')
plt.xlim([20000,len(x)])
plt.ylabel('Amplitude [V]')
plt.xlabel('Time [sec]')
plt.title('Signal overlapped with reconstruction')
plt.legend(loc='lower left')
plt.show()
plt.figure(figsize=(18,4))
plt.plot(x-y,'green')
plt.xlim([20000,len(x)])
plt.ylabel('Amplitude [V]')
plt.xlabel('Time [sec]')
plt.title('Residuals (original - reconstruction)')
plt.show()
# -
# ### Example of reconstruction with a single component.
# +
# Step 3: selection
c = [4] # Only component 5
Vt = V.T
rca = np.dot(U[:,c],Vt[c,:])
# Step 4: Reconstruction
X_hat = np.zeros((L,N))
for i in range(L):
X_hat[i,i:K+i] = rca[i,:]
y = np.mean(X_hat, axis=0)
# -
plt.figure(figsize=(18,4))
plt.plot(x, label='Original')
plt.plot(y,'red', label='Reconstruction')
plt.xlim([9000,18000])
plt.ylim([-6,6])
plt.ylabel('Amplitude [V]')
plt.xlabel('Time [sec]')
plt.title('Signal')
plt.legend(loc='lower left')
plt.show()
|
SSA_Classic_formulation.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D5_NetworkCausality/student/W3D5_Intro.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>[](https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W3D5_NetworkCausality/student/W3D5_Intro.ipynb)
# + [markdown] pycharm={"name": "#%% md\n"}
# # Intro
# + [markdown] colab_type="text"
# **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
#
# <p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Overview
#
# Today we will talk about causality, the tools we use to ask if and how a variable influences other variables. Causal questions are everywhere in neuroscience. How do neurons influence one another? How does a drug affect neurons? How does a stimulus affect behavior? We will talk about how we can answer questions of a causal kind. In tutorial 1 we will learn about the definition of causality and see how it naturally emerges from the consideration of perturbations to a system. In tutorial 2 we will learn how correlations often seem to be a proxy for causality but also how for larger systems this intuition is typically misleading. In tutorial 3 we will see the standard approach, where we correct for unmeasured variables with regression, and we will see the biases introduced by this. Lastly, in tutorial 4 we will learn about instrumental variables, a strategy that sometimes allows us to learn about causality in an unbiased way. The intro teaches the basic concepts while the outro gives a rather broad overview of the need to do careful science. Overall, the day teaches the unavoidable problems that come with asking causal questions, and approaches that can sometimes make estimating causal effects possible.
#
# Causal questions are important all across neuroscience. For example, model fitting (W1D3), machine learning (W1D4), and dimensionality reduction (W1D5), are often used to argue for or against causal models. For example, a regression may be used to argue that a brain region influences another brain region based on fMRI data. Today’s materials give us a better understanding of the problems that come with the approach. There are tight links between causality and Bayesian statistics (W3D1) where Bayesian techniques are used for the estimation of causality (see e.g. the work of Judea Pearl). Causality is often seen as the bedrock of science, today’s materials above all produce clarity about what it is.
#
# Causality approaches are central across neuroscience. When we run experiments, we often randomly assign subjects to treatment versus control groups. Alternatively, we stimulate animals at random points in time. These methods are all versions of the randomized perturbations we discuss in tutorial 1 and probably underlie a good part of all neuroscience. We also frequently use model fitting (see tutorials 2 and 3) to drive arguments about how brains work. This is common for spike data, EEG data, imaging data, etc. Lastly, we should sometimes be able to use instrumental variable techniques to estimate the effects of e.g. drug treatments. Today’s materials are simultaneously at the heart of the field and frequently ignored.
#
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Video
# + cellView="form" pycharm={"name": "#%%\n"}
# @markdown
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1U5411a7vy", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"GjZfhXfn42w", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Slides
# + cellView="form" pycharm={"name": "#%%\n"}
# @markdown
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/fsxkt/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
|
tutorials/W3D5_NetworkCausality/student/W3D5_Intro.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
train = pd.read_csv('C:\\dss\\team_project2\\walmart\\train.csv')
train
test = pd.read_csv('C:\\dss\\team_project2\\walmart\\test.csv')
test
train.isna().sum()
train.info()
train.describe()
sns.heatmap(data = train.corr(), annot=True, fmt='.2f', linewidths=5)
plt.show()
# unique values and their counts for every column
for i in train.columns:
print(i, train[i].unique(),'\n' , len(train[i].unique()))
# ### TripType
# - the purpose of the shopping trip
train[train["FinelineNumber"] == 999]
train['TripType'].unique(), len(train['TripType'].unique())
plt.plot(train["TripType"])
plt.show()
# ### Upc
len(train["Upc"].unique()),train["Upc"].unique()
plt.scatter(train["Upc"], train["TripType"])
plt.show()
del train["Upc"]
# ### VisitNumber
# - unique id of each visitor
len(train["VisitNumber"].unique()),train["VisitNumber"].unique()
plt.plot(train['VisitNumber'])
plt.show()
plt.figure(figsize=(10,10))
plt.ylim(0,50)
plt.scatter(train["VisitNumber"], train["TripType"])
plt.show()
# ### Weekday
len(train["Weekday"].unique()),train["Weekday"].unique()
plt.plot(train['Weekday'])
plt.show()
plt.figure(figsize=(10,10))
plt.ylim(0,50)
plt.scatter(train["Weekday"], train["TripType"])
plt.show()
plt.figure(figsize=(10,10))
plt.ylim(0,50)
sns.boxplot(train["Weekday"], train["TripType"])
plt.show()
# ### ScanCount
# - number of items purchased
# - -1 means a refund
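# As a quick illustration of this convention (toy numbers invented for illustration, not taken from the dataset), summing ScanCount nets purchases against refunds:

```python
# Hypothetical ScanCount values: positive = items bought, -1 = one refunded item
scan_counts = [2, 1, -1, 3, -1]

net_items = sum(scan_counts)                      # refunds subtract from the total
n_refunds = sum(1 for c in scan_counts if c < 0)  # count refund rows

print(net_items, n_refunds)  # 4 2
```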
len(train["ScanCount"].unique()),train["ScanCount"].unique()
plt.plot(train['ScanCount'])
plt.show()
plt.figure(figsize=(10,10))
plt.ylim(0,50)
plt.scatter(train["ScanCount"], train["TripType"])
plt.show()
plt.figure(figsize=(10,10))
# plt.ylim(0,50)
sns.boxplot(train["TripType"], train["ScanCount"])
plt.show()
# ### DepartmentDescription
# - product category
len(train["DepartmentDescription"].unique()),train["DepartmentDescription"].unique()
# object dtype
plt.figure(figsize=(15,15))
# plt.Xlim(0,50)
sns.boxplot(train["TripType"], train["DepartmentDescription"])
plt.show()
plt.figure(figsize=(15,15))
sns.boxplot(train[train["TripType"]<500]["TripType"], train["DepartmentDescription"])
plt.xlim(0,50)
plt.show()
plt.figure(figsize=(15,15))
sns.barplot(train["TripType"], train["DepartmentDescription"])
plt.xlim(0, 300)
plt.show()
# When DepartmentDescription is OPTICAL - LENSES, TripType is always 5.
train[train["DepartmentDescription"] == "OPTICAL - LENSES"]["TripType"][train[train["DepartmentDescription"] == "OPTICAL - LENSES"]["TripType"] != 5]
# ### FinelineNumber
# - category code assigned internally by Walmart
len(train["FinelineNumber"].unique()),train["FinelineNumber"].unique()
plt.figure(figsize=(10,10))
plt.ylim(0,50)
plt.scatter(train["FinelineNumber"], train["TripType"])
plt.show()
# ### Relationship between FinelineNumber and DepartmentDescription
plt.figure(figsize=(15,15))
sns.barplot(train['FinelineNumber'], train['DepartmentDescription'])
plt.show()
|
walmart_type_EDA.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pickle
import re
import string
import warnings
from collections import Counter, defaultdict
import cache_magic
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from langdetect import detect
pd.set_option('display.max_colwidth',100)
warnings.filterwarnings('ignore')
# -
df = pd.read_excel('PM_MMS_Speech.xlsx',dtype={'title':'string',
'date':'datetime64',
'place':'string',
'url':'string',
'text':'string'})
df.head()
df.info()
df.dropna(inplace= True)#Drops rows with null values inplace
df.reset_index(inplace= True,drop=True)
small_speech = (df.text.str.len()<500)
small_speech.value_counts()
#Dropping speeches which are small
df.drop(labels = np.flatnonzero(small_speech),inplace = True)
df.reset_index(inplace= True,drop=True)
#Checking the language of the speeches
df_language = df.text.apply(detect)
df_language.value_counts(dropna=False)
hindi_indices = np.flatnonzero(df_language == 'hi')
df.text[hindi_indices][:5]
#Dropping hindi speeches
df.drop(labels = hindi_indices,inplace = True)
df.reset_index(inplace= True,drop=True)
# +
#removing unicode characters and extra spaces
def remove_unicode(text):
return text.encode('ascii', 'ignore').decode('utf8')
def remove_tabs(text):
return re.sub(r'[\r\n\t]', ' ', text)
# -
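# A quick sanity check of the two helpers above on a made-up sample string (the string is invented for illustration):

```python
import re

def remove_unicode(text):
    # Drop any non-ASCII characters (same approach as above)
    return text.encode('ascii', 'ignore').decode('utf8')

def remove_tabs(text):
    # Collapse carriage returns, newlines and tabs into single spaces
    return re.sub(r'[\r\n\t]', ' ', text)

sample = 'Namaste\u00a0India!\tA new\nbeginning.'
cleaned = remove_tabs(remove_unicode(sample))
print(cleaned)
```

Note that dropping non-ASCII characters deletes them outright (the non-breaking space leaves no space behind), which is why the word boundary is lost in the output.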
df['clean_text'] = df.text.apply(remove_unicode).apply(remove_tabs)
df.info()
#Concatenating all speeches to form a single string corpus_text
corpus_text = df.clean_text.str.cat(sep=' ')
#Writing corpus_text to a file
with open("corpus_raw2.txt", "w") as file:
file.write(corpus_text)
#Function to view dictionary items
def view_dict(dictionary,num):
for key in list(dictionary.keys())[:num]:
value = dictionary[key]
print (key,value)
# +
#Loading Embeddings - Code taken from kaggle notebook
def load_embed(file):
def get_coefs(word,*arr):
return word, np.asarray(arr, dtype='float32')
embeddings_index = dict(get_coefs(*o.split(" ")) for o in open(file, encoding='latin'))
return embeddings_index
glove = "glove.840B.300d/glove.840B.300d.txt"
embed_glove = load_embed(glove)
# -
print(list(embed_glove.keys())[:10])
words = ['cant', "can't",'ca','nt',"n't" ,"'s",'her.','her','Her','##',' ', '\n','.', ',' ,'#.#']
print ([word in embed_glove for word in words])
#Code taken from kaggle notebook
#Code to check how much of vocabulary and corpus is covered by the words in glove
#returns words which are not in glove vocab in decreasing order of their occurrence count
def check_coverage(vocab, embeddings_index):
known_words = {}
unknown_words = {}
num_known_words = 0
num_unknown_words = 0
for word in vocab.keys():
try:
#if a vocabulary word is in glove, then adding that word to known_words and increasing num_known_words count by 1
known_words[word] = embeddings_index[word]
num_known_words += vocab[word]
except:
#if a vocabulary word is not in glove, then adding that word to unknown_words
# and increasing num_unknown_words count by 1
unknown_words[word] = vocab[word]
num_unknown_words += vocab[word]
print('Found embeddings for {:.2%} of vocab'.format(len(known_words) / len(vocab)))
print('Found embeddings for {:.2%} of all text'.format(num_known_words / (num_known_words + num_unknown_words)))
unknown_words = sorted(unknown_words.items(), key=lambda kv: kv[1], reverse=True)
return unknown_words
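# The coverage arithmetic in `check_coverage` can be verified on a tiny made-up vocabulary and embedding table (toy data, not the real GloVe index):

```python
# Toy vocab: word -> occurrence count in the corpus
vocab = {'india': 10, 'growth': 5, 'Modiji': 2, '##': 1}
# Toy embedding index: only some words have vectors
embeddings_index = {'india': [0.1], 'growth': [0.2]}

known = {w: c for w, c in vocab.items() if w in embeddings_index}
vocab_coverage = len(known) / len(vocab)                   # fraction of vocab types covered
text_coverage = sum(known.values()) / sum(vocab.values())  # fraction of corpus tokens covered

print(vocab_coverage, text_coverage)
```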
#Splitting corpus text into words. Using string split as tokenizer
#Note from future self - Using string split is a blunder. Could've used any tokenizer
#But this blunder helped you in learning regex
corpus = corpus_text.split()
print(corpus[:10])
len(corpus)
#creating a counter for words and from it vocab
word_count = Counter(corpus)
vocab = dict(word_count)
print(word_count.most_common(10))
#I think this is a long-tail distribution
#All the operations performed below are intended to improve the coverage and reduce the number of unk tokens fed to the model
oov = check_coverage(vocab,embed_glove)
oov[:10]
re.findall(r'"[A-Za-z]+',corpus_text[:100000])
#Separating quotations
corpus_text_clean = re.sub(r'(")([A-Za-z0-9])', r'\1 \2', corpus_text)#new numbers added
corpus_text_clean = re.sub(r'([A-Za-z0-9])(")', r'\1 \2', corpus_text_clean)#new entire line
corpus_clean = corpus_text_clean.split()
word_count = Counter(corpus_clean)
vocab_clean = dict(word_count)
oov = check_coverage(vocab_clean,embed_glove)
oov[:10]
#Separating punctuation marks from words
corpus_text_clean = re.sub(r'([A-Za-z]+)([\.,\?])',r'\1 \2',corpus_text_clean)
corpus_text_clean = re.sub(r'([.,])([A-Za-z]+)',r'\1 \2',corpus_text_clean)#added new
corpus_clean = corpus_text_clean.split()
word_count = Counter(corpus_clean)
vocab_clean = dict(word_count)
oov = check_coverage(vocab_clean,embed_glove)
oov[:20]
#Separating 's from the word
corpus_text_clean = re.sub(r'([a-z]+)(\'s)',r'\1 \2',corpus_text_clean)
corpus_clean = corpus_text_clean.split()
word_count = Counter(corpus_clean)
vocab_clean = dict(word_count)
oov = check_coverage(vocab_clean,embed_glove)
oov[:20]
re.findall(r'[0-9]+[.,][0-9]?',corpus_text[:100000])
re.findall(r'[0-9]+[\S]+',corpus_text[:10000])
#removing numbers
#In hindsight - might be a bad choice. Might have replaced them with hashes
corpus_text_clean = re.sub(r'[0-9]+[\S]+',' ',corpus_text_clean)
corpus_clean = corpus_text_clean.split()
word_count = Counter(corpus_clean)
vocab_clean = dict(word_count)
oov = check_coverage(vocab_clean,embed_glove)
oov[:20]
#Separating punctuation marks from each other
corpus_text_clean = re.sub('([\.\,\:\;\"])([\.\,\:\;\"])',r'\1 \2',corpus_text_clean)
corpus_clean = corpus_text_clean.split()
word_count = Counter(corpus_clean)
vocab_clean = dict(word_count)
oov = check_coverage(vocab_clean,embed_glove)
oov[:20]
#Separating ji from the name
corpus_text_clean = re.sub(r'([A-Z][a-z]+)ji', r'\1',corpus_text_clean)
corpus_clean = corpus_text_clean.split()
word_count = Counter(corpus_clean)
vocab_clean = dict(word_count)
oov = check_coverage(vocab_clean,embed_glove)
oov[:20]
corpus_text_clean = re.sub(r'([A-Za-z0-9]+)([:;?!/)/(//])', r'\1 \2 ', corpus_text_clean)
corpus_clean = corpus_text_clean.split()
word_count = Counter(corpus_clean)
vocab_clean = dict(word_count)
oov = check_coverage(vocab_clean,embed_glove)
oov[:20]
#Separating the opening parenthesis
corpus_text_clean = re.sub(r'(\()', r' \1 ', corpus_text_clean)
corpus_clean = corpus_text_clean.split()
word_count = Counter(corpus_clean)
vocab_clean = dict(word_count)
oov = check_coverage(vocab_clean,embed_glove)
oov[:20]
#Removing serial dots
corpus_text_clean = re.sub(r'\.\.', r' ', corpus_text_clean)
corpus_clean = corpus_text_clean.split()
word_count = Counter(corpus_clean)
vocab_clean = dict(word_count)
oov = check_coverage(vocab_clean,embed_glove)
oov[:20]
#Correcting the abbreviations
corpus_text_clean = re.sub("Hon'ble",'Honourable' ,corpus_text_clean)
corpus_text_clean = re.sub('BIMST-EC','BIMSTEC' ,corpus_text_clean)
corpus_text_clean = re.sub(r'A\s?S\s?E\s?A\s?N','ASEAN',corpus_text_clean)
corpus_clean = corpus_text_clean.split()
word_count = Counter(corpus_clean)
vocab_clean = dict(word_count)
oov = check_coverage(vocab_clean,embed_glove)
oov[:20]
#Writing cleaned corpus to a file
with open("corpus_clean_2.txt", "w") as file:
file.write(corpus_text_clean)
vocab_sorted = sorted(vocab_clean.items(), key=lambda kv: kv[1], reverse=True)
#Now we want to restrict the vocab size or else model becomes huge
#In a way we could have done this earlier instead of improving the coverage, then we wouldn't have learnt regex
# If we restrict the vocab words to 1k,5k etc...
# We want to know how much fraction of restricted vocab words are present in glove and
# how much fraction of corpus is covered by the words in restricted vocab
corpus_len = len(corpus_clean)
for i in [1000,5000,10000,15000,20000]:
vocab_ = dict(vocab_sorted[:i])
#view_dict(vocab_,5)
corpus_covered = 0
vocab_covered = 0
for (word,count) in vocab_.items():
if word in embed_glove:
corpus_covered += count
vocab_covered += 1
vocab_coverage = round((vocab_covered/len(vocab_)),2)*100
corpus_coverage = round((corpus_covered/corpus_len),2)*100
print (f'For vocab of size {i}, vocabulary coverage is {vocab_coverage}% and corpus coverage is {corpus_coverage}%')
# ## Here I made a silly mistake: the 100% vocabulary coverage is misleading because it is a rounded-off value, not exactly 100%. This created many problems later because, as you can see below, embed_5k has a size of 4998 while we assume its size is 5000. That offset of 2 led to feeding wrong word vectors to the model.
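# The rounding pitfall is easy to reproduce in isolation: a coverage of 4998/5000 is reported as 100% once rounded to two decimal places.

```python
vocab_size = 5000
covered = 4998  # two words are missing from the embeddings

reported = round(covered / vocab_size, 2) * 100  # what the coverage loop prints
exact = covered / vocab_size * 100               # the true coverage, just under 100

print(reported, exact)  # 100.0 vs ~99.96
```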
def create_embed(vocab, embeddings_index):
our_embeddings = {}
for word in vocab.keys():
try:
our_embeddings[word] = embeddings_index[word]
except:
pass
return our_embeddings
vocab_5k = dict(vocab_sorted[:5000])
embed_5k = create_embed(vocab_5k, embed_glove)
len(vocab_5k),len(embed_5k)
# Removing missing keys from vocab_5k
missing_keys = list(set(vocab_5k.keys()) - set(embed_5k.keys()))
missing_keys
for word in missing_keys:
del vocab_5k[word]
set(vocab_5k.keys()) - set(embed_5k.keys()), len(vocab_5k),len(embed_5k)
# +
#Pickling vocab and embeddings files
with open('embed_5k_2.pickle', 'wb') as handle:
pickle.dump(embed_5k, handle, protocol=pickle.HIGHEST_PROTOCOL)
with open('vocab_5k_2.pickle', 'wb') as handle:
pickle.dump(vocab_5k, handle, protocol=pickle.HIGHEST_PROTOCOL)
|
Dataset Cleaning through Regex.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# +
import os
import tensorflow as tf
from tensorflow import keras
print(tf.version.VERSION)
# +
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
train_labels = train_labels[:1000]
test_labels = test_labels[:1000]
train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0
test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0
# +
# Define a simple sequential model
def create_model():
model = tf.keras.models.Sequential([
keras.layers.Dense(512, activation='relu', input_shape=(784,)),
keras.layers.Dropout(0.2),
keras.layers.Dense(10)
])
model.compile(optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[tf.metrics.SparseCategoricalAccuracy()])
return model
# Create a basic model instance
model = create_model()
# Display the model's architecture
model.summary()
# -
checkpoint_path = "training_1/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
# +
# Create a callback that saves the model's weights at the end of every epoch
checkpoint_cb = keras.callbacks.ModelCheckpoint(filepath=checkpoint_path, save_weights_only=True, verbose=1)
# -
model.fit(train_images,
train_labels,
epochs=10,
validation_data=(test_images, test_labels),
callbacks=[checkpoint_cb])
os.listdir(checkpoint_dir)
# +
# Create a basic model instance
model = create_model()
# Evaluate the model
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Untrained model, accuracy: {:5.2f}%".format(100 * acc))
# +
# Include the epoch in the file name (uses `str.format`)
checkpoint_path = "training_2/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
batch_size = 32
# Create a callback that saves the model's weights every 5 epochs
cp_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_path,
verbose=1,
save_weights_only=True,
save_freq=5*batch_size)
# Create a new model instance
model = create_model()
# Save the weights using the `checkpoint_path` format
model.save_weights(checkpoint_path.format(epoch=0))
# Train the model with the new callback
model.fit(train_images,
train_labels,
epochs=50,
batch_size=batch_size,
callbacks=[cp_callback],
validation_data=(test_images, test_labels),
verbose=0)
# -
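# The epoch placeholder in `checkpoint_path` above is ordinary `str.format` syntax, so the zero-padding can be checked without TensorFlow:

```python
checkpoint_path = "training_2/cp-{epoch:04d}.ckpt"

# The :04d spec zero-pads the epoch number to four digits
print(checkpoint_path.format(epoch=0))   # training_2/cp-0000.ckpt
print(checkpoint_path.format(epoch=42))  # training_2/cp-0042.ckpt
```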
os.listdir(checkpoint_dir)
latest = tf.train.latest_checkpoint(checkpoint_dir)
latest
# +
# Create a new model instance
model = create_model()
# Load the previously saved weights
model.load_weights(latest)
# Re-evaluate the model
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100 * acc))
# +
# Save the weights
model.save_weights('./checkpoints/my_checkpoint')
# Create a new model instance
model = create_model()
# Restore the weights
model.load_weights('./checkpoints/my_checkpoint')
# Evaluate the model
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100 * acc))
# +
# Create and train a new model instance.
model = create_model()
model.fit(train_images, train_labels, epochs=5)
# Save the entire model as a SavedModel.
# !mkdir -p saved_model
model.save('saved_model/my_model')
# +
new_model = tf.keras.models.load_model('saved_model/my_model')
# Check its architecture
new_model.summary()
# +
# Evaluate the restored model
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print('Restored model, accuracy: {:5.2f}%'.format(100 * acc))
print(new_model.predict(test_images).shape)
# +
# Create and train a new model instance.
model = create_model()
model.fit(train_images, train_labels, epochs=5)
# Save the entire model to a HDF5 file.
# The '.h5' extension indicates that the model should be saved to HDF5.
model.save('my_model.h5')
# +
# Recreate the exact same model, including its weights and the optimizer
new_model = tf.keras.models.load_model('my_model.h5')
# Show the model architecture
new_model.summary()
# -
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print('Restored model, accuracy: {:5.2f}%'.format(100 * acc))
|
src/ml_basic/save_and_load_tf.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Day Agenda
# - Data preprocessing using titanic.csv
# - handling duplicates
# - checking null values
# - handling missing values
# - standardizing the data using different scaling models
# - visualising the data
# ## 1. Load or Read the data set
import pandas as pd
df=pd.read_csv("https://raw.githubusercontent.com/AP-State-Skill-Development-Corporation/Datasets/master/Classification/titanic.csv")
df.head()
df=pd.read_csv("titanic.csv")
df.head()
# ## 2. Finding information about the data set
#
df.info()
df.describe()
df.columns
df['survived'].value_counts()
df['sex'].value_counts()
df['embarked'].value_counts()
df['embarked'].unique()
df['sex'].unique()
df['ticket'].unique()
df.columns
# ## 3. Remove features(Columns)
# - drop()
df.drop(['name','sibsp','parch','ticket'],axis=1,inplace=True)
df.head()
df.drop('cabin',axis=1,inplace=True)
df.info()
df.columns
# ## Check null values in data set
df.isnull().sum()
df['age']
df['age'].fillna(df['age'].mean(),inplace=True)
df.isnull().sum()
df['embarked'].unique()
df['embarked'].mode()
df['embarked'].fillna(df['embarked'].mode()[0],inplace=True)  # mode() returns a Series; take its first value
df.isnull().sum()
# ## Converting object data type features into int
# - pandas.get_dummies (new columns for every feature value)
# - LabelEncoder
# - OneHotEncoder
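# `LabelEncoder` (used further below) simply maps each distinct category to an integer, following sorted order. A plain-Python sketch of that behaviour on toy values (illustration only, not the sklearn implementation):

```python
def label_encode(values):
    # Map each distinct value to an integer in sorted order
    # (this mirrors sklearn's LabelEncoder.fit_transform behaviour)
    classes = sorted(set(values))
    mapping = {v: i for i, v in enumerate(classes)}
    return [mapping[v] for v in values]

print(label_encode(['male', 'female', 'female', 'male']))  # [1, 0, 0, 1]
```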
df['sex'].unique()
df['sex'].value_counts()
import pandas as pd
d1=pd.get_dummies(df['sex'])
d1
df['sex'].replace('female',1,inplace=True)
df.head()
df1=pd.concat((df,d1),axis=1)
df1.head()
df['sex'].replace(1,'female',inplace=True)
df.head()
df.info()
df1=pd.read_csv("titanic.csv")
df1.head()
df1.drop(['name','sibsp','ticket','cabin','parch'],axis=1,inplace=True)
df1.head()
df1['age'].fillna(df1['age'].mean(),inplace=True)
df1['embarked'].fillna("S",inplace=True)
df1.isnull().sum()
df1.info()
from sklearn.preprocessing import LabelEncoder
le=LabelEncoder()
df1['sex']=le.fit_transform(df1['sex'])
df1['embarked']=le.fit_transform(df1['embarked'])
df1.head()
df1['sex'].value_counts()
df1["embarked"].value_counts()
df1.head()
df.describe()
# ## Scaling the features to the same range
# - standard scaler
#     - mean=0, std=1
# - minmax scaler
#     - [0,1]
# - robust scaler
#     - interquartile range (IQR)
# - normalizer
#     - row-wise
#     - [0,1]
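# The first two of these transforms can be sketched in plain Python (toy numbers, illustration only; the sklearn scalers below apply the same idea column-wise to the DataFrame):

```python
def minmax_scale(xs):
    # Rescale values linearly into [0, 1]
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def standard_scale(xs):
    # Shift to mean 0 and scale by the (population) standard deviation
    n = len(xs)
    mean = sum(xs) / n
    std = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    return [(x - mean) / std for x in xs]

data = [2.0, 4.0, 6.0, 8.0]
print(minmax_scale(data))    # ranges from 0.0 to 1.0
print(standard_scale(data))  # sums to ~0
```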
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import RobustScaler
from sklearn.preprocessing import Normalizer
s=StandardScaler()
m=MinMaxScaler()
r=RobustScaler()
n=Normalizer()
s1=s.fit_transform(df1)
m1=m.fit_transform(df1)
r1=r.fit_transform(df1)
n1=n.fit_transform(df1)
import matplotlib.pyplot as plt
plt.boxplot(df1)
plt.title("before scaling")
plt.show()
plt.boxplot(s1)
plt.title("After Applying Standard scaling")
plt.show()
plt.boxplot(m1)
plt.title("After Applying Minmax scaling")
plt.show()
plt.boxplot(r1)
plt.title("After Applying Robust scaling")
plt.show()
plt.boxplot(n1)
plt.title("After Applying Normalizer scaling")
plt.show()
|
Notebooks/Day27-Data Preprocessing/Data Preprocessing.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Gene tracks
# + tags=["snakemake-job-properties"]
######## snakemake preamble start (automatically inserted, do not edit) ########
# The auto-generated preamble extends sys.path, unpickles the `snakemake` job object
# (inputs, outputs, params, wildcards, config, log) and changes into the pipeline
# working directory. The embedded pickle byte string is omitted here.
######## snakemake preamble end #########
# +
import os
import re
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from matplotlib import patches, gridspec
import seaborn as sns
from IPython.display import display_markdown, Markdown
from genetrack_utils import plot_gene_track
# %matplotlib inline
sns.set(font='Arial')
plt.rcParams['svg.fonttype'] = 'none'
style = sns.axes_style('white')
style.update(sns.axes_style('ticks'))
style['xtick.major.size'] = 1
style['ytick.major.size'] = 1
sns.set(font_scale=1.2, style=style)
pal = sns.color_palette(['#0072b2', '#d55e00', '#009e73', '#f0e442', '#cc79a7', '#56b4e9', '#e69f00'])
cmap = ListedColormap(pal.as_hex())
sns.set_palette(pal)
sns.palplot(pal)
plt.show()
# -
OUTPUT_PATH = snakemake.output.gene_tracks
if not os.path.exists(OUTPUT_PATH):
os.makedirs(OUTPUT_PATH)
GENETRACK_TEMP = 20
FWD_BWS = sorted([fn for fn in snakemake.input.bws if re.search(f'{GENETRACK_TEMP}c.fwd.bw', fn)])
REV_BWS = sorted([fn for fn in snakemake.input.bws if re.search(f'{GENETRACK_TEMP}c.rev.bw', fn)])
LABELS = ['Col-0', 'fio1-3']
# +
def convert_coords(coords):
if not coords:
return None
else:
return np.fromstring(coords.strip('[] '), sep=' ', dtype=int)
psi = pd.read_csv(
snakemake.input.psi,
header=[0, 1],
skiprows=[2,],
index_col=0,
converters={2: str,
3: convert_coords,
4: convert_coords}
)
gene_psi = psi[psi.metadata.gene_id.str.contains(snakemake.wildcards.gene_id)].psi
gene_psi_melt = pd.melt(
gene_psi.reset_index(),
id_vars='index',
value_vars=gene_psi.columns,
var_name='sample_id',
value_name='psi'
)
gene_psi_melt[['geno', 'temp']] = gene_psi_melt.sample_id.str.split('_', expand=True)[[0, 1]]
gene_psi_melt['temp'] = gene_psi_melt.temp.str.extract('(\d+)c', expand=True).astype(int)
psi_fit = pd.read_csv(
snakemake.input.psi_fit,
converters={'chrom': str,
'alt1': convert_coords,
'alt2': convert_coords},
index_col=0
)
gene_psi_fit = psi_fit[psi_fit.gene_id.str.contains(snakemake.wildcards.gene_id)]
gene_psi_fit_sig = gene_psi_fit.query('(geno_fdr < 0.05 | gxt_fdr < 0.05) & abs(dpsi) > 0.05')
gene_psi_fit_sig
# -
display_markdown(Markdown(f'## {snakemake.wildcards.gene_id} gene tracks and boxplots'))
for event_id, record in gene_psi_fit_sig.iterrows():
try:
plot_gene_track(
record, gene_psi_melt.query(f'index == "{event_id}"'),
FWD_BWS if record.strand == '+' else REV_BWS,
LABELS,
snakemake.input.fasta,
title=f'{snakemake.wildcards.gene_id} {event_id}'
)
plt.savefig(os.path.join(OUTPUT_PATH, f'{snakemake.wildcards.gene_id}_{event_id}_gene_track.svg'))
plt.show()
except NotImplementedError:
continue
|
fiona_temp_rnaseq/rules/notebook_templates/gene_tracks.py.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# # Guided image optimization
#
# This example showcases how a guided, i.e. regionally constraint, NST can be performed
# in ``pystiche``.
#
# Usually, the ``style_loss`` discards spatial information, since style elements
# should be synthesizable regardless of their position in the ``style_image``.
# Especially for images with clearly separated regions, style elements might leak
# into regions where they fit well with respect to the optimization criterion but
# don't belong there for a human observer. This can be overcome with spatial
# constraints, also called ``guides`` (:cite:`GEB+2017`).
#
#
# We start this example by importing everything we need and setting the device we will
# be working on.
#
#
# +
import pystiche
from pystiche import demo, enc, loss, ops, optim
from pystiche.image import guides_to_segmentation, show_image
from pystiche.misc import get_device, get_input_image
print(f"I'm working with pystiche=={pystiche.__version__}")
device = get_device()
print(f"I'm working with {device}")
# -
# In a first step we load and show the images that will be used in the NST.
#
#
images = demo.images()
images.download()
size = 500
content_image = images["castle"].read(size=size, device=device)
show_image(content_image)
style_image = images["church"].read(size=size, device=device)
show_image(style_image)
# ## Unguided image optimization
#
# As a baseline we use a default NST with a
# :class:`~pystiche.ops.FeatureReconstructionOperator` as ``content_loss`` and
# :class:`~pystiche.ops.GramOperator` s as ``style_loss``.
#
#
# +
multi_layer_encoder = enc.vgg19_multi_layer_encoder()
content_layer = "relu4_2"
content_encoder = multi_layer_encoder.extract_encoder(content_layer)
content_weight = 1e0
content_loss = ops.FeatureReconstructionOperator(
content_encoder, score_weight=content_weight
)
style_layers = ("relu1_1", "relu2_1", "relu3_1", "relu4_1", "relu5_1")
style_weight = 1e4
def get_style_op(encoder, layer_weight):
return ops.GramOperator(encoder, score_weight=layer_weight)
style_loss = ops.MultiLayerEncodingOperator(
multi_layer_encoder, style_layers, get_style_op, score_weight=style_weight,
)
criterion = loss.PerceptualLoss(content_loss, style_loss).to(device)
print(criterion)
# -
# We set the target images for the optimization ``criterion``.
#
#
criterion.set_content_image(content_image)
criterion.set_style_image(style_image)
# We perform the unguided NST and show the result.
#
#
# +
starting_point = "content"
input_image = get_input_image(starting_point, content_image=content_image)
output_image = optim.image_optimization(
input_image, criterion, num_steps=500, logger=demo.logger()
)
# -
show_image(output_image)
# While the result is not completely unreasonable, the building has a strong blueish
# cast that looks unnatural. Since the optimization was unconstrained the color of the
# sky was used for the building. In the remainder of this example we will solve this by
# dividing the images in multiple separate regions.
#
#
# ## Guided image optimization
#
# For both the ``content_image`` and ``style_image`` we load regional ``guides`` and
# show them.
#
# <div class="alert alert-info"><h4>Note</h4><p>In ``pystiche`` a ``guide`` is a binary image in which the white pixels make up the
# region that is guided. Multiple ``guides`` can be combined into a ``segmentation``
# for a better overview. In a ``segmentation`` the regions are separated by color.
# You can use :func:`~pystiche.image.guides_to_segmentation` and
# :func:`~pystiche.image.segmentation_to_guides` to convert one format to the other.</p></div>
#
# <div class="alert alert-info"><h4>Note</h4><p>The guides used within this example were created manually. It is possible to
# generate them automatically :cite:`CZP+2018`, but this is outside the scope of
# ``pystiche``.</p></div>
#
#
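The guide/segmentation relationship described in the note can be sketched without pystiche. The helper name and color palette below are hypothetical, purely for illustration: each guide is a binary mask, and a segmentation assigns one color per region.

```python
# Hypothetical palette: one color per region (illustration only).
REGION_COLORS = {"building": (255, 0, 0), "sky": (0, 0, 255)}

def guides_to_segmentation_sketch(guides, colors):
    """Combine binary region masks into a single color-coded segmentation."""
    h = len(next(iter(guides.values())))
    w = len(next(iter(guides.values()))[0])
    seg = [[(0, 0, 0)] * w for _ in range(h)]
    for region, mask in guides.items():
        for y in range(h):
            for x in range(w):
                if mask[y][x]:  # white pixel -> belongs to this region
                    seg[y][x] = colors[region]
    return seg

guides = {
    "building": [[1, 0], [1, 0]],
    "sky":      [[0, 1], [0, 1]],
}
seg = guides_to_segmentation_sketch(guides, REGION_COLORS)
```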
content_guides = images["castle"].guides.read(size=size, device=device)
content_segmentation = guides_to_segmentation(content_guides)
show_image(content_segmentation, title="Content segmentation")
style_guides = images["church"].guides.read(size=size, device=device)
style_segmentation = guides_to_segmentation(style_guides)
show_image(style_segmentation, title="Style segmentation")
# The ``content_image`` is separated into three ``regions``: the ``"building"``, the
# ``"sky"``, and the ``"water"``.
#
# <div class="alert alert-info"><h4>Note</h4><p>Since no water is present in the style image we reuse the ``"sky"`` for the
# ``"water"`` region.</p></div>
#
#
# +
regions = ("building", "sky", "water")
style_guides["water"] = style_guides["sky"]
# -
# Since the stylization should be performed for each region individually, we also need
# separate operators. Within each region we use the same setup as before. Similar to
# how a :class:`~pystiche.ops.MultiLayerEncodingOperator` bundles multiple
# operators acting on different layers, a :class:`~pystiche.ops.MultiRegionOperator`
# bundles multiple operators acting in different regions.
#
# The guiding is only needed for the ``style_loss`` since the ``content_loss`` by
# definition honors the position of the content during the optimization. Thus, the
# previously defined ``content_loss`` is combined with the new regional ``style_loss``
# in a :class:`~pystiche.loss.GuidedPerceptualLoss` as optimization ``criterion``.
#
#
# +
def get_region_op(region, region_weight):
return ops.MultiLayerEncodingOperator(
multi_layer_encoder, style_layers, get_style_op, score_weight=region_weight,
)
style_loss = ops.MultiRegionOperator(regions, get_region_op, score_weight=style_weight)
criterion = loss.GuidedPerceptualLoss(content_loss, style_loss).to(device)
print(criterion)
# -
# The ``content_loss`` is unguided and thus the content image can be set as we did
# before. For the ``style_loss`` we use the same ``style_image`` for all regions and
# only vary the guides.
#
#
# +
criterion.set_content_image(content_image)
for region in regions:
criterion.set_style_guide(region, style_guides[region])
criterion.set_style_image(region, style_image)
criterion.set_content_guide(region, content_guides[region])
# -
# We rerun the optimization with the new constrained optimization ``criterion`` and
# show the result.
#
#
# +
starting_point = "content"
input_image = get_input_image(starting_point, content_image=content_image)
output_image = optim.image_optimization(
input_image, criterion, num_steps=500, logger=demo.logger()
)
# -
show_image(output_image)
# With regional constraints we successfully removed the blueish cast from the building
# which leads to an overall higher quality. Unfortunately, reusing the sky region for
# the water did not work out too well: due to the vibrant color, the water looks
# unnatural.
#
# Fortunately, this has an easy solution. Since we are already using separate operators
# for each region we are not bound to use only a single ``style_image``: if required,
# we can use a different ``style_image`` for each region.
#
#
# ## Guided image optimization with multiple styles
#
# We load a second style image that has water in it.
#
#
second_style_image = images["cliff"].read(size=size, device=device)
show_image(second_style_image, "Second style image")
second_style_guides = images["cliff"].guides.read(size=size, device=device)
show_image(guides_to_segmentation(second_style_guides), "Second style segmentation")
# We can reuse the previously defined criterion and only change the ``style_image`` and
# ``style_guides`` in the ``"water"`` region.
#
# <div class="alert alert-info"><h4>Note</h4><p>We need to call :meth:`~pystiche.loss.GuidedPerceptualLoss.set_style_guide` with
# ``recalc_repr=False`` since the old ``style_image`` is still stored. By default
# the new target representation would be calculated with the new guide. If the image
# sizes do not match, as it is the case here, this results in an error. With
# ``recalc_repr=False`` the new target representation is only calculated when the
# ``second_style_image`` is set.</p></div>
#
#
region = "water"
criterion.set_style_guide(region, second_style_guides[region], recalc_repr=False)
criterion.set_style_image(region, second_style_image)
# Finally, we rerun the optimization again with the new constraints.
#
#
# +
starting_point = "content"
input_image = get_input_image(starting_point, content_image=content_image)
output_image = optim.image_optimization(
input_image, criterion, num_steps=500, logger=demo.logger()
)
# -
show_image(output_image)
# Compared to the two previous results we now achieved the highest quality.
# Nevertheless, this approach has its downsides: since we are working with multiple
# images in multiple distinct regions, the memory requirement is higher compared to the
# other approaches. Furthermore, compared to the unguided NST, the guides have to be
# provided in addition to the content and style images.
#
#
|
examples_jupyter/advanced/example_guided_image_optimization.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Overfitting demo
#
# ## Create a dataset based on a true sinusoidal relationship
# Let's look at a synthetic dataset consisting of 30 points drawn from the sinusoid $y = \sin(4x)$:
import graphlab
import math
import random
import numpy
from matplotlib import pyplot as plt
# %matplotlib inline
# Create random values for x in interval [0,1)
random.seed(98103)
n = 30
x = graphlab.SArray([random.random() for i in range(n)]).sort()
# Compute y
y = x.apply(lambda x: math.sin(4*x))
# Add random Gaussian noise to y
random.seed(1)
e = graphlab.SArray([random.gauss(0,1.0/3.0) for i in range(n)])
y = y + e
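The data-generating process above can be sketched framework-free. This is a pure-Python illustration with a single RNG, so the exact noise values differ from the GraphLab cells; only the structure (sorted uniform x, sinusoid plus Gaussian noise) is the same.

```python
import math
import random

def make_sine_data(n=30, noise_sd=1.0 / 3.0, seed=98103):
    """Sketch of the process: x ~ Uniform[0, 1), y = sin(4x) + Gaussian noise."""
    rng = random.Random(seed)
    xs = sorted(rng.random() for _ in range(n))
    ys = [math.sin(4 * x) + rng.gauss(0, noise_sd) for x in xs]
    return xs, ys

xs, ys = make_sine_data()
```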
# ### Put data into an SFrame to manipulate later
data = graphlab.SFrame({'X1':x,'Y':y})
data
# ### Create a function to plot the data, since we'll do it many times
# +
def plot_data(data):
plt.plot(data['X1'],data['Y'],'k.')
plt.xlabel('x')
plt.ylabel('y')
plot_data(data)
# -
# ## Define some useful polynomial regression functions
# Define a function to create our features for a polynomial regression model of any degree:
def polynomial_features(data, deg):
data_copy=data.copy()
for i in range(1,deg):
data_copy['X'+str(i+1)]=data_copy['X'+str(i)]*data_copy['X1']
return data_copy
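The same feature construction in plain Python, for illustration (the helper below is hypothetical and mirrors the SFrame version above, returning one row of powers per input value):

```python
def polynomial_features_sketch(xs, deg):
    """Return rows [x, x^2, ..., x^deg] for each input value."""
    return [[x ** d for d in range(1, deg + 1)] for x in xs]

rows = polynomial_features_sketch([2.0, 3.0], 3)
assert rows == [[2.0, 4.0, 8.0], [3.0, 9.0, 27.0]]
```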
# Define a function to fit a polynomial linear regression model of degree "deg" to the data in "data":
def polynomial_regression(data, deg):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=0.,l1_penalty=0.,
validation_set=None,verbose=False)
return model
# Define function to plot data and predictions made, since we are going to use it many times.
def plot_poly_predictions(data, model):
plot_data(data)
# Get the degree of the polynomial
deg = len(model.coefficients['value'])-1
# Create 200 points in the x axis and compute the predicted value for each point
x_pred = graphlab.SFrame({'X1':[i/200.0 for i in range(200)]})
y_pred = model.predict(polynomial_features(x_pred,deg))
# plot predictions
plt.plot(x_pred['X1'], y_pred, 'g-', label='degree ' + str(deg) + ' fit')
plt.legend(loc='upper left')
plt.axis([0,1,-1.5,2])
# Create a function that prints the polynomial coefficients in a pretty way :)
def print_coefficients(model):
# Get the degree of the polynomial
deg = len(model.coefficients['value'])-1
# Get learned parameters as a list
w = list(model.coefficients['value'])
# Numpy has a nifty function to print out polynomials in a pretty way
# (We'll use it, but it needs the parameters in the reverse order)
print 'Learned polynomial for degree ' + str(deg) + ':'
w.reverse()
print numpy.poly1d(w)
# ## Fit a degree-2 polynomial
# Fit our degree-2 polynomial to the data generated above:
model = polynomial_regression(data, deg=2)
# Inspect learned parameters
print_coefficients(model)
# Form and plot our predictions along a grid of x values:
plot_poly_predictions(data,model)
# ## Fit a degree-4 polynomial
model = polynomial_regression(data, deg=4)
print_coefficients(model)
plot_poly_predictions(data,model)
# ## Fit a degree-16 polynomial
model = polynomial_regression(data, deg=16)
print_coefficients(model)
# ### Whoa!!! Those coefficients are *crazy*! On the order of $10^6$.
plot_poly_predictions(data,model)
# ### Above: Fit looks pretty wild, too. Here's a clear example of how overfitting is associated with very large magnitude estimated coefficients.
# #
# #
# #
# #
# # Ridge Regression
# Ridge regression aims to avoid overfitting by adding a cost to the RSS term of standard least squares that depends on the squared 2-norm of the coefficients, $\|w\|_2^2$. The result is penalizing fits with large coefficients. The strength of this penalty, and thus the fit vs. model complexity balance, is controlled by a parameter lambda (here called "L2_penalty").
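As a minimal illustration of how the penalty shrinks coefficients, here is the closed-form ridge solution for a one-parameter model y ≈ w·x (a pure-Python sketch, not the GraphLab implementation used below):

```python
def ridge_1d(xs, ys, l2_penalty):
    """Closed-form ridge solution for y ≈ w*x (no intercept):
    minimizes sum((y - w*x)^2) + l2_penalty * w^2,
    giving w = sum(x*y) / (sum(x^2) + l2_penalty)."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + l2_penalty)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]   # exactly y = 2x

assert abs(ridge_1d(xs, ys, 0.0) - 2.0) < 1e-12        # no penalty: least squares
assert ridge_1d(xs, ys, 100.0) < ridge_1d(xs, ys, 0.0)  # penalty shrinks w toward 0
```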
# Define our function to solve the ridge objective for a polynomial regression model of any degree:
def polynomial_ridge_regression(data, deg, l2_penalty):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=l2_penalty,
validation_set=None,verbose=False)
return model
# ## Perform a ridge fit of a degree-16 polynomial using a *very* small penalty strength
model = polynomial_ridge_regression(data, deg=16, l2_penalty=1e-25)
print_coefficients(model)
plot_poly_predictions(data,model)
# ## Perform a ridge fit of a degree-16 polynomial using a very large penalty strength
model = polynomial_ridge_regression(data, deg=16, l2_penalty=100)
print_coefficients(model)
plot_poly_predictions(data,model)
# ## Let's look at fits for a sequence of increasing lambda values
for l2_penalty in [1e-25, 1e-10, 1e-6, 1e-3, 1e2]:
model = polynomial_ridge_regression(data, deg=16, l2_penalty=l2_penalty)
print 'lambda = %.2e' % l2_penalty
print_coefficients(model)
print '\n'
plt.figure()
plot_poly_predictions(data,model)
plt.title('Ridge, lambda = %.2e' % l2_penalty)
data
# ## Perform a ridge fit of a degree-16 polynomial using a "good" penalty strength
# We will learn about cross validation later in this course as a way to select a good value of the tuning parameter (penalty strength) lambda. Here, we consider "leave one out" (LOO) cross validation, which one can show approximates average mean square error (MSE). As a result, choosing lambda to minimize the LOO error is equivalent to choosing lambda to minimize an approximation to average MSE.
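The LOO idea can be sketched in pure Python for a one-parameter model y ≈ w·x with a closed-form ridge fit (illustrative only; the notebook's version below uses GraphLab's KFold): hold out each point in turn, fit on the rest, and average the squared prediction error.

```python
def loo_mse_1d(xs, ys, l2_penalty):
    """Leave-one-out CV error for a ridge fit of y ≈ w*x."""
    n = len(xs)
    total = 0.0
    for i in range(n):
        tr_x = xs[:i] + xs[i + 1:]
        tr_y = ys[:i] + ys[i + 1:]
        # closed-form ridge solution on the held-in points
        w = sum(x * y for x, y in zip(tr_x, tr_y)) / (sum(x * x for x in tr_x) + l2_penalty)
        total += (ys[i] - w * xs[i]) ** 2
    return total / n

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]   # roughly y = 2x with noise
errors = {lam: loo_mse_1d(xs, ys, lam) for lam in (0.0, 0.1, 10.0)}
best = min(errors, key=errors.get)
```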
# LOO cross validation -- returns the list of average MSEs and the best l2_penalty
def loo(data, deg, l2_penalty_values):
# Create polynomial features
data = polynomial_features(data, deg)
# Create as many folds for cross validation as number of data points
num_folds = len(data)
folds = graphlab.cross_validation.KFold(data,num_folds)
# for each value of l2_penalty, fit a model for each fold and compute average MSE
l2_penalty_mse = []
min_mse = None
best_l2_penalty = None
for l2_penalty in l2_penalty_values:
next_mse = 0.0
for train_set, validation_set in folds:
# train model
model = graphlab.linear_regression.create(train_set,target='Y',
l2_penalty=l2_penalty,
validation_set=None,verbose=False)
# predict on validation set
y_test_predicted = model.predict(validation_set)
# compute squared error
next_mse += ((y_test_predicted-validation_set['Y'])**2).sum()
# save squared error in list of MSE for each l2_penalty
next_mse = next_mse/num_folds
l2_penalty_mse.append(next_mse)
if min_mse is None or next_mse < min_mse:
min_mse = next_mse
best_l2_penalty = l2_penalty
return l2_penalty_mse,best_l2_penalty
# Run LOO cross validation for "num" values of lambda, on a log scale
l2_penalty_values = numpy.logspace(-4, 10, num=10)
l2_penalty_mse,best_l2_penalty = loo(data, 16, l2_penalty_values)
# Plot results of estimating LOO for each value of lambda
plt.plot(l2_penalty_values,l2_penalty_mse,'k-')
plt.xlabel('$\ell_2$ penalty')
plt.ylabel('LOO cross validation error')
plt.xscale('log')
plt.yscale('log')
# Find the value of lambda, $\lambda_{\mathrm{CV}}$, that minimizes the LOO cross validation error, and plot resulting fit
best_l2_penalty
model = polynomial_ridge_regression(data, deg=16, l2_penalty=best_l2_penalty)
print_coefficients(model)
plot_poly_predictions(data,model)
# #
# #
# #
# #
# # Lasso Regression
# Lasso regression jointly shrinks coefficients to avoid overfitting, and implicitly performs feature selection by setting some coefficients exactly to 0 for sufficiently large penalty strength lambda (here called "L1_penalty"). In particular, lasso takes the RSS term of standard least squares and adds a 1-norm cost of the coefficients, $\|w\|_1$.
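The mechanism by which lasso sets coefficients exactly to zero is the soft-thresholding (proximal) operator used by solvers such as FISTA; a minimal sketch:

```python
def soft_threshold(z, l1_penalty):
    """Proximal operator of the L1 norm: shrinks z toward 0 and
    returns exactly 0.0 when |z| <= l1_penalty."""
    if z > l1_penalty:
        return z - l1_penalty
    if z < -l1_penalty:
        return z + l1_penalty
    return 0.0

assert soft_threshold(3.0, 1.0) == 2.0
assert soft_threshold(-3.0, 1.0) == -2.0
assert soft_threshold(0.5, 1.0) == 0.0   # small coefficients are zeroed out
```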
# Define our function to solve the lasso objective for a polynomial regression model of any degree:
def polynomial_lasso_regression(data, deg, l1_penalty):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=0.,
l1_penalty=l1_penalty,
validation_set=None,
solver='fista', verbose=False,
max_iterations=3000, convergence_threshold=1e-10)
return model
# ## Explore the lasso solution as a function of a few different penalty strengths
# We refer to lambda in the lasso case below as "l1_penalty"
for l1_penalty in [0.0001, 0.01, 0.1, 10]:
model = polynomial_lasso_regression(data, deg=16, l1_penalty=l1_penalty)
print 'l1_penalty = %e' % l1_penalty
print 'number of nonzeros = %d' % (model.coefficients['value']).nnz()
print_coefficients(model)
print '\n'
plt.figure()
plot_poly_predictions(data,model)
plt.title('LASSO, lambda = %.2e, # nonzeros = %d' % (l1_penalty, (model.coefficients['value']).nnz()))
# Above: We see that as lambda increases, we get sparser and sparser solutions. However, even for our non-sparse case for lambda=0.0001, the fit of our high-order polynomial is not too wild. This is because, like in ridge, coefficients included in the lasso solution are shrunk relative to those of the least squares (unregularized) solution. This leads to better behavior even without sparsity. Of course, as lambda goes to 0, the amount of this shrinkage decreases and the lasso solution approaches the (wild) least squares solution.
|
Studying Materials/Course 2 Regression/Ridge Regression/Overfitting_Demo_Ridge_Lasso.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # HW06
# Name:<NAME> <br> GTid:903461609
# +
from sklearn.cluster import KMeans, MeanShift
from sklearn.cluster import AgglomerativeClustering
import matplotlib.pyplot as plt
import pandas as pd
# %matplotlib inline
import numpy as np
X=np.array([[1,0],
[2,0],
[8,0],
[9,0],
[31,0],
[32,0],
[38,0],
[39,0],])
cluster = AgglomerativeClustering(n_clusters=4, affinity='euclidean', linkage='ward')
cluster.fit_predict(X)
# -
print(cluster.labels_)
plt.scatter(X[:,0],X[:,1], c=cluster.labels_, cmap='rainbow')
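For 1-D data like this, the same cluster structure can also be recovered with a simple gap heuristic (an illustrative pure-Python alternative, not equivalent to Ward linkage in general): cut at the largest gaps between consecutive sorted points.

```python
def cluster_1d_by_gap(points, n_clusters):
    """Split sorted 1-D data at the (n_clusters - 1) largest gaps."""
    pts = sorted(points)
    gaps = sorted(range(len(pts) - 1),
                  key=lambda i: pts[i + 1] - pts[i], reverse=True)
    cuts = sorted(gaps[:n_clusters - 1])
    clusters, start = [], 0
    for c in cuts:
        clusters.append(pts[start:c + 1])
        start = c + 1
    clusters.append(pts[start:])
    return clusters

clusters = cluster_1d_by_gap([1, 2, 8, 9, 31, 32, 38, 39], 4)
assert clusters == [[1, 2], [8, 9], [31, 32], [38, 39]]
```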
# +
def distance():
    # stub: the original notebook left this function unfinished
    pass
# -
|
HW06.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Korbisch/Master-Thesis-Notebooks/blob/main/MA_Klassifizierung_Unterkategorie.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="jaCVqnecw7zY"
# # **Master's Thesis: Neural Networks and Deep Learning**
#
# **Topic:** Neural network architectures for the classification of clothing items: design and implementation of a comparative analysis
#
# **Classification:** Subcategory
#
# **Author:** <NAME>
#
# This notebook builds an example neural network architecture to classify clothing items.
#
# <img src="https://cdn.pixabay.com/photo/2020/02/15/14/19/network-4851079_1280.jpg" width="800"/>
# + [markdown] id="KIebmJU3napf"
# # Approach & Table of Contents:
#
# 1. Import & installation of all required libraries and frameworks
# 2. Downloading the data
# 3. Data preprocessing
# 4. Loading the data as NumPy arrays
# 5. Data split
# 6. Building the model
# 7. Testing the model
# + [markdown] id="KSQyCFJzkdAI"
# # 0. Setup
#
# To guarantee the fastest possible runtime and short waiting times, a GPU should be used for training.
#
# In the navigation bar, a GPU instance can be selected under `Runtime => Change runtime type => Hardware accelerator`.
# + [markdown] id="Sw7oERrlv9Ts"
# # 1. Import & installation of all required libraries and frameworks
#
# Installations are not needed when running in Colab, only for local execution.
#
# Installations only have to be run once.
# Imports may be needed again after a connection loss.
#
# To use command line functions in Jupyter notebooks, a `!` has to be placed at the start of the line.
# + id="uR-IRZJLQcEb"
# install libraries
# #!pip install tensorflow
# #!pip install matplotlib
# #!pip install numpy
# #!pip install pandas
# + [markdown] id="ZIyF7cjVRHwk"
# All libraries and frameworks required for the project are loaded:
# - TensorFlow: standard machine learning framework for computing tensor operations
# - Keras: high-level framework for building neural networks
# - Matplotlib: for plotting graphs and displaying images
# - NumPy: easy manipulation of multidimensional arrays
# - Pandas: for data preprocessing, e.g. reading CSV files
# - OS: standard operating system functions
# + colab={"base_uri": "https://localhost:8080/"} id="JP2YkHjzRWlq" outputId="8023c311-f380-427d-efbe-ec4fe6697210"
# import tensorflow and keras
import tensorflow as tf
from tensorflow import keras
from keras import layers, models
from keras.preprocessing.image import load_img
#import importlib
#importlib.import_module('mpl_toolkits').__path__
#from mpl_toolkits.mplot3d import Axes3D
#from sklearn.preprocessing import StandardScaler
import matplotlib
import matplotlib.pyplot as plt
#import matplotlib.image as mpimg
import numpy as np
import pandas as pd
# system libraries
import datetime, os
import time
import random
import platform
# for google colab only
#from google.colab import drive
from google.colab import files
# print version numbers
print('Tensorflow version: {}'.format(tf.__version__))
print('Python version: {}'.format(platform.python_version()))
print('Keras version: {}'.format(keras.__version__))
print('Numpy version: {}'.format(np.__version__))
print('Pandas version: {}'.format(pd.__version__))
print('Matplotlib version: {}'.format(matplotlib.__version__))
# + [markdown] id="C8LB9j1uRHwk"
# # 2. Download der Daten
#
#
#
#
#
#
# + [markdown] id="_ZhfdBZKjVvI"
# The data is provided on the Kaggle website. To download it, an account has to be created and you have to connect to the Kaggle API.
#
# The original dataset was created by <NAME> and is available at the following link: https://www.kaggle.com/paramaggarwal/fashion-product-images-dataset. However, the image data is larger than 15 GB, which leads to long waiting times during execution. In addition, the images are very large (1080x1440 and 1800x2400 pixels), which is impractical for deep learning.
#
# Therefore, all images from this dataset were resized to 250x250 pixels. The adapted dataset is available at the following link: https://www.kaggle.com/dataset/009b0b26d6b841054c137dc96f021703d7d74669d9f2fcb3acb9fb0c3ecb78a8
#
# ### 2.1 Create a Kaggle account
# Visit the Kaggle website https://www.kaggle.com and create an account.
#
# ### 2.2 Download the Kaggle API token
# Under `Profile => Account => API => Create New API Token`, a JSON file with your personal Kaggle API token can be downloaded.
#
# ### 2.3 Upload the JSON file
# Run the cell and upload the JSON file via the `Choose files` button.
# + id="y1byEW5vyI-0"
# Upload the kaggle.json file
files.upload()
# + [markdown] id="LoeBtGuRjE85"
# ### 2.4 Set up the Kaggle API
# + colab={"base_uri": "https://localhost:8080/"} id="vz2zVyzPyULI" outputId="8fe2574a-2053-40ac-8cc6-ae8d10dd3874"
# Install Kaggle
# !pip install -q kaggle
# Create a Kaggle directory
# !mkdir -p ~/.kaggle
# Copy the file into this directory
# !cp kaggle.json ~/.kaggle/
# Check whether the file is in this directory
# !ls ~/.kaggle
# Change permissions
# !chmod 600 /root/.kaggle/kaggle.json
# + [markdown] id="5nlBG-M6bENC"
# ### 2.5 Downloading the data
# The dataset is downloaded into the current directory, in this case `/content`, with the command `!kaggle datasets download`. The download progress should appear in the output.
# + colab={"base_uri": "https://localhost:8080/"} id="NaVpKqPyWMu3" outputId="01a48bfe-b351-404c-c47c-3b3041cc2e08"
# download the dataset
# !kaggle datasets download -d korbinianschleifer/fashiondatasetnew
# + [markdown] id="QslhwrM0chpU"
# After a successful download, the file `fashiondatasetnew.zip` should be available.
# + colab={"base_uri": "https://localhost:8080/"} id="CneKKnFSVl3O" outputId="ec4ce407-f2cb-46d1-999b-2d100da502cf"
# Print the contents of the current directory
# !ls
# + [markdown] id="oaGOnS_Mcy14"
# ### 2.6 Unpacking the data
# Since the data is compressed, it still has to be unpacked.
# + id="zPa8VN4iYgHa"
# unzip the data
# !unzip fashiondatasetnew.zip
# + [markdown] id="QODOco-8dPrR"
# ### 2.7 Setting the data path
# Many of the following functions use this constant, so it is crucial to specify the correct path here. Normally this is `/content/fashion-dataset-new` (no trailing slash should be appended to the path).
#
# The output should show the folder `images` and the file `styles.csv`.
# + colab={"base_uri": "https://localhost:8080/"} id="4qY52BFxeOF6" outputId="3307b1f6-66c1-4d19-a927-dac2e0e80f76"
# set the dataset path
DATASET_PATH = '/content/fashion-dataset-new'
# should print out the folder: images and the file: styles.csv
print(os.listdir(DATASET_PATH))
# + [markdown] id="V9oHwTqm96Kn"
# ### 2.8 Inspecting the images
# + colab={"base_uri": "https://localhost:8080/", "height": 541} id="n6wzD9Ex7p74" outputId="0b7f91bd-552f-4987-e2dc-2329ccc34ec2"
# plot some images to inspect data
image_path = DATASET_PATH + '/images/'
fig = plt.figure(figsize=(9, 9))
for i in range(9):
img_file = random.choice(os.listdir(image_path))
image = plt.imread(image_path + img_file)
fig.add_subplot(3, 3, i+1)
plt.imshow(image)
plt.show()
# + [markdown] id="vuCiFeF_6IY_"
# # 3. Data preprocessing
# + [markdown] id="WCxTxKfVwkHQ"
# ### 3.1 Reading the CSV file
# + [markdown] id="uTHejD8kRHwl"
# Here the CSV file specifying the attributes of each image is read in.
# The file is read with pandas and a DataFrame (df) is created.
# A DataFrame can be thought of as a table or Excel sheet in which data is stored.
#
# In addition, all rows are shuffled randomly and the first 5 rows of the DataFrame are printed to verify that the operation succeeded.
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="ChfLO2Cawsu2" outputId="f5ee96eb-4c5b-4934-8e91-000186e57ee1"
# read the csv file
df = pd.read_csv(DATASET_PATH + '/styles.csv', sep=';')
# randomly shuffle the dataset
df = df.sample(frac = 1)
# reset the index
df = df.reset_index(drop=True)
# show first five rows
df.head(5)
# + [markdown] id="KnWO2J3KlGX1"
# ### 3.2 Selecting the classes
# + [markdown] id="X0Gc1nOtCjfx"
# Here a subset of the DataFrame is selected. The selected category (column name) will be classified later.
#
# To do this, the variable ```category``` has to be changed to the column name.
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="QvUJbN1oCA5L" outputId="ce998b59-d64d-4af9-d0dd-5eebb8f4e00e"
# specify category(column) you want to predict from the Data Frame here
category = 'subCategory'
# create new subset data frame
sub_df = df[['id', category]]
sub_df.head(5)
# + [markdown] id="58I2IxGp97ji"
# ### 3.3 Removing small classes
# + colab={"base_uri": "https://localhost:8080/", "height": 449} id="5Izpo2-U7-ov" outputId="5d3da8cd-d75f-43d2-b9a9-1d72b0a513f7"
# Problem: uneven distributed value counts
value_counts = sub_df[category].value_counts()
# create the plot
plt.figure(figsize=(20,5))
value_counts.plot(kind='bar')
plt.show()
# + [markdown] id="zuvbuhwulYiq"
# The histogram above shows that the classes are distributed very unevenly. There are classes for which only very few images are available and classes for which very many images are available.
#
# Features of classes with only a few images cannot be learned well by the model, simply because the number of examples is too small.
#
# To avoid this, all classes should be present in equal numbers. To that end, classes with only a few images are removed entirely (e.g. <= 100 or <= 1000).
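The filtering step can be sketched in plain Python (hypothetical helper, illustration only; the cells below do the same with pandas `value_counts`):

```python
from collections import Counter

def remove_small_classes(labels, min_count):
    """Drop every label whose class has min_count or fewer examples."""
    counts = Counter(labels)
    return [lab for lab in labels if counts[lab] > min_count]

labels = ["shoe"] * 5 + ["hat"] * 2 + ["shirt"] * 4
kept = remove_small_classes(labels, 2)
assert set(kept) == {"shoe", "shirt"}
```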
# + colab={"base_uri": "https://localhost:8080/"} id="IKr1GscS_S5g" outputId="74d7e497-47d2-4f5a-8f1d-e3e9e3edd5d2"
# remove labels below specified count
# all labels with a count that is lower or equal will be removed
count = 1000
# get a list(series) of labels with lower count
count_series = sub_df[category].value_counts()
to_remove = count_series[count_series <= count].index
# prints labels that will be deleted
print(to_remove)
# + id="t_MbvLMHHdRO"
# removing category types from data frame
# ~: inverts a boolean value, isin: returns if value is in list
sub_df = sub_df[~sub_df[category].isin(to_remove)]
# + colab={"base_uri": "https://localhost:8080/", "height": 372} id="1TNAXvKpHdGE" outputId="45771816-af65-481c-fff6-a861fce059eb"
# plot again to check if remove was successful
value_counts = sub_df[category].value_counts()
plt.figure(figsize=(20,5))
value_counts.plot(kind='bar')
plt.show()
# + [markdown] id="dYtWnD2VoKSx"
# ### 3.4 Balancing the class distribution
# + [markdown] id="JLcI-yYnqNaz"
# Classes with many images are learned too strongly by the model, which causes the model to develop a bias. For this reason, the classes have to be distributed evenly.
#
# From every class with too many images, a number of images equal to the size of the smallest class is selected at random.
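The balancing step, sketched framework-free (hypothetical helper; the cells below use pandas' groupby/sample for the same idea):

```python
import random
from collections import defaultdict

def balance_classes(samples, seed=0):
    """Randomly downsample every class to the size of the smallest class.
    samples is a list of (item, label) pairs."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for item, label in samples:
        by_class[label].append(item)
    min_count = min(len(v) for v in by_class.values())
    balanced = []
    for label, items in by_class.items():
        balanced.extend((item, label) for item in rng.sample(items, min_count))
    return balanced

samples = [(i, "a") for i in range(10)] + [(i, "b") for i in range(3)]
balanced = balance_classes(samples)
```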
# + id="ejL2y2JVn_Jw"
# group labels by category
sub_df = sub_df.groupby(category)
# get the minimum category count
min_count = sub_df.size().min()
# distribute labels evenly with minimum label count
sub_df = sub_df.sample(min_count)
# + id="dimsNCWIwoZ3"
# Undo the groupby by randomly shuffling data and resetting the index
# randomly shuffle the dataset
sub_df = sub_df.sample(frac = 1)
# reset the index
sub_df = sub_df.reset_index(drop=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 370} id="KhWsn71in-8A" outputId="1ebbbf12-5fe3-4db1-d54e-2ccd8ade7329"
# plot again to check if balancing was successful
value_counts = sub_df[category].value_counts()
plt.figure(figsize=(20,5))
value_counts.plot(kind='bar')
plt.show()
# + [markdown] id="NId2UomfokUS"
# ### 3.5 Converting to numerical data
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="1cSKwVwU73lq" outputId="21a7d5f9-75ff-4550-fa85-26255eed5f36"
# cast column category to 'category' data type
sub_df = sub_df.astype({category: 'category'})
# add new column with class numbers
sub_df['class_number'] = sub_df[category].cat.codes
sub_df.head(5)
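What `cat.codes` does can be sketched in plain Python (assuming, as pandas does by default for string columns, that codes follow sorted label order):

```python
def encode_labels(labels):
    """Map each distinct label to an integer code in sorted label order."""
    classes = sorted(set(labels))
    mapping = {lab: i for i, lab in enumerate(classes)}
    return [mapping[lab] for lab in labels], classes

codes, classes = encode_labels(["Topwear", "Shoes", "Topwear", "Bags"])
assert classes == ["Bags", "Shoes", "Topwear"]
assert codes == [2, 1, 2, 0]
```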
# + [markdown] id="ahFeLuK31blN"
# ### 3.6 Saving the classes as a list
#
# The labels are stored as a list so the correctness of the classification can be checked later.
# + colab={"base_uri": "https://localhost:8080/"} id="Lccrt2CU73bS" outputId="c927493a-d81e-4415-c48e-0b005dbdb14d"
# create a data frame sorted by class_number
sorted_df = sub_df.sort_values('class_number')
# list of labels
unique_types = sorted_df[category].unique().to_list()
print(unique_types)
# + colab={"base_uri": "https://localhost:8080/"} id="EwGp7b6rpBKk" outputId="1c730d40-5dff-46e2-dcf6-357d598b3ff1"
# test: the index of the list should be equal to the class_number
print(unique_types[1])
# + [markdown] id="KkyyFyE-RHwl"
# # 4. Loading the data as NumPy arrays
# + [markdown] id="rCcq5dTakb_0"
# ### 4.1 Setting the variables
#
# All important variables for deep learning are set here.
#
# * ```color_mode``` color mode used, either color or grayscale
# * ```channels``` number of color channels, 3 for color, 1 for grayscale
# * ```img_height``` height of an image
# * ```img_width``` width of an image
# + id="dWVP753RRHwl" colab={"base_uri": "https://localhost:8080/"} outputId="83a11f26-58a4-4f93-bfd9-2559388a5a98"
# specify the color mode for the images
color_mode = 'rgb'
#color_mode = 'grayscale'
# specify the color channels: 3 for rgb, 1 for greyscale
channels = 3
# resizing of the images to
img_height = 50
img_width = 50
# set the number of class labels to predict for softmax classifier
classes = len(unique_types)
print(classes)
# total number of images to load
total_imgs = len(sub_df)
# + [markdown] id="gD4OuoCn5Mdk"
# ### 4.2 Function for loading the images
#
# 1. Load an image
# 2. Convert it to a NumPy array
# 3. Resize it to the specified size
# 4. Change the data type (for TensorFlow)
# 5. Normalize
#
#
# + id="JvXqjlqhRHwl"
# function to get the data from source
def fetch_images():
# using a python list because numpy arrays are super slow when adding values
image_data = []
# get the list of images from data frame
image_list = sub_df['id'].tolist()
# for progress updates
first = 0
last = len(image_list)
start_time = time.time()
for image_id in image_list:
image_path = DATASET_PATH + '/images/' + str(image_id) + '.jpg'
try:
# read image with keras function
image = load_img(image_path, color_mode=color_mode, target_size=(img_height, img_width))
        except ImportError:
            #print("\n" + image_path + " could not be loaded")
            print('PIL is not available')
            continue
        except ValueError:
            print('interpolation method is not supported')
            continue
        # make sure the image has the size, dimensions and dtype keras expects
        image = np.array(image)
        image = np.reshape(image, [img_height, img_width, channels])
        image = np.float32(image)
# normalise data
image /= 255.0
# add image to list
image_data.append(image)
# control progress
end_time = time.time()
first += 1
print("\r[{}/{}]:{}% of images loaded, time: {}".format(first, last, int(first/last*100), end_time-start_time), end="")
return image_data
# + [markdown] id="1iEAMrXjRHwl"
# Loading all the images from the directory.
# + [markdown] id="juryHFHUkxC3"
# ### 4.3 Loading images and labels
# + [markdown] id="UQtI6VCLy5Ml"
# Loading the images
# + colab={"base_uri": "https://localhost:8080/"} id="5H3EkuNgRHwl" outputId="1c60946c-1aeb-449e-8e98-cc10bd5483db"
# load the images
image_data = fetch_images()
# + colab={"base_uri": "https://localhost:8080/"} id="7PWOhAFRkdVs" outputId="61c31cc4-6fc8-4406-e841-c1a14de7270d"
# convert python list to numpy array
image_data = np.array(image_data)
image_data.shape
# + id="8U9ZnXfiDgoW"
# check image data
print(image_data[9000])
# + [markdown] id="1ewA47C0yd0B"
# Loading the labels
# + id="kyolhpebriYa"
# load the labels
label_data = sub_df['class_number'].tolist()
# + colab={"base_uri": "https://localhost:8080/"} id="Fi2eWQWGDqLS" outputId="7d153e46-9d50-4adb-edd0-c068e9499933"
print(label_data[1])
# + colab={"base_uri": "https://localhost:8080/"} id="dVNqZiSlRHwm" outputId="c7caa95d-7da4-410d-ebee-48dea71ba1bb"
# create numpy array of labels
label_data = np.array(label_data, dtype='uint8')
# check for correct length: equal to number of instances from image_data
len(label_data)
# + id="m1UR1hNGq9Uq"
# convert labels to binary class matrix
label_data = tf.keras.utils.to_categorical(label_data)
#print(label_data)
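# +
# Illustrative sketch (not part of the original notebook): what the binary
# class matrix ("one-hot" encoding) produced by to_categorical above looks
# like, reproduced with plain numpy so it is easy to verify.
import numpy as np
demo_labels = np.array([0, 2, 1], dtype='uint8')
demo_onehot = np.eye(3, dtype='float32')[demo_labels]
# each row has a single 1 at the index of its class number,
# e.g. label 2 -> [0., 0., 1.]
print(demo_onehot)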
# + [markdown] id="ei1yCryg2qRo"
# # 5. Data split
# + [markdown] id="JY361BHRdeE0"
# Split the numpy arrays into three parts:
#
#
# 1. Training data
# 2. Validation data
# 3. Test data
#
# e.g. 80% / 10% / 10%
#
#
# + colab={"base_uri": "https://localhost:8080/"} id="IMY7pfqFeh9s" outputId="678a88c1-0b36-4a31-eb02-8c67c851e3c8"
# get the indices where to do the split
split1 = int(0.8 * total_imgs)
split2 = int(0.9 * total_imgs)
print(split1, split2)
# + id="GCShfg3fAODB" colab={"base_uri": "https://localhost:8080/"} outputId="5c01de13-6fcb-4e46-cc7c-f98aa2ebbda9"
# train split
train_images = image_data[:split1]
train_labels = label_data[:split1]
# validation split
val_images = image_data[split1:split2]
val_labels = label_data[split1:split2]
# test split
test_images = image_data[split2:]
test_labels = label_data[split2:]
print('number of training samples: {} equals {}%'.format(len(train_images), round(len(train_images)/total_imgs*100)))
print('number of validation samples: {} equals {}%'.format(len(val_images), round(len(val_images)/total_imgs*100)))
print('number of test samples: {} equals {}%'.format(len(test_images), round(len(test_images)/total_imgs*100)))
# + [markdown] id="Fzwr-EZ0RHwm"
# # 6. Building the model
# + colab={"base_uri": "https://localhost:8080/"} id="vh0JAE0oRHwm" outputId="0be583e0-ec89-47ff-9a0c-1849550afd62"
# create sequential model with keras
network = models.Sequential()
network.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(img_height, img_width, channels)))
network.add(layers.MaxPooling2D((2, 2)))
network.add(layers.Conv2D(32, (3, 3), activation='relu'))
network.add(layers.MaxPooling2D((2, 2)))
network.add(layers.Conv2D(32, (3, 3), activation='relu'))
network.add(layers.MaxPooling2D((2, 2)))
network.add(layers.Conv2D(32, (3, 3), activation='relu'))
network.add(layers.Flatten())
network.add(layers.Dense(32, activation='relu'))
network.add(layers.Dense(classes, activation='softmax'))
network.summary()
# + id="asln69OHRHwm"
# compile the model
# loss: 2 classes: binary_crossentropy, 3+ classes: categorical_crossentropy
network.compile(optimizer = keras.optimizers.RMSprop(),
loss = 'categorical_crossentropy',
metrics = ['accuracy'])
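# +
# Illustrative sketch (not part of the original notebook): for a single
# sample, categorical crossentropy is -sum(y_true * log(y_pred)), i.e. the
# negative log of the probability the softmax assigns to the true class.
import numpy as np
y_true_demo = np.array([0., 1., 0.])     # one-hot label, true class = 1
y_pred_demo = np.array([0.1, 0.7, 0.2])  # softmax output
loss_demo = -np.sum(y_true_demo * np.log(y_pred_demo))
print(loss_demo)  # -log(0.7), roughly 0.357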
# + colab={"base_uri": "https://localhost:8080/"} id="X2I4aaquRHwm" outputId="27d58259-6755-48e6-efa1-1e15cdeb22ad"
# train the network
history = network.fit(train_images,
train_labels,
validation_data=(val_images, val_labels),
epochs=20,
batch_size=10,
shuffle=True)
#callbacks=[tensorboard_callback])
# + [markdown] id="7FqRE-6olzUq"
# ### Training plots
# + [markdown] id="L-Q6-7aXPsyG"
# Classification accuracy for training & validation
# + colab={"base_uri": "https://localhost:8080/", "height": 517} id="l4aLmMENly_y" outputId="875ac5bb-e33e-455a-9ef6-8b7ab99cb541"
# plot the accuracy
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
epochs = range(1, len(acc) + 1)
plt.figure(figsize=(8,6), dpi=100)
plt.plot(epochs, acc, 'ko', label='Training')
plt.plot(epochs, val_acc, 'k', label='Validation')
plt.legend()
plt.show()
# + [markdown] id="Dn2vSLhzPnkx"
# Loss values for training & validation
# + colab={"base_uri": "https://localhost:8080/", "height": 513} id="99FyGFaFxBK2" outputId="b7971b57-0a90-4c7d-f8ba-cc2d82ba5bed"
# plot the loss
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.figure(figsize=(8,6), dpi=100)
plt.plot(epochs, loss, 'ko', label='Training loss')
plt.plot(epochs, val_loss, 'k', label='Validation loss')
plt.legend()
plt.show()
# + id="KLx1GtRQtDzo"
# free up memory space and clear model session
import gc
gc.collect()
keras.backend.clear_session()
# + id="KFrsBGmwlWxo"
# save the model
network.save('modell2-unterkategorie.h5')
# + [markdown] id="QSc9foAakmI0"
# # 7. Testing the model
#
# on unseen data
# + colab={"base_uri": "https://localhost:8080/"} id="Tt688gN7klnW" outputId="fb032388-3332-4ef4-9df5-bfc94e6a5532"
# specify test accuracy
test_loss, test_acc = network.evaluate(test_images, test_labels, batch_size=1)
test_acc
# + colab={"base_uri": "https://localhost:8080/"} id="tSL6Vxq5lGOa" outputId="2974f092-c56d-4f43-8746-6af923676c71"
# specify test loss
test_loss
# + [markdown] id="o7yY0wW7ozor"
# ### 7.1 Testing and displaying images
# + [markdown] id="zyABGjGanDKZ"
# First, 9 of the test images are selected at random.
#
# A random index into the test images is chosen for each and stored in a list.
# + colab={"base_uri": "https://localhost:8080/"} id="O_lvt5GFT1Ir" outputId="e3153a16-7c18-4fe7-99b1-4b1acd990557"
# create random indices from test images
indices = []
for i in range(9):
    # randint is inclusive on both ends, so subtract 1 to avoid an index error
    rand = random.randint(0, len(test_images) - 1)
indices.append(rand)
print(indices)
# + [markdown] id="QgyClc2wnbAT"
# The class prediction is generated for all test images and stored in the variable `prediction`.
#
# Each prediction is returned as a class number. These numbers are stored in the list `predicted_labels`.
# + colab={"base_uri": "https://localhost:8080/"} id="-3hY-WLeWVrB" outputId="0f6dfaeb-9b9d-4f5f-c97a-75d452f14936"
# testing
# compare predicted labels to true labels
prediction = np.argmax(network.predict(test_images), axis=-1)
predicted_labels = []
true_labels = []
for num in indices:
predicted_labels.append(prediction[num])
true_labels.append(np.argmax(test_labels[num]))
for i in range(len(predicted_labels)):
print('img: {}, predicted class: {}, true label: {}, result: {}'
.format(sub_df.loc[split2 + indices[i], 'id'], predicted_labels[i],
true_labels[i], predicted_labels[i] == true_labels[i]))
# + [markdown] id="4qm8MqlbfbDF"
# The images are plotted.
#
# The matching image is looked up via the data frame index by adding the random index to the index where the test split begins.
#
# The title is selected from the list `unique_types` using the stored labels.
# + [markdown] id="rfTmk6_E0Szv"
# ### 7.2 Plotting the classifications
#
# True: the classification is correct
# False: the classification is incorrect
# + colab={"base_uri": "https://localhost:8080/", "height": 536} id="jjXcJGkBYnlV" outputId="aeb8f088-4afd-4b8d-c9bc-3a51c3055468"
# index for data frame
image_path = DATASET_PATH + '/images/'
fig = plt.figure(figsize=(9, 9))
for i in range(9):
# get the image path from the data frame
image_id = sub_df.loc[split2 + indices[i], 'id']
image = plt.imread(image_path + str(image_id) + '.jpg')
fig.add_subplot(3,3,i+1)
# get the predicted label
plt.title(unique_types[predicted_labels[i]] + ': ' + str(predicted_labels[i] == true_labels[i]))
plt.imshow(image)
plt.axis('off')
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + hide_input=false
## Import all the things
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib.colors import LinearSegmentedColormap
import matplotlib.ticker as ticker
import numpy as np
import pandas as pd
import geopandas as gpd
import pycountry
from scipy.stats import gaussian_kde
from IPython.display import display, Markdown, Latex, display_markdown
from shapely.geometry import Point
import os
import datetime
# %matplotlib inline
# This allows cells with long text values to wrap
pd.set_option('display.max_colwidth', -1)
pd.set_option('max_rows', 200)
# +
# Load data for maps
# bounding box
bbox = gpd.read_file("../data_files/spatial-vector-lidar/global/ne_110m_graticules_all/ne_110m_wgs84_bounding_box.shp")
bbox_robinson = bbox.to_crs('+proj=robin')
# world map
world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres')).to_crs(epsg=4326) #.to_crs('+proj=robin') # world map
world_robinson = world.to_crs('+proj=robin')
# Set up function to add row/column totals to dataframes
def add_totals(df):
df['row total'] = df.sum(axis=1)
df = df.append(df.sum(axis=0).rename("column total"))
return df
# Define constants
report_date = pd.Timestamp(2018, 10, 31)
# report_date
report_year = report_date.strftime('%Y')
report_day = report_date.strftime('%d')
report_month = report_date.strftime('%B')
# + hide_input=true
def get_country_name(alpha2_code):
'''
Takes a two character country code and
returns the full name of the country.
'''
try:
return pycountry.countries.get(alpha_2=alpha2_code).name
# return pycountry.countries.
except KeyError:
if alpha2_code == "W3":
return "online"
else:
return "unknown"
def get_country_code3(alpha2_code):
'''
Takes a two character country code and
    returns the three character country code.
'''
try:
return pycountry.countries.get(alpha_2=alpha2_code).alpha_3
# return pycountry.countries.
except KeyError:
if alpha2_code == "W3":
return "online"
else:
return "unknown"
# Function to create dataframes of counts and percentages by category (previous experience, expertise)
# when option is check many
def get_value_counts_many(df, col_name, value_list):
"""
Takes the parameters:
* dataframe
* the column in the df you want to group and count by
* The list of characteristics (to ensure sort order)
Returns a two column dataframe that has the grouped column and the count by that column
"""
df_counts = pd.DataFrame(columns=[col_name, 'Count'])
for v in value_list:
# count = df[col_name].apply(lambda x:x.count(v)).sum()
count = df[col_name].str.count(v).sum()
df_counts.loc[len(df_counts)] = [v, count]
return df_counts
# + hide_input=false
# Load csv into df
all_events = pd.read_csv("../data_files/all_workshops.csv", keep_default_na=False, na_values=[''])
# all_events
# + [markdown] variables={" report_day ": "31", " report_month ": "October", " report_year ": "2018"}
# # The Carpentries: Programmatic Assessment Report
#
#
# **January 1, 2012 to {{ report_month }} {{ report_day }}, {{ report_year }}**
#
# **Authors: <NAME>, <NAME>**
#
# ## What is The Carpentries?
# Software Carpentry (SWC), Data Carpentry (DC), and Library Carpentry (LC) are lesson programs of The Carpentries (a fiscally sponsored project of Community Initiatives). We teach essential computing and data skills. We exist because the skills needed to do computational, data-intensive research are not part of basic research training in most disciplines. Read more at https://carpentries.org/.
#
# ## About Software Carpentry
# Software Carpentry enables researchers to create purpose-built tools, whether it be a Unix shell script to automate repetitive tasks, or software code in programming languages such as Python or R. These enable researchers to build programs that can be read, re-used, and validated, greatly enhancing the sharing and reproducibility of their research. Read more at https://software-carpentry.org/.
#
# ## About Data Carpentry
# Data Carpentry learners are taught to work with data more effectively. Workshops focus on the data lifecycle, covering data organization, cleaning and management through to data analysis and visualization. Lessons are domain-specific, with curricula for working with ecological data, genomic sequencing data, social sciences survey data, and geospatial data. Read more at https://datacarpentry.org/.
#
# ## About Library Carpentry
# Library Carpentry develops lessons and teaches workshops for and with people working in library- and information-related roles. Our goal is to create an on-ramp to empower this community to use software and data in their own work as well as be advocates for and train others in efficient, effective and reproducible data and software practices. Library Carpentry data is not included in this report as it joined The Carpentries as a lesson program on November 1, 2018. Future Carpentries programmatic assessment reports will include Library Carpentry data. Read more at https://librarycarpentry.org/.
#
#
# ## What The Carpentries offers
#
# * A suite of open source, collaboratively-built, community-developed lessons
# * Workshops based on a learn-by-doing, ‘code with me’ approach
# * A supportive learning culture
# * Instructor training, mentoring and support
# * Active global community which subscribes to an inclusive code of conduct
# * Evidence-based, proven pedagogical training methods
# * Ongoing development opportunities
# * Open discussions
#
# The Carpentries began systematically recording data for our workshops in 2012. We use this data to investigate how The Carpentries has grown over the years including number and geographic reach of our workshops, and learners at these workshops. We also look at our Instructor Training program, including number and geographic reach of instructor training events, number of trainees and their completion rates, and onboarding of new Instructor Trainers.
#
# Data are collected by a team of Workshop Administrators. In Africa, Australia, Canada, New Zealand, and the United Kingdom, Workshop Administrators are affiliated with our member institutions and provide in-kind staff time. A full-time Carpentries staff member is the Workshop Administrator for the rest of the world.
#
#
# -
# # Part 1: Workshops
#
# Carpentries workshops generally comprise two full days of face-to-face instruction, based on materials specific to their lesson programs.
#
# Workshops are taught by volunteer trained and certified Instructors. Certified Instructors are people who have completed our instructor training course. Carpentries lessons are all open source, and are hosted on GitHub.
#
# The full data set can be found in the Programmatic Assessment folder of The Carpentries Assessment repository on GitHub (https://github.com/carpentries/assessment/)
#
# +
#####
# Replace country codes with full country name
# Set distinct workshop type (SWC, DC, LC, TTT)
# Set NaN attendance data to zero
# Convert workshop dates to date format
# Get just the regular workshops (not TTT events, not onboarding events, etc)
#####
all_events.rename(columns={"country": "country2",}, inplace=True)
# Apply the function to get the full country name
all_events['country'] = all_events['country2'].apply(get_country_name)
all_events['country3'] = all_events['country2'].apply(get_country_code3)
# Clean up the tag names
# Create a new column for "workshop_type"; populate it SWC, DC, LC, or TTT
# Remove the old "tag_name" column
all_events.loc[all_events['tag_name'].str.contains("SWC"), "workshop_type"] = "SWC"
all_events.loc[all_events['tag_name'].str.contains("DC"), "workshop_type"] = "DC"
all_events.loc[all_events['tag_name'].str.contains("LC"), "workshop_type"] = "LC"
all_events.loc[all_events['tag_name'].str.contains("TTT"), "workshop_type"] = "TTT"
all_events = all_events.drop('tag_name', axis=1)
# Clean up attendance value - this is inconsistently stored as NaN or 0
# All zero values should be NaN
all_events['attendance'] = all_events['attendance'].replace(0.0, np.nan)
# Date data type
all_events['start_date'] = pd.to_datetime(all_events['start_date'])
# Remove events after report date
all_events = all_events[all_events['start_date'] < report_date]
# Remove instructor training events; these will be analyzed separately.
# Limit to non TTT workshops
workshops = all_events[all_events['workshop_type'] != "TTT"]
# Remove online events like maintainer onboarding
workshops = workshops.drop(workshops[workshops.country == "online"].index)
# +
# Count of workshops by Carpentry and by year. This year's data is actual, not projected.
workshops_by_carpentry_year = workshops.groupby([workshops['start_date'].dt.year, 'workshop_type'])['slug'].count().unstack()
# To calculate projections for current year, take number of workshops at same point in last year
# Get the year of the max date in our data set
current_year = max(workshops['start_date'].dt.year)
# Get one year ago based on that
last_year = current_year - 1
# Get the actual date of the latest workshop in our data set
latest_workshop_date = max(workshops['start_date'])
# Get the comparison date one year ago
last_year_comparison_date = max(workshops['start_date']) - datetime.timedelta(days=365)
# January 1 of last year
# last_year_first_day = datetime.date(last_year, 1, 1)
last_year_first_day = pd.Timestamp(last_year, 1, 1)
# Get the workshops that ran between Jan 1 of last year and the comparison date one year ago
last_year_workshops_to_comp_date = workshops[(workshops.start_date >= last_year_first_day) & (workshops.start_date <= last_year_comparison_date)]
# Count how many workshops happened total last year
count_last_year_workshops = len(workshops[workshops['start_date'].dt.year == last_year])
# Count YTD workshops this year
count_this_year_workshops = len(workshops[workshops['start_date'].dt.year == current_year])
# Last year's workshops by Carpentry by year through the comparison date
last_year_to_comp_date_by_carpentry = last_year_workshops_to_comp_date.groupby([last_year_workshops_to_comp_date['start_date'].dt.year, 'workshop_type'])['slug'].count().unstack()
# Last year's workshops by Carpentry by year total
last_year_total_by_carpentry = workshops_by_carpentry_year.loc[[last_year]]
# This year's workshops by Carpentry by year to date
this_year_to_date_by_carpentry = workshops_by_carpentry_year.loc[[current_year]]
# Proportion of workshops by Carpentry by year that had occurred by the comparison date
proportion = last_year_to_comp_date_by_carpentry/last_year_total_by_carpentry
# Rename the rows so we can run calculations on them
this_year_to_date_by_carpentry.rename({current_year:1}, inplace=True)
proportion.rename({last_year:1}, inplace=True)
# Assuming current year will progress at same proportionate rate
# calculate the projected number of workshops for the current year
current_year_projected = this_year_to_date_by_carpentry.iloc[[0]]/proportion.iloc[[0]]
# Rename the row for the current year projections
current_year_projected.rename({1:current_year}, inplace=True)
# In the workshops by carpentry year dataframe, replace the actual current year data
# with projected current year data
workshops_by_carpentry_year.loc[[current_year]] = current_year_projected.loc[[current_year]]
# Replace the NaNs with 0 and convert floats to ints
workshops_by_carpentry_year.fillna(0, inplace=True)
workshops_by_carpentry_year = workshops_by_carpentry_year.round(0)
workshops_by_carpentry_year = workshops_by_carpentry_year.astype(int)
workshops_by_carpentry_year.index.name = 'Year'
workshops_by_carpentry_year.columns.name = "Workshop Type"
current_year_workshops = dict((workshops_by_carpentry_year.loc[current_year]))
# + [markdown] variables={" current_year ": "2018", " current_year_workshops['DC'] ": "121", " current_year_workshops['SWC'] ": "294"}
# ### Figure 1: Workshops by Carpentry lesson program by Year
#
# This bar chart shows the number of Data Carpentry (DC) and Software Carpentry (SWC) workshops each year. Data for 2018 is a projection calculated by looking at the number of workshops run in the same time period in 2017.
#
# In {{ current_year }} we expect to run {{ current_year_workshops['DC'] }} Data Carpentry and {{ current_year_workshops['SWC'] }} Software Carpentry workshops.
#
# Source data can be found in Table 1 in the Appendix.
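# +
# Illustrative sketch (not part of this report) of the projection logic used
# below, with made-up numbers: if 100 of last year's 130 workshops had
# happened by this point in the year, and 110 workshops have run so far this
# year, the full-year projection is 110 / (100 / 130) = 143.
last_year_by_comp_date = 100
last_year_total = 130
this_year_to_date = 110
projected_demo = round(this_year_to_date / (last_year_by_comp_date / last_year_total))
print(projected_demo)  # 143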
# +
fig = plt.figure(figsize=(12, 6)) # Create matplotlib figure
ax = fig.add_subplot(111) # Create matplotlib axes
width = .5 # Set width of bar
title = "Carpentries workshops count by year"
# Set axes labels and legend
ax.set_xlabel("Year")
ax.set_ylabel("Workshop Count")
# ax.legend(title="Workshop Type", fontsize=12)
ax.title.set_size(18)
ax.xaxis.label.set_size(18)
ax.yaxis.label.set_size(18)
plt.xticks(fontsize=14, rotation=0)
plt.yticks(fontsize=14, rotation=0)
# Plot chart
workshops_by_carpentry_year.plot(y = ["DC", "SWC"], kind='bar', ax=ax, width=width, title=title, stacked=True,)
leg = ax.legend(fontsize=12)
leg.set_title(title="Workshop Type", prop={'size':14,})
# Customize the gridlines
ax.grid(linestyle='-', linewidth='0.25', color='gray')
# Create a new dataframe that has just the total number of workshops by year
totals = workshops_by_carpentry_year['DC'] + workshops_by_carpentry_year['SWC']
years = list(totals.index)
# Collect the x positions of the bars (patch x values, not the years as you might expect)
# Add them to an empty list
# The list will be double what's expected because it covers all the stacked values
xmarks = []
for p in ax.patches:
# print("X: ", p.get_x())
# print("Y: ", p.get_height())
xmarks.append(p.get_x())
# Make an empty list to be populated with a tuple for each stack
# Go through the length of the totals series
# Add to the empty list a tuple: (position in totals df, position in xmarks list)
t = []
for y in range(len(totals)):
t.append((list(totals)[y], xmarks[y]))
# Annotate the stacked bar chart with
# (annotation text, position of text)
for p in t:
ax.annotate(str(p[0]), (p[1] + .08, p[0] + 5), fontsize=14)
# Don't allow the axis to be on top of your data
ax.set_axisbelow(True)
ax.set_ylim(0,max(totals) + 50)
# Display the plot
plt.show()
# See
# https://stackoverflow.com/questions/40783669/stacked-bar-plot-by-group-count-on-pandas-python
# +
# Actual data for workshops by country and year through 2018
# Projections for 2018 are calculated below
workshops_by_country_year = workshops.groupby(['country', workshops['start_date'].dt.year])['slug'].count().unstack()
workshops_by_country_year = workshops_by_country_year.fillna(0)
workshops_by_country_year.columns.names = ['Year']
workshops_by_country_year = workshops_by_country_year.astype(int)
# # Last year's workshops by country by year through the comparison date
# Take *all* of last year's workshop by the com date and group them by country
last_year_to_comp_date_by_country = last_year_workshops_to_comp_date.groupby(['country'])['slug'].count().to_frame() #.unstack()
# # current_year_projected.rename({1:current_year}, inplace=True)
last_year_to_comp_date_by_country.rename(columns={'slug':last_year}, inplace=True)
# Last year's workshops by Country by year total
# Get just the last_year column from the workshops_by_country_year df
last_year_total_by_country = workshops_by_country_year[[last_year]]
# This year's workshops by Country by year total
# Get just the current_year column from the workshops_by_country_year df
current_year_to_date_by_country = workshops_by_country_year[[current_year]]
# Proportion of workshops by country by year that had occurred by the comparison date
proportion = last_year_to_comp_date_by_country/last_year_total_by_country
projected_current_year = current_year_to_date_by_country[current_year]/proportion[last_year]
projected_current_year = projected_current_year.to_frame()
# display(projected_current_year)
# Add a column to the projected data that includes this year's actual data
projected_current_year[current_year] = workshops_by_country_year[current_year]
# Get the maximum of the two columns
# This is because when a country has just one workshop, the projection calculation makes the
# total for the current year look like zero
workshops_by_country_year[current_year] = projected_current_year[[0, current_year]].max(axis=1)
workshops_by_country_year.fillna(0, inplace=True)
workshops_by_country_year = workshops_by_country_year.round(0)
workshops_by_country_year = workshops_by_country_year.astype(int)
workshops_by_country_year.index.name = 'Country'
workshops_by_country_year.columns.name = "Year"
# display(workshops_by_country_year)
# +
# This adjusts for Ethiopia and South Africa,
# both countries experiencing rapid recent growth, so the
# projections done by proportion to last year are not accurate.
# For these countries, the data for the first 10 months of 2018 is scaled
# up to a full year, rather than comparing this year to last year.
# This should NOT be automatically run when running this report again;
# specific adjustments may need to be made again
adjusted = workshops[workshops['country'].isin(['Ethiopia', 'South Africa'])].groupby(['country', workshops['start_date'].dt.year])['slug'].count().unstack()
adjusted[2018] = adjusted[2018]/10*12
adjusted.fillna(0, inplace=True)
adjusted = adjusted.round(0)
adjusted = adjusted.astype(int)
workshops_by_country_year[2018].loc[['South Africa']] = adjusted[2018].loc[['South Africa']]
workshops_by_country_year[2018].loc[['Ethiopia']] = adjusted[2018].loc[['Ethiopia']]
# -
workshops_by_country_year_top = workshops_by_country_year[workshops_by_country_year.sum(axis=1) >= 10]
# ### Figure 2: Geographic Reach
#
# Each dot on the map below represents one workshop since 2012. Source data can be found in Table 2 in the Appendix.
#
#
workshops_with_location = workshops[workshops.latitude.notnull()]
workshops_with_location = workshops_with_location[workshops_with_location.longitude.notnull()]
# +
# For more info see
# https://www.earthdatascience.org/courses/earth-analytics-python/spatial-data-vector-shapefiles/intro-to-coordinate-reference-systems-python/
# https://github.com/geopandas/geopandas/issues/245
# Make simple df with just the latlon columns
latlon = workshops_with_location[['longitude', 'latitude']]
# world map with latlong projections
# world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres')).to_crs(epsg=4326) #.to_crs('+proj=robin') # world map
# Turn latlon into list of shapely points
q = [Point(xy) for xy in latlon.values]
# Create geodataframe using these points
q = gpd.GeoDataFrame(q, columns = ['geometry'])
# q.head()
# This is a "naive" object; need to explicitly set the crs to 4326 (latlon)
q.crs = {'init': 'epsg:4326', 'no_defs': True}
# Now translate both latlons to robinson projection - a bit "rounder"
# world = world.to_crs('+proj=robin')
q = q.to_crs('+proj=robin')
fig, ax = plt.subplots(figsize=(15, 8))
# alpha controls transparency (1 = fully opaque)
bbox_robinson.plot(ax=ax, alpha=1, color='whitesmoke', edgecolor='dimgray')
world_robinson.plot(color='darkgrey', ax=ax, edgecolor="dimgray")
q.plot(ax=ax, color='royalblue', marker='o', markersize=10)
# ax.axis('off')
# facecolor will not work if ax.axis is off
# ax.patch.set_facecolor('whitesmoke')
# # Drop x & y axis ticks
plt.xticks([], [])
plt.yticks([], [])
ax.set_title("Carpentries workshops, 2012-2018")
ax.title.set_size(18)
# Make the axes invisible by making them the same color as the background
ax.spines['bottom'].set_color('white')
ax.spines['top'].set_color('white')
ax.spines['right'].set_color('white')
ax.spines['left'].set_color('white')
plt.show()
# + [markdown] variables={" current_year ": "2018", " last_year ": "2017"}
# ### Figure 3: Countries hosting 10 or more workshops
#
# This bar chart looks only at countries that have hosted 10 or more workshops since 2012.
# For each country, the number of workshops run each year is plotted. Data for {{ current_year }} is a
# projection. For most countries, this projection is based on the number of workshops run in the same time period in {{ last_year }}. For workshops in countries with a shorter history with The Carpentries (Ethiopia and South Africa), this projection is based on the average number of workshops per month in {{ current_year }}.
#
# Source data can be found in Table 2 in the Appendix.
# +
# Draw bar chart showing most active countries and workshops by year
fig = plt.figure(figsize=(12, 10)) # Create matplotlib figure
ax = fig.add_subplot(111) # Create matplotlib axes
title = "Carpentries workshops by country by year"
workshops_by_country_year_top[::-1].plot(y = list(workshops_by_country_year_top)[::-1], kind='barh', ax=ax, position=1, title=title)
# Set axes labels and legend
ax.set_xlabel("Workshop Count")
ax.set_ylabel("Country")
handles, labels = ax.get_legend_handles_labels()
leg = ax.legend(handles[::-1], labels[::-1], title="Year", fontsize=12,)
# leg = ax.legend(fontsize=12)
leg.set_title(title="Year", prop={'size':14,})
ax.title.set_size(18)
ax.xaxis.label.set_size(18)
ax.yaxis.label.set_size(18)
plt.xticks(fontsize=14, rotation=0)
plt.yticks(fontsize=14, rotation=0)
# Customize the gridlines
ax.grid(linestyle='-', linewidth='0.25', color='gray')
# Don't allow the axis to be on top of your data
ax.set_axisbelow(True)
# Turn on the minor TICKS, which are required for the minor GRID
ax.minorticks_on()
# Customize the major grid
ax.grid(which='major', linestyle='-', linewidth='0.5', color='black')
# Customize the minor grid
ax.grid(which='minor', linestyle=':', linewidth='0.5', color='#AAB7B8')
ax.xaxis.set_major_locator(ticker.MultipleLocator(50))
ax.xaxis.set_minor_locator(ticker.MultipleLocator(10))
# https://www.oreilly.com/library/view/matplotlib-plotting-cookbook/9781849513265/ch03s11.html
plt.show()
# -
# # Part 2: Learners
# +
workshops_current_year = workshops[workshops['start_date'].dt.year == current_year].copy()
workshops_current_year.attendance.fillna(0, inplace=True)
current_year_no_attendance = workshops_current_year[workshops_current_year.attendance ==0].copy()
percent_missing_attendance = len(current_year_no_attendance)/len(workshops_current_year) * 100
percent_missing_attendance = int(percent_missing_attendance)
# print(percent_missing_attendance)
# + [markdown] variables={" current_year ": "2018", " percent_missing_attendance ": "27"}
# ### Figure 4: Workshop Attendance
#
# This bar chart represents the total number of Software Carpentry (SWC) and Data Carpentry (DC) learners each year. Numbers for {{ current_year }} represent projected data. However, approximately {{ percent_missing_attendance }}% of {{ current_year }} workshops have unreported attendance. The Carpentries staff and workshop administrators are working to improve our data collection and reporting measures to have more complete and accurate attendance figures.
#
# Source data can be found in Table 3 in the Appendix.
# +
# Learners by year
# Actual data for learners by year through 2018
# Projections for 2018 are calculated below
learners_by_year = workshops.groupby([workshops['start_date'].dt.year, 'workshop_type'])['attendance'].sum().unstack()
learners_by_year = learners_by_year.fillna(0)
learners_by_year = learners_by_year.astype(int)
# Last year's attendance by year through the comparison date
# # Take *all* of last year's workshops through the comparison date and group them by Carpentry type
last_year_to_comp_date_attendance = last_year_workshops_to_comp_date.groupby([workshops['start_date'].dt.year, 'workshop_type'])['attendance'].sum().unstack()
# # # current_year_projected.rename({1:current_year}, inplace=True)
# last_year_to_comp_date_by_country.rename(columns={'slug':last_year}, inplace=True)
# Last year's workshops attendance by year total
# # Get just the last_year column from the workshops_by_country_year df
learners_last_year = learners_by_year.loc[[last_year]]
# Rename the rows so we can run calculations on them
learners_by_year.rename({current_year:1}, inplace=True)
x = last_year_to_comp_date_attendance/learners_last_year
x.rename({last_year:1}, inplace=True)
# learners_by_year.loc[[2018]]/x.loc[[2017]]
# display(x.loc[[2017]])
# display(learners_by_year.loc[[2018]])
# learners_by_year.loc[[2018]]/x.loc[[2017]]
learners_by_year.loc[[1]] = learners_by_year.loc[[1]]/x
learners_by_year.rename({1:current_year}, inplace=True)
learners_by_year = learners_by_year.round(0)
learners_by_year = learners_by_year.astype(int)
learners_by_year.index.name = 'Year'
learners_by_year.columns.name = "Workshop Type"
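# The projection above scales this year's attendance so far by the fraction of last year's total that had accrued by the same comparison date. A minimal, illustrative sketch of that calculation (the function name and numbers are not from the notebook):

```python
def project_full_year(ytd_current, ytd_last, total_last):
    """Project a full-year total by dividing the current year-to-date
    figure by the fraction of last year's total reached by this date."""
    completion_fraction = ytd_last / total_last
    return ytd_current / completion_fraction

# e.g. 80 learners so far; last year had 50 of its eventual 100 by this date,
# so the projection is 80 / 0.5 = 160 for the full year.
projected = project_full_year(80, 50, 100)
```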
# +
fig = plt.figure(figsize=(12, 6)) # Create matplotlib figure
ax = fig.add_subplot(111) # Create matplotlib axes
width = .5 # Set width of bar
title = "Carpentries attendance by year"
# Plot chart
# Having position=1 as an argument will make the bars not be centered on the x ticks
learners_by_year.plot(y = ["DC", "SWC"], kind='bar', ax=ax, width=width, title=title, stacked=True,)
# Set axes labels and legend
ax.set_xlabel("Year")
ax.set_ylabel("Attendance")
# ax.legend(title="Workshop Type", fontsize=12)
leg = ax.legend(fontsize=12)
leg.set_title(title="Workshop Type", prop={'size':14,})
ax.title.set_size(18)
ax.xaxis.label.set_size(18)
ax.yaxis.label.set_size(18)
plt.xticks(fontsize=14, rotation=0)
plt.yticks(fontsize=14, rotation=0)
# Customize the gridlines
ax.grid(linestyle='-', linewidth='0.25', color='gray')
# Create a series with the total attendance by year
totals = learners_by_year['DC'] + learners_by_year['SWC']
years = list(totals.index)
# Figure out what the xmarks values are (xtick values; they are not year like you'd think)
# Add them to an empty list
# The list will be double what's expected as it goes through all the stacked values
xmarks = []
for p in ax.patches:
# print("X: ", p.get_x())
# print("Y: ", p.get_height())
xmarks.append(p.get_x())
# Make an empty list to be populated with a tuple for each stack
# Go through the length of the totals series
# Add to the empty list a tuple: (position in totals df, position in xmarks list)
t = []
for y in range(len(totals)):
t.append((list(totals)[y], xmarks[y]))
# Annotate the stacked bar chart with
# (annotation text, position of text)
for p in t:
ax.annotate(str(p[0]), (p[1] + .08, p[0] + 100), fontsize=14)
# Don't allow the axis to be on top of your data
ax.set_axisbelow(True)
ax.set_ylim(0,max(totals) + 500)
# Display the plot
plt.show()
# See
# https://stackoverflow.com/questions/40783669/stacked-bar-plot-by-group-count-on-pandas-python
# -
# # Part 3: Instructor Training
# ## Overview
#
# Over the last hundred years, researchers have discovered an enormous amount about how people learn and how best to teach them. Unfortunately, much of that knowledge has not yet been translated into common classroom practice, especially at the university level. To help close this gap, we offer an Instructor Training program.
#
# This two-day class has the following overall goals:
#
# * Introduce trainees to evidence-based best-practices of teaching.
# * Teach how to create a positive environment for learners at Carpentries workshops.
# * Provide opportunities for trainees to practice and build their teaching skills.
# * Help trainees become integrated into the Carpentries community.
# * Prepare trainees to use these teaching skills in teaching Carpentries workshops.
#
# Because we have only two days, some things are beyond the scope of this class. We do not teach:
#
# * How to program in R or Python, use git, or any of the other topics taught in Carpentries workshops.
# * How to create lessons from scratch (although trainees will have a good start on the principles behind that sort of work if inspired to learn more).
#
# This training is based on our constantly revised and updated curriculum (https://carpentries.github.io/instructor-training/ ).
#
# After completing the two-day training program and several checkout exercises, trainees are awarded a badge and are considered certified Carpentries instructors, qualified to teach any of our workshops.
#
all_applications = pd.read_csv("../data_files/amy_applications_20181023.csv", keep_default_na=False, na_values=[''])
# ### Figure 5: Instructor Training Applications by Previous Experience in Teaching
#
# Source data can be found in Table 4 of the Appendix.
# +
apps_by_prev_experience = all_applications['Previous Experience in Teaching'].value_counts().to_frame()
apps_by_prev_experience.rename(columns={'Previous Experience in Teaching':"Count"}, inplace=True)
apps_by_prev_experience.index.name = 'Previous Experience'
apps_by_prev_experience.rename(index={'Primary instructor for a full course':'Primary instructor'},inplace=True)
apps_by_prev_experience.rename(index={'Teaching assistant for a full course':'Teaching assistant'},inplace=True)
apps_by_prev_experience.rename(index={'A workshop (full day or longer)':'A workshop'},inplace=True)
## This makes the chart bars sorted by size, biggest to smallest
# without_other = apps_by_prev_experience.drop("Other", axis=0).sort_values('Count', ascending=False)
# just_other = apps_by_prev_experience.loc[['Other']]
# apps_by_prev_experience = pd.concat([without_other, just_other])
## This makes the bars sorted by custom, least experience to most
experience_list = ['None', 'A few hours', 'A workshop', 'Teaching assistant', 'Primary instructor', 'Other',]
apps_by_prev_experience = apps_by_prev_experience.reindex(experience_list)
# display(apps_by_prev_experience)
fig = plt.figure(figsize=(12, 6)) # Create matplotlib figure
ax = fig.add_subplot(111) # Create matplotlib axes
width = .5 # Set width of bar
title = "Applications by Previous Experience Teaching"
apps_by_prev_experience.plot(kind='bar', ax=ax, title=title, legend=False, grid=True,)
# # Set axes labels and legend
ax.set_xlabel("Previous Experience Teaching")
ax.set_ylabel("Count")
ax.title.set_size(18)
ax.xaxis.label.set_size(18)
ax.yaxis.label.set_size(18)
plt.xticks(fontsize=14, rotation=90)
plt.yticks(fontsize=14, rotation=0)
ax.grid(linestyle='-', linewidth='0.25', color='gray')
# Don't allow the axis to be on top of your data
ax.set_axisbelow(True)
for i, label in enumerate(list(apps_by_prev_experience.index)):
count = apps_by_prev_experience.loc[label]['Count']
ax.annotate(str(count), (i-.1, count + 25), fontsize=14)
ax.set_ylim(0, max(apps_by_prev_experience['Count']) + 100)
plt.show()
# -
# ### Figure 6: Instructor Training Applications by Previous Training in Teaching
#
# Source data can be found in Table 5 of the Appendix.
# +
apps_by_prev_training = all_applications['Previous Training in Teaching'].value_counts().to_frame()
apps_by_prev_training.rename(columns={'Previous Training in Teaching':"Count"}, inplace=True)
apps_by_prev_training.index.name = 'Previous Training'
fig = plt.figure(figsize=(12, 6)) # Create matplotlib figure
ax = fig.add_subplot(111) # Create matplotlib axes
width = .5 # Set width of bar
## This makes the chart bars sorted by size, biggest to smallest
# without_other = apps_by_prev_training.drop("Other", axis=0).sort_values('Count', ascending=False)
# just_other = apps_by_prev_training.loc[['Other']]
# apps_by_prev_training = pd.concat([without_other, just_other])
## This makes the bars sorted by custom, least training to most
# display(apps_by_prev_training.index.values.tolist())
training_list = ['None', 'A few hours', 'A workshop', 'A certification or short course', 'A full degree', 'Other',]
apps_by_prev_training = apps_by_prev_training.reindex(training_list)
# display(apps_by_prev_training)
title = "Applications by Previous Training in Teaching"
apps_by_prev_training.plot(kind='bar', ax=ax, title=title, legend=False, grid=True,)
# # Set axes labels and legend
ax.set_xlabel("Previous Training in Teaching")
ax.set_ylabel("Count")
# Don't allow the axis to be on top of your data
ax.set_axisbelow(True)
ax.title.set_size(18)
ax.xaxis.label.set_size(18)
ax.yaxis.label.set_size(18)
plt.xticks(fontsize=14, rotation=90)
plt.yticks(fontsize=14, rotation=0)
ax.grid(linestyle='-', linewidth='0.25', color='gray')
for i, label in enumerate(list(apps_by_prev_training.index)):
count = apps_by_prev_training.loc[label]['Count']
ax.annotate(str(count), (i - 0.1, count + 25), fontsize=14)
ax.set_ylim(0, max(apps_by_prev_training['Count']) + 100)
plt.show()
# -
# ### Figure 7: Instructor Training Applications by Areas of Expertise
#
# Applicants can identify more than one area of expertise. Source data can be found in Table 6 of the Appendix.
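# `get_value_counts_many` is a helper defined earlier in the notebook; its job is to tally multi-select responses, where each applicant may list several areas. A hedged sketch of the idea (names and data are illustrative; substring matching is used because some option labels themselves contain commas):

```python
def count_multiselect(responses, options):
    """Count how many responses mention each option (substring match)."""
    return {opt: sum(opt in response for response in responses) for opt in options}

answers = ["Chemistry, Physics", "Physics", "Education"]
tally = count_multiselect(answers, ["Chemistry", "Physics", "Education"])
```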
# +
expertise_areas = ["Chemistry", "Civil, mechanical, chemical, or nuclear engineering",
"Computer science/electrical engineering", "Economics/business", "Education",
"Genetics, genomics, bioinformatics", "High performance computing", "Humanities",
"Library and information science", "Mathematics/statistics", "Medicine",
"Organismal biology", "Physics",
"Planetary sciences",
"Psychology/neuroscience", "Social sciences", "Space sciences",]
apps_by_expertise_areas = get_value_counts_many(all_applications, "Expertise areas", expertise_areas)
apps_by_expertise_areas.set_index('Expertise areas', inplace=True)
apps_by_expertise_areas['Count'] = apps_by_expertise_areas['Count'].astype(int)
fig = plt.figure(figsize=(12, 6)) # Create matplotlib figure
ax = fig.add_subplot(111) # Create matplotlib axes
width = .5 # Set width of bar
title = "Applications by Areas of Expertise"
apps_by_expertise_areas.plot(kind='bar', ax=ax, title=title, legend=False, grid=True,)
# # Set axes labels and legend
ax.set_xlabel("Areas of Expertise")
ax.set_ylabel("Count")
# Don't allow the axis to be on top of your data
ax.set_axisbelow(True)
ax.title.set_size(18)
ax.xaxis.label.set_size(18)
ax.yaxis.label.set_size(18)
plt.xticks(fontsize=14,)
plt.yticks(fontsize=14, rotation=0)
ax.grid(linestyle='-', linewidth='0.25', color='gray')
for i, label in enumerate(list(apps_by_expertise_areas.index)):
count = apps_by_expertise_areas.loc[label]['Count']
ax.annotate(str(count), (i- 0.3, count + 10), fontsize=14)
ax.set_ylim(0, max(apps_by_expertise_areas['Count']) + 50)
plt.show()
# -
# ### Figure 8: Instructor Training Applications by Occupation
#
# Source data can be found in Table 7 of the Appendix.
# +
apps_by_occupation = all_applications['Occupation'].value_counts().to_frame()
apps_by_occupation.rename(columns={'Occupation':"Count"}, inplace=True)
apps_by_occupation.index.name = 'Occupation'
fig = plt.figure(figsize=(12, 6)) # Create matplotlib figure
ax = fig.add_subplot(111) # Create matplotlib axes
width = .5 # Set width of bar
## This makes the chart bars sorted by size, biggest to smallest
# without_other = apps_by_occupation.drop("undisclosed", axis=0).sort_values('Count', ascending=False)
# just_other = apps_by_occupation.loc[['undisclosed']]
# apps_by_occupation = pd.concat([without_other, just_other])
## This makes the bars sorted by custom, starting at earliest career stage
# display(apps_by_occupation.index.values.tolist())
occupation_list = ['undergrad', 'grad', 'postdoc', 'research', 'faculty', 'support', 'librarian', 'commerce', 'undisclosed', ]
apps_by_occupation = apps_by_occupation.reindex(occupation_list)
# display(apps_by_occupation)
title = "Applications by Occupation"
apps_by_occupation.plot(kind='bar', ax=ax, title=title, legend=False, grid=True,)
# # Set axes labels and legend
ax.set_xlabel("Occupation")
ax.set_ylabel("Count")
# Don't allow the axis to be on top of your data
ax.set_axisbelow(True)
ax.title.set_size(18)
ax.xaxis.label.set_size(18)
ax.yaxis.label.set_size(18)
plt.xticks(fontsize=14, rotation=90)
plt.yticks(fontsize=14, rotation=0)
ax.grid(linestyle='-', linewidth='0.25', color='gray')
for i, label in enumerate(list(apps_by_occupation.index)):
count = apps_by_occupation.loc[label]['Count']
ax.annotate(str(count), (i - 0.15, count + 5), fontsize=14)
ax.set_ylim(0, max(apps_by_occupation['Count']) + 50)
plt.show()
# + [markdown] variables={" current_year ": "2018", " report_month ": "October", " report_year ": "2018"}
# ### Figure 9: Instructor Training Events
#
# Numbers for {{ current_year }} represent actual data through {{ report_month }} {{ report_year }}. Source data can be found in Table 8 of the Appendix.
# +
instructor_training = pd.read_csv("../data_files/instructor_training_events.csv", keep_default_na=False, na_values=[''])
# Remove events before report date
# Remove the ones that are tagged 6 only
# There are other events like trainer training and onboardings
instructor_training = instructor_training[instructor_training['tags'] != '6']
instructor_training['start'] = pd.to_datetime(instructor_training['start'])
instructor_training = instructor_training[instructor_training['start'] < report_date]
instructor_training.loc[:, 'online'] = (instructor_training.loc[:, 'country'] == "W3")
instructor_training['count_badged'].fillna(0, inplace=True)
instructor_training['count_badged'] = instructor_training['count_badged'].astype(int)
instructor_training['pct_badged'] = instructor_training['count_badged']/instructor_training['attendance']*100
instructor_training['pct_badged'] = instructor_training['pct_badged'].round(0)
instructor_training['pct_badged'] = instructor_training['pct_badged'].astype(int)
# +
# Use a calendar-aware offset; Timedelta has no fixed-length "month" unit
checkout_time = pd.DateOffset(months=3)
report_date_3mos_before = report_date - checkout_time
instructor_training_exclude_last3mos = instructor_training[instructor_training['start'] <= report_date_3mos_before]
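# Months have no fixed length, so a three-month window is best expressed as a calendar-aware offset rather than a fixed-duration `Timedelta`. An illustrative example (dates are made up, not the notebook's report_date):

```python
import pandas as pd

example_report_date = pd.Timestamp("2018-10-31")
example_cutoff = example_report_date - pd.DateOffset(months=3)  # 2018-07-31
```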
# +
instructor_training_by_year = instructor_training.groupby([instructor_training['start'].dt.year, 'online'])['slug'].count().unstack()
instructor_training_by_year.rename(columns={False:"in-person", True:"online"}, inplace=True)
instructor_training_by_year.index.names = ['Year']
instructor_training_by_year.columns.names = ["Type"]
instructor_training_by_year.fillna(0, inplace=True)
instructor_training_by_year = instructor_training_by_year.astype(int)
# df.rename(index=str, columns={"A": "a", "B": "c"})
# +
fig = plt.figure(figsize=(12, 6)) # Create matplotlib figure
ax = fig.add_subplot(111) # Create matplotlib axes
width = .5 # Set width of bar
title = "Instructor training events by year"
# Plot chart
instructor_training_by_year.plot(y = ["in-person", "online"], kind='bar', ax=ax, width=width, title=title, stacked=True,)
# Set axes labels and legend
ax.set_xlabel("Year")
ax.set_ylabel("Trainings Count")
# ax.legend(title="Workshop Type", fontsize=12)
leg = ax.legend(fontsize=12)
leg.set_title(title="Training Type", prop={'size':14,})
ax.title.set_size(18)
ax.xaxis.label.set_size(18)
ax.yaxis.label.set_size(18)
plt.xticks(fontsize=14, rotation=0)
plt.yticks(fontsize=14, rotation=0)
# Customize the gridlines
ax.grid(linestyle='-', linewidth='0.25', color='gray')
# # Create a new dataframe that has just the total number of workshops by year
totals = instructor_training_by_year['in-person'] + instructor_training_by_year['online']
years = list(totals.index)
# # Figure out what the xmarks values are (xtick values; they are not year like you'd think)
# # Add them to an empty list
# # The list will be double what's expected as it goes through all the stacked values
xmarks = []
for p in ax.patches:
# print("X: ", p.get_x())
# print("Y: ", p.get_height())
xmarks.append(p.get_x())
# # Make an empty list to be populated with a tuple for each stack
# # Go through the length of the totals series
# # Add to the empty list a tuple: (position in totals df, position in xmarks list)
t = []
for y in range(len(totals)):
t.append((list(totals)[y], xmarks[y]))
# # Annotate the stacked bar chart with
# # (annotation text, position of text)
for p in t:
ax.annotate(str(p[0]), (p[1] + .1, p[0] + 1), fontsize=14)
# # Don't allow the axis to be on top of your data
ax.set_axisbelow(True)
ax.set_ylim(0,max(totals) + 5)
# Display the plot
plt.show()
# See
# https://stackoverflow.com/questions/40783669/stacked-bar-plot-by-group-count-on-pandas-python
# -
# ### Figure 10: Badging Rates at online vs. in-person events
#
# This chart shows the average percent badged each year. Badging rates are calculated as the percentage of training-event attendees who went on to earn an instructor badge.
# Source data can be found in Table 9 of the Appendix.
# +
avg_badged_by_year = instructor_training_exclude_last3mos.groupby([instructor_training_exclude_last3mos['start'].dt.year, 'online'])['pct_badged'].mean().unstack()
avg_badged_by_year.rename(columns={False:"in-person", True:"online"}, inplace=True)
avg_badged_by_year.index.names = ['Year']
avg_badged_by_year.columns.names = ["Percent Badged"]
avg_badged_by_year.fillna(0, inplace=True)
avg_badged_by_year = avg_badged_by_year.round()
avg_badged_by_year = avg_badged_by_year.astype(int)
# +
fig = plt.figure(figsize=(12, 6)) # Create matplotlib figure
ax = fig.add_subplot(111) # Create matplotlib axes
width = .75 # Set width of bar
title = "Average badged by year"
# Plot chart
avg_badged_by_year.plot(y = ["in-person", "online"], kind='bar', ax=ax, width=width, title=title, stacked=False,)
# Set axes labels and legend
ax.set_xlabel("Year")
ax.set_ylabel("Average percent badged")
# ax.legend(title="Workshop Type", fontsize=12)
leg = ax.legend(fontsize=12)
leg.set_title(title="Training Type", prop={'size':14,})
ax.title.set_size(18)
ax.xaxis.label.set_size(18)
ax.yaxis.label.set_size(18)
plt.xticks(fontsize=14, rotation=0)
plt.yticks(fontsize=14, rotation=0)
# Customize the gridlines
ax.grid(linestyle='-', linewidth='0.25', color='gray')
for i, label in enumerate(list(avg_badged_by_year.index)):
count = avg_badged_by_year.loc[label]['in-person']
if count > 0:
ax.annotate(str(count), (i-.3, count + 1), fontsize=14)
# ax.set_ylim(0, max(apps_by_expertise_areas['Count']) + 5)
for i, label in enumerate(list(avg_badged_by_year.index)):
count = avg_badged_by_year.loc[label]['online']
if count > 0:
ax.annotate(str(count), (i + .1, count + 1 ), fontsize=14)
plt.show()
# -
all_instructors = pd.read_csv("../data_files/cumulative_instructors.csv", keep_default_na=False, na_values=[''])
total_badged_instructors = all_instructors['count'].max()
# + [markdown] variables={" report_month ": "October", " report_year ": "2018", " total_badged_instructors ": "1692"}
# ### Figure 11: Badged Instructors
#
# Cumulative count by year of all instructors badged through {{ report_month }} {{ report_year }}. As of {{ report_month }} {{ report_year }}, The Carpentries had a total of {{ total_badged_instructors }} instructors.
# +
fig = plt.figure(figsize=(12, 6)) # Create matplotlib figure
ax = fig.add_subplot(111) # Create matplotlib axes
# fig, ax = plt.subplots()
title = "Total Badged instructors"
# Plot chart
all_instructors.plot(kind='area', x="date", y='count', ax=ax, title=title)
# # Set axes labels and legend
ax.set_xlabel("Year")
ax.set_ylabel("Total Badged Instructors")
# # ax.legend(title="Workshop Type", fontsize=12)
# leg = ax.legend(fontsize=12)
# leg.set_title(title="Training Type", prop={'size':14,})
ax.title.set_size(18)
ax.xaxis.label.set_size(18)
ax.yaxis.label.set_size(18)
# plt.xticks(fontsize=14, rotation=0)
# plt.yticks(fontsize=14, rotation=0)
xticks = ["", 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019]
ax.set_xticklabels(xticks)
# Customize the gridlines
ax.grid(linestyle='-', linewidth='0.25', color='black')
# # Don't allow the axis to be on top of your data
ax.set_axisbelow(False)
plt.show()
# -
# # Part 4: Teaching
# ### Figure 12: Teaching Frequency
#
# Source data can be found in Table 10 of the Appendix.
# +
teaching_frequency = pd.read_csv("../data_files/teaching_frequency.csv")
# https://stackoverflow.com/questions/44314670/create-rename-categories-with-pandas
# df['new'] = pd.cut(df.age,
# bins=[0, 19, 29, 39, 49, 59, 999],
# labels=['0-19', '20-29', '30-39', '40-49', '50-59', '60+'],
# include_lowest=True)
xticks = ['0', '1', '2-5', '6-10', '11-15', '16-20', '21 or more']
bins = pd.cut(teaching_frequency['num_taught'],
bins = [-1, 0, 1, 5, 10, 15, 20, np.inf],
labels = xticks)
num_workshops_taught_binned = teaching_frequency.groupby(bins)['num_taught'].agg(['count'])
num_workshops_taught_binned = num_workshops_taught_binned.unstack().to_frame()
num_workshops_taught_binned.rename(columns={0:'count'}, inplace=True)
num_workshops_taught_binned.index.names = ["", 'workshops taught']
fig = plt.figure(figsize=(12, 6)) # Create matplotlib figure
ax = fig.add_subplot(111) # Create matplotlib axes
# Don't allow the axis to be on top of your data
ax.set_axisbelow(True)
title = "Instructor Teaching Frequency"
num_workshops_taught_binned.plot(kind='bar', ax=ax, title=title, legend=False, grid=True)
ax.set_ylabel("Instructor Count")
ax.set_xlabel("Workshop Count")
ax.set_xticklabels(xticks)
ax.title.set_size(18)
ax.xaxis.label.set_size(18)
ax.yaxis.label.set_size(18)
plt.xticks(fontsize=14, rotation=0)
plt.yticks(fontsize=14, rotation=0)
ax.grid(linestyle='-', linewidth='0.25', color='gray')
for i, label in enumerate(list(num_workshops_taught_binned.index)):
count = num_workshops_taught_binned.loc[label]['count']
ax.annotate(str(count), (i - 0.15, count + 10), fontsize=14)
ax.set_ylim(0, max(num_workshops_taught_binned['count']) + 50)
plt.show()
# -
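# The bins passed to `pd.cut` above are right-closed, so a count of 5 falls in the '2-5' bucket and 6 in '6-10'. A pure-Python sketch of the same bucketing (illustrative only, not the notebook's code):

```python
import bisect

edges = [0, 1, 5, 10, 15, 20]  # inclusive upper edges of the finite bins
labels = ['0', '1', '2-5', '6-10', '11-15', '16-20', '21 or more']

def bin_label(num_taught):
    """Return the label of the right-closed bin num_taught falls into."""
    return labels[bisect.bisect_left(edges, num_taught)]
```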
# # Part 5: Trainers
# ## Overview
#
# Until 2016, all Instructor Training events were run as online events by the Software Carpentry founder and former Executive Director. Knowing the limitations of having only one Instructor Trainer, in 2016, The Carpentries launched a training program for Instructor Trainers (https://carpentries.github.io/trainer-training/).
#
# This allowed us to expand our reach by running several events a month, across time zones for online events. It also allowed us to build capacity at member organizations that have onsite Instructor Trainers. These Trainers run events for their sites, building communities of trained and certified instructors there.
#
# By bringing on new Trainers in many parts of the world, we have a large community of Trainers who overlap time zones and connect with a wider audience. We've also expanded our geographic reach, allowing us to reach communities we may not otherwise connect with.
#
# +
trainers = pd.read_csv("../data_files/trainers.csv", keep_default_na=False, na_values=[''])
trainers.rename(columns={"country": "country2",}, inplace=True)
# Apply the function to get the full country name
trainers['country'] = trainers['country2'].apply(get_country_name)
trainers['country3'] = trainers['country2'].apply(get_country_code3)
trainers['awarded'] = pd.to_datetime(trainers['awarded'])
# trainers['country'] = trainers['country'].apply(get_country_name)
trainers['year'] = trainers['awarded'].dt.year
# + [markdown] variables={" report_month ": "October", " report_year ": "2018", "len(trainers)": "57"}
# ### Figure 13: New Trainers by Year
#
# As of {{ report_month }} {{ report_year }}, The Carpentries has {{len(trainers)}} Instructor Trainers on board.
# Source data can be found in Table 11 of the Appendix.
# +
fig = plt.figure(figsize=(12, 6)) # Create matplotlib figure
ax = fig.add_subplot(111) # Create matplotlib axes
width = .5 # Set width of bar
trainers_by_year = trainers.groupby(trainers['year']).awarded.count().to_frame()
trainers_by_year.rename(columns={'awarded':'count'}, inplace=True)
title = "New Instructor Trainers by Year" + "\n" + "Total: " + str(len(trainers))
trainers_by_year.plot(kind='bar', ax=ax, title=title, legend=False, grid=True,)
# Set axes labels and legend
ax.set_xlabel("Year")
ax.set_ylabel("New Trainers")
# Don't allow the axis to be on top of your data
ax.set_axisbelow(True)
ax.title.set_size(18)
ax.xaxis.label.set_size(18)
ax.yaxis.label.set_size(18)
plt.xticks(fontsize=14, rotation=0)
plt.yticks(fontsize=14, rotation=0)
trainers_by_year.rename(columns={'awarded':'count'}, inplace=True)
ax.grid(linestyle='-', linewidth='0.25', color='gray')
for i, label in enumerate(list(trainers_by_year.index)):
count = trainers_by_year.loc[label]['count']
ax.annotate(str(count), (i, count + 0.5), fontsize=14)
ax.set_ylim(0, max(trainers_by_year['count']) + 5)
plt.show()
# -
# ### Figure 14: Trainers by Country
#
# Source data can be found in Table 12 of the Appendix.
trainers_by_country = trainers.groupby(['country', 'country3']).awarded.count().to_frame()
trainers_by_country.fillna(0, inplace=True)
trainers_by_country = trainers_by_country.astype(int)
trainers_by_country.rename(columns={'awarded':'count'}, inplace=True)
trainers_by_country.reset_index(inplace=True)
world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres')).to_crs('+proj=robin') # world map
trainers_map = world.merge(trainers_by_country, left_on='iso_a3', right_on='country3')
# +
# Fill NAs with Zero so map can read
trainers_map.fillna(0, inplace=True)
# Drop the zero values so they are not in the legend or color scale
trainers_map = trainers_map[(trainers_map['count'] != 0)]
# Years as ints, not floats
trainers_map['count'] = trainers_map['count'].astype(int)
# Drop the zero values so they are not in the legend or color scale
# first_wkshp_map = first_wkshp_map[(first_wkshp_map.year != 0)]
# # Create map canvas
fig, ax = plt.subplots(figsize=(16,8))
# ax.axis('off')
bbox_robinson.plot(ax=ax, alpha=1, color='lightgray', edgecolor='dimgray')
# facecolor will not work if ax.axis is off
# ax.patch.set_facecolor('whitesmoke')
title = "Trainers by Country"
ax.set_title(title)
ax.get_figure().suptitle("")
ax.title.set_size(18)
# cmap = mpl.colors.ListedColormap(['red', 'green', 'blue', 'orange', 'pink', 'gray'])
cmap = LinearSegmentedColormap.from_list('name', ['lightblue', 'darkblue'])
# # # Plot basemap all in gray
world_robinson.plot(color='darkgrey', ax=ax, edgecolor="dimgray")
trainers_map.plot(ax=ax, column='count', categorical=True, cmap="Blues", legend=True,)
# trainers_map.plot(ax=ax, column='count', categorical=True, cmap=cmap, legend=True,)
# # Drop x & y axis ticks
plt.xticks([], [])
plt.yticks([], [])
# Make the axes invisible by making them the same color as the background
ax.spines['bottom'].set_color('white')
ax.spines['top'].set_color('white')
ax.spines['right'].set_color('white')
ax.spines['left'].set_color('white')
plt.show()
# -
# # Appendix
# ### Table 1: Workshops by Carpentry lesson program by Year
#
display(add_totals(workshops_by_carpentry_year))
# ### Table 2: Workshops by Country by Year
display(add_totals(workshops_by_country_year))
# ### Table 3: Attendance by Carpentry lesson program by Year
display(add_totals(learners_by_year))
# ### Table 4: Instructor Training Applications by Previous Experience Teaching
# +
apps_by_prev_experience = apps_by_prev_experience.append(apps_by_prev_experience.sum(axis=0).rename("column total"))
display(apps_by_prev_experience)
# -
# ### Table 5: Instructor Training Applications by Previous Training in Teaching
apps_by_prev_training = apps_by_prev_training.append(apps_by_prev_training.sum(axis=0).rename("column total"))
display(apps_by_prev_training)
# ### Table 6: Instructor Training Applications by Areas of Expertise
#
# Totals are not included as applicants can select more than one area of expertise.
display(apps_by_expertise_areas)
# ### Table 7: Instructor Training Applications by Occupation
apps_by_occupation = apps_by_occupation.append(apps_by_occupation.sum(axis=0).rename("column total"))
display(apps_by_occupation)
# ### Table 8: Instructor Training Events by Year
display(add_totals(instructor_training_by_year))
# ### Table 9: Average Badged by Instructor Training Event Type by Year
display(avg_badged_by_year)
# ### Table 10: Instructor Teaching Frequency
display(num_workshops_taught_binned)
# ### Table 11: New Instructor Trainers by Year
trainers_by_year = trainers_by_year.append(trainers_by_year.sum(axis=0).rename("column total"))
display(trainers_by_year)
# ### Table 12: Instructor Trainers by Country
# +
trainers_by_country = trainers_by_country[['country', 'count']].set_index('country')
trainers_by_country = trainers_by_country.append(trainers_by_country.sum(axis=0).rename("column total"))
display(trainers_by_country)
# -
programmatic-assessment/workshops/jupyter_notebooks/programmatic_report_20181031.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch
import torch.nn as nn
from importlib.machinery import SourceFileLoader
import matplotlib.pyplot as plt
from pylab import rcParams
rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
rcParams['image.interpolation'] = 'nearest'
rcParams['image.cmap'] = 'gray'
M = SourceFileLoader("models", "../main/models.py").load_module()
# +
num_neurons = 500
connectivity = .01
spectral_radius = 0.8
r = M.Reservoir(num_neurons,connectivity,spectral_radius,init_pattern = 'random',feedback=True,bias=False,fb_scale=.5)
r.run(steps=500)
# -
f = torch.sin
steps = 2000
dt = 0.05
ls = []
xs = []
for s in range(steps):
xs.append(torch.tensor([dt*s]))
ls.append(f(torch.tensor([dt*s])))
lr = 0.1
n = 500
conn = .01
rho = 0.8
r = M.Reservoir(n,conn,rho,init_pattern='random',feedback=True,bias=False,fb_scale=.5)
es = []
for s in range(len(ls)):
y = r.forward()
e = (y - ls[s]).pow(2)
es.append(e)
err = torch.sum(torch.cat(es)) / len(ls)
print(err)
err.backward()
grad = r.readout_w.grad
with torch.no_grad():
r.readout_w -= grad * lr
    grad.zero_()  # reset the accumulated gradient in place; don't overwrite the weights
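# The update above is plain gradient descent on the mean squared error of the readout: compute the gradient, step the weights against it, then reset the gradient. Stripped of the reservoir, the same loop looks like this (pure-Python sketch with a hand-derived gradient; all names and numbers are illustrative):

```python
# Fit a scalar readout w so that w * x tracks the targets.
inputs = [0.0, 1.0, 2.0, 3.0]
targets = [0.0, 2.0, 4.0, 6.0]  # the true readout weight is 2
w, lr = 0.0, 0.05
for _ in range(200):
    # d/dw mean((w*x - t)^2) = mean(2 * (w*x - t) * x)
    grad = sum(2 * (w * x - t) * x for x, t in zip(inputs, targets)) / len(inputs)
    w -= lr * grad  # gradient step; nothing to zero since grad is recomputed
```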
# +
outs = []
for s in range(100):
outs.append(r.forward())
plt.plot(outs)
plt.show()
# -
notebooks/esn/Untitled.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Roles Classifier Alternative: K-Nearest Neighbor
#
# Imports and downloading tokenizers from NLTK:
# +
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import nltk
from nltk.corpus import genesis
from nltk.corpus import wordnet as wn
from sklearn.model_selection import train_test_split
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
nltk.download('genesis')
nltk.download('wordnet')
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
genesis_ic = wn.ic(genesis, False, 0.0)
# -
# ## K-NN Classifier Definition
# + pycharm={"name": "#%%\n"}
class KnnClassifier():
    def __init__(self, k=1, distance_type='path'):
        self.k = k
        self.distance_type = distance_type

    # This function is used for training
    def fit(self, x_train, y_train):
        self.x_train = x_train
        self.y_train = y_train

    # This function runs the K(1) nearest neighbour algorithm and
    # returns the label with the closest match.
    def predict(self, x_test):
        self.x_test = x_test
        y_predict = []
        for i in range(len(x_test)):
            max_sim = 0
            max_index = 0
            for j in range(self.x_train.shape[0]):
                temp = self.document_similarity(x_test[i], self.x_train[j])
                if temp > max_sim:
                    max_sim = temp
                    max_index = j
            y_predict.append(self.y_train[max_index])
        return y_predict

    def convert_tag(self, tag):
        """Convert the tag given by nltk.pos_tag to the tag used by wordnet.synsets."""
        tag_dict = {'N': 'n', 'J': 'a', 'R': 'r', 'V': 'v'}
        try:
            return tag_dict[tag[0]]
        except KeyError:
            return None

    def doc_to_synsets(self, doc):
        """
        Return a list of synsets in the document.

        Tokenizes and tags the words in the document doc,
        then finds the first synset for each word/tag combination.
        If a synset is not found for a combination, it is skipped.

        Args:
            doc: string to be converted

        Returns:
            list of synsets
        """
        tokens = word_tokenize(doc + ' ')
        l = []
        tags = nltk.pos_tag([tokens[0] + ' ']) if len(tokens) == 1 else nltk.pos_tag(tokens)
        for token, tag in zip(tokens, tags):
            syntag = self.convert_tag(tag[1])
            syns = wn.synsets(token, syntag)
            if len(syns) > 0:
                l.append(syns[0])
        return l

    def similarity_score(self, s1, s2, distance_type='path'):
        """
        Calculate the normalized similarity score of s1 onto s2.

        For each synset in s1, find the synset in s2 with the largest
        similarity value. Sum all of the largest similarity values and
        normalize by dividing by the number of largest values found.

        Args:
            s1, s2: lists of synsets from doc_to_synsets

        Returns:
            normalized similarity score of s1 onto s2
        """
        s1_largest_scores = []
        for i, s1_synset in enumerate(s1, 0):
            max_score = 0
            for s2_synset in s2:
                if distance_type == 'path':
                    score = s1_synset.path_similarity(s2_synset, simulate_root=False)
                else:
                    score = s1_synset.wup_similarity(s2_synset)
                if score is not None and score > max_score:
                    max_score = score
            if max_score != 0:
                s1_largest_scores.append(max_score)
        mean_score = np.mean(s1_largest_scores)
        return mean_score

    def document_similarity(self, doc1, doc2):
        """Find the symmetrical similarity between doc1 and doc2."""
        synsets1 = self.doc_to_synsets(doc1)
        synsets2 = self.doc_to_synsets(doc2)
        # pass the distance type chosen at construction time so it is actually used
        return (self.similarity_score(synsets1, synsets2, self.distance_type)
                + self.similarity_score(synsets2, synsets1, self.distance_type)) / 2
# -
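The symmetry trick in `document_similarity` — averaging the score of doc1 onto doc2 with the score of doc2 onto doc1 — can be illustrated with a toy word-overlap score standing in for the WordNet path similarity. The scoring function here is a made-up simplification, not the class's real synset matching:

```python
def one_sided(tokens_a, tokens_b):
    # fraction of tokens in a that find a match in b
    # (stand-in for "largest synset similarity" with a 0/1 match score)
    matched = [t for t in tokens_a if t in tokens_b]
    return len(matched) / len(tokens_a)

def symmetric_similarity(doc1, doc2):
    t1, t2 = doc1.split(), doc2.split()
    # average the two directions, as document_similarity does
    return (one_sided(t1, t2) + one_sided(t2, t1)) / 2

print(symmetric_similarity("the cat sat", "sat the dog"))  # 2 matches each way -> 2/3
```

Averaging both directions makes the score symmetric even when the one-sided scores differ (e.g. when one document is much shorter than the other).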
# ## Testing Several Files
# + pycharm={"name": "#%%\n"}
file_size = [150, 200, 250, 300, 350, 400, 450, 500, 550, 600, 645]
accuracy = []
s = stopwords.words('english')
ps = nltk.wordnet.WordNetLemmatizer()

def accuracy_score(y_pred, y_true):
    matched = 0
    for j in range(len(y_pred)):
        if y_pred[j] == y_true[j]:
            matched = matched + 1
    return float(matched) / float(len(y_true))

for i in file_size:
    file_name = f'output/balanced_{i}.csv'
    roles = pd.read_csv(f'../{file_name}')
    mapping = {'Student': 0, 'Co-Facilitator': 1, 'Facilitator': 2}
    roles['Role'] = roles['Role'].apply(lambda x: mapping[x])
    for k in range(roles.shape[0]):
        review = roles.loc[k, 'Text']
        review = review.split()
        review = [ps.lemmatize(word) for word in review if word not in s]
        review = ' '.join(review)
        roles.loc[k, 'Text'] = review
    X = roles['Text']
    y = roles['Role']
    X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2)

    # Train the classifier
    classifier = KnnClassifier(k=1, distance_type='path')
    classifier.fit(X_train.values, y_train.values)

    test_corpus = []
    for x_value in X_valid.values:
        review = x_value.split()
        review = [ps.lemmatize(word) for word in review if word not in s]
        review = ' '.join(review)
        test_corpus.append(review)

    predictions = classifier.predict(test_corpus)
    accuracy_partial = accuracy_score(list(map(int, predictions)), y_valid.values.tolist())
    accuracy.append(accuracy_partial)
    print(f'Accuracy for file_size {i}: {accuracy_partial}')
# -
# ## Graphical Performance Analysis
#
# In the following plot we can see how the model behaves when it is trained with different amounts of data.
# + pycharm={"name": "#%%\n"}
# %matplotlib inline
import matplotlib.pyplot as plt
plt.plot(file_size, accuracy)
plt.title('# of Rows vs. Accuracy')
plt.suptitle('K-Nearest Neighbor Roles Classifier')
plt.xlabel('# of Rows')
plt.ylabel('Accuracy')
plt.show()
# -
|
alternatives/.ipynb_checkpoints/knn_classification-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lab 2: Building a Machine Learning Pipeline
#
# In this final lab you are expected to put the knowledge acquired in the course into practice, working with a classification dataset.
#
# The goal is an introduction to building a workflow for machine learning tasks: selecting a model, tuning hyperparameters, and evaluating.
#
# The dataset is in `./data/loan_data.csv`. If you open the file you will see that the first lines (the ones starting with `#`) describe the dataset and its attributes (including the label or class attribute).
#
# You are expected to use the tools covered in the course, especially those provided by `scikit-learn`.
# +
import numpy as np
import pandas as pd
import seaborn as sns
# TODO: add any libraries that are needed
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
# -
# ## Data Loading and Train/Evaluation Split
#
# The following cell loads the data (using pandas). This is what you will work with for the rest of the lab.
# +
dataset = pd.read_csv("./data/loan_data.csv", comment="#")
# split into instances and labels
X, y = dataset.iloc[:, 1:], dataset.TARGET
scaler = MinMaxScaler(feature_range=(0, 1))
X = scaler.fit_transform(X)
# split into training and evaluation sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# -
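`MinMaxScaler(feature_range=(0, 1))` maps each column to [0, 1] via x' = (x − min) / (max − min), computed column-wise on the data seen in `fit`. A minimal sketch of the same transform on one column (the values are made up):

```python
def min_max_scale(column):
    # rescale a single column to [0, 1]: x' = (x - min) / (max - min)
    lo, hi = min(column), max(column)
    return [(x - lo) / (hi - lo) for x in column]

print(min_max_scale([10.0, 20.0, 30.0]))  # [0.0, 0.5, 1.0]
```

Note that `fit_transform` learns min/max from the full matrix here, before the train/test split; fitting only on the training portion avoids leaking evaluation statistics.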
#
# Documentación:
#
# - https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html
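Conceptually, `train_test_split` with `test_size=0.2` shuffles the sample indices (controlled by `random_state`) and slices off 20% for evaluation. A sketch of the idea with plain indices:

```python
import random

def simple_split(n_samples, test_size=0.2, seed=0):
    # shuffle indices deterministically (like random_state), then slice
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    n_test = int(n_samples * test_size)
    return idx[n_test:], idx[:n_test]  # train indices, test indices

train_idx, test_idx = simple_split(10)
print(len(train_idx), len(test_idx))  # 8 2
```

The two index lists are disjoint and together cover every sample, which is the property the split relies on.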
# ## Exercise 1: Describing the Data and the Task
#
# Answer the following questions:
#
# 1. What is the dataset about?
# 2. What is the target variable to predict, and what does it mean?
# 3. What information (attributes) is available for making the prediction?
# 4. Which attributes do you imagine are the most decisive for the prediction?
#
# **No code is needed to answer these questions.**
# 1. The Home Equity dataset (HMEQ) contains baseline and performance information for 5,960 recent home equity loans.
#
# 2. The target is a binary variable indicating whether an applicant eventually defaulted on the loan or repaid the debt.
#
# 3. The attributes available for prediction are:
#     * LOAN: amount of money requested in the loan
#     * MORTDUE: amount due on the existing mortgage
#     * VALUE: value of the property
#     * YOJ: years at present job
#     * DEROG: number of major derogatory reports
#     * DELINQ: number of delinquent credit lines
#     * CLAGE: age in months of the oldest credit trade line
#     * NINQ: number of recent credit inquiries
#     * CLNO: number of credit lines
#     * DEBTINC: debt-to-income ratio
# 4. We expect the most important variables to be the debt-to-income ratio, the number of delinquency/derogatory reports, and the number of recent credit inquiries.
dataset.describe()
# ## Exercise 2: Prediction with Linear Models
#
# In this exercise you will train linear classification models to predict the target variable.
#
# To do so, use scikit-learn's SGDClassifier class.
#
# Documentation:
# - https://scikit-learn.org/stable/modules/sgd.html
# - https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html
#
# ### Exercise 2.1: SGDClassifier with default hyperparameters
#
# Train and evaluate an SGDClassifier using scikit-learn's default values for all parameters. Only **fix the random seed** to make the experiment repeatable.
#
# Evaluate on the **training** set and on the **evaluation** set, reporting:
# - Accuracy
# - Precision
# - Recall
# - F1
# - Confusion matrix
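For a binary problem, all of the requested metrics follow directly from the confusion-matrix counts TN, FP, FN, TP. A small reference implementation (the counts below are made up for illustration):

```python
def binary_metrics(tn, fp, fn, tp):
    # accuracy: fraction of all predictions that are correct
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    # precision: of everything predicted positive, how much really is
    precision = tp / (tp + fp)
    # recall: of everything really positive, how much we caught
    recall = tp / (tp + fn)
    # F1: harmonic mean of precision and recall
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = binary_metrics(tn=50, fp=2, fn=4, tp=8)
print(round(prec, 3), round(rec, 3), round(f1, 3))  # 0.8 0.667 0.727
```

With a skewed class like loan default, precision and recall on the minority class are more informative than accuracy alone.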
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import mean_squared_error, r2_score
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn.metrics import confusion_matrix
from utils import plot_confusion_matrix
# +
model = SGDClassifier(random_state=0, shuffle=True)
model.fit(X_train,y_train)
# -
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)
print('Train Accuracy: ', np.sum(y_train_pred == y_train)/len(y_train))
print('Test Accuracy: ', np.sum(y_test_pred == y_test)/len(y_test))
# Evaluation: with 0/1 labels, the mean squared error equals the misclassification rate
train_error_multiple = mean_squared_error(y_train, y_train_pred)
test_error_multiple = mean_squared_error(y_test, y_test_pred)
print(f'Train error:{train_error_multiple:0.3}')
print(f'Test error:{test_error_multiple:0.3}')
precision_score(y_test, y_test_pred)
recall_score(y_test, y_test_pred)
f1_score(y_test, y_test_pred)
cm = confusion_matrix(y_test, y_test_pred)
print(cm)
plot_confusion_matrix(cm, [0,1])
# ### Exercise 2.2: Hyperparameter Tuning
#
# Choose values for the main hyperparameters of SGDClassifier. At a minimum, try different loss functions, learning rates, and regularization rates.
#
# Use grid search and 5-fold cross-validation over the training set to explore many possible combinations of values.
#
# Report the mean accuracy and variance for every configuration.
#
# For the best configuration found, evaluate on the **training** set and on the **evaluation** set, reporting:
# - Accuracy
# - Precision
# - Recall
# - F1
# - Confusion matrix
#
# Documentation:
# - https://scikit-learn.org/stable/modules/grid_search.html
# - https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html
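Grid search is simply an exhaustive enumeration of the cartesian product of the parameter lists, scoring each combination (by cross-validation) and keeping the best. The enumeration itself can be sketched as (the grid values here are illustrative):

```python
from itertools import product

# illustrative grid; any dict of parameter-name -> list of values works
grid = {'loss': ['hinge', 'log'], 'alpha': [1e-4, 1e-3, 1e-2]}

keys = list(grid)
# one dict per point in the cartesian product of the value lists
combos = [dict(zip(keys, values)) for values in product(*(grid[k] for k in keys))]

print(len(combos))  # 2 * 3 = 6 configurations
print(combos[0])    # {'loss': 'hinge', 'alpha': 0.0001}
```

This is why grids grow multiplicatively: 100 `l1_ratio` values × 3 penalties × 3 losses, as in the cell below, is already 900 configurations, each cross-validated 5 times.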
# +
from sklearn.model_selection import KFold
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import GridSearchCV
kf = KFold(n_splits=5, shuffle=True)
for train_index, val_index in kf.split(X):
    X_train, X_test = X[train_index], X[val_index]
    y_train, y_test = y[train_index], y[val_index]
    # print(f"TRAIN: {train_index} VAL: {val_index} {y_test}")
# note: after this loop, X_train/X_test hold only the LAST fold's split
# -
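Each of the 5 folds serves once as the validation block while the remaining four form the training block. The index bookkeeping can be sketched in plain Python (contiguous blocks, no shuffling, unlike the cell above):

```python
def kfold_indices(n_samples, n_splits):
    # yield (train, val) index lists; each sample lands in exactly one val block
    fold_size = n_samples // n_splits
    for f in range(n_splits):
        val = list(range(f * fold_size, (f + 1) * fold_size))
        train = [i for i in range(n_samples) if i not in val]
        yield train, val

folds = list(kfold_indices(10, 5))
print(len(folds))   # 5 folds
print(folds[0][1])  # first validation block: [0, 1]
```

For every fold, train and validation indices are disjoint and together cover all samples, which is what lets the averaged score use every point exactly once as validation.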
grid = {
    'l1_ratio': np.arange(0, 1, 0.01),
    'penalty': ['l2', 'l1', 'elasticnet'],
    'loss': ['hinge', 'log', 'squared_loss'],
}
# +
model = SGDClassifier()
grid_model = GridSearchCV(estimator=model,
                          param_grid=grid,
                          scoring='accuracy',
                          cv=5)
grid_model.fit(X_train, y_train)
# -
results = grid_model.cv_results_
print(results.keys())
plt.plot(results['param_l1_ratio'].data,results['mean_test_score'])
for j in grid['loss']:
    x = []
    y = []
    for i in range(len(results['param_loss'].data)):
        if results['param_loss'].data[i] == j:
            x.append(results['param_l1_ratio'].data[i])
            y.append(results['mean_test_score'][i])
    plt.plot(x, y, label=j)
# plt.show()
plt.legend()
print(grid_model.best_estimator_)
print(grid_model.best_estimator_.score(X_train, y_train))
print(grid_model.best_estimator_.score(X_test, y_test))
y_train_pred = grid_model.best_estimator_.predict(X_train)
y_test_pred = grid_model.best_estimator_.predict(X_test)
precision_score(y_test, y_test_pred)
recall_score(y_test, y_test_pred)
f1_score(y_test, y_test_pred)
cm = confusion_matrix(y_test, y_test_pred)
print(cm)
plot_confusion_matrix(cm, [0,1])
# ## Exercise 3: Decision Trees
#
# In this exercise you will train decision trees to predict the target variable.
#
# To do so, use scikit-learn's DecisionTreeClassifier class.
#
# Documentation:
# - https://scikit-learn.org/stable/modules/tree.html
# - https://scikit-learn.org/stable/modules/tree.html#tips-on-practical-use
# - https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html
# - https://scikit-learn.org/stable/auto_examples/tree/plot_unveil_tree_structure.html
# ### Exercise 3.1: DecisionTreeClassifier with default hyperparameters
#
# Train and evaluate a DecisionTreeClassifier using scikit-learn's default values for all parameters. Only **fix the random seed** to make the experiment repeatable.
#
# Evaluate on the **training** set and on the **evaluation** set, reporting:
# - Accuracy
# - Precision
# - Recall
# - F1
# - Confusion matrix
#
from sklearn.tree import DecisionTreeClassifier
# +
model = DecisionTreeClassifier(random_state=0)
model.fit(X_train,y_train)
# -
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)
print('Train Accuracy: ', np.sum(y_train_pred == y_train)/len(y_train))
print('Test Accuracy: ', np.sum(y_test_pred == y_test)/len(y_test))
# Evaluation: with 0/1 labels, the mean squared error equals the misclassification rate
train_error_multiple = mean_squared_error(y_train, y_train_pred)
test_error_multiple = mean_squared_error(y_test, y_test_pred)
print(f'Train error:{train_error_multiple:0.3}')
print(f'Test error:{test_error_multiple:0.3}')
precision_score(y_test, y_test_pred)
recall_score(y_test, y_test_pred)
f1_score(y_test, y_test_pred)
cm = confusion_matrix(y_test, y_test_pred)
print(cm)
plot_confusion_matrix(cm, [0,1])
# ### Exercise 3.2: Hyperparameter Tuning
#
# Choose values for the main hyperparameters of DecisionTreeClassifier. At a minimum, try different split criteria (criterion), maximum tree depths (max_depth), and minimum numbers of samples per leaf (min_samples_leaf).
#
# Use grid search and 5-fold cross-validation over the training set to explore many possible combinations of values.
#
# Report the mean accuracy and variance for every configuration.
#
# For the best configuration found, evaluate on the **training** set and on the **evaluation** set, reporting:
# - Accuracy
# - Precision
# - Recall
# - F1
# - Confusion matrix
#
#
# Documentation:
# - https://scikit-learn.org/stable/modules/grid_search.html
# - https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html
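The two `criterion` options measure how mixed the labels in a node are: Gini impurity is 1 − Σ p_k², entropy is −Σ p_k log₂ p_k. A minimal sketch of both (the label lists are made up):

```python
from math import log2

def gini(labels):
    # Gini impurity: 1 - sum_k p_k^2; 0 for a pure node
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def entropy(labels):
    # Shannon entropy in bits: -sum_k p_k * log2(p_k); 0 for a pure node
    n = len(labels)
    return -sum((labels.count(c) / n) * log2(labels.count(c) / n) for c in set(labels))

print(gini([0, 0, 1, 1]), entropy([0, 0, 1, 1]))  # 0.5 1.0
```

A split is chosen to maximize the drop in impurity from parent to (weighted) children; both criteria usually pick similar splits, which is why the grid below treats them as a tunable choice.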
from sklearn.tree import DecisionTreeClassifier
grid = {
    'criterion': ['gini', 'entropy'],
    'max_depth': [1, 2],
}
# +
model = DecisionTreeClassifier(random_state=0)
grid_model = GridSearchCV(estimator=model,
                          param_grid=grid,
                          scoring='accuracy',
                          cv=5)
grid_model.fit(X_train, y_train)
# -
results = grid_model.cv_results_
print(results.keys())
for j in grid['max_depth']:
    x = []
    y = []
    for i in range(len(results['param_max_depth'].data)):
        if results['param_max_depth'].data[i] == j:
            x.append(results['param_criterion'].data[i])
            y.append(results['mean_test_score'][i])
    plt.plot(x, y, 'o', label=j)
# plt.show()
plt.legend()
print(grid_model.best_estimator_)
print(grid_model.best_estimator_.score(X_train, y_train))
print(grid_model.best_estimator_.score(X_test, y_test))
y_train_pred = grid_model.best_estimator_.predict(X_train)
y_test_pred = grid_model.best_estimator_.predict(X_test)
precision_score(y_test, y_test_pred)
recall_score(y_test, y_test_pred)
f1_score(y_test, y_test_pred)
cm = confusion_matrix(y_test, y_test_pred)
print(cm)
plot_confusion_matrix(cm, [0,1])
|
Lab 2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python36
# ---
# Copyright (c) Microsoft Corporation. All rights reserved.
#
# Licensed under the MIT License.
# 
# # Train and explain models remotely via Azure Machine Learning Compute and deploy model and scoring explainer
#
#
# _**This notebook illustrates how to use the Azure Machine Learning Interpretability SDK to train and explain a classification model remotely on an Azure Machine Learning Compute target (AmlCompute), and use Azure Container Instances (ACI) to deploy the model and its corresponding scoring explainer as a web service.**_
#
# Problem: IBM employee attrition classification with scikit-learn (train a model and run an explainer remotely via AMLCompute, and deploy model and its corresponding explainer.)
#
# ---
#
# ## Table of Contents
#
# 1. [Introduction](#Introduction)
# 1. [Setup](#Setup)
# 1. [Run model explainer locally at training time](#Explain)
# 1. Apply feature transformations
# 1. Train a binary classification model
# 1. Explain the model on raw features
# 1. Generate global explanations
# 1. Generate local explanations
# 1. [Visualize results](#Visualize)
# 1. [Deploy model and scoring explainer](#Deploy)
# 1. [Next steps](#Next)
# ## Introduction
#
# This notebook showcases how to train and explain a classification model remotely via Azure Machine Learning Compute (AmlCompute), download the computed explanations for local visualization and inspection, and deploy the final model and its corresponding explainer to Azure Container Instances (ACI).
# It demonstrates the API calls needed to submit a run that trains and explains a model on AmlCompute, download the computed explanations, visualize the global and local explanations in a dashboard that provides an interactive way of discovering patterns in model predictions, and use Azure Machine Learning MLOps capabilities to deploy the model together with its explainer.
#
# We will showcase one of the tabular data explainers: TabularExplainer (SHAP) and follow these steps:
# 1. Develop a machine learning script in Python which involves the training script and the explanation script.
# 2. Create and configure a compute target.
# 3. Submit the scripts to the configured compute target to run in that environment. During training, the scripts can read from or write to datastore. And the records of execution (e.g., model, metrics, prediction explanations) are saved as runs in the workspace and grouped under experiments.
# 4. Query the experiment for logged metrics and explanations from the current and past runs. Use the interpretability toolkit’s visualization dashboard to visualize predictions and their explanation. If the metrics and explanations don't indicate a desired outcome, loop back to step 1 and iterate on your scripts.
# 5. After a satisfactory run is found, create a scoring explainer and register the persisted model and its corresponding explainer in the model registry.
# 6. Develop a scoring script.
# 7. Create an image and register it in the image registry.
# 8. Deploy the image as a web service in Azure.
#
# |  |
# |:--:|
# ## Setup
# Make sure you go through the [configuration notebook](../../../../configuration.ipynb) first if you haven't.
# +
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
# -
# ## Initialize a Workspace
#
# Initialize a workspace object from persisted configuration
# + tags=["create workspace"]
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n')
# -
# ## Explain
#
# Create An Experiment: **Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments.
from azureml.core import Experiment
experiment_name = 'explainer-remote-run-on-amlcompute'
experiment = Experiment(workspace=ws, name=experiment_name)
# ## Introduction to AmlCompute
#
# Azure Machine Learning Compute is managed compute infrastructure that allows the user to easily create single to multi-node compute of the appropriate VM Family. It is created **within your workspace region** and is a resource that can be used by other users in your workspace. It autoscales by default to the max_nodes, when a job is submitted, and executes in a containerized environment packaging the dependencies as specified by the user.
#
# Since it is managed compute, job scheduling and cluster management are handled internally by Azure Machine Learning service.
#
# For more information on Azure Machine Learning Compute, please read [this article](https://docs.microsoft.com/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute)
#
# If you are an existing BatchAI customer who is migrating to Azure Machine Learning, please read [this article](https://aka.ms/batchai-retirement)
#
# **Note**: As with other Azure services, there are limits on certain resources (for eg. AmlCompute quota) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.
#
#
# The training script `run_explainer.py` is already created for you. Let's have a look.
# ## Submit an AmlCompute run in a few different ways
#
# First let's check which VM families are available in your region. Azure is a regional service and some specialized SKUs (especially GPUs) are only available in certain regions. Since AmlCompute is created in the region of your workspace, we will use the `supported_vmsizes()` function to see if the VM family we want to use ('STANDARD_D2_V2') is supported.
#
# You can also pass a different region to check availability and then re-create your workspace in that region through the [configuration notebook](../../../configuration.ipynb)
# +
from azureml.core.compute import ComputeTarget, AmlCompute
AmlCompute.supported_vmsizes(workspace=ws)
# AmlCompute.supported_vmsizes(workspace=ws, location='southcentralus')
# -
# ### Create project directory
#
# Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script, and any additional files your training script depends on
# +
import os
import shutil
project_folder = './explainer-remote-run-on-amlcompute'
os.makedirs(project_folder, exist_ok=True)
shutil.copy('train_explain.py', project_folder)
# -
# ### Provision as a run based compute target
#
# You can provision AmlCompute as a compute target at run-time. In this case, the compute is auto-created for your run, scales up to max_nodes that you specify, and then **deleted automatically** after the run completes.
# +
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.runconfig import DEFAULT_CPU_IMAGE
# create a new runconfig object
run_config = RunConfiguration()
# signal that you want to use AmlCompute to execute script.
run_config.target = "amlcompute"
# AmlCompute will be created in the same region as workspace
# Set vm size for AmlCompute
run_config.amlcompute.vm_size = 'STANDARD_D2_V2'
# enable Docker
run_config.environment.docker.enabled = True
# set Docker base image to the default CPU-based image
run_config.environment.docker.base_image = DEFAULT_CPU_IMAGE
# use conda_dependencies.yml to create a conda environment in the Docker image for execution
run_config.environment.python.user_managed_dependencies = False
# auto-prepare the Docker image when used for execution (if it is not already prepared)
run_config.auto_prepare_environment = True
azureml_pip_packages = [
'azureml-defaults', 'azureml-contrib-interpret', 'azureml-core', 'azureml-telemetry',
'azureml-interpret', 'azureml-dataprep'
]
# specify CondaDependencies obj
run_config.environment.python.conda_dependencies = CondaDependencies.create(
    conda_packages=['scikit-learn'],
    pip_packages=['sklearn_pandas', 'pyyaml'] + azureml_pip_packages,
    pin_sdk_version=False)
# Now submit a run on AmlCompute
from azureml.core.script_run_config import ScriptRunConfig
script_run_config = ScriptRunConfig(source_directory=project_folder,
                                    script='train_explain.py',
                                    run_config=run_config)
run = experiment.submit(script_run_config)
# Show run details
run
# -
# Note: if you need to cancel a run, you can follow [these instructions](https://aka.ms/aml-docs-cancel-run).
# %%time
# Shows output of the run on stdout.
run.wait_for_completion(show_output=True)
# +
# delete() deprovisions and removes the AmlCompute target. Useful if you want to re-use
# the compute name 'cpucluster' in this case but with a different VM family, for instance.
# cpu_cluster.delete()
# -
# ## Download Model Explanation, Model, and Data
# retrieve model for visualization and deployment
from azureml.core.model import Model
from sklearn.externals import joblib
original_model = Model(ws, 'amlcompute_deploy_model')
model_path = original_model.download(exist_ok=True)
original_svm_model = joblib.load(model_path)
# +
# retrieve global explanation for visualization
from azureml.contrib.interpret.explanation.explanation_client import ExplanationClient
# get model explanation data
client = ExplanationClient.from_run(run)
global_explanation = client.download_model_explanation()
# -
# retrieve x_test for visualization
from sklearn.externals import joblib
x_test_path = './x_test.pkl'
run.download_file('x_test_ibm.pkl', output_file_path=x_test_path)
x_test = joblib.load(x_test_path)
# ## Visualize
# Visualize the explanations
from interpret_community.widget import ExplanationDashboard
ExplanationDashboard(global_explanation, original_svm_model, datasetX=x_test)
# ## Deploy
# Deploy Model and ScoringExplainer
# +
from azureml.core.conda_dependencies import CondaDependencies
# WARNING: to install this, g++ needs to be available on the Docker image and is not by default (look at the next cell)
azureml_pip_packages = [
'azureml-defaults', 'azureml-contrib-interpret', 'azureml-core', 'azureml-telemetry',
'azureml-interpret'
]
# specify CondaDependencies obj
myenv = CondaDependencies.create(conda_packages=['scikit-learn', 'pandas'],
                                 pip_packages=['sklearn-pandas', 'pyyaml'] + azureml_pip_packages,
                                 pin_sdk_version=False)
with open("myenv.yml", "w") as f:
    f.write(myenv.serialize_to_string())

with open("myenv.yml", "r") as f:
    print(f.read())
# -
# %%writefile dockerfile
RUN apt-get update && apt-get install -y g++
# retrieve scoring explainer for deployment
scoring_explainer_model = Model(ws, 'IBM_attrition_explainer')
# +
from azureml.core.webservice import Webservice
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.model import Model
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
                                               memory_gb=1,
                                               tags={"data": "IBM_Attrition",
                                                     "method": "local_explanation"},
                                               description='Get local explanations for IBM Employee Attrition data')

inference_config = InferenceConfig(runtime="python",
                                   entry_script="score_remote_explain.py",
                                   conda_file="myenv.yml",
                                   extra_docker_file_steps="dockerfile")
# Use configs and models generated above
service = Model.deploy(ws, 'model-scoring-service', [scoring_explainer_model, original_model], inference_config, aciconfig)
service.wait_for_deployment(show_output=True)
# +
import requests
# create data to test service with
examples = x_test[:4]
input_data = examples.to_json()
headers = {'Content-Type':'application/json'}
# send request to service
resp = requests.post(service.scoring_uri, input_data, headers=headers)
print("POST to url", service.scoring_uri)
# can convert back to Python objects from the JSON string if desired
print("prediction:", resp.text)
# -
service.delete()
# ## Next
# Learn about other use cases of the explain package on a:
# 1. [Training time: regression problem](../../tabular-data/explain-binary-classification-local.ipynb)
# 1. [Training time: binary classification problem](../../tabular-data/explain-binary-classification-local.ipynb)
# 1. [Training time: multiclass classification problem](../../tabular-data/explain-multiclass-classification-local.ipynb)
# 1. Explain models with engineered features:
# 1. [Simple feature transformations](../../tabular-data/simple-feature-transformations-explain-local.ipynb)
# 1. [Advanced feature transformations](../../tabular-data/advanced-feature-transformations-explain-local.ipynb)
# 1. [Save model explanations via Azure Machine Learning Run History](../run-history/save-retrieve-explanations-run-history.ipynb)
# 1. [Run explainers remotely on Azure Machine Learning Compute (AMLCompute)](../remote-explanation/explain-model-on-amlcompute.ipynb)
# 1. [Inferencing time: deploy a locally-trained model and explainer](./train-explain-model-locally-and-deploy.ipynb)
#
|
how-to-use-azureml/explain-model/azure-integration/scoring-time/train-explain-model-on-amlcompute-and-deploy.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Fixing `air_pollution_score` Data Type
# - 2008: convert string to float
# - 2018: convert int to float
#
# Load datasets `data_08_v3.csv` and `data_18_v3.csv`. You should've created these data files in the previous section: *Fixing Data Types Pt 1*.
# +
# load datasets
import pandas as pd
df_08 = pd.read_csv('data_08_v3.csv')
df_18 = pd.read_csv('data_18_v3.csv')
# -
# try using pandas' to_numeric or astype function to convert the
# 2008 air_pollution_score column to float -- this won't work
pd.to_numeric(df_08['air_pollution_score'], errors='ignore', downcast='float')
# # Figuring out the issue
# Looks like this isn't going to be as simple as converting the datatype. According to the error above, the air pollution score value in one of the rows is "6/4" - let's check it out.
df_08[df_08.air_pollution_score == '6/4']
# # It's not just the air pollution score!
# The mpg columns and greenhouse gas scores also seem to have the same problem - maybe that's why these were all saved as strings! According to [this link](http://www.fueleconomy.gov/feg/findacarhelp.shtml#airPollutionScore), which I found from the PDF documentation:
#
# "If a vehicle can operate on more than one type of fuel, an estimate is provided for each fuel type."
#
# Ohh... so all vehicles with more than one fuel type, or hybrids, like the one above (it uses ethanol AND gas) will have a string that holds two values - one for each. This is a little tricky, so I'm going to show you how to do it with the 2008 dataset, and then you'll try it with the 2018 dataset.
# First, let's get all the hybrids in 2008
hb_08 = df_08[df_08['fuel'].str.contains('/')]
hb_08
# Looks like this dataset only has one! The 2018 has MANY more - but don't worry - the steps I'm taking here will work for that as well!
# hybrids in 2018
hb_18 = df_18[df_18['fuel'].str.contains('/')]
hb_18
# We're going to take each hybrid row and split them into two new rows - one with values for the first fuel type (values before the "/"), and the other with values for the second fuel type (values after the "/"). Let's separate them with two dataframes!
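The row-splitting idea — one record holding "6/4"-style values becomes two records, one per fuel type — in miniature with plain dicts. Column names follow the dataset; the values are invented:

```python
# one hybrid record with "/"-separated values for its two fuel types
hybrid = {'fuel': 'ethanol/gas', 'air_pollution_score': '6/4', 'city_mpg': '13/18'}

first = {k: v.split('/')[0] for k, v in hybrid.items()}   # values before the "/"
second = {k: v.split('/')[1] for k, v in hybrid.items()}  # values after the "/"

print(first)   # {'fuel': 'ethanol', 'air_pollution_score': '6', 'city_mpg': '13'}
print(second)  # {'fuel': 'gas', 'air_pollution_score': '4', 'city_mpg': '18'}
```

The pandas version below does the same thing column-wise on two copies of the hybrid rows.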
# +
# create two copies of the 2008 hybrids dataframe
df1 = hb_08.copy() # data on first fuel type of each hybrid vehicle
df2 = hb_08.copy() # data on second fuel type of each hybrid vehicle
# Each one should look like this
df1
# -
# For this next part, we're going use pandas' apply function. See the docs [here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html).
# +
# columns to split by "/"
split_columns = ['fuel', 'air_pollution_score', 'city_mpg', 'hwy_mpg', 'cmb_mpg', 'greenhouse_gas_score']
# apply split function to each column of each dataframe copy
for c in split_columns:
    df1[c] = df1[c].apply(lambda x: x.split("/")[0])
    df2[c] = df2[c].apply(lambda x: x.split("/")[1])
# -
# this dataframe holds info for the FIRST fuel type of the hybrid
# aka the values before the "/"s
df1
# this dataframe holds info for the SECOND fuel type of the hybrid
# aka the values after the "/"s
df2
# +
# combine dataframes to add to the original dataframe
new_rows = df1.append(df2)
# now we have separate rows for each fuel type of each vehicle!
new_rows
# +
# drop the original hybrid rows
df_08.drop(hb_08.index, inplace=True)
# add in our newly separated rows
df_08 = df_08.append(new_rows, ignore_index=True)
# -
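# Note: `DataFrame.append` was removed in pandas 2.0, so on newer pandas the row-stacking above is done with `pd.concat` instead. A minimal equivalent sketch (the toy frames below are made up, standing in for the real `df1`/`df2`):

```python
import pandas as pd

# toy stand-ins for the per-fuel-type frames built above
df1 = pd.DataFrame({'fuel': ['ethanol'], 'cmb_mpg': ['13']})
df2 = pd.DataFrame({'fuel': ['gas'], 'cmb_mpg': ['21']})

# pd.concat stacks the rows just like df1.append(df2, ignore_index=True)
new_rows = pd.concat([df1, df2], ignore_index=True)
print(new_rows)
```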
# check that all the original hybrid rows with "/"s are gone
df_08[df_08['fuel'].str.contains('/')]
df_08.shape
# # Repeat this process for the 2018 dataset
# create two copies of the 2018 hybrids dataframe, hb_18
df1 = hb_18.copy()
df2 = hb_18.copy()
# ### Split values for `fuel`, `city_mpg`, `hwy_mpg`, `cmb_mpg`
# You don't need to split for `air_pollution_score` or `greenhouse_gas_score` here because these columns are already ints in the 2018 dataset.
# +
# list of columns to split
split_columns = ['fuel', 'city_mpg', 'hwy_mpg', 'cmb_mpg']
# apply split function to each column of each dataframe copy
for c in split_columns:
df1[c] = df1[c].apply(lambda x: x.split("/")[0])
    df2[c] = df2[c].apply(lambda x: x.split("/")[1])
# +
# append the two dataframes
new_rows = df1.append(df2)
# drop each hybrid row from the original 2018 dataframe
# do this by using pandas' drop function with hb_18's index
df_18.drop(hb_18.index, inplace=True)
# append new_rows to df_18
df_18 = df_18.append(new_rows, ignore_index=True)
# -
# check that they're gone
df_18[df_18['fuel'].str.contains('/')]
df_18.shape
# ### Now we can comfortably continue the changes needed for `air_pollution_score`! Here they are again:
# - 2008: convert string to float
# - 2018: convert int to float
# convert string to float for 2008 air pollution column
df_08['air_pollution_score'] = df_08['air_pollution_score'].astype(float)
# convert int to float for 2018 air pollution column
df_18['air_pollution_score'] = df_18['air_pollution_score'].astype(float)
df_08.to_csv('data_08_v4.csv', index=False)
df_18.to_csv('data_18_v4.csv', index=False)
df_18.shape
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] hide=true
# # Classification
# $$
# \renewcommand{\like}{{\cal L}}
# \renewcommand{\loglike}{{\ell}}
# \renewcommand{\err}{{\cal E}}
# \renewcommand{\dat}{{\cal D}}
# \renewcommand{\hyp}{{\cal H}}
# \renewcommand{\Ex}[2]{E_{#1}[#2]}
# \renewcommand{\x}{{\mathbf x}}
# \renewcommand{\v}[1]{{\mathbf #1}}
# $$
# -
# **Note:** We've adapted this Mini Project from [Lab 5 in the CS109](https://github.com/cs109/2015lab5) course. Please feel free to check out the original lab, both for more exercises, as well as solutions.
# We turn our attention to **classification**. Classification tries to predict which of a small set of classes an observation belongs to. Mathematically, the aim is to find $y$, a **label**, based on knowing a feature vector $\x$. For instance, consider predicting gender from a person's face, something we do fairly well as humans. To have a machine do this well, we would typically feed it a set of images of people labelled "male" or "female" (the training set), and have it learn the gender of the person in each image from the labels and the *features* used to determine gender. Then, given a new photo, the trained algorithm returns the gender of the person in the photo.
#
# There are different ways of making classifications. One idea is shown schematically in the image below, where we find a line that divides "things" of two different types in a 2-dimensional feature space. The classification shown in the figure below is an example of a maximum-margin classifier, where we construct a decision boundary that is as far as possible from both classes of points. The fact that a line can be drawn to separate the two classes makes the problem *linearly separable*. Support Vector Machines (SVM) are an example of a maximum-margin classifier.
#
# 
#
#
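# The maximum-margin idea can be sketched with scikit-learn's linear SVM on a toy, linearly separable dataset (an illustrative aside, not part of the original lab):

```python
import numpy as np
from sklearn.svm import SVC

# two well-separated 2-D clusters, one per class
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 2) + [2, 2], rng.randn(20, 2) - [2, 2]])
y = np.array([1] * 20 + [0] * 20)

# a linear SVM fits the maximum-margin line w1*x1 + w2*x2 + w0 = 0
svm = SVC(kernel='linear').fit(X, y)
print(svm.coef_, svm.intercept_)
print(svm.score(X, y))  # near-perfect on well-separated clusters
```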
# + hide=true
# %matplotlib inline
import numpy as np
import scipy as sp
import matplotlib as mpl
import matplotlib.cm as cm
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
import pandas as pd
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
import seaborn as sns
sns.set_style("whitegrid")
sns.set_context("poster")
import sklearn.model_selection
c0=sns.color_palette()[0]
c1=sns.color_palette()[1]
c2=sns.color_palette()[2]
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
cm = plt.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
def points_plot(ax, Xtr, Xte, ytr, yte, clf, mesh=True, colorscale=cmap_light,
cdiscrete=cmap_bold, alpha=0.1, psize=10, zfunc=False, predicted=False):
h = .02
X=np.concatenate((Xtr, Xte))
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 100),
np.linspace(y_min, y_max, 100))
#plt.figure(figsize=(10,6))
if zfunc:
p0 = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 0]
p1 = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
Z=zfunc(p0, p1)
else:
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
ZZ = Z.reshape(xx.shape)
if mesh:
plt.pcolormesh(xx, yy, ZZ, cmap=cmap_light, alpha=alpha, axes=ax)
if predicted:
showtr = clf.predict(Xtr)
showte = clf.predict(Xte)
else:
showtr = ytr
showte = yte
ax.scatter(Xtr[:, 0], Xtr[:, 1], c=showtr-1, cmap=cmap_bold,
s=psize, alpha=alpha,edgecolor="k")
# and testing points
ax.scatter(Xte[:, 0], Xte[:, 1], c=showte-1, cmap=cmap_bold,
alpha=alpha, marker="s", s=psize+10)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
return ax,xx,yy
def points_plot_prob(ax, Xtr, Xte, ytr, yte, clf, colorscale=cmap_light,
cdiscrete=cmap_bold, ccolor=cm, psize=10, alpha=0.1):
ax,xx,yy = points_plot(ax, Xtr, Xte, ytr, yte, clf, mesh=False,
colorscale=colorscale, cdiscrete=cdiscrete,
psize=psize, alpha=alpha, predicted=True)
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=ccolor, alpha=.2, axes=ax)
cs2 = plt.contour(xx, yy, Z, cmap=ccolor, alpha=.6, axes=ax)
plt.clabel(cs2, fmt = '%2.1f', colors = 'k', fontsize=14, axes=ax)
return ax
# -
# ## A Motivating Example Using `sklearn`: Heights and Weights
# We'll use a dataset of heights and weights of males and females to hone our understanding of classifiers. We load the data into a dataframe and plot it.
dflog = pd.read_csv("data/01_heights_weights_genders.csv")
dflog.head()
# Remember that the form of data we will use always is
#
# 
#
# with the "response" or "label" $y$ as a plain array of 0s and 1s for binary classification. Sometimes we will also see -1 and +1 instead. There are also *multiclass* classifiers that can assign an observation to one of $K > 2$ classes and the label may then be an integer, but we will not be discussing those here.
#
# `y = [1,1,0,0,0,1,0,1,0....]`.
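# A string label column can be turned into such a 0/1 array with a boolean comparison - the same pattern used later in this notebook with `(dflog.Gender == "Male")`. A tiny sketch on made-up data:

```python
import pandas as pd

gender = pd.Series(['Male', 'Female', 'Female', 'Male', 'Male'])
y = (gender == 'Male').astype(int).values  # booleans -> 0/1 ints
print(y)  # [1 0 0 1 1]
```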
# <div class="span5 alert alert-info">
# <h3>Checkup Exercise Set I</h3>
#
# <ul>
# <li> <b>Exercise:</b> Create a scatter plot of Weight vs. Height
# <li> <b>Exercise:</b> Color the points differently by Gender
# </ul>
# </div>
# your turn
_ = sns.scatterplot(x='Height', y='Weight', data=dflog, hue='Gender', alpha=0.3, legend='brief')
_ = plt.legend(loc='lower right', fontsize=14)
plt.show()
# ### Training and Test Datasets
#
# When fitting models, we would like to ensure two things:
#
# * We have found the best model (in terms of model parameters).
# * The model is highly likely to generalize i.e. perform well on unseen data.
#
# <br/>
# <div class="span5 alert alert-success">
# <h4>Purpose of splitting data into Training/testing sets</h4>
# <ul>
# <li> We built our model with the requirement that the model fit the data well. </li>
# <li> As a side-effect, the model will fit <b>THIS</b> dataset well. What about new data? </li>
# <ul>
# <li> We wanted the model for predictions, right?</li>
# </ul>
# <li> One simple solution, leave out some data (for <b>testing</b>) and <b>train</b> the model on the rest </li>
# <li> This also leads directly to the idea of cross-validation, next section. </li>
# </ul>
# </div>
# First, we try a basic Logistic Regression:
#
# * Split the data into a training and test (hold-out) set
# * Train on the training set, and test for accuracy on the testing set
# +
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
# Split the data into a training and test set.
Xlr, Xtestlr, ylr, ytestlr = train_test_split(dflog[['Height','Weight']].values,
(dflog.Gender == "Male").values,random_state=5)
clf = LogisticRegression(solver='lbfgs')
# Fit the model on the training data.
clf.fit(Xlr, ylr)
# Print the accuracy from the testing data.
print(accuracy_score(clf.predict(Xtestlr), ytestlr))
# -
# ### Tuning the Model
# The model has some hyperparameters we can tune for hopefully better performance. For tuning the parameters of your model, you will use a mix of *cross-validation* and *grid search*. In Logistic Regression, the most important parameter to tune is the *regularization parameter* `C`. Note that the regularization parameter is not always part of the logistic regression model.
#
# The regularization parameter is used to control against unduly high regression coefficients, and in other cases can be used when data is sparse, as a method of feature selection.
#
# You will now implement some code to perform model tuning and selecting the regularization parameter $C$.
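# The shrinking effect of `C` can be seen directly on synthetic data: smaller `C` means stronger regularization and smaller coefficients (an illustrative sketch, not part of the exercise):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# synthetic binary classification problem
X_demo, y_demo = make_classification(n_samples=200, n_features=5, random_state=0)

# total coefficient magnitude shrinks as C decreases (regularization strengthens)
norms = {}
for C in (0.01, 1.0, 100.0):
    model = LogisticRegression(C=C, solver='lbfgs').fit(X_demo, y_demo)
    norms[C] = np.abs(model.coef_).sum()
    print(C, norms[C])
```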
# We use the following `cv_score` function to perform K-fold cross-validation and apply a scoring function to each test fold. In this incarnation we use accuracy score as the default scoring function.
# +
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score
def cv_score(clf, x, y, score_func=accuracy_score):
result = 0
nfold = 5
for train, test in KFold(nfold).split(x): # split data into train/test groups, 5 times
clf.fit(x[train], y[train]) # fit
result += score_func(clf.predict(x[test]), y[test]) # evaluate score function on held-out data
return result / nfold # average
# -
# Below is an example of using the `cv_score` function for a basic logistic regression model without regularization.
clf = LogisticRegression(solver='lbfgs')
score = cv_score(clf, Xlr, ylr)
print(score)
# <div class="span5 alert alert-info">
# <h3>Checkup Exercise Set II</h3>
#
# <b>Exercise:</b> Implement the following search procedure to find a good model
# <ul>
# <li> You are given a list of possible values of `C` below
# <li> For each C:
# <ol>
# <li> Create a logistic regression model with that value of C
# <li> Find the average score for this model using the `cv_score` function **only on the training set** `(Xlr, ylr)`
# </ol>
# <li> Pick the C with the highest average score
# </ul>
# Your goal is to find the best model parameters based *only* on the training set, without showing the model the test set at all (which is why the test set is also called a *hold-out* set).
# </div>
# +
#the grid of parameters to search over
Cs = [0.001, 0.1, 1, 10, 100]
# your turn
scores = []
for c in Cs:
cv_clf = LogisticRegression(C=c, solver='lbfgs', random_state=8)
scores.append(cv_score(cv_clf, Xlr, ylr))
#compile respective scores into a data frame
d = {'Cs': Cs, 'Scores': scores}
score_grid = pd.DataFrame.from_dict(d)
score_grid
# -
# <div class="span5 alert alert-info">
# <h3>Checkup Exercise Set III</h3>
# <b>Exercise:</b> Now you want to estimate how this model will predict on unseen data in the following way:
# <ol>
# <li> Use the C you obtained from the procedure earlier and train a Logistic Regression on the training data
# <li> Calculate the accuracy on the test data
# </ol>
#
# <p>You may notice that this particular value of `C` may or may not do as well as simply running the default model on a random train-test split. </p>
#
# <ul>
# <li> Do you think that's a problem?
# <li> Why do we need to do this whole cross-validation and grid search stuff anyway?
# </ul>
#
# </div>
# +
# your turn
# -
# According to the cross-validation exercise above, the scores hardly varied across different values of *C*. For the current exercise, in order to try something other than the default, a `C` value of 0.1 is used.
# +
clf = LogisticRegression(C=0.1, solver='lbfgs')
# Fit the model on the training data.
clf.fit(Xlr, ylr)
# Print the accuracy from the testing data.
print(accuracy_score(clf.predict(Xtestlr), ytestlr))
# -
# As the cross-validation indicated, the accuracy score for this iteration is the same as running the default from before. That's not necessarily a problem, it just shows that this particular dataset is not overly affected by values of *C*. That doesn't mean that cross-validation is not useful.
# ### Black Box Grid Search in `sklearn`
# Scikit-learn, as with many other Python packages, provides utilities to perform common operations so you do not have to do it manually. It is important to understand the mechanics of each operation, but at a certain point, you will want to use the utility instead to save time...
# <div class="span5 alert alert-info">
# <h3>Checkup Exercise Set IV</h3>
#
# <b>Exercise:</b> Use scikit-learn's [GridSearchCV](http://scikit-learn.org/stable/modules/generated/sklearn.grid_search.GridSearchCV.html) tool to perform cross validation and grid search.
#
# * Instead of writing your own loops above to iterate over the model parameters, can you use GridSearchCV to find the best model over the training set?
# * Does it give you the same best value of `C`?
# * How does this model you've obtained perform on the test set?</div>
# +
# your turn
from sklearn.model_selection import GridSearchCV
param_grid = {'C': Cs}
grid_clf = LogisticRegression(solver='lbfgs')
log_cv = GridSearchCV(grid_clf, param_grid, cv=5, return_train_score=True)
log_cv.fit(Xlr, ylr)
res = pd.DataFrame(log_cv.cv_results_)
res = res.iloc[:, [4,6,7,8,9,10,11,13,14,15,16,17,18,19]]
res
# -
print('The best value of C is {}'.format(log_cv.best_params_))
print('The best test score is {}'.format(log_cv.best_score_))
# ## A Walkthrough of the Math Behind Logistic Regression
# ### Setting up Some Demo Code
# Let's first set some code up for classification that we will need for further discussion on the math. We first set up a function `cv_optimize` which takes a classifier `clf`, a grid of hyperparameters (such as a complexity parameter or regularization parameter) implemented as a dictionary `parameters`, a training set (as a samples x features array) `Xtrain`, and a set of labels `ytrain`. The code takes the training set, splits it into `n_folds` folds, and carries out cross-validation by splitting the training set into a training and validation section for each fold. It prints the best value of the parameters, and returns the best classifier to us.
def cv_optimize(clf, parameters, Xtrain, ytrain, n_folds=5):
gs = sklearn.model_selection.GridSearchCV(clf, param_grid=parameters, cv=n_folds)
gs.fit(Xtrain, ytrain)
print("BEST PARAMS", gs.best_params_)
best = gs.best_estimator_
return best
# We then use this best classifier to fit the entire training set. This is done inside the `do_classify` function, which takes a dataframe `indf` as input. It takes the columns in the list `featurenames` as the features used to train the classifier. The column `targetname` sets the target. The classification is done by setting those samples for which `targetname` has value `target1val` to the value 1, and all others to 0. We split the dataframe into 80% training and 20% testing by default, standardizing the dataset if desired. (Standardizing a data set involves scaling the data so that it has 0 mean and is described in units of its standard deviation.) We then train the model on the training set using cross-validation. Having obtained the best classifier using `cv_optimize`, we retrain on the entire training set and calculate the training and testing accuracy, which we print. We return the split data and the trained classifier.
# + hide=true
from sklearn.model_selection import train_test_split
def do_classify(clf, parameters, indf, featurenames, targetname, target1val, standardize=False, train_size=0.8):
subdf=indf[featurenames]
if standardize:
subdfstd=(subdf - subdf.mean())/subdf.std()
else:
subdfstd=subdf
X=subdfstd.values
y=(indf[targetname].values==target1val)*1
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, train_size=train_size)
clf = cv_optimize(clf, parameters, Xtrain, ytrain)
clf=clf.fit(Xtrain, ytrain)
training_accuracy = clf.score(Xtrain, ytrain)
test_accuracy = clf.score(Xtest, ytest)
print("Accuracy on training data: {:0.2f}".format(training_accuracy))
print("Accuracy on test data: {:0.2f}".format(test_accuracy))
return clf, Xtrain, ytrain, Xtest, ytest
# -
# ## Logistic Regression: The Math
# We could approach classification as linear regression, where the class, 0 or 1, is the target variable $y$. But this ignores the fact that our output $y$ is discrete-valued, and furthermore, the $y$ predicted by linear regression will in general take on values less than 0 and greater than 1. Additionally, the residuals from the linear regression model will *not* be normally distributed. This violation means we should not use linear regression.
#
# But what if we could change the form of our hypotheses $h(x)$ instead?
#
# The idea behind logistic regression is very simple. We want to draw a line in feature space that divides the '1' samples from the '0' samples, just like in the diagram above. In other words, we wish to find the "regression" line which divides the samples. Now, a line has the form $w_1 x_1 + w_2 x_2 + w_0 = 0$ in 2-dimensions. On one side of this line we have
#
# $$w_1 x_1 + w_2 x_2 + w_0 \ge 0,$$
#
# and on the other side we have
#
# $$w_1 x_1 + w_2 x_2 + w_0 < 0.$$
#
# Our classification rule then becomes:
#
# \begin{eqnarray*}
# y = 1 &\mbox{if}& \v{w}\cdot\v{x} \ge 0\\
# y = 0 &\mbox{if}& \v{w}\cdot\v{x} < 0
# \end{eqnarray*}
#
# where $\v{x}$ is the vector $\{1,x_1, x_2,...,x_n\}$ where we have also generalized to more than 2 features.
#
# What hypotheses $h$ can we use to achieve this? One way to do so is to use the **sigmoid** function:
#
# $$h(z) = \frac{1}{1 + e^{-z}}.$$
#
# Notice that at $z=0$ this function has the value 0.5. If $z > 0$, $h > 0.5$ and as $z \to \infty$, $h \to 1$. If $z < 0$, $h < 0.5$ and as $z \to -\infty$, $h \to 0$. As long as we identify any value of $y > 0.5$ as 1, and any $y < 0.5$ as 0, we can achieve what we wished above.
#
# This function is plotted below:
h = lambda z: 1. / (1 + np.exp(-z))
zs=np.arange(-5, 5, 0.1)
plt.plot(zs, h(zs), alpha=0.5);
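# These limiting properties of the sigmoid are easy to check numerically (a quick sanity check, not from the original lab):

```python
import numpy as np

def sigmoid(z):
    # h(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

# h(0) = 0.5 exactly; h -> 1 as z -> +inf; h -> 0 as z -> -inf
print(sigmoid(0.0))
print(sigmoid(10.0), sigmoid(-10.0))
```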
# So we then come up with our rule by identifying:
#
# $$z = \v{w}\cdot\v{x}.$$
#
# Then $h(\v{w}\cdot\v{x}) \ge 0.5$ if $\v{w}\cdot\v{x} \ge 0$ and $h(\v{w}\cdot\v{x}) \lt 0.5$ if $\v{w}\cdot\v{x} \lt 0$, and:
#
# \begin{eqnarray*}
# y = 1 &\mbox{if}& h(\v{w}\cdot\v{x}) \ge 0.5\\
# y = 0 &\mbox{if}& h(\v{w}\cdot\v{x}) \lt 0.5.
# \end{eqnarray*}
#
# We will show soon that this identification can be achieved by minimizing a loss in the ERM framework called the **log loss** :
#
# $$ R_{\cal{D}}(\v{w}) = - \sum_{y_i \in \cal{D}} \left ( y_i \log(h(\v{w}\cdot\v{x})) + ( 1 - y_i) \log(1 - h(\v{w}\cdot\v{x})) \right )$$
#
# We will also add a regularization term:
#
# $$ R_{\cal{D}}(\v{w}) = - \sum_{y_i \in \cal{D}} \left ( y_i \log(h(\v{w}\cdot\v{x})) + ( 1 - y_i) \log(1 - h(\v{w}\cdot\v{x})) \right ) + \frac{1}{C} \v{w}\cdot\v{w},$$
#
# where $C$ is the regularization strength (equivalent to $1/\alpha$ from the Ridge case), and smaller values of $C$ mean stronger regularization. As before, the regularization tries to prevent features from having terribly high weights, thus implementing a form of feature selection.
#
# How did we come up with this loss? We'll come back to that, but let us see how logistic regression works out.
#
dflog.head()
clf_l, Xtrain_l, ytrain_l, Xtest_l, ytest_l = do_classify(LogisticRegression(solver='lbfgs'),
{"C": [0.01, 0.1, 1, 10, 100]},
dflog, ['Weight', 'Height'], 'Gender','Male')
plt.figure()
ax=plt.gca()
points_plot(ax, Xtrain_l, Xtest_l, ytrain_l, ytest_l, clf_l, alpha=0.2);
# In the figure here showing the results of the logistic regression, we plot the actual labels of both the training (circles) and test (squares) samples. The 0's (females) are plotted in red, the 1's (males) in blue. We also show the classification boundary, a line (to the resolution of a grid square). Every sample on the red background side of the line will be classified female, and every sample on the blue side, male. Notice that most of the samples are classified well, but there are misclassified people on both sides, as evidenced by leakage of dots or squares of one color onto the side of the other color. Both test and training accuracy are about 92%.
# ### The Probabilistic Interpretation
# Remember we said earlier that if $h > 0.5$ we ought to identify the sample with $y=1$? One way of thinking about this is to identify $h(\v{w}\cdot\v{x})$ with the probability that the sample is a '1' ($y=1$). Then we have the intuitive notion that we identify a sample as a '1' if we find that the probability of its being a '1' is $\ge 0.5$.
#
# So suppose we say then that the probability of $y=1$ for a given $\v{x}$ is given by $h(\v{w}\cdot\v{x})$?
#
# Then, the conditional probabilities of $y=1$ or $y=0$ given a particular sample's features $\v{x}$ are:
#
# \begin{eqnarray*}
# P(y=1 | \v{x}) &=& h(\v{w}\cdot\v{x}) \\
# P(y=0 | \v{x}) &=& 1 - h(\v{w}\cdot\v{x}).
# \end{eqnarray*}
#
# These two can be written together as
#
# $$P(y|\v{x}, \v{w}) = h(\v{w}\cdot\v{x})^y \left(1 - h(\v{w}\cdot\v{x}) \right)^{(1-y)} $$
#
# Then multiplying over the samples we get the probability of the training $y$ given $\v{w}$ and the $\v{x}$:
#
# $$P(y|\v{x},\v{w}) = P(\{y_i\} | \{\v{x}_i\}, \v{w}) = \prod_{y_i \in \cal{D}} P(y_i|\v{x_i}, \v{w}) = \prod_{y_i \in \cal{D}} h(\v{w}\cdot\v{x_i})^{y_i} \left(1 - h(\v{w}\cdot\v{x_i}) \right)^{(1-y_i)}$$
#
# Why use probabilities? Earlier, we talked about how the regression function $f(x)$ never gives us the $y$ exactly, because of noise. This holds for classification too. Even with identical features, a different sample may be classified differently.
#
# We said that another way to think about a noisy $y$ is to imagine that our data $\dat$ was generated from a joint probability distribution $P(x,y)$. Thus we need to model $y$ at a given $x$, written as $P(y|x)$, and since $P(x)$ is also a probability distribution, we have:
#
# $$P(x,y) = P(y | x) P(x)$$
#
# and can obtain our joint probability $P(x, y)$.
#
# Indeed it's important to realize that a particular training set can be thought of as a draw from some "true" probability distribution (just as we did when showing the hairy variance diagram). If, for example, the probability of classifying a test sample as a '0' was 0.1, and it turns out that the test sample was a '0', it does not mean that this model was necessarily wrong. After all, in roughly a tenth of the draws, this new sample would be classified as a '0'! But, of course, it's more unlikely than likely, and having good probabilities means that we'll likely be right most of the time, which is what we want to achieve in classification. And furthermore, we can quantify this accuracy.
#
# Thus it's desirable to have probabilistic, or at the very least ranked, models of classification, where you can tell which sample is more likely to be classified as a '1'. There are business reasons for this too. Consider the example of customer "churn": you are a cell-phone company and want to know, based on some of my purchasing habits and characteristic "features", whether I am a likely defector. If so, you'll offer me an incentive not to defect. In this scenario, you might want to know which customers are most likely to defect, or even more precisely, which are most likely to respond to incentives. Based on these probabilities, you could then spend a finite marketing budget wisely.
# ### Maximizing the Probability of the Training Set
# Now if we maximize $P(y|\v{x},\v{w})$, we will maximize the chance that each point is classified correctly, which is what we want to do. While this is not exactly the same thing as maximizing the 1-0 training risk, it is a principled way of obtaining the highest probability classification. This process is called **maximum likelihood** estimation since we are maximising the **likelihood of the training data y**,
#
# $$\like = P(y|\v{x},\v{w}).$$
#
# Maximum likelihood is one of the cornerstone methods in statistics, and is used to estimate probabilities of data.
#
# We can equivalently maximize
#
# $$\loglike = \log{P(y|\v{x},\v{w})}$$
#
# since the natural logarithm $\log$ is a monotonic function. This is known as maximizing the **log-likelihood**. Thus we can equivalently *minimize* a risk that is the negative of $\log(P(y|\v{x},\v{w}))$:
#
# $$R_{\cal{D}}(h(x)) = -\loglike = -\log \like = -\log{P(y|\v{x},\v{w})}.$$
#
#
# Thus
#
# \begin{eqnarray*}
# R_{\cal{D}}(h(x)) &=& -\log\left(\prod_{y_i \in \cal{D}} h(\v{w}\cdot\v{x_i})^{y_i} \left(1 - h(\v{w}\cdot\v{x_i}) \right)^{(1-y_i)}\right)\\
# &=& -\sum_{y_i \in \cal{D}} \log\left(h(\v{w}\cdot\v{x_i})^{y_i} \left(1 - h(\v{w}\cdot\v{x_i}) \right)^{(1-y_i)}\right)\\
# &=& -\sum_{y_i \in \cal{D}} \log\,h(\v{w}\cdot\v{x_i})^{y_i} + \log\,\left(1 - h(\v{w}\cdot\v{x_i}) \right)^{(1-y_i)}\\
# &=& - \sum_{y_i \in \cal{D}} \left ( y_i \log(h(\v{w}\cdot\v{x})) + ( 1 - y_i) \log(1 - h(\v{w}\cdot\v{x})) \right )
# \end{eqnarray*}
#
# This is exactly the risk we had above, leaving out the regularization term (which we shall return to later) and was the reason we chose it over the 1-0 risk.
#
# Notice that this little process we carried out above tells us something very interesting: **Probabilistic estimation using maximum likelihood is equivalent to Empiricial Risk Minimization using the negative log-likelihood**, since all we did was to minimize the negative log-likelihood over the training samples.
#
# `sklearn` will return the probabilities for our samples, or for that matter, for any input vector set $\{\v{x}_i\}$, i.e. $P(y_i | \v{x}_i, \v{w})$:
clf_l.predict_proba(Xtest_l)
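# The negative log-likelihood derived above can be checked numerically: computing the per-sample terms by hand matches `sklearn.metrics.log_loss`, which averages the same expression. A self-contained sketch on synthetic data (not part of the original lab):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

# fit a logistic regression on a synthetic binary problem
X_syn, y_syn = make_classification(n_samples=100, n_features=4, random_state=0)
clf = LogisticRegression(solver='lbfgs').fit(X_syn, y_syn)

# h(w . x_i) for each sample, i.e. P(y_i = 1 | x_i)
p = clf.predict_proba(X_syn)[:, 1]

# mean of -(y_i log h + (1 - y_i) log(1 - h)) over the samples
manual = -np.mean(y_syn * np.log(p) + (1 - y_syn) * np.log(1 - p))
print(np.isclose(manual, log_loss(y_syn, p)))
```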
# ### Discriminative vs Generative Classifier
# Logistic regression is what is known as a **discriminative classifier** as we learn a soft boundary between/among classes. Another paradigm is the **generative classifier** where we learn the distribution of each class. For more examples of generative classifiers, look [here](https://en.wikipedia.org/wiki/Generative_model).
#
# Let us plot the probabilities obtained from `predict_proba`, overlayed on the samples with their true labels:
plt.figure()
ax = plt.gca()
points_plot_prob(ax, Xtrain_l, Xtest_l, ytrain_l, ytest_l, clf_l, psize=20, alpha=0.1);
# Notice that lines of equal probability, as might be expected, are straight lines. What the classifier does is very intuitive: if the probability is greater than 0.5, it classifies the sample as type '1' (male), otherwise it classifies the sample as class '0'. Thus in the diagram above, where we have plotted predicted values rather than actual labels of samples, there is a clear demarcation at the 0.5 probability line.
#
# Again, this notion of trying to obtain the line or boundary of demarcation is what is called a **discriminative** classifier. The algorithm tries to find a decision boundary that separates the males from the females. To classify a new sample as male or female, it checks on which side of the decision boundary the sample falls, and makes a prediction. In other words we are asking, given $\v{x}$, what is the probability of a given $y$, or, what is the likelihood $P(y|\v{x},\v{w})$?